Mirror of https://github.com/ansible/awx.git (synced 2026-03-14 07:27:28 -02:30)
Merge branch 'release_3.1.0' into devel
* release_3.1.0: (633 commits)
  query for failed projects to include canceled
  Style Audit of Host Event Modal
  Remove npm hacks in RPM builder
  Update po files
  Update pot files
  Make sure to delete any nodes that need to be deleted before attempting to associate
  super-user requests to HostDetail go through rbac
  Launch job with prompts fixes
  add notifications to cleanup_jobs
  allow can_add to be called for permission info
  Elaborate on system job default
  i accidentally removed the "-1" from this check, adding it back in
  remove job_event text filters, tweaked RBAC
  see issue 4958 for the RBAC details
  fixing jshint
  Adding tags was creating this nested array structure which was causing buggy behavior. This should fix that.
  bump vmware inventory script
  Fix up some unit tests
  moving appendToBottom function elsewhere
  rearranging logic to match integrity of existing logic
  Fixed location of sub-category dropdowns
  ...
COPYING (20 lines changed)
@@ -1,5 +1,19 @@
The Ansible Tower Software is a commercial software licensed to you pursuant to the Ansible Software Subscription and Services Agreement (“EULA”) located at www.ansible.com/subscription-agreement and an annual Order/Agreement with Ansible, Inc.
ANSIBLE TOWER BY RED HAT END USER LICENSE AGREEMENT

The Ansible Tower Software is free for use up to ten (10) Nodes, any additional Nodes shall be purchased.
This end user license agreement (“EULA”) governs the use of the Ansible Tower software and any related updates, upgrades, versions, appearance, structure and organization (the “Ansible Tower Software”), regardless of the delivery mechanism.

Ansible and Ansible Tower are registered Trademarks of Ansible, Inc.
1. License Grant. Subject to the terms of this EULA, Red Hat, Inc. and its affiliates (“Red Hat”) grant to you (“You”) a non-transferable, non-exclusive, worldwide, non-sublicensable, limited, revocable license to use the Ansible Tower Software for the term of the associated Red Hat Software Subscription(s) and in a quantity equal to the number of Red Hat Software Subscriptions purchased from Red Hat for the Ansible Tower Software (“License”), each as set forth on the applicable Red Hat ordering document. You acquire only the right to use the Ansible Tower Software and do not acquire any rights of ownership. Red Hat reserves all rights to the Ansible Tower Software not expressly granted to You. This License grant pertains solely to Your use of the Ansible Tower Software and is not intended to limit Your rights under, or grant You rights that supersede, the license terms of any software packages which may be made available with the Ansible Tower Software that are subject to an open source software license.

2. Intellectual Property Rights. Title to the Ansible Tower Software and each component, copy and modification, including all derivative works whether made by Red Hat, You or on Red Hat's behalf, including those made at Your suggestion and all associated intellectual property rights, are and shall remain the sole and exclusive property of Red Hat and/or it licensors. The License does not authorize You (nor may You allow any third party, specifically non-employees of Yours) to: (a) copy, distribute, reproduce, use or allow third party access to the Ansible Tower Software except as expressly authorized hereunder; (b) decompile, disassemble, reverse engineer, translate, modify, convert or apply any procedure or process to the Ansible Tower Software in order to ascertain, derive, and/or appropriate for any reason or purpose, including the Ansible Tower Software source code or source listings or any trade secret information or process contained in the Ansible Tower Software (except as permitted under applicable law); (c) execute or incorporate other software (except for approved software as appears in the Ansible Tower Software documentation or specifically approved by Red Hat in writing) into Ansible Tower Software, or create a derivative work of any part of the Ansible Tower Software; (d) remove any trademarks, trade names or titles, copyrights legends or any other proprietary marking on the Ansible Tower Software; (e) disclose the results of any benchmarking of the Ansible Tower Software (whether or not obtained with Red Hat’s assistance) to any third party; (f) attempt to circumvent any user limits or other license, timing or use restrictions that are built into, defined or agreed upon, regarding the Ansible Tower Software. You are hereby notified that the Ansible Tower Software may contain time-out devices, counter devices, and/or other devices intended to ensure the limits of the License will not be exceeded (“Limiting Devices”). If the Ansible Tower Software contains Limiting Devices, Red Hat will provide You materials necessary to use the Ansible Tower Software to the extent permitted. You may not tamper with or otherwise take any action to defeat or circumvent a Limiting Device or other control measure, including but not limited to, resetting the unit amount or using false host identification number for the purpose of extending any term of the License.

3. Evaluation Licenses. Unless You have purchased Ansible Tower Software Subscriptions from Red Hat or an authorized reseller under the terms of a commercial agreement with Red Hat, all use of the Ansible Tower Software shall be limited to testing purposes and not for production use (“Evaluation”). Unless otherwise agreed by Red Hat, Evaluation of the Ansible Tower Software shall be limited to an evaluation environment and the Ansible Tower Software shall not be used to manage any systems or virtual machines on networks being used in the operation of Your business or any other non-evaluation purpose. Unless otherwise agreed by Red Hat, You shall limit all Evaluation use to a single 30 day evaluation period and shall not download or otherwise obtain additional copies of the Ansible Tower Software or license keys for Evaluation.

4. Limited Warranty. Except as specifically stated in this Section 4, to the maximum extent permitted under applicable law, the Ansible Tower Software and the components are provided and licensed “as is” without warranty of any kind, expressed or implied, including the implied warranties of merchantability, non-infringement or fitness for a particular purpose. Red Hat warrants solely to You that the media on which the Ansible Tower Software may be furnished will be free from defects in materials and manufacture under normal use for a period of thirty (30) days from the date of delivery to You. Red Hat does not warrant that the functions contained in the Ansible Tower Software will meet Your requirements or that the operation of the Ansible Tower Software will be entirely error free, appear precisely as described in the accompanying documentation, or comply with regulatory requirements.

5. Limitation of Remedies and Liability. To the maximum extent permitted by applicable law, Your exclusive remedy under this EULA is to return any defective media within thirty (30) days of delivery along with a copy of Your payment receipt and Red Hat, at its option, will replace it or refund the money paid by You for the media. To the maximum extent permitted under applicable law, neither Red Hat nor any Red Hat authorized distributor will be liable to You for any incidental or consequential damages, including lost profits or lost savings arising out of the use or inability to use the Ansible Tower Software or any component, even if Red Hat or the authorized distributor has been advised of the possibility of such damages. In no event shall Red Hat's liability or an authorized distributor’s liability exceed the amount that You paid to Red Hat for the Ansible Tower Software during the twelve months preceding the first event giving rise to liability.

6. Export Control. In accordance with the laws of the United States and other countries, You represent and warrant that You: (a) understand that the Ansible Tower Software and its components may be subject to export controls under the U.S. Commerce Department’s Export Administration Regulations (“EAR”); (b) are not located in any country listed in Country Group E:1 in Supplement No. 1 to part 740 of the EAR; (c) will not export, re-export, or transfer the Ansible Tower Software to any prohibited destination or to any end user who has been prohibited from participating in US export transactions by any federal agency of the US government; (d) will not use or transfer the Ansible Tower Software for use in connection with the design, development or production of nuclear, chemical or biological weapons, or rocket systems, space launch vehicles, or sounding rockets or unmanned air vehicle systems; (e) understand and agree that if you are in the United States and you export or transfer the Ansible Tower Software to eligible end users, you will, to the extent required by EAR Section 740.17 obtain a license for such export or transfer and will submit semi-annual reports to the Commerce Department’s Bureau of Industry and Security, which include the name and address (including country) of each transferee; and (f) understand that countries including the United States may restrict the import, use, or export of encryption products (which may include the Ansible Tower Software) and agree that you shall be solely responsible for compliance with any such import, use, or export restrictions.

7. General. If any provision of this EULA is held to be unenforceable, that shall not affect the enforceability of the remaining provisions. This agreement shall be governed by the laws of the State of New York and of the United States, without regard to any conflict of laws provisions. The rights and obligations of the parties to this EULA shall not be governed by the United Nations Convention on the International Sale of Goods.

Copyright © 2015 Red Hat, Inc. All rights reserved. "Red Hat" and “Ansible Tower” are registered trademarks of Red Hat, Inc. All other trademarks are the property of their respective owners.
Makefile (47 lines changed)
@@ -10,9 +10,8 @@ DEPS_SCRIPT ?= packaging/bundle/deps.py
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)

GCLOUD_AUTH ?= $(shell gcloud auth print-access-token)
COMPOSE_TAG ?= devel
# NOTE: This defaults the container image version to the branch that's active
# COMPOSE_TAG ?= $(GIT_BRANCH)
COMPOSE_TAG ?= $(GIT_BRANCH)

COMPOSE_HOST ?= $(shell hostname)

@@ -176,7 +175,6 @@ UI_RELEASE_FLAG_FILE = awx/ui/.release_built

.DEFAULT_GOAL := build

.PHONY: clean clean-tmp clean-venv rebase push requirements requirements_dev \
    requirements_jenkins \
    develop refresh adduser migrate dbchange dbshell runserver celeryd \
    receiver test test_unit test_coverage coverage_html test_jenkins dev_build \
    release_build release_clean sdist rpmtar mock-rpm mock-srpm rpm-sign \
@@ -286,8 +284,10 @@ requirements_ansible: virtualenv_ansible
    if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/ansible/bin/activate; \
        $(VENV_BASE)/ansible/bin/pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements_ansible.txt ;\
        $(VENV_BASE)/ansible/bin/pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt; \
    else \
        pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements_ansible.txt ; \
        pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt; \
    fi

# Install third-party requirements needed for Tower's environment.
@@ -295,29 +295,24 @@ requirements_tower: virtualenv_tower
    if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate; \
        $(VENV_BASE)/tower/bin/pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements.txt ;\
        $(VENV_BASE)/tower/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt; \
    else \
        pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements.txt ; \
        pip uninstall --yes -r requirements/requirements_tower_uninstall.txt; \
    fi

requirements_tower_dev:
    if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate; \
        $(VENV_BASE)/tower/bin/pip install -r requirements/requirements_dev.txt; \
    fi

# Install third-party requirements needed for running unittests in jenkins
requirements_jenkins:
    if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate && pip install -Ir requirements/requirements_jenkins.txt; \
    else \
        pip install -Ir requirements/requirements_jenkins.txt; \
        $(VENV_BASE)/tower/bin/pip uninstall --yes -r requirements/requirements_dev_uninstall.txt; \
    fi

requirements: requirements_ansible requirements_tower

requirements_dev: requirements requirements_tower_dev

requirements_test: requirements requirements_jenkins
requirements_test: requirements

# "Install" ansible-tower package in development mode.
develop:
@@ -407,7 +402,7 @@ uwsgi: collectstatic
    @if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate; \
    fi; \
    uwsgi -b 32768 --socket :8050 --module=awx.wsgi:application --home=/venv/tower --chdir=/tower_devel/ --vacuum --processes=5 --harakiri=60 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket
    uwsgi -b 32768 --socket :8050 --module=awx.wsgi:application --home=/venv/tower --chdir=/tower_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --master-fifo=/var/lib/awx/awxfifo --lazy-apps

daphne:
    @if [ "$(VENV_BASE)" ]; then \
@@ -433,7 +428,7 @@ celeryd:
    @if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate; \
    fi; \
    $(PYTHON) manage.py celeryd -l DEBUG -B --autoreload --autoscale=20,3 --schedule=$(CELERY_SCHEDULE_FILE) -Q projects,jobs,default,scheduler,broadcast_all,$(COMPOSE_HOST)
    $(PYTHON) manage.py celeryd -l DEBUG -B --autoreload --autoscale=20,3 --schedule=$(CELERY_SCHEDULE_FILE) -Q projects,jobs,default,scheduler,broadcast_all,$(COMPOSE_HOST) -n celery@$(COMPOSE_HOST)
    #$(PYTHON) manage.py celery multi show projects jobs default -l DEBUG -Q:projects projects -Q:jobs jobs -Q:default default -c:projects 1 -c:jobs 3 -c:default 3 -Ofair -B --schedule=$(CELERY_SCHEDULE_FILE)

# Run to start the zeromq callback receiver
@@ -511,6 +506,14 @@ test_tox:
# Alias existing make target so old versions run against Jenkins the same way
test_jenkins : test_coverage

# Make fake data
DATA_GEN_PRESET = ""
bulk_data:
    @if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate; \
    fi; \
    $(PYTHON) tools/data_generators/rbac_dummy_data_generator.py --preset=$(DATA_GEN_PRESET)

# l10n TASKS
# --------------------------------------

@@ -559,10 +562,7 @@ messages:
# generate l10n .json .mo
languages: $(UI_DEPS_FLAG_FILE) check-po
    $(NPM_BIN) --prefix awx/ui run languages
    @if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/tower/bin/activate; \
    fi; \
    $(PYTHON) manage.py compilemessages
    $(PYTHON) tools/scripts/compilemessages.py

# End l10n TASKS
# --------------------------------------

@@ -572,16 +572,16 @@ languages: $(UI_DEPS_FLAG_FILE) check-po

ui-deps: $(UI_DEPS_FLAG_FILE)

$(UI_DEPS_FLAG_FILE): awx/ui/package.json
$(UI_DEPS_FLAG_FILE):
    $(NPM_BIN) --unsafe-perm --prefix awx/ui install awx/ui
    touch $(UI_DEPS_FLAG_FILE)

ui-docker-machine: $(UI_DEPS_FLAG_FILE)
    $(NPM_BIN) --prefix awx/ui run build-docker-machine -- $(MAKEFLAGS)
    $(NPM_BIN) --prefix awx/ui run ui-docker-machine -- $(MAKEFLAGS)

# Native docker. Builds UI and raises BrowserSync & filesystem polling.
ui-docker: $(UI_DEPS_FLAG_FILE)
    $(NPM_BIN) --prefix awx/ui run build-docker-cid -- $(MAKEFLAGS)
    $(NPM_BIN) --prefix awx/ui run ui-docker -- $(MAKEFLAGS)

# Builds UI with development UI without raising browser-sync or filesystem polling.
ui-devel: $(UI_DEPS_FLAG_FILE)
@@ -589,8 +589,7 @@ ui-devel: $(UI_DEPS_FLAG_FILE)

ui-release: $(UI_RELEASE_FLAG_FILE)

# todo: include languages target when .po deliverables are added to source control
$(UI_RELEASE_FLAG_FILE): $(UI_DEPS_FLAG_FILE)
$(UI_RELEASE_FLAG_FILE): languages $(UI_DEPS_FLAG_FILE)
    $(NPM_BIN) --prefix awx/ui run build-release
    touch $(UI_RELEASE_FLAG_FILE)

@@ -690,7 +689,7 @@ rpm-build:

rpm-build/$(SDIST_TAR_FILE): rpm-build dist/$(SDIST_TAR_FILE)
    cp packaging/rpm/$(NAME).spec rpm-build/
    cp packaging/rpm/$(NAME).te rpm-build/
    cp packaging/rpm/tower.te rpm-build/
    cp packaging/rpm/$(NAME).sysconfig rpm-build/
    cp packaging/remove_tower_source.py rpm-build/
    cp packaging/bytecompile.sh rpm-build/
@@ -19,6 +19,7 @@ from rest_framework.filters import BaseFilterBackend
# Ansible Tower
from awx.main.utils import get_type_for_model, to_python_boolean
from awx.main.models.rbac import RoleAncestorEntry


class MongoFilterBackend(BaseFilterBackend):
@@ -76,7 +77,7 @@ class FieldLookupBackend(BaseFilterBackend):
    SUPPORTED_LOOKUPS = ('exact', 'iexact', 'contains', 'icontains',
                         'startswith', 'istartswith', 'endswith', 'iendswith',
                         'regex', 'iregex', 'gt', 'gte', 'lt', 'lte', 'in',
                         'isnull')
                         'isnull', 'search')

    def get_field_from_lookup(self, model, lookup):
        field = None
@@ -147,6 +148,15 @@ class FieldLookupBackend(BaseFilterBackend):
                re.compile(value)
            except re.error as e:
                raise ValueError(e.args[0])
        elif new_lookup.endswith('__search'):
            related_model = getattr(field, 'related_model', None)
            if not related_model:
                raise ValueError('%s is not searchable' % new_lookup[:-8])
            new_lookups = []
            for rm_field in related_model._meta.fields:
                if rm_field.name in ('username', 'first_name', 'last_name', 'email', 'name', 'description'):
                    new_lookups.append('{}__{}__icontains'.format(new_lookup[:-8], rm_field.name))
            return value, new_lookups
        else:
            value = self.value_to_python_for_field(field, value)
        return value, new_lookup
@@ -158,6 +168,8 @@ class FieldLookupBackend(BaseFilterBackend):
            and_filters = []
            or_filters = []
            chain_filters = []
            role_filters = []
            search_filters = []
            for key, values in request.query_params.lists():
                if key in self.RESERVED_NAMES:
                    continue
@@ -174,6 +186,21 @@ class FieldLookupBackend(BaseFilterBackend):
                    key = key[:-5]
                    q_int = True

                # RBAC filtering
                if key == 'role_level':
                    role_filters.append(values[0])
                    continue

                # Search across related objects.
                if key.endswith('__search'):
                    for value in values:
                        for search_term in force_text(value).replace(',', ' ').split():
                            search_value, new_keys = self.value_to_python(queryset.model, key, search_term)
                            assert isinstance(new_keys, list)
                            for new_key in new_keys:
                                search_filters.append((new_key, search_value))
                    continue

                # Custom chain__ and or__ filters, mutually exclusive (both can
                # precede not__).
                q_chain = False
@@ -204,13 +231,21 @@ class FieldLookupBackend(BaseFilterBackend):
                    and_filters.append((q_not, new_key, value))

            # Now build Q objects for database query filter.
            if and_filters or or_filters or chain_filters:
            if and_filters or or_filters or chain_filters or role_filters or search_filters:
                args = []
                for n, k, v in and_filters:
                    if n:
                        args.append(~Q(**{k:v}))
                    else:
                        args.append(Q(**{k:v}))
                for role_name in role_filters:
                    args.append(
                        Q(pk__in=RoleAncestorEntry.objects.filter(
                            ancestor__in=request.user.roles.all(),
                            content_type_id=ContentType.objects.get_for_model(queryset.model).id,
                            role_field=role_name
                        ).values_list('object_id').distinct())
                    )
                if or_filters:
                    q = Q()
                    for n,k,v in or_filters:
@@ -219,6 +254,11 @@ class FieldLookupBackend(BaseFilterBackend):
                        else:
                            q |= Q(**{k:v})
                    args.append(q)
                if search_filters:
                    q = Q()
                    for k,v in search_filters:
                        q |= Q(**{k:v})
                    args.append(q)
                for n,k,v in chain_filters:
                    if n:
                        q = ~Q(**{k:v})
@@ -227,7 +267,7 @@ class FieldLookupBackend(BaseFilterBackend):
                    queryset = queryset.filter(q)
                queryset = queryset.filter(*args).distinct()
                return queryset
        except (FieldError, FieldDoesNotExist, ValueError) as e:
        except (FieldError, FieldDoesNotExist, ValueError, TypeError) as e:
            raise ParseError(e.args[0])
        except ValidationError as e:
            raise ParseError(e.messages)
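The hunks above rewrite a `field__search` lookup into a set of `icontains` lookups over well-known text columns of the related model, split each search value on commas and whitespace, and OR everything into a single Q object. A minimal standalone sketch of that expansion; the helper names and the `created_by` field are illustrative, not the actual backend code:

```python
from django.db.models import Q

# Text columns the backend considers searchable on a related model.
SEARCHABLE_FIELDS = ('username', 'first_name', 'last_name', 'email', 'name', 'description')

def expand_search_lookup(lookup, related_field_names):
    """Rewrite e.g. 'created_by__search' into a list of icontains lookups."""
    prefix = lookup[:-len('__search')]
    return ['{}__{}__icontains'.format(prefix, name)
            for name in related_field_names if name in SEARCHABLE_FIELDS]

def build_search_q(lookup, raw_value, related_field_names):
    """OR together one icontains filter per expanded lookup and per term.
    Terms are split on commas and whitespace, mirroring the hunk above."""
    q = Q()
    for term in raw_value.replace(',', ' ').split():
        for key in expand_search_lookup(lookup, related_field_names):
            q |= Q(**{key: term})
    return q

# build_search_q('created_by__search', 'alice,bob', ['username', 'email'])
# yields Q(created_by__username__icontains='alice') | ... |
#        Q(created_by__email__icontains='bob')
```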
@@ -156,6 +156,7 @@ class APIView(views.APIView):
        'new_in_240': getattr(self, 'new_in_240', False),
        'new_in_300': getattr(self, 'new_in_300', False),
        'new_in_310': getattr(self, 'new_in_310', False),
        'deprecated': getattr(self, 'deprecated', False),
    }

    def get_description(self, html=False):
@@ -267,10 +268,25 @@ class ListAPIView(generics.ListAPIView, GenericAPIView):
        fields = []
        for field in self.model._meta.fields:
            if field.name in ('username', 'first_name', 'last_name', 'email',
                              'name', 'description', 'email'):
                              'name', 'description'):
                fields.append(field.name)
        return fields

    @property
    def related_search_fields(self):
        fields = []
        for field in self.model._meta.fields:
            if field.name.endswith('_role'):
                continue
            if getattr(field, 'related_model', None):
                fields.append('{}__search'.format(field.name))
        for rel in self.model._meta.related_objects:
            name = rel.get_accessor_name()
            if name.endswith('_set'):
                continue
            fields.append('{}__search'.format(name))
        return fields


class ListCreateAPIView(ListAPIView, generics.ListCreateAPIView):
    # Base class for a list view that allows creating new objects.
@@ -543,14 +559,12 @@ class DestroyAPIView(GenericAPIView, generics.DestroyAPIView):
    pass


class ResourceAccessList(ListAPIView):
class ResourceAccessList(ParentMixin, ListAPIView):

    serializer_class = ResourceAccessListElementSerializer

    def get_queryset(self):
        self.object_id = self.kwargs['pk']
        resource_model = getattr(self, 'resource_model')
        obj = get_object_or_404(resource_model, pk=self.object_id)
        obj = self.get_parent_object()

        content_type = ContentType.objects.get_for_model(obj)
        roles = set(Role.objects.filter(content_type=content_type, object_id=obj.id))
@@ -1,58 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved

import sys

from optparse import make_option
from django.core.management.base import BaseCommand
from awx.main.ha import is_ha_environment
from awx.main.task_engine import TaskEnhancer


class Command(BaseCommand):
    """Return a exit status of 0 if MongoDB should be active, and an
    exit status of 1 otherwise.

    This script is intended to be used by bash and init scripts to
    conditionally start MongoDB, so its focus is on being bash-friendly.
    """

    def __init__(self):
        super(Command, self).__init__()
        BaseCommand.option_list += (make_option('--local',
                                                dest='local',
                                                default=False,
                                                action="store_true",
                                                help="Only check if mongo should be running locally"),)

    def handle(self, *args, **kwargs):
        # Get the license data.
        license_data = TaskEnhancer().validate_enhancements()

        # Does the license have features, at all?
        # If there is no license yet, then all features are clearly off.
        if 'features' not in license_data:
            print('No license available.')
            sys.exit(2)

        # Does the license contain the system tracking feature?
        # If and only if it does, MongoDB should run.
        system_tracking = license_data['features']['system_tracking']

        # Okay, do we need MongoDB to be turned on?
        # This is a silly variable assignment right now, but I expect the
        # rules here will grow more complicated over time.
        uses_mongo = system_tracking  # noqa

        if is_ha_environment() and kwargs['local'] and uses_mongo:
            print("HA Configuration detected. Database should be remote")
            uses_mongo = False

        # If we do not need Mongo, return a non-zero exit status.
        if not uses_mongo:
            print('MongoDB NOT required')
            sys.exit(1)

        # We do need Mongo, return zero.
        print('MongoDB required')
        sys.exit(0)
@@ -13,7 +13,7 @@ from django.utils.translation import ugettext_lazy as _
from rest_framework import exceptions
from rest_framework import metadata
from rest_framework import serializers
from rest_framework.relations import RelatedField
from rest_framework.relations import RelatedField, ManyRelatedField
from rest_framework.request import clone_request

# Ansible Tower
@@ -75,7 +75,7 @@ class Metadata(metadata.SimpleMetadata):
        elif getattr(field, 'fields', None):
            field_info['children'] = self.get_serializer_info(field)

        if hasattr(field, 'choices') and not isinstance(field, RelatedField):
        if not isinstance(field, (RelatedField, ManyRelatedField)) and hasattr(field, 'choices'):
            field_info['choices'] = [(choice_value, choice_name) for choice_value, choice_name in field.choices.items()]

        # Indicate if a field is write-only.
@@ -183,6 +183,10 @@ class Metadata(metadata.SimpleMetadata):
        if getattr(view, 'search_fields', None):
            metadata['search_fields'] = view.search_fields

        # Add related search fields if available from the view.
        if getattr(view, 'related_search_fields', None):
            metadata['related_search_fields'] = view.related_search_fields

        return metadata
@@ -2,6 +2,7 @@
# All Rights Reserved.

# Django REST Framework
from django.conf import settings
from rest_framework import pagination
from rest_framework.utils.urls import replace_query_param

@@ -9,11 +10,13 @@ from rest_framework.utils.urls import replace_query_param
class Pagination(pagination.PageNumberPagination):

    page_size_query_param = 'page_size'
    max_page_size = settings.MAX_PAGE_SIZE

    def get_next_link(self):
        if not self.page.has_next():
            return None
        url = self.request and self.request.get_full_path() or ''
        url = url.encode('utf-8')
        page_number = self.page.next_page_number()
        return replace_query_param(url, self.page_query_param, page_number)

@@ -21,5 +24,6 @@ class Pagination(pagination.PageNumberPagination):
        if not self.page.has_previous():
            return None
        url = self.request and self.request.get_full_path() or ''
        url = url.encode('utf-8')
        page_number = self.page.previous_page_number()
        return replace_query_param(url, self.page_query_param, page_number)
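A hypothetical paginated request showing how `page_size` (capped by `settings.MAX_PAGE_SIZE`) and the rewritten next/previous links behave; the host and credentials are placeholders:

```python
import requests

# Request page 2, 50 results per page; the server caps page_size at MAX_PAGE_SIZE.
resp = requests.get(
    'https://tower.example.com/api/v1/jobs/',
    params={'page': 2, 'page_size': 50},
    auth=('admin', 'password'),
)
data = resp.json()
print(data['next'])      # e.g. /api/v1/jobs/?page=3&page_size=50
print(data['previous'])  # e.g. /api/v1/jobs/?page=1&page_size=50
```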
@@ -4,9 +4,6 @@
# Python
import logging

# Django
from django.http import Http404

# Django REST Framework
from rest_framework.exceptions import MethodNotAllowed, PermissionDenied
from rest_framework import permissions
@@ -19,7 +16,7 @@ from awx.main.utils import get_object_or_400
logger = logging.getLogger('awx.api.permissions')

__all__ = ['ModelAccessPermission', 'JobTemplateCallbackPermission',
           'TaskPermission', 'ProjectUpdatePermission', 'UserPermission']
           'TaskPermission', 'ProjectUpdatePermission', 'UserPermission',]


class ModelAccessPermission(permissions.BasePermission):
@@ -96,13 +93,6 @@ class ModelAccessPermission(permissions.BasePermission):
        method based on the request method.
        '''

        # Check that obj (if given) is active, otherwise raise a 404.
        active = getattr(obj, 'active', getattr(obj, 'is_active', True))
        if callable(active):
            active = active()
        if not active:
            raise Http404()

        # Don't allow anonymous users. 401, not 403, hence no raised exception.
        if not request.user or request.user.is_anonymous():
            return False
@@ -216,3 +206,5 @@ class UserPermission(ModelAccessPermission):
        elif request.user.is_superuser:
            return True
        raise PermissionDenied()
@@ -80,3 +80,8 @@ class AnsiTextRenderer(PlainTextRenderer):
    media_type = 'text/plain'
    format = 'ansi'


class AnsiDownloadRenderer(PlainTextRenderer):

    format = "ansi_download"
@@ -76,13 +76,15 @@ SUMMARIZABLE_FK_FIELDS = {
    'total_groups',
    'groups_with_active_failures',
    'has_inventory_sources'),
    'project': DEFAULT_SUMMARY_FIELDS + ('status',),
    'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
    'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
    'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud'),
    'cloud_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud'),
    'network_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'net'),
    'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
    'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed'),
    'job_template': DEFAULT_SUMMARY_FIELDS,
    'workflow_job_template': DEFAULT_SUMMARY_FIELDS,
    'workflow_job': DEFAULT_SUMMARY_FIELDS,
    'schedule': DEFAULT_SUMMARY_FIELDS + ('next_run',),
    'unified_job_template': DEFAULT_SUMMARY_FIELDS + ('unified_job_type',),
    'last_job': DEFAULT_SUMMARY_FIELDS + ('finished', 'status', 'failed', 'license_error'),
@@ -250,6 +252,8 @@ class BaseSerializer(serializers.ModelSerializer):
            'project_update': _('SCM Update'),
            'inventory_update': _('Inventory Sync'),
            'system_job': _('Management Job'),
            'workflow_job': _('Workflow Job'),
            'workflow_job_template': _('Workflow Template'),
        }
        choices = []
        for t in self.get_types():
@@ -518,7 +522,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):

    class Meta:
        model = UnifiedJobTemplate
        fields = ('*', 'last_job_run', 'last_job_failed', 'has_schedules',
        fields = ('*', 'last_job_run', 'last_job_failed',
                  'next_job_run', 'status')

    def get_related(self, obj):
@@ -607,7 +611,11 @@ class UnifiedJobSerializer(BaseSerializer):
        summary_fields = super(UnifiedJobSerializer, self).get_summary_fields(obj)
        if obj.spawned_by_workflow:
            summary_fields['source_workflow_job'] = {}
            summary_obj = obj.unified_job_node.workflow_job
            try:
                summary_obj = obj.unified_job_node.workflow_job
            except UnifiedJob.unified_job_node.RelatedObjectDoesNotExist:
                return summary_fields

            for field in SUMMARIZABLE_FK_FIELDS['job']:
                val = getattr(summary_obj, field, None)
                if val is not None:
@@ -666,7 +674,7 @@ class UnifiedJobListSerializer(UnifiedJobSerializer):

    def get_types(self):
        if type(self) is UnifiedJobListSerializer:
            return ['project_update', 'inventory_update', 'job', 'ad_hoc_command', 'system_job']
            return ['project_update', 'inventory_update', 'job', 'ad_hoc_command', 'system_job', 'workflow_job']
        else:
            return super(UnifiedJobListSerializer, self).get_types()

@@ -1581,8 +1589,7 @@ class ResourceAccessListElementSerializer(UserSerializer):
        the resource.
        '''
        ret = super(ResourceAccessListElementSerializer, self).to_representation(user)
        object_id = self.context['view'].object_id
        obj = self.context['view'].resource_model.objects.get(pk=object_id)
        obj = self.context['view'].get_parent_object()
        if self.context['view'].request is not None:
            requesting_user = self.context['view'].request.user
        else:
@@ -1615,7 +1622,8 @@ class ResourceAccessListElementSerializer(UserSerializer):
                'name': role.name,
                'description': role.description,
                'team_id': team_role.object_id,
                'team_name': team_role.content_object.name
                'team_name': team_role.content_object.name,
                'team_organization_name': team_role.content_object.organization.name,
            }
            if role.content_type is not None:
                role_dict['resource_name'] = role.content_object.name
@@ -1757,9 +1765,9 @@ class CredentialSerializerCreate(CredentialSerializer):
                    'do not give either user or organization. Only valid for creation.'))
    organization = serializers.PrimaryKeyRelatedField(
        queryset=Organization.objects.all(),
        required=False, default=None, write_only=True, allow_null=True,
        help_text=_('Write-only field used to add organization to owner role. If provided, '
                    'do not give either team or team. Only valid for creation.'))
        required=False, default=None, allow_null=True,
        help_text=_('Inherit permissions from organization roles. If provided on creation, '
                    'do not give either user or team.'))

    class Meta:
        model = Credential
@@ -1985,8 +1993,6 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer):
        res = super(JobSerializer, self).get_related(obj)
        res.update(dict(
            job_events = reverse('api:job_job_events_list', args=(obj.pk,)),
            job_plays = reverse('api:job_job_plays_list', args=(obj.pk,)),
            job_tasks = reverse('api:job_job_tasks_list', args=(obj.pk,)),
            job_host_summaries = reverse('api:job_job_host_summaries_list', args=(obj.pk,)),
            activity_stream = reverse('api:job_activity_stream_list', args=(obj.pk,)),
            notifications = reverse('api:job_notifications_list', args=(obj.pk,)),
@@ -2365,7 +2371,7 @@ class WorkflowJobTemplateNodeSerializer(WorkflowNodeBaseSerializer):
        if view and view.request:
            request_method = view.request.method
        if request_method in ['PATCH']:
            obj = view.get_object()
            obj = self.instance
            char_prompts = copy.copy(obj.char_prompts)
            char_prompts.update(self.extract_char_prompts(data))
        else:
@@ -2415,7 +2421,7 @@ class WorkflowJobNodeSerializer(WorkflowNodeBaseSerializer):
        res['failure_nodes'] = reverse('api:workflow_job_node_failure_nodes_list', args=(obj.pk,))
        res['always_nodes'] = reverse('api:workflow_job_node_always_nodes_list', args=(obj.pk,))
        if obj.job:
            res['job'] = reverse('api:job_detail', args=(obj.job.pk,))
            res['job'] = obj.job.get_absolute_url()
        if obj.workflow_job:
            res['workflow_job'] = reverse('api:workflow_job_detail', args=(obj.workflow_job.pk,))
        return res
@@ -2497,8 +2503,8 @@ class JobEventSerializer(BaseSerializer):
        model = JobEvent
        fields = ('*', '-name', '-description', 'job', 'event', 'counter',
                  'event_display', 'event_data', 'event_level', 'failed',
                  'changed', 'uuid', 'host', 'host_name', 'parent', 'playbook',
                  'play', 'task', 'role', 'stdout', 'start_line', 'end_line',
                  'changed', 'uuid', 'parent_uuid', 'host', 'host_name', 'parent',
                  'playbook', 'play', 'task', 'role', 'stdout', 'start_line', 'end_line',
                  'verbosity')

    def get_related(self, obj):
@@ -2704,18 +2710,15 @@ class WorkflowJobLaunchSerializer(BaseSerializer):
    variables_needed_to_start = serializers.ReadOnlyField()
    survey_enabled = serializers.SerializerMethodField()
    extra_vars = VerbatimField(required=False, write_only=True)
    warnings = serializers.SerializerMethodField()
    workflow_job_template_data = serializers.SerializerMethodField()

    class Meta:
        model = WorkflowJobTemplate
        fields = ('can_start_without_user_input', 'extra_vars', 'warnings',
        fields = ('can_start_without_user_input', 'extra_vars',
                  'survey_enabled', 'variables_needed_to_start',
                  'node_templates_missing', 'node_prompts_rejected',
                  'workflow_job_template_data')

    def get_warnings(self, obj):
        return obj.get_warnings()

    def get_survey_enabled(self, obj):
        if obj:
            return obj.survey_enabled and 'spec' in obj.survey_spec
@@ -2999,10 +3002,14 @@ class ActivityStreamSerializer(BaseSerializer):
        for fk, __ in SUMMARIZABLE_FK_FIELDS.items():
            if not hasattr(obj, fk):
                continue
            allm2m = getattr(obj, fk).distinct()
            allm2m = getattr(obj, fk).all()
            if getattr(obj, fk).exists():
                rel[fk] = []
                id_list = []
                for thisItem in allm2m:
                    if getattr(thisItem, 'id', None) in id_list:
                        continue
                    id_list.append(getattr(thisItem, 'id', None))
                    if fk == 'custom_inventory_script':
                        rel[fk].append(reverse('api:inventory_script_detail', args=(thisItem.id,)))
                    else:
@@ -3018,7 +3025,7 @@ class ActivityStreamSerializer(BaseSerializer):
            try:
                if not hasattr(obj, fk):
                    continue
                allm2m = getattr(obj, fk).distinct()
                allm2m = getattr(obj, fk).all()
                if getattr(obj, fk).exists():
                    summary_fields[fk] = []
                    for thisItem in allm2m:
@@ -3047,6 +3054,9 @@ class ActivityStreamSerializer(BaseSerializer):
                            thisItemDict[field] = fval
                        if fk == 'group':
                            thisItemDict['inventory_id'] = getattr(thisItem, 'inventory_id', None)
                        if thisItemDict.get('id', None):
                            if thisItemDict.get('id', None) in [obj_dict.get('id', None) for obj_dict in summary_fields[fk]]:
                                continue
                        summary_fields[fk].append(thisItemDict)
            except ObjectDoesNotExist:
                pass
@@ -56,6 +56,10 @@ within all designated text fields of a model.
_Added in AWX 1.4_

(_Added in Ansible Tower 3.1.0_) Search across related fields:

    ?related__search=findme
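As an illustrative client call (the host, token, and the `created_by` related field are placeholders, not part of the documented API text):

```python
import requests

# Search organizations whose related created_by user has text fields
# (username, email, etc.) containing "findme".
resp = requests.get(
    'https://tower.example.com/api/v1/organizations/',
    params={'created_by__search': 'findme'},
    headers={'Authorization': 'Token abc123'},  # placeholder token
)
print(resp.json()['count'])
```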
## Filtering

Any additional query string parameters may be used to filter the list of

@@ -132,3 +136,8 @@ values.

Lists (for the `in` lookup) may be specified as a comma-separated list of
values.

(_Added in Ansible Tower 3.1.0_) Filtering based on the requesting user's
level of access by query string parameter.

* `role_level`: Level of role to filter on, such as `admin_role`
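An illustrative `role_level` request (URL and credentials are placeholders):

```python
import requests

# List only the job templates on which the requesting user holds admin_role.
resp = requests.get(
    'https://tower.example.com/api/v1/job_templates/',
    params={'role_level': 'admin_role'},
    auth=('admin', 'password'),  # basic-auth placeholder
)
for jt in resp.json()['results']:
    print(jt['name'])
```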
@@ -3,10 +3,11 @@
{% if new_in_14 %}> _Added in AWX 1.4_{% endif %}
{% if new_in_145 %}> _Added in Ansible Tower 1.4.5_{% endif %}
{% if new_in_148 %}> _Added in Ansible Tower 1.4.8_{% endif %}
{% if new_in_200 %}> _New in Ansible Tower 2.0.0_{% endif %}
{% if new_in_220 %}> _New in Ansible Tower 2.2.0_{% endif %}
{% if new_in_230 %}> _New in Ansible Tower 2.3.0_{% endif %}
{% if new_in_240 %}> _New in Ansible Tower 2.4.0_{% endif %}
{% if new_in_300 %}> _New in Ansible Tower 3.0.0_{% endif %}
{% if new_in_200 %}> _Added in Ansible Tower 2.0.0_{% endif %}
{% if new_in_220 %}> _Added in Ansible Tower 2.2.0_{% endif %}
{% if new_in_230 %}> _Added in Ansible Tower 2.3.0_{% endif %}
{% if new_in_240 %}> _Added in Ansible Tower 2.4.0_{% endif %}
{% if new_in_300 %}> _Added in Ansible Tower 3.0.0_{% endif %}
{% if new_in_310 %}> _New in Ansible Tower 3.1.0_{% endif %}
{% if deprecated %}> _This resource has been deprecated and will be removed in a future release_{% endif %}
{% endif %}
@@ -2,8 +2,9 @@ Launch a Job Template:
Make a POST request to this resource to launch the system job template.

An extra parameter `extra_vars` is suggested in order to pass extra parameters
to the system job task.
Variables specified inside of the parameter `extra_vars` are passed to the
system job task as command line parameters. These tasks can be run manually
on the host system via the `tower-manage` command.

For example on `cleanup_jobs` and `cleanup_activitystream`:

@@ -13,9 +14,17 @@ Which will act on data older than 30 days.

For `cleanup_facts`:

`{"older_than": "4w", `granularity`: "3d"}`
`{"older_than": "4w", "granularity": "3d"}`

Which will reduce the granularity of scan data to one scan per 3 days when the data is older than 4w.

Each individual system job task has its own default values, which are
applicable either when running it from the command line or launching its
system job template with empty `extra_vars`.

- Defaults for `cleanup_activitystream`: days=90
- Defaults for `cleanup_facts`: older_than="30d", granularity="1w"
- Defaults for `cleanup_jobs`: days=90

If successful, the response status code will be 202. If the job cannot be
launched, a 405 status code will be returned.
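A hypothetical launch request for `cleanup_jobs`, passing `extra_vars` and checking for the documented 202 status (host, template ID, and credentials are placeholders):

```python
import requests

# Launch the cleanup_jobs system job template, acting on data older than 30 days.
resp = requests.post(
    'https://tower.example.com/api/v1/system_job_templates/1/launch/',
    json={'extra_vars': {'days': 30}},
    auth=('admin', 'password'),
)
assert resp.status_code == 202  # 405 would mean the job cannot be launched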
@@ -13,6 +13,7 @@ Use the `format` query string parameter to specify the output format.
* Plain Text with ANSI color codes: `?format=ansi`
* JSON structure: `?format=json`
* Downloaded Plain Text: `?format=txt_download`
* Downloaded Plain Text with ANSI color codes: `?format=ansi_download`

(_New in Ansible Tower 2.0.0_) When using the Browsable API, HTML and JSON
formats, the `start_line` and `end_line` query string parameters can be used
@@ -21,7 +22,8 @@ to specify a range of line numbers to retrieve.

Use `dark=1` or `dark=0` as a query string parameter to force or disable a
dark background.

Files over {{ settings.STDOUT_MAX_BYTES_DISPLAY|filesizeformat }} (configurable) will not display in the browser. Use the `txt_download`
format to download the file directly to view it.
Files over {{ settings.STDOUT_MAX_BYTES_DISPLAY|filesizeformat }} (configurable)
will not display in the browser. Use the `txt_download` or `ansi_download`
formats to download the file directly to view it.

{% include "api/_new_in_awx.md" %}
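A hypothetical download of a job's standard output using the `ansi_download` format, which avoids the browser size limit described above (host, job ID, and credentials are placeholders):

```python
import requests

resp = requests.get(
    'https://tower.example.com/api/v1/jobs/42/stdout/',
    params={'format': 'ansi_download'},
    auth=('admin', 'password'),
)
# Write the raw bytes, preserving ANSI color codes.
with open('job_42_stdout.txt', 'wb') as f:
    f.write(resp.content)
```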
awx/api/templates/api/workflow_job_cancel.md (new file, 12 lines)
@@ -0,0 +1,12 @@
# Cancel Workflow Job

Make a GET request to this resource to determine if the workflow job can be
canceled. The response will include the following field:

* `can_cancel`: Indicates whether this workflow job is in a state that can
  be canceled (boolean, read-only)

Make a POST request to this endpoint to submit a request to cancel a pending
or running workflow job. The response status code will be 202 if the
request to cancel was successfully submitted, or 405 if the workflow job
cannot be canceled.
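A hypothetical client flow against this endpoint, checking `can_cancel` before submitting the cancel request (host, job ID, and credentials are placeholders):

```python
import requests

url = 'https://tower.example.com/api/v1/workflow_jobs/7/cancel/'
auth = ('admin', 'password')

# GET tells us whether the workflow job is in a cancelable state.
if requests.get(url, auth=auth).json().get('can_cancel'):
    resp = requests.post(url, auth=auth)
    assert resp.status_code == 202  # cancel request accepted
```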
awx/api/templates/api/workflow_job_relaunch.md (new file, 5 lines)
@@ -0,0 +1,5 @@
Relaunch a workflow job:

Make a POST request to this endpoint to launch a workflow job identical to the parent workflow job. This will spawn jobs, project updates, or inventory updates based on the unified job templates referenced in the workflow nodes in the workflow job. No POST data is accepted for this action.

If successful, the response status code will be 201 and serialized data of the new workflow job will be returned.
awx/api/templates/api/workflow_job_template_copy.md (new file, 34 lines)
@@ -0,0 +1,34 @@
Copy a Workflow Job Template:

Make a GET request to this resource to determine if the current user has
permission to copy the workflow_job_template and whether any linked
templates or prompted fields will be ignored due to permissions problems.
The response will include the following fields:

* `can_copy`: Flag indicating whether the active user has permission to make
  a copy of this workflow_job_template, provides same content as the
  workflow_job_template detail view summary_fields.user_capabilities.copy
  (boolean, read-only)
* `can_copy_without_user_input`: Flag indicating if the user should be
  prompted for confirmation before the copy is executed (boolean, read-only)
* `templates_unable_to_copy`: List of node ids of nodes that have a related
  job template, project, or inventory that the current user lacks permission
  to use and will be missing in workflow nodes of the copy (array, read-only)
* `inventories_unable_to_copy`: List of node ids of nodes that have a related
  prompted inventory that the current user lacks permission
  to use and will be missing in workflow nodes of the copy (array, read-only)
* `credentials_unable_to_copy`: List of node ids of nodes that have a related
  prompted credential that the current user lacks permission
  to use and will be missing in workflow nodes of the copy (array, read-only)

Make a POST request to this endpoint to save a copy of this
workflow_job_template. No POST data is accepted for this action.

If successful, the response status code will be 201. The response body will
contain serialized data about the new workflow_job_template, which will be
similar to the original workflow_job_template, but with an additional `@`
and a timestamp in the name.

All workflow nodes and connections in the original will also exist in the
copy. The nodes will be missing related resources if the user did not have
access to use them.
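A hypothetical copy flow using the fields above: inspect copy capability, warn about nodes that will lose resources, then create the copy (host, template ID, and credentials are placeholders):

```python
import requests

url = 'https://tower.example.com/api/v1/workflow_job_templates/5/copy/'
auth = ('admin', 'password')

info = requests.get(url, auth=auth).json()
if info['can_copy']:
    for node_id in info.get('templates_unable_to_copy', []):
        print('node %s will be missing its template in the copy' % node_id)
    resp = requests.post(url, auth=auth)  # no POST data accepted
    assert resp.status_code == 201
    print(resp.json()['name'])  # original name plus '@' and a timestamp
```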
@@ -12,8 +12,13 @@ workflow_job_template. The response will include the following fields:
  enabled survey (boolean, read-only)
* `extra_vars`: Text which is the `extra_vars` field of this workflow_job_template
  (text, read-only)
* `warnings`: JSON object listing warnings of all workflow_job_template_nodes
  contained in this workflow_job_template (JSON object, read-only)
* `node_templates_missing`: List of node ids of all nodes that have a
  null `unified_job_template`, which will cause their branches to stop
  execution (list, read-only)
* `node_prompts_rejected`: List of node ids of all nodes that have
  specified a field that will be rejected because its `unified_job_template`
  does not allow prompting for this field, this will not halt execution of
  the branch but the field will be ignored (list, read-only)
* `workflow_job_template_data`: JSON object listing general information of
  this workflow_job_template (JSON object, read-only)
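A sketch of what a GET response combining these fields might look like; all values here are invented for illustration:

```python
# Hypothetical shape of the workflow job template launch GET response.
example_launch_info = {
    'can_start_without_user_input': False,
    'extra_vars': '',
    'survey_enabled': True,
    'variables_needed_to_start': ['env'],
    'node_templates_missing': [12],   # these branches will stop execution
    'node_prompts_rejected': [15],    # prompt ignored, branch continues
    'workflow_job_template_data': {'id': 5, 'name': 'Deploy'},
}
```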
@@ -205,8 +205,6 @@ job_urls = patterns('awx.api.views',
    url(r'^(?P<pk>[0-9]+)/relaunch/$', 'job_relaunch'),
    url(r'^(?P<pk>[0-9]+)/job_host_summaries/$', 'job_job_host_summaries_list'),
    url(r'^(?P<pk>[0-9]+)/job_events/$', 'job_job_events_list'),
    url(r'^(?P<pk>[0-9]+)/job_plays/$', 'job_job_plays_list'),
    url(r'^(?P<pk>[0-9]+)/job_tasks/$', 'job_job_tasks_list'),
    url(r'^(?P<pk>[0-9]+)/activity_stream/$', 'job_activity_stream_list'),
    url(r'^(?P<pk>[0-9]+)/stdout/$', 'job_stdout'),
    url(r'^(?P<pk>[0-9]+)/notifications/$', 'job_notifications_list'),
@@ -1,82 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

from collections import OrderedDict
import copy
import functools

from rest_framework.response import Response
from rest_framework.settings import api_settings
from rest_framework import status


def paginated(method):
    """Given an method with a Django REST Framework API method signature
    (e.g. `def get(self, request, ...):`), abstract out boilerplate pagination
    duties.

    This causes the method to receive two additional keyword arguments:
    `limit`, and `offset`. The method expects a two-tuple to be
    returned, with a result list as the first item, and the total number
    of results (across all pages) as the second item.
    """
    @functools.wraps(method)
    def func(self, request, *args, **kwargs):
        # Manually spin up pagination.
        # How many results do we show?
        paginator_class = api_settings.DEFAULT_PAGINATION_CLASS
        limit = paginator_class.page_size
        if request.query_params.get(paginator_class.page_size_query_param, False):
            limit = request.query_params[paginator_class.page_size_query_param]
            if paginator_class.max_page_size:
                limit = min(paginator_class.max_page_size, limit)
        limit = int(limit)

        # Get the order parameter if it's given
        if request.query_params.get("ordering", False):
            ordering = request.query_params["ordering"]
        else:
            ordering = None

        # What page are we on?
        page = int(request.query_params.get('page', 1))
        offset = (page - 1) * limit

        # Add the limit, offset, page, and order variables to the keyword arguments
        # being sent to the underlying method.
        kwargs['limit'] = limit
        kwargs['offset'] = offset
        kwargs['ordering'] = ordering

        # Okay, call the underlying method.
        results, count, stat = method(self, request, *args, **kwargs)
        if stat is None:
            stat = status.HTTP_200_OK

        if stat == status.HTTP_200_OK:
            # Determine the next and previous pages, if any.
            prev, next_ = None, None
            if page > 1:
                get_copy = copy.copy(request.GET)
                get_copy['page'] = page - 1
                prev = '%s?%s' % (request.path, get_copy.urlencode())
            if count > offset + limit:
                get_copy = copy.copy(request.GET)
                get_copy['page'] = page + 1
                next_ = '%s?%s' % (request.path, get_copy.urlencode())

            # Compile the results into a dictionary with pagination
            # information.
            answer = OrderedDict((
                ('count', count),
                ('next', next_),
                ('previous', prev),
                ('results', results),
            ))
        else:
            answer = results

        # Okay, we're done; return response data.
        return Response(answer, status=stat)
    return func
awx/api/views.py (478 lines changed; file diff suppressed because it is too large)
@@ -16,7 +16,10 @@ class ConfConfig(AppConfig):
        from .settings import SettingsWrapper
        SettingsWrapper.initialize()
        if settings.LOG_AGGREGATOR_ENABLED:
            LOGGING = settings.LOGGING
            LOGGING['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler'
            configure_logging(settings.LOGGING_CONFIG, LOGGING)
            LOGGING_DICT = settings.LOGGING
            LOGGING_DICT['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler'
            if 'awx' in settings.LOG_AGGREGATOR_LOGGERS:
                if 'http_receiver' not in LOGGING_DICT['loggers']['awx']['handlers']:
                    LOGGING_DICT['loggers']['awx']['handlers'] += ['http_receiver']
            configure_logging(settings.LOGGING_CONFIG, LOGGING_DICT)
        # checks.register(SettingsWrapper._check_settings)
@@ -2,9 +2,6 @@
# All Rights Reserved.

# Django
from django.core.cache import cache
from django.core.signals import setting_changed
from django.dispatch import receiver
from django.utils.translation import ugettext_lazy as _

# Django REST Framework
@@ -12,7 +9,6 @@ from rest_framework.exceptions import APIException

# Tower
from awx.main.task_engine import TaskEnhancer
from awx.main.utils import memoize

__all__ = ['LicenseForbids', 'get_license', 'get_licensed_features',
           'feature_enabled', 'feature_exists']
@@ -23,18 +19,10 @@ class LicenseForbids(APIException):
    default_detail = _('Your Tower license does not allow that.')


@memoize(cache_key='_validated_license_data')
def _get_validated_license_data():
    return TaskEnhancer().validate_enhancements()


@receiver(setting_changed)
def _on_setting_changed(sender, **kwargs):
    # Clear cached result above when license changes.
    if kwargs.get('setting', None) == 'LICENSE':
        cache.delete('_validated_license_data')


def get_license(show_key=False):
    """Return a dictionary representing the active license on this Tower instance."""
    license_data = _get_validated_license_data()
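The removed `@memoize(cache_key=...)` pairing with the `setting_changed` receiver works because the memoized value lives under a fixed, known cache key, so any code that knows the key can invalidate it. A minimal sketch of that pattern using Django's cache framework; this is not the actual awx.main.utils.memoize implementation:

```python
from django.core.cache import cache

def memoize_by_key(cache_key):
    """Cache a function's result under a fixed key until someone deletes it."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            value = cache.get(cache_key)
            if value is None:
                value = fn(*args, **kwargs)
                cache.set(cache_key, value)
            return value
        return wrapper
    return decorator

# A signal handler can then drop the cached value by name:
# cache.delete('_validated_license_data')
```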
@@ -64,11 +64,11 @@ class Migration(migrations.Migration):
    dependencies = [
        ('conf', '0001_initial'),
        ('main', '0036_v310_jobevent_uuid'),
        ('main', '0034_v310_release'),
    ]

    run_before = [
        ('main', '0037_v310_remove_tower_settings'),
        ('main', '0035_v310_remove_tower_settings'),
    ]

    operations = [
@@ -50,6 +50,8 @@ class SettingFieldMixin(object):
        return obj

    def to_internal_value(self, value):
        if getattr(self, 'encrypted', False) and isinstance(value, basestring) and value.startswith('$encrypted$'):
            raise serializers.SkipField()
        obj = super(SettingFieldMixin, self).to_internal_value(value)
        return super(SettingFieldMixin, self).to_representation(obj)
@@ -1,6 +1,7 @@
# Python
import contextlib
import logging
import sys
import threading
import time

@@ -86,6 +87,7 @@ class SettingsWrapper(UserSettingsHolder):
        self.__dict__['_awx_conf_settings'] = self
        self.__dict__['_awx_conf_preload_expires'] = None
        self.__dict__['_awx_conf_preload_lock'] = threading.RLock()
        self.__dict__['_awx_conf_init_readonly'] = False

    def _get_supported_settings(self):
        return settings_registry.get_registered_settings()

@@ -110,6 +112,20 @@ class SettingsWrapper(UserSettingsHolder):
            return
        # Otherwise update local preload timeout.
        self.__dict__['_awx_conf_preload_expires'] = time.time() + SETTING_CACHE_TIMEOUT
        # Check for any settings that have been defined in Python files and
        # make those read-only to avoid overriding in the database.
        if not self._awx_conf_init_readonly and 'migrate_to_database_settings' not in sys.argv:
            defaults_snapshot = self._get_default('DEFAULTS_SNAPSHOT')
            for key in self._get_writeable_settings():
                init_default = defaults_snapshot.get(key, None)
                try:
                    file_default = self._get_default(key)
                except AttributeError:
                    file_default = None
                if file_default != init_default and file_default is not None:
                    logger.warning('Setting %s has been marked read-only!', key)
                    settings_registry._registry[key]['read_only'] = True
            self.__dict__['_awx_conf_init_readonly'] = True
        # If local preload timer has expired, check to see if another process
        # has already preloaded the cache and skip preloading if so.
        if cache.get('_awx_conf_preload_expires', empty) is not empty:

@@ -146,7 +162,10 @@ class SettingsWrapper(UserSettingsHolder):
    def _get_local(self, name):
        self._preload_cache()
        cache_key = Setting.get_cache_key(name)
        cache_value = cache.get(cache_key, empty)
        try:
            cache_value = cache.get(cache_key, empty)
        except ValueError:
            cache_value = empty
        logger.debug('cache get(%r, %r) -> %r', cache_key, empty, cache_value)
        if cache_value == SETTING_CACHE_NOTSET:
            value = empty

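Note: the new read-only pass works by comparing two defaults for each writable setting: the value captured in DEFAULTS_SNAPSHOT when the registry was built, and the value currently produced by the Python settings files. A difference means a settings file overrides the setting, so database edits would be silently ignored, and the setting is marked read-only instead. A runnable sketch with hypothetical setting names and a simplified registry:

defaults_snapshot = {'FOO_TIMEOUT': 30, 'BAR_URL': None}
file_defaults = {'FOO_TIMEOUT': 99, 'BAR_URL': None}  # FOO_TIMEOUT overridden in a settings file
registry = dict((k, {'read_only': False}) for k in defaults_snapshot)

for key, init_default in defaults_snapshot.items():
    file_default = file_defaults.get(key)
    if file_default is not None and file_default != init_default:
        registry[key]['read_only'] = True  # a Python file owns this setting now

assert registry['FOO_TIMEOUT']['read_only'] is True
assert registry['BAR_URL']['read_only'] is False
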
@@ -13,7 +13,7 @@ import awx.main.signals
from awx.conf import settings_registry
from awx.conf.models import Setting
from awx.conf.serializers import SettingSerializer
from awx.main.tasks import clear_cache_keys
from awx.main.tasks import process_cache_changes

logger = logging.getLogger('awx.conf.signals')

@@ -26,16 +26,13 @@ def handle_setting_change(key, for_delete=False):
    # When a setting changes or is deleted, remove its value from cache along
    # with any other settings that depend on it.
    setting_keys = [key]
    setting_key_dict = {}
    setting_key_dict[key] = key
    for dependent_key in settings_registry.get_dependent_settings(key):
        # Note: Doesn't handle multiple levels of dependencies!
        setting_keys.append(dependent_key)
        setting_key_dict[dependent_key] = dependent_key
    cache_keys = set([Setting.get_cache_key(k) for k in setting_keys])
    logger.debug('sending signals to delete cache keys(%r)', cache_keys)
    cache.delete_many(cache_keys)
    clear_cache_keys.delay(setting_key_dict)
    process_cache_changes.delay(list(cache_keys))

    # Send setting_changed signal with new value for each setting.
    for setting_key in setting_keys:

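Note: as the in-code comment says, the dependency fan-out is one level deep only. A condensed sketch of the key set that gets invalidated (registry contents hypothetical):

dependents = {'LOG_AGGREGATOR_HOST': ['LOG_AGGREGATOR_ENABLED']}

def keys_to_invalidate(key):
    setting_keys = [key]
    setting_keys.extend(dependents.get(key, []))  # direct dependents only
    return set('setting_%s' % k for k in setting_keys)

# Changing LOG_AGGREGATOR_HOST also drops the cached entry for its dependent:
print(keys_to_invalidate('LOG_AGGREGATOR_HOST'))
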
@@ -14,7 +14,10 @@ def argv_ready(argv):
class argv_placeholder(object):

    def __del__(self):
        argv_ready(sys.argv)
        try:
            argv_ready(sys.argv)
        except:
            pass


if hasattr(sys, 'argv'):

@@ -26,7 +26,7 @@ import uuid
from ansible.utils.display import Display

# Tower Display Callback
from tower_display_callback.events import event_context
from .events import event_context

__all__ = []

@@ -22,14 +22,76 @@ import base64
import contextlib
import datetime
import json
import logging
import multiprocessing
import os
import threading
import uuid
import memcache

# Kombu
from kombu import Connection, Exchange, Producer

__all__ = ['event_context']


class CallbackQueueEventDispatcher(object):

    def __init__(self):
        self.callback_connection = os.getenv('CALLBACK_CONNECTION', None)
        self.connection_queue = os.getenv('CALLBACK_QUEUE', '')
        self.connection = None
        self.exchange = None
        self._init_logging()

    def _init_logging(self):
        try:
            self.job_callback_debug = int(os.getenv('JOB_CALLBACK_DEBUG', '0'))
        except ValueError:
            self.job_callback_debug = 0
        self.logger = logging.getLogger('awx.plugins.callback.job_event_callback')
        if self.job_callback_debug >= 2:
            self.logger.setLevel(logging.DEBUG)
        elif self.job_callback_debug >= 1:
            self.logger.setLevel(logging.INFO)
        else:
            self.logger.setLevel(logging.WARNING)
        handler = logging.StreamHandler()
        formatter = logging.Formatter('%(levelname)-8s %(process)-8d %(message)s')
        handler.setFormatter(formatter)
        self.logger.addHandler(handler)
        self.logger.propagate = False

    def dispatch(self, obj):
        if not self.callback_connection or not self.connection_queue:
            return
        active_pid = os.getpid()
        for retry_count in xrange(4):
            try:
                if not hasattr(self, 'connection_pid'):
                    self.connection_pid = active_pid
                if self.connection_pid != active_pid:
                    self.connection = None
                if self.connection is None:
                    self.connection = Connection(self.callback_connection)
                    self.exchange = Exchange(self.connection_queue, type='direct')

                producer = Producer(self.connection)
                producer.publish(obj,
                                 serializer='json',
                                 compression='bzip2',
                                 exchange=self.exchange,
                                 declare=[self.exchange],
                                 routing_key=self.connection_queue)
                return
            except Exception, e:
                self.logger.info('Publish Job Event Exception: %r, retry=%d', e,
                                 retry_count, exc_info=True)
                retry_count += 1
                if retry_count >= 3:
                    break


class EventContext(object):
    '''
    Store global and local (per thread/process) data associated with callback
@@ -38,6 +100,9 @@ class EventContext(object):

    def __init__(self):
        self.display_lock = multiprocessing.RLock()
        self.dispatcher = CallbackQueueEventDispatcher()
        cache_actual = os.getenv('CACHE', '127.0.0.1:11211')
        self.cache = memcache.Client([cache_actual], debug=0)

    def add_local(self, **kwargs):
        if not hasattr(self, '_local'):

@@ -111,10 +176,12 @@ class EventContext(object):
            if event_data.get(key, False):
                event = key
                break

        max_res = int(os.getenv("MAX_EVENT_RES", 700000))
        if event not in ('playbook_on_stats',) and "res" in event_data and len(str(event_data['res'])) > max_res:
            event_data['res'] = {}
        event_dict = dict(event=event, event_data=event_data)
        for key in event_data.keys():
            if key in ('job_id', 'ad_hoc_command_id', 'uuid', 'parent_uuid', 'created', 'artifact_data'):
            if key in ('job_id', 'ad_hoc_command_id', 'uuid', 'parent_uuid', 'created',):
                event_dict[key] = event_data.pop(key)
            elif key in ('verbosity', 'pid'):
                event_dict[key] = event_data[key]

@@ -136,7 +203,9 @@ class EventContext(object):
        fileobj.flush()

    def dump_begin(self, fileobj):
        self.dump(fileobj, self.get_begin_dict())
        begin_dict = self.get_begin_dict()
        self.cache.set(":1:ev-{}".format(begin_dict['uuid']), begin_dict)
        self.dump(fileobj, {'uuid': begin_dict['uuid']})

    def dump_end(self, fileobj):
        self.dump(fileobj, self.get_end_dict(), flush=True)

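Note: the connection_pid bookkeeping in dispatch() is a fork-safety guard: a kombu Connection created in a parent process must not be reused by a forked child, so the dispatcher records the PID that owned the connection and rebuilds it when the current PID differs. A stripped-down sketch of just that check (object() stands in for kombu.Connection):

import os

class ForkSafeHolder(object):
    connection = None
    connection_pid = None

    def ensure_connection(self):
        pid = os.getpid()
        if self.connection_pid != pid:  # process forked since we connected
            self.connection = None
            self.connection_pid = pid
        if self.connection is None:
            self.connection = object()  # stands in for kombu.Connection(url)
        return self.connection
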
@@ -19,8 +19,6 @@ from __future__ import (absolute_import, division, print_function)

# Python
import contextlib
import copy
import re
import sys
import uuid

@@ -29,8 +27,8 @@ from ansible.plugins.callback import CallbackBase
from ansible.plugins.callback.default import CallbackModule as DefaultCallbackModule

# Tower Display Callback
from tower_display_callback.events import event_context
from tower_display_callback.minimal import CallbackModule as MinimalCallbackModule
from .events import event_context
from .minimal import CallbackModule as MinimalCallbackModule


class BaseCallbackModule(CallbackBase):

@@ -77,45 +75,11 @@ class BaseCallbackModule(CallbackBase):
        super(BaseCallbackModule, self).__init__()
        self.task_uuids = set()

    def censor_result(self, res, no_log=False):
        if not isinstance(res, dict):
            if no_log:
                return "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
            return res
        if res.get('_ansible_no_log', no_log):
            new_res = {}
            for k in self.CENSOR_FIELD_WHITELIST:
                if k in res:
                    new_res[k] = res[k]
                if k == 'cmd' and k in res:
                    if isinstance(res['cmd'], list):
                        res['cmd'] = ' '.join(res['cmd'])
                    if re.search(r'\s', res['cmd']):
                        new_res['cmd'] = re.sub(r'^(([^\s\\]|\\\s)+).*$',
                                                r'\1 <censored>',
                                                res['cmd'])
            new_res['censored'] = "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
            res = new_res
        if 'results' in res:
            if isinstance(res['results'], list):
                for i in xrange(len(res['results'])):
                    res['results'][i] = self.censor_result(res['results'][i], res.get('_ansible_no_log', no_log))
            elif res.get('_ansible_no_log', False):
                res['results'] = "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
        return res

    @contextlib.contextmanager
    def capture_event_data(self, event, **event_data):

        event_data.setdefault('uuid', str(uuid.uuid4()))

        if 'res' in event_data:
            event_data['res'] = self.censor_result(copy.deepcopy(event_data['res']))
        res = event_data.get('res', None)
        if res and isinstance(res, dict):
            if 'artifact_data' in res:
                event_data['artifact_data'] = res['artifact_data']

        if event not in self.EVENTS_WITHOUT_TASK:
            task = event_data.pop('task', None)
        else:

@@ -262,7 +226,7 @@ class BaseCallbackModule(CallbackBase):
        if task_uuid in self.task_uuids:
            # FIXME: When this task UUID repeats, it means the play is using the
            # free strategy, so different hosts may be running different tasks
            # within a play.
            return
        self.task_uuids.add(task_uuid)
        self.set_task(task)

@@ -319,6 +283,9 @@ class BaseCallbackModule(CallbackBase):
        with self.capture_event_data('playbook_on_notify', **event_data):
            super(BaseCallbackModule, self).v2_playbook_on_notify(result, handler)

    '''
    ansible_stats is, retroactively, added in 2.2
    '''
    def v2_playbook_on_stats(self, stats):
        self.clear_play()
        # FIXME: Add count of plays/tasks.

@@ -329,7 +296,9 @@ class BaseCallbackModule(CallbackBase):
            ok=stats.ok,
            processed=stats.processed,
            skipped=stats.skipped,
            artifact_data=stats.custom.get('_run', {}) if hasattr(stats, 'custom') else {}
        )

        with self.capture_event_data('playbook_on_stats', **event_data):
            super(BaseCallbackModule, self).v2_playbook_on_stats(stats)

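Note: the removed censor_result helper reduced a no_log shell command to its first whitespace-free token. The regex that did the trimming reads opaquely, so here is what it evaluates to on a sample command:

import re

cmd = 'mysql -u root -psecret'
if re.search(r'\s', cmd):
    # Keep the leading run of non-space characters, censor the rest.
    print(re.sub(r'^(([^\s\\]|\\\s)+).*$', r'\1 <censored>', cmd))
# prints: mysql <censored>
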
File diff suppressed because it is too large: awx/locale/en-us/LC_MESSAGES/django.po (new file, 3637 lines)
File diff suppressed because it is too large: awx/locale/fr/LC_MESSAGES/django.po (new file, 4053 lines)
File diff suppressed because it is too large: awx/locale/ja/LC_MESSAGES/django.po (new file, 3808 lines)

@@ -285,7 +285,7 @@ class BaseAccess(object):

        return True  # User has access to both, permission check passed

    def check_license(self, add_host=False, feature=None, check_expiration=True):
    def check_license(self, add_host_name=None, feature=None, check_expiration=True):
        validation_info = TaskEnhancer().validate_enhancements()
        if ('test' in sys.argv or 'py.test' in sys.argv[0] or 'jenkins' in sys.argv) and not os.environ.get('SKIP_LICENSE_FIXUP_FOR_TEST', ''):
            validation_info['free_instances'] = 99999999

@@ -299,11 +299,14 @@ class BaseAccess(object):

        free_instances = validation_info.get('free_instances', 0)
        available_instances = validation_info.get('available_instances', 0)
        if add_host and free_instances == 0:
            raise PermissionDenied(_("License count of %s instances has been reached.") % available_instances)
        elif add_host and free_instances < 0:
            raise PermissionDenied(_("License count of %s instances has been exceeded.") % available_instances)
        elif not add_host and free_instances < 0:

        if add_host_name:
            host_exists = Host.objects.filter(name=add_host_name).exists()
            if not host_exists and free_instances == 0:
                raise PermissionDenied(_("License count of %s instances has been reached.") % available_instances)
            elif not host_exists and free_instances < 0:
                raise PermissionDenied(_("License count of %s instances has been exceeded.") % available_instances)
        elif not add_host_name and free_instances < 0:
            raise PermissionDenied(_("Host count exceeds available instances."))

        if feature is not None:

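Note: the switch from add_host=True to add_host_name changes what counts against the license: re-adding a host name that already exists no longer consumes a free instance, because the name is already counted. A simplified restatement of the new branch logic (exceptions collapsed to ValueError, host_exists standing in for the Host.objects lookup):

def check_license(add_host_name, free_instances, host_exists):
    if add_host_name:
        if not host_exists and free_instances == 0:
            raise ValueError('License count reached.')
        elif not host_exists and free_instances < 0:
            raise ValueError('License count exceeded.')
    elif free_instances < 0:
        raise ValueError('Host count exceeds available instances.')

# Re-adding an existing name is allowed even with zero free instances:
check_license('web1.example.com', 0, host_exists=True)
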
@@ -353,7 +356,7 @@ class BaseAccess(object):

            # Shortcuts in certain cases by deferring to earlier property
            if display_method == 'schedule':
                user_capabilities['schedule'] = user_capabilities['edit']
                user_capabilities['schedule'] = user_capabilities['start']
                continue
            elif display_method == 'delete' and not isinstance(obj, (User, UnifiedJob)):
                user_capabilities['delete'] = user_capabilities['edit']

@@ -363,27 +366,30 @@ class BaseAccess(object):
                continue

            # Compute permission
            data = {}
            access_method = getattr(self, "can_%s" % method)
            if method in ['change']:  # 3 args
                user_capabilities[display_method] = access_method(obj, data)
            elif method in ['delete', 'run_ad_hoc_commands', 'copy']:
                user_capabilities[display_method] = access_method(obj)
            elif method in ['start']:
                user_capabilities[display_method] = access_method(obj, validate_license=False)
            elif method in ['add']:  # 2 args with data
                user_capabilities[display_method] = access_method(data)
            elif method in ['attach', 'unattach']:  # parent/sub-object call
                if type(parent_obj) == Team:
                    relationship = 'parents'
                    parent_obj = parent_obj.member_role
                else:
                    relationship = 'members'
                user_capabilities[display_method] = access_method(
                    obj, parent_obj, relationship, skip_sub_obj_read_check=True, data=data)
            user_capabilities[display_method] = self.get_method_capability(method, obj, parent_obj)

        return user_capabilities

    def get_method_capability(self, method, obj, parent_obj):
        if method in ['change']:  # 3 args
            return self.can_change(obj, {})
        elif method in ['delete', 'run_ad_hoc_commands', 'copy']:
            access_method = getattr(self, "can_%s" % method)
            return access_method(obj)
        elif method in ['start']:
            return self.can_start(obj, validate_license=False)
        elif method in ['add']:  # 2 args with data
            return self.can_add({})
        elif method in ['attach', 'unattach']:  # parent/sub-object call
            access_method = getattr(self, "can_%s" % method)
            if type(parent_obj) == Team:
                relationship = 'parents'
                parent_obj = parent_obj.member_role
            else:
                relationship = 'members'
            return access_method(obj, parent_obj, relationship, skip_sub_obj_read_check=True, data={})
        return False


class UserAccess(BaseAccess):
    '''

@@ -402,23 +408,24 @@ class UserAccess(BaseAccess):

    def get_queryset(self):
        if self.user.is_superuser or self.user.is_system_auditor:
            return User.objects.all()
            qs = User.objects.all()
        if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and \
        elif settings.ORG_ADMINS_CAN_SEE_ALL_USERS and \
                (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()):
            return User.objects.all()
        return (
            User.objects.filter(
                pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members')
            ) |
            User.objects.filter(
                pk=self.user.id
            ) |
            User.objects.filter(
                pk__in=Role.objects.filter(singleton_name__in = [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members')
            )
        ).distinct()
            qs = User.objects.all()
        else:
            qs = (
                User.objects.filter(
                    pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members')
                ) |
                User.objects.filter(
                    pk=self.user.id
                ) |
                User.objects.filter(
                    pk__in=Role.objects.filter(singleton_name__in = [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members')
                )
            ).distinct()
        return qs.prefetch_related('profile')


    def can_add(self, data):

@@ -485,7 +492,7 @@ class OrganizationAccess(BaseAccess):

    def get_queryset(self):
        qs = self.model.accessible_objects(self.user, 'read_role')
        return qs.select_related('created_by', 'modified_by').all()
        return qs.prefetch_related('created_by', 'modified_by').all()

    @check_superuser
    def can_change(self, obj, data):

@@ -608,7 +615,7 @@ class HostAccess(BaseAccess):
            return False

        # Check to see if we have enough licenses
        self.check_license(add_host=True)
        self.check_license(add_host_name=data.get('name', None))
        return True

    def can_change(self, obj, data):

@@ -616,6 +623,11 @@ class HostAccess(BaseAccess):
        inventory_pk = get_pk_from_dict(data, 'inventory')
        if obj and inventory_pk and obj.inventory.pk != inventory_pk:
            raise PermissionDenied(_('Unable to change inventory on a host.'))

        # Prevent renaming a host that might exceed license count
        if 'name' in data:
            self.check_license(add_host_name=data['name'])

        # Checks for admin or change permission on inventory, controls whether
        # the user can edit variable data.
        return obj and self.user in obj.inventory.admin_role

@@ -837,15 +849,7 @@ class CredentialAccess(BaseAccess):
    def can_change(self, obj, data):
        if not obj:
            return False

        # Cannot change the organization for a credential after it's been created
        if data and 'organization' in data:
            organization_pk = get_pk_from_dict(data, 'organization')
            if (organization_pk and (not obj.organization or organization_pk != obj.organization.id)) \
                    or (not organization_pk and obj.organization):
                return False

        return self.user in obj.admin_role
        return self.user in obj.admin_role and self.check_related('organization', Organization, data, obj=obj)

    def can_delete(self, obj):
        # Unassociated credentials may be marked deleted by anyone, though we

@@ -990,8 +994,6 @@ class ProjectUpdateAccess(BaseAccess):

    @check_superuser
    def can_cancel(self, obj):
        if not obj.can_cancel:
            return False
        if self.user == obj.created_by:
            return True
        # Project updates cascade delete with project, admin role descends from org admin

@@ -1048,7 +1050,7 @@ class JobTemplateAccess(BaseAccess):
                Project.accessible_objects(self.user, 'use_role').exists() or
                Inventory.accessible_objects(self.user, 'use_role').exists())

        # if reference_obj is provided, determine if it can be coppied
        # if reference_obj is provided, determine if it can be copied
        reference_obj = data.get('reference_obj', None)

        if 'job_type' in data and data['job_type'] == PERM_INVENTORY_SCAN:

@@ -1223,7 +1225,7 @@ class JobAccess(BaseAccess):
    model = Job

    def get_queryset(self):
        qs = self.model.objects.distinct()
        qs = self.model.objects
        qs = qs.select_related('created_by', 'modified_by', 'job_template', 'inventory',
                               'project', 'credential', 'cloud_credential', 'job_template')
        qs = qs.prefetch_related('unified_job_template')

@@ -1370,7 +1372,6 @@ class SystemJobAccess(BaseAccess):
        return False  # no relaunching of system jobs


# TODO:
class WorkflowJobTemplateNodeAccess(BaseAccess):
    '''
    I can see/use a WorkflowJobTemplateNode if I have read permission

@@ -1401,25 +1402,23 @@ class WorkflowJobTemplateNodeAccess(BaseAccess):
        qs = self.model.objects.filter(
            workflow_job_template__in=WorkflowJobTemplate.accessible_objects(
                self.user, 'read_role'))
        qs = qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes')
        qs = qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes',
                                 'unified_job_template')
        return qs

    def can_use_prompted_resources(self, data):
        if not self.check_related('credential', Credential, data):
            return False
        if not self.check_related('inventory', Inventory, data):
            return False
        return True
        return (
            self.check_related('credential', Credential, data, role_field='use_role') and
            self.check_related('inventory', Inventory, data, role_field='use_role'))

    @check_superuser
    def can_add(self, data):
        if not data:  # So the browseable API will work
            return True
        if not self.check_related('workflow_job_template', WorkflowJobTemplate, data, mandatory=True):
            return False
        if not self.can_use_prompted_resources(data):
            return False
        return True
        return (
            self.check_related('workflow_job_template', WorkflowJobTemplate, data, mandatory=True) and
            self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role') and
            self.can_use_prompted_resources(data))

    def wfjt_admin(self, obj):
        if not obj.workflow_job_template:

@@ -1490,8 +1489,14 @@ class WorkflowJobNodeAccess(BaseAccess):
        qs = qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes')
        return qs

    @check_superuser
    def can_add(self, data):
        return False
        if data is None:  # Hide direct creation in API browser
            return False
        return (
            self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role') and
            self.check_related('credential', Credential, data, role_field='use_role') and
            self.check_related('inventory', Inventory, data, role_field='use_role'))

    def can_change(self, obj, data):
        return False

@@ -1540,24 +1545,30 @@ class WorkflowJobTemplateAccess(BaseAccess):

    def can_copy(self, obj):
        if self.save_messages:
            wfjt_errors = {}
            missing_ujt = []
            missing_credentials = []
            missing_inventories = []
            qs = obj.workflow_job_template_nodes
            qs.select_related('unified_job_template', 'inventory', 'credential')
            for node in qs.all():
                node_errors = {}
                if node.inventory and self.user not in node.inventory.use_role:
                    node_errors['inventory'] = 'Prompted inventory %s can not be copied.' % node.inventory.name
                    missing_inventories.append(node.inventory.name)
                if node.credential and self.user not in node.credential.use_role:
                    node_errors['credential'] = 'Prompted credential %s can not be copied.' % node.credential.name
                    missing_credentials.append(node.credential.name)
                ujt = node.unified_job_template
                if ujt and not self.user.can_access(UnifiedJobTemplate, 'start', ujt, validate_license=False):
                    node_errors['unified_job_template'] = (
                        'Prompted %s %s can not be copied.' % (ujt._meta.verbose_name_raw, ujt.name))
                    missing_ujt.append(ujt.name)
                if node_errors:
                    wfjt_errors[node.id] = node_errors
            self.messages.update(wfjt_errors)
            if missing_ujt:
                self.messages['templates_unable_to_copy'] = missing_ujt
            if missing_credentials:
                self.messages['credentials_unable_to_copy'] = missing_credentials
            if missing_inventories:
                self.messages['inventories_unable_to_copy'] = missing_inventories

        return self.check_related('organization', Organization, {}, obj=obj, mandatory=True)
        return self.check_related('organization', Organization, {'reference_obj': obj}, mandatory=True)

    def can_start(self, obj, validate_license=True):
        if validate_license:

@@ -1623,22 +1634,50 @@ class WorkflowJobAccess(BaseAccess):
    def can_change(self, obj, data):
        return False

    @check_superuser
    def can_delete(self, obj):
        if obj.workflow_job_template is None:
            # only superusers can delete orphaned workflow jobs
            return self.user.is_superuser
        return self.user in obj.workflow_job_template.admin_role
        return (obj.workflow_job_template and
                obj.workflow_job_template.organization and
                self.user in obj.workflow_job_template.organization.admin_role)

    def get_method_capability(self, method, obj, parent_obj):
        if method == 'start':
            # Return simplistic permission, will perform detailed check on POST
            if not obj.workflow_job_template:
                return self.user.is_superuser
            return self.user in obj.workflow_job_template.execute_role
        return super(WorkflowJobAccess, self).get_method_capability(method, obj, parent_obj)

    def can_start(self, obj, validate_license=True):
        if validate_license:
            self.check_license()
            if obj.survey_enabled:
                self.check_license(feature='surveys')

        if self.user.is_superuser:
            return True

        return (obj.workflow_job_template and self.user in obj.workflow_job_template.execute_role)
        wfjt = obj.workflow_job_template
        # only superusers can relaunch orphans
        if not wfjt:
            return False

        # execute permission to WFJT is mandatory for any relaunch
        if self.user not in wfjt.execute_role:
            return False

        # user's WFJT access doesn't guarantee permission to launch, introspect nodes
        return self.can_recreate(obj)

    def can_recreate(self, obj):
        node_qs = obj.workflow_job_nodes.all().prefetch_related('inventory', 'credential', 'unified_job_template')
        node_access = WorkflowJobNodeAccess(user=self.user)
        wj_add_perm = True
        for node in node_qs:
            if not node_access.can_add({'reference_obj': node}):
                wj_add_perm = False
        if not wj_add_perm and self.save_messages:
            self.messages['workflow_job_template'] = _('You do not have permission to the workflow job '
                                                       'resources required for relaunch.')
        return wj_add_perm

    def can_cancel(self, obj):
        if not obj.can_cancel:

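Note: taken together, can_start and can_recreate implement a two-part relaunch rule: execute permission on the workflow job template is necessary, and each node's prompted inventory, credential, and unified job template must individually pass the node-level can_add check. A condensed restatement (node_passes is a stand-in for WorkflowJobNodeAccess.can_add with a 'reference_obj'):

def may_relaunch(user, workflow_job, node_passes):
    wfjt = workflow_job.workflow_job_template
    if wfjt is None:
        return False  # orphaned jobs: superuser-only, handled earlier
    if user not in wfjt.execute_role:
        return False  # execute on the WFJT is mandatory
    return all(node_passes(user, node)
               for node in workflow_job.workflow_job_nodes.all())
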
@@ -1766,21 +1805,15 @@ class JobEventAccess(BaseAccess):
    model = JobEvent

    def get_queryset(self):
        qs = self.model.objects.all()
        qs = qs.select_related('job', 'job__job_template', 'host', 'parent')
        qs = qs.prefetch_related('hosts', 'children')

        # Filter certain "internal" events generated by async polling.
        qs = qs.exclude(event__in=('runner_on_ok', 'runner_on_failed'),
                        event_data__icontains='"ansible_job_id": "',
                        event_data__contains='"module_name": "async_status"')
        qs = self.model.objects
        qs = qs.prefetch_related('hosts', 'children', 'job__job_template', 'host')

        if self.user.is_superuser or self.user.is_system_auditor:
            return qs.all()

        job_qs = self.user.get_queryset(Job)
        host_qs = self.user.get_queryset(Host)
        return qs.filter(Q(host__isnull=True) | Q(host__in=host_qs), job__in=job_qs)
        return qs.filter(
            Q(host__inventory__in=Inventory.accessible_pk_qs(self.user, 'read_role')) |
            Q(job__job_template__in=JobTemplate.accessible_pk_qs(self.user, 'read_role')))

    def can_add(self, data):
        return False

@@ -1795,29 +1828,29 @@ class JobEventAccess(BaseAccess):
class UnifiedJobTemplateAccess(BaseAccess):
    '''
    I can see a unified job template whenever I can see the same project,
    inventory source or job template. Unified job templates do not include
    projects without SCM configured or inventory sources without a cloud
    source.
    inventory source, WFJT, or job template. Unified job templates do not include
    inventory sources without a cloud source.
    '''

    model = UnifiedJobTemplate

    def get_queryset(self):
        qs = self.model.objects.all()
        project_qs = self.user.get_queryset(Project).filter(scm_type__in=[s[0] for s in Project.SCM_TYPE_CHOICES])
        inventory_source_qs = self.user.get_queryset(InventorySource).filter(source__in=CLOUD_INVENTORY_SOURCES)
        job_template_qs = self.user.get_queryset(JobTemplate)
        system_job_template_qs = self.user.get_queryset(SystemJobTemplate)
        workflow_job_template_qs = self.user.get_queryset(WorkflowJobTemplate)
        qs = qs.filter(Q(Project___in=project_qs) |
                       Q(InventorySource___in=inventory_source_qs) |
                       Q(JobTemplate___in=job_template_qs) |
                       Q(systemjobtemplate__in=system_job_template_qs) |
                       Q(workflowjobtemplate__in=workflow_job_template_qs))
        if self.user.is_superuser or self.user.is_system_auditor:
            qs = self.model.objects.all()
        else:
            qs = self.model.objects.filter(
                Q(pk__in=self.model.accessible_pk_qs(self.user, 'read_role')) |
                Q(inventorysource__inventory__id__in=Inventory._accessible_pk_qs(
                    Inventory, self.user, 'read_role')))
        qs = qs.exclude(inventorysource__source="")

        qs = qs.select_related(
            'created_by',
            'modified_by',
            'next_schedule',
        )
        # prefetch last/current jobs so we get the real instance
        qs = qs.prefetch_related(
            'last_job',
            'current_job',
        )

@@ -1849,25 +1882,23 @@ class UnifiedJobAccess(BaseAccess):
    model = UnifiedJob

    def get_queryset(self):
        qs = self.model.objects.all()
        project_update_qs = self.user.get_queryset(ProjectUpdate)
        inventory_update_qs = self.user.get_queryset(InventoryUpdate).filter(source__in=CLOUD_INVENTORY_SOURCES)
        job_qs = self.user.get_queryset(Job)
        ad_hoc_command_qs = self.user.get_queryset(AdHocCommand)
        system_job_qs = self.user.get_queryset(SystemJob)
        workflow_job_qs = self.user.get_queryset(WorkflowJob)
        qs = qs.filter(Q(ProjectUpdate___in=project_update_qs) |
                       Q(InventoryUpdate___in=inventory_update_qs) |
                       Q(Job___in=job_qs) |
                       Q(AdHocCommand___in=ad_hoc_command_qs) |
                       Q(SystemJob___in=system_job_qs) |
                       Q(WorkflowJob___in=workflow_job_qs))
        qs = qs.select_related(
        if self.user.is_superuser or self.user.is_system_auditor:
            qs = self.model.objects.all()
        else:
            inv_pk_qs = Inventory._accessible_pk_qs(Inventory, self.user, 'read_role')
            org_auditor_qs = Organization.objects.filter(
                Q(admin_role__members=self.user) | Q(auditor_role__members=self.user))
            qs = self.model.objects.filter(
                Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role')) |
                Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs) |
                Q(adhoccommand__inventory__id__in=inv_pk_qs) |
                Q(job__inventory__organization__in=org_auditor_qs) |
                Q(job__project__organization__in=org_auditor_qs)
            )
        qs = qs.prefetch_related(
            'created_by',
            'modified_by',
            'unified_job_node__workflow_job',
        )
        qs = qs.prefetch_related(
            'unified_job_template',
        )

@@ -1923,11 +1954,17 @@ class ScheduleAccess(BaseAccess):

    @check_superuser
    def can_add(self, data):
        return self.check_related('unified_job_template', UnifiedJobTemplate, data, mandatory=True)
        return self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role', mandatory=True)

    @check_superuser
    def can_change(self, obj, data):
        return self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, mandatory=True)
        if self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, mandatory=True):
            return True
        # Users with execute role can modify the schedules they created
        return (
            obj.created_by == self.user and
            self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, role_field='execute_role', mandatory=True))


    def can_delete(self, obj):
        return self.can_change(obj, {})

@@ -1989,9 +2026,9 @@ class NotificationAccess(BaseAccess):
    model = Notification

    def get_queryset(self):
        qs = self.model.objects.all()
        qs = self.model.objects.prefetch_related('notification_template')
        if self.user.is_superuser or self.user.is_system_auditor:
            return qs
            return qs.all()
        return self.model.objects.filter(
            Q(notification_template__organization__in=self.user.admin_of_organizations) |
            Q(notification_template__organization__in=self.user.auditor_of_organizations)

@@ -2067,11 +2104,12 @@ class ActivityStreamAccess(BaseAccess):
        - custom inventory scripts
        '''
        qs = self.model.objects.all()
        qs = qs.select_related('actor')
        qs = qs.prefetch_related('organization', 'user', 'inventory', 'host', 'group', 'inventory_source',
                                 'inventory_update', 'credential', 'team', 'project', 'project_update',
                                 'permission', 'job_template', 'job', 'ad_hoc_command',
                                 'notification_template', 'notification', 'label', 'role')
                                 'job_template', 'job', 'ad_hoc_command',
                                 'notification_template', 'notification', 'label', 'role', 'actor',
                                 'schedule', 'custom_inventory_script', 'unified_job_template',
                                 'workflow_job_template', 'workflow_job')
        if self.user.is_superuser or self.user.is_system_auditor:
            return qs.all()

@@ -2084,6 +2122,7 @@ class ActivityStreamAccess(BaseAccess):
        project_set = Project.accessible_objects(self.user, 'read_role')
        jt_set = JobTemplate.accessible_objects(self.user, 'read_role')
        team_set = Team.accessible_objects(self.user, 'read_role')
        wfjt_set = WorkflowJobTemplate.accessible_objects(self.user, 'read_role')

        return qs.filter(
            Q(ad_hoc_command__inventory__in=inventory_set) |

@@ -2101,6 +2140,9 @@ class ActivityStreamAccess(BaseAccess):
            Q(project_update__project__in=project_set) |
            Q(job_template__in=jt_set) |
            Q(job__job_template__in=jt_set) |
            Q(workflow_job_template__in=wfjt_set) |
            Q(workflow_job_template_node__workflow_job_template__in=wfjt_set) |
            Q(workflow_job__workflow_job_template__in=wfjt_set) |
            Q(notification_template__organization__in=auditing_orgs) |
            Q(notification__notification_template__organization__in=auditing_orgs) |
            Q(label__organization__in=auditing_orgs) |

@@ -134,6 +134,7 @@ register(
register(
    'AWX_PROOT_HIDE_PATHS',
    field_class=fields.StringListField,
    required=False,
    label=_('Paths to hide from isolated jobs'),
    help_text=_('Additional paths to hide from isolated processes.'),
    category=_('Jobs'),

@@ -143,6 +144,7 @@ register(
register(
    'AWX_PROOT_SHOW_PATHS',
    field_class=fields.StringListField,
    required=False,
    label=_('Paths to expose to isolated jobs'),
    help_text=_('Whitelist of paths that would otherwise be hidden to expose to isolated jobs.'),
    category=_('Jobs'),

@@ -182,6 +184,7 @@ register(
register(
    'AWX_ANSIBLE_CALLBACK_PLUGINS',
    field_class=fields.StringListField,
    required=False,
    label=_('Ansible Callback Plugins'),
    help_text=_('List of paths to search for extra callback plugins to be used when running jobs.'),
    category=_('Jobs'),

@@ -228,8 +231,8 @@ register(
    'LOG_AGGREGATOR_HOST',
    field_class=fields.CharField,
    allow_null=True,
    label=_('Logging Aggregator Receiving Host'),
    help_text=_('External host maintain a log collector to send logs to'),
    label=_('Logging Aggregator'),
    help_text=_('Hostname/IP where external logs will be sent to.'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -237,8 +240,8 @@ register(
    'LOG_AGGREGATOR_PORT',
    field_class=fields.IntegerField,
    allow_null=True,
    label=_('Logging Aggregator Receiving Port'),
    help_text=_('Port that the log collector is listening on'),
    label=_('Logging Aggregator Port'),
    help_text=_('Port on Logging Aggregator to send logs to (if required).'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -247,8 +250,8 @@ register(
    field_class=fields.ChoiceField,
    choices=['logstash', 'splunk', 'loggly', 'sumologic', 'other'],
    allow_null=True,
    label=_('Logging Aggregator Type: Logstash, Loggly, Datadog, etc'),
    help_text=_('The type of log aggregator service to format messages for'),
    label=_('Logging Aggregator Type'),
    help_text=_('Format messages for the chosen log aggregator.'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -256,8 +259,8 @@ register(
    'LOG_AGGREGATOR_USERNAME',
    field_class=fields.CharField,
    allow_null=True,
    label=_('Logging Aggregator Username to Authenticate With'),
    help_text=_('Username for Logstash or others (basic auth)'),
    label=_('Logging Aggregator Username'),
    help_text=_('Username for external log aggregator (if required).'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -265,8 +268,9 @@ register(
    'LOG_AGGREGATOR_PASSWORD',
    field_class=fields.CharField,
    allow_null=True,
    label=_('Logging Aggregator Password to Authenticate With'),
    help_text=_('Password for Logstash or others (basic auth)'),
    encrypted=True,
    label=_('Logging Aggregator Password/Token'),
    help_text=_('Password or authentication token for external log aggregator (if required).'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -277,11 +281,10 @@ register(
    label=_('Loggers to send data to the log aggregator from'),
    help_text=_('List of loggers that will send HTTP logs to the collector, these can '
                'include any or all of: \n'
                'activity_stream - logs duplicate to records entered in activity stream\n'
                'awx - Tower service logs\n'
                'activity_stream - activity stream records\n'
                'job_events - callback data from Ansible job events\n'
                'system_tracking - data generated from scan jobs\n'
                'Sending generic Tower logs must be configured through local_settings.py '
                'instead of this mechanism.'),
                'system_tracking - facts gathered from scan jobs.'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -289,10 +292,11 @@ register(
    'LOG_AGGREGATOR_INDIVIDUAL_FACTS',
    field_class=fields.BooleanField,
    default=False,
    label=_('Flag denoting to send individual messages for each fact in system tracking'),
    help_text=_('If not set, the data from system tracking will be sent inside '
                'of a single dictionary, but if set, separate requests will be sent '
                'for each package, service, etc. that is found in the scan.'),
    label=_('Log System Tracking Facts Individually'),
    help_text=_('If set, system tracking facts will be sent for each package, service, or '
                'other item found in a scan, allowing for greater search query granularity. '
                'If unset, facts will be sent as a single dictionary, allowing for greater '
                'efficiency in fact processing.'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -300,8 +304,8 @@ register(
    'LOG_AGGREGATOR_ENABLED',
    field_class=fields.BooleanField,
    default=False,
    label=_('Flag denoting whether to use the external logger system'),
    help_text=_('If not set, only normal settings data will be used to configure loggers.'),
    label=_('Enable External Logging'),
    help_text=_('Enable sending logs to external log aggregator.'),
    category=_('Logging'),
    category_slug='logging',
)

@@ -6,6 +6,7 @@ from channels import Group
from channels.sessions import channel_session

from django.contrib.auth.models import User
from django.core.serializers.json import DjangoJSONEncoder
from awx.main.models.organization import AuthToken


@@ -86,4 +87,4 @@ def ws_receive(message):


def emit_channel_notification(group, payload):
    Group(group).send({"text": json.dumps(payload)})
    Group(group).send({"text": json.dumps(payload, cls=DjangoJSONEncoder)})

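Note: the cls=DjangoJSONEncoder argument matters because job status payloads carry datetimes, which the stock json encoder rejects:

import datetime
import json

from django.core.serializers.json import DjangoJSONEncoder

payload = {'finished': datetime.datetime(2017, 1, 1, 12, 0, 0)}
print(json.dumps(payload, cls=DjangoJSONEncoder))  # {"finished": "2017-01-01T12:00:00"}
# json.dumps(payload) alone raises TypeError: datetime is not JSON serializable
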
@@ -96,12 +96,12 @@ class Command(BaseCommand):
    option_list = BaseCommand.option_list + (
        make_option('--older_than',
                    dest='older_than',
                    default=None,
                    help='Specify the relative time to consider facts older than (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y).'),
                    default='30d',
                    help='Specify the relative time to consider facts older than (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y). Defaults to 30d.'),
        make_option('--granularity',
                    dest='granularity',
                    default=None,
                    help='Window duration to group same hosts by for deletion (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y).'),
                    default='1w',
                    help='Window duration to group same hosts by for deletion (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y). Defaults to 1w.'),
        make_option('--module',
                    dest='module',
                    default=None,

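Note: both options accept relative ages in the form <number><unit> with d, w, or y units. The command's actual parser is not shown in this hunk; a hypothetical equivalent for illustration:

import datetime
import re

def parse_relative(value):
    m = re.match(r'^(\d+)([dwy])$', value)
    if not m:
        raise ValueError('expected forms like 5d, 2w, 1y')
    n, unit = int(m.group(1)), m.group(2)
    days_per_unit = {'d': 1, 'w': 7, 'y': 365}
    return datetime.timedelta(days=n * days_per_unit[unit])

assert parse_relative('30d') == datetime.timedelta(days=30)  # the new --older_than default
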
@@ -12,7 +12,7 @@ from django.db import transaction
from django.utils.timezone import now

# AWX
from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob
from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob, WorkflowJob, Notification


class Command(NoArgsCommand):

@@ -30,19 +30,25 @@ class Command(NoArgsCommand):
                    'be removed)'),
        make_option('--jobs', dest='only_jobs', action='store_true',
                    default=False,
                    help='Only remove jobs'),
                    help='Remove jobs'),
        make_option('--ad-hoc-commands', dest='only_ad_hoc_commands',
                    action='store_true', default=False,
                    help='Only remove ad hoc commands'),
                    help='Remove ad hoc commands'),
        make_option('--project-updates', dest='only_project_updates',
                    action='store_true', default=False,
                    help='Only remove project updates'),
                    help='Remove project updates'),
        make_option('--inventory-updates', dest='only_inventory_updates',
                    action='store_true', default=False,
                    help='Only remove inventory updates'),
                    help='Remove inventory updates'),
        make_option('--management-jobs', default=False,
                    action='store_true', dest='only_management_jobs',
                    help='Only remove management jobs')
                    help='Remove management jobs'),
        make_option('--notifications', dest='only_notifications',
                    action='store_true', default=False,
                    help='Remove notifications'),
        make_option('--workflow-jobs', default=False,
                    action='store_true', dest='only_workflow_jobs',
                    help='Remove workflow jobs')
    )

    def cleanup_jobs(self):

@@ -169,6 +175,50 @@ class Command(NoArgsCommand):
        self.logger.addHandler(handler)
        self.logger.propagate = False

    def cleanup_workflow_jobs(self):
        skipped, deleted = 0, 0
        for workflow_job in WorkflowJob.objects.all():
            workflow_job_display = '"{}" (started {}, {} nodes)'.format(
                unicode(workflow_job), unicode(workflow_job.created),
                workflow_job.workflow_nodes.count())
            if workflow_job.status in ('pending', 'waiting', 'running'):
                action_text = 'would skip' if self.dry_run else 'skipping'
                self.logger.debug('%s %s job %s', action_text, workflow_job.status, workflow_job_display)
                skipped += 1
            elif workflow_job.created >= self.cutoff:
                action_text = 'would skip' if self.dry_run else 'skipping'
                self.logger.debug('%s %s', action_text, workflow_job_display)
                skipped += 1
            else:
                action_text = 'would delete' if self.dry_run else 'deleting'
                self.logger.info('%s %s', action_text, workflow_job_display)
                if not self.dry_run:
                    workflow_job.delete()
                deleted += 1
        return skipped, deleted

    def cleanup_notifications(self):
        skipped, deleted = 0, 0
        for notification in Notification.objects.all():
            notification_display = '"{}" (started {}, {} type, {} sent)'.format(
                unicode(notification), unicode(notification.created),
                notification.notification_type, notification.notifications_sent)
            if notification.status in ('pending',):
                action_text = 'would skip' if self.dry_run else 'skipping'
                self.logger.debug('%s %s notification %s', action_text, notification.status, notification_display)
                skipped += 1
            elif notification.created >= self.cutoff:
                action_text = 'would skip' if self.dry_run else 'skipping'
                self.logger.debug('%s %s', action_text, notification_display)
                skipped += 1
            else:
                action_text = 'would delete' if self.dry_run else 'deleting'
                self.logger.info('%s %s', action_text, notification_display)
                if not self.dry_run:
                    notification.delete()
                deleted += 1
        return skipped, deleted

    @transaction.atomic
    def handle_noargs(self, **options):
        self.verbosity = int(options.get('verbosity', 1))

@@ -179,7 +229,8 @@ class Command(NoArgsCommand):
            self.cutoff = now() - datetime.timedelta(days=self.days)
        except OverflowError:
            raise CommandError('--days specified is too large. Try something less than 99999 (about 270 years).')
        model_names = ('jobs', 'ad_hoc_commands', 'project_updates', 'inventory_updates', 'management_jobs')
        model_names = ('jobs', 'ad_hoc_commands', 'project_updates', 'inventory_updates',
                       'management_jobs', 'workflow_jobs', 'notifications')
        models_to_cleanup = set()
        for m in model_names:
            if options.get('only_%s' % m, False):

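Note: each --<name> flag sets options['only_<name>'], and handle_noargs collects the selected models. The tail of the loop is truncated in this hunk; assuming the command's usual fall-back of cleaning every model when no flag is passed, the dispatch amounts to:

model_names = ('jobs', 'ad_hoc_commands', 'project_updates', 'inventory_updates',
               'management_jobs', 'workflow_jobs', 'notifications')

def models_to_cleanup(options):
    selected = set(m for m in model_names if options.get('only_%s' % m, False))
    return selected or set(model_names)  # no flags: clean up everything

print(models_to_cleanup({'only_workflow_jobs': True}))  # only workflow jobs
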
@@ -64,7 +64,7 @@ class MemObject(object):
        all_vars = {}
        files_found = 0
        for suffix in ('', '.yml', '.yaml', '.json'):
            path = ''.join([base_path, suffix])
            path = ''.join([base_path, suffix]).encode("utf-8")
            if not os.path.exists(path):
                continue
            if not os.path.isfile(path):

@@ -462,7 +462,7 @@ class ExecutableJsonLoader(BaseLoader):
        # to set their variables
        for k,v in self.all_group.all_hosts.iteritems():
            if 'hostvars' not in _meta:
                data = self.command_to_json([self.source, '--host', k])
                data = self.command_to_json([self.source, '--host', k.encode("utf-8")])
            else:
                data = _meta['hostvars'].get(k, {})
            if isinstance(data, dict):

@@ -482,6 +482,7 @@ def load_inventory_source(source, all_group=None, group_filter_re=None,
    # good naming conventions
    source = source.replace('azure.py', 'windows_azure.py')
    source = source.replace('satellite6.py', 'foreman.py')
    source = source.replace('vmware.py', 'vmware_inventory.py')
    logger.debug('Analyzing type of source: %s', source)
    original_all_group = all_group
    if not os.path.exists(source):

@@ -1191,7 +1192,7 @@ class Command(NoArgsCommand):

    def check_license(self):
        license_info = TaskEnhancer().validate_enhancements()
        if not license_info or len(license_info) == 0:
        if license_info.get('license_key', 'UNLICENSED') == 'UNLICENSED':
            self.logger.error(LICENSE_NON_EXISTANT_MESSAGE)
            raise CommandError('No Tower license found!')
        available_instances = license_info.get('available_instances', 0)

@@ -1253,6 +1254,12 @@ class Command(NoArgsCommand):
        except re.error:
            raise CommandError('invalid regular expression for --host-filter')

        '''
        TODO: Remove this deprecation when we remove support for rax.py
        '''
        if self.source == "rax.py":
            self.logger.info("Rackspace inventory sync is deprecated in Tower 3.1.0 and support for Rackspace will be removed in a future release.")

        begin = time.time()
        self.load_inventory_from_database()

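Note: the two .encode("utf-8") calls added above address the same Python 2 pitfall: a unicode host name or path containing non-ASCII characters blows up with UnicodeEncodeError when it is passed to filesystem calls or a subprocess argv, because the implicit conversion uses the ASCII codec. Encoding explicitly sidesteps that:

host = u'h\xf4te-1'  # "hôte-1", a non-ASCII host name
argv = ['./inventory.py', '--host', host.encode('utf-8')]  # safe bytes argv (Python 2)
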
@@ -3,6 +3,11 @@

# Python
import logging
import signal
from uuid import UUID
from multiprocessing import Process
from multiprocessing import Queue as MPQueue
from Queue import Empty as QueueEmpty

from kombu import Connection, Exchange, Queue
from kombu.mixins import ConsumerMixin

@@ -10,7 +15,9 @@ from kombu.mixins import ConsumerMixin
# Django
from django.conf import settings
from django.core.management.base import NoArgsCommand
from django.db import connection as django_connection
from django.db import DatabaseError
from django.core.cache import cache as django_cache

# AWX
from awx.main.models import *  # noqa

@@ -19,8 +26,40 @@ logger = logging.getLogger('awx.main.commands.run_callback_receiver')


class CallbackBrokerWorker(ConsumerMixin):
    def __init__(self, connection):
    def __init__(self, connection, use_workers=True):
        self.connection = connection
        self.worker_queues = []
        self.total_messages = 0
        self.init_workers(use_workers)

    def init_workers(self, use_workers=True):
        def shutdown_handler(active_workers):
            def _handler(signum, frame):
                try:
                    for active_worker in active_workers:
                        active_worker.terminate()
                    signal.signal(signum, signal.SIG_DFL)
                    os.kill(os.getpid(), signum)  # Rethrow signal, this time without catching it
                except Exception:
                    # TODO: LOG
                    pass
            return _handler

        if use_workers:
            django_connection.close()
            django_cache.close()
            for idx in range(settings.JOB_EVENT_WORKERS):
                queue_actual = MPQueue(settings.JOB_EVENT_MAX_QUEUE_SIZE)
                w = Process(target=self.callback_worker, args=(queue_actual, idx,))
                w.start()
                if settings.DEBUG:
                    logger.info('Started worker %s' % str(idx))
                self.worker_queues.append([0, queue_actual, w])
        elif settings.DEBUG:
            logger.warn('Started callback receiver (no workers)')

        signal.signal(signal.SIGINT, shutdown_handler([p[2] for p in self.worker_queues]))
        signal.signal(signal.SIGTERM, shutdown_handler([p[2] for p in self.worker_queues]))

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=[Queue(settings.CALLBACK_QUEUE,

@@ -30,27 +69,57 @@ class CallbackBrokerWorker(ConsumerMixin):
                         callbacks=[self.process_task])]

    def process_task(self, body, message):
        try:
            if 'event' not in body:
                raise Exception('Payload does not have an event')
            if 'job_id' not in body and 'ad_hoc_command_id' not in body:
                raise Exception('Payload does not have a job_id or ad_hoc_command_id')
            if settings.DEBUG:
                logger.info('Body: {}'.format(body))
                logger.info('Message: {}'.format(message))
            try:
                if 'job_id' in body:
                    JobEvent.create_from_data(**body)
                elif 'ad_hoc_command_id' in body:
                    AdHocCommandEvent.create_from_data(**body)
            except DatabaseError as e:
                logger.error('Database Error Saving Job Event: {}'.format(e))
        except Exception as exc:
            import traceback
            traceback.print_exc()
            logger.error('Callback Task Processor Raised Exception: %r', exc)
        if "uuid" in body:
            queue = UUID(body['uuid']).int % settings.JOB_EVENT_WORKERS
        else:
            queue = self.total_messages % settings.JOB_EVENT_WORKERS
        self.write_queue_worker(queue, body)
        self.total_messages += 1
        message.ack()

    def write_queue_worker(self, preferred_queue, body):
        queue_order = sorted(range(settings.JOB_EVENT_WORKERS), cmp=lambda x, y: -1 if x==preferred_queue else 0)
        for queue_actual in queue_order:
            try:
                worker_actual = self.worker_queues[queue_actual]
                worker_actual[1].put(body, block=True, timeout=5)
                worker_actual[0] += 1
                return queue_actual
            except Exception:
                import traceback
                tb = traceback.format_exc()
                logger.warn("Could not write to queue %s" % preferred_queue)
                logger.warn("Detail: {}".format(tb))
                continue
        return None

    def callback_worker(self, queue_actual, idx):
        while True:
            try:
                body = queue_actual.get(block=True, timeout=1)
            except QueueEmpty:
                continue
            except Exception as e:
                logger.error("Exception on worker thread, restarting: " + str(e))
                continue
            try:
                if 'job_id' not in body and 'ad_hoc_command_id' not in body:
                    raise Exception('Payload does not have a job_id or ad_hoc_command_id')
                if settings.DEBUG:
                    logger.info('Body: {}'.format(body))
                try:
                    if 'job_id' in body:
                        JobEvent.create_from_data(**body)
                    elif 'ad_hoc_command_id' in body:
                        AdHocCommandEvent.create_from_data(**body)
                except DatabaseError as e:
                    logger.error('Database Error Saving Job Event: {}'.format(e))
            except Exception as exc:
                import traceback
                tb = traceback.format_exc()
                logger.error('Callback Task Processor Raised Exception: %r', exc)
                logger.error('Detail: {}'.format(tb))


class Command(NoArgsCommand):
    '''

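Note: process_task no longer writes events itself; it routes each message to a worker process. Hashing on the event UUID keeps all events of one task on the same worker, preserving their order, while UUID-less messages are spread round-robin; write_queue_worker then falls back to the other queues if the preferred one stays full past its timeout. The routing rule in isolation:

from uuid import UUID

JOB_EVENT_WORKERS = 4  # stands in for settings.JOB_EVENT_WORKERS

def pick_queue(body, total_messages):
    if 'uuid' in body:
        return UUID(body['uuid']).int % JOB_EVENT_WORKERS  # stable per-UUID worker
    return total_messages % JOB_EVENT_WORKERS  # round-robin fallback

ev = {'uuid': '01234567-89ab-cdef-0123-456789abcdef'}
assert pick_queue(ev, 0) == pick_queue(ev, 99)  # same worker every time
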
@@ -1,293 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

# Python
import os
import logging
import urllib
import weakref
from optparse import make_option
from threading import Thread

# Django
from django.conf import settings
from django.core.management.base import NoArgsCommand

# AWX
import awx
from awx.main.models import *  # noqa
from awx.main.socket_queue import Socket

# socketio
from socketio import socketio_manage
from socketio.server import SocketIOServer
from socketio.namespace import BaseNamespace

logger = logging.getLogger('awx.main.commands.run_socketio_service')


class SocketSession(object):
    def __init__(self, session_id, token_key, socket):
        self.socket = weakref.ref(socket)
        self.session_id = session_id
        self.token_key = token_key
        self._valid = True

    def is_valid(self):
        return bool(self._valid)

    def invalidate(self):
        self._valid = False

    def is_db_token_valid(self):
        auth_token = AuthToken.objects.filter(key=self.token_key, reason='')
        if not auth_token.exists():
            return False
        auth_token = auth_token[0]
        return bool(not auth_token.is_expired())


class SocketSessionManager(object):
    def __init__(self):
        self.SESSIONS_MAX = 1000
        self.socket_sessions = []
        self.socket_session_token_key_map = {}

    def _prune(self):
        if len(self.socket_sessions) > self.SESSIONS_MAX:
            session = self.socket_sessions[0]
            entries = self.socket_session_token_key_map[session.token_key]
            del entries[session.session_id]
            if len(entries) == 0:
                del self.socket_session_token_key_map[session.token_key]
            self.socket_sessions.pop(0)

    '''
    Returns a dict of sessions <session_id, session>
    '''
    def lookup(self, token_key=None):
        if not token_key:
            raise ValueError("token_key required")
        return self.socket_session_token_key_map.get(token_key, None)

    def add_session(self, session):
        self.socket_sessions.append(session)
        entries = self.socket_session_token_key_map.get(session.token_key, None)
        if not entries:
            entries = {}
            self.socket_session_token_key_map[session.token_key] = entries
        entries[session.session_id] = session
        self._prune()
        return session


class SocketController(object):
    def __init__(self, SocketSessionManager):
        self.server = None
        self.SocketSessionManager = SocketSessionManager

    def add_session(self, session):
        return self.SocketSessionManager.add_session(session)

    def broadcast_packet(self, packet):
        # Broadcast message to everyone at endpoint
        # Loop over the 'raw' list of sockets (don't trust our list)
        for session_id, socket in list(self.server.sockets.iteritems()):
            socket_session = socket.session.get('socket_session', None)
            if socket_session and socket_session.is_valid():
                try:
                    socket.send_packet(packet)
                except Exception as e:
                    logger.error("Error sending client packet to %s: %s" % (str(session_id), str(packet)))
                    logger.error("Error was: " + str(e))

    def send_packet(self, packet, token_key):
        if not token_key:
            raise ValueError("token_key is required")
        socket_sessions = self.SocketSessionManager.lookup(token_key=token_key)
        # We may not find the socket_session if the user disconnected
        # (it's actually more complicated than that because of our prune logic)
        if not socket_sessions:
            return None
        for session_id, socket_session in socket_sessions.iteritems():
            logger.warn("Maybe sending packet to %s" % session_id)
            if socket_session and socket_session.is_valid():
                logger.warn("Sending packet to %s" % session_id)
                socket = socket_session.socket()
                if socket:
                    try:
                        socket.send_packet(packet)
                    except Exception as e:
                        logger.error("Error sending client packet to %s: %s" % (str(socket_session.session_id), str(packet)))
                        logger.error("Error was: " + str(e))

    def set_server(self, server):
        self.server = server
        return server


socketController = SocketController(SocketSessionManager())


#
# Socket session is attached to self.session['socket_session']
# self.session and self.socket.session point to the same dict
#
class TowerBaseNamespace(BaseNamespace):
    def get_allowed_methods(self):
        return ['recv_disconnect']

    def get_initial_acl(self):
        request_token = self._get_request_token()
        if request_token:
            # (1) This is the first time the socket has been seen (first
            #     namespace joined).
            # (2) This socket has already been seen (already joined and maybe
            #     left a namespace)
            #
            # Note: Assume that the user token is valid if the session is found
            socket_session = self.session.get('socket_session', None)
            if not socket_session:
                socket_session = SocketSession(self.socket.sessid, request_token, self.socket)
                if socket_session.is_db_token_valid():
                    self.session['socket_session'] = socket_session
                    socketController.add_session(socket_session)
                else:
                    socket_session.invalidate()

            return set(['recv_connect'] + self.get_allowed_methods())
        else:
            logger.warn("Authentication Failure validating user")
            self.emit("connect_failed", "Authentication failed")
            return set(['recv_connect'])

    def _get_request_token(self):
        if 'QUERY_STRING' not in self.environ:
            return False

        try:
            k, v = self.environ['QUERY_STRING'].split("=")
            if k == "Token":
                token_actual = urllib.unquote_plus(v).decode().replace("\"", "")
                return token_actual
        except Exception as e:
            logger.error("Exception validating user: " + str(e))
            return False
        return False

    def recv_connect(self):
        socket_session = self.session.get('socket_session', None)
        if socket_session and not socket_session.is_valid():
            self.disconnect(silent=False)


class TestNamespace(TowerBaseNamespace):
    def recv_connect(self):
        logger.info("Received client connect for test namespace from %s" % str(self.environ['REMOTE_ADDR']))
        self.emit('test', "If you see this then you attempted to connect to the test socket endpoint")
        super(TestNamespace, self).recv_connect()


class JobNamespace(TowerBaseNamespace):
    def recv_connect(self):
        logger.info("Received client connect for job namespace from %s" % str(self.environ['REMOTE_ADDR']))
        super(JobNamespace, self).recv_connect()


class JobEventNamespace(TowerBaseNamespace):
    def recv_connect(self):
        logger.info("Received client connect for job event namespace from %s" % str(self.environ['REMOTE_ADDR']))
        super(JobEventNamespace, self).recv_connect()


class AdHocCommandEventNamespace(TowerBaseNamespace):
    def recv_connect(self):
        logger.info("Received client connect for ad hoc command event namespace from %s" % str(self.environ['REMOTE_ADDR']))
        super(AdHocCommandEventNamespace, self).recv_connect()


class ScheduleNamespace(TowerBaseNamespace):
    def get_allowed_methods(self):
        parent_allowed = super(ScheduleNamespace, self).get_allowed_methods()
        return parent_allowed + ["schedule_changed"]

    def recv_connect(self):
        logger.info("Received client connect for schedule namespace from %s" % str(self.environ['REMOTE_ADDR']))
        super(ScheduleNamespace, self).recv_connect()


# Catch-all namespace.
# Deliver 'global' events over this namespace
class ControlNamespace(TowerBaseNamespace):
    def recv_connect(self):
        logger.warn("Received client connect for control namespace from %s" % str(self.environ['REMOTE_ADDR']))
        super(ControlNamespace, self).recv_connect()


class TowerSocket(object):
    def __call__(self, environ, start_response):
        path = environ['PATH_INFO'].strip('/') or 'index.html'
        if path.startswith('socket.io'):
            socketio_manage(environ, {'/socket.io/test': TestNamespace,
                                      '/socket.io/jobs': JobNamespace,
                                      '/socket.io/job_events': JobEventNamespace,
                                      '/socket.io/ad_hoc_command_events': AdHocCommandEventNamespace,
                                      '/socket.io/schedules': ScheduleNamespace,
                                      '/socket.io/control': ControlNamespace})
        else:
            logger.warn("Invalid connect path received: " + path)
            start_response('404 Not Found', [])
            return ['Tower version %s' % awx.__version__]


def notification_handler(server):
    with Socket('websocket', 'r') as websocket:
        for message in websocket.listen():
            packet = {
                'args': message,
                'endpoint': message['endpoint'],
                'name': message['event'],
                'type': 'event',
            }

            if 'token_key' in message:
                # Best practice not to send the token over the socket
                socketController.send_packet(packet, message.pop('token_key'))
            else:
                socketController.broadcast_packet(packet)


class Command(NoArgsCommand):
    '''
    SocketIO event emitter Tower service
    Receives notifications from other services destined for UI notification
    '''

    help = 'Launch the SocketIO event emitter service'

    option_list = NoArgsCommand.option_list + (
        make_option('--receive_port', dest='receive_port', type='int', default=5559,
                    help='Port to listen for new events that will be destined for a client'),
        make_option('--socketio_port', dest='socketio_port', type='int', default=8080,
                    help='Port to accept socketio requests from clients'),)

    def handle_noargs(self, **options):
        socketio_listen_port = settings.SOCKETIO_LISTEN_PORT

        try:
            if os.path.exists('/etc/tower/tower.cert') and os.path.exists('/etc/tower/tower.key'):
                logger.info('Listening on port https://0.0.0.0:' + str(socketio_listen_port))
                server = SocketIOServer(('0.0.0.0', socketio_listen_port), TowerSocket(), resource='socket.io',
                                        keyfile='/etc/tower/tower.key', certfile='/etc/tower/tower.cert')
            else:
                logger.info('Listening on port http://0.0.0.0:' + str(socketio_listen_port))
                server = SocketIOServer(('0.0.0.0', socketio_listen_port), TowerSocket(), resource='socket.io')

            socketController.set_server(server)
            handler_thread = Thread(target=notification_handler, args=(server,))
            handler_thread.daemon = True
            handler_thread.start()

            server.serve_forever()
        except KeyboardInterrupt:
            pass
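One detail worth calling out in the file above: SocketSession stores only a weakref.ref to the underlying socket, which is why send_packet calls socket_session.socket() and checks the result before using it. A minimal sketch of that stdlib behavior (illustrative names only, not AWX code):

    import weakref

    class Conn(object):
        pass

    conn = Conn()
    ref = weakref.ref(conn)   # the ref does not keep conn alive
    assert ref() is conn      # calling the ref yields the object...
    del conn
    assert ref() is None      # ...and None once it has been collected

This keeps SocketSessionManager's bookkeeping from pinning disconnected sockets in memory; the server's own socket table remains the authoritative owner.
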
@@ -68,7 +68,6 @@ class ActivityStreamMiddleware(threading.local):
        if user.exists():
            user = user[0]
            instance.actor = user
            instance.save(update_fields=['actor'])
        else:
            if instance.id not in self.instance_ids:
                self.instance_ids.append(instance.id)
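For context: ActivityStreamMiddleware subclasses threading.local, so each request-handling thread sees its own independent instance_ids list. A minimal sketch of that stdlib behavior (illustrative, not AWX code):

    import threading

    class PerThread(threading.local):
        def __init__(self):
            self.ids = []  # __init__ re-runs on first access in each new thread

    store = PerThread()

    def worker():
        store.ids.append(1)       # mutates this thread's list only
        assert store.ids == [1]

    t = threading.Thread(target=worker)
    t.start()
    t.join()
    assert store.ids == []        # the main thread's list is untouched
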
awx/main/migrations/0002_squashed_v300_release.py
@@ -0,0 +1,742 @@
# -*- coding: utf-8 -*-

# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.

from __future__ import unicode_literals

import awx.main.fields

from django.db import migrations, models
import django.db.models.deletion
from django.conf import settings
from django.utils.timezone import now

import jsonfield.fields
import jsonbfield.fields
import taggit.managers


def create_system_job_templates(apps, schema_editor):
    '''
    Create default system job templates if not present. Create default schedules
    only if new system job templates were created (i.e. new database).
    '''

    SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate')
    Schedule = apps.get_model('main', 'Schedule')
    ContentType = apps.get_model('contenttypes', 'ContentType')
    sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate)
    now_dt = now()
    now_str = now_dt.strftime('%Y%m%dT%H%M%SZ')

    sjt, created = SystemJobTemplate.objects.get_or_create(
        job_type='cleanup_jobs',
        defaults=dict(
            name='Cleanup Job Details',
            description='Remove job history',
            created=now_dt,
            modified=now_dt,
            polymorphic_ctype=sjt_ct,
        ),
    )
    if created:
        sched = Schedule(
            name='Cleanup Job Schedule',
            rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=SU' % now_str,
            description='Automatically Generated Schedule',
            enabled=True,
            extra_data={'days': '120'},
            created=now_dt,
            modified=now_dt,
        )
        sched.unified_job_template = sjt
        sched.save()

    existing_cd_jobs = SystemJobTemplate.objects.filter(job_type='cleanup_deleted')
    Schedule.objects.filter(unified_job_template__in=existing_cd_jobs).delete()
    existing_cd_jobs.delete()

    sjt, created = SystemJobTemplate.objects.get_or_create(
        job_type='cleanup_activitystream',
        defaults=dict(
            name='Cleanup Activity Stream',
            description='Remove activity stream history',
            created=now_dt,
            modified=now_dt,
            polymorphic_ctype=sjt_ct,
        ),
    )
    if created:
        sched = Schedule(
            name='Cleanup Activity Schedule',
            rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=TU' % now_str,
            description='Automatically Generated Schedule',
            enabled=True,
            extra_data={'days': '355'},
            created=now_dt,
            modified=now_dt,
        )
        sched.unified_job_template = sjt
        sched.save()

    sjt, created = SystemJobTemplate.objects.get_or_create(
        job_type='cleanup_facts',
        defaults=dict(
            name='Cleanup Fact Details',
            description='Remove system tracking history',
            created=now_dt,
            modified=now_dt,
            polymorphic_ctype=sjt_ct,
        ),
    )
    if created:
        sched = Schedule(
            name='Cleanup Fact Schedule',
            rrule='DTSTART:%s RRULE:FREQ=MONTHLY;INTERVAL=1;BYMONTHDAY=1' % now_str,
            description='Automatically Generated Schedule',
            enabled=True,
            extra_data={'older_than': '120d', 'granularity': '1w'},
            created=now_dt,
            modified=now_dt,
        )
        sched.unified_job_template = sjt
        sched.save()


class Migration(migrations.Migration):
    replaces = [(b'main', '0002_v300_tower_settings_changes'),
                (b'main', '0003_v300_notification_changes'),
                (b'main', '0004_v300_fact_changes'),
                (b'main', '0005_v300_migrate_facts'),
                (b'main', '0006_v300_active_flag_cleanup'),
                (b'main', '0007_v300_active_flag_removal'),
                (b'main', '0008_v300_rbac_changes'),
                (b'main', '0009_v300_rbac_migrations'),
                (b'main', '0010_v300_create_system_job_templates'),
                (b'main', '0011_v300_credential_domain_field'),
                (b'main', '0012_v300_create_labels'),
                (b'main', '0013_v300_label_changes'),
                (b'main', '0014_v300_invsource_cred'),
                (b'main', '0015_v300_label_changes'),
                (b'main', '0016_v300_prompting_changes'),
                (b'main', '0017_v300_prompting_migrations'),
                (b'main', '0018_v300_host_ordering'),
                (b'main', '0019_v300_new_azure_credential'),]

    dependencies = [
        ('taggit', '0002_auto_20150616_2121'),
        ('contenttypes', '0002_remove_content_type_name'),
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('main', '0001_initial'),
    ]

    operations = [
        # Tower settings changes
        migrations.CreateModel(
            name='TowerSettings',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('key', models.CharField(unique=True, max_length=255)),
                ('description', models.TextField()),
                ('category', models.CharField(max_length=128)),
                ('value', models.TextField(blank=True)),
                ('value_type', models.CharField(max_length=12, choices=[(b'string', 'String'), (b'int', 'Integer'), (b'float', 'Decimal'), (b'json', 'JSON'), (b'bool', 'Boolean'), (b'password', 'Password'), (b'list', 'List')])),
                ('user', models.ForeignKey(related_name='settings', default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
            ],
        ),
        # Notification changes
        migrations.CreateModel(
            name='Notification',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('status', models.CharField(default=b'pending', max_length=20, editable=False, choices=[(b'pending', 'Pending'), (b'successful', 'Successful'), (b'failed', 'Failed')])),
                ('error', models.TextField(default=b'', editable=False, blank=True)),
                ('notifications_sent', models.IntegerField(default=0, editable=False)),
                ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'irc', 'IRC')])),
                ('recipients', models.TextField(default=b'', editable=False, blank=True)),
                ('subject', models.TextField(default=b'', editable=False, blank=True)),
                ('body', jsonfield.fields.JSONField(default=dict, blank=True)),
            ],
            options={
                'ordering': ('pk',),
            },
        ),
        migrations.CreateModel(
            name='NotificationTemplate',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('description', models.TextField(default=b'', blank=True)),
                ('name', models.CharField(unique=True, max_length=512)),
                ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'irc', 'IRC')])),
                ('notification_configuration', jsonfield.fields.JSONField(default=dict)),
                ('created_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
                ('modified_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
                ('organization', models.ForeignKey(related_name='notification_templates', on_delete=django.db.models.deletion.SET_NULL, to='main.Organization', null=True)),
                ('tags', taggit.managers.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags')),
            ],
        ),
        migrations.AddField(
            model_name='notification',
            name='notification_template',
            field=models.ForeignKey(related_name='notifications', editable=False, to='main.NotificationTemplate'),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='notification',
            field=models.ManyToManyField(to='main.Notification', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='notification_template',
            field=models.ManyToManyField(to='main.NotificationTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='organization',
            name='notification_templates_any',
            field=models.ManyToManyField(related_name='organization_notification_templates_for_any', to='main.NotificationTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='organization',
            name='notification_templates_error',
            field=models.ManyToManyField(related_name='organization_notification_templates_for_errors', to='main.NotificationTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='organization',
            name='notification_templates_success',
            field=models.ManyToManyField(related_name='organization_notification_templates_for_success', to='main.NotificationTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='unifiedjob',
            name='notifications',
            field=models.ManyToManyField(related_name='unifiedjob_notifications', editable=False, to='main.Notification'),
        ),
        migrations.AddField(
            model_name='unifiedjobtemplate',
            name='notification_templates_any',
            field=models.ManyToManyField(related_name='unifiedjobtemplate_notification_templates_for_any', to='main.NotificationTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='unifiedjobtemplate',
            name='notification_templates_error',
            field=models.ManyToManyField(related_name='unifiedjobtemplate_notification_templates_for_errors', to='main.NotificationTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='unifiedjobtemplate',
            name='notification_templates_success',
            field=models.ManyToManyField(related_name='unifiedjobtemplate_notification_templates_for_success', to='main.NotificationTemplate', blank=True),
        ),
        # Fact changes
        migrations.CreateModel(
            name='Fact',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('timestamp', models.DateTimeField(default=None, help_text='Date and time of the corresponding fact scan gathering time.', editable=False)),
                ('module', models.CharField(max_length=128)),
                ('facts', jsonbfield.fields.JSONField(default={}, help_text='Arbitrary JSON structure of module facts captured at timestamp for a single host.', blank=True)),
                ('host', models.ForeignKey(related_name='facts', to='main.Host', help_text='Host for the facts that the fact scan captured.')),
            ],
        ),
        migrations.AlterIndexTogether(
            name='fact',
            index_together=set([('timestamp', 'module', 'host')]),
        ),
        # Active flag removal
        migrations.RemoveField(
            model_name='credential',
            name='active',
        ),
        migrations.RemoveField(
            model_name='custominventoryscript',
            name='active',
        ),
        migrations.RemoveField(
            model_name='group',
            name='active',
        ),
        migrations.RemoveField(
            model_name='host',
            name='active',
        ),
        migrations.RemoveField(
            model_name='inventory',
            name='active',
        ),
        migrations.RemoveField(
            model_name='organization',
            name='active',
        ),
        migrations.RemoveField(
            model_name='permission',
            name='active',
        ),
        migrations.RemoveField(
            model_name='schedule',
            name='active',
        ),
        migrations.RemoveField(
            model_name='team',
            name='active',
        ),
        migrations.RemoveField(
            model_name='unifiedjob',
            name='active',
        ),
        migrations.RemoveField(
            model_name='unifiedjobtemplate',
            name='active',
        ),

        # RBAC Changes
        # ############
        migrations.RenameField(
            'Organization',
            'admins',
            'deprecated_admins',
        ),
        migrations.RenameField(
            'Organization',
            'users',
            'deprecated_users',
        ),
        migrations.RenameField(
            'Team',
            'users',
            'deprecated_users',
        ),
        migrations.RenameField(
            'Team',
            'projects',
            'deprecated_projects',
        ),
        migrations.AddField(
            model_name='project',
            name='organization',
            field=models.ForeignKey(related_name='projects', to='main.Organization', blank=True, null=True),
        ),
        migrations.AlterField(
            model_name='team',
            name='deprecated_projects',
            field=models.ManyToManyField(related_name='deprecated_teams', to='main.Project', blank=True),
        ),
        migrations.RenameField(
            model_name='organization',
            old_name='projects',
            new_name='deprecated_projects',
        ),
        migrations.AlterField(
            model_name='organization',
            name='deprecated_projects',
            field=models.ManyToManyField(related_name='deprecated_organizations', to='main.Project', blank=True),
        ),
        migrations.RenameField(
            'Credential',
            'team',
            'deprecated_team',
        ),
        migrations.RenameField(
            'Credential',
            'user',
            'deprecated_user',
        ),
        migrations.AlterField(
            model_name='organization',
            name='deprecated_admins',
            field=models.ManyToManyField(related_name='deprecated_admin_of_organizations', to=settings.AUTH_USER_MODEL, blank=True),
        ),
        migrations.AlterField(
            model_name='organization',
            name='deprecated_users',
            field=models.ManyToManyField(related_name='deprecated_organizations', to=settings.AUTH_USER_MODEL, blank=True),
        ),
        migrations.AlterField(
            model_name='team',
            name='deprecated_users',
            field=models.ManyToManyField(related_name='deprecated_teams', to=settings.AUTH_USER_MODEL, blank=True),
        ),
        migrations.AlterUniqueTogether(
            name='credential',
            unique_together=set([]),
        ),
        migrations.AddField(
            model_name='credential',
            name='organization',
            field=models.ForeignKey(related_name='credentials', default=None, blank=True, to='main.Organization', null=True),
        ),

        #
        # New RBAC models and fields
        #
        migrations.CreateModel(
            name='Role',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('role_field', models.TextField()),
                ('singleton_name', models.TextField(default=None, unique=True, null=True, db_index=True)),
                ('members', models.ManyToManyField(related_name='roles', to=settings.AUTH_USER_MODEL)),
                ('parents', models.ManyToManyField(related_name='children', to='main.Role')),
                ('implicit_parents', models.TextField(default=b'[]')),
                ('content_type', models.ForeignKey(default=None, to='contenttypes.ContentType', null=True)),
                ('object_id', models.PositiveIntegerField(default=None, null=True)),
            ],
            options={
                'db_table': 'main_rbac_roles',
                'verbose_name_plural': 'roles',
            },
        ),
        migrations.CreateModel(
            name='RoleAncestorEntry',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('role_field', models.TextField()),
                ('content_type_id', models.PositiveIntegerField()),
                ('object_id', models.PositiveIntegerField()),
                ('ancestor', models.ForeignKey(related_name='+', to='main.Role')),
                ('descendent', models.ForeignKey(related_name='+', to='main.Role')),
            ],
            options={
                'db_table': 'main_rbac_role_ancestors',
                'verbose_name_plural': 'role_ancestors',
            },
        ),
        migrations.AddField(
            model_name='role',
            name='ancestors',
            field=models.ManyToManyField(related_name='descendents', through='main.RoleAncestorEntry', to='main.Role'),
        ),
        migrations.AlterIndexTogether(
            name='role',
            index_together=set([('content_type', 'object_id')]),
        ),
        migrations.AlterIndexTogether(
            name='roleancestorentry',
            index_together=set([('ancestor', 'content_type_id', 'object_id'), ('ancestor', 'content_type_id', 'role_field'), ('ancestor', 'descendent')]),
        ),
        migrations.AddField(
            model_name='credential',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='credential',
            name='use_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='credential',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='custominventoryscript',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='custominventoryscript',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'organization.member_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='inventory',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='inventory',
            name='adhoc_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='inventory',
            name='update_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='inventory',
            name='use_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'adhoc_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='inventory',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'update_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'project.organization.admin_role', b'inventory.organization.admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='execute_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'project.organization.auditor_role', b'inventory.organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='organization',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='organization',
            name='auditor_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_auditor', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='organization',
            name='member_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='organization',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'member_role', b'auditor_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='project',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.admin_role', b'singleton:system_administrator'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='project',
            name='use_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='project',
            name='update_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='project',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'singleton:system_auditor', b'use_role', b'update_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='team',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='team',
            name='member_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=None, to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='team',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role', b'organization.auditor_role', b'member_role'], to='main.Role', null=b'True'),
        ),

        # System Job Templates
        migrations.RunPython(create_system_job_templates, migrations.RunPython.noop),
        migrations.AlterField(
            model_name='systemjob',
            name='job_type',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]),
        ),
        migrations.AlterField(
            model_name='systemjobtemplate',
            name='job_type',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]),
        ),
        # Credential domain field
        migrations.AddField(
            model_name='credential',
            name='domain',
            field=models.CharField(default=b'', help_text='The identifier for the domain.', max_length=100, verbose_name='Domain', blank=True),
        ),
        # Create Labels
        migrations.CreateModel(
            name='Label',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('description', models.TextField(default=b'', blank=True)),
                ('name', models.CharField(max_length=512)),
                ('created_by', models.ForeignKey(related_name="{u'class': 'label', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
                ('modified_by', models.ForeignKey(related_name="{u'class': 'label', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)),
                ('organization', models.ForeignKey(related_name='labels', to='main.Organization', help_text='Organization this label belongs to.')),
                ('tags', taggit.managers.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags')),
            ],
            options={
                'ordering': ('organization', 'name'),
            },
        ),
        migrations.AddField(
            model_name='activitystream',
            name='label',
            field=models.ManyToManyField(to='main.Label', blank=True),
        ),
        migrations.AddField(
            model_name='job',
            name='labels',
            field=models.ManyToManyField(related_name='job_labels', to='main.Label', blank=True),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='labels',
            field=models.ManyToManyField(related_name='jobtemplate_labels', to='main.Label', blank=True),
        ),
        migrations.AlterUniqueTogether(
            name='label',
            unique_together=set([('name', 'organization')]),
        ),
        # Label changes
        migrations.AlterField(
            model_name='label',
            name='organization',
            field=models.ForeignKey(related_name='labels', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Organization', help_text='Organization this label belongs to.', null=True),
        ),
        migrations.AlterField(
            model_name='label',
            name='organization',
            field=models.ForeignKey(related_name='labels', to='main.Organization', help_text='Organization this label belongs to.'),
        ),
        # InventorySource Credential
        migrations.AddField(
            model_name='job',
            name='network_credential',
            field=models.ForeignKey(related_name='jobs_as_network_credential+', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='network_credential',
            field=models.ForeignKey(related_name='jobtemplates_as_network_credential+', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True),
        ),
        migrations.AddField(
            model_name='credential',
            name='authorize',
            field=models.BooleanField(default=False, help_text='Whether to use the authorize mechanism.'),
        ),
        migrations.AddField(
            model_name='credential',
            name='authorize_password',
            field=models.CharField(default=b'', help_text='Password used by the authorize mechanism.', max_length=1024, blank=True),
        ),
        migrations.AlterField(
            model_name='credential',
            name='deprecated_team',
            field=models.ForeignKey(related_name='deprecated_credentials', default=None, blank=True, to='main.Team', null=True),
        ),
        migrations.AlterField(
            model_name='credential',
            name='deprecated_user',
            field=models.ForeignKey(related_name='deprecated_credentials', default=None, blank=True, to=settings.AUTH_USER_MODEL, null=True),
        ),
        migrations.AlterField(
            model_name='credential',
            name='kind',
            field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'openstack', 'OpenStack')]),
        ),
        migrations.AlterField(
            model_name='inventorysource',
            name='source',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
        ),
        migrations.AlterField(
            model_name='inventoryupdate',
            name='source',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
        ),
        migrations.AlterField(
            model_name='team',
            name='deprecated_projects',
            field=models.ManyToManyField(related_name='deprecated_teams', to='main.Project', blank=True),
        ),
        # Prompting changes
        migrations.AddField(
            model_name='jobtemplate',
            name='ask_limit_on_launch',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='ask_inventory_on_launch',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='ask_credential_on_launch',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='ask_job_type_on_launch',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='ask_tags_on_launch',
            field=models.BooleanField(default=False),
        ),
        migrations.AlterField(
            model_name='job',
            name='inventory',
            field=models.ForeignKey(related_name='jobs', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True),
        ),
        migrations.AlterField(
            model_name='jobtemplate',
            name='inventory',
            field=models.ForeignKey(related_name='jobtemplates', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True),
        ),
        # Host ordering
        migrations.AlterModelOptions(
            name='host',
            options={'ordering': ('name',)},
        ),
        # New Azure credential
        migrations.AddField(
            model_name='credential',
            name='client',
            field=models.CharField(default=b'', help_text='Client Id or Application Id for the credential', max_length=128, blank=True),
        ),
        migrations.AddField(
            model_name='credential',
            name='secret',
            field=models.CharField(default=b'', help_text='Secret Token for this credential', max_length=1024, blank=True),
        ),
        migrations.AddField(
            model_name='credential',
            name='subscription',
            field=models.CharField(default=b'', help_text='Subscription identifier for this credential', max_length=1024, blank=True),
        ),
        migrations.AddField(
            model_name='credential',
            name='tenant',
            field=models.CharField(default=b'', help_text='Tenant identifier for this credential', max_length=1024, blank=True),
        ),
        migrations.AlterField(
            model_name='credential',
            name='kind',
            field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'openstack', 'OpenStack')]),
        ),
        migrations.AlterField(
            model_name='host',
            name='instance_id',
            field=models.CharField(default=b'', max_length=1024, blank=True),
        ),
        migrations.AlterField(
            model_name='inventorysource',
            name='source',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
        ),
        migrations.AlterField(
            model_name='inventoryupdate',
            name='source',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
        ),
    ]
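The schedules created by the migration above encode recurrence as iCalendar RRULE strings built with strftime. A quick way to sanity-check such a string is to parse it with python-dateutil (a sketch; note that rrulestr expects DTSTART and RRULE on separate lines, whereas Tower stores them space-separated on one line):

    from datetime import datetime
    from itertools import islice
    from dateutil.rrule import rrulestr

    now_str = datetime.utcnow().strftime('%Y%m%dT%H%M%SZ')
    rule = rrulestr('DTSTART:%s\nRRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=SU' % now_str)
    print(list(islice(iter(rule), 3)))  # next three Sunday occurrences
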
awx/main/migrations/0003_squashed_v300_v303_updates.py
@@ -0,0 +1,156 @@
# -*- coding: utf-8 -*-

# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.

from __future__ import unicode_literals

from django.db import migrations, models
from django.conf import settings
import awx.main.fields
import jsonfield.fields


def update_dashed_host_variables(apps, schema_editor):
    Host = apps.get_model('main', 'Host')
    for host in Host.objects.filter(variables='---'):
        host.variables = ''
        host.save()


class Migration(migrations.Migration):
    replaces = [(b'main', '0020_v300_labels_changes'),
                (b'main', '0021_v300_activity_stream'),
                (b'main', '0022_v300_adhoc_extravars'),
                (b'main', '0023_v300_activity_stream_ordering'),
                (b'main', '0024_v300_jobtemplate_allow_simul'),
                (b'main', '0025_v300_update_rbac_parents'),
                (b'main', '0026_v300_credential_unique'),
                (b'main', '0027_v300_team_migrations'),
                (b'main', '0028_v300_org_team_cascade'),
                (b'main', '0029_v302_add_ask_skip_tags'),
                (b'main', '0030_v302_job_survey_passwords'),
                (b'main', '0031_v302_migrate_survey_passwords'),
                (b'main', '0032_v302_credential_permissions_update'),
                (b'main', '0033_v303_v245_host_variable_fix'),]

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('main', '0002_squashed_v300_release'),
    ]

    operations = [
        # Labels Changes
        migrations.RemoveField(
            model_name='job',
            name='labels',
        ),
        migrations.RemoveField(
            model_name='jobtemplate',
            name='labels',
        ),
        migrations.AddField(
            model_name='unifiedjob',
            name='labels',
            field=models.ManyToManyField(related_name='unifiedjob_labels', to='main.Label', blank=True),
        ),
        migrations.AddField(
            model_name='unifiedjobtemplate',
            name='labels',
            field=models.ManyToManyField(related_name='unifiedjobtemplate_labels', to='main.Label', blank=True),
        ),
        # Activity Stream
        migrations.AddField(
            model_name='activitystream',
            name='role',
            field=models.ManyToManyField(to='main.Role', blank=True),
        ),
        migrations.AlterModelOptions(
            name='activitystream',
            options={'ordering': ('pk',)},
        ),
        # Adhoc extra vars
        migrations.AddField(
            model_name='adhoccommand',
            name='extra_vars',
            field=models.TextField(default=b'', blank=True),
        ),
        migrations.AlterField(
            model_name='credential',
            name='kind',
            field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'openstack', 'OpenStack')]),
        ),
        migrations.AlterField(
            model_name='inventorysource',
            name='source',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
        ),
        migrations.AlterField(
            model_name='inventoryupdate',
            name='source',
            field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]),
        ),
        # jobtemplate allow simul
        migrations.AddField(
            model_name='jobtemplate',
            name='allow_simultaneous',
            field=models.BooleanField(default=False),
        ),
        # RBAC update parents
        migrations.AlterField(
            model_name='credential',
            name='use_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.admin_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AlterField(
            model_name='team',
            name='member_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'),
        ),
        migrations.AlterField(
            model_name='team',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'member_role'], to='main.Role', null=b'True'),
        ),
        # Unique credential
        migrations.AlterUniqueTogether(
            name='credential',
            unique_together=set([('organization', 'name', 'kind')]),
        ),
        migrations.AlterField(
            model_name='credential',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        # Team cascade
        migrations.AlterField(
            model_name='team',
            name='organization',
            field=models.ForeignKey(related_name='teams', to='main.Organization'),
            preserve_default=False,
        ),
        # add ask skip tags
        migrations.AddField(
            model_name='jobtemplate',
            name='ask_skip_tags_on_launch',
            field=models.BooleanField(default=False),
        ),
        # job survey passwords
        migrations.AddField(
            model_name='job',
            name='survey_passwords',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
        # RBAC credential permission updates
        migrations.AlterField(
            model_name='credential',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AlterField(
            model_name='credential',
            name='use_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
        ),
    ]
@@ -1,109 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import awx.main.models.notifications
import django.db.models.deletion
import awx.main.models.workflow
import awx.main.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0033_v303_v245_host_variable_fix'),
    ]

    operations = [
        migrations.AlterField(
            model_name='unifiedjob',
            name='launch_type',
            field=models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency'), (b'workflow', 'Workflow')]),
        ),
        migrations.CreateModel(
            name='WorkflowJob',
            fields=[
                ('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
                ('extra_vars', models.TextField(default=b'', blank=True)),
            ],
            options={
                'ordering': ('id',),
            },
            bases=('main.unifiedjob', models.Model, awx.main.models.notifications.JobNotificationMixin, awx.main.models.workflow.WorkflowJobInheritNodesMixin),
        ),
        migrations.CreateModel(
            name='WorkflowJobNode',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('always_nodes', models.ManyToManyField(related_name='workflowjobnodes_always', to='main.WorkflowJobNode', blank=True)),
                ('failure_nodes', models.ManyToManyField(related_name='workflowjobnodes_failure', to='main.WorkflowJobNode', blank=True)),
                ('job', models.OneToOneField(related_name='unified_job_node', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJob', null=True)),
                ('success_nodes', models.ManyToManyField(related_name='workflowjobnodes_success', to='main.WorkflowJobNode', blank=True)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='WorkflowJobTemplate',
            fields=[
                ('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
                ('extra_vars', models.TextField(default=b'', blank=True)),
                ('admin_role', awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True')),
            ],
            bases=('main.unifiedjobtemplate', models.Model),
        ),
        migrations.CreateModel(
            name='WorkflowJobTemplateNode',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('always_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_always', to='main.WorkflowJobTemplateNode', blank=True)),
                ('failure_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_failure', to='main.WorkflowJobTemplateNode', blank=True)),
                ('success_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_success', to='main.WorkflowJobTemplateNode', blank=True)),
                ('unified_job_template', models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True)),
                ('workflow_job_template', models.ForeignKey(related_name='workflow_job_template_nodes', default=None, blank=True, to='main.WorkflowJobTemplate', null=True)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='unified_job_template',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='workflow_job',
            field=models.ForeignKey(related_name='workflow_job_nodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJob', null=True),
        ),
        migrations.AddField(
            model_name='workflowjob',
            name='workflow_job_template',
            field=models.ForeignKey(related_name='workflow_jobs', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJobTemplate', null=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job',
            field=models.ManyToManyField(to='main.WorkflowJob', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job_node',
            field=models.ManyToManyField(to='main.WorkflowJobNode', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job_template',
            field=models.ManyToManyField(to='main.WorkflowJobTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job_template_node',
            field=models.ManyToManyField(to='main.WorkflowJobTemplateNode', blank=True),
        ),
    ]
awx/main/migrations/0034_v310_release.py
@@ -0,0 +1,614 @@
|
||||
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import awx.main.models.notifications
import jsonfield.fields
import django.db.models.deletion
import awx.main.models.workflow
import awx.main.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0033_v303_v245_host_variable_fix'),
    ]

    operations = [
        # Create ChannelGroup table
        migrations.CreateModel(
            name='ChannelGroup',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('group', models.CharField(unique=True, max_length=200)),
                ('channels', models.TextField()),
            ],
        ),
        # Allow simultaneous Job
        migrations.AddField(
            model_name='job',
            name='allow_simultaneous',
            field=models.BooleanField(default=False),
        ),
        # Job Event UUID
        migrations.AddField(
            model_name='jobevent',
            name='uuid',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
        # Job Parent Event UUID
        migrations.AddField(
            model_name='jobevent',
            name='parent_uuid',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
        # Modify the HA Instance
        migrations.RemoveField(
            model_name='instance',
            name='primary',
        ),
        migrations.AlterField(
            model_name='instance',
            name='uuid',
            field=models.CharField(max_length=40),
        ),
        migrations.AlterField(
            model_name='credential',
            name='become_method',
            field=models.CharField(default=b'', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[(b'', 'None'), (b'sudo', 'Sudo'), (b'su', 'Su'), (b'pbrun', 'Pbrun'), (b'pfexec', 'Pfexec'), (b'dzdo', 'DZDO'), (b'pmrun', 'Pmrun')]),
        ),
        # Add Workflows
        migrations.AlterField(
            model_name='unifiedjob',
            name='launch_type',
            field=models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency'), (b'workflow', 'Workflow'), (b'sync', 'Sync')]),
        ),
        migrations.CreateModel(
            name='WorkflowJob',
            fields=[
                ('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')),
                ('extra_vars', models.TextField(default=b'', blank=True)),
            ],
            options={
                'ordering': ('id',),
            },
            bases=('main.unifiedjob', models.Model, awx.main.models.notifications.JobNotificationMixin),
        ),
        migrations.CreateModel(
            name='WorkflowJobNode',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('always_nodes', models.ManyToManyField(related_name='workflowjobnodes_always', to='main.WorkflowJobNode', blank=True)),
                ('failure_nodes', models.ManyToManyField(related_name='workflowjobnodes_failure', to='main.WorkflowJobNode', blank=True)),
                ('job', models.OneToOneField(related_name='unified_job_node', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJob', null=True)),
                ('success_nodes', models.ManyToManyField(related_name='workflowjobnodes_success', to='main.WorkflowJobNode', blank=True)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.CreateModel(
            name='WorkflowJobTemplate',
            fields=[
                ('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')),
                ('extra_vars', models.TextField(default=b'', blank=True)),
                ('admin_role', awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True')),
            ],
            bases=('main.unifiedjobtemplate', models.Model),
        ),
        migrations.CreateModel(
            name='WorkflowJobTemplateNode',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('created', models.DateTimeField(default=None, editable=False)),
                ('modified', models.DateTimeField(default=None, editable=False)),
                ('always_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_always', to='main.WorkflowJobTemplateNode', blank=True)),
                ('failure_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_failure', to='main.WorkflowJobTemplateNode', blank=True)),
                ('success_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_success', to='main.WorkflowJobTemplateNode', blank=True)),
                ('unified_job_template', models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True)),
                ('workflow_job_template', models.ForeignKey(related_name='workflow_job_template_nodes', default=None, blank=True, to='main.WorkflowJobTemplate', null=True)),
            ],
            options={
                'abstract': False,
            },
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='unified_job_template',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='workflow_job',
            field=models.ForeignKey(related_name='workflow_job_nodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJob', null=True),
        ),
        migrations.AddField(
            model_name='workflowjob',
            name='workflow_job_template',
            field=models.ForeignKey(related_name='workflow_jobs', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJobTemplate', null=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job',
            field=models.ManyToManyField(to='main.WorkflowJob', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job_node',
            field=models.ManyToManyField(to='main.WorkflowJobNode', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job_template',
            field=models.ManyToManyField(to='main.WorkflowJobTemplate', blank=True),
        ),
        migrations.AddField(
            model_name='activitystream',
            name='workflow_job_template_node',
            field=models.ManyToManyField(to='main.WorkflowJobTemplateNode', blank=True),
        ),
        # Workflow RBAC prompts
        migrations.AddField(
            model_name='workflowjobnode',
            name='char_prompts',
            field=jsonfield.fields.JSONField(default={}, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='credential',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='inventory',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='execute_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='organization',
            field=models.ForeignKey(related_name='workflows', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='main.Organization', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='workflowjobtemplatenode',
            name='char_prompts',
            field=jsonfield.fields.JSONField(default={}, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplatenode',
            name='credential',
            field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplatenode',
            name='inventory',
            field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='unified_job_template',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='workflow_job',
            field=models.ForeignKey(related_name='workflow_job_nodes', default=None, blank=True, to='main.WorkflowJob', null=True),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplate',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplatenode',
            name='unified_job_template',
            field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True),
        ),
        # Job artifacts
        migrations.AddField(
            model_name='job',
            name='artifacts',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='ancestor_artifacts',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
        # Job timeout settings
        migrations.AddField(
            model_name='inventorysource',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='inventoryupdate',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='job',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='project',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='projectupdate',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        # Execution Node
        migrations.AddField(
            model_name='unifiedjob',
            name='execution_node',
            field=models.TextField(default=b'', editable=False, blank=True),
        ),
        # SCM Revision
        migrations.AddField(
            model_name='project',
            name='scm_revision',
            field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The last revision fetched by a project update', verbose_name='SCM Revision'),
        ),
        migrations.AddField(
            model_name='projectupdate',
            name='job_type',
            field=models.CharField(default=b'check', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check')]),
        ),
        migrations.AddField(
            model_name='job',
            name='scm_revision',
            field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The SCM Revision from the Project used for this job, if available', verbose_name='SCM Revision'),
        ),
        # Project Playbook Files
        migrations.AddField(
            model_name='project',
            name='playbook_files',
            field=jsonfield.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True),
        ),
        # Job events to stdout
        migrations.AddField(
            model_name='adhoccommandevent',
            name='end_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='start_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='stdout',
            field=models.TextField(default=b'', editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='uuid',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='verbosity',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='end_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='playbook',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='start_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='stdout',
            field=models.TextField(default=b'', editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='verbosity',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AlterField(
            model_name='adhoccommandevent',
            name='counter',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AlterField(
            model_name='adhoccommandevent',
            name='event',
            field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_skipped', 'Host Skipped'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]),
        ),
        migrations.AlterField(
            model_name='jobevent',
            name='counter',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AlterField(
            model_name='jobevent',
            name='event',
            field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_item_on_ok', 'Item OK'), (b'runner_item_on_failed', 'Item Failed'), (b'runner_item_on_skipped', 'Item Skipped'), (b'runner_retry', 'Host Retry'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_include', 'Including File'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]),
        ),
        migrations.AlterUniqueTogether(
            name='adhoccommandevent',
            unique_together=set([]),
        ),
        migrations.AlterIndexTogether(
            name='adhoccommandevent',
            index_together=set([('ad_hoc_command', 'event'), ('ad_hoc_command', 'uuid'), ('ad_hoc_command', 'end_line'), ('ad_hoc_command', 'start_line')]),
        ),
        migrations.AlterIndexTogether(
            name='jobevent',
            index_together=set([('job', 'event'), ('job', 'parent_uuid'), ('job', 'start_line'), ('job', 'uuid'), ('job', 'end_line')]),
        ),
        # Tower state
        migrations.CreateModel(
            name='TowerScheduleState',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('schedule_last_run', models.DateTimeField(auto_now_add=True)),
            ],
            options={
                'abstract': False,
            },
        ),
        # Tower instance capacity
        migrations.AddField(
            model_name='instance',
            name='capacity',
            field=models.PositiveIntegerField(default=100, editable=False),
        ),
        # Workflow surveys
        migrations.AddField(
            model_name='workflowjob',
            name='survey_passwords',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='survey_enabled',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='survey_spec',
            field=jsonfield.fields.JSONField(default={}, blank=True),
        ),
        # JSON field changes
        migrations.AlterField(
            model_name='adhoccommandevent',
            name='event_data',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='job',
            name='artifacts',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='job',
            name='survey_passwords',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='jobevent',
            name='event_data',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='jobtemplate',
            name='survey_spec',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='notification',
            name='body',
            field=awx.main.fields.JSONField(default=dict, blank=True),
        ),
        migrations.AlterField(
            model_name='notificationtemplate',
            name='notification_configuration',
            field=awx.main.fields.JSONField(default=dict),
        ),
        migrations.AlterField(
            model_name='project',
            name='playbook_files',
            field=awx.main.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='extra_data',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='job_env',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjob',
            name='survey_passwords',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='ancestor_artifacts',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='char_prompts',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplate',
            name='survey_spec',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplatenode',
            name='char_prompts',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        # Job Project Update
        migrations.AddField(
            model_name='job',
            name='project_update',
            field=models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.ProjectUpdate', help_text='The SCM Refresh task used to make sure the playbooks were available for the job run', null=True),
        ),
        # Inventory, non-unique name
        migrations.AlterField(
            model_name='inventory',
            name='name',
            field=models.CharField(max_length=512),
        ),
        # Text and has schedules
        migrations.RemoveField(
            model_name='unifiedjobtemplate',
            name='has_schedules',
        ),
        migrations.AlterField(
            model_name='host',
            name='instance_id',
            field=models.CharField(default=b'', help_text='The value used by the remote inventory source to uniquely identify the host', max_length=1024, blank=True),
        ),
        migrations.AlterField(
            model_name='project',
            name='scm_clean',
            field=models.BooleanField(default=False, help_text='Discard any local changes before syncing the project.'),
        ),
        migrations.AlterField(
            model_name='project',
            name='scm_delete_on_update',
            field=models.BooleanField(default=False, help_text='Delete the project before syncing.'),
        ),
        migrations.AlterField(
            model_name='project',
            name='scm_type',
            field=models.CharField(default=b'', choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'),
        ),
        migrations.AlterField(
            model_name='project',
            name='scm_update_cache_timeout',
            field=models.PositiveIntegerField(default=0, help_text='The number of seconds after the last project update ran that a new project update will be launched as a job dependency.', blank=True),
        ),
        migrations.AlterField(
            model_name='project',
            name='scm_update_on_launch',
            field=models.BooleanField(default=False, help_text='Update the project when a job is launched that uses the project.'),
        ),
        migrations.AlterField(
            model_name='project',
            name='scm_url',
            field=models.CharField(default=b'', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True),
        ),
        migrations.AlterField(
            model_name='project',
            name='timeout',
            field=models.IntegerField(default=0, help_text='The amount of time to run before the task is canceled.', blank=True),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='scm_clean',
            field=models.BooleanField(default=False, help_text='Discard any local changes before syncing the project.'),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='scm_delete_on_update',
            field=models.BooleanField(default=False, help_text='Delete the project before syncing.'),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='scm_type',
            field=models.CharField(default=b'', choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='scm_url',
            field=models.CharField(default=b'', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='timeout',
            field=models.IntegerField(default=0, help_text='The amount of time to run before the task is canceled.', blank=True),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='dtend',
            field=models.DateTimeField(default=None, help_text='The last occurrence of the schedule occurs before this time; afterwards, the schedule expires.', null=True, editable=False),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='dtstart',
            field=models.DateTimeField(default=None, help_text='The first occurrence of the schedule occurs on or after this time.', null=True, editable=False),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='enabled',
            field=models.BooleanField(default=True, help_text='Enables processing of this schedule by Tower.'),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='next_run',
            field=models.DateTimeField(default=None, help_text='The next time that the scheduled action will run.', null=True, editable=False),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='rrule',
            field=models.CharField(help_text="A value representing the schedule's iCal recurrence rule.", max_length=255),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='elapsed',
            field=models.DecimalField(help_text='Elapsed time in seconds that the job ran.', editable=False, max_digits=12, decimal_places=3),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='execution_node',
            field=models.TextField(default=b'', help_text='The Tower node the job executed on.', editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='finished',
            field=models.DateTimeField(default=None, help_text='The date and time the job finished execution.', null=True, editable=False),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='job_explanation',
            field=models.TextField(default=b'', help_text="A status field to indicate the state of the job if it wasn't able to run and capture stdout", editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='started',
            field=models.DateTimeField(default=None, help_text='The date and time the job was queued for starting.', null=True, editable=False),
        ),

    ]
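
Note: the squashed migration above defines the workflow graph in one place; template nodes point at a unified job template and at each other through the success_nodes, failure_nodes, and always_nodes many-to-many fields. A rough illustration of walking that DAG (not part of this diff; assumes a configured AWX/Django environment, model and field names taken from the migration above):

# Illustrative sketch only; not AWX code.
from awx.main.models.workflow import WorkflowJobTemplateNode

EDGE_TYPES = ('success_nodes', 'failure_nodes', 'always_nodes')

def reachable_node_pks(root):
    """Return the pk of every node reachable from `root` via any edge type."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        for edge in EDGE_TYPES:
            for child in getattr(node, edge).all():
                if child.pk not in seen:
                    seen.add(child.pk)
                    stack.append(child)
    return seen

root = WorkflowJobTemplateNode.objects.first()  # any starting node
if root is not None:
    print(reachable_node_pks(root))
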
@@ -1,23 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0034_v310_add_workflows'),
    ]

    operations = [
        migrations.RemoveField(
            model_name='instance',
            name='primary',
        ),
        migrations.AlterField(
            model_name='instance',
            name='uuid',
            field=models.CharField(max_length=40),
        ),
    ]
@@ -1,17 +1,17 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0036_v310_jobevent_uuid'),
        ('main', '0034_v310_release'),
    ]

    # These settings are now in the separate awx.conf app.
    operations = [
        # Remove Tower settings, these settings are now in separate awx.conf app.
        migrations.RemoveField(
            model_name='towersettings',
            name='user',
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0035_v310_modify_ha_instance'),
    ]

    operations = [
        migrations.AddField(
            model_name='jobevent',
            name='uuid',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
    ]
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0037_v310_remove_tower_settings'),
    ]

    operations = [
        migrations.AddField(
            model_name='job',
            name='allow_simultaneous',
            field=models.BooleanField(default=False),
        ),
    ]
@@ -1,82 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import jsonfield.fields
import django.db.models.deletion
import awx.main.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0038_v310_job_allow_simultaneous'),
    ]

    operations = [
        migrations.AddField(
            model_name='workflowjobnode',
            name='char_prompts',
            field=jsonfield.fields.JSONField(default={}, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='credential',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='inventory',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='execute_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='organization',
            field=models.ForeignKey(related_name='workflows', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='main.Organization', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='read_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AddField(
            model_name='workflowjobtemplatenode',
            name='char_prompts',
            field=jsonfield.fields.JSONField(default={}, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplatenode',
            name='credential',
            field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplatenode',
            name='inventory',
            field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='unified_job_template',
            field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='workflow_job',
            field=models.ForeignKey(related_name='workflow_job_nodes', default=None, blank=True, to='main.WorkflowJob', null=True),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplate',
            name='admin_role',
            field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplatenode',
            name='unified_job_template',
            field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True),
        ),
    ]
@@ -1,22 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0039_v310_workflow_rbac_prompts'),
    ]

    operations = [
        migrations.CreateModel(
            name='ChannelGroup',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('group', models.CharField(unique=True, max_length=200)),
                ('channels', models.TextField()),
            ],
        ),
    ]
@@ -1,25 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import jsonfield.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0040_v310_channelgroup'),
    ]

    operations = [
        migrations.AddField(
            model_name='job',
            name='artifacts',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobnode',
            name='ancestor_artifacts',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
    ]
@@ -1,44 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0041_v310_artifacts'),
    ]

    operations = [
        migrations.AddField(
            model_name='inventorysource',
            name='timeout',
            field=models.PositiveIntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='inventoryupdate',
            name='timeout',
            field=models.PositiveIntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='job',
            name='timeout',
            field=models.PositiveIntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='jobtemplate',
            name='timeout',
            field=models.PositiveIntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='project',
            name='timeout',
            field=models.PositiveIntegerField(default=0, blank=True),
        ),
        migrations.AddField(
            model_name='projectupdate',
            name='timeout',
            field=models.PositiveIntegerField(default=0, blank=True),
        ),
    ]
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0042_v310_job_timeout'),
    ]

    operations = [
        migrations.AddField(
            model_name='unifiedjob',
            name='execution_node',
            field=models.TextField(default=b'', editable=False, blank=True),
        ),
    ]
@@ -1,30 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0043_v310_executionnode'),
    ]

    operations = [
        migrations.AddField(
            model_name='project',
            name='scm_revision',
            field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The last revision fetched by a project update', verbose_name='SCM Revision'),
        ),
        migrations.AddField(
            model_name='projectupdate',
            name='job_type',
            field=models.CharField(default=b'check', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check')]),
        ),
        migrations.AddField(
            model_name='job',
            name='scm_revision',
            field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The SCM Revision from the Project used for this job, if available', verbose_name='SCM Revision'),
        ),

    ]
@@ -1,20 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import jsonfield.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0044_v310_scm_revision'),
    ]

    operations = [
        migrations.AddField(
            model_name='project',
            name='playbook_files',
            field=jsonfield.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True),
        ),
    ]
@@ -1,96 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0045_v310_project_playbook_files'),
    ]

    operations = [
        migrations.AddField(
            model_name='adhoccommandevent',
            name='end_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='start_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='stdout',
            field=models.TextField(default=b'', editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='uuid',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
        migrations.AddField(
            model_name='adhoccommandevent',
            name='verbosity',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='end_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='playbook',
            field=models.CharField(default=b'', max_length=1024, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='start_line',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='stdout',
            field=models.TextField(default=b'', editable=False),
        ),
        migrations.AddField(
            model_name='jobevent',
            name='verbosity',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AlterField(
            model_name='adhoccommandevent',
            name='counter',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AlterField(
            model_name='adhoccommandevent',
            name='event',
            field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_skipped', 'Host Skipped'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]),
        ),
        migrations.AlterField(
            model_name='jobevent',
            name='counter',
            field=models.PositiveIntegerField(default=0, editable=False),
        ),
        migrations.AlterField(
            model_name='jobevent',
            name='event',
            field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_item_on_ok', 'Item OK'), (b'runner_item_on_failed', 'Item Failed'), (b'runner_item_on_skipped', 'Item Skipped'), (b'runner_retry', 'Host Retry'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_include', 'Including File'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]),
        ),
        migrations.AlterUniqueTogether(
            name='adhoccommandevent',
            unique_together=set([]),
        ),
        migrations.AlterIndexTogether(
            name='adhoccommandevent',
            index_together=set([('ad_hoc_command', 'event'), ('ad_hoc_command', 'uuid'), ('ad_hoc_command', 'end_line'), ('ad_hoc_command', 'start_line')]),
        ),
        migrations.AlterIndexTogether(
            name='jobevent',
            index_together=set([('job', 'event'), ('job', 'parent'), ('job', 'start_line'), ('job', 'uuid'), ('job', 'end_line')]),
        ),
    ]
@@ -1,24 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0046_v310_job_event_stdout'),
    ]

    operations = [
        migrations.CreateModel(
            name='TowerScheduleState',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('schedule_last_run', models.DateTimeField(auto_now_add=True)),
            ],
            options={
                'abstract': False,
            },
        ),
    ]
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0047_v310_tower_state'),
    ]

    operations = [
        migrations.AddField(
            model_name='instance',
            name='capacity',
            field=models.PositiveIntegerField(default=100, editable=False),
        ),
    ]
@@ -1,30 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import jsonfield.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0048_v310_instance_capacity'),
    ]

    operations = [
        migrations.AddField(
            model_name='workflowjob',
            name='survey_passwords',
            field=jsonfield.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='survey_enabled',
            field=models.BooleanField(default=False),
        ),
        migrations.AddField(
            model_name='workflowjobtemplate',
            name='survey_spec',
            field=jsonfield.fields.JSONField(default={}, blank=True),
        ),
    ]
@@ -1,90 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import awx.main.fields


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0049_v310_workflow_surveys'),
    ]

    operations = [
        migrations.AlterField(
            model_name='adhoccommandevent',
            name='event_data',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='job',
            name='artifacts',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='job',
            name='survey_passwords',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='jobevent',
            name='event_data',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='jobtemplate',
            name='survey_spec',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='notification',
            name='body',
            field=awx.main.fields.JSONField(default=dict, blank=True),
        ),
        migrations.AlterField(
            model_name='notificationtemplate',
            name='notification_configuration',
            field=awx.main.fields.JSONField(default=dict),
        ),
        migrations.AlterField(
            model_name='project',
            name='playbook_files',
            field=awx.main.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='schedule',
            name='extra_data',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='unifiedjob',
            name='job_env',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjob',
            name='survey_passwords',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='ancestor_artifacts',
            field=awx.main.fields.JSONField(default={}, editable=False, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobnode',
            name='char_prompts',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplate',
            name='survey_spec',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
        migrations.AlterField(
            model_name='workflowjobtemplatenode',
            name='char_prompts',
            field=awx.main.fields.JSONField(default={}, blank=True),
        ),
    ]
@@ -1,20 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
import django.db.models.deletion


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0050_v310_JSONField_changes'),
    ]

    operations = [
        migrations.AddField(
            model_name='job',
            name='project_update',
            field=models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.ProjectUpdate', help_text='The SCM Refresh task used to make sure the playbooks were available for the job run', null=True),
        ),
    ]
@@ -1,19 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0051_v310_job_project_update'),
    ]

    operations = [
        migrations.AlterField(
            model_name='inventory',
            name='name',
            field=models.CharField(max_length=512),
        ),
    ]
@@ -1,44 +0,0 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0052_v310_inventory_name_non_unique'),
    ]

    operations = [
        migrations.AlterField(
            model_name='inventorysource',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AlterField(
            model_name='inventoryupdate',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AlterField(
            model_name='job',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AlterField(
            model_name='jobtemplate',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AlterField(
            model_name='project',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='timeout',
            field=models.IntegerField(default=0, blank=True),
        ),
    ]
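
Note: every per-feature migration deleted above was folded into 0034_v310_release, and 0037_v310_remove_tower_settings was repointed at it. A quick way to confirm the rewired graph still resolves, using Django's public migration loader (hedged sketch; assumes Django settings are configured for the awx project):

# Hedged sketch; standard Django APIs only.
from django.db import connection
from django.db.migrations.loader import MigrationLoader

loader = MigrationLoader(connection)
plan = loader.graph.forwards_plan(('main', '0034_v310_release'))
print([name for app, name in plan if app == 'main'])
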
@@ -76,7 +76,12 @@ User.add_to_class('auditor_of_organizations', user_get_auditor_of_organizations)
@property
def user_is_system_auditor(user):
    if not hasattr(user, '_is_system_auditor'):
        user._is_system_auditor = Role.objects.filter(role_field='system_auditor', id=user.id).exists()
        if user.pk:
            user._is_system_auditor = user.roles.filter(
                singleton_name='system_auditor', role_field='system_auditor').exists()
        else:
            # Odd case where user is unsaved, this should never be relied on
            return False
    return user._is_system_auditor
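
Note: the rewritten property stops matching roles by user id (the old query compared Role.id against user.id) and instead checks the user's membership in the system_auditor singleton role; unsaved users short-circuit to False. A standalone illustration of the intended semantics (names are hypothetical; only the logic mirrors the hunk above):

# Standalone illustration; not AWX code.
def is_system_auditor(user_pk, singleton_member_pks):
    """Unsaved users (no pk) are never auditors; saved users are
    auditors only if they hold the singleton role."""
    if not user_pk:
        return False
    return user_pk in singleton_member_pks

assert is_system_auditor(None, {1, 2}) is False
assert is_system_auditor(2, {1, 2}) is True
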
@@ -20,7 +20,7 @@ from django.core.urlresolvers import reverse
# AWX
from awx.main.models.base import *  # noqa
from awx.main.models.unified_jobs import *  # noqa
from awx.main.models.notifications import JobNotificationMixin
from awx.main.models.notifications import JobNotificationMixin, NotificationTemplate
from awx.main.fields import JSONField

logger = logging.getLogger('awx.main.models.ad_hoc_commands')
@@ -157,18 +157,20 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):

    @property
    def notification_templates(self):
        all_inventory_sources = set()
        all_orgs = set()
        for h in self.hosts.all():
            for invsrc in h.inventory_sources.all():
                all_inventory_sources.add(invsrc)
                all_orgs.add(h.inventory.organization)
        active_templates = dict(error=set(),
                                success=set(),
                                any=set())
        for invsrc in all_inventory_sources:
            notifications_dict = invsrc.notification_templates
            for notification_type in active_templates.keys():
                for templ in notifications_dict[notification_type]:
                    active_templates[notification_type].add(templ)
        base_notification_templates = NotificationTemplate.objects
        for org in all_orgs:
            for templ in base_notification_templates.filter(organization_notification_templates_for_errors=org):
                active_templates['error'].add(templ)
            for templ in base_notification_templates.filter(organization_notification_templates_for_success=org):
                active_templates['success'].add(templ)
            for templ in base_notification_templates.filter(organization_notification_templates_for_any=org):
                active_templates['any'].add(templ)
        active_templates['error'] = list(active_templates['error'])
        active_templates['any'] = list(active_templates['any'])
        active_templates['success'] = list(active_templates['success'])
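
Note: the property now also merges in organization-level templates for every organization that owns one of the command's hosts, returning a dict of lists keyed by trigger type. A hedged usage sketch (assumes a configured AWX environment; printing stands in for a real dispatcher):

# Hedged usage sketch; not AWX code.
from awx.main.models.ad_hoc_commands import AdHocCommand

cmd = AdHocCommand.objects.first()      # any existing command
if cmd is not None:
    templates = cmd.notification_templates  # {'error': [...], 'success': [...], 'any': [...]}
    for templ in templates['error'] + templates['any']:
        print(templ.name)               # or hand off to a dispatcher
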
@@ -50,6 +50,8 @@ class Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin):
        ('su', _('Su')),
        ('pbrun', _('Pbrun')),
        ('pfexec', _('Pfexec')),
        ('dzdo', _('DZDO')),
        ('pmrun', _('Pmrun')),
        #('runas', _('Runas')),
    ]

@@ -342,6 +342,7 @@ class Host(CommonModelNameNotUnique):
        max_length=1024,
        blank=True,
        default='',
        help_text=_('The value used by the remote inventory source to uniquely identify the host'),
    )
    variables = models.TextField(
        blank=True,
@@ -1001,9 +1002,8 @@ class InventorySourceOptions(BaseModel):
            if r not in valid_regions and r not in invalid_regions:
                invalid_regions.append(r)
        if invalid_regions:
            raise ValidationError(_('Invalid %(source)s region%(plural)s: %(region)s') % {
                'source': self.source, 'plural': '' if len(invalid_regions) == 1 else 's',
                'region': ', '.join(invalid_regions)})
            raise ValidationError(_('Invalid %(source)s region: %(region)s') % {
                'source': self.source, 'region': ', '.join(invalid_regions)})
        return ','.join(regions)

    source_vars_dict = VarsDictProperty('source_vars')
@@ -1027,9 +1027,8 @@ class InventorySourceOptions(BaseModel):
            if instance_filter_name not in self.INSTANCE_FILTER_NAMES:
                invalid_filters.append(instance_filter)
        if invalid_filters:
            raise ValidationError(_('Invalid filter expression%(plural)s: %(filter)s') %
                                  {'plural': '' if len(invalid_filters) == 1 else 's',
                                   'filter': ', '.join(invalid_filters)})
            raise ValidationError(_('Invalid filter expression: %(filter)s') %
                                  {'filter': ', '.join(invalid_filters)})
        return instance_filters

    def clean_group_by(self):
@@ -1046,9 +1045,8 @@ class InventorySourceOptions(BaseModel):
            if c not in valid_choices and c not in invalid_choices:
                invalid_choices.append(c)
        if invalid_choices:
            raise ValidationError(_('Invalid group by choice%(plural)s: %(choice)s') %
                                  {'plural': '' if len(invalid_choices) == 1 else 's',
                                   'choice': ', '.join(invalid_choices)})
            raise ValidationError(_('Invalid group by choice: %(choice)s') %
                                  {'choice': ', '.join(invalid_choices)})
        return ','.join(choices)
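
Note: all three validators drop the %(plural)s juggling in favor of a single message that comma-joins the offending values. For example:

# The new single-form message, shown with sample data.
invalid_regions = ['us-east-7', 'mars-1']   # hypothetical bad input
msg = 'Invalid %(source)s region: %(region)s' % {
    'source': 'ec2', 'region': ', '.join(invalid_regions)}
assert msg == 'Invalid ec2 region: us-east-7, mars-1'
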
@@ -4,14 +4,12 @@
|
||||
# Python
|
||||
import datetime
|
||||
import hmac
|
||||
import json
|
||||
import logging
|
||||
import time
|
||||
from urlparse import urljoin
|
||||
|
||||
# Django
|
||||
from django.conf import settings
|
||||
from django.core.cache import cache
|
||||
from django.db import models
|
||||
from django.db.models import Q, Count
|
||||
from django.utils.dateparse import parse_datetime
|
||||
@@ -589,22 +587,10 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin):
|
||||
hosts=all_hosts))
|
||||
return data
|
||||
|
||||
def handle_extra_data(self, extra_data):
|
||||
extra_vars = {}
|
||||
if isinstance(extra_data, dict):
|
||||
extra_vars = extra_data
|
||||
elif extra_data is None:
|
||||
return
|
||||
else:
|
||||
if extra_data == "":
|
||||
return
|
||||
try:
|
||||
extra_vars = json.loads(extra_data)
|
||||
except Exception as e:
|
||||
logger.warn("Exception deserializing extra vars: " + str(e))
|
||||
evars = self.extra_vars_dict
|
||||
evars.update(extra_vars)
|
||||
self.update_fields(extra_vars=json.dumps(evars))
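
This handle_extra_data body is duplicated verbatim on SystemJob and again on UnifiedJob later in this diff. The shared behavior, as a sketch (the helper name is hypothetical; the commit keeps the three inline copies): a dict is used as-is, None or "" is a no-op, and anything else is parsed as JSON with a logged warning on failure.

    def _coerce_extra_data(extra_data):
        # dict -> merge as-is; None/"" -> nothing to do; str -> parse as JSON
        if isinstance(extra_data, dict):
            return extra_data
        if not extra_data:
            return None
        try:
            return json.loads(extra_data)
        except Exception as e:
            logger.warn("Exception deserializing extra vars: " + str(e))
            return {}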

def _resources_sufficient_for_launch(self):
if self.job_type == PERM_INVENTORY_SCAN:
return self.inventory_id is not None
return not (self.inventory_id is None or self.project_id is None)

def display_artifacts(self):
'''
@@ -654,6 +640,16 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin):
def get_notification_friendly_name(self):
return "Job"

'''
Canceling a job also cancels the implicit project update with
launch_type 'run'.
'''
def cancel(self):
res = super(Job, self).cancel()
if self.project_update:
self.project_update.cancel()
return res

class JobHostSummary(CreatedModifiedModel):
'''
@@ -814,7 +810,7 @@ class JobEvent(CreatedModifiedModel):
('job', 'uuid'),
('job', 'start_line'),
('job', 'end_line'),
('job', 'parent'),
('job', 'parent_uuid'),
]

job = models.ForeignKey(
@@ -890,6 +886,11 @@ class JobEvent(CreatedModifiedModel):
on_delete=models.SET_NULL,
editable=False,
)
parent_uuid = models.CharField(
max_length=1024,
default='',
editable=False,
)
counter = models.PositiveIntegerField(
default=0,
editable=False,
@@ -970,28 +971,6 @@ class JobEvent(CreatedModifiedModel):
pass
return msg

def _find_parent_id(self):
# Find the (most likely) parent event for this event.
parent_events = set()
if self.event in ('playbook_on_play_start', 'playbook_on_stats',
'playbook_on_vars_prompt'):
parent_events.add('playbook_on_start')
elif self.event in ('playbook_on_notify', 'playbook_on_setup',
'playbook_on_task_start',
'playbook_on_no_hosts_matched',
'playbook_on_no_hosts_remaining',
'playbook_on_import_for_host',
'playbook_on_not_import_for_host'):
parent_events.add('playbook_on_play_start')
elif self.event.startswith('runner_on_'):
parent_events.add('playbook_on_setup')
parent_events.add('playbook_on_task_start')
if parent_events:
qs = JobEvent.objects.filter(job_id=self.job_id, event__in=parent_events).order_by('-pk')
if self.pk:
qs = qs.filter(pk__lt=self.pk)
return qs.only('id').values_list('id', flat=True).first()

def _update_from_event_data(self):
# Update job event model fields from event data.
updated_fields = set()
@@ -1033,20 +1012,14 @@ class JobEvent(CreatedModifiedModel):
updated_fields.add(field)
return updated_fields

def _update_parent_failed_and_changed(self):
# Propagate failed and changed flags to parent events.
if self.parent_id:
parent = self.parent
update_fields = []
if self.failed and not parent.failed:
parent.failed = True
update_fields.append('failed')
if self.changed and not parent.changed:
parent.changed = True
update_fields.append('changed')
if update_fields:
parent.save(update_fields=update_fields, from_parent_update=True)
parent._update_parent_failed_and_changed()
def _update_parents_failed_and_changed(self):
# Update parent events to reflect failed, changed
runner_events = JobEvent.objects.filter(job=self.job,
event__startswith='runner_on')
changed_events = runner_events.filter(changed=True)
failed_events = runner_events.filter(failed=True)
JobEvent.objects.filter(uuid__in=changed_events.values_list('parent_uuid', flat=True)).update(changed=True)
JobEvent.objects.filter(uuid__in=failed_events.values_list('parent_uuid', flat=True)).update(failed=True)

def _update_hosts(self, extra_host_pks=None):
# Update job event hosts m2m from host_name, propagate to parent events.
@@ -1066,15 +1039,18 @@ class JobEvent(CreatedModifiedModel):
qs = qs.exclude(job_events__pk=self.id).only('id')
for host in qs:
self.hosts.add(host)
if self.parent_id:
self.parent._update_hosts(qs.values_list('id', flat=True))
if self.parent_uuid:
parent = JobEvent.objects.filter(uuid=self.parent_uuid)
if parent.exists():
parent = parent[0]
parent._update_hosts(qs.values_list('id', flat=True))
def _update_host_summary_from_stats(self):
from awx.main.models.inventory import Host
hostnames = set()
try:
for v in self.event_data.values():
hostnames.update(v.keys())
for stat in ('changed', 'dark', 'failures', 'ok', 'processed', 'skipped'):
hostnames.update(self.event_data.get(stat, {}).keys())
except AttributeError: # In case event_data or v isn't a dict.
pass
with ignore_inventory_computed_fields():
@@ -1126,21 +1102,13 @@ class JobEvent(CreatedModifiedModel):
self.host_id = host_id
if 'host_id' not in update_fields:
update_fields.append('host_id')
# Update parent related field if not set.
if self.parent_id is None:
self.parent_id = self._find_parent_id()
if self.parent_id and 'parent_id' not in update_fields:
update_fields.append('parent_id')
super(JobEvent, self).save(*args, **kwargs)
# Update related objects after this event is saved.
if not from_parent_update:
if self.parent_id:
self._update_parent_failed_and_changed()
# FIXME: The update_hosts() call (and its queries) are the current
# performance bottleneck....
if getattr(settings, 'CAPTURE_JOB_EVENT_HOSTS', False):
self._update_hosts()
if self.event == 'playbook_on_stats':
self._update_parents_failed_and_changed()
self._update_host_summary_from_stats()

@classmethod
@@ -1162,48 +1130,40 @@ class JobEvent(CreatedModifiedModel):
except (KeyError, ValueError):
kwargs.pop('created', None)

# Save UUID and parent UUID for determining parent-child relationship.
job_event_uuid = kwargs.get('uuid', None)
parent_event_uuid = kwargs.get('parent_uuid', None)
artifact_dict = kwargs.get('artifact_data', None)

# Sanity check: Don't honor keys that we don't recognize.
valid_keys = {'job_id', 'event', 'event_data', 'playbook', 'play',
'role', 'task', 'created', 'counter', 'uuid', 'stdout',
'start_line', 'end_line', 'verbosity'}
'parent_uuid', 'start_line', 'end_line', 'verbosity'}
for key in kwargs.keys():
if key not in valid_keys:
kwargs.pop(key)

# Try to find a parent event based on UUID.
if parent_event_uuid:
cache_key = '{}_{}'.format(kwargs['job_id'], parent_event_uuid)
parent_id = cache.get(cache_key)
if parent_id is None:
parent_id = JobEvent.objects.filter(job_id=kwargs['job_id'], uuid=parent_event_uuid).only('id').values_list('id', flat=True).first()
if parent_id:
print("Settings cache: {} with value {}".format(cache_key, parent_id))
cache.set(cache_key, parent_id, 300)
if parent_id:
kwargs['parent_id'] = parent_id
event_data = kwargs.get('event_data', None)
artifact_dict = None
if event_data:
artifact_dict = event_data.pop('artifact_data', None)

analytics_logger.info('Job event data saved.', extra=dict(event_model_data=kwargs))

job_event = JobEvent.objects.create(**kwargs)

# Cache this job event ID vs. UUID for future parent lookups.
if job_event_uuid:
cache_key = '{}_{}'.format(kwargs['job_id'], job_event_uuid)
cache.set(cache_key, job_event.id, 300)

# Save artifact data to parent job (if provided).
if artifact_dict:
event_data = kwargs.get('event_data', None)
if event_data and isinstance(event_data, dict):
res = event_data.get('res', None)
if res and isinstance(res, dict):
if res.get('_ansible_no_log', False):
artifact_dict['_ansible_no_log'] = True
# Note: Core has not added support for marking artifacts as
# sensitive yet. Going forward, core will not use
# _ansible_no_log to denote sensitive set_stats calls.
# Instead, they plan to add a flag outside of the traditional
# no_log mechanism. no_log will not work for this feature,
# in core, because sensitive data is scrubbed before sending
# data to the callback. The playbook_on_stats is the callback
# in which the set_stats data is used.

# Again, the sensitive artifact feature has not yet landed in
# core. The below is how we would mark the artifacts payload as
# sensitive:
# artifact_dict['_ansible_no_log'] = True
#
parent_job = Job.objects.filter(pk=kwargs['job_id']).first()
if parent_job and parent_job.artifacts != artifact_dict:
parent_job.artifacts = artifact_dict

@@ -1328,23 +1288,6 @@ class SystemJob(UnifiedJob, SystemJobOptions, JobNotificationMixin):
def get_ui_url(self):
return urljoin(settings.TOWER_URL_BASE, "/#/management_jobs/{}".format(self.pk))

def handle_extra_data(self, extra_data):
extra_vars = {}
if isinstance(extra_data, dict):
extra_vars = extra_data
elif extra_data is None:
return
else:
if extra_data == "":
return
try:
extra_vars = json.loads(extra_data)
except Exception as e:
logger.warn("Exception deserializing extra vars: " + str(e))
evars = self.extra_vars_dict
evars.update(extra_vars)
self.update_fields(extra_vars=json.dumps(evars))

@property
def task_impact(self):
return 150

@@ -37,8 +37,12 @@ class ResourceMixin(models.Model):
'''
return ResourceMixin._accessible_objects(cls, accessor, role_field)

@classmethod
def accessible_pk_qs(cls, accessor, role_field):
return ResourceMixin._accessible_pk_qs(cls, accessor, role_field)

@staticmethod
def _accessible_objects(cls, accessor, role_field):
def _accessible_pk_qs(cls, accessor, role_field, content_types=None):
if type(accessor) == User:
ancestor_roles = accessor.roles.all()
elif type(accessor) == Role:
@@ -47,14 +51,22 @@ class ResourceMixin(models.Model):
accessor_type = ContentType.objects.get_for_model(accessor)
ancestor_roles = Role.objects.filter(content_type__pk=accessor_type.id,
object_id=accessor.id)
qs = cls.objects.filter(pk__in =
RoleAncestorEntry.objects.filter(
ancestor__in=ancestor_roles,
content_type_id = ContentType.objects.get_for_model(cls).id,
role_field = role_field
).values_list('object_id').distinct()
)
return qs

if content_types is None:
ct_kwarg = dict(content_type_id = ContentType.objects.get_for_model(cls).id)
else:
ct_kwarg = dict(content_type_id__in = content_types)

return RoleAncestorEntry.objects.filter(
ancestor__in = ancestor_roles,
role_field = role_field,
**ct_kwarg
).values_list('object_id').distinct()

@staticmethod
def _accessible_objects(cls, accessor, role_field):
return cls.objects.filter(pk__in = ResourceMixin._accessible_pk_qs(cls, accessor, role_field))
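
Usage sketch for the refactored mixin (model and role-field names below are illustrative, not taken from this diff): accessible_objects() still returns full model instances, while the new accessible_pk_qs() returns only object_id values, which is cheap to embed as a subquery:

    # Illustrative only; 'JobTemplate' and 'read_role' are assumptions.
    templates = JobTemplate.accessible_objects(user, 'read_role')   # instances
    pk_qs = JobTemplate.accessible_pk_qs(user, 'read_role')         # pks only
    jobs = UnifiedJob.objects.filter(unified_job_template_id__in=pk_qs)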

def get_permissions(self, accessor):
@@ -105,12 +117,6 @@ class SurveyJobTemplateMixin(models.Model):
# Job Template extra_vars
extra_vars = self.extra_vars_dict

# Overwrite job template extra vars with survey default vars
if self.survey_enabled and 'spec' in self.survey_spec:
for survey_element in self.survey_spec.get("spec", []):
if 'default' in survey_element and survey_element['default']:
extra_vars[survey_element['variable']] = survey_element['default']

# transform to dict
if 'extra_vars' in kwargs:
kwargs_extra_vars = kwargs['extra_vars']
@@ -118,6 +124,18 @@ class SurveyJobTemplateMixin(models.Model):
else:
kwargs_extra_vars = {}

# Overwrite job template extra vars with survey default vars
if self.survey_enabled and 'spec' in self.survey_spec:
for survey_element in self.survey_spec.get("spec", []):
default = survey_element['default']
variable_key = survey_element['variable']
if survey_element.get('type') == 'password':
if variable_key in kwargs_extra_vars:
kw_value = kwargs_extra_vars[variable_key]
if kw_value.startswith('$encrypted$') and kw_value != default:
kwargs_extra_vars[variable_key] = default
extra_vars[variable_key] = default

# Overwrite job template extra vars with explicit job extra vars
# and add on job extra vars
extra_vars.update(kwargs_extra_vars)
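
A worked example of the password handling above (values are illustrative): given a survey password whose stored default is 'secret', a relaunch payload that echoes back the '$encrypted$' placeholder gets the placeholder swapped for the default rather than saved literally:

    # Illustrative values, not from this diff.
    kwargs_extra_vars = {'api_password': '$encrypted$'}
    # after the loop above: kwargs_extra_vars['api_password'] == 'secret'
    # a genuinely user-supplied value would pass through update() unchanged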

@@ -210,7 +210,7 @@ class AuthToken(BaseModel):
REASON_CHOICES = [
('', _('Token not invalidated')),
('timeout_reached', _('Token is expired')),
('limit_reached', _('Maximum per-user sessions reached')),
('limit_reached', _('The maximum number of allowed sessions for this user has been exceeded.')),
# invalid_token is not a used database value, but is returned by the
# API when a token is not found
('invalid_token', _('Invalid token')),

@@ -78,12 +78,14 @@ class ProjectOptions(models.Model):
blank=True,
default='',
verbose_name=_('SCM Type'),
help_text=_("Specifies the source control system used to store the project."),
)
scm_url = models.CharField(
max_length=1024,
blank=True,
default='',
verbose_name=_('SCM URL'),
help_text=_("The location where the project is stored."),
)
scm_branch = models.CharField(
max_length=256,
@@ -94,9 +96,11 @@ class ProjectOptions(models.Model):
)
scm_clean = models.BooleanField(
default=False,
help_text=_('Discard any local changes before syncing the project.'),
)
scm_delete_on_update = models.BooleanField(
default=False,
help_text=_('Delete the project before syncing.'),
)
credential = models.ForeignKey(
'Credential',
@@ -109,6 +113,7 @@ class ProjectOptions(models.Model):
timeout = models.IntegerField(
blank=True,
default=0,
help_text=_("The amount of time to run before the task is canceled."),
)

def clean_scm_type(self):
@@ -221,10 +226,13 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin):
)
scm_update_on_launch = models.BooleanField(
default=False,
help_text=_('Update the project when a job is launched that uses the project.'),
)
scm_update_cache_timeout = models.PositiveIntegerField(
default=0,
blank=True,
help_text=_('The number of seconds after the last project update ran that a new '
'project update will be launched as a job dependency.'),
)

scm_revision = models.CharField(

@@ -427,6 +427,9 @@ class Role(models.Model):
def is_ancestor_of(self, role):
return role.ancestors.filter(id=self.id).exists()

def is_singleton(self):
return self.singleton_name in [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]

class RoleAncestorEntry(models.Model):

@@ -10,6 +10,7 @@ import dateutil.rrule
from django.db import models
from django.db.models.query import QuerySet
from django.utils.timezone import now, make_aware, get_default_timezone
from django.utils.translation import ugettext_lazy as _

# AWX
from awx.main.models.base import * # noqa
@@ -65,24 +66,29 @@ class Schedule(CommonModel):
)
enabled = models.BooleanField(
default=True,
help_text=_("Enables processing of this schedule by Tower.")
)
dtstart = models.DateTimeField(
null=True,
default=None,
editable=False,
help_text=_("The first occurrence of the schedule occurs on or after this time.")
)
dtend = models.DateTimeField(
null=True,
default=None,
editable=False,
help_text=_("The last occurrence of the schedule occurs before this time; afterwards the schedule expires.")
)
rrule = models.CharField(
max_length=255,
help_text=_("A value representing the schedule's iCal recurrence rule.")
)
next_run = models.DateTimeField(
null=True,
default=None,
editable=False,
help_text=_("The next time that the scheduled action will run.")
)
extra_data = JSONField(
blank=True,

@@ -20,6 +20,7 @@ from django.utils.translation import ugettext_lazy as _
from django.utils.timezone import now
from django.utils.encoding import smart_text
from django.apps import apps
from django.contrib.contenttypes.models import ContentType

# Django-Polymorphic
from polymorphic import PolymorphicModel
@@ -30,6 +31,7 @@ from djcelery.models import TaskMeta
# AWX
from awx.main.models.base import * # noqa
from awx.main.models.schedules import Schedule
from awx.main.models.mixins import ResourceMixin
from awx.main.utils import (
decrypt_field, _inventory_updates,
copy_model_by_class, copy_m2m_relationships
@@ -122,10 +124,6 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
default=None,
editable=False,
)
has_schedules = models.BooleanField(
default=False,
editable=False,
)
#on_missed_schedule = models.CharField(
#   max_length=32,
#   choices=[],
@@ -170,6 +168,20 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
else:
return super(UnifiedJobTemplate, self).unique_error_message(model_class, unique_check)

@classmethod
def accessible_pk_qs(cls, accessor, role_field):
'''
A re-implementation of the accessible pk queryset for the "normal" unified JTs.
Does not return inventory sources or system JTs; these should
be handled inside get_queryset, where this is used.
'''
ujt_names = [c.__name__.lower() for c in cls.__subclasses__()
if c.__name__.lower() not in ['inventorysource', 'systemjobtemplate']]
subclass_content_types = list(ContentType.objects.filter(
model__in=ujt_names).values_list('id', flat=True))

return ResourceMixin._accessible_pk_qs(cls, accessor, role_field, content_types=subclass_content_types)

def _perform_unique_checks(self, unique_checks):
# Handle the list of unique fields returned above. Replace with an
# appropriate error message for the remaining field(s) in the unique
@@ -353,6 +365,10 @@ class UnifiedJobTypeStringMixin(object):
def _underscore_to_camel(cls, word):
return ''.join(x.capitalize() or '_' for x in word.split('_'))

@classmethod
def _camel_to_underscore(cls, word):
return re.sub('(?!^)([A-Z]+)', r'_\1', word).lower()
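
These two helpers round-trip between model class names and the job-type strings used by UnifiedJobTypeStringMixin; checked against the expressions above:

    # _underscore_to_camel('project_update') -> 'ProjectUpdate'
    # _camel_to_underscore('ProjectUpdate')  -> 'project_update'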

@classmethod
def _model_type(cls, job_type):
# Django >= 1.9
@@ -371,6 +387,9 @@ class UnifiedJobTypeStringMixin(object):
return None
return model.objects.get(id=job_id)

def model_to_str(self):
return UnifiedJobTypeStringMixin._camel_to_underscore(self.__class__.__name__)

class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique, UnifiedJobTypeStringMixin):
'''
@@ -386,6 +405,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
('scheduled', _('Scheduled')), # Job was started from a schedule.
('dependency', _('Dependency')), # Job was started as a dependency of another job.
('workflow', _('Workflow')), # Job was started from a workflow job.
('sync', _('Sync')), # Job was started from a project sync.
]

PASSWORD_FIELDS = ('start_args',)
@@ -431,6 +451,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
blank=True,
default='',
editable=False,
help_text=_("The Tower node the job executed on."),
)
notifications = models.ManyToManyField(
'Notification',
@@ -456,16 +477,19 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
null=True,
default=None,
editable=False,
help_text=_("The date and time the job was queued for starting."),
)
finished = models.DateTimeField(
null=True,
default=None,
editable=False,
help_text=_("The date and time the job finished execution."),
)
elapsed = models.DecimalField(
max_digits=12,
decimal_places=3,
editable=False,
help_text=_("Elapsed time in seconds that the job ran."),
)
job_args = models.TextField(
blank=True,
@@ -487,6 +511,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
blank=True,
default='',
editable=False,
help_text=_("A status field to indicate the state of the job if it wasn't able to run and capture stdout"),
)
start_args = models.TextField(
blank=True,
@@ -553,6 +578,9 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
"Override in child classes; a None value indicates this is not configurable"
return None

def _resources_sufficient_for_launch(self):
return True

def __unicode__(self):
return u'%s-%s-%s' % (self.created, self.id, self.status)

@@ -780,13 +808,19 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
@property
def workflow_job_id(self):
if self.spawned_by_workflow:
return self.unified_job_node.workflow_job.pk
try:
return self.unified_job_node.workflow_job.pk
except UnifiedJob.unified_job_node.RelatedObjectDoesNotExist:
pass
return None

@property
def workflow_node_id(self):
if self.spawned_by_workflow:
return self.unified_job_node.pk
try:
return self.unified_job_node.pk
except UnifiedJob.unified_job_node.RelatedObjectDoesNotExist:
pass
return None
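
Both properties above trade the spawned_by_workflow pre-check for try/except around the reverse one-to-one accessor. In Django, touching a reverse one-to-one like unified_job_node raises the descriptor's RelatedObjectDoesNotExist when no row points at the job, so the except branch stands in for a separate existence query; the general pattern (illustrative):

    try:
        node = job.unified_job_node        # reverse one-to-one lookup
    except UnifiedJob.unified_job_node.RelatedObjectDoesNotExist:
        node = None                        # job was not spawned by a workflow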

@property
@@ -801,7 +835,22 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
return []

def handle_extra_data(self, extra_data):
return
if hasattr(self, 'extra_vars'):
extra_vars = {}
if isinstance(extra_data, dict):
extra_vars = extra_data
elif extra_data is None:
return
else:
if extra_data == "":
return
try:
extra_vars = json.loads(extra_data)
except Exception as e:
logger.warn("Exception deserializing extra vars: " + str(e))
evars = self.extra_vars_dict
evars.update(extra_vars)
self.update_fields(extra_vars=json.dumps(evars))

@property
def can_start(self):

@@ -125,23 +125,16 @@ class WorkflowNodeBase(CreatedModifiedModel):
return {}

accepted_fields, ignored_fields = ujt_obj._accept_or_ignore_job_kwargs(**prompts_dict)
ask_for_vars_dict = ujt_obj._ask_for_vars_dict()

ignored_dict = {}
missing_dict = {}
for fd in ignored_fields:
ignored_dict[fd] = 'Workflow node provided field, but job template is not set to ask on launch'
scan_errors = ujt_obj._extra_job_type_errors(accepted_fields)
ignored_dict.update(scan_errors)
for fd in ['inventory', 'credential']:
if getattr(ujt_obj, fd) is None and not (ask_for_vars_dict.get(fd, False) and fd in prompts_dict):
missing_dict[fd] = 'Job Template does not have this field and workflow node does not provide it'

data = {}
if ignored_dict:
data['ignored'] = ignored_dict
if missing_dict:
data['missing'] = missing_dict
return data

def get_parent_nodes(self):
@@ -245,8 +238,7 @@ class WorkflowJobNode(WorkflowNodeBase):
accepted_fields, ignored_fields = ujt_obj._accept_or_ignore_job_kwargs(**self.prompts_dict())
for fd in ujt_obj._extra_job_type_errors(accepted_fields):
accepted_fields.pop(fd)
data.update(accepted_fields)
# TODO: decide what to do in the event of missing fields
data.update(accepted_fields) # missing fields are handled in the scheduler
# build ancestor artifacts, save them to node model for later
aa_dict = {}
for parent_node in self.get_parent_nodes():
@@ -366,7 +358,9 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl

@classmethod
def _get_unified_jt_copy_names(cls):
return (super(WorkflowJobTemplate, cls)._get_unified_jt_copy_names() +
base_list = super(WorkflowJobTemplate, cls)._get_unified_jt_copy_names()
base_list.remove('labels')
return (base_list +
['survey_spec', 'survey_enabled', 'organization'])

def get_absolute_url(self):
@@ -390,8 +384,8 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
success=list(success_notification_templates),
any=list(any_notification_templates))

def create_workflow_job(self, **kwargs):
workflow_job = self.create_unified_job(**kwargs)
def create_unified_job(self, **kwargs):
workflow_job = super(WorkflowJobTemplate, self).create_unified_job(**kwargs)
workflow_job.copy_nodes_from_original(original=self)
return workflow_job

@@ -416,18 +410,22 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl

def can_start_without_user_input(self):
'''Return whether WFJT can be launched without survey passwords.'''
return not bool(self.variables_needed_to_start)
return not bool(
self.variables_needed_to_start or
self.node_templates_missing() or
self.node_prompts_rejected())

def get_warnings(self):
warning_data = {}
for node in self.workflow_job_template_nodes.all():
if node.unified_job_template is None:
warning_data[node.pk] = 'Node is missing a linked unified_job_template'
continue
def node_templates_missing(self):
return [node.pk for node in self.workflow_job_template_nodes.filter(
unified_job_template__isnull=True).all()]

def node_prompts_rejected(self):
node_list = []
for node in self.workflow_job_template_nodes.prefetch_related('unified_job_template').all():
node_prompts_warnings = node.get_prompts_warnings()
if node_prompts_warnings:
warning_data[node.pk] = node_prompts_warnings
return warning_data
node_list.append(node.pk)
return node_list

def user_copy(self, user):
new_wfjt = self.copy_unified_jt()
@@ -435,11 +433,6 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl
return new_wfjt

# Stub kept in place because of old migrations; can be removed once migrations are squashed
class WorkflowJobInheritNodesMixin(object):
pass

class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin):
class Meta:
app_label = 'main'
@@ -488,10 +481,6 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
result['body'] = '\n'.join(str_arr)
return result

# TODO: Ask UI if this is needed ?
#def get_ui_url(self):
#    return urlparse.urljoin(tower_settings.TOWER_URL_BASE, "/#/workflow_jobs/{}".format(self.pk))

@property
def task_impact(self):
return 0

@@ -10,6 +10,7 @@ from sets import Set
from django.conf import settings
from django.db import transaction, connection
from django.db.utils import DatabaseError
from django.utils.translation import ugettext_lazy as _

# AWX
from awx.main.models import * # noqa
@@ -114,14 +115,20 @@ class TaskManager():
dag = WorkflowDAG(workflow_job)
spawn_nodes = dag.bfs_nodes_to_run()
for spawn_node in spawn_nodes:
if spawn_node.unified_job_template is None:
continue
kv = spawn_node.get_job_kwargs()
job = spawn_node.unified_job_template.create_unified_job(**kv)
spawn_node.job = job
spawn_node.save()
can_start = job.signal_start(**kv)
if job._resources_sufficient_for_launch():
can_start = job.signal_start(**kv)
else:
can_start = False
if not can_start:
job.status = 'failed'
job.job_explanation = "Workflow job could not start because it was not in the right state or required manual credentials"
job.job_explanation = _("Job spawned from workflow could not start because it "
"was not in the right state or required manual credentials")
job.save(update_fields=['status', 'job_explanation'])
connection.on_commit(lambda: job.websocket_emit_status('failed'))
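
The new gate above ties into Job._resources_sufficient_for_launch() from earlier in this diff: a scan job only requires an inventory, while any other job requires both an inventory and a project, so a spawned node whose prompts failed to supply them is failed up front instead of being signaled to start and failing later.

    # Recap of the gate (from the Job model earlier in this diff):
    #   scan job  -> inventory_id is not None
    #   other job -> inventory_id and project_id are both set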

@@ -166,6 +173,9 @@ class TaskManager():

return (active_task_queues, active_tasks)

def get_dependent_jobs_for_inv_and_proj_update(self, job_obj):
return [{'type': j.model_to_str(), 'id': j.id} for j in job_obj.dependent_jobs.all()]

def start_task(self, task, dependent_tasks=[]):
from awx.main.tasks import handle_work_error, handle_work_success

@@ -179,6 +189,17 @@ class TaskManager():
success_handler = handle_work_success.s(task_actual=task_actual)

job_obj = task.get_full()
'''
This is to account for when there isn't enough capacity to execute all
dependent jobs (i.e. proj or inv update) within the same schedule()
call.

Subsequent calls to schedule() need to reconstruct the proj or inv
update -> job failure dependency logic. The call below reconstructs that
failure dependency.
'''
if len(dependencies) == 0:
dependencies = self.get_dependent_jobs_for_inv_and_proj_update(job_obj)
job_obj.status = 'waiting'

(start_status, opts) = job_obj.pre_start()
@@ -230,16 +251,41 @@ class TaskManager():

return inventory_task

'''
Since we are dealing with partial objects, we don't get to take advantage
of Django to resolve the type of the related many-to-many field dependent_job.

Hence the potential double query in this method.
'''
def get_related_dependent_jobs_as_patials(self, job_ids):
dependent_partial_jobs = []
for id in job_ids:
if ProjectUpdate.objects.filter(id=id).exists():
dependent_partial_jobs.append(ProjectUpdateDict({"id": id}).refresh_partial())
elif InventoryUpdate.objects.filter(id=id).exists():
dependent_partial_jobs.append(InventoryUpdateDict({"id": id}).refresh_partial())
return dependent_partial_jobs

def capture_chain_failure_dependencies(self, task, dependencies):
for dep in dependencies:
dep_obj = task.get_full()
dep_obj.dependent_jobs.add(task['id'])
dep_obj.save()
def generate_dependencies(self, task):
dependencies = []
# TODO: What if the project is null ?
if type(task) is JobDict:

if task['project__scm_update_on_launch'] is True and \
self.graph.should_update_related_project(task):
project_task = self.create_project_update(task)
dependencies.append(project_task)
# Inventory created 2 seconds behind job

'''
Inventory may have already been synced from a provision callback.
'''
inventory_sources_already_updated = task.get_inventory_sources_already_updated()

for inventory_source_task in self.graph.get_inventory_sources(task['inventory_id']):
@@ -248,6 +294,8 @@ class TaskManager():
if self.graph.should_update_related_inventory_source(task, inventory_source_task['id']):
inventory_task = self.create_inventory_update(task, inventory_source_task)
dependencies.append(inventory_task)

self.capture_chain_failure_dependencies(task, dependencies)
return dependencies

def process_latest_project_updates(self, latest_project_updates):
@@ -305,6 +353,7 @@ class TaskManager():
'Celery, so it has been marked as failed.',
))
task_obj.save()
_send_notification_templates(task_obj, 'failed')
connection.on_commit(lambda: task_obj.websocket_emit_status('failed'))

logger.error("Task %s appears orphaned... marking as failed" % task)

@@ -67,6 +67,8 @@ class WorkflowDAG(SimpleDAG):
obj = n['node_object']
job = obj.job

if obj.unified_job_template is None:
continue
if not job:
return False
# Job is about to run or is running. Hold our horses and wait for

@@ -117,10 +117,6 @@ class DependencyGraph(object):
if not latest_inventory_update:
return True

# TODO: Other finished, failed cases? i.e. error ?
if latest_inventory_update['status'] in ['failed', 'canceled']:
return True

'''
This is a bit of fuzzy logic.
If the latest inventory update has a created time == job_created_time-2
@@ -138,7 +134,11 @@ class DependencyGraph(object):
timeout_seconds = timedelta(seconds=latest_inventory_update['inventory_source__update_cache_timeout'])
if (latest_inventory_update['finished'] + timeout_seconds) < now:
return True

if latest_inventory_update['inventory_source__update_on_launch'] is True and \
latest_inventory_update['status'] in ['failed', 'canceled', 'error']:
return True

return False

def mark_system_job(self):

@@ -1,6 +1,7 @@

# Python
import json
import itertools

# AWX
from awx.main.utils import decrypt_field_value
@@ -61,13 +62,35 @@ class PartialModelDict(object):
def task_impact(self):
raise RuntimeError("Inherit and implement me")

@classmethod
def merge_values(cls, values):
grouped_results = itertools.groupby(values, key=lambda value: value['id'])

merged_values = []
for k, g in grouped_results:
groups = list(g)
merged_value = {}
for group in groups:
for key, val in group.iteritems():
if not merged_value.get(key):
merged_value[key] = val
elif val != merged_value[key]:
if isinstance(merged_value[key], list):
if val not in merged_value[key]:
merged_value[key].append(val)
else:
old_val = merged_value[key]
merged_value[key] = [old_val, val]
merged_values.append(merged_value)
return merged_values
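
Worked example for merge_values() (rows are illustrative): .values() with a to-many join such as dependent_jobs__id repeats the row once per related object, and merge_values() collapses the repeats, turning columns that differ into lists. Note it relies on itertools.groupby only merging adjacent rows, which holds because rows for the same id come back consecutively:

    rows = [{'id': 1, 'dependent_jobs__id': 3},
            {'id': 1, 'dependent_jobs__id': 4}]
    PartialModelDict.merge_values(rows)
    # -> [{'id': 1, 'dependent_jobs__id': [3, 4]}]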

class JobDict(PartialModelDict):
FIELDS = (
'id', 'status', 'job_template_id', 'inventory_id', 'project_id',
'launch_type', 'limit', 'allow_simultaneous', 'created',
'job_type', 'celery_task_id', 'project__scm_update_on_launch',
'forks', 'start_args',
'forks', 'start_args', 'dependent_jobs__id',
)
model = Job

@@ -85,6 +108,14 @@ class JobDict(PartialModelDict):
start_args = start_args or {}
return start_args.get('inventory_sources_already_updated', [])

@classmethod
def filter_partial(cls, status=[]):
kv = {
'status__in': status
}
merged = PartialModelDict.merge_values(cls.model.objects.filter(**kv).values(*cls.get_db_values()))
return [cls(o) for o in merged]

class ProjectUpdateDict(PartialModelDict):
FIELDS = (
@@ -134,7 +165,8 @@ class InventoryUpdateDict(PartialModelDict):
#'inventory_source__update_on_launch',
#'inventory_source__update_cache_timeout',
FIELDS = (
'id', 'status', 'created', 'celery_task_id', 'inventory_source_id', 'inventory_source__inventory_id',
'id', 'status', 'created', 'celery_task_id', 'inventory_source_id',
'inventory_source__inventory_id',
)
model = InventoryUpdate

@@ -151,6 +183,7 @@ class InventoryUpdateLatestDict(InventoryUpdateDict):
FIELDS = (
'id', 'status', 'created', 'celery_task_id', 'inventory_source_id',
'finished', 'inventory_source__update_cache_timeout', 'launch_type',
'inventory_source__update_on_launch',
)
model = InventoryUpdate

@@ -217,7 +250,7 @@ class SystemJobDict(PartialModelDict):

class AdHocCommandDict(PartialModelDict):
FIELDS = (
'id', 'created', 'status', 'inventory_id',
'id', 'created', 'status', 'inventory_id', 'dependent_jobs__id', 'celery_task_id',
)
model = AdHocCommand

@@ -34,11 +34,13 @@ def run_job_complete(job_id):

@task
def run_task_manager():
logger.debug("Running Tower task manager.")
TaskManager().schedule()

@task
def run_fail_inconsistent_running_jobs():
logger.debug("Running task to fail inconsistent running jobs.")
with transaction.atomic():
# Lock
try:

@@ -1,169 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

import os

import zmq

from django.conf import settings

class Socket(object):
"""An abstraction class implemented for a dumb OS socket.

Intended to allow alteration of backend details in a single, consistent
way throughout the Tower application.
"""
def __init__(self, bucket, rw, debug=0, logger=None, nowait=False):
"""Instantiate a Socket object, which uses ZeroMQ to actually perform
passing a message back and forth.

Designed to be used as a context manager:

with Socket('callbacks', 'w') as socket:
    socket.publish({'message': 'foo bar baz'})

If listening for messages through a socket, the `listen` method
is a simple generator:

with Socket('callbacks', 'r') as socket:
    for message in socket.listen():
        [...]
"""
self._bucket = bucket
self._rw = {
'r': zmq.REP,
'w': zmq.REQ,
}[rw.lower()]

self._connection_pid = None
self._context = None
self._socket = None

self._debug = debug
self._logger = logger
self._nowait = nowait

def __enter__(self):
self.connect()
return self

def __exit__(self, *args, **kwargs):
self.close()

@property
def is_connected(self):
if self._socket:
return True
return False

@property
def port(self):
return {
'callbacks': os.environ.get('CALLBACK_CONSUMER_PORT',
getattr(settings, 'CALLBACK_CONSUMER_PORT', 'tcp://127.0.0.1:5557')),
'task_commands': settings.TASK_COMMAND_PORT,
'websocket': settings.SOCKETIO_NOTIFICATION_PORT,
'fact_cache': settings.FACT_CACHE_PORT,
}[self._bucket]

def connect(self):
"""Connect to ZeroMQ."""

# Make sure that we are clearing everything out if there is
# a problem; PID crossover can cause bad news.
active_pid = os.getpid()
if self._connection_pid is None:
self._connection_pid = active_pid
if self._connection_pid != active_pid:
self._context = None
self._socket = None
self._connection_pid = active_pid

# If the port is an integer, convert it into tcp://
port = self.port
if isinstance(port, int):
port = 'tcp://127.0.0.1:%d' % port

# If the port is None, then this is an intentional dummy;
# honor this. (For testing.)
if not port:
return

# Okay, create the connection.
if self._context is None:
self._context = zmq.Context()
self._socket = self._context.socket(self._rw)
if self._nowait:
self._socket.setsockopt(zmq.RCVTIMEO, 2000)
self._socket.setsockopt(zmq.LINGER, 1000)
if self._rw == zmq.REQ:
self._socket.connect(port)
else:
self._socket.bind(port)

def close(self):
"""Disconnect and tear down."""
if self._socket:
self._socket.close()
self._socket = None
self._context = None

def publish(self, message):
"""Publish a message over the socket."""

# If the port is None, no-op.
if self.port is None:
return

# If we are not connected, whine.
if not self.is_connected:
raise RuntimeError('Cannot publish a message when not connected '
'to the socket.')

# If we are in the wrong mode, whine.
if self._rw != zmq.REQ:
raise RuntimeError('This socket is not opened for writing.')

# If we are in debug mode; provide the PID.
if self._debug:
message.update({'pid': os.getpid(),
'connection_pid': self._connection_pid})

# Send the message.
for retry in xrange(4):
try:
self._socket.send_json(message)
self._socket.recv()
break
except Exception as ex:
if self._logger:
self._logger.error('Publish Exception: %r; retry=%d',
ex, retry, exc_info=True)
if retry >= 3:
raise

def listen(self):
"""Retrieve a single message from the subscription channel
and return it.
"""
# If the port is None, no-op.
if self.port is None:
raise StopIteration

# If we are not connected, whine.
if not self.is_connected:
raise RuntimeError('Cannot publish a message when not connected '
'to the socket.')

# If we are in the wrong mode, whine.
if self._rw != zmq.REP:
raise RuntimeError('This socket is not opened for reading.')

# Actually listen to the socket.
while True:
try:
message = self._socket.recv_json()
yield message
finally:
self._socket.send('1')
@@ -32,7 +32,8 @@ import pexpect

# Celery
from celery import Task, task
from celery.signals import celeryd_init
from celery.signals import celeryd_init, worker_ready
from celery import current_app

# Django
from django.conf import settings
@@ -43,7 +44,6 @@ from django.core.mail import send_mail
from django.contrib.auth.models import User
from django.utils.translation import ugettext_lazy as _
from django.core.cache import cache
from django.utils.log import configure_logging

# AWX
from awx.main.constants import CLOUD_PROVIDERS
@@ -76,7 +76,8 @@ logger = logging.getLogger('awx.main.tasks')
def celery_startup(conf=None, **kwargs):
# Re-init all schedules
# NOTE: Rework this during the Rampart work
logger.info("Syncing Tower Schedules")
startup_logger = logging.getLogger('awx.main.tasks')
startup_logger.info("Syncing Tower Schedules")
for sch in Schedule.objects.all():
try:
sch.update_computed_fields()
@@ -85,19 +86,58 @@ def celery_startup(conf=None, **kwargs):
logger.error("Failed to rebuild schedule {}: {}".format(sch, e))

@task(queue='broadcast_all')
def clear_cache_keys(cache_keys):
set_of_keys = set([key for key in cache_keys])
def _setup_tower_logger():
global logger
from django.utils.log import configure_logging
LOGGING_DICT = settings.LOGGING
if settings.LOG_AGGREGATOR_ENABLED:
LOGGING_DICT['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler'
LOGGING_DICT['handlers']['http_receiver']['async'] = False
if 'awx' in settings.LOG_AGGREGATOR_LOGGERS:
if 'http_receiver' not in LOGGING_DICT['loggers']['awx']['handlers']:
LOGGING_DICT['loggers']['awx']['handlers'] += ['http_receiver']
configure_logging(settings.LOGGING_CONFIG, LOGGING_DICT)
logger = logging.getLogger('awx.main.tasks')

@worker_ready.connect
def task_set_logger_pre_run(*args, **kwargs):
cache.close()
if settings.LOG_AGGREGATOR_ENABLED:
_setup_tower_logger()
logger.debug('Custom Tower logger configured for worker process.')

def _uwsgi_reload():
# http://uwsgi-docs.readthedocs.io/en/latest/MasterFIFO.html#available-commands
logger.warn('Initiating uWSGI chain reload of server')
TRIGGER_CHAIN_RELOAD = 'c'
with open('/var/lib/awx/awxfifo', 'w') as awxfifo:
awxfifo.write(TRIGGER_CHAIN_RELOAD)

def _reset_celery_logging():
# Worker logger reloaded, now send signal to restart pool
app = current_app._get_current_object()
app.control.broadcast('pool_restart', arguments={'reload': True},
destination=['celery@{}'.format(settings.CLUSTER_HOST_ID)], reply=False)

def _clear_cache_keys(set_of_keys):
logger.debug('cache delete_many(%r)', set_of_keys)
cache.delete_many(set_of_keys)

@task(queue='broadcast_all')
def process_cache_changes(cache_keys):
logger.warn('Processing cache changes, task args: {0.args!r} kwargs: {0.kwargs!r}'.format(
process_cache_changes.request))
set_of_keys = set([key for key in cache_keys])
_clear_cache_keys(set_of_keys)
for setting_key in set_of_keys:
if setting_key.startswith('LOG_AGGREGATOR_'):
LOGGING = settings.LOGGING
if settings.LOG_AGGREGATOR_ENABLED:
LOGGING['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler'
else:
LOGGING['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSNullHandler'
configure_logging(settings.LOGGING_CONFIG, LOGGING)
_uwsgi_reload()
_reset_celery_logging()
break
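
Design note on the block above: both the removed clear_cache_keys and its replacement process_cache_changes run on the 'broadcast_all' queue, so every Tower node executes them. Each node drops its own local cache entries and, when a LOG_AGGREGATOR_* setting changed, rebuilds its logging config, chain-reloads uWSGI, and restarts its local Celery pool. Rough order of effects (a summary of the code above, not new behavior):

    # 1. cache.delete_many(keys)   -- drop stale settings on this node
    # 2. configure_logging(...)    -- swap HTTPSHandler <-> HTTPSNullHandler
    # 3. _uwsgi_reload()           -- write 'c' to the master FIFO: chain reload
    # 4. _reset_celery_logging()   -- broadcast pool_restart to this node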

@@ -107,8 +147,12 @@ def send_notifications(notification_list, job_id=None):
raise TypeError("notification_list should be of type list")
if job_id is not None:
job_actual = UnifiedJob.objects.get(id=job_id)
for notification_id in notification_list:
notification = Notification.objects.get(id=notification_id)

notifications = Notification.objects.filter(id__in=notification_list)
if job_id is not None:
job_actual.notifications.add(*notifications)

for notification in notifications:
try:
sent = notification.notification_template.send(notification.subject, notification.body)
notification.status = "successful"
@@ -119,12 +163,11 @@ def send_notifications(notification_list, job_id=None):
notification.error = smart_str(e)
finally:
notification.save()
if job_id is not None:
job_actual.notifications.add(notification)

@task(bind=True, queue='default')
def run_administrative_checks(self):
logger.warn("Running administrative checks.")
if not settings.TOWER_ADMIN_ALERTS:
return
validation_info = TaskEnhancer().validate_enhancements()
@@ -146,11 +189,13 @@ def run_administrative_checks(self):

@task(bind=True, queue='default')
def cleanup_authtokens(self):
logger.warn("Cleaning up expired authtokens.")
AuthToken.objects.filter(expires__lt=now()).delete()

@task(bind=True)
def cluster_node_heartbeat(self):
logger.debug("Cluster node heartbeat task.")
inst = Instance.objects.filter(hostname=settings.CLUSTER_HOST_ID)
if inst.exists():
inst = inst[0]
@@ -370,9 +415,12 @@ class BaseTask(Task):
data += '\n'
# For credentials used with ssh-add, write to a named pipe which
# will be read then closed, instead of leaving the SSH key on disk.
if name in ('credential', 'network_credential', 'scm_credential', 'ad_hoc_credential') and not ssh_too_old:
if name in ('credential', 'scm_credential', 'ad_hoc_credential') and not ssh_too_old:
path = os.path.join(kwargs.get('private_data_dir', tempfile.gettempdir()), name)
self.open_fifo_write(path, data)
# Ansible network modules do not yet support ssh-agent.
# Instead, the SSH private key file is explicitly passed via an
# env variable.
else:
handle, path = tempfile.mkstemp(dir=kwargs.get('private_data_dir', None))
f = os.fdopen(handle, 'w')
@@ -452,7 +500,7 @@ class BaseTask(Task):
for k,v in env.items():
if k in ('REST_API_URL', 'AWS_ACCESS_KEY', 'AWS_ACCESS_KEY_ID'):
continue
elif k.startswith('ANSIBLE_'):
elif k.startswith('ANSIBLE_') and not k.startswith('ANSIBLE_NET'):
continue
elif hidden_re.search(k):
env[k] = HIDDEN_PASSWORD
@@ -616,7 +664,7 @@ class BaseTask(Task):
for child_proc in child_procs:
os.kill(child_proc.pid, signal.SIGKILL)
os.kill(main_proc.pid, signal.SIGKILL)
except TypeError:
except (TypeError, psutil.Error):
os.kill(job.pid, signal.SIGKILL)
else:
os.kill(job.pid, signal.SIGTERM)
@@ -706,6 +754,11 @@ class BaseTask(Task):
stdout_handle.close()
except Exception:
pass

instance = self.update_model(pk)
if instance.cancel_flag:
status = 'canceled'

instance = self.update_model(pk, status=status, result_traceback=tb,
output_replacements=output_replacements,
**extra_update_fields)
@@ -807,8 +860,10 @@ class RunJob(BaseTask):
env['REST_API_URL'] = settings.INTERNAL_API_URL
env['REST_API_TOKEN'] = job.task_auth_token or ''
env['TOWER_HOST'] = settings.TOWER_URL_BASE
env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
env['CALLBACK_QUEUE'] = settings.CALLBACK_QUEUE
env['CALLBACK_CONNECTION'] = settings.BROKER_URL
env['CACHE'] = settings.CACHES['default']['LOCATION'] if 'LOCATION' in settings.CACHES['default'] else ''
if getattr(settings, 'JOB_CALLBACK_DEBUG', False):
env['JOB_CALLBACK_DEBUG'] = '2'
elif settings.DEBUG:
@@ -865,10 +920,14 @@ class RunJob(BaseTask):
env['ANSIBLE_NET_USERNAME'] = network_cred.username
env['ANSIBLE_NET_PASSWORD'] = decrypt_field(network_cred, 'password')

ssh_keyfile = kwargs.get('private_data_files', {}).get('network_credential', '')
if ssh_keyfile:
env['ANSIBLE_NET_SSH_KEYFILE'] = ssh_keyfile

authorize = network_cred.authorize
env['ANSIBLE_NET_AUTHORIZE'] = unicode(int(authorize))
if authorize:
env['ANSIBLE_NET_AUTHORIZE_PASSWORD'] = decrypt_field(network_cred, 'authorize_password')
env['ANSIBLE_NET_AUTH_PASS'] = decrypt_field(network_cred, 'authorize_password')

# Set environment variables related to scan jobs
if job.job_type == PERM_INVENTORY_SCAN:
@@ -984,21 +1043,23 @@ class RunJob(BaseTask):

def get_password_prompts(self):
d = super(RunJob, self).get_password_prompts()
d[re.compile(r'^Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
d[re.compile(r'^Bad passphrase, try again for .*:\s*?$', re.M)] = ''
d[re.compile(r'^sudo password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^SUDO password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^su password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^SU password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^PBRUN password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^pbrun password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^PFEXEC password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^pfexec password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^RUNAS password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^runas password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^SSH password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'^Password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'^Vault password:\s*?$', re.M)] = 'vault_password'
d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
d[re.compile(r'sudo password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'SUDO password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'su password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'SU password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'PBRUN password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'pbrun password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'PFEXEC password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'pfexec password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'RUNAS password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'runas password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'DZDO password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'dzdo password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'Password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'Vault password:\s*?$', re.M)] = 'vault_password'
return d
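
The rewrite above drops the '^' anchor from every prompt pattern. With re.M the anchored forms only matched a prompt sitting at the very start of a line, so a prompt preceded by other output on the same line was missed; without the anchor a match only needs to end the line. A hedged illustration:

    import re
    line = 'some-banner Password:'
    re.search(r'Password:\s*?$', line)           # matches
    re.search(r'^Password:\s*?$', line, re.M)    # does not match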
|
||||
|
||||
def get_stdout_handle(self, instance):
|
||||
@@ -1012,6 +1073,10 @@ class RunJob(BaseTask):
|
||||
|
||||
def job_event_callback(event_data):
|
||||
event_data.setdefault('job_id', instance.id)
|
||||
if 'uuid' in event_data:
|
||||
cache_event = cache.get('ev-{}'.format(event_data['uuid']), None)
|
||||
if cache_event is not None:
|
||||
event_data.update(cache_event)
|
||||
dispatcher.dispatch(event_data)
|
||||
else:
|
||||
def job_event_callback(event_data):
|
||||
@@ -1027,8 +1092,15 @@ class RunJob(BaseTask):
|
||||
private_data_files = kwargs.get('private_data_files', {})
|
||||
if 'credential' in private_data_files:
|
||||
return private_data_files.get('credential')
|
||||
elif 'network_credential' in private_data_files:
|
||||
return private_data_files.get('network_credential')
|
||||
'''
|
||||
Note: Don't inject network ssh key data into ssh-agent for network
|
||||
credentials because the ansible modules do not yet support it.
|
||||
We will want to add back in support when/if Ansible network modules
|
||||
support this.
|
||||
'''
|
||||
#elif 'network_credential' in private_data_files:
|
||||
# return private_data_files.get('network_credential')
|
||||
|
||||
return ''
|
||||
|
||||
def should_use_proot(self, instance, **kwargs):
@@ -1042,17 +1114,18 @@ class RunJob(BaseTask):
local_project_sync = job.project.create_project_update(launch_type="sync")
local_project_sync.job_type = 'run'
local_project_sync.save()
# save the associated project update before calling run() so that a
# cancel() call on the job can cancel the project update
job = self.update_model(job.pk, project_update=local_project_sync)

project_update_task = local_project_sync._get_task_class()
try:
project_update_task().run(local_project_sync.id)
job.scm_revision = job.project.scm_revision
job.project_update = local_project_sync
job.save()
job = self.update_model(job.pk, scm_revision=job.project.scm_revision)
except Exception:
job.status = 'failed'
job.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % \
('project_update', local_project_sync.name, local_project_sync.id)
job.save()
job = self.update_model(job.pk, status='failed',
job_explanation=('Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' %
('project_update', local_project_sync.name, local_project_sync.id)))
raise
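The pattern across this hunk is consistent: direct attribute writes followed by save() are replaced with self.update_model(), so the job row is re-fetched and written through one helper and a concurrent cancel() is not clobbered. A hedged sketch of the idiom, assuming a Django model (this helper is illustrative, not AWX's exact implementation):

    def update_model(model_cls, pk, **updates):
        # Re-fetch by pk so we start from the latest persisted state,
        # then write only the fields we were asked to change.
        instance = model_cls.objects.get(pk=pk)
        for field, value in updates.items():
            setattr(instance, field, value)
        instance.save(update_fields=list(updates.keys()))
        return instance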
def post_run_hook(self, job, status, **kwargs):
@@ -1222,12 +1295,12 @@ class RunProjectUpdate(BaseTask):

def get_password_prompts(self):
d = super(RunProjectUpdate, self).get_password_prompts()
d[re.compile(r'^Username for.*:\s*?$', re.M)] = 'scm_username'
d[re.compile(r'^Password for.*:\s*?$', re.M)] = 'scm_password'
d[re.compile(r'^Password:\s*?$', re.M)] = 'scm_password'
d[re.compile(r'^\S+?@\S+?\'s\s+?password:\s*?$', re.M)] = 'scm_password'
d[re.compile(r'^Enter passphrase for .*:\s*?$', re.M)] = 'scm_key_unlock'
d[re.compile(r'^Bad passphrase, try again for .*:\s*?$', re.M)] = ''
d[re.compile(r'Username for.*:\s*?$', re.M)] = 'scm_username'
d[re.compile(r'Password for.*:\s*?$', re.M)] = 'scm_password'
d[re.compile(r'Password:\s*?$', re.M)] = 'scm_password'
d[re.compile(r'\S+?@\S+?\'s\s+?password:\s*?$', re.M)] = 'scm_password'
d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'scm_key_unlock'
d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
# FIXME: Configure whether we should auto accept host keys?
d[re.compile(r'^Are you sure you want to continue connecting \(yes/no\)\?\s*?$', re.M)] = 'yes'
return d
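As in the job and ad hoc prompt tables, the '^' anchors are dropped here. With re.M an anchored pattern only matches at the start of a line, so a prompt that trails other output on the same line is missed; a self-contained demonstration:

    import re
    buf = 'Cloning into bare repository... Password for https://example.com: '
    # The anchored pattern never fires because the prompt is mid-line.
    assert re.search(r'^Password for.*:\s*?$', buf, re.M) is None
    # The relaxed pattern matches the same prompt.
    assert re.search(r'Password for.*:\s*?$', buf, re.M) is not None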
@@ -1338,10 +1411,22 @@ class RunInventoryUpdate(BaseTask):
'password'))
# Allow custom options to vmware inventory script.
elif inventory_update.source == 'vmware':
section = 'defaults'
credential = inventory_update.credential

section = 'vmware'
cp.add_section(section)
cp.set('vmware', 'cache_max_age', 0)

cp.set('vmware', 'username', credential.username)
cp.set('vmware', 'password', decrypt_field(credential, 'password'))
cp.set('vmware', 'server', credential.host)

vmware_opts = dict(inventory_update.source_vars_dict.items())
vmware_opts.setdefault('guests_only', 'True')
if inventory_update.instance_filters:
vmware_opts.setdefault('host_filters', inventory_update.instance_filters)
if inventory_update.group_by:
vmware_opts.setdefault('groupby_patterns', inventory_update.group_by)

for k, v in vmware_opts.items():
cp.set(section, k, unicode(v))
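Connection details for the vmware inventory script now land in a generated INI file rather than in individual environment variables (see the VMWARE_INI_PATH hunk below). A standalone Python 2 sketch of the file this block writes, with illustrative stand-ins for the credential fields:

    import ConfigParser

    cp = ConfigParser.RawConfigParser()
    cp.add_section('vmware')
    cp.set('vmware', 'cache_max_age', '0')
    cp.set('vmware', 'username', 'vcenter-user')       # credential.username
    cp.set('vmware', 'password', 'secret')             # decrypt_field(credential, 'password')
    cp.set('vmware', 'server', 'vcenter.example.com')  # credential.host
    cp.set('vmware', 'guests_only', 'True')
    with open('vmware.ini', 'wb') as ini_file:
        cp.write(ini_file)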
@@ -1362,7 +1447,9 @@ class RunInventoryUpdate(BaseTask):

section = 'ansible'
cp.add_section(section)
cp.set(section, 'group_patterns', '["{app}-{tier}-{color}", "{app}-{color}", "{app}", "{tier}"]')
cp.set(section, 'group_patterns', os.environ.get('SATELLITE6_GROUP_PATTERNS', []))
cp.set(section, 'want_facts', True)
cp.set(section, 'group_prefix', os.environ.get('SATELLITE6_GROUP_PREFIX', 'foreman_'))

section = 'cache'
cp.add_section(section)
@@ -1459,10 +1546,7 @@ class RunInventoryUpdate(BaseTask):
# complain about not being able to determine its version number.
env['PBR_VERSION'] = '0.5.21'
elif inventory_update.source == 'vmware':
env['VMWARE_INI'] = cloud_credential
env['VMWARE_HOST'] = passwords.get('source_host', '')
env['VMWARE_USER'] = passwords.get('source_username', '')
env['VMWARE_PASSWORD'] = passwords.get('source_password', '')
env['VMWARE_INI_PATH'] = cloud_credential
elif inventory_update.source == 'azure':
env['AZURE_SUBSCRIPTION_ID'] = passwords.get('source_username', '')
env['AZURE_CERT_PATH'] = cloud_credential
@@ -1647,6 +1731,7 @@ class RunAdHocCommand(BaseTask):
env['CALLBACK_QUEUE'] = settings.CALLBACK_QUEUE
env['CALLBACK_CONNECTION'] = settings.BROKER_URL
env['ANSIBLE_SFTP_BATCH_MODE'] = 'False'
env['CACHE'] = settings.CACHES['default']['LOCATION'] if 'LOCATION' in settings.CACHES['default'] else ''
if getattr(settings, 'JOB_CALLBACK_DEBUG', False):
env['JOB_CALLBACK_DEBUG'] = '2'
elif settings.DEBUG:
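One readability note on the new CACHE line: the conditional expression is equivalent to a plain dict lookup with a default. A self-contained check (CACHES stands in for the Django setting):

    CACHES = {'default': {'BACKEND': 'locmem'}}  # no LOCATION key yet
    assert CACHES['default'].get('LOCATION', '') == ''
    CACHES['default']['LOCATION'] = 'memcached:11211'
    assert CACHES['default'].get('LOCATION', '') == 'memcached:11211'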
@@ -1722,20 +1807,22 @@ class RunAdHocCommand(BaseTask):

def get_password_prompts(self):
d = super(RunAdHocCommand, self).get_password_prompts()
d[re.compile(r'^Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
d[re.compile(r'^Bad passphrase, try again for .*:\s*?$', re.M)] = ''
d[re.compile(r'^sudo password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^SUDO password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^su password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^SU password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^PBRUN password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^pbrun password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^PFEXEC password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^pfexec password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^RUNAS password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^runas password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'^SSH password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'^Password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
d[re.compile(r'sudo password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'SUDO password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'su password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'SU password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'PBRUN password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'pbrun password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'PFEXEC password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'pfexec password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'RUNAS password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'runas password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'DZDO password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'dzdo password.*:\s*?$', re.M)] = 'become_password'
d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
d[re.compile(r'Password:\s*?$', re.M)] = 'ssh_password'
return d
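DZDO is Centrify's privilege-elevation command (its sudo analogue); the two new entries let ad hoc runs answer its prompt as well. A quick self-contained check of the unanchored pattern against a plausible prompt:

    import re
    # The trailing space mimics how such prompts typically appear on stdout.
    assert re.search(r'dzdo password.*:\s*?$', 'dzdo password for alice: ', re.M)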
def get_stdout_handle(self, instance):
@@ -1749,6 +1836,10 @@ class RunAdHocCommand(BaseTask):

def ad_hoc_command_event_callback(event_data):
event_data.setdefault('ad_hoc_command_id', instance.id)
if 'uuid' in event_data:
cache_event = cache.get('ev-{}'.format(event_data['uuid']), None)
if cache_event is not None:
event_data.update(cache_event)
dispatcher.dispatch(event_data)
else:
def ad_hoc_command_event_callback(event_data):
@@ -1788,7 +1879,9 @@ class RunSystemJob(BaseTask):
if 'days' in json_vars and system_job.job_type != 'cleanup_facts':
args.extend(['--days', str(json_vars.get('days', 60))])
if system_job.job_type == 'cleanup_jobs':
args.extend(['--jobs', '--project-updates', '--inventory-updates', '--management-jobs', '--ad-hoc-commands'])
args.extend(['--jobs', '--project-updates', '--inventory-updates',
'--management-jobs', '--ad-hoc-commands', '--workflow-jobs',
'--notifications'])
if system_job.job_type == 'cleanup_facts':
if 'older_than' in json_vars:
args.extend(['--older_than', str(json_vars['older_than'])])
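With this hunk the cleanup_jobs system job also purges workflow jobs and notifications. An illustrative final argument list, assuming a days value of 60 taken from the job's extra_vars:

    args = ['cleanup_jobs', '--days', '60',
            '--jobs', '--project-updates', '--inventory-updates',
            '--management-jobs', '--ad-hoc-commands', '--workflow-jobs',
            '--notifications']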
@@ -45,3 +45,18 @@ def test_role_team_view_access(rando, team, inventory, mocker, post):
mock_access.assert_called_once_with(
inventory.admin_role, team, 'member_role.parents', data,
skip_sub_obj_read_check=False)


@pytest.mark.django_db
def test_org_associate_with_junk_data(rando, admin_user, organization, post):
"""
Assure that post-hoc enforcement of the auditor role is turned off
when the action is an association
"""
user_data = {'is_system_auditor': True, 'id': rando.pk}
post(url=reverse('api:organization_users_list', args=(organization.pk,)),
data=user_data, expect=204, user=admin_user)
# assure user is now an org member
assert rando in organization.member_role
# assure that this did not also make them a system auditor
assert not rando.is_system_auditor
@@ -339,39 +339,6 @@ def test_list_created_org_credentials(post, get, organization, org_admin, org_me
assert response.data['count'] == 0


@pytest.mark.django_db
def test_cant_change_organization(patch, credential, organization, org_admin):
credential.organization = organization
credential.save()

response = patch(reverse('api:credential_detail', args=(credential.id,)), {
'name': 'Some new name',
}, org_admin)
assert response.status_code == 200

response = patch(reverse('api:credential_detail', args=(credential.id,)), {
'name': 'Some new name2',
'organization': organization.id, # fine for it to be the same
}, org_admin)
assert response.status_code == 200

response = patch(reverse('api:credential_detail', args=(credential.id,)), {
'name': 'Some new name3',
'organization': None
}, org_admin)
assert response.status_code == 403


@pytest.mark.django_db
def test_cant_add_organization(patch, credential, organization, org_admin):
assert credential.organization is None
response = patch(reverse('api:credential_detail', args=(credential.id,)), {
'name': 'Some new name',
'organization': organization.id
}, org_admin)
assert response.status_code == 403


#
# Openstack Credentials
#
@@ -65,6 +65,17 @@ def test_edit_sensitive_fields(patch, job_template_factory, alice, grant_project
}, alice, expect=expect)


@pytest.mark.django_db
def test_reject_dict_extra_vars_patch(patch, job_template_factory, admin_user):
# extra_vars must be a string; a dict payload gets a 400 here rather than
# being saved incorrectly
jt = job_template_factory(
'jt', organization='org1', project='prj', inventory='inv', credential='cred'
).job_template
patch(reverse('api:job_template_detail', args=(jt.id,)),
{'extra_vars': {'foo': 5}}, admin_user, expect=400)


@pytest.mark.django_db
def test_edit_playbook(patch, job_template_factory, alice):
objs = job_template_factory('jt', organization='org1', project='prj', inventory='inv', credential='cred')
@@ -44,6 +44,27 @@ def test_license_cannot_be_removed_via_system_settings(mock_no_license_file, get
assert response.data['LICENSE']


@pytest.mark.django_db
def test_jobs_settings(get, put, patch, delete, admin):
url = reverse('api:setting_singleton_detail', args=('jobs',))
get(url, user=admin, expect=200)
delete(url, user=admin, expect=204)
response = get(url, user=admin, expect=200)
data = dict(response.data.items())
put(url, user=admin, data=data, expect=200)
patch(url, user=admin, data={'AWX_PROOT_HIDE_PATHS': ['/home']}, expect=200)
response = get(url, user=admin, expect=200)
assert response.data['AWX_PROOT_HIDE_PATHS'] == ['/home']
data.pop('AWX_PROOT_HIDE_PATHS')
data.pop('AWX_PROOT_SHOW_PATHS')
data.pop('AWX_ANSIBLE_CALLBACK_PLUGINS')
put(url, user=admin, data=data, expect=200)
response = get(url, user=admin, expect=200)
assert response.data['AWX_PROOT_HIDE_PATHS'] == []
assert response.data['AWX_PROOT_SHOW_PATHS'] == []
assert response.data['AWX_ANSIBLE_CALLBACK_PLUGINS'] == []

@pytest.mark.django_db
def test_ldap_settings(get, put, patch, delete, admin, enterprise_license):
url = reverse('api:setting_singleton_detail', args=('ldap',))
@@ -65,6 +86,26 @@ def test_ldap_settings(get, put, patch, delete, admin, enterprise_license):
patch(url, user=admin, data={'AUTH_LDAP_SERVER_URI': 'ldap://ldap.example.com, ldap://ldap2.example.com'}, expect=200)


@pytest.mark.parametrize('setting', [
'AUTH_LDAP_USER_DN_TEMPLATE',
'AUTH_LDAP_REQUIRE_GROUP',
'AUTH_LDAP_DENY_GROUP',
])
@pytest.mark.django_db
def test_empty_ldap_dn(get, put, patch, delete, admin, enterprise_license,
setting):
url = reverse('api:setting_singleton_detail', args=('ldap',))
Setting.objects.create(key='LICENSE', value=enterprise_license)

patch(url, user=admin, data={setting: ''}, expect=200)
resp = get(url, user=admin, expect=200)
assert resp.data[setting] is None

patch(url, user=admin, data={setting: None}, expect=200)
resp = get(url, user=admin, expect=200)
assert resp.data[setting] is None


@pytest.mark.django_db
def test_radius_settings(get, put, patch, delete, admin, enterprise_license, settings):
url = reverse('api:setting_singleton_detail', args=('radius',))

@@ -7,65 +7,41 @@ from django.core.urlresolvers import reverse
# user creation
#

EXAMPLE_USER_DATA = {
"username": "affable",
"first_name": "a",
"last_name": "a",
"email": "a@a.com",
"is_superuser": False,
"password": "r$TyKiOCb#ED"
}


@pytest.mark.django_db
def test_user_create(post, admin):
response = post(reverse('api:user_list'), {
"username": "affable",
"first_name": "a",
"last_name": "a",
"email": "a@a.com",
"is_superuser": False,
"password": "fo0m4nchU"
}, admin)
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
assert response.status_code == 201
assert not response.data['is_superuser']
assert not response.data['is_system_auditor']


@pytest.mark.django_db
def test_fail_double_create_user(post, admin):
response = post(reverse('api:user_list'), {
"username": "affable",
"first_name": "a",
"last_name": "a",
"email": "a@a.com",
"is_superuser": False,
"password": "fo0m4nchU"
}, admin)
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
assert response.status_code == 201

response = post(reverse('api:user_list'), {
"username": "affable",
"first_name": "a",
"last_name": "a",
"email": "a@a.com",
"is_superuser": False,
"password": "fo0m4nchU"
}, admin)
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
assert response.status_code == 400


@pytest.mark.django_db
def test_create_delete_create_user(post, delete, admin):
response = post(reverse('api:user_list'), {
"username": "affable",
"first_name": "a",
"last_name": "a",
"email": "a@a.com",
"is_superuser": False,
"password": "fo0m4nchU"
}, admin)
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
assert response.status_code == 201

response = delete(reverse('api:user_detail', args=(response.data['id'],)), admin)
assert response.status_code == 204

response = post(reverse('api:user_list'), {
"username": "affable",
"first_name": "a",
"last_name": "a",
"email": "a@a.com",
"is_superuser": False,
"password": "fo0m4nchU"
}, admin)
response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
print(response.data)
assert response.status_code == 201

@@ -41,7 +41,7 @@ from awx.main.models.organization import (
Permission,
Team,
)

from awx.main.models.rbac import Role
from awx.main.models.notifications import (
NotificationTemplate,
Notification
@@ -262,6 +262,13 @@ def admin(user):
return user('admin', True)


@pytest.fixture
def system_auditor(user):
u = user(False)
Role.singleton('system_auditor').members.add(u)
return u


@pytest.fixture
def alice(user):
return user('alice', False)

@@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-

import mock # noqa
import pytest

@@ -22,6 +24,84 @@ def team_project_list(organization_factory):
return objects


@pytest.mark.django_db
def test_user_project_paged_list(get, organization_factory):
'Test project listing that spans multiple pages'

# 3 total projects, 1 per page, 3 pages
objects = organization_factory(
'org1',
projects=['project-%s' % i for i in range(3)],
users=['alice'],
roles=['project-%s.admin_role:alice' % i for i in range(3)],
)

# first page has first project and no previous page
pk = objects.users.alice.pk
url = reverse('api:user_projects_list', args=(pk,))
results = get(url, objects.users.alice, QUERY_STRING='page_size=1').data
assert results['count'] == 3
assert len(results['results']) == 1
assert results['previous'] is None
assert results['next'] == (
'/api/v1/users/%s/projects/?page=2&page_size=1' % pk
)

# second page has one more, a previous and next page
results = get(url, objects.users.alice,
QUERY_STRING='page=2&page_size=1').data
assert len(results['results']) == 1
assert results['previous'] == (
'/api/v1/users/%s/projects/?page=1&page_size=1' % pk
)
assert results['next'] == (
'/api/v1/users/%s/projects/?page=3&page_size=1' % pk
)

# third page has last project and a previous page
results = get(url, objects.users.alice,
QUERY_STRING='page=3&page_size=1').data
assert len(results['results']) == 1
assert results['previous'] == (
'/api/v1/users/%s/projects/?page=2&page_size=1' % pk
)
assert results['next'] is None

@pytest.mark.django_db
def test_user_project_paged_list_with_unicode(get, organization_factory):
'Test project listing that contains unicode chars in the next/prev links'

# Create 2 projects that contain a "cloud" unicode character, make sure we
# can search it and properly generate next/previous page links
objects = organization_factory(
'org1',
projects=['project-☁-1','project-☁-2'],
users=['alice'],
roles=['project-☁-1.admin_role:alice','project-☁-2.admin_role:alice'],
)
pk = objects.users.alice.pk
url = reverse('api:user_projects_list', args=(pk,))

# first on first page, next page link contains unicode char
results = get(url, objects.users.alice,
QUERY_STRING='page_size=1&search=%E2%98%81').data
assert results['count'] == 2
assert len(results['results']) == 1
assert results['next'] == (
'/api/v1/users/%s/projects/?page=2&page_size=1&search=%%E2%%98%%81' % pk # noqa
)

# second project on second page, previous page link contains unicode char
results = get(url, objects.users.alice,
QUERY_STRING='page=2&page_size=1&search=%E2%98%81').data
assert results['count'] == 2
assert len(results['results']) == 1
assert results['previous'] == (
'/api/v1/users/%s/projects/?page=1&page_size=1&search=%%E2%%98%%81' % pk # noqa
)
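The %E2%98%81 sequences in the expected links are simply the UTF-8 percent-encoding of the cloud character used in the project names (Python 2, matching this codebase):

    import urllib
    # '☁' is three UTF-8 bytes: e2 98 81.
    assert urllib.quote(u'☁'.encode('utf-8')) == '%E2%98%81'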
@pytest.mark.django_db
def test_user_project_list(get, organization_factory):
'List of projects a user has access to, filtered by projects you can also see'

@@ -259,22 +259,37 @@ def test_associate_label(label, user, job_template):


@pytest.mark.django_db
def test_move_schedule_to_JT_no_access(job_template, rando):
schedule = Schedule.objects.create(
unified_job_template=job_template,
rrule='DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1')
job_template.admin_role.members.add(rando)
jt2 = JobTemplate.objects.create(name="other-jt")
access = ScheduleAccess(rando)
assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))
class TestJobTemplateSchedules:

rrule = 'DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1'
rrule2 = 'DTSTART:20151117T050000Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1'

@pytest.fixture
def jt2(self):
return JobTemplate.objects.create(name="other-jt")

def test_move_schedule_to_JT_no_access(self, job_template, rando, jt2):
schedule = Schedule.objects.create(unified_job_template=job_template, rrule=self.rrule)
job_template.admin_role.members.add(rando)
access = ScheduleAccess(rando)
assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))


@pytest.mark.django_db
def test_move_schedule_from_JT_no_access(job_template, rando):
schedule = Schedule.objects.create(
unified_job_template=job_template,
rrule='DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1')
jt2 = JobTemplate.objects.create(name="other-jt")
jt2.admin_role.members.add(rando)
access = ScheduleAccess(rando)
assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))
def test_move_schedule_from_JT_no_access(self, job_template, rando, jt2):
schedule = Schedule.objects.create(unified_job_template=job_template, rrule=self.rrule)
jt2.admin_role.members.add(rando)
access = ScheduleAccess(rando)
assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))


def test_can_create_schedule_with_execute(self, job_template, rando):
job_template.execute_role.members.add(rando)
access = ScheduleAccess(rando)
assert access.can_add({'unified_job_template': job_template})


def test_can_modify_ones_own_schedule(self, job_template, rando):
job_template.execute_role.members.add(rando)
schedule = Schedule.objects.create(unified_job_template=job_template, rrule=self.rrule, created_by=rando)
access = ScheduleAccess(rando)
assert access.can_change(schedule, {'rrule': self.rrule2})

@@ -9,7 +9,7 @@ from awx.main.models import Role, User, Organization, Inventory


@pytest.mark.django_db
class TestSysAuditor(TransactionTestCase):
class TestSysAuditorTransactional(TransactionTestCase):
def rando(self):
return User.objects.create(username='rando', password='rando', email='rando@com.com')

@@ -41,6 +41,10 @@ class TestSysAuditor(TransactionTestCase):
assert not rando.is_system_auditor


@pytest.mark.django_db
def test_system_auditor_is_system_auditor(system_auditor):
assert system_auditor.is_system_auditor


@pytest.mark.django_db
def test_user_admin(user_project, project, user):

@@ -51,19 +51,50 @@ class TestWorkflowJobTemplateAccess:
@pytest.mark.django_db
class TestWorkflowJobTemplateNodeAccess:

def test_jt_access_to_edit(self, wfjt_node, org_admin):
def test_no_jt_access_to_edit(self, wfjt_node, org_admin):
# without access to the related job template, an admin of the WFJT
# cannot change the prompted parameters
access = WorkflowJobTemplateNodeAccess(org_admin)
assert not access.can_change(wfjt_node, {'job_type': 'scan'})

def test_add_JT_no_start_perm(self, wfjt, job_template, rando):
wfjt.admin_role.members.add(rando)
access = WorkflowJobTemplateNodeAccess(rando)
job_template.read_role.members.add(rando)
assert not access.can_add({
'workflow_job_template': wfjt,
'unified_job_template': job_template})

def test_add_node_with_minimum_permissions(self, wfjt, job_template, inventory, rando):
wfjt.admin_role.members.add(rando)
access = WorkflowJobTemplateNodeAccess(rando)
job_template.execute_role.members.add(rando)
inventory.use_role.members.add(rando)
assert access.can_add({
'workflow_job_template': wfjt,
'inventory': inventory,
'unified_job_template': job_template})

def test_remove_unwanted_foreign_node(self, wfjt_node, job_template, rando):
wfjt = wfjt_node.workflow_job_template
wfjt.admin_role.members.add(rando)
wfjt_node.unified_job_template = job_template
access = WorkflowJobTemplateNodeAccess(rando)
assert access.can_delete(wfjt_node)


@pytest.mark.django_db
class TestWorkflowJobAccess:

def test_wfjt_admin_delete(self, wfjt, workflow_job, rando):
wfjt.admin_role.members.add(rando)
access = WorkflowJobAccess(rando)
def test_org_admin_can_delete_workflow_job(self, workflow_job, org_admin):
access = WorkflowJobAccess(org_admin)
assert access.can_delete(workflow_job)

def test_wfjt_admin_can_delete_workflow_job(self, workflow_job, rando):
workflow_job.workflow_job_template.admin_role.members.add(rando)
access = WorkflowJobAccess(rando)
assert not access.can_delete(workflow_job)

def test_cancel_your_own_job(self, wfjt, workflow_job, rando):
wfjt.execute_role.members.add(rando)
workflow_job.created_by = rando
@@ -71,6 +102,19 @@ class TestWorkflowJobAccess:
access = WorkflowJobAccess(rando)
assert access.can_cancel(workflow_job)

def test_copy_permissions_org_admin(self, wfjt, org_admin, org_member):
admin_access = WorkflowJobTemplateAccess(org_admin)
assert admin_access.can_copy(wfjt)

def test_copy_permissions_user(self, wfjt, org_admin, org_member):
'''
Only org admins are able to add WFJTs, so only org admins
are able to copy them
'''
wfjt.admin_role.members.add(org_member)
member_access = WorkflowJobTemplateAccess(org_member)
assert not member_access.can_copy(wfjt)

def test_workflow_copy_warnings_inv(self, wfjt, rando, inventory):
'''
The user `rando` does not have access to the prompted inventory in a
@@ -80,13 +124,11 @@ class TestWorkflowJobAccess:
access = WorkflowJobTemplateAccess(rando, save_messages=True)
assert not access.can_copy(wfjt)
warnings = access.messages
assert 1 in warnings
assert 'inventory' in warnings[1]
assert 'inventories_unable_to_copy' in warnings

def test_workflow_copy_warnings_jt(self, wfjt, rando, job_template):
wfjt.workflow_job_template_nodes.create(unified_job_template=job_template)
access = WorkflowJobTemplateAccess(rando, save_messages=True)
assert not access.can_copy(wfjt)
warnings = access.messages
assert 1 in warnings
assert 'unified_job_template' in warnings[1]
assert 'templates_unable_to_copy' in warnings

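Both copy-warning assertions move from positional keys to named ones. A hedged sketch of the messages payload shape the new assertions expect (the identifiers are illustrative):

    warnings = {
        'inventories_unable_to_copy': ['node-1'],  # nodes whose prompted inventory is unreadable
        'templates_unable_to_copy': ['node-2'],    # nodes whose job template is unusable
    }
    assert 'inventories_unable_to_copy' in warnings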
@@ -1,83 +0,0 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved.

import json

from django.test import TestCase

from rest_framework.permissions import AllowAny
from rest_framework.test import APIRequestFactory
from rest_framework.views import APIView

from awx.api.utils.decorators import paginated


class PaginatedDecoratorTests(TestCase):
"""A set of tests for ensuring that the "paginated" decorator works
in the way we expect.
"""
def setUp(self):
self.rf = APIRequestFactory()

# Define an uninteresting view that we can use to test
# that the paginator wraps in the way we expect.
class View(APIView):
permission_classes = (AllowAny,)

@paginated
def get(self, request, limit, ordering, offset):
return ['a', 'b', 'c', 'd', 'e'], 26, None
self.view = View.as_view()

def test_implicit_first_page(self):
"""Establish that if we get an implicit request for the first page
(e.g. no page provided), that it is returned appropriately.
"""
# Create a request, and run the paginated function.
request = self.rf.get('/dummy/', {'page_size': 5})
response = self.view(request)

# Ensure the response looks like what it should.
r = json.loads(response.rendered_content)
self.assertEqual(r['count'], 26)
self.assertIn(r['next'],
(u'/dummy/?page=2&page_size=5',
u'/dummy/?page_size=5&page=2'))
self.assertEqual(r['previous'], None)
self.assertEqual(r['results'], ['a', 'b', 'c', 'd', 'e'])

def test_mid_page(self):
"""Establish that if we get a request for a page in the middle, that
the paginator causes next and prev to be set appropriately.
"""
# Create a request, and run the paginated function.
request = self.rf.get('/dummy/', {'page': 3, 'page_size': 5})
response = self.view(request)

# Ensure the response looks like what it should.
r = json.loads(response.rendered_content)
self.assertEqual(r['count'], 26)
self.assertIn(r['next'],
(u'/dummy/?page=4&page_size=5',
u'/dummy/?page_size=5&page=4'))
self.assertIn(r['previous'],
(u'/dummy/?page=2&page_size=5',
u'/dummy/?page_size=5&page=2'))
self.assertEqual(r['results'], ['a', 'b', 'c', 'd', 'e'])

def test_last_page(self):
"""Establish that if we get a request for the last page, that the
paginator picks up on it and sets `next` to None.
"""
# Create a request, and run the paginated function.
request = self.rf.get('/dummy/', {'page': 6, 'page_size': 5})
response = self.view(request)

# Ensure the response looks like what it should.
r = json.loads(response.rendered_content)
self.assertEqual(r['count'], 26)
self.assertEqual(r['next'], None)
self.assertIn(r['previous'],
(u'/dummy/?page=5&page_size=5',
u'/dummy/?page_size=5&page=5'))
self.assertEqual(r['results'], ['a', 'b', 'c', 'd', 'e'])

@@ -53,8 +53,6 @@ def jobs(mocker):
class TestJobSerializerGetRelated():
@pytest.mark.parametrize("related_resource_name", [
'job_events',
'job_plays',
'job_tasks',
'relaunch',
'labels',
])

@@ -125,6 +125,7 @@ class TestWorkflowJobTemplateNodeSerializerCharPrompts():
serializer = WorkflowJobTemplateNodeSerializer()
node = WorkflowJobTemplateNode(pk=1)
node.char_prompts = {'limit': 'webservers'}
serializer.instance = node
view = FakeView(node)
view.request = FakeRequest()
view.request.method = "PATCH"

Some files were not shown because too many files have changed in this diff.