diff --git a/COPYING b/COPYING index 991895d074..83fb3ccb6e 100644 --- a/COPYING +++ b/COPYING @@ -1,5 +1,19 @@ -The Ansible Tower Software is a commercial software licensed to you pursuant to the Ansible Software Subscription and Services Agreement (“EULA”) located at www.ansible.com/subscription-agreement and an annual Order/Agreement with Ansible, Inc. +ANSIBLE TOWER BY RED HAT END USER LICENSE AGREEMENT -The Ansible Tower Software is free for use up to ten (10) Nodes, any additional Nodes shall be purchased. +This end user license agreement (“EULA”) governs the use of the Ansible Tower software and any related updates, upgrades, versions, appearance, structure and organization (the “Ansible Tower Software”), regardless of the delivery mechanism. -Ansible and Ansible Tower are registered Trademarks of Ansible, Inc. +1. License Grant. Subject to the terms of this EULA, Red Hat, Inc. and its affiliates (“Red Hat”) grant to you (“You”) a non-transferable, non-exclusive, worldwide, non-sublicensable, limited, revocable license to use the Ansible Tower Software for the term of the associated Red Hat Software Subscription(s) and in a quantity equal to the number of Red Hat Software Subscriptions purchased from Red Hat for the Ansible Tower Software (“License”), each as set forth on the applicable Red Hat ordering document. You acquire only the right to use the Ansible Tower Software and do not acquire any rights of ownership. Red Hat reserves all rights to the Ansible Tower Software not expressly granted to You. This License grant pertains solely to Your use of the Ansible Tower Software and is not intended to limit Your rights under, or grant You rights that supersede, the license terms of any software packages which may be made available with the Ansible Tower Software that are subject to an open source software license. + +2. Intellectual Property Rights. 
Title to the Ansible Tower Software and each component, copy and modification, including all derivative works whether made by Red Hat, You or on Red Hat's behalf, including those made at Your suggestion and all associated intellectual property rights, are and shall remain the sole and exclusive property of Red Hat and/or its licensors. The License does not authorize You (nor may You allow any third party, specifically non-employees of Yours) to: (a) copy, distribute, reproduce, use or allow third party access to the Ansible Tower Software except as expressly authorized hereunder; (b) decompile, disassemble, reverse engineer, translate, modify, convert or apply any procedure or process to the Ansible Tower Software in order to ascertain, derive, and/or appropriate for any reason or purpose, including the Ansible Tower Software source code or source listings or any trade secret information or process contained in the Ansible Tower Software (except as permitted under applicable law); (c) execute or incorporate other software (except for approved software as appears in the Ansible Tower Software documentation or specifically approved by Red Hat in writing) into Ansible Tower Software, or create a derivative work of any part of the Ansible Tower Software; (d) remove any trademarks, trade names or titles, copyright legends or any other proprietary marking on the Ansible Tower Software; (e) disclose the results of any benchmarking of the Ansible Tower Software (whether or not obtained with Red Hat’s assistance) to any third party; (f) attempt to circumvent any user limits or other license, timing or use restrictions that are built into, defined or agreed upon, regarding the Ansible Tower Software. You are hereby notified that the Ansible Tower Software may contain time-out devices, counter devices, and/or other devices intended to ensure the limits of the License will not be exceeded (“Limiting Devices”). 
If the Ansible Tower Software contains Limiting Devices, Red Hat will provide You materials necessary to use the Ansible Tower Software to the extent permitted. You may not tamper with or otherwise take any action to defeat or circumvent a Limiting Device or other control measure, including but not limited to, resetting the unit amount or using false host identification number for the purpose of extending any term of the License. + +3. Evaluation Licenses. Unless You have purchased Ansible Tower Software Subscriptions from Red Hat or an authorized reseller under the terms of a commercial agreement with Red Hat, all use of the Ansible Tower Software shall be limited to testing purposes and not for production use (“Evaluation”). Unless otherwise agreed by Red Hat, Evaluation of the Ansible Tower Software shall be limited to an evaluation environment and the Ansible Tower Software shall not be used to manage any systems or virtual machines on networks being used in the operation of Your business or any other non-evaluation purpose. Unless otherwise agreed by Red Hat, You shall limit all Evaluation use to a single 30 day evaluation period and shall not download or otherwise obtain additional copies of the Ansible Tower Software or license keys for Evaluation. + +4. Limited Warranty. Except as specifically stated in this Section 4, to the maximum extent permitted under applicable law, the Ansible Tower Software and the components are provided and licensed “as is” without warranty of any kind, expressed or implied, including the implied warranties of merchantability, non-infringement or fitness for a particular purpose. Red Hat warrants solely to You that the media on which the Ansible Tower Software may be furnished will be free from defects in materials and manufacture under normal use for a period of thirty (30) days from the date of delivery to You. 
Red Hat does not warrant that the functions contained in the Ansible Tower Software will meet Your requirements or that the operation of the Ansible Tower Software will be entirely error free, appear precisely as described in the accompanying documentation, or comply with regulatory requirements. + +5. Limitation of Remedies and Liability. To the maximum extent permitted by applicable law, Your exclusive remedy under this EULA is to return any defective media within thirty (30) days of delivery along with a copy of Your payment receipt and Red Hat, at its option, will replace it or refund the money paid by You for the media. To the maximum extent permitted under applicable law, neither Red Hat nor any Red Hat authorized distributor will be liable to You for any incidental or consequential damages, including lost profits or lost savings arising out of the use or inability to use the Ansible Tower Software or any component, even if Red Hat or the authorized distributor has been advised of the possibility of such damages. In no event shall Red Hat's liability or an authorized distributor’s liability exceed the amount that You paid to Red Hat for the Ansible Tower Software during the twelve months preceding the first event giving rise to liability. + +6. Export Control. In accordance with the laws of the United States and other countries, You represent and warrant that You: (a) understand that the Ansible Tower Software and its components may be subject to export controls under the U.S. Commerce Department’s Export Administration Regulations (“EAR”); (b) are not located in any country listed in Country Group E:1 in Supplement No. 
1 to part 740 of the EAR; (c) will not export, re-export, or transfer the Ansible Tower Software to any prohibited destination or to any end user who has been prohibited from participating in US export transactions by any federal agency of the US government; (d) will not use or transfer the Ansible Tower Software for use in connection with the design, development or production of nuclear, chemical or biological weapons, or rocket systems, space launch vehicles, or sounding rockets or unmanned air vehicle systems; (e) understand and agree that if you are in the United States and you export or transfer the Ansible Tower Software to eligible end users, you will, to the extent required by EAR Section 740.17 obtain a license for such export or transfer and will submit semi-annual reports to the Commerce Department’s Bureau of Industry and Security, which include the name and address (including country) of each transferee; and (f) understand that countries including the United States may restrict the import, use, or export of encryption products (which may include the Ansible Tower Software) and agree that you shall be solely responsible for compliance with any such import, use, or export restrictions. + +7. General. If any provision of this EULA is held to be unenforceable, that shall not affect the enforceability of the remaining provisions. This agreement shall be governed by the laws of the State of New York and of the United States, without regard to any conflict of laws provisions. The rights and obligations of the parties to this EULA shall not be governed by the United Nations Convention on the International Sale of Goods. + +Copyright © 2015 Red Hat, Inc. All rights reserved. "Red Hat" and “Ansible Tower” are registered trademarks of Red Hat, Inc. All other trademarks are the property of their respective owners. 
diff --git a/Makefile b/Makefile index 63b5ac259e..f748c7992b 100644 --- a/Makefile +++ b/Makefile @@ -10,9 +10,8 @@ DEPS_SCRIPT ?= packaging/bundle/deps.py GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD) GCLOUD_AUTH ?= $(shell gcloud auth print-access-token) -COMPOSE_TAG ?= devel # NOTE: This defaults the container image version to the branch that's active -# COMPOSE_TAG ?= $(GIT_BRANCH) +COMPOSE_TAG ?= $(GIT_BRANCH) COMPOSE_HOST ?= $(shell hostname) @@ -176,7 +175,6 @@ UI_RELEASE_FLAG_FILE = awx/ui/.release_built .DEFAULT_GOAL := build .PHONY: clean clean-tmp clean-venv rebase push requirements requirements_dev \ - requirements_jenkins \ develop refresh adduser migrate dbchange dbshell runserver celeryd \ receiver test test_unit test_coverage coverage_html test_jenkins dev_build \ release_build release_clean sdist rpmtar mock-rpm mock-srpm rpm-sign \ @@ -286,8 +284,10 @@ requirements_ansible: virtualenv_ansible if [ "$(VENV_BASE)" ]; then \ . $(VENV_BASE)/ansible/bin/activate; \ $(VENV_BASE)/ansible/bin/pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements_ansible.txt ;\ + $(VENV_BASE)/ansible/bin/pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt; \ else \ pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements_ansible.txt ; \ + pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt; \ fi # Install third-party requirements needed for Tower's environment. @@ -295,29 +295,24 @@ requirements_tower: virtualenv_tower if [ "$(VENV_BASE)" ]; then \ . 
$(VENV_BASE)/tower/bin/activate; \ $(VENV_BASE)/tower/bin/pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements.txt ;\ + $(VENV_BASE)/tower/bin/pip uninstall --yes -r requirements/requirements_tower_uninstall.txt; \ else \ pip install --ignore-installed --no-binary $(SRC_ONLY_PKGS) -r requirements/requirements.txt ; \ + pip uninstall --yes -r requirements/requirements_tower_uninstall.txt; \ fi requirements_tower_dev: if [ "$(VENV_BASE)" ]; then \ . $(VENV_BASE)/tower/bin/activate; \ $(VENV_BASE)/tower/bin/pip install -r requirements/requirements_dev.txt; \ - fi - -# Install third-party requirements needed for running unittests in jenkins -requirements_jenkins: - if [ "$(VENV_BASE)" ]; then \ - . $(VENV_BASE)/tower/bin/activate && pip install -Ir requirements/requirements_jenkins.txt; \ - else \ - pip install -Ir requirements/requirements_jenkins.txt; \ + $(VENV_BASE)/tower/bin/pip uninstall --yes -r requirements/requirements_dev_uninstall.txt; \ fi requirements: requirements_ansible requirements_tower requirements_dev: requirements requirements_tower_dev -requirements_test: requirements requirements_jenkins +requirements_test: requirements # "Install" ansible-tower package in development mode. develop: @@ -407,7 +402,7 @@ uwsgi: collectstatic @if [ "$(VENV_BASE)" ]; then \ . $(VENV_BASE)/tower/bin/activate; \ fi; \ - uwsgi -b 32768 --socket :8050 --module=awx.wsgi:application --home=/venv/tower --chdir=/tower_devel/ --vacuum --processes=5 --harakiri=60 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket + uwsgi -b 32768 --socket :8050 --module=awx.wsgi:application --home=/venv/tower --chdir=/tower_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --master-fifo=/var/lib/awx/awxfifo --lazy-apps daphne: @if [ "$(VENV_BASE)" ]; then \ @@ -433,7 +428,7 @@ celeryd: @if [ "$(VENV_BASE)" ]; then \ . 
$(VENV_BASE)/tower/bin/activate; \ fi; \ - $(PYTHON) manage.py celeryd -l DEBUG -B --autoreload --autoscale=20,3 --schedule=$(CELERY_SCHEDULE_FILE) -Q projects,jobs,default,scheduler,broadcast_all,$(COMPOSE_HOST) + $(PYTHON) manage.py celeryd -l DEBUG -B --autoreload --autoscale=20,3 --schedule=$(CELERY_SCHEDULE_FILE) -Q projects,jobs,default,scheduler,broadcast_all,$(COMPOSE_HOST) -n celery@$(COMPOSE_HOST) #$(PYTHON) manage.py celery multi show projects jobs default -l DEBUG -Q:projects projects -Q:jobs jobs -Q:default default -c:projects 1 -c:jobs 3 -c:default 3 -Ofair -B --schedule=$(CELERY_SCHEDULE_FILE) # Run to start the zeromq callback receiver @@ -511,6 +506,14 @@ test_tox: # Alias existing make target so old versions run against Jenkins the same way test_jenkins : test_coverage +# Make fake data +DATA_GEN_PRESET = "" +bulk_data: + @if [ "$(VENV_BASE)" ]; then \ + . $(VENV_BASE)/tower/bin/activate; \ + fi; \ + $(PYTHON) tools/data_generators/rbac_dummy_data_generator.py --preset=$(DATA_GEN_PRESET) + # l10n TASKS # -------------------------------------- @@ -559,10 +562,7 @@ messages: # generate l10n .json .mo languages: $(UI_DEPS_FLAG_FILE) check-po $(NPM_BIN) --prefix awx/ui run languages - @if [ "$(VENV_BASE)" ]; then \ - . $(VENV_BASE)/tower/bin/activate; \ - fi; \ - $(PYTHON) manage.py compilemessages + $(PYTHON) tools/scripts/compilemessages.py # End l10n TASKS # -------------------------------------- @@ -572,16 +572,16 @@ languages: $(UI_DEPS_FLAG_FILE) check-po ui-deps: $(UI_DEPS_FLAG_FILE) -$(UI_DEPS_FLAG_FILE): awx/ui/package.json +$(UI_DEPS_FLAG_FILE): $(NPM_BIN) --unsafe-perm --prefix awx/ui install awx/ui touch $(UI_DEPS_FLAG_FILE) ui-docker-machine: $(UI_DEPS_FLAG_FILE) - $(NPM_BIN) --prefix awx/ui run build-docker-machine -- $(MAKEFLAGS) + $(NPM_BIN) --prefix awx/ui run ui-docker-machine -- $(MAKEFLAGS) # Native docker. Builds UI and raises BrowserSync & filesystem polling. 
ui-docker: $(UI_DEPS_FLAG_FILE) - $(NPM_BIN) --prefix awx/ui run build-docker-cid -- $(MAKEFLAGS) + $(NPM_BIN) --prefix awx/ui run ui-docker -- $(MAKEFLAGS) # Builds UI with development UI without raising browser-sync or filesystem polling. ui-devel: $(UI_DEPS_FLAG_FILE) @@ -589,8 +589,7 @@ ui-devel: $(UI_DEPS_FLAG_FILE) ui-release: $(UI_RELEASE_FLAG_FILE) -# todo: include languages target when .po deliverables are added to source control -$(UI_RELEASE_FLAG_FILE): $(UI_DEPS_FLAG_FILE) +$(UI_RELEASE_FLAG_FILE): languages $(UI_DEPS_FLAG_FILE) $(NPM_BIN) --prefix awx/ui run build-release touch $(UI_RELEASE_FLAG_FILE) @@ -690,7 +689,7 @@ rpm-build: rpm-build/$(SDIST_TAR_FILE): rpm-build dist/$(SDIST_TAR_FILE) cp packaging/rpm/$(NAME).spec rpm-build/ - cp packaging/rpm/$(NAME).te rpm-build/ + cp packaging/rpm/tower.te rpm-build/ cp packaging/rpm/$(NAME).sysconfig rpm-build/ cp packaging/remove_tower_source.py rpm-build/ cp packaging/bytecompile.sh rpm-build/ diff --git a/awx/api/filters.py b/awx/api/filters.py index d861303f1e..fbbbba2d05 100644 --- a/awx/api/filters.py +++ b/awx/api/filters.py @@ -19,6 +19,7 @@ from rest_framework.filters import BaseFilterBackend # Ansible Tower from awx.main.utils import get_type_for_model, to_python_boolean +from awx.main.models.rbac import RoleAncestorEntry class MongoFilterBackend(BaseFilterBackend): @@ -76,7 +77,7 @@ class FieldLookupBackend(BaseFilterBackend): SUPPORTED_LOOKUPS = ('exact', 'iexact', 'contains', 'icontains', 'startswith', 'istartswith', 'endswith', 'iendswith', 'regex', 'iregex', 'gt', 'gte', 'lt', 'lte', 'in', - 'isnull') + 'isnull', 'search') def get_field_from_lookup(self, model, lookup): field = None @@ -147,6 +148,15 @@ class FieldLookupBackend(BaseFilterBackend): re.compile(value) except re.error as e: raise ValueError(e.args[0]) + elif new_lookup.endswith('__search'): + related_model = getattr(field, 'related_model', None) + if not related_model: + raise ValueError('%s is not searchable' % new_lookup[:-8]) 
+ new_lookups = [] + for rm_field in related_model._meta.fields: + if rm_field.name in ('username', 'first_name', 'last_name', 'email', 'name', 'description'): + new_lookups.append('{}__{}__icontains'.format(new_lookup[:-8], rm_field.name)) + return value, new_lookups else: value = self.value_to_python_for_field(field, value) return value, new_lookup @@ -158,6 +168,8 @@ class FieldLookupBackend(BaseFilterBackend): and_filters = [] or_filters = [] chain_filters = [] + role_filters = [] + search_filters = [] for key, values in request.query_params.lists(): if key in self.RESERVED_NAMES: continue @@ -174,6 +186,21 @@ class FieldLookupBackend(BaseFilterBackend): key = key[:-5] q_int = True + # RBAC filtering + if key == 'role_level': + role_filters.append(values[0]) + continue + + # Search across related objects. + if key.endswith('__search'): + for value in values: + for search_term in force_text(value).replace(',', ' ').split(): + search_value, new_keys = self.value_to_python(queryset.model, key, search_term) + assert isinstance(new_keys, list) + for new_key in new_keys: + search_filters.append((new_key, search_value)) + continue + # Custom chain__ and or__ filters, mutually exclusive (both can # precede not__). q_chain = False @@ -204,13 +231,21 @@ class FieldLookupBackend(BaseFilterBackend): and_filters.append((q_not, new_key, value)) # Now build Q objects for database query filter. 
- if and_filters or or_filters or chain_filters: + if and_filters or or_filters or chain_filters or role_filters or search_filters: args = [] for n, k, v in and_filters: if n: args.append(~Q(**{k:v})) else: args.append(Q(**{k:v})) + for role_name in role_filters: + args.append( + Q(pk__in=RoleAncestorEntry.objects.filter( + ancestor__in=request.user.roles.all(), + content_type_id=ContentType.objects.get_for_model(queryset.model).id, + role_field=role_name + ).values_list('object_id').distinct()) + ) if or_filters: q = Q() for n,k,v in or_filters: @@ -219,6 +254,11 @@ class FieldLookupBackend(BaseFilterBackend): else: q |= Q(**{k:v}) args.append(q) + if search_filters: + q = Q() + for k,v in search_filters: + q |= Q(**{k:v}) + args.append(q) for n,k,v in chain_filters: if n: q = ~Q(**{k:v}) @@ -227,7 +267,7 @@ class FieldLookupBackend(BaseFilterBackend): queryset = queryset.filter(q) queryset = queryset.filter(*args).distinct() return queryset - except (FieldError, FieldDoesNotExist, ValueError) as e: + except (FieldError, FieldDoesNotExist, ValueError, TypeError) as e: raise ParseError(e.args[0]) except ValidationError as e: raise ParseError(e.messages) diff --git a/awx/api/generics.py b/awx/api/generics.py index 1062135a28..93a8d3d987 100644 --- a/awx/api/generics.py +++ b/awx/api/generics.py @@ -156,6 +156,7 @@ class APIView(views.APIView): 'new_in_240': getattr(self, 'new_in_240', False), 'new_in_300': getattr(self, 'new_in_300', False), 'new_in_310': getattr(self, 'new_in_310', False), + 'deprecated': getattr(self, 'deprecated', False), } def get_description(self, html=False): @@ -267,10 +268,25 @@ class ListAPIView(generics.ListAPIView, GenericAPIView): fields = [] for field in self.model._meta.fields: if field.name in ('username', 'first_name', 'last_name', 'email', - 'name', 'description', 'email'): + 'name', 'description'): fields.append(field.name) return fields + @property + def related_search_fields(self): + fields = [] + for field in 
self.model._meta.fields: + if field.name.endswith('_role'): + continue + if getattr(field, 'related_model', None): + fields.append('{}__search'.format(field.name)) + for rel in self.model._meta.related_objects: + name = rel.get_accessor_name() + if name.endswith('_set'): + continue + fields.append('{}__search'.format(name)) + return fields + class ListCreateAPIView(ListAPIView, generics.ListCreateAPIView): # Base class for a list view that allows creating new objects. @@ -543,14 +559,12 @@ class DestroyAPIView(GenericAPIView, generics.DestroyAPIView): pass -class ResourceAccessList(ListAPIView): +class ResourceAccessList(ParentMixin, ListAPIView): serializer_class = ResourceAccessListElementSerializer def get_queryset(self): - self.object_id = self.kwargs['pk'] - resource_model = getattr(self, 'resource_model') - obj = get_object_or_404(resource_model, pk=self.object_id) + obj = self.get_parent_object() content_type = ContentType.objects.get_for_model(obj) roles = set(Role.objects.filter(content_type=content_type, object_id=obj.id)) diff --git a/awx/api/management/commands/uses_mongo.py b/awx/api/management/commands/uses_mongo.py deleted file mode 100644 index 6f77ee47fa..0000000000 --- a/awx/api/management/commands/uses_mongo.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (c) 2015 Ansible, Inc. -# All Rights Reserved - -import sys - -from optparse import make_option -from django.core.management.base import BaseCommand -from awx.main.ha import is_ha_environment -from awx.main.task_engine import TaskEnhancer - - -class Command(BaseCommand): - """Return a exit status of 0 if MongoDB should be active, and an - exit status of 1 otherwise. - - This script is intended to be used by bash and init scripts to - conditionally start MongoDB, so its focus is on being bash-friendly. 
- """ - - def __init__(self): - super(Command, self).__init__() - BaseCommand.option_list += (make_option('--local', - dest='local', - default=False, - action="store_true", - help="Only check if mongo should be running locally"),) - - def handle(self, *args, **kwargs): - # Get the license data. - license_data = TaskEnhancer().validate_enhancements() - - # Does the license have features, at all? - # If there is no license yet, then all features are clearly off. - if 'features' not in license_data: - print('No license available.') - sys.exit(2) - - # Does the license contain the system tracking feature? - # If and only if it does, MongoDB should run. - system_tracking = license_data['features']['system_tracking'] - - # Okay, do we need MongoDB to be turned on? - # This is a silly variable assignment right now, but I expect the - # rules here will grow more complicated over time. - uses_mongo = system_tracking # noqa - - if is_ha_environment() and kwargs['local'] and uses_mongo: - print("HA Configuration detected. Database should be remote") - uses_mongo = False - - # If we do not need Mongo, return a non-zero exit status. - if not uses_mongo: - print('MongoDB NOT required') - sys.exit(1) - - # We do need Mongo, return zero. 
- print('MongoDB required') - sys.exit(0) diff --git a/awx/api/metadata.py b/awx/api/metadata.py index da0aef79f3..21444acb75 100644 --- a/awx/api/metadata.py +++ b/awx/api/metadata.py @@ -13,7 +13,7 @@ from django.utils.translation import ugettext_lazy as _ from rest_framework import exceptions from rest_framework import metadata from rest_framework import serializers -from rest_framework.relations import RelatedField +from rest_framework.relations import RelatedField, ManyRelatedField from rest_framework.request import clone_request # Ansible Tower @@ -75,7 +75,7 @@ class Metadata(metadata.SimpleMetadata): elif getattr(field, 'fields', None): field_info['children'] = self.get_serializer_info(field) - if hasattr(field, 'choices') and not isinstance(field, RelatedField): + if not isinstance(field, (RelatedField, ManyRelatedField)) and hasattr(field, 'choices'): field_info['choices'] = [(choice_value, choice_name) for choice_value, choice_name in field.choices.items()] # Indicate if a field is write-only. @@ -183,6 +183,10 @@ class Metadata(metadata.SimpleMetadata): if getattr(view, 'search_fields', None): metadata['search_fields'] = view.search_fields + # Add related search fields if available from the view. + if getattr(view, 'related_search_fields', None): + metadata['related_search_fields'] = view.related_search_fields + return metadata diff --git a/awx/api/pagination.py b/awx/api/pagination.py index ee17aee0e1..9a416e9995 100644 --- a/awx/api/pagination.py +++ b/awx/api/pagination.py @@ -2,6 +2,7 @@ # All Rights Reserved. 
# Django REST Framework +from django.conf import settings from rest_framework import pagination from rest_framework.utils.urls import replace_query_param @@ -9,11 +10,13 @@ from rest_framework.utils.urls import replace_query_param class Pagination(pagination.PageNumberPagination): page_size_query_param = 'page_size' + max_page_size = settings.MAX_PAGE_SIZE def get_next_link(self): if not self.page.has_next(): return None url = self.request and self.request.get_full_path() or '' + url = url.encode('utf-8') page_number = self.page.next_page_number() return replace_query_param(url, self.page_query_param, page_number) @@ -21,5 +24,6 @@ class Pagination(pagination.PageNumberPagination): if not self.page.has_previous(): return None url = self.request and self.request.get_full_path() or '' + url = url.encode('utf-8') page_number = self.page.previous_page_number() return replace_query_param(url, self.page_query_param, page_number) diff --git a/awx/api/permissions.py b/awx/api/permissions.py index a655360dc8..8ec26a2cc8 100644 --- a/awx/api/permissions.py +++ b/awx/api/permissions.py @@ -4,9 +4,6 @@ # Python import logging -# Django -from django.http import Http404 - # Django REST Framework from rest_framework.exceptions import MethodNotAllowed, PermissionDenied from rest_framework import permissions @@ -19,7 +16,7 @@ from awx.main.utils import get_object_or_400 logger = logging.getLogger('awx.api.permissions') __all__ = ['ModelAccessPermission', 'JobTemplateCallbackPermission', - 'TaskPermission', 'ProjectUpdatePermission', 'UserPermission'] + 'TaskPermission', 'ProjectUpdatePermission', 'UserPermission',] class ModelAccessPermission(permissions.BasePermission): @@ -96,13 +93,6 @@ class ModelAccessPermission(permissions.BasePermission): method based on the request method. ''' - # Check that obj (if given) is active, otherwise raise a 404. 
- active = getattr(obj, 'active', getattr(obj, 'is_active', True)) - if callable(active): - active = active() - if not active: - raise Http404() - # Don't allow anonymous users. 401, not 403, hence no raised exception. if not request.user or request.user.is_anonymous(): return False @@ -216,3 +206,5 @@ class UserPermission(ModelAccessPermission): elif request.user.is_superuser: return True raise PermissionDenied() + + diff --git a/awx/api/renderers.py b/awx/api/renderers.py index 9f3d17470e..fa039a2226 100644 --- a/awx/api/renderers.py +++ b/awx/api/renderers.py @@ -80,3 +80,8 @@ class AnsiTextRenderer(PlainTextRenderer): media_type = 'text/plain' format = 'ansi' + + +class AnsiDownloadRenderer(PlainTextRenderer): + + format = "ansi_download" diff --git a/awx/api/serializers.py b/awx/api/serializers.py index 9210f6abe9..2845328bad 100644 --- a/awx/api/serializers.py +++ b/awx/api/serializers.py @@ -76,13 +76,15 @@ SUMMARIZABLE_FK_FIELDS = { 'total_groups', 'groups_with_active_failures', 'has_inventory_sources'), - 'project': DEFAULT_SUMMARY_FIELDS + ('status',), + 'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'), 'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',), 'credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud'), 'cloud_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud'), 'network_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'net'), - 'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',), + 'job': DEFAULT_SUMMARY_FIELDS + ('status', 'failed', 'elapsed'), 'job_template': DEFAULT_SUMMARY_FIELDS, + 'workflow_job_template': DEFAULT_SUMMARY_FIELDS, + 'workflow_job': DEFAULT_SUMMARY_FIELDS, 'schedule': DEFAULT_SUMMARY_FIELDS + ('next_run',), 'unified_job_template': DEFAULT_SUMMARY_FIELDS + ('unified_job_type',), 'last_job': DEFAULT_SUMMARY_FIELDS + ('finished', 'status', 'failed', 'license_error'), @@ -250,6 +252,8 @@ class BaseSerializer(serializers.ModelSerializer): 'project_update': _('SCM Update'), 'inventory_update': 
_('Inventory Sync'), 'system_job': _('Management Job'), + 'workflow_job': _('Workflow Job'), + 'workflow_job_template': _('Workflow Template'), } choices = [] for t in self.get_types(): @@ -518,7 +522,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer): class Meta: model = UnifiedJobTemplate - fields = ('*', 'last_job_run', 'last_job_failed', 'has_schedules', + fields = ('*', 'last_job_run', 'last_job_failed', 'next_job_run', 'status') def get_related(self, obj): @@ -607,7 +611,11 @@ class UnifiedJobSerializer(BaseSerializer): summary_fields = super(UnifiedJobSerializer, self).get_summary_fields(obj) if obj.spawned_by_workflow: summary_fields['source_workflow_job'] = {} - summary_obj = obj.unified_job_node.workflow_job + try: + summary_obj = obj.unified_job_node.workflow_job + except UnifiedJob.unified_job_node.RelatedObjectDoesNotExist: + return summary_fields + for field in SUMMARIZABLE_FK_FIELDS['job']: val = getattr(summary_obj, field, None) if val is not None: @@ -666,7 +674,7 @@ class UnifiedJobListSerializer(UnifiedJobSerializer): def get_types(self): if type(self) is UnifiedJobListSerializer: - return ['project_update', 'inventory_update', 'job', 'ad_hoc_command', 'system_job'] + return ['project_update', 'inventory_update', 'job', 'ad_hoc_command', 'system_job', 'workflow_job'] else: return super(UnifiedJobListSerializer, self).get_types() @@ -1581,8 +1589,7 @@ class ResourceAccessListElementSerializer(UserSerializer): the resource. 
''' ret = super(ResourceAccessListElementSerializer, self).to_representation(user) - object_id = self.context['view'].object_id - obj = self.context['view'].resource_model.objects.get(pk=object_id) + obj = self.context['view'].get_parent_object() if self.context['view'].request is not None: requesting_user = self.context['view'].request.user else: @@ -1615,7 +1622,8 @@ class ResourceAccessListElementSerializer(UserSerializer): 'name': role.name, 'description': role.description, 'team_id': team_role.object_id, - 'team_name': team_role.content_object.name + 'team_name': team_role.content_object.name, + 'team_organization_name': team_role.content_object.organization.name, } if role.content_type is not None: role_dict['resource_name'] = role.content_object.name @@ -1757,9 +1765,9 @@ class CredentialSerializerCreate(CredentialSerializer): 'do not give either user or organization. Only valid for creation.')) organization = serializers.PrimaryKeyRelatedField( queryset=Organization.objects.all(), - required=False, default=None, write_only=True, allow_null=True, - help_text=_('Write-only field used to add organization to owner role. If provided, ' - 'do not give either team or team. Only valid for creation.')) + required=False, default=None, allow_null=True, + help_text=_('Inherit permissions from organization roles. 
If provided on creation, ' + 'do not give either user or team.')) class Meta: model = Credential @@ -1985,8 +1993,6 @@ class JobSerializer(UnifiedJobSerializer, JobOptionsSerializer): res = super(JobSerializer, self).get_related(obj) res.update(dict( job_events = reverse('api:job_job_events_list', args=(obj.pk,)), - job_plays = reverse('api:job_job_plays_list', args=(obj.pk,)), - job_tasks = reverse('api:job_job_tasks_list', args=(obj.pk,)), job_host_summaries = reverse('api:job_job_host_summaries_list', args=(obj.pk,)), activity_stream = reverse('api:job_activity_stream_list', args=(obj.pk,)), notifications = reverse('api:job_notifications_list', args=(obj.pk,)), @@ -2365,7 +2371,7 @@ class WorkflowJobTemplateNodeSerializer(WorkflowNodeBaseSerializer): if view and view.request: request_method = view.request.method if request_method in ['PATCH']: - obj = view.get_object() + obj = self.instance char_prompts = copy.copy(obj.char_prompts) char_prompts.update(self.extract_char_prompts(data)) else: @@ -2415,7 +2421,7 @@ class WorkflowJobNodeSerializer(WorkflowNodeBaseSerializer): res['failure_nodes'] = reverse('api:workflow_job_node_failure_nodes_list', args=(obj.pk,)) res['always_nodes'] = reverse('api:workflow_job_node_always_nodes_list', args=(obj.pk,)) if obj.job: - res['job'] = reverse('api:job_detail', args=(obj.job.pk,)) + res['job'] = obj.job.get_absolute_url() if obj.workflow_job: res['workflow_job'] = reverse('api:workflow_job_detail', args=(obj.workflow_job.pk,)) return res @@ -2497,8 +2503,8 @@ class JobEventSerializer(BaseSerializer): model = JobEvent fields = ('*', '-name', '-description', 'job', 'event', 'counter', 'event_display', 'event_data', 'event_level', 'failed', - 'changed', 'uuid', 'host', 'host_name', 'parent', 'playbook', - 'play', 'task', 'role', 'stdout', 'start_line', 'end_line', + 'changed', 'uuid', 'parent_uuid', 'host', 'host_name', 'parent', + 'playbook', 'play', 'task', 'role', 'stdout', 'start_line', 'end_line', 'verbosity') def 
get_related(self, obj): @@ -2704,18 +2710,15 @@ class WorkflowJobLaunchSerializer(BaseSerializer): variables_needed_to_start = serializers.ReadOnlyField() survey_enabled = serializers.SerializerMethodField() extra_vars = VerbatimField(required=False, write_only=True) - warnings = serializers.SerializerMethodField() workflow_job_template_data = serializers.SerializerMethodField() class Meta: model = WorkflowJobTemplate - fields = ('can_start_without_user_input', 'extra_vars', 'warnings', + fields = ('can_start_without_user_input', 'extra_vars', 'survey_enabled', 'variables_needed_to_start', + 'node_templates_missing', 'node_prompts_rejected', 'workflow_job_template_data') - def get_warnings(self, obj): - return obj.get_warnings() - def get_survey_enabled(self, obj): if obj: return obj.survey_enabled and 'spec' in obj.survey_spec @@ -2999,10 +3002,14 @@ class ActivityStreamSerializer(BaseSerializer): for fk, __ in SUMMARIZABLE_FK_FIELDS.items(): if not hasattr(obj, fk): continue - allm2m = getattr(obj, fk).distinct() + allm2m = getattr(obj, fk).all() if getattr(obj, fk).exists(): rel[fk] = [] + id_list = [] for thisItem in allm2m: + if getattr(thisItem, 'id', None) in id_list: + continue + id_list.append(getattr(thisItem, 'id', None)) if fk == 'custom_inventory_script': rel[fk].append(reverse('api:inventory_script_detail', args=(thisItem.id,))) else: @@ -3018,7 +3025,7 @@ class ActivityStreamSerializer(BaseSerializer): try: if not hasattr(obj, fk): continue - allm2m = getattr(obj, fk).distinct() + allm2m = getattr(obj, fk).all() if getattr(obj, fk).exists(): summary_fields[fk] = [] for thisItem in allm2m: @@ -3047,6 +3054,9 @@ class ActivityStreamSerializer(BaseSerializer): thisItemDict[field] = fval if fk == 'group': thisItemDict['inventory_id'] = getattr(thisItem, 'inventory_id', None) + if thisItemDict.get('id', None): + if thisItemDict.get('id', None) in [obj_dict.get('id', None) for obj_dict in summary_fields[fk]]: + continue 
summary_fields[fk].append(thisItemDict) except ObjectDoesNotExist: pass diff --git a/awx/api/templates/api/_list_common.md b/awx/api/templates/api/_list_common.md index e355421de3..706ae732a5 100644 --- a/awx/api/templates/api/_list_common.md +++ b/awx/api/templates/api/_list_common.md @@ -56,6 +56,10 @@ within all designated text fields of a model. _Added in AWX 1.4_ +(_Added in Ansible Tower 3.1.0_) Search across related fields: + + ?related__search=findme + ## Filtering Any additional query string parameters may be used to filter the list of @@ -132,3 +136,8 @@ values. Lists (for the `in` lookup) may be specified as a comma-separated list of values. + +(_Added in Ansible Tower 3.1.0_) Filtering based on the requesting user's +level of access by query string parameter. + +* `role_level`: Level of role to filter on, such as `admin_role` diff --git a/awx/api/templates/api/_new_in_awx.md b/awx/api/templates/api/_new_in_awx.md index 8960aa808c..a113b9d5fa 100644 --- a/awx/api/templates/api/_new_in_awx.md +++ b/awx/api/templates/api/_new_in_awx.md @@ -3,10 +3,11 @@ {% if new_in_14 %}> _Added in AWX 1.4_{% endif %} {% if new_in_145 %}> _Added in Ansible Tower 1.4.5_{% endif %} {% if new_in_148 %}> _Added in Ansible Tower 1.4.8_{% endif %} -{% if new_in_200 %}> _New in Ansible Tower 2.0.0_{% endif %} -{% if new_in_220 %}> _New in Ansible Tower 2.2.0_{% endif %} -{% if new_in_230 %}> _New in Ansible Tower 2.3.0_{% endif %} -{% if new_in_240 %}> _New in Ansible Tower 2.4.0_{% endif %} -{% if new_in_300 %}> _New in Ansible Tower 3.0.0_{% endif %} +{% if new_in_200 %}> _Added in Ansible Tower 2.0.0_{% endif %} +{% if new_in_220 %}> _Added in Ansible Tower 2.2.0_{% endif %} +{% if new_in_230 %}> _Added in Ansible Tower 2.3.0_{% endif %} +{% if new_in_240 %}> _Added in Ansible Tower 2.4.0_{% endif %} +{% if new_in_300 %}> _Added in Ansible Tower 3.0.0_{% endif %} {% if new_in_310 %}> _New in Ansible Tower 3.1.0_{% endif %} +{% if deprecated %}> _This resource has been 
deprecated and will be removed in a future release_{% endif %} {% endif %} diff --git a/awx/api/templates/api/system_job_template_launch.md b/awx/api/templates/api/system_job_template_launch.md index 4543014005..a50e3fdae3 100644 --- a/awx/api/templates/api/system_job_template_launch.md +++ b/awx/api/templates/api/system_job_template_launch.md @@ -2,8 +2,9 @@ Launch a Job Template: Make a POST request to this resource to launch the system job template. -An extra parameter `extra_vars` is suggested in order to pass extra parameters -to the system job task. +Variables specified inside the parameter `extra_vars` are passed to the +system job task as command line parameters. These tasks can be run manually +on the host system via the `tower-manage` command. For example on `cleanup_jobs` and `cleanup_activitystream`: @@ -13,9 +14,17 @@ Which will act on data older than 30 days. For `cleanup_facts`: -`{"older_than": "4w", `granularity`: "3d"}` +`{"older_than": "4w", "granularity": "3d"}` Which will reduce the granularity of scan data to one scan per 3 days when the data is older than 4w. +Each individual system job task has its own default values, which are +applicable either when running it from the command line or launching its +system job template with empty `extra_vars`. + + - Defaults for `cleanup_activitystream`: days=90 + - Defaults for `cleanup_facts`: older_than="30d", granularity="1w" + - Defaults for `cleanup_jobs`: days=90 + If successful, the response status code will be 202. If the job cannot be launched, a 405 status code will be returned. diff --git a/awx/api/templates/api/unified_job_stdout.md b/awx/api/templates/api/unified_job_stdout.md index 63f7acea8e..d86c6e2378 100644 --- a/awx/api/templates/api/unified_job_stdout.md +++ b/awx/api/templates/api/unified_job_stdout.md @@ -13,6 +13,7 @@ Use the `format` query string parameter to specify the output format.
* Plain Text with ANSI color codes: `?format=ansi` * JSON structure: `?format=json` * Downloaded Plain Text: `?format=txt_download` +* Downloaded Plain Text with ANSI color codes: `?format=ansi_download` (_New in Ansible Tower 2.0.0_) When using the Browsable API, HTML and JSON formats, the `start_line` and `end_line` query string parameters can be used @@ -21,7 +22,8 @@ to specify a range of line numbers to retrieve. Use `dark=1` or `dark=0` as a query string parameter to force or disable a dark background. -+Files over {{ settings.STDOUT_MAX_BYTES_DISPLAY|filesizeformat }} (configurable) will not display in the browser. Use the `txt_download` -+format to download the file directly to view it. +Files over {{ settings.STDOUT_MAX_BYTES_DISPLAY|filesizeformat }} (configurable) +will not display in the browser. Use the `txt_download` or `ansi_download` +formats to download the file directly to view it. {% include "api/_new_in_awx.md" %} diff --git a/awx/api/templates/api/workflow_job_cancel.md b/awx/api/templates/api/workflow_job_cancel.md new file mode 100644 index 0000000000..bcdd347d36 --- /dev/null +++ b/awx/api/templates/api/workflow_job_cancel.md @@ -0,0 +1,12 @@ +# Cancel Workflow Job + +Make a GET request to this resource to determine if the workflow job can be +canceled. The response will include the following field: + +* `can_cancel`: Indicates whether this workflow job is in a state that can + be canceled (boolean, read-only) + +Make a POST request to this endpoint to submit a request to cancel a pending +or running workflow job. The response status code will be 202 if the +request to cancel was successfully submitted, or 405 if the workflow job +cannot be canceled. 
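The cancel semantics documented in `workflow_job_cancel.md` above can be sketched as a small helper. This is a hedged illustration only: the 202/405 status codes come from the documentation, while the concrete set of cancelable state names is an assumption about Tower's usual job lifecycle, not part of the API contract.

```python
# Sketch of the documented cancel behavior: GET reports `can_cancel`,
# and POST returns 202 when the workflow job is still cancelable, 405 otherwise.
# The state names below are assumptions, not taken from the API docs.
CANCELABLE_STATES = {'new', 'pending', 'waiting', 'running'}


def can_cancel(job_status):
    """Mirror the read-only `can_cancel` field for a given job status."""
    return job_status in CANCELABLE_STATES


def cancel_status_code(job_status):
    """HTTP status the cancel endpoint is documented to return on POST."""
    return 202 if can_cancel(job_status) else 405


print(cancel_status_code('running'))     # a running job can be canceled
print(cancel_status_code('successful'))  # a finished job cannot
```

A real client would first GET the `cancel` endpoint, inspect `can_cancel`, and only then POST, rather than guessing from the job status string.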
diff --git a/awx/api/templates/api/workflow_job_relaunch.md b/awx/api/templates/api/workflow_job_relaunch.md new file mode 100644 index 0000000000..f9a9b2c31c --- /dev/null +++ b/awx/api/templates/api/workflow_job_relaunch.md @@ -0,0 +1,5 @@ +Relaunch a workflow job: + +Make a POST request to this endpoint to launch a workflow job identical to the parent workflow job. This will spawn jobs, project updates, or inventory updates based on the unified job templates referenced in the workflow nodes in the workflow job. No POST data is accepted for this action. + +If successful, the response status code will be 201 and serialized data of the new workflow job will be returned. \ No newline at end of file diff --git a/awx/api/templates/api/workflow_job_template_copy.md b/awx/api/templates/api/workflow_job_template_copy.md new file mode 100644 index 0000000000..f28d6466ba --- /dev/null +++ b/awx/api/templates/api/workflow_job_template_copy.md @@ -0,0 +1,34 @@ +Copy a Workflow Job Template: + +Make a GET request to this resource to determine if the current user has +permission to copy the workflow_job_template and whether any linked +templates or prompted fields will be ignored due to permissions problems. 
+The response will include the following fields: + +* `can_copy`: Flag indicating whether the active user has permission to make + a copy of this workflow_job_template, provides same content as the + workflow_job_template detail view summary_fields.user_capabilities.copy + (boolean, read-only) +* `can_copy_without_user_input`: Flag indicating if the user should be + prompted for confirmation before the copy is executed (boolean, read-only) +* `templates_unable_to_copy`: List of node ids of nodes that have a related + job template, project, or inventory that the current user lacks permission + to use and will be missing in workflow nodes of the copy (array, read-only) +* `inventories_unable_to_copy`: List of node ids of nodes that have a related + prompted inventory that the current user lacks permission + to use and will be missing in workflow nodes of the copy (array, read-only) +* `credentials_unable_to_copy`: List of node ids of nodes that have a related + prompted credential that the current user lacks permission + to use and will be missing in workflow nodes of the copy (array, read-only) + +Make a POST request to this endpoint to save a copy of this +workflow_job_template. No POST data is accepted for this action. + +If successful, the response status code will be 201. The response body will +contain serialized data about the new workflow_job_template, which will be +similar to the original workflow_job_template, but with an additional `@` +and a timestamp in the name. + +All workflow nodes and connections in the original will also exist in the +copy. The nodes will be missing related resources if the user did not have +access to use them. diff --git a/awx/api/templates/api/workflow_job_template_launch.md b/awx/api/templates/api/workflow_job_template_launch.md index bb2fe1c2b8..dca08c59d9 100644 --- a/awx/api/templates/api/workflow_job_template_launch.md +++ b/awx/api/templates/api/workflow_job_template_launch.md @@ -12,8 +12,13 @@ workflow_job_template. 
The response will include the following fields: enabled survey (boolean, read-only) * `extra_vars`: Text which is the `extra_vars` field of this workflow_job_template (text, read-only) -* `warnings`: JSON object listing warnings of all workflow_job_template_nodes - contained in this workflow_job_template (JSON object, read-only) +* `node_templates_missing`: List of node ids of all nodes that have a + null `unified_job_template`, which will cause their branches to stop + execution (list, read-only) +* `node_prompts_rejected`: List of node ids of all nodes that have + specified a field that will be rejected because its `unified_job_template` + does not allow prompting for this field; this will not halt execution of + the branch but the field will be ignored (list, read-only) * `workflow_job_template_data`: JSON object listing general information of this workflow_job_template (JSON object, read-only) diff --git a/awx/api/urls.py b/awx/api/urls.py index 0f0dec8ba7..f3abfae3fe 100644 --- a/awx/api/urls.py +++ b/awx/api/urls.py @@ -205,8 +205,6 @@ job_urls = patterns('awx.api.views', url(r'^(?P<pk>[0-9]+)/relaunch/$', 'job_relaunch'), url(r'^(?P<pk>[0-9]+)/job_host_summaries/$', 'job_job_host_summaries_list'), url(r'^(?P<pk>[0-9]+)/job_events/$', 'job_job_events_list'), - url(r'^(?P<pk>[0-9]+)/job_plays/$', 'job_job_plays_list'), - url(r'^(?P<pk>[0-9]+)/job_tasks/$', 'job_job_tasks_list'), url(r'^(?P<pk>[0-9]+)/activity_stream/$', 'job_activity_stream_list'), url(r'^(?P<pk>[0-9]+)/stdout/$', 'job_stdout'), url(r'^(?P<pk>[0-9]+)/notifications/$', 'job_notifications_list'), diff --git a/awx/api/utils/__init__.py b/awx/api/utils/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/awx/api/utils/decorators.py b/awx/api/utils/decorators.py deleted file mode 100644 index 8a80b1457e..0000000000 --- a/awx/api/utils/decorators.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) 2015 Ansible, Inc. -# All Rights Reserved.
- -from collections import OrderedDict -import copy -import functools - -from rest_framework.response import Response -from rest_framework.settings import api_settings -from rest_framework import status - - -def paginated(method): - """Given an method with a Django REST Framework API method signature - (e.g. `def get(self, request, ...):`), abstract out boilerplate pagination - duties. - - This causes the method to receive two additional keyword arguments: - `limit`, and `offset`. The method expects a two-tuple to be - returned, with a result list as the first item, and the total number - of results (across all pages) as the second item. - """ - @functools.wraps(method) - def func(self, request, *args, **kwargs): - # Manually spin up pagination. - # How many results do we show? - paginator_class = api_settings.DEFAULT_PAGINATION_CLASS - limit = paginator_class.page_size - if request.query_params.get(paginator_class.page_size_query_param, False): - limit = request.query_params[paginator_class.page_size_query_param] - if paginator_class.max_page_size: - limit = min(paginator_class.max_page_size, limit) - limit = int(limit) - - # Get the order parameter if it's given - if request.query_params.get("ordering", False): - ordering = request.query_params["ordering"] - else: - ordering = None - - # What page are we on? - page = int(request.query_params.get('page', 1)) - offset = (page - 1) * limit - - # Add the limit, offset, page, and order variables to the keyword arguments - # being sent to the underlying method. - kwargs['limit'] = limit - kwargs['offset'] = offset - kwargs['ordering'] = ordering - - # Okay, call the underlying method. - results, count, stat = method(self, request, *args, **kwargs) - if stat is None: - stat = status.HTTP_200_OK - - if stat == status.HTTP_200_OK: - # Determine the next and previous pages, if any. 
- prev, next_ = None, None - if page > 1: - get_copy = copy.copy(request.GET) - get_copy['page'] = page - 1 - prev = '%s?%s' % (request.path, get_copy.urlencode()) - if count > offset + limit: - get_copy = copy.copy(request.GET) - get_copy['page'] = page + 1 - next_ = '%s?%s' % (request.path, get_copy.urlencode()) - - # Compile the results into a dictionary with pagination - # information. - answer = OrderedDict(( - ('count', count), - ('next', next_), - ('previous', prev), - ('results', results), - )) - else: - answer = results - - # Okay, we're done; return response data. - return Response(answer, status=stat) - return func - diff --git a/awx/api/views.py b/awx/api/views.py index dd57f1bf6e..f36404d710 100644 --- a/awx/api/views.py +++ b/awx/api/views.py @@ -60,12 +60,14 @@ from awx.main.tasks import send_notifications from awx.main.access import get_user_queryset from awx.main.ha import is_ha_environment from awx.api.authentication import TaskAuthentication, TokenGetAuthentication -from awx.api.utils.decorators import paginated from awx.api.generics import get_view_name from awx.api.generics import * # noqa from awx.conf.license import get_license, feature_enabled, feature_exists, LicenseForbids from awx.main.models import * # noqa from awx.main.utils import * # noqa +from awx.main.utils import ( + callback_filter_out_ansible_extra_vars +) from awx.api.permissions import * # noqa from awx.api.renderers import * # noqa from awx.api.serializers import * # noqa @@ -227,6 +229,11 @@ class ApiV1ConfigView(APIView): permission_classes = (IsAuthenticated,) view_name = _('Configuration') + def check_permissions(self, request): + super(ApiV1ConfigView, self).check_permissions(request) + if not request.user.is_superuser and request.method.lower() not in {'options', 'head', 'get'}: + self.permission_denied(request) # Raises PermissionDenied exception. 
+ def get(self, request, format=None): '''Return various sitewide configuration settings.''' @@ -272,8 +279,6 @@ class ApiV1ConfigView(APIView): return Response(data) def post(self, request): - if not request.user.is_superuser: - return Response(None, status=status.HTTP_404_NOT_FOUND) if not isinstance(request.data, dict): return Response({"error": _("Invalid license data")}, status=status.HTTP_400_BAD_REQUEST) if "eula_accepted" not in request.data: @@ -312,9 +317,6 @@ class ApiV1ConfigView(APIView): return Response({"error": _("Invalid license")}, status=status.HTTP_400_BAD_REQUEST) def delete(self, request): - if not request.user.is_superuser: - return Response(None, status=status.HTTP_404_NOT_FOUND) - try: settings.LICENSE = {} return Response(status=status.HTTP_204_NO_CONTENT) @@ -534,7 +536,7 @@ class AuthView(APIView): saml_backend_data = dict(backend_data.items()) saml_backend_data['login_url'] = '%s?idp=%s' % (login_url, idp) full_backend_name = '%s:%s' % (name, idp) - if err_backend == full_backend_name and err_message: + if (err_backend == full_backend_name or err_backend == name) and err_message: saml_backend_data['error'] = err_message data[full_backend_name] = saml_backend_data else: @@ -601,7 +603,7 @@ class AuthTokenView(APIView): return Response({'token': token.key, 'expires': token.expires}, headers=headers) if 'username' in request.data: logger.warning(smart_text(u"Login failed for user {}".format(request.data['username'])), - user=dict(actor=request.data['username'])) + extra=dict(actor=request.data['username'])) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) @@ -695,6 +697,7 @@ class OrganizationList(OrganizationCountsMixin, ListCreateAPIView): def get_queryset(self): qs = Organization.accessible_objects(self.request.user, 'read_role') qs = qs.select_related('admin_role', 'auditor_role', 'member_role', 'read_role') + qs = qs.prefetch_related('created_by', 'modified_by') return qs def create(self, request, *args, 
**kwargs): @@ -768,7 +771,7 @@ class BaseUsersList(SubListCreateAttachDetachAPIView): def post(self, request, *args, **kwargs): ret = super(BaseUsersList, self).post( request, *args, **kwargs) try: - if request.data.get('is_system_auditor', False): + if ret.data is not None and request.data.get('is_system_auditor', False): # This is a faux-field that just maps to checking the system # auditor role member list.. unfortunately this means we can't # set it on creation, and thus needs to be set here. @@ -849,6 +852,7 @@ class OrganizationNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView) serializer_class = NotificationTemplateSerializer parent_model = Organization relationship = 'notification_templates_any' + new_in_300 = True class OrganizationNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView): @@ -857,6 +861,7 @@ class OrganizationNotificationTemplatesErrorList(SubListCreateAttachDetachAPIVie serializer_class = NotificationTemplateSerializer parent_model = Organization relationship = 'notification_templates_error' + new_in_300 = True class OrganizationNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView): @@ -865,12 +870,13 @@ class OrganizationNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIV serializer_class = NotificationTemplateSerializer parent_model = Organization relationship = 'notification_templates_success' + new_in_300 = True class OrganizationAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = Organization + parent_model = Organization new_in_300 = True @@ -919,6 +925,7 @@ class TeamRolesList(SubListCreateAttachDetachAPIView): metadata_class = RoleMetadata parent_model = Team relationship='member_role.children' + new_in_300 = True def get_queryset(self): team = get_object_or_404(Team, pk=self.kwargs['pk']) @@ -939,6 +946,10 @@ class TeamRolesList(SubListCreateAttachDetachAPIView): data = dict(msg=_("You cannot assign an Organization role as a child role 
for a Team.")) return Response(data, status=status.HTTP_400_BAD_REQUEST) + if role.is_singleton(): + data = dict(msg=_("You cannot grant system-level permissions to a team.")) + return Response(data, status=status.HTTP_400_BAD_REQUEST) + team = get_object_or_404(Team, pk=self.kwargs['pk']) credential_content_type = ContentType.objects.get_for_model(Credential) if role.content_type == credential_content_type: @@ -1001,7 +1012,7 @@ class TeamActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView): class TeamAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = Team + parent_model = Team new_in_300 = True @@ -1020,6 +1031,7 @@ class ProjectList(ListCreateAPIView): 'update_role', 'read_role', ) + projects_qs = projects_qs.prefetch_related('last_job', 'created_by') return projects_qs @@ -1096,6 +1108,7 @@ class ProjectNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView): serializer_class = NotificationTemplateSerializer parent_model = Project relationship = 'notification_templates_any' + new_in_300 = True class ProjectNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView): @@ -1104,6 +1117,7 @@ class ProjectNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView): serializer_class = NotificationTemplateSerializer parent_model = Project relationship = 'notification_templates_error' + new_in_300 = True class ProjectNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView): @@ -1112,6 +1126,7 @@ class ProjectNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView): serializer_class = NotificationTemplateSerializer parent_model = Project relationship = 'notification_templates_success' + new_in_300 = True class ProjectUpdatesList(SubListAPIView): @@ -1149,6 +1164,7 @@ class ProjectUpdateList(ListAPIView): model = ProjectUpdate serializer_class = ProjectUpdateListSerializer + new_in_13 = True class ProjectUpdateDetail(RetrieveDestroyAPIView): @@ -1159,8 +1175,11 @@ class 
ProjectUpdateDetail(RetrieveDestroyAPIView): def destroy(self, request, *args, **kwargs): obj = self.get_object() - if obj.unified_job_node.filter(workflow_job__status__in=ACTIVE_STATES).exists(): - raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.')) + try: + if obj.unified_job_node.workflow_job.status in ACTIVE_STATES: + raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.')) + except ProjectUpdate.unified_job_node.RelatedObjectDoesNotExist: + pass return super(ProjectUpdateDetail, self).destroy(request, *args, **kwargs) @@ -1186,12 +1205,13 @@ class ProjectUpdateNotificationsList(SubListAPIView): serializer_class = NotificationSerializer parent_model = ProjectUpdate relationship = 'notifications' + new_in_300 = True class ProjectAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = Project + parent_model = Project new_in_300 = True @@ -1249,7 +1269,8 @@ class UserTeamsList(ListAPIView): u = get_object_or_404(User, pk=self.kwargs['pk']) if not self.request.user.can_access(User, 'read', u): raise PermissionDenied() - return Team.accessible_objects(self.request.user, 'read_role').filter(member_role__members=u) + return Team.accessible_objects(self.request.user, 'read_role').filter( + Q(member_role__members=u) | Q(admin_role__members=u)).distinct() class UserRolesList(SubListCreateAttachDetachAPIView): @@ -1260,6 +1281,7 @@ class UserRolesList(SubListCreateAttachDetachAPIView): parent_model = User relationship='roles' permission_classes = (IsAuthenticated,) + new_in_300 = True def get_queryset(self): u = get_object_or_404(User, pk=self.kwargs['pk']) @@ -1404,7 +1426,7 @@ class UserDetail(RetrieveUpdateDestroyAPIView): class UserAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = User + parent_model = User new_in_300 = True @@ -1511,7 +1533,7 @@ class 
CredentialActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIVie class CredentialAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = Credential + parent_model = Credential new_in_300 = True @@ -1532,12 +1554,14 @@ class InventoryScriptList(ListCreateAPIView): model = CustomInventoryScript serializer_class = CustomInventoryScriptSerializer + new_in_210 = True class InventoryScriptDetail(RetrieveUpdateDestroyAPIView): model = CustomInventoryScript serializer_class = CustomInventoryScriptSerializer + new_in_210 = True def destroy(self, request, *args, **kwargs): instance = self.get_object() @@ -1572,6 +1596,7 @@ class InventoryList(ListCreateAPIView): def get_queryset(self): qs = Inventory.accessible_objects(self.request.user, 'read_role') qs = qs.select_related('admin_role', 'read_role', 'update_role', 'use_role', 'adhoc_role') + qs = qs.prefetch_related('created_by', 'modified_by', 'organization') return qs @@ -1604,7 +1629,7 @@ class InventoryActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView class InventoryAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = Inventory + parent_model = Inventory new_in_300 = True @@ -1660,6 +1685,7 @@ class HostList(ListCreateAPIView): class HostDetail(RetrieveUpdateDestroyAPIView): + always_allow_superuser = False model = Host serializer_class = HostSerializer @@ -2158,6 +2184,7 @@ class InventorySourceNotificationTemplatesAnyList(SubListCreateAttachDetachAPIVi serializer_class = NotificationTemplateSerializer parent_model = InventorySource relationship = 'notification_templates_any' + new_in_300 = True def post(self, request, *args, **kwargs): parent = self.get_parent_object() @@ -2239,8 +2266,11 @@ class InventoryUpdateDetail(RetrieveDestroyAPIView): def destroy(self, request, *args, **kwargs): obj = self.get_object() - if obj.unified_job_node.filter(workflow_job__status__in=ACTIVE_STATES).exists(): - raise 
PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.')) + try: + if obj.unified_job_node.workflow_job.status in ACTIVE_STATES: + raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.')) + except InventoryUpdate.unified_job_node.RelatedObjectDoesNotExist: + pass return super(InventoryUpdateDetail, self).destroy(request, *args, **kwargs) @@ -2266,6 +2296,7 @@ class InventoryUpdateNotificationsList(SubListAPIView): serializer_class = NotificationSerializer parent_model = InventoryUpdate relationship = 'notifications' + new_in_300 = True class JobTemplateList(ListCreateAPIView): @@ -2301,7 +2332,10 @@ class JobTemplateLaunch(RetrieveAPIView, GenericAPIView): always_allow_superuser = False def update_raw_data(self, data): - obj = self.get_object() + try: + obj = self.get_object() + except PermissionDenied: + return data extra_vars = data.pop('extra_vars', None) or {} if obj: for p in obj.passwords_needed_to_start: @@ -2381,13 +2415,20 @@ class JobTemplateSurveySpec(GenericAPIView): model = JobTemplate parent_model = JobTemplate serializer_class = EmptySerializer + new_in_210 = True def get(self, request, *args, **kwargs): obj = self.get_object() if not feature_enabled('surveys'): raise LicenseForbids(_('Your license does not allow ' 'adding surveys.')) - return Response(obj.survey_spec) + survey_spec = obj.survey_spec + for pos, field in enumerate(survey_spec.get('spec', [])): + if field.get('type') == 'password': + if 'default' in field and field['default']: + field['default'] = '$encrypted$' + + return Response(survey_spec) def post(self, request, *args, **kwargs): obj = self.get_object() @@ -2411,6 +2452,7 @@ class JobTemplateSurveySpec(GenericAPIView): return Response(dict(error=_("'spec' must be a list of items.")), status=status.HTTP_400_BAD_REQUEST) if len(new_spec["spec"]) < 1: return Response(dict(error=_("'spec' doesn't contain any items.")), 
status=status.HTTP_400_BAD_REQUEST) + idx = 0 variable_set = set() for survey_item in new_spec["spec"]: @@ -2429,7 +2471,15 @@ class JobTemplateSurveySpec(GenericAPIView): variable_set.add(survey_item['variable']) if "required" not in survey_item: return Response(dict(error=_("'required' missing from survey question %s.") % str(idx)), status=status.HTTP_400_BAD_REQUEST) + + if survey_item["type"] == "password": + if "default" in survey_item and survey_item["default"].startswith('$encrypted$'): + old_spec = obj.survey_spec + for old_item in old_spec['spec']: + if old_item['variable'] == survey_item['variable']: + survey_item['default'] = old_item['default'] idx += 1 + obj.survey_spec = new_spec obj.save(update_fields=['survey_spec']) return Response() @@ -2447,6 +2497,7 @@ class WorkflowJobTemplateSurveySpec(WorkflowsEnforcementMixin, JobTemplateSurvey model = WorkflowJobTemplate parent_model = WorkflowJobTemplate + new_in_310 = True class JobTemplateActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView): @@ -2464,6 +2515,7 @@ class JobTemplateNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView): serializer_class = NotificationTemplateSerializer parent_model = JobTemplate relationship = 'notification_templates_any' + new_in_300 = True class JobTemplateNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView): @@ -2472,6 +2524,7 @@ class JobTemplateNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView serializer_class = NotificationTemplateSerializer parent_model = JobTemplate relationship = 'notification_templates_error' + new_in_300 = True class JobTemplateNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView): @@ -2480,6 +2533,7 @@ class JobTemplateNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIVi serializer_class = NotificationTemplateSerializer parent_model = JobTemplate relationship = 'notification_templates_success' + new_in_300 = True class JobTemplateLabelList(DeleteLastUnattachLabelMixin, 
SubListCreateAttachDetachAPIView): @@ -2551,23 +2605,25 @@ class JobTemplateCallback(GenericAPIView): return set([hosts.get(name__in=remote_hosts)]) except (Host.DoesNotExist, Host.MultipleObjectsReturned): pass - # Next, try matching based on name or ansible_ssh_host variable. + # Next, try matching based on name or ansible_host variables. matches = set() for host in hosts: - ansible_ssh_host = host.variables_dict.get('ansible_ssh_host', '') - if ansible_ssh_host in remote_hosts: - matches.add(host) - if host.name != ansible_ssh_host and host.name in remote_hosts: - matches.add(host) + for host_var in ['ansible_ssh_host', 'ansible_host']: + ansible_host = host.variables_dict.get(host_var, '') + if ansible_host in remote_hosts: + matches.add(host) + if host.name != ansible_host and host.name in remote_hosts: + matches.add(host) if len(matches) == 1: return matches # Try to resolve forward addresses for each host to find matches. for host in hosts: hostnames = set([host.name]) - ansible_ssh_host = host.variables_dict.get('ansible_ssh_host', '') - if ansible_ssh_host: - hostnames.add(ansible_ssh_host) + for host_var in ['ansible_ssh_host', 'ansible_host']: + ansible_host = host.variables_dict.get(host_var, '') + if ansible_host: + hostnames.add(ansible_host) for hostname in hostnames: try: result = socket.getaddrinfo(hostname, None) @@ -2650,7 +2706,7 @@ class JobTemplateCallback(GenericAPIView): # Send a signal to celery that the job should be started. 
kv = {"inventory_sources_already_updated": inventory_sources_already_updated} if extra_vars is not None: - kv['extra_vars'] = extra_vars + kv['extra_vars'] = callback_filter_out_ansible_extra_vars(extra_vars) result = job.signal_start(**kv) if not result: data = dict(msg=_('Error starting job!')) @@ -2673,7 +2729,7 @@ class JobTemplateJobsList(SubListCreateAPIView): class JobTemplateAccessList(ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = JobTemplate + parent_model = JobTemplate new_in_300 = True @@ -2732,7 +2788,7 @@ class WorkflowJobTemplateNodeChildrenBaseList(WorkflowsEnforcementMixin, Enforce model = WorkflowJobTemplateNode serializer_class = WorkflowJobTemplateNodeListSerializer - always_allow_superuser = True # TODO: RBAC + always_allow_superuser = True parent_model = WorkflowJobTemplateNode relationship = '' enforce_parent_relationship = 'workflow_job_template' @@ -2830,7 +2886,6 @@ class WorkflowJobNodeAlwaysNodesList(WorkflowJobNodeChildrenBaseList): relationship = 'always_nodes' -# TODO: class WorkflowJobTemplateList(WorkflowsEnforcementMixin, ListCreateAPIView): model = WorkflowJobTemplate @@ -2838,18 +2893,7 @@ class WorkflowJobTemplateList(WorkflowsEnforcementMixin, ListCreateAPIView): always_allow_superuser = False new_in_310 = True - # TODO: RBAC - ''' - def post(self, request, *args, **kwargs): - ret = super(WorkflowJobTemplateList, self).post(request, *args, **kwargs) - if ret.status_code == 201: - workflow_job_template = WorkflowJobTemplate.objects.get(id=ret.data['id']) - workflow_job_template.admin_role.members.add(request.user) - return ret - ''' - -# TODO: class WorkflowJobTemplateDetail(WorkflowsEnforcementMixin, RetrieveUpdateDestroyAPIView): model = WorkflowJobTemplate @@ -2867,20 +2911,28 @@ class WorkflowJobTemplateCopy(WorkflowsEnforcementMixin, GenericAPIView): def get(self, request, *args, **kwargs): obj = self.get_object() - data = {} - copy_TF, messages = 
request.user.can_access_with_errors(self.model, 'copy', obj) - data['can_copy'] = copy_TF - data['warnings'] = messages + can_copy, messages = request.user.can_access_with_errors(self.model, 'copy', obj) + data = OrderedDict([ + ('can_copy', can_copy), ('can_copy_without_user_input', can_copy), + ('templates_unable_to_copy', [] if can_copy else ['all']), + ('credentials_unable_to_copy', [] if can_copy else ['all']), + ('inventories_unable_to_copy', [] if can_copy else ['all']) + ]) + if messages and can_copy: + data['can_copy_without_user_input'] = False + data.update(messages) return Response(data) def post(self, request, *args, **kwargs): obj = self.get_object() if not request.user.can_access(self.model, 'copy', obj): - return PermissionDenied() - new_wfjt = obj.user_copy(request.user) + raise PermissionDenied() + new_obj = obj.user_copy(request.user) + if request.user not in new_obj.admin_role: + new_obj.admin_role.members.add(request.user) data = OrderedDict() data.update(WorkflowJobTemplateSerializer( - new_wfjt, context=self.get_serializer_context()).to_representation(new_wfjt)) + new_obj, context=self.get_serializer_context()).to_representation(new_obj)) return Response(data, status=status.HTTP_201_CREATED) @@ -2899,7 +2951,10 @@ class WorkflowJobTemplateLaunch(WorkflowsEnforcementMixin, RetrieveAPIView): always_allow_superuser = False def update_raw_data(self, data): - obj = self.get_object() + try: + obj = self.get_object() + except PermissionDenied: + return data extra_vars = data.pop('extra_vars', None) or {} if obj: for v in obj.variables_needed_to_start: @@ -2919,7 +2974,7 @@ class WorkflowJobTemplateLaunch(WorkflowsEnforcementMixin, RetrieveAPIView): prompted_fields, ignored_fields = obj._accept_or_ignore_job_kwargs(**request.data) - new_job = obj.create_workflow_job(**prompted_fields) + new_job = obj.create_unified_job(**prompted_fields) new_job.signal_start(**prompted_fields) data = OrderedDict() @@ -2934,6 +2989,14 @@ class 
WorkflowJobRelaunch(WorkflowsEnforcementMixin, GenericAPIView): model = WorkflowJob serializer_class = EmptySerializer is_job_start = True + new_in_310 = True + + def check_object_permissions(self, request, obj): + if request.method == 'POST' and obj: + relaunch_perm, messages = request.user.can_access_with_errors(self.model, 'start', obj) + if not relaunch_perm and 'workflow_job_template' in messages: + self.permission_denied(request, message=messages['workflow_job_template']) + return super(WorkflowJobRelaunch, self).check_object_permissions(request, obj) def get(self, request, *args, **kwargs): return Response({}) @@ -2948,7 +3011,6 @@ class WorkflowJobRelaunch(WorkflowsEnforcementMixin, GenericAPIView): return Response(data, status=status.HTTP_201_CREATED, headers=headers) -# TODO: class WorkflowJobTemplateWorkflowNodesList(WorkflowsEnforcementMixin, SubListCreateAPIView): model = WorkflowJobTemplateNode @@ -2964,7 +3026,6 @@ class WorkflowJobTemplateWorkflowNodesList(WorkflowsEnforcementMixin, SubListCre return super(WorkflowJobTemplateWorkflowNodesList, self).update_raw_data(data) -# TODO: class WorkflowJobTemplateJobsList(WorkflowsEnforcementMixin, SubListAPIView): model = WorkflowJob @@ -3017,7 +3078,7 @@ class WorkflowJobTemplateNotificationTemplatesSuccessList(WorkflowsEnforcementMi class WorkflowJobTemplateAccessList(WorkflowsEnforcementMixin, ResourceAccessList): model = User # needs to be User for AccessLists's - resource_model = WorkflowJobTemplate + parent_model = WorkflowJobTemplate new_in_310 = True @@ -3042,8 +3103,14 @@ class WorkflowJobTemplateActivityStreamList(WorkflowsEnforcementMixin, ActivityS relationship = 'activitystream_set' new_in_310 = True + def get_queryset(self): + parent = self.get_parent_object() + self.check_parent_access(parent) + qs = self.request.user.get_queryset(self.model) + return qs.filter(Q(workflow_job_template=parent) | + Q(workflow_job_template_node__workflow_job_template=parent)) + -# TODO: class 
WorkflowJobList(WorkflowsEnforcementMixin, ListCreateAPIView): model = WorkflowJob @@ -3051,7 +3118,6 @@ class WorkflowJobList(WorkflowsEnforcementMixin, ListCreateAPIView): new_in_310 = True -# TODO: class WorkflowJobDetail(WorkflowsEnforcementMixin, RetrieveDestroyAPIView): model = WorkflowJob @@ -3063,7 +3129,7 @@ class WorkflowJobWorkflowNodesList(WorkflowsEnforcementMixin, SubListAPIView): model = WorkflowJobNode serializer_class = WorkflowJobNodeListSerializer - always_allow_superuser = True # TODO: RBAC + always_allow_superuser = True parent_model = WorkflowJob relationship = 'workflow_job_nodes' parent_key = 'workflow_job' @@ -3110,6 +3176,7 @@ class SystemJobTemplateList(ListAPIView): model = SystemJobTemplate serializer_class = SystemJobTemplateSerializer + new_in_210 = True def get(self, request, *args, **kwargs): if not request.user.is_superuser and not request.user.is_system_auditor: @@ -3121,6 +3188,7 @@ class SystemJobTemplateDetail(RetrieveAPIView): model = SystemJobTemplate serializer_class = SystemJobTemplateSerializer + new_in_210 = True class SystemJobTemplateLaunch(GenericAPIView): @@ -3128,6 +3196,7 @@ class SystemJobTemplateLaunch(GenericAPIView): model = SystemJobTemplate serializer_class = EmptySerializer is_job_start = True + new_in_210 = True def get(self, request, *args, **kwargs): return Response({}) @@ -3135,8 +3204,8 @@ class SystemJobTemplateLaunch(GenericAPIView): def post(self, request, *args, **kwargs): obj = self.get_object() - new_job = obj.create_unified_job(**request.data) - new_job.signal_start(**request.data) + new_job = obj.create_unified_job(extra_vars=request.data.get('extra_vars', {})) + new_job.signal_start() data = dict(system_job=new_job.id) return Response(data, status=status.HTTP_201_CREATED) @@ -3150,6 +3219,7 @@ class SystemJobTemplateSchedulesList(SubListCreateAttachDetachAPIView): parent_model = SystemJobTemplate relationship = 'schedules' parent_key = 'unified_job_template' + new_in_210 = True class 
SystemJobTemplateJobsList(SubListAPIView): @@ -3159,6 +3229,7 @@ class SystemJobTemplateJobsList(SubListAPIView): parent_model = SystemJobTemplate relationship = 'jobs' parent_key = 'system_job_template' + new_in_210 = True class SystemJobTemplateNotificationTemplatesAnyList(SubListCreateAttachDetachAPIView): @@ -3167,6 +3238,7 @@ class SystemJobTemplateNotificationTemplatesAnyList(SubListCreateAttachDetachAPI serializer_class = NotificationTemplateSerializer parent_model = SystemJobTemplate relationship = 'notification_templates_any' + new_in_300 = True class SystemJobTemplateNotificationTemplatesErrorList(SubListCreateAttachDetachAPIView): @@ -3175,6 +3247,7 @@ class SystemJobTemplateNotificationTemplatesErrorList(SubListCreateAttachDetachA serializer_class = NotificationTemplateSerializer parent_model = SystemJobTemplate relationship = 'notification_templates_error' + new_in_300 = True class SystemJobTemplateNotificationTemplatesSuccessList(SubListCreateAttachDetachAPIView): @@ -3183,6 +3256,7 @@ class SystemJobTemplateNotificationTemplatesSuccessList(SubListCreateAttachDetac serializer_class = NotificationTemplateSerializer parent_model = SystemJobTemplate relationship = 'notification_templates_success' + new_in_300 = True class JobList(ListCreateAPIView): @@ -3205,8 +3279,11 @@ class JobDetail(RetrieveUpdateDestroyAPIView): def destroy(self, request, *args, **kwargs): obj = self.get_object() - if obj.unified_job_node.filter(workflow_job__status__in=ACTIVE_STATES).exists(): - raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.')) + try: + if obj.unified_job_node.workflow_job.status in ACTIVE_STATES: + raise PermissionDenied(detail=_('Cannot delete job resource when associated workflow job is running.')) + except Job.unified_job_node.RelatedObjectDoesNotExist: + pass return super(JobDetail, self).destroy(request, *args, **kwargs) @@ -3217,10 +3294,12 @@ class JobLabelList(SubListAPIView): parent_model = Job 
relationship = 'labels' parent_key = 'job' + new_in_300 = True class WorkflowJobLabelList(WorkflowsEnforcementMixin, JobLabelList): parent_model = WorkflowJob + new_in_310 = True class JobActivityStreamList(ActivityStreamEnforcementMixin, SubListAPIView): @@ -3237,6 +3316,7 @@ class JobStart(GenericAPIView): model = Job serializer_class = EmptySerializer is_job_start = True + deprecated = True def get(self, request, *args, **kwargs): obj = self.get_object() @@ -3315,6 +3395,7 @@ class JobNotificationsList(SubListAPIView): serializer_class = NotificationSerializer parent_model = Job relationship = 'notifications' + new_in_300 = True class BaseJobHostSummariesList(SubListAPIView): @@ -3385,6 +3466,10 @@ class BaseJobEventsList(SubListAPIView): relationship = 'job_events' view_name = _('Job Events List') + def finalize_response(self, request, response, *args, **kwargs): + response['X-UI-Max-Events'] = settings.RECOMMENDED_MAX_EVENTS_DISPLAY_HEADER + return super(BaseJobEventsList, self).finalize_response(request, response, *args, **kwargs) + class HostJobEventsList(BaseJobEventsList): @@ -3403,210 +3488,10 @@ class JobJobEventsList(BaseJobEventsList): def get_queryset(self): job = self.get_parent_object() self.check_parent_access(job) - qs = job.job_events.all() + qs = job.job_events qs = qs.select_related('host') qs = qs.prefetch_related('hosts', 'children') - if self.request.user.is_superuser or self.request.user.is_system_auditor: - return qs.all() - host_qs = self.request.user.get_queryset(Host) - return qs.filter(Q(host__isnull=True) | Q(host__in=host_qs)) - - -class JobJobPlaysList(BaseJobEventsList): - - parent_model = Job - view_name = _('Job Plays List') - new_in_200 = True - - @paginated - def get(self, request, limit, offset, ordering, *args, **kwargs): - all_plays = [] - job = Job.objects.filter(pk=self.kwargs['pk']) - if not job.exists(): - return ({'detail': 'Job not found.'}, -1, status.HTTP_404_NOT_FOUND) - job = job[0] - - # Put together a queryset 
for relevant job events. - qs = job.job_events.filter(event='playbook_on_play_start') - if ordering is not None: - qs = qs.order_by(ordering) - - # This is a bit of a special case for filtering requested by the UI - # doing this here for the moment until/unless we need to implement more - # complex filtering (since we aren't under a serializer) - - if "id__in" in request.query_params: - qs = qs.filter(id__in=[int(filter_id) for filter_id in request.query_params["id__in"].split(",")]) - elif "id__gt" in request.query_params: - qs = qs.filter(id__gt=request.query_params['id__gt']) - elif "id__lt" in request.query_params: - qs = qs.filter(id__lt=request.query_params['id__lt']) - if "failed" in request.query_params: - qs = qs.filter(failed=(request.query_params['failed'].lower() == 'true')) - if "play__icontains" in request.query_params: - qs = qs.filter(play__icontains=request.query_params['play__icontains']) - - count = qs.count() - - # Iterate over the relevant play events and get the details. 
- for play_event in qs[offset:offset + limit]: - play_details = dict(id=play_event.id, play=play_event.play, started=play_event.created, failed=play_event.failed, changed=play_event.changed) - event_aggregates = JobEvent.objects.filter(parent__in=play_event.children.all()).values("event").annotate(Count("id")).order_by() - change_aggregates = JobEvent.objects.filter(parent__in=play_event.children.all(), event='runner_on_ok').values("changed").annotate(Count("id")).order_by() - failed_count = 0 - ok_count = 0 - changed_count = 0 - skipped_count = 0 - unreachable_count = 0 - for event_aggregate in event_aggregates: - if event_aggregate['event'] == 'runner_on_failed': - failed_count += event_aggregate['id__count'] - elif event_aggregate['event'] == 'runner_on_error': - failed_count += event_aggregate['id_count'] - elif event_aggregate['event'] == 'runner_on_skipped': - skipped_count = event_aggregate['id__count'] - elif event_aggregate['event'] == 'runner_on_unreachable': - unreachable_count = event_aggregate['id__count'] - for change_aggregate in change_aggregates: - if not change_aggregate['changed']: - ok_count = change_aggregate['id__count'] - else: - changed_count = change_aggregate['id__count'] - play_details['related'] = {'job_event': reverse('api:job_event_detail', args=(play_event.pk,))} - play_details['type'] = 'job_event' - play_details['ok_count'] = ok_count - play_details['failed_count'] = failed_count - play_details['changed_count'] = changed_count - play_details['skipped_count'] = skipped_count - play_details['unreachable_count'] = unreachable_count - all_plays.append(play_details) - - # Done; return the plays and the total count. - return all_plays, count, None - - -class JobJobTasksList(BaseJobEventsList): - """A view for displaying aggregate data about tasks within a job - and their completion status. 
- """ - parent_model = Job - view_name = _('Job Play Tasks List') - new_in_200 = True - - @paginated - def get(self, request, limit, offset, ordering, *args, **kwargs): - """Return aggregate data about each of the job tasks that is: - - an immediate child of the job event - - corresponding to the spinning up of a new task or playbook - """ - results = [] - - # Get the job and the parent task. - # If there's no event ID specified, this will return a 404. - job = Job.objects.filter(pk=self.kwargs['pk']) - if not job.exists(): - return ({'detail': _('Job not found.')}, -1, status.HTTP_404_NOT_FOUND) - job = job[0] - - if 'event_id' not in request.query_params: - return ({"detail": _("'event_id' not provided.")}, -1, status.HTTP_400_BAD_REQUEST) - - parent_task = job.job_events.filter(pk=int(request.query_params.get('event_id', -1))) - if not parent_task.exists(): - return ({'detail': _('Parent event not found.')}, -1, status.HTTP_404_NOT_FOUND) - parent_task = parent_task[0] - - STARTING_EVENTS = ('playbook_on_task_start', 'playbook_on_setup') - queryset = JobEvent.get_startevent_queryset(parent_task, STARTING_EVENTS) - - # The data above will come back in a list, but we are going to - # want to access it based on the parent id, so map it into a - # dictionary. - data = {} - for line in queryset[offset:offset + limit]: - parent_id = line.pop('parent__id') - data.setdefault(parent_id, []) - data[parent_id].append(line) - - # Iterate over the start events and compile information about each one - # using their children. 
- qs = parent_task.children.filter(event__in=STARTING_EVENTS, - id__in=data.keys()) - - # This is a bit of a special case for id filtering requested by the UI - # doing this here for the moment until/unless we need to implement more - # complex filtering (since we aren't under a serializer) - - if "id__in" in request.query_params: - qs = qs.filter(id__in=[int(filter_id) for filter_id in request.query_params["id__in"].split(",")]) - elif "id__gt" in request.query_params: - qs = qs.filter(id__gt=request.query_params['id__gt']) - elif "id__lt" in request.query_params: - qs = qs.filter(id__lt=request.query_params['id__lt']) - if "failed" in request.query_params: - qs = qs.filter(failed=(request.query_params['failed'].lower() == 'true')) - if "task__icontains" in request.query_params: - qs = qs.filter(task__icontains=request.query_params['task__icontains']) - - if ordering is not None: - qs = qs.order_by(ordering) - - count = 0 - for task_start_event in qs: - # Create initial task data. - task_data = { - 'related': {'job_event': reverse('api:job_event_detail', args=(task_start_event.pk,))}, - 'type': 'job_event', - 'changed': task_start_event.changed, - 'changed_count': 0, - 'created': task_start_event.created, - 'failed': task_start_event.failed, - 'failed_count': 0, - 'host_count': 0, - 'id': task_start_event.id, - 'modified': task_start_event.modified, - 'name': 'Gathering Facts' if task_start_event.event == 'playbook_on_setup' else task_start_event.task, - 'reported_hosts': 0, - 'skipped_count': 0, - 'unreachable_count': 0, - 'successful_count': 0, - } - - # Iterate over the data compiled for this child event, and - # make appropriate changes to the task data. 
- for child_data in data.get(task_start_event.id, []): - if child_data['event'] == 'runner_on_failed': - task_data['failed'] = True - task_data['host_count'] += child_data['num'] - task_data['reported_hosts'] += child_data['num'] - task_data['failed_count'] += child_data['num'] - elif child_data['event'] == 'runner_on_ok': - task_data['host_count'] += child_data['num'] - task_data['reported_hosts'] += child_data['num'] - if child_data['changed']: - task_data['changed_count'] += child_data['num'] - task_data['changed'] = True - else: - task_data['successful_count'] += child_data['num'] - elif child_data['event'] == 'runner_on_unreachable': - task_data['host_count'] += child_data['num'] - task_data['unreachable_count'] += child_data['num'] - elif child_data['event'] == 'runner_on_skipped': - task_data['host_count'] += child_data['num'] - task_data['reported_hosts'] += child_data['num'] - task_data['skipped_count'] += child_data['num'] - elif child_data['event'] == 'runner_on_error': - task_data['host_count'] += child_data['num'] - task_data['reported_hosts'] += child_data['num'] - task_data['failed'] = True - task_data['failed_count'] += child_data['num'] - elif child_data['event'] == 'runner_on_no_hosts': - task_data['host_count'] += child_data['num'] - count += 1 - results.append(task_data) - - # Done; return the results and count. 
- return (results, count, None) + return qs.all() class AdHocCommandList(ListCreateAPIView): @@ -3820,12 +3705,14 @@ class AdHocCommandNotificationsList(SubListAPIView): serializer_class = NotificationSerializer parent_model = AdHocCommand relationship = 'notifications' + new_in_300 = True class SystemJobList(ListCreateAPIView): model = SystemJob serializer_class = SystemJobListSerializer + new_in_210 = True def get(self, request, *args, **kwargs): if not request.user.is_superuser and not request.user.is_system_auditor: @@ -3837,6 +3724,7 @@ class SystemJobDetail(RetrieveDestroyAPIView): model = SystemJob serializer_class = SystemJobSerializer + new_in_210 = True class SystemJobCancel(RetrieveAPIView): @@ -3844,6 +3732,7 @@ class SystemJobCancel(RetrieveAPIView): model = SystemJob serializer_class = SystemJobCancelSerializer is_job_cancel = True + new_in_210 = True def post(self, request, *args, **kwargs): obj = self.get_object() @@ -3860,6 +3749,7 @@ class SystemJobNotificationsList(SubListAPIView): serializer_class = NotificationSerializer parent_model = SystemJob relationship = 'notifications' + new_in_300 = True class UnifiedJobTemplateList(ListAPIView): @@ -3876,20 +3766,47 @@ class UnifiedJobList(ListAPIView): new_in_148 = True +class StdoutANSIFilter(object): + + def __init__(self, fileobj): + self.fileobj = fileobj + self.extra_data = '' + if hasattr(fileobj,'close'): + self.close = fileobj.close + + def read(self, size=-1): + data = self.extra_data + while size > 0 and len(data) < size: + line = self.fileobj.readline(size) + if not line: + break + # Remove ANSI escape sequences used to embed event data. + line = re.sub(r'\x1b\[K(?:[A-Za-z0-9+/=]+\x1b\[\d+D)+\x1b\[K', '', line) + # Remove ANSI color escape sequences. 
+ line = re.sub(r'\x1b[^m]*m', '', line) + data += line + if size > 0 and len(data) > size: + self.extra_data = data[size:] + data = data[:size] + else: + self.extra_data = '' + return data + + class UnifiedJobStdout(RetrieveAPIView): authentication_classes = [TokenGetAuthentication] + api_settings.DEFAULT_AUTHENTICATION_CLASSES serializer_class = UnifiedJobStdoutSerializer renderer_classes = [BrowsableAPIRenderer, renderers.StaticHTMLRenderer, PlainTextRenderer, AnsiTextRenderer, - renderers.JSONRenderer, DownloadTextRenderer] + renderers.JSONRenderer, DownloadTextRenderer, AnsiDownloadRenderer] filter_backends = () new_in_148 = True def retrieve(self, request, *args, **kwargs): unified_job = self.get_object() obj_size = unified_job.result_stdout_size - if request.accepted_renderer.format != 'txt_download' and obj_size > settings.STDOUT_MAX_BYTES_DISPLAY: + if request.accepted_renderer.format not in {'txt_download', 'ansi_download'} and obj_size > settings.STDOUT_MAX_BYTES_DISPLAY: response_message = _("Standard Output too large to display (%(text_size)d bytes), " "only download supported for sizes over %(supported_size)d bytes") % { 'text_size': obj_size, 'supported_size': settings.STDOUT_MAX_BYTES_DISPLAY} @@ -3930,18 +3847,24 @@ class UnifiedJobStdout(RetrieveAPIView): elif content_format == 'html': return Response({'range': {'start': start, 'end': end, 'absolute_end': absolute_end}, 'content': body}) return Response(data) + elif request.accepted_renderer.format == 'txt': + return Response(unified_job.result_stdout) elif request.accepted_renderer.format == 'ansi': return Response(unified_job.result_stdout_raw) - elif request.accepted_renderer.format == 'txt_download': + elif request.accepted_renderer.format in {'txt_download', 'ansi_download'}: try: content_fd = open(unified_job.result_stdout_file, 'r') + if request.accepted_renderer.format == 'txt_download': + # For txt downloads, filter out ANSI escape sequences. 
+ content_fd = StdoutANSIFilter(content_fd) + suffix = '' + else: + suffix = '_ansi' response = HttpResponse(FileWrapper(content_fd), content_type='text/plain') - response["Content-Disposition"] = 'attachment; filename="job_%s.txt"' % str(unified_job.id) + response["Content-Disposition"] = 'attachment; filename="job_%s%s.txt"' % (str(unified_job.id), suffix) return response except Exception as e: return Response({"error": _("Error generating stdout download file: %s") % str(e)}, status=status.HTTP_400_BAD_REQUEST) - elif request.accepted_renderer.format == 'txt': - return Response(unified_job.result_stdout) else: return super(UnifiedJobStdout, self).retrieve(request, *args, **kwargs) @@ -3949,6 +3872,7 @@ class UnifiedJobStdout(RetrieveAPIView): class ProjectUpdateStdout(UnifiedJobStdout): model = ProjectUpdate + new_in_13 = True class InventoryUpdateStdout(UnifiedJobStdout): @@ -3992,7 +3916,7 @@ class NotificationTemplateDetail(RetrieveUpdateDestroyAPIView): class NotificationTemplateTest(GenericAPIView): - view_name = _('NotificationTemplate Test') + view_name = _('Notification Template Test') model = NotificationTemplate serializer_class = EmptySerializer new_in_300 = True @@ -4019,6 +3943,7 @@ class NotificationTemplateNotificationList(SubListAPIView): parent_model = NotificationTemplate relationship = 'notifications' parent_key = 'notification_template' + new_in_300 = True class NotificationList(ListAPIView): @@ -4170,6 +4095,11 @@ class RoleTeamsList(SubListAPIView): action = 'attach' if request.data.get('disassociate', None): action = 'unattach' + + if role.is_singleton() and action == 'attach': + data = dict(msg=_("You cannot grant system-level permissions to a team.")) + return Response(data, status=status.HTTP_400_BAD_REQUEST) + if not request.user.can_access(self.parent_model, action, role, team, self.relationship, request.data, skip_sub_obj_read_check=False): diff --git a/awx/conf/apps.py b/awx/conf/apps.py index 62ad0085df..6e09545236 100644 --- 
a/awx/conf/apps.py +++ b/awx/conf/apps.py @@ -16,7 +16,10 @@ class ConfConfig(AppConfig): from .settings import SettingsWrapper SettingsWrapper.initialize() if settings.LOG_AGGREGATOR_ENABLED: - LOGGING = settings.LOGGING - LOGGING['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler' - configure_logging(settings.LOGGING_CONFIG, LOGGING) + LOGGING_DICT = settings.LOGGING + LOGGING_DICT['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler' + if 'awx' in settings.LOG_AGGREGATOR_LOGGERS: + if 'http_receiver' not in LOGGING_DICT['loggers']['awx']['handlers']: + LOGGING_DICT['loggers']['awx']['handlers'] += ['http_receiver'] + configure_logging(settings.LOGGING_CONFIG, LOGGING_DICT) # checks.register(SettingsWrapper._check_settings) diff --git a/awx/conf/license.py b/awx/conf/license.py index a5ac14e659..0df047caaa 100644 --- a/awx/conf/license.py +++ b/awx/conf/license.py @@ -2,9 +2,6 @@ # All Rights Reserved. # Django -from django.core.cache import cache -from django.core.signals import setting_changed -from django.dispatch import receiver from django.utils.translation import ugettext_lazy as _ # Django REST Framework @@ -12,7 +9,6 @@ from rest_framework.exceptions import APIException # Tower from awx.main.task_engine import TaskEnhancer -from awx.main.utils import memoize __all__ = ['LicenseForbids', 'get_license', 'get_licensed_features', 'feature_enabled', 'feature_exists'] @@ -23,18 +19,10 @@ class LicenseForbids(APIException): default_detail = _('Your Tower license does not allow that.') -@memoize(cache_key='_validated_license_data') def _get_validated_license_data(): return TaskEnhancer().validate_enhancements() -@receiver(setting_changed) -def _on_setting_changed(sender, **kwargs): - # Clear cached result above when license changes. 
- if kwargs.get('setting', None) == 'LICENSE': - cache.delete('_validated_license_data') - - def get_license(show_key=False): """Return a dictionary representing the active license on this Tower instance.""" license_data = _get_validated_license_data() diff --git a/awx/conf/migrations/0002_v310_copy_tower_settings.py b/awx/conf/migrations/0002_v310_copy_tower_settings.py index 4493007f70..7cf24b7061 100644 --- a/awx/conf/migrations/0002_v310_copy_tower_settings.py +++ b/awx/conf/migrations/0002_v310_copy_tower_settings.py @@ -64,11 +64,11 @@ class Migration(migrations.Migration): dependencies = [ ('conf', '0001_initial'), - ('main', '0036_v310_jobevent_uuid'), + ('main', '0034_v310_release'), ] run_before = [ - ('main', '0037_v310_remove_tower_settings'), + ('main', '0035_v310_remove_tower_settings'), ] operations = [ diff --git a/awx/conf/serializers.py b/awx/conf/serializers.py index 744a4770d6..4c2dd4748d 100644 --- a/awx/conf/serializers.py +++ b/awx/conf/serializers.py @@ -50,6 +50,8 @@ class SettingFieldMixin(object): return obj def to_internal_value(self, value): + if getattr(self, 'encrypted', False) and isinstance(value, basestring) and value.startswith('$encrypted$'): + raise serializers.SkipField() obj = super(SettingFieldMixin, self).to_internal_value(value) return super(SettingFieldMixin, self).to_representation(obj) diff --git a/awx/conf/settings.py b/awx/conf/settings.py index c08b161237..d5e379ba9f 100644 --- a/awx/conf/settings.py +++ b/awx/conf/settings.py @@ -1,6 +1,7 @@ # Python import contextlib import logging +import sys import threading import time @@ -86,6 +87,7 @@ class SettingsWrapper(UserSettingsHolder): self.__dict__['_awx_conf_settings'] = self self.__dict__['_awx_conf_preload_expires'] = None self.__dict__['_awx_conf_preload_lock'] = threading.RLock() + self.__dict__['_awx_conf_init_readonly'] = False def _get_supported_settings(self): return settings_registry.get_registered_settings() @@ -110,6 +112,20 @@ class 
SettingsWrapper(UserSettingsHolder): return # Otherwise update local preload timeout. self.__dict__['_awx_conf_preload_expires'] = time.time() + SETTING_CACHE_TIMEOUT + # Check for any settings that have been defined in Python files and + # make those read-only to avoid overriding in the database. + if not self._awx_conf_init_readonly and 'migrate_to_database_settings' not in sys.argv: + defaults_snapshot = self._get_default('DEFAULTS_SNAPSHOT') + for key in self._get_writeable_settings(): + init_default = defaults_snapshot.get(key, None) + try: + file_default = self._get_default(key) + except AttributeError: + file_default = None + if file_default != init_default and file_default is not None: + logger.warning('Setting %s has been marked read-only!', key) + settings_registry._registry[key]['read_only'] = True + self.__dict__['_awx_conf_init_readonly'] = True # If local preload timer has expired, check to see if another process # has already preloaded the cache and skip preloading if so. 
if cache.get('_awx_conf_preload_expires', empty) is not empty: @@ -146,7 +162,10 @@ class SettingsWrapper(UserSettingsHolder): def _get_local(self, name): self._preload_cache() cache_key = Setting.get_cache_key(name) - cache_value = cache.get(cache_key, empty) + try: + cache_value = cache.get(cache_key, empty) + except ValueError: + cache_value = empty logger.debug('cache get(%r, %r) -> %r', cache_key, empty, cache_value) if cache_value == SETTING_CACHE_NOTSET: value = empty diff --git a/awx/conf/signals.py b/awx/conf/signals.py index 8ef0005b1f..9d1813843e 100644 --- a/awx/conf/signals.py +++ b/awx/conf/signals.py @@ -13,7 +13,7 @@ import awx.main.signals from awx.conf import settings_registry from awx.conf.models import Setting from awx.conf.serializers import SettingSerializer -from awx.main.tasks import clear_cache_keys +from awx.main.tasks import process_cache_changes logger = logging.getLogger('awx.conf.signals') @@ -26,16 +26,13 @@ def handle_setting_change(key, for_delete=False): # When a setting changes or is deleted, remove its value from cache along # with any other settings that depend on it. setting_keys = [key] - setting_key_dict = {} - setting_key_dict[key] = key for dependent_key in settings_registry.get_dependent_settings(key): # Note: Doesn't handle multiple levels of dependencies! setting_keys.append(dependent_key) - setting_key_dict[dependent_key] = dependent_key cache_keys = set([Setting.get_cache_key(k) for k in setting_keys]) logger.debug('sending signals to delete cache keys(%r)', cache_keys) cache.delete_many(cache_keys) - clear_cache_keys.delay(setting_key_dict) + process_cache_changes.delay(list(cache_keys)) # Send setting_changed signal with new value for each setting. 
 for setting_key in setting_keys:
diff --git a/awx/lib/sitecustomize.py b/awx/lib/sitecustomize.py
index be7c06102d..224840aae7 100644
--- a/awx/lib/sitecustomize.py
+++ b/awx/lib/sitecustomize.py
@@ -14,7 +14,10 @@ def argv_ready(argv):
 
 class argv_placeholder(object):
     def __del__(self):
-        argv_ready(sys.argv)
+        try:
+            argv_ready(sys.argv)
+        except:
+            pass
 
 if hasattr(sys, 'argv'):
diff --git a/awx/lib/tower_display_callback/display.py b/awx/lib/tower_display_callback/display.py
index 128c9349c7..ad5e8ba37a 100644
--- a/awx/lib/tower_display_callback/display.py
+++ b/awx/lib/tower_display_callback/display.py
@@ -26,7 +26,7 @@ import uuid
 from ansible.utils.display import Display
 
 # Tower Display Callback
-from tower_display_callback.events import event_context
+from .events import event_context
 
 __all__ = []
diff --git a/awx/lib/tower_display_callback/events.py b/awx/lib/tower_display_callback/events.py
index 86fab2895b..a419b33e85 100644
--- a/awx/lib/tower_display_callback/events.py
+++ b/awx/lib/tower_display_callback/events.py
@@ -22,14 +22,76 @@ import base64
 import contextlib
 import datetime
 import json
+import logging
 import multiprocessing
 import os
 import threading
 import uuid
 
+import memcache
+
+# Kombu
+from kombu import Connection, Exchange, Producer
 
 __all__ = ['event_context']
 
 
+class CallbackQueueEventDispatcher(object):
+
+    def __init__(self):
+        self.callback_connection = os.getenv('CALLBACK_CONNECTION', None)
+        self.connection_queue = os.getenv('CALLBACK_QUEUE', '')
+        self.connection = None
+        self.exchange = None
+        self._init_logging()
+
+    def _init_logging(self):
+        try:
+            self.job_callback_debug = int(os.getenv('JOB_CALLBACK_DEBUG', '0'))
+        except ValueError:
+            self.job_callback_debug = 0
+        self.logger = logging.getLogger('awx.plugins.callback.job_event_callback')
+        if self.job_callback_debug >= 2:
+            self.logger.setLevel(logging.DEBUG)
+        elif self.job_callback_debug >= 1:
+            self.logger.setLevel(logging.INFO)
+        else:
+            self.logger.setLevel(logging.WARNING)
+        handler = logging.StreamHandler()
+        formatter = logging.Formatter('%(levelname)-8s %(process)-8d %(message)s')
+        handler.setFormatter(formatter)
+        self.logger.addHandler(handler)
+        self.logger.propagate = False
+
+    def dispatch(self, obj):
+        if not self.callback_connection or not self.connection_queue:
+            return
+        active_pid = os.getpid()
+        for retry_count in xrange(4):
+            try:
+                if not hasattr(self, 'connection_pid'):
+                    self.connection_pid = active_pid
+                if self.connection_pid != active_pid:
+                    self.connection = None
+                if self.connection is None:
+                    self.connection = Connection(self.callback_connection)
+                    self.exchange = Exchange(self.connection_queue, type='direct')
+
+                producer = Producer(self.connection)
+                producer.publish(obj,
+                                 serializer='json',
+                                 compression='bzip2',
+                                 exchange=self.exchange,
+                                 declare=[self.exchange],
+                                 routing_key=self.connection_queue)
+                return
+            except Exception, e:
+                self.logger.info('Publish Job Event Exception: %r, retry=%d', e,
+                                 retry_count, exc_info=True)
+                retry_count += 1
+                if retry_count >= 3:
+                    break
+
+
 class EventContext(object):
     '''
     Store global and local (per thread/process) data associated with callback
@@ -38,6 +100,9 @@ class EventContext(object):
 
     def __init__(self):
         self.display_lock = multiprocessing.RLock()
+        self.dispatcher = CallbackQueueEventDispatcher()
+        cache_actual = os.getenv('CACHE', '127.0.0.1:11211')
+        self.cache = memcache.Client([cache_actual], debug=0)
 
     def add_local(self, **kwargs):
         if not hasattr(self, '_local'):
@@ -111,10 +176,12 @@ class EventContext(object):
             if event_data.get(key, False):
                 event = key
                 break
-
+        max_res = int(os.getenv("MAX_EVENT_RES", 700000))
+        if event not in ('playbook_on_stats',) and "res" in event_data and len(str(event_data['res'])) > max_res:
+            event_data['res'] = {}
         event_dict = dict(event=event, event_data=event_data)
         for key in event_data.keys():
-            if key in ('job_id', 'ad_hoc_command_id', 'uuid', 'parent_uuid', 'created', 'artifact_data'):
+            if key in ('job_id', 'ad_hoc_command_id', 'uuid', 'parent_uuid', 'created',):
                 event_dict[key] = event_data.pop(key)
             elif key in ('verbosity', 'pid'):
                 event_dict[key] = event_data[key]
@@ -136,7 +203,9 @@ class EventContext(object):
             fileobj.flush()
 
     def dump_begin(self, fileobj):
-        self.dump(fileobj, self.get_begin_dict())
+        begin_dict = self.get_begin_dict()
+        self.cache.set(":1:ev-{}".format(begin_dict['uuid']), begin_dict)
+        self.dump(fileobj, {'uuid': begin_dict['uuid']})
 
     def dump_end(self, fileobj):
         self.dump(fileobj, self.get_end_dict(), flush=True)
diff --git a/awx/lib/tower_display_callback/module.py b/awx/lib/tower_display_callback/module.py
index 457455e513..02c5eee432 100644
--- a/awx/lib/tower_display_callback/module.py
+++ b/awx/lib/tower_display_callback/module.py
@@ -19,8 +19,6 @@ from __future__ import (absolute_import, division, print_function)
 
 # Python
 import contextlib
-import copy
-import re
 import sys
 import uuid
@@ -29,8 +27,8 @@ from ansible.plugins.callback import CallbackBase
 from ansible.plugins.callback.default import CallbackModule as DefaultCallbackModule
 
 # Tower Display Callback
-from tower_display_callback.events import event_context
-from tower_display_callback.minimal import CallbackModule as MinimalCallbackModule
+from .events import event_context
+from .minimal import CallbackModule as MinimalCallbackModule
 
 
 class BaseCallbackModule(CallbackBase):
@@ -77,45 +75,11 @@ class BaseCallbackModule(CallbackBase):
         super(BaseCallbackModule, self).__init__()
         self.task_uuids = set()
 
-    def censor_result(self, res, no_log=False):
-        if not isinstance(res, dict):
-            if no_log:
-                return "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
-            return res
-        if res.get('_ansible_no_log', no_log):
-            new_res = {}
-            for k in self.CENSOR_FIELD_WHITELIST:
-                if k in res:
-                    new_res[k] = res[k]
-                if k == 'cmd' and k in res:
-                    if isinstance(res['cmd'], list):
-                        res['cmd'] = ' '.join(res['cmd'])
-                    if re.search(r'\s', res['cmd']):
-                        new_res['cmd'] = re.sub(r'^(([^\s\\]|\\\s)+).*$',
-                                                r'\1 ',
-                                                res['cmd'])
-            new_res['censored'] = "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
-            res = new_res
-        if 'results' in res:
-            if isinstance(res['results'], list):
-                for i in xrange(len(res['results'])):
-                    res['results'][i] = self.censor_result(res['results'][i], res.get('_ansible_no_log', no_log))
-            elif res.get('_ansible_no_log', False):
-                res['results'] = "the output has been hidden due to the fact that 'no_log: true' was specified for this result"
-        return res
-
     @contextlib.contextmanager
     def capture_event_data(self, event, **event_data):
         event_data.setdefault('uuid', str(uuid.uuid4()))
 
-        if 'res' in event_data:
-            event_data['res'] = self.censor_result(copy.deepcopy(event_data['res']))
-        res = event_data.get('res', None)
-        if res and isinstance(res, dict):
-            if 'artifact_data' in res:
-                event_data['artifact_data'] = res['artifact_data']
-
         if event not in self.EVENTS_WITHOUT_TASK:
             task = event_data.pop('task', None)
         else:
@@ -262,7 +226,7 @@ class BaseCallbackModule(CallbackBase):
         if task_uuid in self.task_uuids:
             # FIXME: When this task UUID repeats, it means the play is using the
             # free strategy, so different hosts may be running different tasks
-            # within a play. 
+            # within a play.
             return
         self.task_uuids.add(task_uuid)
         self.set_task(task)
@@ -319,6 +283,9 @@ class BaseCallbackModule(CallbackBase):
         with self.capture_event_data('playbook_on_notify', **event_data):
             super(BaseCallbackModule, self).v2_playbook_on_notify(result, handler)
 
+    '''
+    ansible_stats was retroactively added in 2.2
+    '''
     def v2_playbook_on_stats(self, stats):
         self.clear_play()
         # FIXME: Add count of plays/tasks.
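The `events.py` hunks above change how the callback plugin hands events back to Tower: `dump_begin` now stores the full event dictionary in memcache under a `:1:ev-<uuid>` key and writes only a `{'uuid': ...}` marker to stdout, so the consumer can rehydrate the full event by key instead of parsing it from job output. A minimal sketch of that round trip, using a plain dict in place of the memcache client (`FakeCache` and `rehydrate` are illustrative names, not part of the patch):

```python
import json
import uuid


class FakeCache(object):
    """Stand-in for memcache.Client: a dict with get/set semantics."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)


def dump_begin(cache, begin_dict):
    """Producer side: cache the full event dict, emit only its uuid marker."""
    cache.set(":1:ev-{}".format(begin_dict['uuid']), begin_dict)
    return json.dumps({'uuid': begin_dict['uuid']})


def rehydrate(cache, stdout_line):
    """Consumer side: look the full event back up by the uuid from stdout."""
    marker = json.loads(stdout_line)
    return cache.get(":1:ev-{}".format(marker['uuid']))


cache = FakeCache()
event = {'uuid': str(uuid.uuid4()),
         'event': 'runner_on_ok',
         'event_data': {'host': 'localhost'}}
line = dump_begin(cache, event)
assert rehydrate(cache, line) == event
```

The marker written to stdout stays a few dozen bytes regardless of event size, which is the point of the change: oversized `res` payloads no longer have to travel through job stdout at all.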
@@ -329,7 +296,9 @@ class BaseCallbackModule(CallbackBase):
             ok=stats.ok,
             processed=stats.processed,
             skipped=stats.skipped,
+            artifact_data=stats.custom.get('_run', {}) if hasattr(stats, 'custom') else {}
         )
+
         with self.capture_event_data('playbook_on_stats', **event_data):
             super(BaseCallbackModule, self).v2_playbook_on_stats(stats)
diff --git a/awx/locale/django.pot b/awx/locale/django.pot
index 7b67733c8b..ebfd9bcb4c 100644
--- a/awx/locale/django.pot
+++ b/awx/locale/django.pot
@@ -1,12 +1,14 @@
-# Ansible Tower POT file.
-# Copyright (c) 2016 Ansible, Inc.
-# All Rights Reserved.
+# SOME DESCRIPTIVE TITLE.
+# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
+# This file is distributed under the same license as the PACKAGE package.
+# FIRST AUTHOR , YEAR.
 #
+#, fuzzy
 msgid ""
 msgstr ""
-"Project-Id-Version: ansible-tower-3.1.0\n"
+"Project-Id-Version: PACKAGE VERSION\n"
 "Report-Msgid-Bugs-To: \n"
-"POT-Creation-Date: 2016-12-01 06:37-0500\n"
+"POT-Creation-Date: 2017-01-27 17:35+0000\n"
 "PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
 "Last-Translator: FULL NAME \n"
 "Language-Team: LANGUAGE \n"
@@ -15,1033 +17,1029 @@ msgstr ""
 "Content-Type: text/plain; charset=UTF-8\n"
 "Content-Transfer-Encoding: 8bit\n"
 
-#: awx/api/authentication.py:67
+#: api/authentication.py:67
 msgid "Invalid token header. No credentials provided."
 msgstr ""
 
-#: awx/api/authentication.py:70
+#: api/authentication.py:70
 msgid "Invalid token header. Token string should not contain spaces."
 msgstr ""
 
-#: awx/api/authentication.py:105
+#: api/authentication.py:105
 msgid "User inactive or deleted"
 msgstr ""
 
-#: awx/api/authentication.py:161
+#: api/authentication.py:161
 msgid "Invalid task token"
 msgstr ""
 
-#: awx/api/conf.py:12
+#: api/conf.py:12
 msgid "Idle Time Force Log Out"
 msgstr ""
 
-#: awx/api/conf.py:13
+#: api/conf.py:13
 msgid ""
 "Number of seconds that a user is inactive before they will need to login "
 "again."
msgstr "" -#: awx/api/conf.py:14 awx/api/conf.py:24 awx/api/conf.py:33 awx/sso/conf.py:124 -#: awx/sso/conf.py:135 awx/sso/conf.py:147 awx/sso/conf.py:162 +#: api/conf.py:14 api/conf.py:24 api/conf.py:33 sso/conf.py:124 +#: sso/conf.py:135 sso/conf.py:147 sso/conf.py:162 msgid "Authentication" msgstr "" -#: awx/api/conf.py:22 +#: api/conf.py:22 msgid "Maximum number of simultaneous logins" msgstr "" -#: awx/api/conf.py:23 +#: api/conf.py:23 msgid "" "Maximum number of simultaneous logins a user may have. To disable enter -1." msgstr "" -#: awx/api/conf.py:31 +#: api/conf.py:31 msgid "Enable HTTP Basic Auth" msgstr "" -#: awx/api/conf.py:32 +#: api/conf.py:32 msgid "Enable HTTP Basic Auth for the API Browser." msgstr "" -#: awx/api/generics.py:446 +#: api/generics.py:462 msgid "\"id\" is required to disassociate" msgstr "" -#: awx/api/metadata.py:50 +#: api/metadata.py:50 msgid "Database ID for this {}." msgstr "" -#: awx/api/metadata.py:51 +#: api/metadata.py:51 msgid "Name of this {}." msgstr "" -#: awx/api/metadata.py:52 +#: api/metadata.py:52 msgid "Optional description of this {}." msgstr "" -#: awx/api/metadata.py:53 +#: api/metadata.py:53 msgid "Data type for this {}." msgstr "" -#: awx/api/metadata.py:54 +#: api/metadata.py:54 msgid "URL for this {}." msgstr "" -#: awx/api/metadata.py:55 +#: api/metadata.py:55 msgid "Data structure with URLs of related resources." msgstr "" -#: awx/api/metadata.py:56 +#: api/metadata.py:56 msgid "Data structure with name/description for related resources." msgstr "" -#: awx/api/metadata.py:57 +#: api/metadata.py:57 msgid "Timestamp when this {} was created." msgstr "" -#: awx/api/metadata.py:58 +#: api/metadata.py:58 msgid "Timestamp when this {} was last modified." 
msgstr "" -#: awx/api/parsers.py:31 +#: api/parsers.py:31 #, python-format msgid "JSON parse error - %s" msgstr "" -#: awx/api/serializers.py:248 +#: api/serializers.py:248 msgid "Playbook Run" msgstr "" -#: awx/api/serializers.py:249 +#: api/serializers.py:249 msgid "Command" msgstr "" -#: awx/api/serializers.py:250 +#: api/serializers.py:250 msgid "SCM Update" msgstr "" -#: awx/api/serializers.py:251 +#: api/serializers.py:251 msgid "Inventory Sync" msgstr "" -#: awx/api/serializers.py:252 +#: api/serializers.py:252 msgid "Management Job" msgstr "" -#: awx/api/serializers.py:636 awx/api/serializers.py:694 awx/api/views.py:3994 +#: api/serializers.py:253 +msgid "Workflow Job" +msgstr "" + +#: api/serializers.py:254 +msgid "Workflow Template" +msgstr "" + +#: api/serializers.py:656 api/serializers.py:714 api/views.py:3805 #, python-format msgid "" "Standard Output too large to display (%(text_size)d bytes), only download " "supported for sizes over %(supported_size)d bytes" msgstr "" -#: awx/api/serializers.py:709 +#: api/serializers.py:729 msgid "Write-only field used to change the password." msgstr "" -#: awx/api/serializers.py:711 +#: api/serializers.py:731 msgid "Set if the account is managed by an external service" msgstr "" -#: awx/api/serializers.py:735 +#: api/serializers.py:755 msgid "Password required for new User." msgstr "" -#: awx/api/serializers.py:819 +#: api/serializers.py:839 #, python-format msgid "Unable to change %s on user managed by LDAP." msgstr "" -#: awx/api/serializers.py:971 +#: api/serializers.py:991 msgid "Organization is missing" msgstr "" -#: awx/api/serializers.py:977 +#: api/serializers.py:997 msgid "Array of playbooks available within this project." msgstr "" -#: awx/api/serializers.py:1159 +#: api/serializers.py:1179 #, python-format msgid "Invalid port specification: %s" msgstr "" -#: awx/api/serializers.py:1187 awx/main/validators.py:192 +#: api/serializers.py:1207 main/validators.py:193 msgid "Must be valid JSON or YAML." 
msgstr "" -#: awx/api/serializers.py:1244 +#: api/serializers.py:1264 msgid "Invalid group name." msgstr "" -#: awx/api/serializers.py:1319 +#: api/serializers.py:1339 msgid "" "Script must begin with a hashbang sequence: i.e.... #!/usr/bin/env python" msgstr "" -#: awx/api/serializers.py:1372 +#: api/serializers.py:1392 msgid "If 'source' is 'custom', 'source_script' must be provided." msgstr "" -#: awx/api/serializers.py:1376 +#: api/serializers.py:1396 msgid "" "The 'source_script' does not belong to the same organization as the " "inventory." msgstr "" -#: awx/api/serializers.py:1378 +#: api/serializers.py:1398 msgid "'source_script' doesn't exist." msgstr "" -#: awx/api/serializers.py:1737 +#: api/serializers.py:1757 msgid "" "Write-only field used to add user to owner role. If provided, do not give " "either team or organization. Only valid for creation." msgstr "" -#: awx/api/serializers.py:1742 +#: api/serializers.py:1762 msgid "" "Write-only field used to add team to owner role. If provided, do not give " "either user or organization. Only valid for creation." msgstr "" -#: awx/api/serializers.py:1747 +#: api/serializers.py:1767 msgid "" -"Write-only field used to add organization to owner role. If provided, do not " -"give either team or team. Only valid for creation." +"Inherit permissions from organization roles. If provided on creation, do not " +"give either user or team." msgstr "" -#: awx/api/serializers.py:1763 +#: api/serializers.py:1783 msgid "Missing 'user', 'team', or 'organization'." msgstr "" -#: awx/api/serializers.py:1776 +#: api/serializers.py:1796 msgid "" "Credential organization must be set and match before assigning to a team" msgstr "" -#: awx/api/serializers.py:1868 +#: api/serializers.py:1888 msgid "This field is required." msgstr "" -#: awx/api/serializers.py:1870 awx/api/serializers.py:1872 +#: api/serializers.py:1890 api/serializers.py:1892 msgid "Playbook not found for project." 
msgstr "" -#: awx/api/serializers.py:1874 +#: api/serializers.py:1894 msgid "Must select playbook for project." msgstr "" -#: awx/api/serializers.py:1938 awx/main/models/jobs.py:279 +#: api/serializers.py:1958 main/models/jobs.py:278 msgid "Scan jobs must be assigned a fixed inventory." msgstr "" -#: awx/api/serializers.py:1940 awx/main/models/jobs.py:282 +#: api/serializers.py:1960 main/models/jobs.py:281 msgid "Job types 'run' and 'check' must have assigned a project." msgstr "" -#: awx/api/serializers.py:1943 +#: api/serializers.py:1963 msgid "Survey Enabled cannot be used with scan jobs." msgstr "" -#: awx/api/serializers.py:2005 +#: api/serializers.py:2023 msgid "Invalid job template." msgstr "" -#: awx/api/serializers.py:2090 +#: api/serializers.py:2108 msgid "Credential not found or deleted." msgstr "" -#: awx/api/serializers.py:2092 +#: api/serializers.py:2110 msgid "Job Template Project is missing or undefined." msgstr "" -#: awx/api/serializers.py:2094 +#: api/serializers.py:2112 msgid "Job Template Inventory is missing or undefined." msgstr "" -#: awx/api/serializers.py:2379 +#: api/serializers.py:2397 #, python-format msgid "%(job_type)s is not a valid job type. The choices are %(choices)s." msgstr "" -#: awx/api/serializers.py:2384 +#: api/serializers.py:2402 msgid "Workflow job template is missing during creation." msgstr "" -#: awx/api/serializers.py:2389 +#: api/serializers.py:2407 #, python-format msgid "Cannot nest a %s inside a WorkflowJobTemplate" msgstr "" -#: awx/api/serializers.py:2625 +#: api/serializers.py:2645 #, python-format msgid "Job Template '%s' is missing or undefined." msgstr "" -#: awx/api/serializers.py:2651 +#: api/serializers.py:2671 msgid "Must be a valid JSON or YAML dictionary." 
msgstr "" -#: awx/api/serializers.py:2796 +#: api/serializers.py:2813 msgid "" "Missing required fields for Notification Configuration: notification_type" msgstr "" -#: awx/api/serializers.py:2819 +#: api/serializers.py:2836 msgid "No values specified for field '{}'" msgstr "" -#: awx/api/serializers.py:2824 +#: api/serializers.py:2841 msgid "Missing required fields for Notification Configuration: {}." msgstr "" -#: awx/api/serializers.py:2827 +#: api/serializers.py:2844 msgid "Configuration field '{}' incorrect type, expected {}." msgstr "" -#: awx/api/serializers.py:2880 +#: api/serializers.py:2897 msgid "Inventory Source must be a cloud resource." msgstr "" -#: awx/api/serializers.py:2882 +#: api/serializers.py:2899 msgid "Manual Project can not have a schedule set." msgstr "" -#: awx/api/serializers.py:2904 +#: api/serializers.py:2921 msgid "DTSTART required in rrule. Value should match: DTSTART:YYYYMMDDTHHMMSSZ" msgstr "" -#: awx/api/serializers.py:2906 +#: api/serializers.py:2923 msgid "Multiple DTSTART is not supported." msgstr "" -#: awx/api/serializers.py:2908 +#: api/serializers.py:2925 msgid "RRULE require in rrule." msgstr "" -#: awx/api/serializers.py:2910 +#: api/serializers.py:2927 msgid "Multiple RRULE is not supported." msgstr "" -#: awx/api/serializers.py:2912 +#: api/serializers.py:2929 msgid "INTERVAL required in rrule." msgstr "" -#: awx/api/serializers.py:2914 +#: api/serializers.py:2931 msgid "TZID is not supported." msgstr "" -#: awx/api/serializers.py:2916 +#: api/serializers.py:2933 msgid "SECONDLY is not supported." msgstr "" -#: awx/api/serializers.py:2918 +#: api/serializers.py:2935 msgid "Multiple BYMONTHDAYs not supported." msgstr "" -#: awx/api/serializers.py:2920 +#: api/serializers.py:2937 msgid "Multiple BYMONTHs not supported." msgstr "" -#: awx/api/serializers.py:2922 +#: api/serializers.py:2939 msgid "BYDAY with numeric prefix not supported." 
msgstr "" -#: awx/api/serializers.py:2924 +#: api/serializers.py:2941 msgid "BYYEARDAY not supported." msgstr "" -#: awx/api/serializers.py:2926 +#: api/serializers.py:2943 msgid "BYWEEKNO not supported." msgstr "" -#: awx/api/serializers.py:2930 +#: api/serializers.py:2947 msgid "COUNT > 999 is unsupported." msgstr "" -#: awx/api/serializers.py:2934 +#: api/serializers.py:2951 msgid "rrule parsing failed validation." msgstr "" -#: awx/api/serializers.py:2952 +#: api/serializers.py:2969 msgid "" "A summary of the new and changed values when an object is created, updated, " "or deleted" msgstr "" -#: awx/api/serializers.py:2954 +#: api/serializers.py:2971 msgid "" "For create, update, and delete events this is the object type that was " "affected. For associate and disassociate events this is the object type " "associated or disassociated with object2." msgstr "" -#: awx/api/serializers.py:2957 +#: api/serializers.py:2974 msgid "" "Unpopulated for create, update, and delete events. For associate and " "disassociate events this is the object type that object1 is being associated " "with." msgstr "" -#: awx/api/serializers.py:2960 +#: api/serializers.py:2977 msgid "The action taken with respect to the given object(s)." msgstr "" -#: awx/api/serializers.py:3060 +#: api/serializers.py:3077 msgid "Unable to login with provided credentials." msgstr "" -#: awx/api/serializers.py:3062 +#: api/serializers.py:3079 msgid "Must include \"username\" and \"password\"." msgstr "" -#: awx/api/views.py:95 awx/templates/rest_framework/api.html:28 +#: api/views.py:99 +msgid "Your license does not allow use of the activity stream." +msgstr "" + +#: api/views.py:109 +msgid "Your license does not permit use of system tracking." +msgstr "" + +#: api/views.py:119 +msgid "Your license does not allow use of workflows." 
+msgstr "" + +#: api/views.py:127 templates/rest_framework/api.html:28 msgid "REST API" msgstr "" -#: awx/api/views.py:102 awx/templates/rest_framework/api.html:4 +#: api/views.py:134 templates/rest_framework/api.html:4 msgid "Ansible Tower REST API" msgstr "" -#: awx/api/views.py:118 +#: api/views.py:150 msgid "Version 1" msgstr "" -#: awx/api/views.py:169 +#: api/views.py:201 msgid "Ping" msgstr "" -#: awx/api/views.py:198 awx/conf/apps.py:10 +#: api/views.py:230 conf/apps.py:12 msgid "Configuration" msgstr "" -#: awx/api/views.py:248 +#: api/views.py:283 msgid "Invalid license data" msgstr "" -#: awx/api/views.py:250 +#: api/views.py:285 msgid "Missing 'eula_accepted' property" msgstr "" -#: awx/api/views.py:254 +#: api/views.py:289 msgid "'eula_accepted' value is invalid" msgstr "" -#: awx/api/views.py:257 +#: api/views.py:292 msgid "'eula_accepted' must be True" msgstr "" -#: awx/api/views.py:263 +#: api/views.py:299 msgid "Invalid JSON" msgstr "" -#: awx/api/views.py:270 +#: api/views.py:307 msgid "Invalid License" msgstr "" -#: awx/api/views.py:278 +#: api/views.py:317 msgid "Invalid license" msgstr "" -#: awx/api/views.py:289 +#: api/views.py:325 #, python-format msgid "Failed to remove license (%s)" msgstr "" -#: awx/api/views.py:294 +#: api/views.py:330 msgid "Dashboard" msgstr "" -#: awx/api/views.py:400 +#: api/views.py:436 msgid "Dashboard Jobs Graphs" msgstr "" -#: awx/api/views.py:436 +#: api/views.py:472 #, python-format msgid "Unknown period \"%s\"" msgstr "" -#: awx/api/views.py:450 +#: api/views.py:486 msgid "Schedules" msgstr "" -#: awx/api/views.py:469 +#: api/views.py:505 msgid "Schedule Jobs List" msgstr "" -#: awx/api/views.py:675 +#: api/views.py:715 msgid "Your Tower license only permits a single organization to exist." 
msgstr "" -#: awx/api/views.py:803 awx/api/views.py:968 awx/api/views.py:1069 -#: awx/api/views.py:1356 awx/api/views.py:1517 awx/api/views.py:1614 -#: awx/api/views.py:1758 awx/api/views.py:1954 awx/api/views.py:2212 -#: awx/api/views.py:2528 awx/api/views.py:3121 awx/api/views.py:3194 -#: awx/api/views.py:3330 awx/api/views.py:3911 awx/api/views.py:4163 -#: awx/api/views.py:4180 -msgid "Your license does not allow use of the activity stream." -msgstr "" - -#: awx/api/views.py:906 awx/api/views.py:1270 +#: api/views.py:940 api/views.py:1299 msgid "Role 'id' field is missing." msgstr "" -#: awx/api/views.py:912 awx/api/views.py:4282 +#: api/views.py:946 api/views.py:4081 msgid "You cannot assign an Organization role as a child role for a Team." msgstr "" -#: awx/api/views.py:919 awx/api/views.py:4288 +#: api/views.py:950 api/views.py:4095 +msgid "You cannot grant system-level permissions to a team." +msgstr "" + +#: api/views.py:957 api/views.py:4087 msgid "" "You cannot grant credential access to a team when the Organization field " "isn't set, or belongs to a different organization" msgstr "" -#: awx/api/views.py:1018 +#: api/views.py:1047 msgid "Cannot delete project." msgstr "" -#: awx/api/views.py:1047 +#: api/views.py:1076 msgid "Project Schedules" msgstr "" -#: awx/api/views.py:1156 awx/api/views.py:2307 awx/api/views.py:3301 +#: api/views.py:1180 api/views.py:2270 api/views.py:3276 msgid "Cannot delete job resource when associated workflow job is running." msgstr "" -#: awx/api/views.py:1230 +#: api/views.py:1257 msgid "Me" msgstr "" -#: awx/api/views.py:1274 awx/api/views.py:4237 +#: api/views.py:1303 api/views.py:4036 msgid "You may not perform any action with your own admin_role." 
msgstr "" -#: awx/api/views.py:1280 awx/api/views.py:4241 +#: api/views.py:1309 api/views.py:4040 msgid "You may not change the membership of a users admin_role" msgstr "" -#: awx/api/views.py:1285 awx/api/views.py:4246 +#: api/views.py:1314 api/views.py:4045 msgid "" "You cannot grant credential access to a user not in the credentials' " "organization" msgstr "" -#: awx/api/views.py:1289 awx/api/views.py:4250 +#: api/views.py:1318 api/views.py:4049 msgid "You cannot grant private credential access to another user" msgstr "" -#: awx/api/views.py:1397 +#: api/views.py:1416 #, python-format msgid "Cannot change %s." msgstr "" -#: awx/api/views.py:1403 +#: api/views.py:1422 msgid "Cannot delete user." msgstr "" -#: awx/api/views.py:1559 +#: api/views.py:1570 msgid "Cannot delete inventory script." msgstr "" -#: awx/api/views.py:1777 -msgid "Your license does not permit use of system tracking." -msgstr "" - -#: awx/api/views.py:1824 +#: api/views.py:1805 msgid "Fact not found." msgstr "" -#: awx/api/views.py:2154 +#: api/views.py:2125 msgid "Inventory Source List" msgstr "" -#: awx/api/views.py:2182 +#: api/views.py:2153 msgid "Cannot delete inventory source." msgstr "" -#: awx/api/views.py:2190 +#: api/views.py:2161 msgid "Inventory Source Schedules" msgstr "" -#: awx/api/views.py:2229 +#: api/views.py:2191 msgid "Notification Templates can only be assigned when source is one of {}." msgstr "" -#: awx/api/views.py:2433 +#: api/views.py:2402 msgid "Job Template Schedules" msgstr "" -#: awx/api/views.py:2452 awx/api/views.py:2462 +#: api/views.py:2422 api/views.py:2438 msgid "Your license does not allow adding surveys." msgstr "" -#: awx/api/views.py:2469 +#: api/views.py:2445 msgid "'name' missing from survey spec." msgstr "" -#: awx/api/views.py:2471 +#: api/views.py:2447 msgid "'description' missing from survey spec." msgstr "" -#: awx/api/views.py:2473 +#: api/views.py:2449 msgid "'spec' missing from survey spec." 
msgstr "" -#: awx/api/views.py:2475 +#: api/views.py:2451 msgid "'spec' must be a list of items." msgstr "" -#: awx/api/views.py:2477 +#: api/views.py:2453 msgid "'spec' doesn't contain any items." msgstr "" -#: awx/api/views.py:2482 +#: api/views.py:2459 #, python-format msgid "Survey question %s is not a json object." msgstr "" -#: awx/api/views.py:2484 +#: api/views.py:2461 #, python-format msgid "'type' missing from survey question %s." msgstr "" -#: awx/api/views.py:2486 +#: api/views.py:2463 #, python-format msgid "'question_name' missing from survey question %s." msgstr "" -#: awx/api/views.py:2488 +#: api/views.py:2465 #, python-format msgid "'variable' missing from survey question %s." msgstr "" -#: awx/api/views.py:2490 +#: api/views.py:2467 #, python-format msgid "'variable' '%(item)s' duplicated in survey question %(survey)s." msgstr "" -#: awx/api/views.py:2495 +#: api/views.py:2472 #, python-format msgid "'required' missing from survey question %s." msgstr "" -#: awx/api/views.py:2702 +#: api/views.py:2683 msgid "No matching host could be found!" msgstr "" -#: awx/api/views.py:2705 +#: api/views.py:2686 msgid "Multiple hosts matched the request!" msgstr "" -#: awx/api/views.py:2710 +#: api/views.py:2691 msgid "Cannot start automatically, user input required!" msgstr "" -#: awx/api/views.py:2717 +#: api/views.py:2698 msgid "Host callback job already pending." msgstr "" -#: awx/api/views.py:2730 +#: api/views.py:2711 msgid "Error starting job!" msgstr "" -#: awx/api/views.py:3053 +#: api/views.py:3040 msgid "Workflow Job Template Schedules" msgstr "" -#: awx/api/views.py:3208 awx/api/views.py:3933 +#: api/views.py:3175 api/views.py:3714 msgid "Superuser privileges needed." 
msgstr "" -#: awx/api/views.py:3238 +#: api/views.py:3207 msgid "System Job Template Schedules" msgstr "" -#: awx/api/views.py:3428 +#: api/views.py:3399 msgid "Job Host Summaries List" msgstr "" -#: awx/api/views.py:3470 +#: api/views.py:3441 msgid "Job Event Children List" msgstr "" -#: awx/api/views.py:3479 +#: api/views.py:3450 msgid "Job Event Hosts List" msgstr "" -#: awx/api/views.py:3488 +#: api/views.py:3459 msgid "Job Events List" msgstr "" -#: awx/api/views.py:3509 -msgid "Job Plays List" -msgstr "" - -#: awx/api/views.py:3584 -msgid "Job Play Tasks List" -msgstr "" - -#: awx/api/views.py:3599 -msgid "Job not found." -msgstr "" - -#: awx/api/views.py:3603 -msgid "'event_id' not provided." -msgstr "" - -#: awx/api/views.py:3607 -msgid "Parent event not found." -msgstr "" - -#: awx/api/views.py:3879 +#: api/views.py:3668 msgid "Ad Hoc Command Events List" msgstr "" -#: awx/api/views.py:4043 +#: api/views.py:3862 #, python-format msgid "Error generating stdout download file: %s" msgstr "" -#: awx/api/views.py:4089 +#: api/views.py:3907 msgid "Delete not allowed while there are pending notifications" msgstr "" -#: awx/api/views.py:4096 -msgid "NotificationTemplate Test" +#: api/views.py:3914 +msgid "Notification Template Test" msgstr "" -#: awx/api/views.py:4231 +#: api/views.py:4030 msgid "User 'id' field is missing." msgstr "" -#: awx/api/views.py:4274 +#: api/views.py:4073 msgid "Team 'id' field is missing." 
msgstr "" -#: awx/conf/conf.py:20 +#: conf/conf.py:20 msgid "Bud Frogs" msgstr "" -#: awx/conf/conf.py:21 +#: conf/conf.py:21 msgid "Bunny" msgstr "" -#: awx/conf/conf.py:22 +#: conf/conf.py:22 msgid "Cheese" msgstr "" -#: awx/conf/conf.py:23 +#: conf/conf.py:23 msgid "Daemon" msgstr "" -#: awx/conf/conf.py:24 +#: conf/conf.py:24 msgid "Default Cow" msgstr "" -#: awx/conf/conf.py:25 +#: conf/conf.py:25 msgid "Dragon" msgstr "" -#: awx/conf/conf.py:26 +#: conf/conf.py:26 msgid "Elephant in Snake" msgstr "" -#: awx/conf/conf.py:27 +#: conf/conf.py:27 msgid "Elephant" msgstr "" -#: awx/conf/conf.py:28 +#: conf/conf.py:28 msgid "Eyes" msgstr "" -#: awx/conf/conf.py:29 +#: conf/conf.py:29 msgid "Hello Kitty" msgstr "" -#: awx/conf/conf.py:30 +#: conf/conf.py:30 msgid "Kitty" msgstr "" -#: awx/conf/conf.py:31 +#: conf/conf.py:31 msgid "Luke Koala" msgstr "" -#: awx/conf/conf.py:32 +#: conf/conf.py:32 msgid "Meow" msgstr "" -#: awx/conf/conf.py:33 +#: conf/conf.py:33 msgid "Milk" msgstr "" -#: awx/conf/conf.py:34 +#: conf/conf.py:34 msgid "Moofasa" msgstr "" -#: awx/conf/conf.py:35 +#: conf/conf.py:35 msgid "Moose" msgstr "" -#: awx/conf/conf.py:36 +#: conf/conf.py:36 msgid "Ren" msgstr "" -#: awx/conf/conf.py:37 +#: conf/conf.py:37 msgid "Sheep" msgstr "" -#: awx/conf/conf.py:38 +#: conf/conf.py:38 msgid "Small Cow" msgstr "" -#: awx/conf/conf.py:39 +#: conf/conf.py:39 msgid "Stegosaurus" msgstr "" -#: awx/conf/conf.py:40 +#: conf/conf.py:40 msgid "Stimpy" msgstr "" -#: awx/conf/conf.py:41 +#: conf/conf.py:41 msgid "Super Milker" msgstr "" -#: awx/conf/conf.py:42 +#: conf/conf.py:42 msgid "Three Eyes" msgstr "" -#: awx/conf/conf.py:43 +#: conf/conf.py:43 msgid "Turkey" msgstr "" -#: awx/conf/conf.py:44 +#: conf/conf.py:44 msgid "Turtle" msgstr "" -#: awx/conf/conf.py:45 +#: conf/conf.py:45 msgid "Tux" msgstr "" -#: awx/conf/conf.py:46 +#: conf/conf.py:46 msgid "Udder" msgstr "" -#: awx/conf/conf.py:47 +#: conf/conf.py:47 msgid "Vader Koala" msgstr "" -#: 
awx/conf/conf.py:48 +#: conf/conf.py:48 msgid "Vader" msgstr "" -#: awx/conf/conf.py:49 +#: conf/conf.py:49 msgid "WWW" msgstr "" -#: awx/conf/conf.py:52 +#: conf/conf.py:52 msgid "Cow Selection" msgstr "" -#: awx/conf/conf.py:53 +#: conf/conf.py:53 msgid "Select which cow to use with cowsay when running jobs." msgstr "" -#: awx/conf/conf.py:54 awx/conf/conf.py:75 +#: conf/conf.py:54 conf/conf.py:75 msgid "Cows" msgstr "" -#: awx/conf/conf.py:73 +#: conf/conf.py:73 msgid "Example Read-Only Setting" msgstr "" -#: awx/conf/conf.py:74 +#: conf/conf.py:74 msgid "Example setting that cannot be changed." msgstr "" -#: awx/conf/conf.py:90 +#: conf/conf.py:93 msgid "Example Setting" msgstr "" -#: awx/conf/conf.py:91 +#: conf/conf.py:94 msgid "Example setting which can be different for each user." msgstr "" -#: awx/conf/conf.py:92 awx/conf/registry.py:67 awx/conf/views.py:46 +#: conf/conf.py:95 conf/registry.py:67 conf/views.py:46 msgid "User" msgstr "" -#: awx/conf/fields.py:38 +#: conf/fields.py:38 msgid "Enter a valid URL" msgstr "" -#: awx/conf/license.py:23 +#: conf/license.py:19 msgid "Your Tower license does not allow that." msgstr "" -#: awx/conf/management/commands/migrate_to_database_settings.py:41 +#: conf/management/commands/migrate_to_database_settings.py:41 msgid "Only show which settings would be commented/migrated." msgstr "" -#: awx/conf/management/commands/migrate_to_database_settings.py:48 +#: conf/management/commands/migrate_to_database_settings.py:48 msgid "Skip over settings that would raise an error when commenting/migrating." msgstr "" -#: awx/conf/management/commands/migrate_to_database_settings.py:55 +#: conf/management/commands/migrate_to_database_settings.py:55 msgid "Skip commenting out settings in files." msgstr "" -#: awx/conf/management/commands/migrate_to_database_settings.py:61 +#: conf/management/commands/migrate_to_database_settings.py:61 msgid "Backup existing settings files with this suffix." 
msgstr "" -#: awx/conf/registry.py:55 +#: conf/registry.py:55 msgid "All" msgstr "" -#: awx/conf/registry.py:56 +#: conf/registry.py:56 msgid "Changed" msgstr "" -#: awx/conf/registry.py:68 +#: conf/registry.py:68 msgid "User-Defaults" msgstr "" -#: awx/conf/views.py:38 +#: conf/views.py:38 msgid "Setting Categories" msgstr "" -#: awx/conf/views.py:61 +#: conf/views.py:61 msgid "Setting Detail" msgstr "" -#: awx/main/access.py:255 +#: main/access.py:255 #, python-format msgid "Bad data found in related field %s." msgstr "" -#: awx/main/access.py:296 +#: main/access.py:296 msgid "License is missing." msgstr "" -#: awx/main/access.py:298 +#: main/access.py:298 msgid "License has expired." msgstr "" -#: awx/main/access.py:303 +#: main/access.py:303 #, python-format msgid "License count of %s instances has been reached." msgstr "" -#: awx/main/access.py:305 +#: main/access.py:305 #, python-format msgid "License count of %s instances has been exceeded." msgstr "" -#: awx/main/access.py:307 +#: main/access.py:307 msgid "Host count exceeds available instances." msgstr "" -#: awx/main/access.py:311 +#: main/access.py:311 #, python-format msgid "Feature %s is not enabled in the active license." msgstr "" -#: awx/main/access.py:313 +#: main/access.py:313 msgid "Features not found in active license." msgstr "" -#: awx/main/access.py:507 awx/main/access.py:574 awx/main/access.py:694 -#: awx/main/access.py:965 awx/main/access.py:1206 awx/main/access.py:1594 +#: main/access.py:511 main/access.py:578 main/access.py:698 main/access.py:961 +#: main/access.py:1200 main/access.py:1597 msgid "Resource is being used by running jobs" msgstr "" -#: awx/main/access.py:618 +#: main/access.py:622 msgid "Unable to change inventory on a host." msgstr "" -#: awx/main/access.py:630 awx/main/access.py:675 +#: main/access.py:634 main/access.py:679 msgid "Cannot associate two items from different inventories." 
msgstr "" -#: awx/main/access.py:663 +#: main/access.py:667 msgid "Unable to change inventory on a group." msgstr "" -#: awx/main/access.py:885 +#: main/access.py:881 msgid "Unable to change organization on a team." msgstr "" -#: awx/main/access.py:898 +#: main/access.py:894 msgid "The {} role cannot be assigned to a team" msgstr "" -#: awx/main/access.py:900 +#: main/access.py:896 msgid "The admin_role for a User cannot be assigned to a team" msgstr "" -#: awx/main/apps.py:9 +#: main/access.py:1670 +msgid "" +"You do not have permission to the workflow job resources required for " +"relaunch." +msgstr "" + +#: main/apps.py:9 msgid "Main" msgstr "" -#: awx/main/conf.py:17 +#: main/conf.py:17 msgid "Enable Activity Stream" msgstr "" -#: awx/main/conf.py:18 +#: main/conf.py:18 msgid "Enable capturing activity for the Tower activity stream." msgstr "" -#: awx/main/conf.py:19 awx/main/conf.py:29 awx/main/conf.py:39 -#: awx/main/conf.py:48 awx/main/conf.py:60 awx/main/conf.py:78 -#: awx/main/conf.py:103 +#: main/conf.py:19 main/conf.py:29 main/conf.py:39 main/conf.py:48 +#: main/conf.py:60 main/conf.py:78 main/conf.py:103 msgid "System" msgstr "" -#: awx/main/conf.py:27 +#: main/conf.py:27 msgid "Enable Activity Stream for Inventory Sync" msgstr "" -#: awx/main/conf.py:28 +#: main/conf.py:28 msgid "" "Enable capturing activity for the Tower activity stream when running " "inventory sync." msgstr "" -#: awx/main/conf.py:37 +#: main/conf.py:37 msgid "All Users Visible to Organization Admins" msgstr "" -#: awx/main/conf.py:38 +#: main/conf.py:38 msgid "" "Controls whether any Organization Admin can view all users, even those not " "associated with their Organization." msgstr "" -#: awx/main/conf.py:46 +#: main/conf.py:46 msgid "Enable Tower Administrator Alerts" msgstr "" -#: awx/main/conf.py:47 +#: main/conf.py:47 msgid "" "Allow Tower to email Admin users for system events that may require " "attention." 
msgstr "" -#: awx/main/conf.py:57 +#: main/conf.py:57 msgid "Base URL of the Tower host" msgstr "" -#: awx/main/conf.py:58 +#: main/conf.py:58 msgid "" "This setting is used by services like notifications to render a valid url to " "the Tower host." msgstr "" -#: awx/main/conf.py:67 +#: main/conf.py:67 msgid "Remote Host Headers" msgstr "" -#: awx/main/conf.py:68 +#: main/conf.py:68 msgid "" "HTTP headers and meta keys to search to determine remote host name or IP. " "Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if " @@ -1056,1459 +1054,1640 @@ msgid "" "REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']" msgstr "" -#: awx/main/conf.py:99 +#: main/conf.py:99 msgid "Tower License" msgstr "" -#: awx/main/conf.py:100 +#: main/conf.py:100 msgid "" "The license controls which features and functionality are enabled in Tower. " "Use /api/v1/config/ to update or change the license." msgstr "" -#: awx/main/conf.py:110 +#: main/conf.py:110 msgid "Ansible Modules Allowed for Ad Hoc Jobs" msgstr "" -#: awx/main/conf.py:111 +#: main/conf.py:111 msgid "List of modules allowed to be used by ad-hoc jobs." msgstr "" -#: awx/main/conf.py:112 awx/main/conf.py:121 awx/main/conf.py:130 -#: awx/main/conf.py:139 awx/main/conf.py:148 awx/main/conf.py:158 -#: awx/main/conf.py:168 awx/main/conf.py:178 awx/main/conf.py:187 -#: awx/main/conf.py:199 awx/main/conf.py:211 awx/main/conf.py:223 +#: main/conf.py:112 main/conf.py:121 main/conf.py:130 main/conf.py:140 +#: main/conf.py:150 main/conf.py:160 main/conf.py:170 main/conf.py:180 +#: main/conf.py:190 main/conf.py:202 main/conf.py:214 main/conf.py:226 msgid "Jobs" msgstr "" -#: awx/main/conf.py:119 +#: main/conf.py:119 msgid "Enable job isolation" msgstr "" -#: awx/main/conf.py:120 +#: main/conf.py:120 msgid "" "Isolates an Ansible job from protected parts of the Tower system to prevent " "exposing sensitive information." 
msgstr "" -#: awx/main/conf.py:128 +#: main/conf.py:128 msgid "Job isolation execution path" msgstr "" -#: awx/main/conf.py:129 +#: main/conf.py:129 msgid "" "Create temporary working directories for isolated jobs in this location." msgstr "" -#: awx/main/conf.py:137 +#: main/conf.py:138 msgid "Paths to hide from isolated jobs" msgstr "" -#: awx/main/conf.py:138 +#: main/conf.py:139 msgid "Additional paths to hide from isolated processes." msgstr "" -#: awx/main/conf.py:146 +#: main/conf.py:148 msgid "Paths to expose to isolated jobs" msgstr "" -#: awx/main/conf.py:147 +#: main/conf.py:149 msgid "" "Whitelist of paths that would otherwise be hidden to expose to isolated jobs." msgstr "" -#: awx/main/conf.py:156 +#: main/conf.py:158 msgid "Standard Output Maximum Display Size" msgstr "" -#: awx/main/conf.py:157 +#: main/conf.py:159 msgid "" "Maximum Size of Standard Output in bytes to display before requiring the " "output be downloaded." msgstr "" -#: awx/main/conf.py:166 +#: main/conf.py:168 msgid "Job Event Standard Output Maximum Display Size" msgstr "" -#: awx/main/conf.py:167 +#: main/conf.py:169 msgid "" "Maximum Size of Standard Output in bytes to display for a single job or ad " "hoc command event. `stdout` will end with `…` when truncated." msgstr "" -#: awx/main/conf.py:176 +#: main/conf.py:178 msgid "Maximum Scheduled Jobs" msgstr "" -#: awx/main/conf.py:177 +#: main/conf.py:179 msgid "" "Maximum number of the same job template that can be waiting to run when " "launching from a schedule before no more are created." msgstr "" -#: awx/main/conf.py:185 +#: main/conf.py:188 msgid "Ansible Callback Plugins" msgstr "" -#: awx/main/conf.py:186 +#: main/conf.py:189 msgid "" "List of paths to search for extra callback plugins to be used when running " "jobs." msgstr "" -#: awx/main/conf.py:196 +#: main/conf.py:199 msgid "Default Job Timeout" msgstr "" -#: awx/main/conf.py:197 +#: main/conf.py:200 msgid "" "Maximum time to allow jobs to run. 
Use value of 0 to indicate that no " "timeout should be imposed. A timeout set on an individual job template will " "override this." msgstr "" -#: awx/main/conf.py:208 +#: main/conf.py:211 msgid "Default Inventory Update Timeout" msgstr "" -#: awx/main/conf.py:209 +#: main/conf.py:212 msgid "" "Maximum time to allow inventory updates to run. Use value of 0 to indicate " "that no timeout should be imposed. A timeout set on an individual inventory " "source will override this." msgstr "" -#: awx/main/conf.py:220 +#: main/conf.py:223 msgid "Default Project Update Timeout" msgstr "" -#: awx/main/conf.py:221 +#: main/conf.py:224 msgid "" "Maximum time to allow project updates to run. Use value of 0 to indicate " "that no timeout should be imposed. A timeout set on an individual project " "will override this." msgstr "" -#: awx/main/models/activity_stream.py:22 +#: main/conf.py:234 +msgid "Logging Aggregator" +msgstr "" + +#: main/conf.py:235 +msgid "Hostname/IP where external logs will be sent to." +msgstr "" + +#: main/conf.py:236 main/conf.py:245 main/conf.py:255 main/conf.py:264 +#: main/conf.py:274 main/conf.py:288 main/conf.py:300 main/conf.py:309 +msgid "Logging" +msgstr "" + +#: main/conf.py:243 +msgid "Logging Aggregator Port" +msgstr "" + +#: main/conf.py:244 +msgid "Port on Logging Aggregator to send logs to (if required)." +msgstr "" + +#: main/conf.py:253 +msgid "Logging Aggregator Type" +msgstr "" + +#: main/conf.py:254 +msgid "Format messages for the chosen log aggregator." +msgstr "" + +#: main/conf.py:262 +msgid "Logging Aggregator Username" +msgstr "" + +#: main/conf.py:263 +msgid "Username for external log aggregator (if required)." +msgstr "" + +#: main/conf.py:272 +msgid "Logging Aggregator Password/Token" +msgstr "" + +#: main/conf.py:273 +msgid "" +"Password or authentication token for external log aggregator (if required)." 
+msgstr "" + +#: main/conf.py:281 +msgid "Loggers to send data to the log aggregator from" +msgstr "" + +#: main/conf.py:282 +msgid "" +"List of loggers that will send HTTP logs to the collector, these can include " +"any or all of: \n" +"awx - Tower service logs\n" +"activity_stream - activity stream records\n" +"job_events - callback data from Ansible job events\n" +"system_tracking - facts gathered from scan jobs." +msgstr "" + +#: main/conf.py:295 +msgid "Log System Tracking Facts Individually" +msgstr "" + +#: main/conf.py:296 +msgid "" +"If set, system tracking facts will be sent for each package, service, " +"or other item found in a scan, allowing for greater search query granularity. " +"If unset, facts will be sent as a single dictionary, allowing for greater " +"efficiency in fact processing." +msgstr "" + +#: main/conf.py:307 +msgid "Enable External Logging" +msgstr "" + +#: main/conf.py:308 +msgid "Enable sending logs to external log aggregator." +msgstr "" + +#: main/models/activity_stream.py:22 msgid "Entity Created" msgstr "" -#: awx/main/models/activity_stream.py:23 +#: main/models/activity_stream.py:23 msgid "Entity Updated" msgstr "" -#: awx/main/models/activity_stream.py:24 +#: main/models/activity_stream.py:24 msgid "Entity Deleted" msgstr "" -#: awx/main/models/activity_stream.py:25 +#: main/models/activity_stream.py:25 msgid "Entity Associated with another Entity" msgstr "" -#: awx/main/models/activity_stream.py:26 +#: main/models/activity_stream.py:26 msgid "Entity was Disassociated with another Entity" msgstr "" -#: awx/main/models/ad_hoc_commands.py:96 +#: main/models/ad_hoc_commands.py:96 msgid "No valid inventory." msgstr "" -#: awx/main/models/ad_hoc_commands.py:103 awx/main/models/jobs.py:162 +#: main/models/ad_hoc_commands.py:103 main/models/jobs.py:161 msgid "You must provide a machine / SSH credential."
msgstr "" -#: awx/main/models/ad_hoc_commands.py:114 -#: awx/main/models/ad_hoc_commands.py:122 +#: main/models/ad_hoc_commands.py:114 main/models/ad_hoc_commands.py:122 msgid "Invalid type for ad hoc command" msgstr "" -#: awx/main/models/ad_hoc_commands.py:117 +#: main/models/ad_hoc_commands.py:117 msgid "Unsupported module for ad hoc commands." msgstr "" -#: awx/main/models/ad_hoc_commands.py:125 +#: main/models/ad_hoc_commands.py:125 #, python-format msgid "No argument passed to %s module." msgstr "" -#: awx/main/models/ad_hoc_commands.py:220 awx/main/models/jobs.py:766 +#: main/models/ad_hoc_commands.py:222 main/models/jobs.py:763 msgid "Host Failed" msgstr "" -#: awx/main/models/ad_hoc_commands.py:221 awx/main/models/jobs.py:767 +#: main/models/ad_hoc_commands.py:223 main/models/jobs.py:764 msgid "Host OK" msgstr "" -#: awx/main/models/ad_hoc_commands.py:222 awx/main/models/jobs.py:770 +#: main/models/ad_hoc_commands.py:224 main/models/jobs.py:767 msgid "Host Unreachable" msgstr "" -#: awx/main/models/ad_hoc_commands.py:227 awx/main/models/jobs.py:769 +#: main/models/ad_hoc_commands.py:229 main/models/jobs.py:766 msgid "Host Skipped" msgstr "" -#: awx/main/models/ad_hoc_commands.py:237 awx/main/models/jobs.py:797 +#: main/models/ad_hoc_commands.py:239 main/models/jobs.py:794 msgid "Debug" msgstr "" -#: awx/main/models/ad_hoc_commands.py:238 awx/main/models/jobs.py:798 +#: main/models/ad_hoc_commands.py:240 main/models/jobs.py:795 msgid "Verbose" msgstr "" -#: awx/main/models/ad_hoc_commands.py:239 awx/main/models/jobs.py:799 +#: main/models/ad_hoc_commands.py:241 main/models/jobs.py:796 msgid "Deprecated" msgstr "" -#: awx/main/models/ad_hoc_commands.py:240 awx/main/models/jobs.py:800 +#: main/models/ad_hoc_commands.py:242 main/models/jobs.py:797 msgid "Warning" msgstr "" -#: awx/main/models/ad_hoc_commands.py:241 awx/main/models/jobs.py:801 +#: main/models/ad_hoc_commands.py:243 main/models/jobs.py:798 msgid "System Warning" msgstr "" -#: 
awx/main/models/ad_hoc_commands.py:242 awx/main/models/jobs.py:802 -#: awx/main/models/unified_jobs.py:62 +#: main/models/ad_hoc_commands.py:244 main/models/jobs.py:799 +#: main/models/unified_jobs.py:64 msgid "Error" msgstr "" -#: awx/main/models/base.py:45 awx/main/models/base.py:51 -#: awx/main/models/base.py:56 +#: main/models/base.py:45 main/models/base.py:51 main/models/base.py:56 msgid "Run" msgstr "" -#: awx/main/models/base.py:46 awx/main/models/base.py:52 -#: awx/main/models/base.py:57 +#: main/models/base.py:46 main/models/base.py:52 main/models/base.py:57 msgid "Check" msgstr "" -#: awx/main/models/base.py:47 +#: main/models/base.py:47 msgid "Scan" msgstr "" -#: awx/main/models/base.py:61 +#: main/models/base.py:61 msgid "Read Inventory" msgstr "" -#: awx/main/models/base.py:62 +#: main/models/base.py:62 msgid "Edit Inventory" msgstr "" -#: awx/main/models/base.py:63 +#: main/models/base.py:63 msgid "Administrate Inventory" msgstr "" -#: awx/main/models/base.py:64 +#: main/models/base.py:64 msgid "Deploy To Inventory" msgstr "" -#: awx/main/models/base.py:65 +#: main/models/base.py:65 msgid "Deploy To Inventory (Dry Run)" msgstr "" -#: awx/main/models/base.py:66 +#: main/models/base.py:66 msgid "Scan an Inventory" msgstr "" -#: awx/main/models/base.py:67 +#: main/models/base.py:67 msgid "Create a Job Template" msgstr "" -#: awx/main/models/credential.py:33 +#: main/models/credential.py:33 msgid "Machine" msgstr "" -#: awx/main/models/credential.py:34 +#: main/models/credential.py:34 msgid "Network" msgstr "" -#: awx/main/models/credential.py:35 +#: main/models/credential.py:35 msgid "Source Control" msgstr "" -#: awx/main/models/credential.py:36 +#: main/models/credential.py:36 msgid "Amazon Web Services" msgstr "" -#: awx/main/models/credential.py:37 +#: main/models/credential.py:37 msgid "Rackspace" msgstr "" -#: awx/main/models/credential.py:38 awx/main/models/inventory.py:712 +#: main/models/credential.py:38 main/models/inventory.py:713 msgid 
"VMware vCenter" msgstr "" -#: awx/main/models/credential.py:39 awx/main/models/inventory.py:713 +#: main/models/credential.py:39 main/models/inventory.py:714 msgid "Red Hat Satellite 6" msgstr "" -#: awx/main/models/credential.py:40 awx/main/models/inventory.py:714 +#: main/models/credential.py:40 main/models/inventory.py:715 msgid "Red Hat CloudForms" msgstr "" -#: awx/main/models/credential.py:41 awx/main/models/inventory.py:709 +#: main/models/credential.py:41 main/models/inventory.py:710 msgid "Google Compute Engine" msgstr "" -#: awx/main/models/credential.py:42 awx/main/models/inventory.py:710 +#: main/models/credential.py:42 main/models/inventory.py:711 msgid "Microsoft Azure Classic (deprecated)" msgstr "" -#: awx/main/models/credential.py:43 awx/main/models/inventory.py:711 +#: main/models/credential.py:43 main/models/inventory.py:712 msgid "Microsoft Azure Resource Manager" msgstr "" -#: awx/main/models/credential.py:44 awx/main/models/inventory.py:715 +#: main/models/credential.py:44 main/models/inventory.py:716 msgid "OpenStack" msgstr "" -#: awx/main/models/credential.py:48 +#: main/models/credential.py:48 msgid "None" msgstr "" -#: awx/main/models/credential.py:49 +#: main/models/credential.py:49 msgid "Sudo" msgstr "" -#: awx/main/models/credential.py:50 +#: main/models/credential.py:50 msgid "Su" msgstr "" -#: awx/main/models/credential.py:51 +#: main/models/credential.py:51 msgid "Pbrun" msgstr "" -#: awx/main/models/credential.py:52 +#: main/models/credential.py:52 msgid "Pfexec" msgstr "" -#: awx/main/models/credential.py:101 +#: main/models/credential.py:53 +msgid "DZDO" +msgstr "" + +#: main/models/credential.py:54 +msgid "Pmrun" +msgstr "" + +#: main/models/credential.py:103 msgid "Host" msgstr "" -#: awx/main/models/credential.py:102 +#: main/models/credential.py:104 msgid "The hostname or IP address to use." 
msgstr "" -#: awx/main/models/credential.py:108 +#: main/models/credential.py:110 msgid "Username" msgstr "" -#: awx/main/models/credential.py:109 +#: main/models/credential.py:111 msgid "Username for this credential." msgstr "" -#: awx/main/models/credential.py:115 +#: main/models/credential.py:117 msgid "Password" msgstr "" -#: awx/main/models/credential.py:116 +#: main/models/credential.py:118 msgid "" "Password for this credential (or \"ASK\" to prompt the user for machine " "credentials)." msgstr "" -#: awx/main/models/credential.py:123 +#: main/models/credential.py:125 msgid "Security Token" msgstr "" -#: awx/main/models/credential.py:124 +#: main/models/credential.py:126 msgid "Security Token for this credential" msgstr "" -#: awx/main/models/credential.py:130 +#: main/models/credential.py:132 msgid "Project" msgstr "" -#: awx/main/models/credential.py:131 +#: main/models/credential.py:133 msgid "The identifier for the project." msgstr "" -#: awx/main/models/credential.py:137 +#: main/models/credential.py:139 msgid "Domain" msgstr "" -#: awx/main/models/credential.py:138 +#: main/models/credential.py:140 msgid "The identifier for the domain." msgstr "" -#: awx/main/models/credential.py:143 +#: main/models/credential.py:145 msgid "SSH private key" msgstr "" -#: awx/main/models/credential.py:144 +#: main/models/credential.py:146 msgid "RSA or DSA private key to be used instead of password." msgstr "" -#: awx/main/models/credential.py:150 +#: main/models/credential.py:152 msgid "SSH key unlock" msgstr "" -#: awx/main/models/credential.py:151 +#: main/models/credential.py:153 msgid "" "Passphrase to unlock SSH private key if encrypted (or \"ASK\" to prompt the " "user for machine credentials)." msgstr "" -#: awx/main/models/credential.py:159 +#: main/models/credential.py:161 msgid "Privilege escalation method." msgstr "" -#: awx/main/models/credential.py:165 +#: main/models/credential.py:167 msgid "Privilege escalation username." 
msgstr "" -#: awx/main/models/credential.py:171 +#: main/models/credential.py:173 msgid "Password for privilege escalation method." msgstr "" -#: awx/main/models/credential.py:177 +#: main/models/credential.py:179 msgid "Vault password (or \"ASK\" to prompt the user)." msgstr "" -#: awx/main/models/credential.py:181 +#: main/models/credential.py:183 msgid "Whether to use the authorize mechanism." msgstr "" -#: awx/main/models/credential.py:187 +#: main/models/credential.py:189 msgid "Password used by the authorize mechanism." msgstr "" -#: awx/main/models/credential.py:193 +#: main/models/credential.py:195 msgid "Client Id or Application Id for the credential" msgstr "" -#: awx/main/models/credential.py:199 +#: main/models/credential.py:201 msgid "Secret Token for this credential" msgstr "" -#: awx/main/models/credential.py:205 +#: main/models/credential.py:207 msgid "Subscription identifier for this credential" msgstr "" -#: awx/main/models/credential.py:211 +#: main/models/credential.py:213 msgid "Tenant identifier for this credential" msgstr "" -#: awx/main/models/credential.py:281 +#: main/models/credential.py:283 msgid "Host required for VMware credential." msgstr "" -#: awx/main/models/credential.py:283 +#: main/models/credential.py:285 msgid "Host required for OpenStack credential." msgstr "" -#: awx/main/models/credential.py:292 +#: main/models/credential.py:294 msgid "Access key required for AWS credential." msgstr "" -#: awx/main/models/credential.py:294 +#: main/models/credential.py:296 msgid "Username required for Rackspace credential." msgstr "" -#: awx/main/models/credential.py:297 +#: main/models/credential.py:299 msgid "Username required for VMware credential." msgstr "" -#: awx/main/models/credential.py:299 +#: main/models/credential.py:301 msgid "Username required for OpenStack credential." msgstr "" -#: awx/main/models/credential.py:305 +#: main/models/credential.py:307 msgid "Secret key required for AWS credential." 
msgstr "" -#: awx/main/models/credential.py:307 +#: main/models/credential.py:309 msgid "API key required for Rackspace credential." msgstr "" -#: awx/main/models/credential.py:309 +#: main/models/credential.py:311 msgid "Password required for VMware credential." msgstr "" -#: awx/main/models/credential.py:311 +#: main/models/credential.py:313 msgid "Password or API key required for OpenStack credential." msgstr "" -#: awx/main/models/credential.py:317 +#: main/models/credential.py:319 msgid "Project name required for OpenStack credential." msgstr "" -#: awx/main/models/credential.py:344 +#: main/models/credential.py:346 msgid "SSH key unlock must be set when SSH key is encrypted." msgstr "" -#: awx/main/models/credential.py:350 +#: main/models/credential.py:352 msgid "Credential cannot be assigned to both a user and team." msgstr "" -#: awx/main/models/fact.py:21 +#: main/models/fact.py:21 msgid "Host for the facts that the fact scan captured." msgstr "" -#: awx/main/models/fact.py:26 +#: main/models/fact.py:26 msgid "Date and time of the corresponding fact scan gathering time." msgstr "" -#: awx/main/models/fact.py:29 +#: main/models/fact.py:29 msgid "" "Arbitrary JSON structure of module facts captured at timestamp for a single " "host." msgstr "" -#: awx/main/models/inventory.py:45 +#: main/models/inventory.py:45 msgid "inventories" msgstr "" -#: awx/main/models/inventory.py:52 +#: main/models/inventory.py:52 msgid "Organization containing this inventory." msgstr "" -#: awx/main/models/inventory.py:58 +#: main/models/inventory.py:58 msgid "Inventory variables in JSON or YAML format." msgstr "" -#: awx/main/models/inventory.py:63 +#: main/models/inventory.py:63 msgid "Flag indicating whether any hosts in this inventory have failed." msgstr "" -#: awx/main/models/inventory.py:68 +#: main/models/inventory.py:68 msgid "Total number of hosts in this inventory." 
msgstr "" -#: awx/main/models/inventory.py:73 +#: main/models/inventory.py:73 msgid "Number of hosts in this inventory with active failures." msgstr "" -#: awx/main/models/inventory.py:78 +#: main/models/inventory.py:78 msgid "Total number of groups in this inventory." msgstr "" -#: awx/main/models/inventory.py:83 +#: main/models/inventory.py:83 msgid "Number of groups in this inventory with active failures." msgstr "" -#: awx/main/models/inventory.py:88 +#: main/models/inventory.py:88 msgid "" "Flag indicating whether this inventory has any external inventory sources." msgstr "" -#: awx/main/models/inventory.py:93 +#: main/models/inventory.py:93 msgid "" "Total number of external inventory sources configured within this inventory." msgstr "" -#: awx/main/models/inventory.py:98 +#: main/models/inventory.py:98 msgid "Number of external inventory sources in this inventory with failures." msgstr "" -#: awx/main/models/inventory.py:339 +#: main/models/inventory.py:339 msgid "Is this host online and available for running jobs?" msgstr "" -#: awx/main/models/inventory.py:349 +#: main/models/inventory.py:345 +msgid "" +"The value used by the remote inventory source to uniquely identify the host" +msgstr "" + +#: main/models/inventory.py:350 msgid "Host variables in JSON or YAML format." msgstr "" -#: awx/main/models/inventory.py:371 +#: main/models/inventory.py:372 msgid "Flag indicating whether the last job failed for this host." msgstr "" -#: awx/main/models/inventory.py:376 +#: main/models/inventory.py:377 msgid "" "Flag indicating whether this host was created/updated from any external " "inventory sources." msgstr "" -#: awx/main/models/inventory.py:382 +#: main/models/inventory.py:383 msgid "Inventory source(s) that created or modified this host." msgstr "" -#: awx/main/models/inventory.py:473 +#: main/models/inventory.py:474 msgid "Group variables in JSON or YAML format." 
msgstr "" -#: awx/main/models/inventory.py:479 +#: main/models/inventory.py:480 msgid "Hosts associated directly with this group." msgstr "" -#: awx/main/models/inventory.py:484 +#: main/models/inventory.py:485 msgid "Total number of hosts directly or indirectly in this group." msgstr "" -#: awx/main/models/inventory.py:489 +#: main/models/inventory.py:490 msgid "Flag indicating whether this group has any hosts with active failures." msgstr "" -#: awx/main/models/inventory.py:494 +#: main/models/inventory.py:495 msgid "Number of hosts in this group with active failures." msgstr "" -#: awx/main/models/inventory.py:499 +#: main/models/inventory.py:500 msgid "Total number of child groups contained within this group." msgstr "" -#: awx/main/models/inventory.py:504 +#: main/models/inventory.py:505 msgid "Number of child groups within this group that have active failures." msgstr "" -#: awx/main/models/inventory.py:509 +#: main/models/inventory.py:510 msgid "" "Flag indicating whether this group was created/updated from any external " "inventory sources." msgstr "" -#: awx/main/models/inventory.py:515 +#: main/models/inventory.py:516 msgid "Inventory source(s) that created or modified this group." 
msgstr "" -#: awx/main/models/inventory.py:705 awx/main/models/projects.py:42 -#: awx/main/models/unified_jobs.py:383 +#: main/models/inventory.py:706 main/models/projects.py:42 +#: main/models/unified_jobs.py:402 msgid "Manual" msgstr "" -#: awx/main/models/inventory.py:706 +#: main/models/inventory.py:707 msgid "Local File, Directory or Script" msgstr "" -#: awx/main/models/inventory.py:707 +#: main/models/inventory.py:708 msgid "Rackspace Cloud Servers" msgstr "" -#: awx/main/models/inventory.py:708 +#: main/models/inventory.py:709 msgid "Amazon EC2" msgstr "" -#: awx/main/models/inventory.py:716 +#: main/models/inventory.py:717 msgid "Custom Script" msgstr "" -#: awx/main/models/inventory.py:827 +#: main/models/inventory.py:828 msgid "Inventory source variables in YAML or JSON format." msgstr "" -#: awx/main/models/inventory.py:846 +#: main/models/inventory.py:847 msgid "" "Comma-separated list of filter expressions (EC2 only). Hosts are imported " "when ANY of the filters match." msgstr "" -#: awx/main/models/inventory.py:852 +#: main/models/inventory.py:853 msgid "Limit groups automatically created from inventory source (EC2 only)." msgstr "" -#: awx/main/models/inventory.py:856 +#: main/models/inventory.py:857 msgid "Overwrite local groups and hosts from remote inventory source." msgstr "" -#: awx/main/models/inventory.py:860 +#: main/models/inventory.py:861 msgid "Overwrite local variables from remote inventory source." 
msgstr "" -#: awx/main/models/inventory.py:892 +#: main/models/inventory.py:893 msgid "Availability Zone" msgstr "" -#: awx/main/models/inventory.py:893 +#: main/models/inventory.py:894 msgid "Image ID" msgstr "" -#: awx/main/models/inventory.py:894 +#: main/models/inventory.py:895 msgid "Instance ID" msgstr "" -#: awx/main/models/inventory.py:895 +#: main/models/inventory.py:896 msgid "Instance Type" msgstr "" -#: awx/main/models/inventory.py:896 +#: main/models/inventory.py:897 msgid "Key Name" msgstr "" -#: awx/main/models/inventory.py:897 +#: main/models/inventory.py:898 msgid "Region" msgstr "" -#: awx/main/models/inventory.py:898 +#: main/models/inventory.py:899 msgid "Security Group" msgstr "" -#: awx/main/models/inventory.py:899 +#: main/models/inventory.py:900 msgid "Tags" msgstr "" -#: awx/main/models/inventory.py:900 +#: main/models/inventory.py:901 msgid "VPC ID" msgstr "" -#: awx/main/models/inventory.py:901 +#: main/models/inventory.py:902 msgid "Tag None" msgstr "" -#: awx/main/models/inventory.py:972 +#: main/models/inventory.py:973 #, python-format msgid "" "Cloud-based inventory sources (such as %s) require credentials for the " "matching cloud service." msgstr "" -#: awx/main/models/inventory.py:979 +#: main/models/inventory.py:980 msgid "Credential is required for a cloud source." 
msgstr "" -#: awx/main/models/inventory.py:1004 +#: main/models/inventory.py:1005 #, python-format -msgid "Invalid %(source)s region%(plural)s: %(region)s" +msgid "Invalid %(source)s region: %(region)s" msgstr "" -#: awx/main/models/inventory.py:1030 +#: main/models/inventory.py:1030 #, python-format -msgid "Invalid filter expression%(plural)s: %(filter)s" +msgid "Invalid filter expression: %(filter)s" msgstr "" -#: awx/main/models/inventory.py:1049 +#: main/models/inventory.py:1048 #, python-format -msgid "Invalid group by choice%(plural)s: %(choice)s" +msgid "Invalid group by choice: %(choice)s" msgstr "" -#: awx/main/models/inventory.py:1197 +#: main/models/inventory.py:1195 #, python-format msgid "" "Unable to configure this item for cloud sync. It is already managed by %s." msgstr "" -#: awx/main/models/inventory.py:1292 +#: main/models/inventory.py:1290 msgid "Inventory script contents" msgstr "" -#: awx/main/models/inventory.py:1297 +#: main/models/inventory.py:1295 msgid "Organization owning this inventory script" msgstr "" -#: awx/main/models/jobs.py:170 +#: main/models/jobs.py:169 msgid "You must provide a network credential." msgstr "" -#: awx/main/models/jobs.py:178 +#: main/models/jobs.py:177 msgid "" "Must provide a credential for a cloud provider, such as Amazon Web Services " "or Rackspace." msgstr "" -#: awx/main/models/jobs.py:270 +#: main/models/jobs.py:269 msgid "Job Template must provide 'inventory' or allow prompting for it." msgstr "" -#: awx/main/models/jobs.py:274 +#: main/models/jobs.py:273 msgid "Job Template must provide 'credential' or allow prompting for it." msgstr "" -#: awx/main/models/jobs.py:363 +#: main/models/jobs.py:362 msgid "Cannot override job_type to or from a scan job." msgstr "" -#: awx/main/models/jobs.py:366 +#: main/models/jobs.py:365 msgid "Inventory cannot be changed at runtime for scan jobs." 
msgstr "" -#: awx/main/models/jobs.py:432 awx/main/models/projects.py:235 +#: main/models/jobs.py:431 main/models/projects.py:243 msgid "SCM Revision" msgstr "" -#: awx/main/models/jobs.py:433 +#: main/models/jobs.py:432 msgid "The SCM Revision from the Project used for this job, if available" msgstr "" -#: awx/main/models/jobs.py:441 +#: main/models/jobs.py:440 msgid "" "The SCM Refresh task used to make sure the playbooks were available for the " "job run" msgstr "" -#: awx/main/models/jobs.py:665 +#: main/models/jobs.py:662 msgid "job host summaries" msgstr "" -#: awx/main/models/jobs.py:768 +#: main/models/jobs.py:765 msgid "Host Failure" msgstr "" -#: awx/main/models/jobs.py:771 awx/main/models/jobs.py:785 +#: main/models/jobs.py:768 main/models/jobs.py:782 msgid "No Hosts Remaining" msgstr "" -#: awx/main/models/jobs.py:772 +#: main/models/jobs.py:769 msgid "Host Polling" msgstr "" -#: awx/main/models/jobs.py:773 +#: main/models/jobs.py:770 msgid "Host Async OK" msgstr "" -#: awx/main/models/jobs.py:774 +#: main/models/jobs.py:771 msgid "Host Async Failure" msgstr "" -#: awx/main/models/jobs.py:775 +#: main/models/jobs.py:772 msgid "Item OK" msgstr "" -#: awx/main/models/jobs.py:776 +#: main/models/jobs.py:773 msgid "Item Failed" msgstr "" -#: awx/main/models/jobs.py:777 +#: main/models/jobs.py:774 msgid "Item Skipped" msgstr "" -#: awx/main/models/jobs.py:778 +#: main/models/jobs.py:775 msgid "Host Retry" msgstr "" -#: awx/main/models/jobs.py:780 +#: main/models/jobs.py:777 msgid "File Difference" msgstr "" -#: awx/main/models/jobs.py:781 +#: main/models/jobs.py:778 msgid "Playbook Started" msgstr "" -#: awx/main/models/jobs.py:782 +#: main/models/jobs.py:779 msgid "Running Handlers" msgstr "" -#: awx/main/models/jobs.py:783 +#: main/models/jobs.py:780 msgid "Including File" msgstr "" -#: awx/main/models/jobs.py:784 +#: main/models/jobs.py:781 msgid "No Hosts Matched" msgstr "" -#: awx/main/models/jobs.py:786 +#: main/models/jobs.py:783 msgid "Task Started" 
msgstr "" -#: awx/main/models/jobs.py:788 +#: main/models/jobs.py:785 msgid "Variables Prompted" msgstr "" -#: awx/main/models/jobs.py:789 +#: main/models/jobs.py:786 msgid "Gathering Facts" msgstr "" -#: awx/main/models/jobs.py:790 +#: main/models/jobs.py:787 msgid "internal: on Import for Host" msgstr "" -#: awx/main/models/jobs.py:791 +#: main/models/jobs.py:788 msgid "internal: on Not Import for Host" msgstr "" -#: awx/main/models/jobs.py:792 +#: main/models/jobs.py:789 msgid "Play Started" msgstr "" -#: awx/main/models/jobs.py:793 +#: main/models/jobs.py:790 msgid "Playbook Complete" msgstr "" -#: awx/main/models/jobs.py:1237 +#: main/models/jobs.py:1200 msgid "Remove jobs older than a certain number of days" msgstr "" -#: awx/main/models/jobs.py:1238 +#: main/models/jobs.py:1201 msgid "Remove activity stream entries older than a certain number of days" msgstr "" -#: awx/main/models/jobs.py:1239 +#: main/models/jobs.py:1202 msgid "Purge and/or reduce the granularity of system tracking data" msgstr "" -#: awx/main/models/label.py:29 +#: main/models/label.py:29 msgid "Organization this label belongs to." 
msgstr "" -#: awx/main/models/notifications.py:31 +#: main/models/notifications.py:31 msgid "Email" msgstr "" -#: awx/main/models/notifications.py:32 +#: main/models/notifications.py:32 msgid "Slack" msgstr "" -#: awx/main/models/notifications.py:33 +#: main/models/notifications.py:33 msgid "Twilio" msgstr "" -#: awx/main/models/notifications.py:34 +#: main/models/notifications.py:34 msgid "Pagerduty" msgstr "" -#: awx/main/models/notifications.py:35 +#: main/models/notifications.py:35 msgid "HipChat" msgstr "" -#: awx/main/models/notifications.py:36 +#: main/models/notifications.py:36 msgid "Webhook" msgstr "" -#: awx/main/models/notifications.py:37 +#: main/models/notifications.py:37 msgid "IRC" msgstr "" -#: awx/main/models/notifications.py:127 awx/main/models/unified_jobs.py:57 +#: main/models/notifications.py:127 main/models/unified_jobs.py:59 msgid "Pending" msgstr "" -#: awx/main/models/notifications.py:128 awx/main/models/unified_jobs.py:60 +#: main/models/notifications.py:128 main/models/unified_jobs.py:62 msgid "Successful" msgstr "" -#: awx/main/models/notifications.py:129 awx/main/models/unified_jobs.py:61 +#: main/models/notifications.py:129 main/models/unified_jobs.py:63 msgid "Failed" msgstr "" -#: awx/main/models/organization.py:157 +#: main/models/organization.py:157 msgid "Execute Commands on the Inventory" msgstr "" -#: awx/main/models/organization.py:211 +#: main/models/organization.py:211 msgid "Token not invalidated" msgstr "" -#: awx/main/models/organization.py:212 +#: main/models/organization.py:212 msgid "Token is expired" msgstr "" -#: awx/main/models/organization.py:213 -msgid "Maximum per-user sessions reached" +#: main/models/organization.py:213 +msgid "The maximum number of allowed sessions for this user has been exceeded." 
msgstr "" -#: awx/main/models/organization.py:216 +#: main/models/organization.py:216 msgid "Invalid token" msgstr "" -#: awx/main/models/organization.py:233 +#: main/models/organization.py:233 msgid "Reason the auth token was invalidated." msgstr "" -#: awx/main/models/organization.py:272 +#: main/models/organization.py:272 msgid "Invalid reason specified" msgstr "" -#: awx/main/models/projects.py:43 +#: main/models/projects.py:43 msgid "Git" msgstr "" -#: awx/main/models/projects.py:44 +#: main/models/projects.py:44 msgid "Mercurial" msgstr "" -#: awx/main/models/projects.py:45 +#: main/models/projects.py:45 msgid "Subversion" msgstr "" -#: awx/main/models/projects.py:71 +#: main/models/projects.py:71 msgid "" "Local path (relative to PROJECTS_ROOT) containing playbooks and related " "files for this project." msgstr "" -#: awx/main/models/projects.py:80 +#: main/models/projects.py:80 msgid "SCM Type" msgstr "" -#: awx/main/models/projects.py:86 +#: main/models/projects.py:81 +msgid "Specifies the source control system used to store the project." +msgstr "" + +#: main/models/projects.py:87 msgid "SCM URL" msgstr "" -#: awx/main/models/projects.py:92 +#: main/models/projects.py:88 +msgid "The location where the project is stored." +msgstr "" + +#: main/models/projects.py:94 msgid "SCM Branch" msgstr "" -#: awx/main/models/projects.py:93 +#: main/models/projects.py:95 msgid "Specific branch, tag or commit to checkout." msgstr "" -#: awx/main/models/projects.py:125 +#: main/models/projects.py:99 +msgid "Discard any local changes before syncing the project." +msgstr "" + +#: main/models/projects.py:103 +msgid "Delete the project before syncing." +msgstr "" + +#: main/models/projects.py:116 +msgid "The amount of time to run before the task is canceled." +msgstr "" + +#: main/models/projects.py:130 msgid "Invalid SCM URL." msgstr "" -#: awx/main/models/projects.py:128 +#: main/models/projects.py:133 msgid "SCM URL is required." 
msgstr "" -#: awx/main/models/projects.py:137 +#: main/models/projects.py:142 msgid "Credential kind must be 'scm'." msgstr "" -#: awx/main/models/projects.py:152 +#: main/models/projects.py:157 msgid "Invalid credential." msgstr "" -#: awx/main/models/projects.py:236 +#: main/models/projects.py:229 +msgid "Update the project when a job is launched that uses the project." +msgstr "" + +#: main/models/projects.py:234 +msgid "" +"The number of seconds after the last project update ran that a new project " +"update will be launched as a job dependency." +msgstr "" + +#: main/models/projects.py:244 msgid "The last revision fetched by a project update" msgstr "" -#: awx/main/models/projects.py:243 +#: main/models/projects.py:251 msgid "Playbook Files" msgstr "" -#: awx/main/models/projects.py:244 +#: main/models/projects.py:252 msgid "List of playbooks found in the project" msgstr "" -#: awx/main/models/rbac.py:122 +#: main/models/rbac.py:122 msgid "roles" msgstr "" -#: awx/main/models/rbac.py:435 +#: main/models/rbac.py:438 msgid "role_ancestors" msgstr "" -#: awx/main/models/unified_jobs.py:56 +#: main/models/schedules.py:69 +msgid "Enables processing of this schedule by Tower." +msgstr "" + +#: main/models/schedules.py:75 +msgid "The first occurrence of the schedule occurs on or after this time." +msgstr "" + +#: main/models/schedules.py:81 +msgid "" +"The last occurrence of the schedule occurs before this time, afterwards the " +"schedule expires." +msgstr "" + +#: main/models/schedules.py:85 +msgid "A value representing the schedule's iCal recurrence rule." +msgstr "" + +#: main/models/schedules.py:91 +msgid "The next time that the scheduled action will run."
+msgstr "" + +#: main/models/unified_jobs.py:58 msgid "New" msgstr "" -#: awx/main/models/unified_jobs.py:58 +#: main/models/unified_jobs.py:60 msgid "Waiting" msgstr "" -#: awx/main/models/unified_jobs.py:59 +#: main/models/unified_jobs.py:61 msgid "Running" msgstr "" -#: awx/main/models/unified_jobs.py:63 +#: main/models/unified_jobs.py:65 msgid "Canceled" msgstr "" -#: awx/main/models/unified_jobs.py:67 +#: main/models/unified_jobs.py:69 msgid "Never Updated" msgstr "" -#: awx/main/models/unified_jobs.py:71 awx/ui/templates/ui/index.html:85 -#: awx/ui/templates/ui/index.html.py:104 +#: main/models/unified_jobs.py:73 ui/templates/ui/index.html:85 +#: ui/templates/ui/index.html.py:104 msgid "OK" msgstr "" -#: awx/main/models/unified_jobs.py:72 +#: main/models/unified_jobs.py:74 msgid "Missing" msgstr "" -#: awx/main/models/unified_jobs.py:76 +#: main/models/unified_jobs.py:78 msgid "No External Source" msgstr "" -#: awx/main/models/unified_jobs.py:83 +#: main/models/unified_jobs.py:85 msgid "Updating" msgstr "" -#: awx/main/models/unified_jobs.py:384 +#: main/models/unified_jobs.py:403 msgid "Relaunch" msgstr "" -#: awx/main/models/unified_jobs.py:385 +#: main/models/unified_jobs.py:404 msgid "Callback" msgstr "" -#: awx/main/models/unified_jobs.py:386 +#: main/models/unified_jobs.py:405 msgid "Scheduled" msgstr "" -#: awx/main/models/unified_jobs.py:387 +#: main/models/unified_jobs.py:406 msgid "Dependency" msgstr "" -#: awx/main/models/unified_jobs.py:388 +#: main/models/unified_jobs.py:407 msgid "Workflow" msgstr "" -#: awx/main/notifications/base.py:17 awx/main/notifications/email_backend.py:28 +#: main/models/unified_jobs.py:408 +msgid "Sync" +msgstr "" + +#: main/models/unified_jobs.py:454 +msgid "The Tower node the job executed on." +msgstr "" + +#: main/models/unified_jobs.py:480 +msgid "The date and time the job was queued for starting." +msgstr "" + +#: main/models/unified_jobs.py:486 +msgid "The date and time the job finished execution." 
+msgstr "" + +#: main/models/unified_jobs.py:492 +msgid "Elapsed time in seconds that the job ran." +msgstr "" + +#: main/models/unified_jobs.py:514 +msgid "" +"A status field to indicate the state of the job if it wasn't able to run and " +"capture stdout" +msgstr "" + +#: main/notifications/base.py:17 main/notifications/email_backend.py:28 msgid "" "{} #{} had status {} on Ansible Tower, view details at {}\n" "\n" msgstr "" -#: awx/main/notifications/hipchat_backend.py:46 +#: main/notifications/hipchat_backend.py:46 msgid "Error sending messages: {}" msgstr "" -#: awx/main/notifications/hipchat_backend.py:48 +#: main/notifications/hipchat_backend.py:48 msgid "Error sending message to hipchat: {}" msgstr "" -#: awx/main/notifications/irc_backend.py:54 +#: main/notifications/irc_backend.py:54 msgid "Exception connecting to irc server: {}" msgstr "" -#: awx/main/notifications/pagerduty_backend.py:39 +#: main/notifications/pagerduty_backend.py:39 msgid "Exception connecting to PagerDuty: {}" msgstr "" -#: awx/main/notifications/pagerduty_backend.py:48 -#: awx/main/notifications/slack_backend.py:52 -#: awx/main/notifications/twilio_backend.py:46 +#: main/notifications/pagerduty_backend.py:48 +#: main/notifications/slack_backend.py:52 +#: main/notifications/twilio_backend.py:46 msgid "Exception sending messages: {}" msgstr "" -#: awx/main/notifications/twilio_backend.py:36 +#: main/notifications/twilio_backend.py:36 msgid "Exception connecting to Twilio: {}" msgstr "" -#: awx/main/notifications/webhook_backend.py:38 -#: awx/main/notifications/webhook_backend.py:40 +#: main/notifications/webhook_backend.py:38 +#: main/notifications/webhook_backend.py:40 msgid "Error sending notification webhook: {}" msgstr "" -#: awx/main/tasks.py:119 +#: main/scheduler/__init__.py:130 +msgid "" +"Job spawned from workflow could not start because it was not in the right " +"state or required manual credentials" +msgstr "" + +#: main/tasks.py:180 msgid "Ansible Tower host usage over 90%" 
msgstr "" -#: awx/main/tasks.py:124 +#: main/tasks.py:185 msgid "Ansible Tower license will expire soon" msgstr "" -#: awx/main/tasks.py:177 +#: main/tasks.py:240 msgid "status_str must be either succeeded or failed" msgstr "" -#: awx/main/utils.py:88 +#: main/utils/common.py:89 #, python-format msgid "Unable to convert \"%s\" to boolean" msgstr "" -#: awx/main/utils.py:242 +#: main/utils/common.py:243 #, python-format msgid "Unsupported SCM type \"%s\"" msgstr "" -#: awx/main/utils.py:249 awx/main/utils.py:261 awx/main/utils.py:280 +#: main/utils/common.py:250 main/utils/common.py:262 main/utils/common.py:281 #, python-format msgid "Invalid %s URL" msgstr "" -#: awx/main/utils.py:251 awx/main/utils.py:289 +#: main/utils/common.py:252 main/utils/common.py:290 #, python-format msgid "Unsupported %s URL" msgstr "" -#: awx/main/utils.py:291 +#: main/utils/common.py:292 #, python-format msgid "Unsupported host \"%s\" for file:// URL" msgstr "" -#: awx/main/utils.py:293 +#: main/utils/common.py:294 #, python-format msgid "Host is required for %s URL" msgstr "" -#: awx/main/utils.py:311 +#: main/utils/common.py:312 #, python-format msgid "Username must be \"git\" for SSH access to %s." msgstr "" -#: awx/main/utils.py:317 +#: main/utils/common.py:318 #, python-format msgid "Username must be \"hg\" for SSH access to %s." msgstr "" -#: awx/main/validators.py:60 +#: main/validators.py:60 #, python-format msgid "Invalid certificate or key: %r..." msgstr "" -#: awx/main/validators.py:74 +#: main/validators.py:74 #, python-format msgid "Invalid private key: unsupported type \"%s\"" msgstr "" -#: awx/main/validators.py:78 +#: main/validators.py:78 #, python-format msgid "Unsupported PEM object type: \"%s\"" msgstr "" -#: awx/main/validators.py:103 +#: main/validators.py:103 msgid "Invalid base64-encoded data" msgstr "" -#: awx/main/validators.py:122 +#: main/validators.py:122 msgid "Exactly one private key is required." 
msgstr "" -#: awx/main/validators.py:124 +#: main/validators.py:124 msgid "At least one private key is required." msgstr "" -#: awx/main/validators.py:126 +#: main/validators.py:126 #, python-format msgid "" "At least %(min_keys)d private keys are required, only %(key_count)d provided." msgstr "" -#: awx/main/validators.py:129 +#: main/validators.py:129 #, python-format msgid "Only one private key is allowed, %(key_count)d provided." msgstr "" -#: awx/main/validators.py:131 +#: main/validators.py:131 #, python-format msgid "" "No more than %(max_keys)d private keys are allowed, %(key_count)d provided." msgstr "" -#: awx/main/validators.py:136 +#: main/validators.py:136 msgid "Exactly one certificate is required." msgstr "" -#: awx/main/validators.py:138 +#: main/validators.py:138 msgid "At least one certificate is required." msgstr "" -#: awx/main/validators.py:140 +#: main/validators.py:140 #, python-format msgid "" "At least %(min_certs)d certificates are required, only %(cert_count)d " "provided." msgstr "" -#: awx/main/validators.py:143 +#: main/validators.py:143 #, python-format msgid "Only one certificate is allowed, %(cert_count)d provided." msgstr "" -#: awx/main/validators.py:145 +#: main/validators.py:145 #, python-format msgid "" "No more than %(max_certs)d certificates are allowed, %(cert_count)d provided." msgstr "" -#: awx/main/views.py:20 +#: main/views.py:20 msgid "API Error" msgstr "" -#: awx/main/views.py:49 +#: main/views.py:49 msgid "Bad Request" msgstr "" -#: awx/main/views.py:50 +#: main/views.py:50 msgid "The request could not be understood by the server." msgstr "" -#: awx/main/views.py:57 +#: main/views.py:57 msgid "Forbidden" msgstr "" -#: awx/main/views.py:58 +#: main/views.py:58 msgid "You don't have permission to access the requested resource." msgstr "" -#: awx/main/views.py:65 +#: main/views.py:65 msgid "Not Found" msgstr "" -#: awx/main/views.py:66 +#: main/views.py:66 msgid "The requested resource could not be found." 
msgstr "" -#: awx/main/views.py:73 +#: main/views.py:73 msgid "Server Error" msgstr "" -#: awx/main/views.py:74 +#: main/views.py:74 msgid "A server error has occurred." msgstr "" -#: awx/settings/defaults.py:593 +#: settings/defaults.py:611 msgid "Chicago" msgstr "" -#: awx/settings/defaults.py:594 +#: settings/defaults.py:612 msgid "Dallas/Ft. Worth" msgstr "" -#: awx/settings/defaults.py:595 +#: settings/defaults.py:613 msgid "Northern Virginia" msgstr "" -#: awx/settings/defaults.py:596 +#: settings/defaults.py:614 msgid "London" msgstr "" -#: awx/settings/defaults.py:597 +#: settings/defaults.py:615 msgid "Sydney" msgstr "" -#: awx/settings/defaults.py:598 +#: settings/defaults.py:616 msgid "Hong Kong" msgstr "" -#: awx/settings/defaults.py:625 +#: settings/defaults.py:643 msgid "US East (Northern Virginia)" msgstr "" -#: awx/settings/defaults.py:626 +#: settings/defaults.py:644 msgid "US East (Ohio)" msgstr "" -#: awx/settings/defaults.py:627 +#: settings/defaults.py:645 msgid "US West (Oregon)" msgstr "" -#: awx/settings/defaults.py:628 +#: settings/defaults.py:646 msgid "US West (Northern California)" msgstr "" -#: awx/settings/defaults.py:629 +#: settings/defaults.py:647 +msgid "Canada (Central)" +msgstr "" + +#: settings/defaults.py:648 msgid "EU (Frankfurt)" msgstr "" -#: awx/settings/defaults.py:630 +#: settings/defaults.py:649 msgid "EU (Ireland)" msgstr "" -#: awx/settings/defaults.py:631 +#: settings/defaults.py:650 +msgid "EU (London)" +msgstr "" + +#: settings/defaults.py:651 msgid "Asia Pacific (Singapore)" msgstr "" -#: awx/settings/defaults.py:632 +#: settings/defaults.py:652 msgid "Asia Pacific (Sydney)" msgstr "" -#: awx/settings/defaults.py:633 +#: settings/defaults.py:653 msgid "Asia Pacific (Tokyo)" msgstr "" -#: awx/settings/defaults.py:634 +#: settings/defaults.py:654 msgid "Asia Pacific (Seoul)" msgstr "" -#: awx/settings/defaults.py:635 +#: settings/defaults.py:655 msgid "Asia Pacific (Mumbai)" msgstr "" -#: awx/settings/defaults.py:636 
+#: settings/defaults.py:656 msgid "South America (Sao Paulo)" msgstr "" -#: awx/settings/defaults.py:637 +#: settings/defaults.py:657 msgid "US West (GovCloud)" msgstr "" -#: awx/settings/defaults.py:638 +#: settings/defaults.py:658 msgid "China (Beijing)" msgstr "" -#: awx/settings/defaults.py:687 +#: settings/defaults.py:707 msgid "US East (B)" msgstr "" -#: awx/settings/defaults.py:688 +#: settings/defaults.py:708 msgid "US East (C)" msgstr "" -#: awx/settings/defaults.py:689 +#: settings/defaults.py:709 msgid "US East (D)" msgstr "" -#: awx/settings/defaults.py:690 +#: settings/defaults.py:710 msgid "US Central (A)" msgstr "" -#: awx/settings/defaults.py:691 +#: settings/defaults.py:711 msgid "US Central (B)" msgstr "" -#: awx/settings/defaults.py:692 +#: settings/defaults.py:712 msgid "US Central (C)" msgstr "" -#: awx/settings/defaults.py:693 +#: settings/defaults.py:713 msgid "US Central (F)" msgstr "" -#: awx/settings/defaults.py:694 +#: settings/defaults.py:714 msgid "Europe West (B)" msgstr "" -#: awx/settings/defaults.py:695 +#: settings/defaults.py:715 msgid "Europe West (C)" msgstr "" -#: awx/settings/defaults.py:696 +#: settings/defaults.py:716 msgid "Europe West (D)" msgstr "" -#: awx/settings/defaults.py:697 +#: settings/defaults.py:717 msgid "Asia East (A)" msgstr "" -#: awx/settings/defaults.py:698 +#: settings/defaults.py:718 msgid "Asia East (B)" msgstr "" -#: awx/settings/defaults.py:699 +#: settings/defaults.py:719 msgid "Asia East (C)" msgstr "" -#: awx/settings/defaults.py:723 +#: settings/defaults.py:743 msgid "US Central" msgstr "" -#: awx/settings/defaults.py:724 +#: settings/defaults.py:744 msgid "US East" msgstr "" -#: awx/settings/defaults.py:725 +#: settings/defaults.py:745 msgid "US East 2" msgstr "" -#: awx/settings/defaults.py:726 +#: settings/defaults.py:746 msgid "US North Central" msgstr "" -#: awx/settings/defaults.py:727 +#: settings/defaults.py:747 msgid "US South Central" msgstr "" -#: awx/settings/defaults.py:728 +#: 
settings/defaults.py:748 msgid "US West" msgstr "" -#: awx/settings/defaults.py:729 +#: settings/defaults.py:749 msgid "Europe North" msgstr "" -#: awx/settings/defaults.py:730 +#: settings/defaults.py:750 msgid "Europe West" msgstr "" -#: awx/settings/defaults.py:731 +#: settings/defaults.py:751 msgid "Asia Pacific East" msgstr "" -#: awx/settings/defaults.py:732 +#: settings/defaults.py:752 msgid "Asia Pacific Southeast" msgstr "" -#: awx/settings/defaults.py:733 +#: settings/defaults.py:753 msgid "Japan East" msgstr "" -#: awx/settings/defaults.py:734 +#: settings/defaults.py:754 msgid "Japan West" msgstr "" -#: awx/settings/defaults.py:735 +#: settings/defaults.py:755 msgid "Brazil South" msgstr "" -#: awx/sso/apps.py:9 +#: sso/apps.py:9 msgid "Single Sign-On" msgstr "" -#: awx/sso/conf.py:27 +#: sso/conf.py:27 msgid "" "Mapping to organization admins/users from social auth accounts. This " "setting\n" @@ -2546,7 +2725,7 @@ msgid "" " remove_admins." msgstr "" -#: awx/sso/conf.py:76 +#: sso/conf.py:76 msgid "" "Mapping of team members (users) from social auth accounts. Keys are team\n" "names (will be created if not present). Values are dictionaries of options\n" @@ -2575,40 +2754,40 @@ msgid "" " the rules above will be removed from the team." msgstr "" -#: awx/sso/conf.py:119 +#: sso/conf.py:119 msgid "Authentication Backends" msgstr "" -#: awx/sso/conf.py:120 +#: sso/conf.py:120 msgid "" "List of authentication backends that are enabled based on license features " "and other authentication settings." msgstr "" -#: awx/sso/conf.py:133 +#: sso/conf.py:133 msgid "Social Auth Organization Map" msgstr "" -#: awx/sso/conf.py:145 +#: sso/conf.py:145 msgid "Social Auth Team Map" msgstr "" -#: awx/sso/conf.py:157 +#: sso/conf.py:157 msgid "Social Auth User Fields" msgstr "" -#: awx/sso/conf.py:158 +#: sso/conf.py:158 msgid "" "When set to an empty list `[]`, this setting prevents new user accounts from " "being created. 
Only users who have previously logged in using social auth or " "have a user account with a matching email address will be able to login." msgstr "" -#: awx/sso/conf.py:176 +#: sso/conf.py:176 msgid "LDAP Server URI" msgstr "" -#: awx/sso/conf.py:177 +#: sso/conf.py:177 msgid "" "URI to connect to LDAP server, such as \"ldap://ldap.example.com:389\" (non-" "SSL) or \"ldaps://ldap.example.com:636\" (SSL). Multiple LDAP servers may be " @@ -2616,19 +2795,18 @@ msgid "" "disabled if this parameter is empty." msgstr "" -#: awx/sso/conf.py:181 awx/sso/conf.py:199 awx/sso/conf.py:211 -#: awx/sso/conf.py:222 awx/sso/conf.py:238 awx/sso/conf.py:257 -#: awx/sso/conf.py:278 awx/sso/conf.py:294 awx/sso/conf.py:313 -#: awx/sso/conf.py:330 awx/sso/conf.py:345 awx/sso/conf.py:360 -#: awx/sso/conf.py:377 awx/sso/conf.py:415 awx/sso/conf.py:456 +#: sso/conf.py:181 sso/conf.py:199 sso/conf.py:211 sso/conf.py:223 +#: sso/conf.py:239 sso/conf.py:258 sso/conf.py:280 sso/conf.py:296 +#: sso/conf.py:315 sso/conf.py:332 sso/conf.py:349 sso/conf.py:365 +#: sso/conf.py:382 sso/conf.py:420 sso/conf.py:461 msgid "LDAP" msgstr "" -#: awx/sso/conf.py:193 +#: sso/conf.py:193 msgid "LDAP Bind DN" msgstr "" -#: awx/sso/conf.py:194 +#: sso/conf.py:194 msgid "" "DN (Distinguished Name) of user to bind for all search queries. Normally in " "the format \"CN=Some User,OU=Users,DC=example,DC=com\" but may also be " @@ -2636,27 +2814,27 @@ msgid "" "user account we will use to login to query LDAP for other user information." msgstr "" -#: awx/sso/conf.py:209 +#: sso/conf.py:209 msgid "LDAP Bind Password" msgstr "" -#: awx/sso/conf.py:210 +#: sso/conf.py:210 msgid "Password used to bind LDAP user account." msgstr "" -#: awx/sso/conf.py:220 +#: sso/conf.py:221 msgid "LDAP Start TLS" msgstr "" -#: awx/sso/conf.py:221 +#: sso/conf.py:222 msgid "Whether to enable TLS when the LDAP connection is not using SSL." 
msgstr "" -#: awx/sso/conf.py:231 +#: sso/conf.py:232 msgid "LDAP Connection Options" msgstr "" -#: awx/sso/conf.py:232 +#: sso/conf.py:233 msgid "" "Additional options to set for the LDAP connection. LDAP referrals are " "disabled by default (to prevent certain LDAP queries from hanging with AD). " @@ -2665,11 +2843,11 @@ msgid "" "values that can be set." msgstr "" -#: awx/sso/conf.py:250 +#: sso/conf.py:251 msgid "LDAP User Search" msgstr "" -#: awx/sso/conf.py:251 +#: sso/conf.py:252 msgid "" "LDAP search query to find users. Any user that matches the given pattern " "will be able to login to Tower. The user should also be mapped into an " @@ -2678,11 +2856,11 @@ msgid "" "possible. See python-ldap documentation as linked at the top of this section." msgstr "" -#: awx/sso/conf.py:272 +#: sso/conf.py:274 msgid "LDAP User DN Template" msgstr "" -#: awx/sso/conf.py:273 +#: sso/conf.py:275 msgid "" "Alternative to user search, if user DNs are all of the same format. This " "approach will be more efficient for user lookups than searching if it is " @@ -2690,11 +2868,11 @@ msgid "" "will be used instead of AUTH_LDAP_USER_SEARCH." msgstr "" -#: awx/sso/conf.py:288 +#: sso/conf.py:290 msgid "LDAP User Attribute Map" msgstr "" -#: awx/sso/conf.py:289 +#: sso/conf.py:291 msgid "" "Mapping of LDAP user schema to Tower API user attributes (key is user " "attribute name, value is LDAP attribute name). The default setting is valid " @@ -2702,54 +2880,54 @@ msgid "" "change the values (not the keys) of the dictionary/hash-table." msgstr "" -#: awx/sso/conf.py:308 +#: sso/conf.py:310 msgid "LDAP Group Search" msgstr "" -#: awx/sso/conf.py:309 +#: sso/conf.py:311 msgid "" "Users in Tower are mapped to organizations based on their membership in LDAP " "groups. This setting defines the LDAP search query to find groups. Note that " "this, unlike the user search above, does not support LDAPSearchUnion." 
msgstr "" -#: awx/sso/conf.py:326 +#: sso/conf.py:328 msgid "LDAP Group Type" msgstr "" -#: awx/sso/conf.py:327 +#: sso/conf.py:329 msgid "" "The group type may need to be changed based on the type of the LDAP server. " "Values are listed at: http://pythonhosted.org/django-auth-ldap/groups." "html#types-of-groups" msgstr "" -#: awx/sso/conf.py:340 +#: sso/conf.py:344 msgid "LDAP Require Group" msgstr "" -#: awx/sso/conf.py:341 +#: sso/conf.py:345 msgid "" "Group DN required to login. If specified, user must be a member of this " "group to login via LDAP. If not set, everyone in LDAP that matches the user " "search will be able to login via Tower. Only one require group is supported." msgstr "" -#: awx/sso/conf.py:356 +#: sso/conf.py:361 msgid "LDAP Deny Group" msgstr "" -#: awx/sso/conf.py:357 +#: sso/conf.py:362 msgid "" "Group DN denied from login. If specified, user will not be allowed to login " "if a member of this group. Only one deny group is supported." msgstr "" -#: awx/sso/conf.py:370 +#: sso/conf.py:375 msgid "LDAP User Flags By Group" msgstr "" -#: awx/sso/conf.py:371 +#: sso/conf.py:376 msgid "" "User profile flags updated from group membership (key is user attribute " "name, value is group DN). These are boolean fields that are matched based " @@ -2758,11 +2936,11 @@ msgid "" "false at login time based on current LDAP settings." msgstr "" -#: awx/sso/conf.py:389 +#: sso/conf.py:394 msgid "LDAP Organization Map" msgstr "" -#: awx/sso/conf.py:390 +#: sso/conf.py:395 msgid "" "Mapping between organization admins/users and LDAP groups. This controls " "what users are placed into what Tower organizations relative to their LDAP " @@ -2789,11 +2967,11 @@ msgid "" "remove_admins." msgstr "" -#: awx/sso/conf.py:438 +#: sso/conf.py:443 msgid "LDAP Team Map" msgstr "" -#: awx/sso/conf.py:439 +#: sso/conf.py:444 msgid "" "Mapping between team members (users) and LDAP groups. Keys are team names " "(will be created if not present). 
Values are dictionaries of options for " @@ -2812,88 +2990,87 @@ msgid "" "of the given groups will be removed from the team." msgstr "" -#: awx/sso/conf.py:482 +#: sso/conf.py:487 msgid "RADIUS Server" msgstr "" -#: awx/sso/conf.py:483 +#: sso/conf.py:488 msgid "" "Hostname/IP of RADIUS server. RADIUS authentication will be disabled if this " "setting is empty." msgstr "" -#: awx/sso/conf.py:485 awx/sso/conf.py:499 awx/sso/conf.py:511 +#: sso/conf.py:490 sso/conf.py:504 sso/conf.py:516 msgid "RADIUS" msgstr "" -#: awx/sso/conf.py:497 +#: sso/conf.py:502 msgid "RADIUS Port" msgstr "" -#: awx/sso/conf.py:498 +#: sso/conf.py:503 msgid "Port of RADIUS server." msgstr "" -#: awx/sso/conf.py:509 +#: sso/conf.py:514 msgid "RADIUS Secret" msgstr "" -#: awx/sso/conf.py:510 +#: sso/conf.py:515 msgid "Shared secret for authenticating to RADIUS server." msgstr "" -#: awx/sso/conf.py:525 +#: sso/conf.py:531 msgid "Google OAuth2 Callback URL" msgstr "" -#: awx/sso/conf.py:526 +#: sso/conf.py:532 msgid "" "Create a project at https://console.developers.google.com/ to obtain an " "OAuth2 key and secret for a web application. Ensure that the Google+ API is " "enabled. Provide this URL as the callback URL for your application." msgstr "" -#: awx/sso/conf.py:530 awx/sso/conf.py:541 awx/sso/conf.py:552 -#: awx/sso/conf.py:564 awx/sso/conf.py:578 awx/sso/conf.py:590 -#: awx/sso/conf.py:602 +#: sso/conf.py:536 sso/conf.py:547 sso/conf.py:558 sso/conf.py:571 +#: sso/conf.py:585 sso/conf.py:597 sso/conf.py:609 msgid "Google OAuth2" msgstr "" -#: awx/sso/conf.py:539 +#: sso/conf.py:545 msgid "Google OAuth2 Key" msgstr "" -#: awx/sso/conf.py:540 +#: sso/conf.py:546 msgid "" "The OAuth2 key from your web application at https://console.developers." "google.com/." msgstr "" -#: awx/sso/conf.py:550 +#: sso/conf.py:556 msgid "Google OAuth2 Secret" msgstr "" -#: awx/sso/conf.py:551 +#: sso/conf.py:557 msgid "" "The OAuth2 secret from your web application at https://console.developers." 
"google.com/." msgstr "" -#: awx/sso/conf.py:561 +#: sso/conf.py:568 msgid "Google OAuth2 Whitelisted Domains" msgstr "" -#: awx/sso/conf.py:562 +#: sso/conf.py:569 msgid "" "Update this setting to restrict the domains who are allowed to login using " "Google OAuth2." msgstr "" -#: awx/sso/conf.py:573 +#: sso/conf.py:580 msgid "Google OAuth2 Extra Arguments" msgstr "" -#: awx/sso/conf.py:574 +#: sso/conf.py:581 msgid "" "Extra arguments for Google OAuth2 login. When only allowing a single domain " "to authenticate, set to `{\"hd\": \"yourdomain.com\"}` and Google will not " @@ -2901,60 +3078,60 @@ msgid "" "Google accounts." msgstr "" -#: awx/sso/conf.py:588 +#: sso/conf.py:595 msgid "Google OAuth2 Organization Map" msgstr "" -#: awx/sso/conf.py:600 +#: sso/conf.py:607 msgid "Google OAuth2 Team Map" msgstr "" -#: awx/sso/conf.py:616 +#: sso/conf.py:623 msgid "GitHub OAuth2 Callback URL" msgstr "" -#: awx/sso/conf.py:617 +#: sso/conf.py:624 msgid "" "Create a developer application at https://github.com/settings/developers to " "obtain an OAuth2 key (Client ID) and secret (Client Secret). Provide this " "URL as the callback URL for your application." msgstr "" -#: awx/sso/conf.py:621 awx/sso/conf.py:632 awx/sso/conf.py:642 -#: awx/sso/conf.py:653 awx/sso/conf.py:665 +#: sso/conf.py:628 sso/conf.py:639 sso/conf.py:649 sso/conf.py:661 +#: sso/conf.py:673 msgid "GitHub OAuth2" msgstr "" -#: awx/sso/conf.py:630 +#: sso/conf.py:637 msgid "GitHub OAuth2 Key" msgstr "" -#: awx/sso/conf.py:631 +#: sso/conf.py:638 msgid "The OAuth2 key (Client ID) from your GitHub developer application." msgstr "" -#: awx/sso/conf.py:640 +#: sso/conf.py:647 msgid "GitHub OAuth2 Secret" msgstr "" -#: awx/sso/conf.py:641 +#: sso/conf.py:648 msgid "" "The OAuth2 secret (Client Secret) from your GitHub developer application." 
msgstr "" -#: awx/sso/conf.py:651 +#: sso/conf.py:659 msgid "GitHub OAuth2 Organization Map" msgstr "" -#: awx/sso/conf.py:663 +#: sso/conf.py:671 msgid "GitHub OAuth2 Team Map" msgstr "" -#: awx/sso/conf.py:679 +#: sso/conf.py:687 msgid "GitHub Organization OAuth2 Callback URL" msgstr "" -#: awx/sso/conf.py:680 awx/sso/conf.py:754 +#: sso/conf.py:688 sso/conf.py:763 msgid "" "Create an organization-owned application at https://github.com/organizations/" "<yourorg>/settings/applications and obtain an OAuth2 key (Client ID) and " @@ -2962,86 +3139,86 @@ msgid "" "application." msgstr "" -#: awx/sso/conf.py:684 awx/sso/conf.py:695 awx/sso/conf.py:705 -#: awx/sso/conf.py:716 awx/sso/conf.py:727 awx/sso/conf.py:739 +#: sso/conf.py:692 sso/conf.py:703 sso/conf.py:713 sso/conf.py:725 +#: sso/conf.py:736 sso/conf.py:748 msgid "GitHub Organization OAuth2" msgstr "" -#: awx/sso/conf.py:693 +#: sso/conf.py:701 msgid "GitHub Organization OAuth2 Key" msgstr "" -#: awx/sso/conf.py:694 awx/sso/conf.py:768 +#: sso/conf.py:702 sso/conf.py:777 msgid "The OAuth2 key (Client ID) from your GitHub organization application." msgstr "" -#: awx/sso/conf.py:703 +#: sso/conf.py:711 msgid "GitHub Organization OAuth2 Secret" msgstr "" -#: awx/sso/conf.py:704 awx/sso/conf.py:778 +#: sso/conf.py:712 sso/conf.py:787 msgid "" "The OAuth2 secret (Client Secret) from your GitHub organization application." msgstr "" -#: awx/sso/conf.py:713 +#: sso/conf.py:722 msgid "GitHub Organization Name" msgstr "" -#: awx/sso/conf.py:714 +#: sso/conf.py:723 msgid "" "The name of your GitHub organization, as used in your organization's URL: " "https://github.com/<yourorg>/."
msgstr "" -#: awx/sso/conf.py:725 +#: sso/conf.py:734 msgid "GitHub Organization OAuth2 Organization Map" msgstr "" -#: awx/sso/conf.py:737 +#: sso/conf.py:746 msgid "GitHub Organization OAuth2 Team Map" msgstr "" -#: awx/sso/conf.py:753 +#: sso/conf.py:762 msgid "GitHub Team OAuth2 Callback URL" msgstr "" -#: awx/sso/conf.py:758 awx/sso/conf.py:769 awx/sso/conf.py:779 -#: awx/sso/conf.py:790 awx/sso/conf.py:801 awx/sso/conf.py:813 +#: sso/conf.py:767 sso/conf.py:778 sso/conf.py:788 sso/conf.py:800 +#: sso/conf.py:811 sso/conf.py:823 msgid "GitHub Team OAuth2" msgstr "" -#: awx/sso/conf.py:767 +#: sso/conf.py:776 msgid "GitHub Team OAuth2 Key" msgstr "" -#: awx/sso/conf.py:777 +#: sso/conf.py:786 msgid "GitHub Team OAuth2 Secret" msgstr "" -#: awx/sso/conf.py:787 +#: sso/conf.py:797 msgid "GitHub Team ID" msgstr "" -#: awx/sso/conf.py:788 +#: sso/conf.py:798 msgid "" "Find the numeric team ID using the Github API: http://fabian-kostadinov." "github.io/2015/01/16/how-to-find-a-github-team-id/." msgstr "" -#: awx/sso/conf.py:799 +#: sso/conf.py:809 msgid "GitHub Team OAuth2 Organization Map" msgstr "" -#: awx/sso/conf.py:811 +#: sso/conf.py:821 msgid "GitHub Team OAuth2 Team Map" msgstr "" -#: awx/sso/conf.py:827 +#: sso/conf.py:837 msgid "Azure AD OAuth2 Callback URL" msgstr "" -#: awx/sso/conf.py:828 +#: sso/conf.py:838 msgid "" "Register an Azure AD application as described by https://msdn.microsoft.com/" "en-us/library/azure/dn132599.aspx and obtain an OAuth2 key (Client ID) and " @@ -3049,118 +3226,117 @@ msgid "" "application." msgstr "" -#: awx/sso/conf.py:832 awx/sso/conf.py:843 awx/sso/conf.py:853 -#: awx/sso/conf.py:864 awx/sso/conf.py:876 +#: sso/conf.py:842 sso/conf.py:853 sso/conf.py:863 sso/conf.py:875 +#: sso/conf.py:887 msgid "Azure AD OAuth2" msgstr "" -#: awx/sso/conf.py:841 +#: sso/conf.py:851 msgid "Azure AD OAuth2 Key" msgstr "" -#: awx/sso/conf.py:842 +#: sso/conf.py:852 msgid "The OAuth2 key (Client ID) from your Azure AD application." 
msgstr "" -#: awx/sso/conf.py:851 +#: sso/conf.py:861 msgid "Azure AD OAuth2 Secret" msgstr "" -#: awx/sso/conf.py:852 +#: sso/conf.py:862 msgid "The OAuth2 secret (Client Secret) from your Azure AD application." msgstr "" -#: awx/sso/conf.py:862 +#: sso/conf.py:873 msgid "Azure AD OAuth2 Organization Map" msgstr "" -#: awx/sso/conf.py:874 +#: sso/conf.py:885 msgid "Azure AD OAuth2 Team Map" msgstr "" -#: awx/sso/conf.py:895 +#: sso/conf.py:906 msgid "SAML Service Provider Callback URL" msgstr "" -#: awx/sso/conf.py:896 +#: sso/conf.py:907 msgid "" "Register Tower as a service provider (SP) with each identity provider (IdP) " "you have configured. Provide your SP Entity ID and this callback URL for " "your application." msgstr "" -#: awx/sso/conf.py:899 awx/sso/conf.py:913 awx/sso/conf.py:927 -#: awx/sso/conf.py:941 awx/sso/conf.py:955 awx/sso/conf.py:972 -#: awx/sso/conf.py:994 awx/sso/conf.py:1013 awx/sso/conf.py:1033 -#: awx/sso/conf.py:1067 awx/sso/conf.py:1080 +#: sso/conf.py:910 sso/conf.py:924 sso/conf.py:937 sso/conf.py:951 +#: sso/conf.py:965 sso/conf.py:983 sso/conf.py:1005 sso/conf.py:1024 +#: sso/conf.py:1044 sso/conf.py:1078 sso/conf.py:1091 msgid "SAML" msgstr "" -#: awx/sso/conf.py:910 +#: sso/conf.py:921 msgid "SAML Service Provider Metadata URL" msgstr "" -#: awx/sso/conf.py:911 +#: sso/conf.py:922 msgid "" "If your identity provider (IdP) allows uploading an XML metadata file, you " "can download one from this URL." msgstr "" -#: awx/sso/conf.py:924 +#: sso/conf.py:934 msgid "SAML Service Provider Entity ID" msgstr "" -#: awx/sso/conf.py:925 +#: sso/conf.py:935 msgid "" -"Set to a URL for a domain name you own (does not need to be a valid URL; " -"only used as a unique ID)." +"The application-defined unique identifier used as the audience of the SAML " +"service provider (SP) configuration." 
msgstr "" -#: awx/sso/conf.py:938 +#: sso/conf.py:948 msgid "SAML Service Provider Public Certificate" msgstr "" -#: awx/sso/conf.py:939 +#: sso/conf.py:949 msgid "" "Create a keypair for Tower to use as a service provider (SP) and include the " "certificate content here." msgstr "" -#: awx/sso/conf.py:952 +#: sso/conf.py:962 msgid "SAML Service Provider Private Key" msgstr "" -#: awx/sso/conf.py:953 +#: sso/conf.py:963 msgid "" "Create a keypair for Tower to use as a service provider (SP) and include the " "private key content here." msgstr "" -#: awx/sso/conf.py:970 +#: sso/conf.py:981 msgid "SAML Service Provider Organization Info" msgstr "" -#: awx/sso/conf.py:971 +#: sso/conf.py:982 msgid "Configure this setting with information about your app." msgstr "" -#: awx/sso/conf.py:992 +#: sso/conf.py:1003 msgid "SAML Service Provider Technical Contact" msgstr "" -#: awx/sso/conf.py:993 awx/sso/conf.py:1012 +#: sso/conf.py:1004 sso/conf.py:1023 msgid "Configure this setting with your contact information." msgstr "" -#: awx/sso/conf.py:1011 +#: sso/conf.py:1022 msgid "SAML Service Provider Support Contact" msgstr "" -#: awx/sso/conf.py:1026 +#: sso/conf.py:1037 msgid "SAML Enabled Identity Providers" msgstr "" -#: awx/sso/conf.py:1027 +#: sso/conf.py:1038 msgid "" "Configure the Entity ID, SSO URL and certificate for each identity provider " "(IdP) in use. Multiple SAML IdPs are supported. Some IdPs may provide user " @@ -3169,237 +3345,217 @@ msgid "" "Attribute names may be overridden for each IdP." msgstr "" -#: awx/sso/conf.py:1065 +#: sso/conf.py:1076 msgid "SAML Organization Map" msgstr "" -#: awx/sso/conf.py:1078 +#: sso/conf.py:1089 msgid "SAML Team Map" msgstr "" -#: awx/sso/fields.py:123 -#, python-brace-format +#: sso/fields.py:123 msgid "Invalid connection option(s): {invalid_options}." 
msgstr "" -#: awx/sso/fields.py:182 +#: sso/fields.py:194 msgid "Base" msgstr "" -#: awx/sso/fields.py:183 +#: sso/fields.py:195 msgid "One Level" msgstr "" -#: awx/sso/fields.py:184 +#: sso/fields.py:196 msgid "Subtree" msgstr "" -#: awx/sso/fields.py:202 -#, python-brace-format +#: sso/fields.py:214 msgid "Expected a list of three items but got {length} instead." msgstr "" -#: awx/sso/fields.py:203 -#, python-brace-format +#: sso/fields.py:215 msgid "Expected an instance of LDAPSearch but got {input_type} instead." msgstr "" -#: awx/sso/fields.py:239 -#, python-brace-format +#: sso/fields.py:251 msgid "" "Expected an instance of LDAPSearch or LDAPSearchUnion but got {input_type} " "instead." msgstr "" -#: awx/sso/fields.py:266 -#, python-brace-format +#: sso/fields.py:278 msgid "Invalid user attribute(s): {invalid_attrs}." msgstr "" -#: awx/sso/fields.py:283 -#, python-brace-format +#: sso/fields.py:295 msgid "Expected an instance of LDAPGroupType but got {input_type} instead." msgstr "" -#: awx/sso/fields.py:308 -#, python-brace-format +#: sso/fields.py:323 msgid "Invalid user flag: \"{invalid_flag}\"." msgstr "" -#: awx/sso/fields.py:324 awx/sso/fields.py:491 -#, python-brace-format +#: sso/fields.py:339 sso/fields.py:506 msgid "" "Expected None, True, False, a string or list of strings but got {input_type} " "instead." msgstr "" -#: awx/sso/fields.py:360 -#, python-brace-format +#: sso/fields.py:375 msgid "Missing key(s): {missing_keys}." msgstr "" -#: awx/sso/fields.py:361 -#, python-brace-format +#: sso/fields.py:376 msgid "Invalid key(s): {invalid_keys}." msgstr "" -#: awx/sso/fields.py:410 awx/sso/fields.py:527 -#, python-brace-format +#: sso/fields.py:425 sso/fields.py:542 msgid "Invalid key(s) for organization map: {invalid_keys}." msgstr "" -#: awx/sso/fields.py:428 -#, python-brace-format +#: sso/fields.py:443 msgid "Missing required key for team map: {invalid_keys}." 
msgstr "" -#: awx/sso/fields.py:429 awx/sso/fields.py:546 -#, python-brace-format +#: sso/fields.py:444 sso/fields.py:561 msgid "Invalid key(s) for team map: {invalid_keys}." msgstr "" -#: awx/sso/fields.py:545 -#, python-brace-format +#: sso/fields.py:560 msgid "Missing required key for team map: {missing_keys}." msgstr "" -#: awx/sso/fields.py:563 -#, python-brace-format +#: sso/fields.py:578 msgid "Missing required key(s) for org info record: {missing_keys}." msgstr "" -#: awx/sso/fields.py:576 -#, python-brace-format +#: sso/fields.py:591 msgid "Invalid language code(s) for org info: {invalid_lang_codes}." msgstr "" -#: awx/sso/fields.py:595 -#, python-brace-format +#: sso/fields.py:610 msgid "Missing required key(s) for contact: {missing_keys}." msgstr "" -#: awx/sso/fields.py:607 -#, python-brace-format +#: sso/fields.py:622 msgid "Missing required key(s) for IdP: {missing_keys}." msgstr "" -#: awx/sso/pipeline.py:24 -#, python-brace-format +#: sso/pipeline.py:24 msgid "An account cannot be found for {0}" msgstr "" -#: awx/sso/pipeline.py:30 +#: sso/pipeline.py:30 msgid "Your account is inactive" msgstr "" -#: awx/sso/validators.py:19 awx/sso/validators.py:44 +#: sso/validators.py:19 sso/validators.py:44 #, python-format msgid "DN must include \"%%(user)s\" placeholder for username: %s" msgstr "" -#: awx/sso/validators.py:26 +#: sso/validators.py:26 #, python-format msgid "Invalid DN: %s" msgstr "" -#: awx/sso/validators.py:56 +#: sso/validators.py:56 #, python-format msgid "Invalid filter: %s" msgstr "" -#: awx/templates/error.html:4 awx/ui/templates/ui/index.html:8 +#: templates/error.html:4 ui/templates/ui/index.html:8 msgid "Ansible Tower" msgstr "" -#: awx/templates/rest_framework/api.html:39 +#: templates/rest_framework/api.html:39 msgid "Ansible Tower API Guide" msgstr "" -#: awx/templates/rest_framework/api.html:40 +#: templates/rest_framework/api.html:40 msgid "Back to Ansible Tower" msgstr "" -#: awx/templates/rest_framework/api.html:41 +#: 
templates/rest_framework/api.html:41 msgid "Resize" msgstr "" -#: awx/templates/rest_framework/base.html:78 -#: awx/templates/rest_framework/base.html:92 +#: templates/rest_framework/base.html:78 templates/rest_framework/base.html:92 #, python-format msgid "Make a GET request on the %(name)s resource" msgstr "" -#: awx/templates/rest_framework/base.html:80 +#: templates/rest_framework/base.html:80 msgid "Specify a format for the GET request" msgstr "" -#: awx/templates/rest_framework/base.html:86 +#: templates/rest_framework/base.html:86 #, python-format msgid "" "Make a GET request on the %(name)s resource with the format set to `" "%(format)s`" msgstr "" -#: awx/templates/rest_framework/base.html:100 +#: templates/rest_framework/base.html:100 #, python-format msgid "Make an OPTIONS request on the %(name)s resource" msgstr "" -#: awx/templates/rest_framework/base.html:106 +#: templates/rest_framework/base.html:106 #, python-format msgid "Make a DELETE request on the %(name)s resource" msgstr "" -#: awx/templates/rest_framework/base.html:113 +#: templates/rest_framework/base.html:113 msgid "Filters" msgstr "" -#: awx/templates/rest_framework/base.html:172 -#: awx/templates/rest_framework/base.html:186 +#: templates/rest_framework/base.html:172 +#: templates/rest_framework/base.html:186 #, python-format msgid "Make a POST request on the %(name)s resource" msgstr "" -#: awx/templates/rest_framework/base.html:216 -#: awx/templates/rest_framework/base.html:230 +#: templates/rest_framework/base.html:216 +#: templates/rest_framework/base.html:230 #, python-format msgid "Make a PUT request on the %(name)s resource" msgstr "" -#: awx/templates/rest_framework/base.html:233 +#: templates/rest_framework/base.html:233 #, python-format msgid "Make a PATCH request on the %(name)s resource" msgstr "" -#: awx/ui/apps.py:9 awx/ui/conf.py:22 awx/ui/conf.py:38 awx/ui/conf.py:53 +#: ui/apps.py:9 ui/conf.py:22 ui/conf.py:38 ui/conf.py:53 msgid "UI" msgstr "" -#: awx/ui/conf.py:16 +#: 
ui/conf.py:16 msgid "Off" msgstr "" -#: awx/ui/conf.py:17 +#: ui/conf.py:17 msgid "Anonymous" msgstr "" -#: awx/ui/conf.py:18 +#: ui/conf.py:18 msgid "Detailed" msgstr "" -#: awx/ui/conf.py:20 +#: ui/conf.py:20 msgid "Analytics Tracking State" msgstr "" -#: awx/ui/conf.py:21 +#: ui/conf.py:21 msgid "Enable or Disable Analytics Tracking." msgstr "" -#: awx/ui/conf.py:31 +#: ui/conf.py:31 msgid "Custom Login Info" msgstr "" -#: awx/ui/conf.py:32 +#: ui/conf.py:32 msgid "" "If needed, you can add specific information (such as a legal notice or a " "disclaimer) to a text box in the login modal using this setting. Any content " @@ -3408,42 +3564,42 @@ msgid "" "(paragraphs) must be escaped as `\\n` within the block of text." msgstr "" -#: awx/ui/conf.py:48 +#: ui/conf.py:48 msgid "Custom Logo" msgstr "" -#: awx/ui/conf.py:49 +#: ui/conf.py:49 msgid "" "To set up a custom logo, provide a file that you create. For the custom logo " "to look its best, use a `.png` file with a transparent background. GIF, PNG " "and JPEG formats are supported." msgstr "" -#: awx/ui/fields.py:29 +#: ui/fields.py:29 msgid "" "Invalid format for custom logo. Must be a data URL with a base64-encoded " "GIF, PNG or JPEG image." msgstr "" -#: awx/ui/fields.py:30 +#: ui/fields.py:30 msgid "Invalid base64-encoded data in data URL." msgstr "" -#: awx/ui/templates/ui/index.html:49 +#: ui/templates/ui/index.html:49 msgid "" "Your session will expire in 60 seconds, would you like to continue?" msgstr "" -#: awx/ui/templates/ui/index.html:64 +#: ui/templates/ui/index.html:64 msgid "CANCEL" msgstr "" -#: awx/ui/templates/ui/index.html:116 +#: ui/templates/ui/index.html:116 msgid "Set how many days of data should be retained." msgstr "" -#: awx/ui/templates/ui/index.html:122 +#: ui/templates/ui/index.html:122 msgid "" "Please enter an integer that is not " @@ -3452,7 +3608,7 @@ msgid "" "span>." 
msgstr "" -#: awx/ui/templates/ui/index.html:127 +#: ui/templates/ui/index.html:127 msgid "" "For facts collected older than the time period specified, save one fact scan " "(snapshot) per time window (frequency). For example, facts older than 30 " @@ -3464,11 +3620,11 @@ msgid "" "
" msgstr "" -#: awx/ui/templates/ui/index.html:136 +#: ui/templates/ui/index.html:136 msgid "Select a time period after which to remove old facts" msgstr "" -#: awx/ui/templates/ui/index.html:150 +#: ui/templates/ui/index.html:150 msgid "" "Please enter an integer " @@ -3477,11 +3633,11 @@ msgid "" "that is lower than 9999." msgstr "" -#: awx/ui/templates/ui/index.html:155 +#: ui/templates/ui/index.html:155 msgid "Select a frequency for snapshot retention" msgstr "" -#: awx/ui/templates/ui/index.html:169 +#: ui/templates/ui/index.html:169 msgid "" "Please enter an integer." msgstr "" -#: awx/ui/templates/ui/index.html:175 +#: ui/templates/ui/index.html:175 msgid "working..." msgstr "" diff --git a/awx/locale/en-us/LC_MESSAGES/django.po b/awx/locale/en-us/LC_MESSAGES/django.po new file mode 100644 index 0000000000..99aef1ef20 --- /dev/null +++ b/awx/locale/en-us/LC_MESSAGES/django.po @@ -0,0 +1,3637 @@ +# SOME DESCRIPTIVE TITLE. +# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER +# This file is distributed under the same license as the PACKAGE package. +# FIRST AUTHOR , YEAR. +# +#, fuzzy +msgid "" +msgstr "" +"Project-Id-Version: PACKAGE VERSION\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2016-12-14 21:27+0000\n" +"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n" +"Last-Translator: FULL NAME \n" +"Language-Team: LANGUAGE \n" +"Language: \n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" + +#: api/authentication.py:67 +msgid "Invalid token header. No credentials provided." +msgstr "" + +#: api/authentication.py:70 +msgid "Invalid token header. Token string should not contain spaces." 
+msgstr "" + +#: api/authentication.py:105 +msgid "User inactive or deleted" +msgstr "" + +#: api/authentication.py:161 +msgid "Invalid task token" +msgstr "" + +#: api/conf.py:12 +msgid "Idle Time Force Log Out" +msgstr "" + +#: api/conf.py:13 +msgid "" +"Number of seconds that a user is inactive before they will need to login " +"again." +msgstr "" + +#: api/conf.py:14 api/conf.py:24 api/conf.py:33 sso/conf.py:124 +#: sso/conf.py:135 sso/conf.py:147 sso/conf.py:162 +msgid "Authentication" +msgstr "" + +#: api/conf.py:22 +msgid "Maximum number of simultaneous logins" +msgstr "" + +#: api/conf.py:23 +msgid "" +"Maximum number of simultaneous logins a user may have. To disable enter -1." +msgstr "" + +#: api/conf.py:31 +msgid "Enable HTTP Basic Auth" +msgstr "" + +#: api/conf.py:32 +msgid "Enable HTTP Basic Auth for the API Browser." +msgstr "" + +#: api/generics.py:446 +msgid "\"id\" is required to disassociate" +msgstr "" + +#: api/metadata.py:50 +msgid "Database ID for this {}." +msgstr "" + +#: api/metadata.py:51 +msgid "Name of this {}." +msgstr "" + +#: api/metadata.py:52 +msgid "Optional description of this {}." +msgstr "" + +#: api/metadata.py:53 +msgid "Data type for this {}." +msgstr "" + +#: api/metadata.py:54 +msgid "URL for this {}." +msgstr "" + +#: api/metadata.py:55 +msgid "Data structure with URLs of related resources." +msgstr "" + +#: api/metadata.py:56 +msgid "Data structure with name/description for related resources." +msgstr "" + +#: api/metadata.py:57 +msgid "Timestamp when this {} was created." +msgstr "" + +#: api/metadata.py:58 +msgid "Timestamp when this {} was last modified." 
+msgstr "" + +#: api/parsers.py:31 +#, python-format +msgid "JSON parse error - %s" +msgstr "" + +#: api/serializers.py:248 +msgid "Playbook Run" +msgstr "" + +#: api/serializers.py:249 +msgid "Command" +msgstr "" + +#: api/serializers.py:250 +msgid "SCM Update" +msgstr "" + +#: api/serializers.py:251 +msgid "Inventory Sync" +msgstr "" + +#: api/serializers.py:252 +msgid "Management Job" +msgstr "" + +#: api/serializers.py:253 +msgid "Workflow Job" +msgstr "" + +#: api/serializers.py:655 api/serializers.py:713 api/views.py:3914 +#, python-format +msgid "" +"Standard Output too large to display (%(text_size)d bytes), only download " +"supported for sizes over %(supported_size)d bytes" +msgstr "" + +#: api/serializers.py:728 +msgid "Write-only field used to change the password." +msgstr "" + +#: api/serializers.py:730 +msgid "Set if the account is managed by an external service" +msgstr "" + +#: api/serializers.py:754 +msgid "Password required for new User." +msgstr "" + +#: api/serializers.py:838 +#, python-format +msgid "Unable to change %s on user managed by LDAP." +msgstr "" + +#: api/serializers.py:990 +msgid "Organization is missing" +msgstr "" + +#: api/serializers.py:996 +msgid "Array of playbooks available within this project." +msgstr "" + +#: api/serializers.py:1178 +#, python-format +msgid "Invalid port specification: %s" +msgstr "" + +#: api/serializers.py:1206 main/validators.py:192 +msgid "Must be valid JSON or YAML." +msgstr "" + +#: api/serializers.py:1263 +msgid "Invalid group name." +msgstr "" + +#: api/serializers.py:1338 +msgid "" +"Script must begin with a hashbang sequence: i.e.... #!/usr/bin/env python" +msgstr "" + +#: api/serializers.py:1391 +msgid "If 'source' is 'custom', 'source_script' must be provided." +msgstr "" + +#: api/serializers.py:1395 +msgid "" +"The 'source_script' does not belong to the same organization as the " +"inventory." +msgstr "" + +#: api/serializers.py:1397 +msgid "'source_script' doesn't exist." 
+msgstr "" + +#: api/serializers.py:1756 +msgid "" +"Write-only field used to add user to owner role. If provided, do not give " +"either team or organization. Only valid for creation." +msgstr "" + +#: api/serializers.py:1761 +msgid "" +"Write-only field used to add team to owner role. If provided, do not give " +"either user or organization. Only valid for creation." +msgstr "" + +#: api/serializers.py:1766 +msgid "" +"Inherit permissions from organization roles. If provided on creation, do not " +"give either user or team." +msgstr "" + +#: api/serializers.py:1782 +msgid "Missing 'user', 'team', or 'organization'." +msgstr "" + +#: api/serializers.py:1795 +msgid "" +"Credential organization must be set and match before assigning to a team" +msgstr "" + +#: api/serializers.py:1887 +msgid "This field is required." +msgstr "" + +#: api/serializers.py:1889 api/serializers.py:1891 +msgid "Playbook not found for project." +msgstr "" + +#: api/serializers.py:1893 +msgid "Must select playbook for project." +msgstr "" + +#: api/serializers.py:1957 main/models/jobs.py:280 +msgid "Scan jobs must be assigned a fixed inventory." +msgstr "" + +#: api/serializers.py:1959 main/models/jobs.py:283 +msgid "Job types 'run' and 'check' must have assigned a project." +msgstr "" + +#: api/serializers.py:1962 +msgid "Survey Enabled cannot be used with scan jobs." +msgstr "" + +#: api/serializers.py:2024 +msgid "Invalid job template." +msgstr "" + +#: api/serializers.py:2109 +msgid "Credential not found or deleted." +msgstr "" + +#: api/serializers.py:2111 +msgid "Job Template Project is missing or undefined." +msgstr "" + +#: api/serializers.py:2113 +msgid "Job Template Inventory is missing or undefined." +msgstr "" + +#: api/serializers.py:2398 +#, python-format +msgid "%(job_type)s is not a valid job type. The choices are %(choices)s." +msgstr "" + +#: api/serializers.py:2403 +msgid "Workflow job template is missing during creation." 
+msgstr "" + +#: api/serializers.py:2408 +#, python-format +msgid "Cannot nest a %s inside a WorkflowJobTemplate" +msgstr "" + +#: api/serializers.py:2646 +#, python-format +msgid "Job Template '%s' is missing or undefined." +msgstr "" + +#: api/serializers.py:2672 +msgid "Must be a valid JSON or YAML dictionary." +msgstr "" + +#: api/serializers.py:2817 +msgid "" +"Missing required fields for Notification Configuration: notification_type" +msgstr "" + +#: api/serializers.py:2840 +msgid "No values specified for field '{}'" +msgstr "" + +#: api/serializers.py:2845 +msgid "Missing required fields for Notification Configuration: {}." +msgstr "" + +#: api/serializers.py:2848 +msgid "Configuration field '{}' incorrect type, expected {}." +msgstr "" + +#: api/serializers.py:2901 +msgid "Inventory Source must be a cloud resource." +msgstr "" + +#: api/serializers.py:2903 +msgid "Manual Project can not have a schedule set." +msgstr "" + +#: api/serializers.py:2925 +msgid "DTSTART required in rrule. Value should match: DTSTART:YYYYMMDDTHHMMSSZ" +msgstr "" + +#: api/serializers.py:2927 +msgid "Multiple DTSTART is not supported." +msgstr "" + +#: api/serializers.py:2929 +msgid "RRULE require in rrule." +msgstr "" + +#: api/serializers.py:2931 +msgid "Multiple RRULE is not supported." +msgstr "" + +#: api/serializers.py:2933 +msgid "INTERVAL required in rrule." +msgstr "" + +#: api/serializers.py:2935 +msgid "TZID is not supported." +msgstr "" + +#: api/serializers.py:2937 +msgid "SECONDLY is not supported." +msgstr "" + +#: api/serializers.py:2939 +msgid "Multiple BYMONTHDAYs not supported." +msgstr "" + +#: api/serializers.py:2941 +msgid "Multiple BYMONTHs not supported." +msgstr "" + +#: api/serializers.py:2943 +msgid "BYDAY with numeric prefix not supported." +msgstr "" + +#: api/serializers.py:2945 +msgid "BYYEARDAY not supported." +msgstr "" + +#: api/serializers.py:2947 +msgid "BYWEEKNO not supported." 
+msgstr "" + +#: api/serializers.py:2951 +msgid "COUNT > 999 is unsupported." +msgstr "" + +#: api/serializers.py:2955 +msgid "rrule parsing failed validation." +msgstr "" + +#: api/serializers.py:2973 +msgid "" +"A summary of the new and changed values when an object is created, updated, " +"or deleted" +msgstr "" + +#: api/serializers.py:2975 +msgid "" +"For create, update, and delete events this is the object type that was " +"affected. For associate and disassociate events this is the object type " +"associated or disassociated with object2." +msgstr "" + +#: api/serializers.py:2978 +msgid "" +"Unpopulated for create, update, and delete events. For associate and " +"disassociate events this is the object type that object1 is being associated " +"with." +msgstr "" + +#: api/serializers.py:2981 +msgid "The action taken with respect to the given object(s)." +msgstr "" + +#: api/serializers.py:3081 +msgid "Unable to login with provided credentials." +msgstr "" + +#: api/serializers.py:3083 +msgid "Must include \"username\" and \"password\"." +msgstr "" + +#: api/views.py:96 +msgid "Your license does not allow use of the activity stream." +msgstr "" + +#: api/views.py:106 +msgid "Your license does not permit use of system tracking." +msgstr "" + +#: api/views.py:116 +msgid "Your license does not allow use of workflows." 
+msgstr "" + +#: api/views.py:124 templates/rest_framework/api.html:28 +msgid "REST API" +msgstr "" + +#: api/views.py:131 templates/rest_framework/api.html:4 +msgid "Ansible Tower REST API" +msgstr "" + +#: api/views.py:147 +msgid "Version 1" +msgstr "" + +#: api/views.py:198 +msgid "Ping" +msgstr "" + +#: api/views.py:227 conf/apps.py:12 +msgid "Configuration" +msgstr "" + +#: api/views.py:280 +msgid "Invalid license data" +msgstr "" + +#: api/views.py:282 +msgid "Missing 'eula_accepted' property" +msgstr "" + +#: api/views.py:286 +msgid "'eula_accepted' value is invalid" +msgstr "" + +#: api/views.py:289 +msgid "'eula_accepted' must be True" +msgstr "" + +#: api/views.py:296 +msgid "Invalid JSON" +msgstr "" + +#: api/views.py:304 +msgid "Invalid License" +msgstr "" + +#: api/views.py:314 +msgid "Invalid license" +msgstr "" + +#: api/views.py:322 +#, python-format +msgid "Failed to remove license (%s)" +msgstr "" + +#: api/views.py:327 +msgid "Dashboard" +msgstr "" + +#: api/views.py:433 +msgid "Dashboard Jobs Graphs" +msgstr "" + +#: api/views.py:469 +#, python-format +msgid "Unknown period \"%s\"" +msgstr "" + +#: api/views.py:483 +msgid "Schedules" +msgstr "" + +#: api/views.py:502 +msgid "Schedule Jobs List" +msgstr "" + +#: api/views.py:711 +msgid "Your Tower license only permits a single organization to exist." +msgstr "" + +#: api/views.py:932 api/views.py:1284 +msgid "Role 'id' field is missing." +msgstr "" + +#: api/views.py:938 api/views.py:4182 +msgid "You cannot assign an Organization role as a child role for a Team." +msgstr "" + +#: api/views.py:942 api/views.py:4196 +msgid "You cannot grant system-level permissions to a team." +msgstr "" + +#: api/views.py:949 api/views.py:4188 +msgid "" +"You cannot grant credential access to a team when the Organization field " +"isn't set, or belongs to a different organization" +msgstr "" + +#: api/views.py:1039 +msgid "Cannot delete project." 
+msgstr "" + +#: api/views.py:1068 +msgid "Project Schedules" +msgstr "" + +#: api/views.py:1168 api/views.py:2252 api/views.py:3225 +msgid "Cannot delete job resource when associated workflow job is running." +msgstr "" + +#: api/views.py:1244 +msgid "Me" +msgstr "" + +#: api/views.py:1288 api/views.py:4137 +msgid "You may not perform any action with your own admin_role." +msgstr "" + +#: api/views.py:1294 api/views.py:4141 +msgid "You may not change the membership of a users admin_role" +msgstr "" + +#: api/views.py:1299 api/views.py:4146 +msgid "" +"You cannot grant credential access to a user not in the credentials' " +"organization" +msgstr "" + +#: api/views.py:1303 api/views.py:4150 +msgid "You cannot grant private credential access to another user" +msgstr "" + +#: api/views.py:1401 +#, python-format +msgid "Cannot change %s." +msgstr "" + +#: api/views.py:1407 +msgid "Cannot delete user." +msgstr "" + +#: api/views.py:1553 +msgid "Cannot delete inventory script." +msgstr "" + +#: api/views.py:1788 +msgid "Fact not found." +msgstr "" + +#: api/views.py:2108 +msgid "Inventory Source List" +msgstr "" + +#: api/views.py:2136 +msgid "Cannot delete inventory source." +msgstr "" + +#: api/views.py:2144 +msgid "Inventory Source Schedules" +msgstr "" + +#: api/views.py:2173 +msgid "Notification Templates can only be assigned when source is one of {}." +msgstr "" + +#: api/views.py:2380 +msgid "Job Template Schedules" +msgstr "" + +#: api/views.py:2399 api/views.py:2409 +msgid "Your license does not allow adding surveys." +msgstr "" + +#: api/views.py:2416 +msgid "'name' missing from survey spec." +msgstr "" + +#: api/views.py:2418 +msgid "'description' missing from survey spec." +msgstr "" + +#: api/views.py:2420 +msgid "'spec' missing from survey spec." +msgstr "" + +#: api/views.py:2422 +msgid "'spec' must be a list of items." +msgstr "" + +#: api/views.py:2424 +msgid "'spec' doesn't contain any items." 
+msgstr "" + +#: api/views.py:2429 +#, python-format +msgid "Survey question %s is not a json object." +msgstr "" + +#: api/views.py:2431 +#, python-format +msgid "'type' missing from survey question %s." +msgstr "" + +#: api/views.py:2433 +#, python-format +msgid "'question_name' missing from survey question %s." +msgstr "" + +#: api/views.py:2435 +#, python-format +msgid "'variable' missing from survey question %s." +msgstr "" + +#: api/views.py:2437 +#, python-format +msgid "'variable' '%(item)s' duplicated in survey question %(survey)s." +msgstr "" + +#: api/views.py:2442 +#, python-format +msgid "'required' missing from survey question %s." +msgstr "" + +#: api/views.py:2641 +msgid "No matching host could be found!" +msgstr "" + +#: api/views.py:2644 +msgid "Multiple hosts matched the request!" +msgstr "" + +#: api/views.py:2649 +msgid "Cannot start automatically, user input required!" +msgstr "" + +#: api/views.py:2656 +msgid "Host callback job already pending." +msgstr "" + +#: api/views.py:2669 +msgid "Error starting job!" +msgstr "" + +#: api/views.py:2995 +msgid "Workflow Job Template Schedules" +msgstr "" + +#: api/views.py:3131 api/views.py:3853 +msgid "Superuser privileges needed." +msgstr "" + +#: api/views.py:3161 +msgid "System Job Template Schedules" +msgstr "" + +#: api/views.py:3344 +msgid "Job Host Summaries List" +msgstr "" + +#: api/views.py:3386 +msgid "Job Event Children List" +msgstr "" + +#: api/views.py:3395 +msgid "Job Event Hosts List" +msgstr "" + +#: api/views.py:3404 +msgid "Job Events List" +msgstr "" + +#: api/views.py:3436 +msgid "Job Plays List" +msgstr "" + +#: api/views.py:3513 +msgid "Job Play Tasks List" +msgstr "" + +#: api/views.py:3529 +msgid "Job not found." +msgstr "" + +#: api/views.py:3533 +msgid "'event_id' not provided." +msgstr "" + +#: api/views.py:3537 +msgid "Parent event not found." 
+msgstr "" + +#: api/views.py:3809 +msgid "Ad Hoc Command Events List" +msgstr "" + +#: api/views.py:3963 +#, python-format +msgid "Error generating stdout download file: %s" +msgstr "" + +#: api/views.py:4009 +msgid "Delete not allowed while there are pending notifications" +msgstr "" + +#: api/views.py:4016 +msgid "NotificationTemplate Test" +msgstr "" + +#: api/views.py:4131 +msgid "User 'id' field is missing." +msgstr "" + +#: api/views.py:4174 +msgid "Team 'id' field is missing." +msgstr "" + +#: conf/conf.py:20 +msgid "Bud Frogs" +msgstr "" + +#: conf/conf.py:21 +msgid "Bunny" +msgstr "" + +#: conf/conf.py:22 +msgid "Cheese" +msgstr "" + +#: conf/conf.py:23 +msgid "Daemon" +msgstr "" + +#: conf/conf.py:24 +msgid "Default Cow" +msgstr "" + +#: conf/conf.py:25 +msgid "Dragon" +msgstr "" + +#: conf/conf.py:26 +msgid "Elephant in Snake" +msgstr "" + +#: conf/conf.py:27 +msgid "Elephant" +msgstr "" + +#: conf/conf.py:28 +msgid "Eyes" +msgstr "" + +#: conf/conf.py:29 +msgid "Hello Kitty" +msgstr "" + +#: conf/conf.py:30 +msgid "Kitty" +msgstr "" + +#: conf/conf.py:31 +msgid "Luke Koala" +msgstr "" + +#: conf/conf.py:32 +msgid "Meow" +msgstr "" + +#: conf/conf.py:33 +msgid "Milk" +msgstr "" + +#: conf/conf.py:34 +msgid "Moofasa" +msgstr "" + +#: conf/conf.py:35 +msgid "Moose" +msgstr "" + +#: conf/conf.py:36 +msgid "Ren" +msgstr "" + +#: conf/conf.py:37 +msgid "Sheep" +msgstr "" + +#: conf/conf.py:38 +msgid "Small Cow" +msgstr "" + +#: conf/conf.py:39 +msgid "Stegosaurus" +msgstr "" + +#: conf/conf.py:40 +msgid "Stimpy" +msgstr "" + +#: conf/conf.py:41 +msgid "Super Milker" +msgstr "" + +#: conf/conf.py:42 +msgid "Three Eyes" +msgstr "" + +#: conf/conf.py:43 +msgid "Turkey" +msgstr "" + +#: conf/conf.py:44 +msgid "Turtle" +msgstr "" + +#: conf/conf.py:45 +msgid "Tux" +msgstr "" + +#: conf/conf.py:46 +msgid "Udder" +msgstr "" + +#: conf/conf.py:47 +msgid "Vader Koala" +msgstr "" + +#: conf/conf.py:48 +msgid "Vader" +msgstr "" + +#: conf/conf.py:49 +msgid "WWW" 
+msgstr "" + +#: conf/conf.py:52 +msgid "Cow Selection" +msgstr "" + +#: conf/conf.py:53 +msgid "Select which cow to use with cowsay when running jobs." +msgstr "" + +#: conf/conf.py:54 conf/conf.py:75 +msgid "Cows" +msgstr "" + +#: conf/conf.py:73 +msgid "Example Read-Only Setting" +msgstr "" + +#: conf/conf.py:74 +msgid "Example setting that cannot be changed." +msgstr "" + +#: conf/conf.py:93 +msgid "Example Setting" +msgstr "" + +#: conf/conf.py:94 +msgid "Example setting which can be different for each user." +msgstr "" + +#: conf/conf.py:95 conf/registry.py:67 conf/views.py:46 +msgid "User" +msgstr "" + +#: conf/fields.py:38 +msgid "Enter a valid URL" +msgstr "" + +#: conf/license.py:19 +msgid "Your Tower license does not allow that." +msgstr "" + +#: conf/management/commands/migrate_to_database_settings.py:41 +msgid "Only show which settings would be commented/migrated." +msgstr "" + +#: conf/management/commands/migrate_to_database_settings.py:48 +msgid "Skip over settings that would raise an error when commenting/migrating." +msgstr "" + +#: conf/management/commands/migrate_to_database_settings.py:55 +msgid "Skip commenting out settings in files." +msgstr "" + +#: conf/management/commands/migrate_to_database_settings.py:61 +msgid "Backup existing settings files with this suffix." +msgstr "" + +#: conf/registry.py:55 +msgid "All" +msgstr "" + +#: conf/registry.py:56 +msgid "Changed" +msgstr "" + +#: conf/registry.py:68 +msgid "User-Defaults" +msgstr "" + +#: conf/views.py:38 +msgid "Setting Categories" +msgstr "" + +#: conf/views.py:61 +msgid "Setting Detail" +msgstr "" + +#: main/access.py:255 +#, python-format +msgid "Bad data found in related field %s." +msgstr "" + +#: main/access.py:296 +msgid "License is missing." +msgstr "" + +#: main/access.py:298 +msgid "License has expired." +msgstr "" + +#: main/access.py:303 +#, python-format +msgid "License count of %s instances has been reached." 
+msgstr "" + +#: main/access.py:305 +#, python-format +msgid "License count of %s instances has been exceeded." +msgstr "" + +#: main/access.py:307 +msgid "Host count exceeds available instances." +msgstr "" + +#: main/access.py:311 +#, python-format +msgid "Feature %s is not enabled in the active license." +msgstr "" + +#: main/access.py:313 +msgid "Features not found in active license." +msgstr "" + +#: main/access.py:507 main/access.py:574 main/access.py:694 main/access.py:957 +#: main/access.py:1198 main/access.py:1587 +msgid "Resource is being used by running jobs" +msgstr "" + +#: main/access.py:618 +msgid "Unable to change inventory on a host." +msgstr "" + +#: main/access.py:630 main/access.py:675 +msgid "Cannot associate two items from different inventories." +msgstr "" + +#: main/access.py:663 +msgid "Unable to change inventory on a group." +msgstr "" + +#: main/access.py:877 +msgid "Unable to change organization on a team." +msgstr "" + +#: main/access.py:890 +msgid "The {} role cannot be assigned to a team" +msgstr "" + +#: main/access.py:892 +msgid "The admin_role for a User cannot be assigned to a team" +msgstr "" + +#: main/apps.py:9 +msgid "Main" +msgstr "" + +#: main/conf.py:17 +msgid "Enable Activity Stream" +msgstr "" + +#: main/conf.py:18 +msgid "Enable capturing activity for the Tower activity stream." +msgstr "" + +#: main/conf.py:19 main/conf.py:29 main/conf.py:39 main/conf.py:48 +#: main/conf.py:60 main/conf.py:78 main/conf.py:103 +msgid "System" +msgstr "" + +#: main/conf.py:27 +msgid "Enable Activity Stream for Inventory Sync" +msgstr "" + +#: main/conf.py:28 +msgid "" +"Enable capturing activity for the Tower activity stream when running " +"inventory sync." +msgstr "" + +#: main/conf.py:37 +msgid "All Users Visible to Organization Admins" +msgstr "" + +#: main/conf.py:38 +msgid "" +"Controls whether any Organization Admin can view all users, even those not " +"associated with their Organization." 
+msgstr "" + +#: main/conf.py:46 +msgid "Enable Tower Administrator Alerts" +msgstr "" + +#: main/conf.py:47 +msgid "" +"Allow Tower to email Admin users for system events that may require " +"attention." +msgstr "" + +#: main/conf.py:57 +msgid "Base URL of the Tower host" +msgstr "" + +#: main/conf.py:58 +msgid "" +"This setting is used by services like notifications to render a valid url to " +"the Tower host." +msgstr "" + +#: main/conf.py:67 +msgid "Remote Host Headers" +msgstr "" + +#: main/conf.py:68 +msgid "" +"HTTP headers and meta keys to search to determine remote host name or IP. " +"Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if " +"behind a reverse proxy.\n" +"\n" +"Note: The headers will be searched in order and the first found remote host " +"name or IP will be used.\n" +"\n" +"In the below example 8.8.8.7 would be the chosen IP address.\n" +"X-Forwarded-For: 8.8.8.7, 192.168.2.1, 127.0.0.1\n" +"Host: 127.0.0.1\n" +"REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']" +msgstr "" + +#: main/conf.py:99 +msgid "Tower License" +msgstr "" + +#: main/conf.py:100 +msgid "" +"The license controls which features and functionality are enabled in Tower. " +"Use /api/v1/config/ to update or change the license." +msgstr "" + +#: main/conf.py:110 +msgid "Ansible Modules Allowed for Ad Hoc Jobs" +msgstr "" + +#: main/conf.py:111 +msgid "List of modules allowed to be used by ad-hoc jobs." +msgstr "" + +#: main/conf.py:112 main/conf.py:121 main/conf.py:130 main/conf.py:139 +#: main/conf.py:148 main/conf.py:158 main/conf.py:168 main/conf.py:178 +#: main/conf.py:187 main/conf.py:199 main/conf.py:211 main/conf.py:223 +msgid "Jobs" +msgstr "" + +#: main/conf.py:119 +msgid "Enable job isolation" +msgstr "" + +#: main/conf.py:120 +msgid "" +"Isolates an Ansible job from protected parts of the Tower system to prevent " +"exposing sensitive information." 
+msgstr "" + +#: main/conf.py:128 +msgid "Job isolation execution path" +msgstr "" + +#: main/conf.py:129 +msgid "" +"Create temporary working directories for isolated jobs in this location." +msgstr "" + +#: main/conf.py:137 +msgid "Paths to hide from isolated jobs" +msgstr "" + +#: main/conf.py:138 +msgid "Additional paths to hide from isolated processes." +msgstr "" + +#: main/conf.py:146 +msgid "Paths to expose to isolated jobs" +msgstr "" + +#: main/conf.py:147 +msgid "" +"Whitelist of paths that would otherwise be hidden to expose to isolated jobs." +msgstr "" + +#: main/conf.py:156 +msgid "Standard Output Maximum Display Size" +msgstr "" + +#: main/conf.py:157 +msgid "" +"Maximum Size of Standard Output in bytes to display before requiring the " +"output be downloaded." +msgstr "" + +#: main/conf.py:166 +msgid "Job Event Standard Output Maximum Display Size" +msgstr "" + +#: main/conf.py:167 +msgid "" +"Maximum Size of Standard Output in bytes to display for a single job or ad " +"hoc command event. `stdout` will end with `…` when truncated." +msgstr "" + +#: main/conf.py:176 +msgid "Maximum Scheduled Jobs" +msgstr "" + +#: main/conf.py:177 +msgid "" +"Maximum number of the same job template that can be waiting to run when " +"launching from a schedule before no more are created." +msgstr "" + +#: main/conf.py:185 +msgid "Ansible Callback Plugins" +msgstr "" + +#: main/conf.py:186 +msgid "" +"List of paths to search for extra callback plugins to be used when running " +"jobs." +msgstr "" + +#: main/conf.py:196 +msgid "Default Job Timeout" +msgstr "" + +#: main/conf.py:197 +msgid "" +"Maximum time to allow jobs to run. Use value of 0 to indicate that no " +"timeout should be imposed. A timeout set on an individual job template will " +"override this." +msgstr "" + +#: main/conf.py:208 +msgid "Default Inventory Update Timeout" +msgstr "" + +#: main/conf.py:209 +msgid "" +"Maximum time to allow inventory updates to run. 
Use value of 0 to indicate " +"that no timeout should be imposed. A timeout set on an individual inventory " +"source will override this." +msgstr "" + +#: main/conf.py:220 +msgid "Default Project Update Timeout" +msgstr "" + +#: main/conf.py:221 +msgid "" +"Maximum time to allow project updates to run. Use value of 0 to indicate " +"that no timeout should be imposed. A timeout set on an individual project " +"will override this." +msgstr "" + +#: main/conf.py:231 +msgid "Logging Aggregator Receiving Host" +msgstr "" + +#: main/conf.py:232 +msgid "External host maintaining a log collector to send logs to" +msgstr "" + +#: main/conf.py:233 main/conf.py:242 main/conf.py:252 main/conf.py:261 +#: main/conf.py:271 main/conf.py:286 main/conf.py:297 main/conf.py:306 +msgid "Logging" +msgstr "" + +#: main/conf.py:240 +msgid "Logging Aggregator Receiving Port" +msgstr "" + +#: main/conf.py:241 +msgid "Port that the log collector is listening on" +msgstr "" + +#: main/conf.py:250 +msgid "Logging Aggregator Type: Logstash, Loggly, Datadog, etc" +msgstr "" + +#: main/conf.py:251 +msgid "The type of log aggregator service to format messages for" +msgstr "" + +#: main/conf.py:259 +msgid "Logging Aggregator Username to Authenticate With" +msgstr "" + +#: main/conf.py:260 +msgid "Username for Logstash or others (basic auth)" +msgstr "" + +#: main/conf.py:269 +msgid "Logging Aggregator Password to Authenticate With" +msgstr "" + +#: main/conf.py:270 +msgid "Password for Logstash or others (basic auth)" +msgstr "" + +#: main/conf.py:278 +msgid "Loggers to send data to the log aggregator from" +msgstr "" + +#: main/conf.py:279 +msgid "" +"List of loggers that will send HTTP logs to the collector; these can include " +"any or all of: \n" +"activity_stream - logs duplicates of records entered in the activity stream\n" +"job_events - callback data from Ansible job events\n" +"system_tracking - data generated from scan jobs\n" +"Sending generic Tower logs must be configured through 
local_settings." +"py instead of this mechanism." +msgstr "" + +#: main/conf.py:293 +msgid "" +"Flag denoting whether to send individual messages for each fact in system tracking" +msgstr "" + +#: main/conf.py:294 +msgid "" +"If not set, the data from system tracking will be sent inside of a single " +"dictionary, but if set, separate requests will be sent for each package, " +"service, etc. that is found in the scan." +msgstr "" + +#: main/conf.py:304 +msgid "Flag denoting whether to use the external logger system" +msgstr "" + +#: main/conf.py:305 +msgid "" +"If not set, only normal settings data will be used to configure loggers." +msgstr "" + +#: main/models/activity_stream.py:22 +msgid "Entity Created" +msgstr "" + +#: main/models/activity_stream.py:23 +msgid "Entity Updated" +msgstr "" + +#: main/models/activity_stream.py:24 +msgid "Entity Deleted" +msgstr "" + +#: main/models/activity_stream.py:25 +msgid "Entity Associated with another Entity" +msgstr "" + +#: main/models/activity_stream.py:26 +msgid "Entity was Disassociated with another Entity" +msgstr "" + +#: main/models/ad_hoc_commands.py:96 +msgid "No valid inventory." +msgstr "" + +#: main/models/ad_hoc_commands.py:103 main/models/jobs.py:163 +msgid "You must provide a machine / SSH credential." +msgstr "" + +#: main/models/ad_hoc_commands.py:114 main/models/ad_hoc_commands.py:122 +msgid "Invalid type for ad hoc command" +msgstr "" + +#: main/models/ad_hoc_commands.py:117 +msgid "Unsupported module for ad hoc commands." +msgstr "" + +#: main/models/ad_hoc_commands.py:125 +#, python-format +msgid "No argument passed to %s module." 
+msgstr "" + +#: main/models/ad_hoc_commands.py:220 main/models/jobs.py:767 +msgid "Host Failed" +msgstr "" + +#: main/models/ad_hoc_commands.py:221 main/models/jobs.py:768 +msgid "Host OK" +msgstr "" + +#: main/models/ad_hoc_commands.py:222 main/models/jobs.py:771 +msgid "Host Unreachable" +msgstr "" + +#: main/models/ad_hoc_commands.py:227 main/models/jobs.py:770 +msgid "Host Skipped" +msgstr "" + +#: main/models/ad_hoc_commands.py:237 main/models/jobs.py:798 +msgid "Debug" +msgstr "" + +#: main/models/ad_hoc_commands.py:238 main/models/jobs.py:799 +msgid "Verbose" +msgstr "" + +#: main/models/ad_hoc_commands.py:239 main/models/jobs.py:800 +msgid "Deprecated" +msgstr "" + +#: main/models/ad_hoc_commands.py:240 main/models/jobs.py:801 +msgid "Warning" +msgstr "" + +#: main/models/ad_hoc_commands.py:241 main/models/jobs.py:802 +msgid "System Warning" +msgstr "" + +#: main/models/ad_hoc_commands.py:242 main/models/jobs.py:803 +#: main/models/unified_jobs.py:62 +msgid "Error" +msgstr "" + +#: main/models/base.py:45 main/models/base.py:51 main/models/base.py:56 +msgid "Run" +msgstr "" + +#: main/models/base.py:46 main/models/base.py:52 main/models/base.py:57 +msgid "Check" +msgstr "" + +#: main/models/base.py:47 +msgid "Scan" +msgstr "" + +#: main/models/base.py:61 +msgid "Read Inventory" +msgstr "" + +#: main/models/base.py:62 +msgid "Edit Inventory" +msgstr "" + +#: main/models/base.py:63 +msgid "Administrate Inventory" +msgstr "" + +#: main/models/base.py:64 +msgid "Deploy To Inventory" +msgstr "" + +#: main/models/base.py:65 +msgid "Deploy To Inventory (Dry Run)" +msgstr "" + +#: main/models/base.py:66 +msgid "Scan an Inventory" +msgstr "" + +#: main/models/base.py:67 +msgid "Create a Job Template" +msgstr "" + +#: main/models/credential.py:33 +msgid "Machine" +msgstr "" + +#: main/models/credential.py:34 +msgid "Network" +msgstr "" + +#: main/models/credential.py:35 +msgid "Source Control" +msgstr "" + +#: main/models/credential.py:36 +msgid "Amazon Web Services" 
+msgstr "" + +#: main/models/credential.py:37 +msgid "Rackspace" +msgstr "" + +#: main/models/credential.py:38 main/models/inventory.py:713 +msgid "VMware vCenter" +msgstr "" + +#: main/models/credential.py:39 main/models/inventory.py:714 +msgid "Red Hat Satellite 6" +msgstr "" + +#: main/models/credential.py:40 main/models/inventory.py:715 +msgid "Red Hat CloudForms" +msgstr "" + +#: main/models/credential.py:41 main/models/inventory.py:710 +msgid "Google Compute Engine" +msgstr "" + +#: main/models/credential.py:42 main/models/inventory.py:711 +msgid "Microsoft Azure Classic (deprecated)" +msgstr "" + +#: main/models/credential.py:43 main/models/inventory.py:712 +msgid "Microsoft Azure Resource Manager" +msgstr "" + +#: main/models/credential.py:44 main/models/inventory.py:716 +msgid "OpenStack" +msgstr "" + +#: main/models/credential.py:48 +msgid "None" +msgstr "" + +#: main/models/credential.py:49 +msgid "Sudo" +msgstr "" + +#: main/models/credential.py:50 +msgid "Su" +msgstr "" + +#: main/models/credential.py:51 +msgid "Pbrun" +msgstr "" + +#: main/models/credential.py:52 +msgid "Pfexec" +msgstr "" + +#: main/models/credential.py:101 +msgid "Host" +msgstr "" + +#: main/models/credential.py:102 +msgid "The hostname or IP address to use." +msgstr "" + +#: main/models/credential.py:108 +msgid "Username" +msgstr "" + +#: main/models/credential.py:109 +msgid "Username for this credential." +msgstr "" + +#: main/models/credential.py:115 +msgid "Password" +msgstr "" + +#: main/models/credential.py:116 +msgid "" +"Password for this credential (or \"ASK\" to prompt the user for machine " +"credentials)." +msgstr "" + +#: main/models/credential.py:123 +msgid "Security Token" +msgstr "" + +#: main/models/credential.py:124 +msgid "Security Token for this credential" +msgstr "" + +#: main/models/credential.py:130 +msgid "Project" +msgstr "" + +#: main/models/credential.py:131 +msgid "The identifier for the project." 
+msgstr "" + +#: main/models/credential.py:137 +msgid "Domain" +msgstr "" + +#: main/models/credential.py:138 +msgid "The identifier for the domain." +msgstr "" + +#: main/models/credential.py:143 +msgid "SSH private key" +msgstr "" + +#: main/models/credential.py:144 +msgid "RSA or DSA private key to be used instead of password." +msgstr "" + +#: main/models/credential.py:150 +msgid "SSH key unlock" +msgstr "" + +#: main/models/credential.py:151 +msgid "" +"Passphrase to unlock SSH private key if encrypted (or \"ASK\" to prompt the " +"user for machine credentials)." +msgstr "" + +#: main/models/credential.py:159 +msgid "Privilege escalation method." +msgstr "" + +#: main/models/credential.py:165 +msgid "Privilege escalation username." +msgstr "" + +#: main/models/credential.py:171 +msgid "Password for privilege escalation method." +msgstr "" + +#: main/models/credential.py:177 +msgid "Vault password (or \"ASK\" to prompt the user)." +msgstr "" + +#: main/models/credential.py:181 +msgid "Whether to use the authorize mechanism." +msgstr "" + +#: main/models/credential.py:187 +msgid "Password used by the authorize mechanism." +msgstr "" + +#: main/models/credential.py:193 +msgid "Client Id or Application Id for the credential" +msgstr "" + +#: main/models/credential.py:199 +msgid "Secret Token for this credential" +msgstr "" + +#: main/models/credential.py:205 +msgid "Subscription identifier for this credential" +msgstr "" + +#: main/models/credential.py:211 +msgid "Tenant identifier for this credential" +msgstr "" + +#: main/models/credential.py:281 +msgid "Host required for VMware credential." +msgstr "" + +#: main/models/credential.py:283 +msgid "Host required for OpenStack credential." +msgstr "" + +#: main/models/credential.py:292 +msgid "Access key required for AWS credential." +msgstr "" + +#: main/models/credential.py:294 +msgid "Username required for Rackspace credential." 
+msgstr "" + +#: main/models/credential.py:297 +msgid "Username required for VMware credential." +msgstr "" + +#: main/models/credential.py:299 +msgid "Username required for OpenStack credential." +msgstr "" + +#: main/models/credential.py:305 +msgid "Secret key required for AWS credential." +msgstr "" + +#: main/models/credential.py:307 +msgid "API key required for Rackspace credential." +msgstr "" + +#: main/models/credential.py:309 +msgid "Password required for VMware credential." +msgstr "" + +#: main/models/credential.py:311 +msgid "Password or API key required for OpenStack credential." +msgstr "" + +#: main/models/credential.py:317 +msgid "Project name required for OpenStack credential." +msgstr "" + +#: main/models/credential.py:344 +msgid "SSH key unlock must be set when SSH key is encrypted." +msgstr "" + +#: main/models/credential.py:350 +msgid "Credential cannot be assigned to both a user and team." +msgstr "" + +#: main/models/fact.py:21 +msgid "Host for the facts that the fact scan captured." +msgstr "" + +#: main/models/fact.py:26 +msgid "Date and time of the corresponding fact scan gathering time." +msgstr "" + +#: main/models/fact.py:29 +msgid "" +"Arbitrary JSON structure of module facts captured at timestamp for a single " +"host." +msgstr "" + +#: main/models/inventory.py:45 +msgid "inventories" +msgstr "" + +#: main/models/inventory.py:52 +msgid "Organization containing this inventory." +msgstr "" + +#: main/models/inventory.py:58 +msgid "Inventory variables in JSON or YAML format." +msgstr "" + +#: main/models/inventory.py:63 +msgid "Flag indicating whether any hosts in this inventory have failed." +msgstr "" + +#: main/models/inventory.py:68 +msgid "Total number of hosts in this inventory." +msgstr "" + +#: main/models/inventory.py:73 +msgid "Number of hosts in this inventory with active failures." +msgstr "" + +#: main/models/inventory.py:78 +msgid "Total number of groups in this inventory." 
+msgstr "" + +#: main/models/inventory.py:83 +msgid "Number of groups in this inventory with active failures." +msgstr "" + +#: main/models/inventory.py:88 +msgid "" +"Flag indicating whether this inventory has any external inventory sources." +msgstr "" + +#: main/models/inventory.py:93 +msgid "" +"Total number of external inventory sources configured within this inventory." +msgstr "" + +#: main/models/inventory.py:98 +msgid "Number of external inventory sources in this inventory with failures." +msgstr "" + +#: main/models/inventory.py:339 +msgid "Is this host online and available for running jobs?" +msgstr "" + +#: main/models/inventory.py:345 +msgid "" +"The value used by the remote inventory source to uniquely identify the host" +msgstr "" + +#: main/models/inventory.py:350 +msgid "Host variables in JSON or YAML format." +msgstr "" + +#: main/models/inventory.py:372 +msgid "Flag indicating whether the last job failed for this host." +msgstr "" + +#: main/models/inventory.py:377 +msgid "" +"Flag indicating whether this host was created/updated from any external " +"inventory sources." +msgstr "" + +#: main/models/inventory.py:383 +msgid "Inventory source(s) that created or modified this host." +msgstr "" + +#: main/models/inventory.py:474 +msgid "Group variables in JSON or YAML format." +msgstr "" + +#: main/models/inventory.py:480 +msgid "Hosts associated directly with this group." +msgstr "" + +#: main/models/inventory.py:485 +msgid "Total number of hosts directly or indirectly in this group." +msgstr "" + +#: main/models/inventory.py:490 +msgid "Flag indicating whether this group has any hosts with active failures." +msgstr "" + +#: main/models/inventory.py:495 +msgid "Number of hosts in this group with active failures." +msgstr "" + +#: main/models/inventory.py:500 +msgid "Total number of child groups contained within this group." +msgstr "" + +#: main/models/inventory.py:505 +msgid "Number of child groups within this group that have active failures." 
+msgstr "" + +#: main/models/inventory.py:510 +msgid "" +"Flag indicating whether this group was created/updated from any external " +"inventory sources." +msgstr "" + +#: main/models/inventory.py:516 +msgid "Inventory source(s) that created or modified this group." +msgstr "" + +#: main/models/inventory.py:706 main/models/projects.py:42 +#: main/models/unified_jobs.py:386 +msgid "Manual" +msgstr "" + +#: main/models/inventory.py:707 +msgid "Local File, Directory or Script" +msgstr "" + +#: main/models/inventory.py:708 +msgid "Rackspace Cloud Servers" +msgstr "" + +#: main/models/inventory.py:709 +msgid "Amazon EC2" +msgstr "" + +#: main/models/inventory.py:717 +msgid "Custom Script" +msgstr "" + +#: main/models/inventory.py:828 +msgid "Inventory source variables in YAML or JSON format." +msgstr "" + +#: main/models/inventory.py:847 +msgid "" +"Comma-separated list of filter expressions (EC2 only). Hosts are imported " +"when ANY of the filters match." +msgstr "" + +#: main/models/inventory.py:853 +msgid "Limit groups automatically created from inventory source (EC2 only)." +msgstr "" + +#: main/models/inventory.py:857 +msgid "Overwrite local groups and hosts from remote inventory source." +msgstr "" + +#: main/models/inventory.py:861 +msgid "Overwrite local variables from remote inventory source." 
+msgstr "" + +#: main/models/inventory.py:893 +msgid "Availability Zone" +msgstr "" + +#: main/models/inventory.py:894 +msgid "Image ID" +msgstr "" + +#: main/models/inventory.py:895 +msgid "Instance ID" +msgstr "" + +#: main/models/inventory.py:896 +msgid "Instance Type" +msgstr "" + +#: main/models/inventory.py:897 +msgid "Key Name" +msgstr "" + +#: main/models/inventory.py:898 +msgid "Region" +msgstr "" + +#: main/models/inventory.py:899 +msgid "Security Group" +msgstr "" + +#: main/models/inventory.py:900 +msgid "Tags" +msgstr "" + +#: main/models/inventory.py:901 +msgid "VPC ID" +msgstr "" + +#: main/models/inventory.py:902 +msgid "Tag None" +msgstr "" + +#: main/models/inventory.py:973 +#, python-format +msgid "" +"Cloud-based inventory sources (such as %s) require credentials for the " +"matching cloud service." +msgstr "" + +#: main/models/inventory.py:980 +msgid "Credential is required for a cloud source." +msgstr "" + +#: main/models/inventory.py:1005 +#, python-format +msgid "Invalid %(source)s region%(plural)s: %(region)s" +msgstr "" + +#: main/models/inventory.py:1031 +#, python-format +msgid "Invalid filter expression%(plural)s: %(filter)s" +msgstr "" + +#: main/models/inventory.py:1050 +#, python-format +msgid "Invalid group by choice%(plural)s: %(choice)s" +msgstr "" + +#: main/models/inventory.py:1198 +#, python-format +msgid "" +"Unable to configure this item for cloud sync. It is already managed by %s." +msgstr "" + +#: main/models/inventory.py:1293 +msgid "Inventory script contents" +msgstr "" + +#: main/models/inventory.py:1298 +msgid "Organization owning this inventory script" +msgstr "" + +#: main/models/jobs.py:171 +msgid "You must provide a network credential." +msgstr "" + +#: main/models/jobs.py:179 +msgid "" +"Must provide a credential for a cloud provider, such as Amazon Web Services " +"or Rackspace." +msgstr "" + +#: main/models/jobs.py:271 +msgid "Job Template must provide 'inventory' or allow prompting for it." 
+msgstr "" + +#: main/models/jobs.py:275 +msgid "Job Template must provide 'credential' or allow prompting for it." +msgstr "" + +#: main/models/jobs.py:364 +msgid "Cannot override job_type to or from a scan job." +msgstr "" + +#: main/models/jobs.py:367 +msgid "Inventory cannot be changed at runtime for scan jobs." +msgstr "" + +#: main/models/jobs.py:433 main/models/projects.py:243 +msgid "SCM Revision" +msgstr "" + +#: main/models/jobs.py:434 +msgid "The SCM Revision from the Project used for this job, if available" +msgstr "" + +#: main/models/jobs.py:442 +msgid "" +"The SCM Refresh task used to make sure the playbooks were available for the " +"job run" +msgstr "" + +#: main/models/jobs.py:666 +msgid "job host summaries" +msgstr "" + +#: main/models/jobs.py:769 +msgid "Host Failure" +msgstr "" + +#: main/models/jobs.py:772 main/models/jobs.py:786 +msgid "No Hosts Remaining" +msgstr "" + +#: main/models/jobs.py:773 +msgid "Host Polling" +msgstr "" + +#: main/models/jobs.py:774 +msgid "Host Async OK" +msgstr "" + +#: main/models/jobs.py:775 +msgid "Host Async Failure" +msgstr "" + +#: main/models/jobs.py:776 +msgid "Item OK" +msgstr "" + +#: main/models/jobs.py:777 +msgid "Item Failed" +msgstr "" + +#: main/models/jobs.py:778 +msgid "Item Skipped" +msgstr "" + +#: main/models/jobs.py:779 +msgid "Host Retry" +msgstr "" + +#: main/models/jobs.py:781 +msgid "File Difference" +msgstr "" + +#: main/models/jobs.py:782 +msgid "Playbook Started" +msgstr "" + +#: main/models/jobs.py:783 +msgid "Running Handlers" +msgstr "" + +#: main/models/jobs.py:784 +msgid "Including File" +msgstr "" + +#: main/models/jobs.py:785 +msgid "No Hosts Matched" +msgstr "" + +#: main/models/jobs.py:787 +msgid "Task Started" +msgstr "" + +#: main/models/jobs.py:789 +msgid "Variables Prompted" +msgstr "" + +#: main/models/jobs.py:790 +msgid "Gathering Facts" +msgstr "" + +#: main/models/jobs.py:791 +msgid "internal: on Import for Host" +msgstr "" + +#: main/models/jobs.py:792 +msgid "internal: 
on Not Import for Host" +msgstr "" + +#: main/models/jobs.py:793 +msgid "Play Started" +msgstr "" + +#: main/models/jobs.py:794 +msgid "Playbook Complete" +msgstr "" + +#: main/models/jobs.py:1240 +msgid "Remove jobs older than a certain number of days" +msgstr "" + +#: main/models/jobs.py:1241 +msgid "Remove activity stream entries older than a certain number of days" +msgstr "" + +#: main/models/jobs.py:1242 +msgid "Purge and/or reduce the granularity of system tracking data" +msgstr "" + +#: main/models/label.py:29 +msgid "Organization this label belongs to." +msgstr "" + +#: main/models/notifications.py:31 +msgid "Email" +msgstr "" + +#: main/models/notifications.py:32 +msgid "Slack" +msgstr "" + +#: main/models/notifications.py:33 +msgid "Twilio" +msgstr "" + +#: main/models/notifications.py:34 +msgid "Pagerduty" +msgstr "" + +#: main/models/notifications.py:35 +msgid "HipChat" +msgstr "" + +#: main/models/notifications.py:36 +msgid "Webhook" +msgstr "" + +#: main/models/notifications.py:37 +msgid "IRC" +msgstr "" + +#: main/models/notifications.py:127 main/models/unified_jobs.py:57 +msgid "Pending" +msgstr "" + +#: main/models/notifications.py:128 main/models/unified_jobs.py:60 +msgid "Successful" +msgstr "" + +#: main/models/notifications.py:129 main/models/unified_jobs.py:61 +msgid "Failed" +msgstr "" + +#: main/models/organization.py:157 +msgid "Execute Commands on the Inventory" +msgstr "" + +#: main/models/organization.py:211 +msgid "Token not invalidated" +msgstr "" + +#: main/models/organization.py:212 +msgid "Token is expired" +msgstr "" + +#: main/models/organization.py:213 +msgid "Maximum per-user sessions reached" +msgstr "" + +#: main/models/organization.py:216 +msgid "Invalid token" +msgstr "" + +#: main/models/organization.py:233 +msgid "Reason the auth token was invalidated." 
+msgstr "" + +#: main/models/organization.py:272 +msgid "Invalid reason specified" +msgstr "" + +#: main/models/projects.py:43 +msgid "Git" +msgstr "" + +#: main/models/projects.py:44 +msgid "Mercurial" +msgstr "" + +#: main/models/projects.py:45 +msgid "Subversion" +msgstr "" + +#: main/models/projects.py:71 +msgid "" +"Local path (relative to PROJECTS_ROOT) containing playbooks and related " +"files for this project." +msgstr "" + +#: main/models/projects.py:80 +msgid "SCM Type" +msgstr "" + +#: main/models/projects.py:81 +msgid "Specifies the source control system used to store the project." +msgstr "" + +#: main/models/projects.py:87 +msgid "SCM URL" +msgstr "" + +#: main/models/projects.py:88 +msgid "The location where the project is stored." +msgstr "" + +#: main/models/projects.py:94 +msgid "SCM Branch" +msgstr "" + +#: main/models/projects.py:95 +msgid "Specific branch, tag or commit to check out." +msgstr "" + +#: main/models/projects.py:99 +msgid "Discard any local changes before syncing the project." +msgstr "" + +#: main/models/projects.py:103 +msgid "Delete the project before syncing." +msgstr "" + +#: main/models/projects.py:116 +msgid "The amount of time to run before the task is canceled." +msgstr "" + +#: main/models/projects.py:130 +msgid "Invalid SCM URL." +msgstr "" + +#: main/models/projects.py:133 +msgid "SCM URL is required." +msgstr "" + +#: main/models/projects.py:142 +msgid "Credential kind must be 'scm'." +msgstr "" + +#: main/models/projects.py:157 +msgid "Invalid credential." +msgstr "" + +#: main/models/projects.py:229 +msgid "Update the project when a job is launched that uses the project." +msgstr "" + +#: main/models/projects.py:234 +msgid "" +"The number of seconds after the last project update ran that a new project " +"update will be launched as a job dependency." 
+msgstr "" + +#: main/models/projects.py:244 +msgid "The last revision fetched by a project update" +msgstr "" + +#: main/models/projects.py:251 +msgid "Playbook Files" +msgstr "" + +#: main/models/projects.py:252 +msgid "List of playbooks found in the project" +msgstr "" + +#: main/models/rbac.py:122 +msgid "roles" +msgstr "" + +#: main/models/rbac.py:438 +msgid "role_ancestors" +msgstr "" + +#: main/models/schedules.py:69 +msgid "Enables processing of this schedule by Tower." +msgstr "" + +#: main/models/schedules.py:75 +msgid "The first occurrence of the schedule occurs on or after this time." +msgstr "" + +#: main/models/schedules.py:81 +msgid "" +"The last occurrence of the schedule occurs before this time, afterwards the " +"schedule expires." +msgstr "" + +#: main/models/schedules.py:85 +msgid "A value representing the schedule's iCal recurrence rule." +msgstr "" + +#: main/models/schedules.py:91 +msgid "The next time that the scheduled action will run." +msgstr "" + +#: main/models/unified_jobs.py:56 +msgid "New" +msgstr "" + +#: main/models/unified_jobs.py:58 +msgid "Waiting" +msgstr "" + +#: main/models/unified_jobs.py:59 +msgid "Running" +msgstr "" + +#: main/models/unified_jobs.py:63 +msgid "Canceled" +msgstr "" + +#: main/models/unified_jobs.py:67 +msgid "Never Updated" +msgstr "" + +#: main/models/unified_jobs.py:71 ui/templates/ui/index.html:85 +#: ui/templates/ui/index.html.py:104 +msgid "OK" +msgstr "" + +#: main/models/unified_jobs.py:72 +msgid "Missing" +msgstr "" + +#: main/models/unified_jobs.py:76 +msgid "No External Source" +msgstr "" + +#: main/models/unified_jobs.py:83 +msgid "Updating" +msgstr "" + +#: main/models/unified_jobs.py:387 +msgid "Relaunch" +msgstr "" + +#: main/models/unified_jobs.py:388 +msgid "Callback" +msgstr "" + +#: main/models/unified_jobs.py:389 +msgid "Scheduled" +msgstr "" + +#: main/models/unified_jobs.py:390 +msgid "Dependency" +msgstr "" + +#: main/models/unified_jobs.py:391 +msgid "Workflow" +msgstr "" + +#: 
main/models/unified_jobs.py:437 +msgid "The Tower node the job executed on." +msgstr "" + +#: main/models/unified_jobs.py:463 +msgid "The date and time the job was queued for starting." +msgstr "" + +#: main/models/unified_jobs.py:469 +msgid "The date and time the job finished execution." +msgstr "" + +#: main/models/unified_jobs.py:475 +msgid "Elapsed time in seconds that the job ran." +msgstr "" + +#: main/models/unified_jobs.py:497 +msgid "" +"A status field to indicate the state of the job if it wasn't able to run and " +"capture stdout" +msgstr "" + +#: main/notifications/base.py:17 main/notifications/email_backend.py:28 +msgid "" +"{} #{} had status {} on Ansible Tower, view details at {}\n" +"\n" +msgstr "" + +#: main/notifications/hipchat_backend.py:46 +msgid "Error sending messages: {}" +msgstr "" + +#: main/notifications/hipchat_backend.py:48 +msgid "Error sending message to hipchat: {}" +msgstr "" + +#: main/notifications/irc_backend.py:54 +msgid "Exception connecting to irc server: {}" +msgstr "" + +#: main/notifications/pagerduty_backend.py:39 +msgid "Exception connecting to PagerDuty: {}" +msgstr "" + +#: main/notifications/pagerduty_backend.py:48 +#: main/notifications/slack_backend.py:52 +#: main/notifications/twilio_backend.py:46 +msgid "Exception sending messages: {}" +msgstr "" + +#: main/notifications/twilio_backend.py:36 +msgid "Exception connecting to Twilio: {}" +msgstr "" + +#: main/notifications/webhook_backend.py:38 +#: main/notifications/webhook_backend.py:40 +msgid "Error sending notification webhook: {}" +msgstr "" + +#: main/tasks.py:139 +msgid "Ansible Tower host usage over 90%" +msgstr "" + +#: main/tasks.py:144 +msgid "Ansible Tower license will expire soon" +msgstr "" + +#: main/tasks.py:197 +msgid "status_str must be either succeeded or failed" +msgstr "" + +#: main/utils/common.py:88 +#, python-format +msgid "Unable to convert \"%s\" to boolean" +msgstr "" + +#: main/utils/common.py:242 +#, python-format +msgid "Unsupported SCM 
type \"%s\"" +msgstr "" + +#: main/utils/common.py:249 main/utils/common.py:261 main/utils/common.py:280 +#, python-format +msgid "Invalid %s URL" +msgstr "" + +#: main/utils/common.py:251 main/utils/common.py:289 +#, python-format +msgid "Unsupported %s URL" +msgstr "" + +#: main/utils/common.py:291 +#, python-format +msgid "Unsupported host \"%s\" for file:// URL" +msgstr "" + +#: main/utils/common.py:293 +#, python-format +msgid "Host is required for %s URL" +msgstr "" + +#: main/utils/common.py:311 +#, python-format +msgid "Username must be \"git\" for SSH access to %s." +msgstr "" + +#: main/utils/common.py:317 +#, python-format +msgid "Username must be \"hg\" for SSH access to %s." +msgstr "" + +#: main/validators.py:60 +#, python-format +msgid "Invalid certificate or key: %r..." +msgstr "" + +#: main/validators.py:74 +#, python-format +msgid "Invalid private key: unsupported type \"%s\"" +msgstr "" + +#: main/validators.py:78 +#, python-format +msgid "Unsupported PEM object type: \"%s\"" +msgstr "" + +#: main/validators.py:103 +msgid "Invalid base64-encoded data" +msgstr "" + +#: main/validators.py:122 +msgid "Exactly one private key is required." +msgstr "" + +#: main/validators.py:124 +msgid "At least one private key is required." +msgstr "" + +#: main/validators.py:126 +#, python-format +msgid "" +"At least %(min_keys)d private keys are required, only %(key_count)d provided." +msgstr "" + +#: main/validators.py:129 +#, python-format +msgid "Only one private key is allowed, %(key_count)d provided." +msgstr "" + +#: main/validators.py:131 +#, python-format +msgid "" +"No more than %(max_keys)d private keys are allowed, %(key_count)d provided." +msgstr "" + +#: main/validators.py:136 +msgid "Exactly one certificate is required." +msgstr "" + +#: main/validators.py:138 +msgid "At least one certificate is required." 
+msgstr "" + +#: main/validators.py:140 +#, python-format +msgid "" +"At least %(min_certs)d certificates are required, only %(cert_count)d " +"provided." +msgstr "" + +#: main/validators.py:143 +#, python-format +msgid "Only one certificate is allowed, %(cert_count)d provided." +msgstr "" + +#: main/validators.py:145 +#, python-format +msgid "" +"No more than %(max_certs)d certificates are allowed, %(cert_count)d provided." +msgstr "" + +#: main/views.py:20 +msgid "API Error" +msgstr "" + +#: main/views.py:49 +msgid "Bad Request" +msgstr "" + +#: main/views.py:50 +msgid "The request could not be understood by the server." +msgstr "" + +#: main/views.py:57 +msgid "Forbidden" +msgstr "" + +#: main/views.py:58 +msgid "You don't have permission to access the requested resource." +msgstr "" + +#: main/views.py:65 +msgid "Not Found" +msgstr "" + +#: main/views.py:66 +msgid "The requested resource could not be found." +msgstr "" + +#: main/views.py:73 +msgid "Server Error" +msgstr "" + +#: main/views.py:74 +msgid "A server error has occurred." +msgstr "" + +#: settings/defaults.py:593 +msgid "Chicago" +msgstr "" + +#: settings/defaults.py:594 +msgid "Dallas/Ft. 
Worth" +msgstr "" + +#: settings/defaults.py:595 +msgid "Northern Virginia" +msgstr "" + +#: settings/defaults.py:596 +msgid "London" +msgstr "" + +#: settings/defaults.py:597 +msgid "Sydney" +msgstr "" + +#: settings/defaults.py:598 +msgid "Hong Kong" +msgstr "" + +#: settings/defaults.py:625 +msgid "US East (Northern Virginia)" +msgstr "" + +#: settings/defaults.py:626 +msgid "US East (Ohio)" +msgstr "" + +#: settings/defaults.py:627 +msgid "US West (Oregon)" +msgstr "" + +#: settings/defaults.py:628 +msgid "US West (Northern California)" +msgstr "" + +#: settings/defaults.py:629 +msgid "EU (Frankfurt)" +msgstr "" + +#: settings/defaults.py:630 +msgid "EU (Ireland)" +msgstr "" + +#: settings/defaults.py:631 +msgid "Asia Pacific (Singapore)" +msgstr "" + +#: settings/defaults.py:632 +msgid "Asia Pacific (Sydney)" +msgstr "" + +#: settings/defaults.py:633 +msgid "Asia Pacific (Tokyo)" +msgstr "" + +#: settings/defaults.py:634 +msgid "Asia Pacific (Seoul)" +msgstr "" + +#: settings/defaults.py:635 +msgid "Asia Pacific (Mumbai)" +msgstr "" + +#: settings/defaults.py:636 +msgid "South America (Sao Paulo)" +msgstr "" + +#: settings/defaults.py:637 +msgid "US West (GovCloud)" +msgstr "" + +#: settings/defaults.py:638 +msgid "China (Beijing)" +msgstr "" + +#: settings/defaults.py:687 +msgid "US East (B)" +msgstr "" + +#: settings/defaults.py:688 +msgid "US East (C)" +msgstr "" + +#: settings/defaults.py:689 +msgid "US East (D)" +msgstr "" + +#: settings/defaults.py:690 +msgid "US Central (A)" +msgstr "" + +#: settings/defaults.py:691 +msgid "US Central (B)" +msgstr "" + +#: settings/defaults.py:692 +msgid "US Central (C)" +msgstr "" + +#: settings/defaults.py:693 +msgid "US Central (F)" +msgstr "" + +#: settings/defaults.py:694 +msgid "Europe West (B)" +msgstr "" + +#: settings/defaults.py:695 +msgid "Europe West (C)" +msgstr "" + +#: settings/defaults.py:696 +msgid "Europe West (D)" +msgstr "" + +#: settings/defaults.py:697 +msgid "Asia East (A)" +msgstr "" + +#: 
settings/defaults.py:698 +msgid "Asia East (B)" +msgstr "" + +#: settings/defaults.py:699 +msgid "Asia East (C)" +msgstr "" + +#: settings/defaults.py:723 +msgid "US Central" +msgstr "" + +#: settings/defaults.py:724 +msgid "US East" +msgstr "" + +#: settings/defaults.py:725 +msgid "US East 2" +msgstr "" + +#: settings/defaults.py:726 +msgid "US North Central" +msgstr "" + +#: settings/defaults.py:727 +msgid "US South Central" +msgstr "" + +#: settings/defaults.py:728 +msgid "US West" +msgstr "" + +#: settings/defaults.py:729 +msgid "Europe North" +msgstr "" + +#: settings/defaults.py:730 +msgid "Europe West" +msgstr "" + +#: settings/defaults.py:731 +msgid "Asia Pacific East" +msgstr "" + +#: settings/defaults.py:732 +msgid "Asia Pacific Southeast" +msgstr "" + +#: settings/defaults.py:733 +msgid "Japan East" +msgstr "" + +#: settings/defaults.py:734 +msgid "Japan West" +msgstr "" + +#: settings/defaults.py:735 +msgid "Brazil South" +msgstr "" + +#: sso/apps.py:9 +msgid "Single Sign-On" +msgstr "" + +#: sso/conf.py:27 +msgid "" +"Mapping to organization admins/users from social auth accounts. This " +"setting\n" +"controls which users are placed into which Tower organizations based on\n" +"their username and email address. Dictionary keys are organization names.\n" +"Organizations will be created if not present, if the license allows for\n" +"multiple organizations; otherwise the single default organization is used\n" +"regardless of the key. Values are dictionaries defining the options for\n" +"each organization's membership. For each organization it is possible to\n" +"specify which users are automatically users of the organization and also\n" +"which users can administer the organization. 
\n" +"\n" +"- admins: None, True/False, string or list of strings.\n" +" If None, organization admins will not be updated.\n" +" If True, all users using social auth will automatically be added as " +"admins\n" +" of the organization.\n" +" If False, no social auth users will be automatically added as admins of\n" +" the organization.\n" +" If a string or list of strings, specifies the usernames and emails for\n" +" users who will be added to the organization. Strings in the format\n" +" \"//\" will be interpreted as JavaScript regular " +"expressions and\n" +" may also be used instead of string literals; only \"i\" and \"m\" are " +"supported\n" +" for flags.\n" +"- remove_admins: True/False. Defaults to True.\n" +" If True, a user who does not match will be removed from the " +"organization's\n" +" administrative list.\n" +"- users: None, True/False, string or list of strings. Same rules apply as " +"for\n" +" admins.\n" +"- remove_users: True/False. Defaults to True. Same rules apply as for\n" +" remove_admins." +msgstr "" + +#: sso/conf.py:76 +msgid "" +"Mapping of team members (users) from social auth accounts. Keys are team\n" +"names (will be created if not present). Values are dictionaries of options\n" +"for each team's membership, where each can contain the following " +"parameters:\n" +"\n" +"- organization: string. The name of the organization to which the team\n" +" belongs. The team will be created if the combination of organization and\n" +" team name does not exist. The organization will first be created if it\n" +" does not exist. 
If the license does not allow for multiple " +"organizations,\n" +" the team will always be assigned to the single default organization.\n" +"- users: None, True/False, string or list of strings.\n" +" If None, team members will not be updated.\n" +" If True/False, all social auth users will be added/removed as team\n" +" members.\n" +" If a string or list of strings, specifies expressions used to match " +"users.\n" +" User will be added as a team member if the username or email matches.\n" +" Strings in the format \"//\" will be interpreted as " +"JavaScript\n" +" regular expressions and may also be used instead of string literals; only " +"\"i\"\n" +" and \"m\" are supported for flags.\n" +"- remove: True/False. Defaults to True. If True, a user who does not match\n" +" the rules above will be removed from the team." +msgstr "" + +#: sso/conf.py:119 +msgid "Authentication Backends" +msgstr "" + +#: sso/conf.py:120 +msgid "" +"List of authentication backends that are enabled based on license features " +"and other authentication settings." +msgstr "" + +#: sso/conf.py:133 +msgid "Social Auth Organization Map" +msgstr "" + +#: sso/conf.py:145 +msgid "Social Auth Team Map" +msgstr "" + +#: sso/conf.py:157 +msgid "Social Auth User Fields" +msgstr "" + +#: sso/conf.py:158 +msgid "" +"When set to an empty list `[]`, this setting prevents new user accounts from " +"being created. Only users who have previously logged in using social auth or " +"have a user account with a matching email address will be able to login." +msgstr "" + +#: sso/conf.py:176 +msgid "LDAP Server URI" +msgstr "" + +#: sso/conf.py:177 +msgid "" +"URI to connect to LDAP server, such as \"ldap://ldap.example.com:389\" (non-" +"SSL) or \"ldaps://ldap.example.com:636\" (SSL). Multiple LDAP servers may be " +"specified by separating with spaces or commas. LDAP authentication is " +"disabled if this parameter is empty." 
+msgstr "" + +#: sso/conf.py:181 sso/conf.py:199 sso/conf.py:211 sso/conf.py:223 +#: sso/conf.py:239 sso/conf.py:258 sso/conf.py:279 sso/conf.py:295 +#: sso/conf.py:314 sso/conf.py:331 sso/conf.py:347 sso/conf.py:362 +#: sso/conf.py:379 sso/conf.py:417 sso/conf.py:458 +msgid "LDAP" +msgstr "" + +#: sso/conf.py:193 +msgid "LDAP Bind DN" +msgstr "" + +#: sso/conf.py:194 +msgid "" +"DN (Distinguished Name) of user to bind for all search queries. Normally in " +"the format \"CN=Some User,OU=Users,DC=example,DC=com\" but may also be " +"specified as \"DOMAIN\\username\" for Active Directory. This is the system " +"user account we will use to login to query LDAP for other user information." +msgstr "" + +#: sso/conf.py:209 +msgid "LDAP Bind Password" +msgstr "" + +#: sso/conf.py:210 +msgid "Password used to bind LDAP user account." +msgstr "" + +#: sso/conf.py:221 +msgid "LDAP Start TLS" +msgstr "" + +#: sso/conf.py:222 +msgid "Whether to enable TLS when the LDAP connection is not using SSL." +msgstr "" + +#: sso/conf.py:232 +msgid "LDAP Connection Options" +msgstr "" + +#: sso/conf.py:233 +msgid "" +"Additional options to set for the LDAP connection. LDAP referrals are " +"disabled by default (to prevent certain LDAP queries from hanging with AD). " +"Option names should be strings (e.g. \"OPT_REFERRALS\"). Refer to https://" +"www.python-ldap.org/doc/html/ldap.html#options for possible options and " +"values that can be set." +msgstr "" + +#: sso/conf.py:251 +msgid "LDAP User Search" +msgstr "" + +#: sso/conf.py:252 +msgid "" +"LDAP search query to find users. Any user that matches the given pattern " +"will be able to login to Tower. The user should also be mapped into a " +"Tower organization (as defined in the AUTH_LDAP_ORGANIZATION_MAP setting). " +"If multiple search queries need to be supported, use of \"LDAPSearchUnion\" " +"is possible. See python-ldap documentation as linked at the top of this section.
+msgstr "" + +#: sso/conf.py:273 +msgid "LDAP User DN Template" +msgstr "" + +#: sso/conf.py:274 +msgid "" +"Alternative to user search, if user DNs are all of the same format. This " +"approach will be more efficient for user lookups than searching if it is " +"usable in your organizational environment. If this setting has a value it " +"will be used instead of AUTH_LDAP_USER_SEARCH." +msgstr "" + +#: sso/conf.py:289 +msgid "LDAP User Attribute Map" +msgstr "" + +#: sso/conf.py:290 +msgid "" +"Mapping of LDAP user schema to Tower API user attributes (key is user " +"attribute name, value is LDAP attribute name). The default setting is valid " +"for ActiveDirectory but users with other LDAP configurations may need to " +"change the values (not the keys) of the dictionary/hash-table." +msgstr "" + +#: sso/conf.py:309 +msgid "LDAP Group Search" +msgstr "" + +#: sso/conf.py:310 +msgid "" +"Users in Tower are mapped to organizations based on their membership in LDAP " +"groups. This setting defines the LDAP search query to find groups. Note that " +"this, unlike the user search above, does not support LDAPSearchUnion." +msgstr "" + +#: sso/conf.py:327 +msgid "LDAP Group Type" +msgstr "" + +#: sso/conf.py:328 +msgid "" +"The group type may need to be changed based on the type of the LDAP server. " +"Values are listed at: http://pythonhosted.org/django-auth-ldap/groups." +"html#types-of-groups" +msgstr "" + +#: sso/conf.py:342 +msgid "LDAP Require Group" +msgstr "" + +#: sso/conf.py:343 +msgid "" +"Group DN required to login. If specified, user must be a member of this " +"group to login via LDAP. If not set, everyone in LDAP that matches the user " +"search will be able to login via Tower. Only one require group is supported." +msgstr "" + +#: sso/conf.py:358 +msgid "LDAP Deny Group" +msgstr "" + +#: sso/conf.py:359 +msgid "" +"Group DN denied from login. If specified, user will not be allowed to login " +"if a member of this group. Only one deny group is supported." 
+msgstr "" + +#: sso/conf.py:372 +msgid "LDAP User Flags By Group" +msgstr "" + +#: sso/conf.py:373 +msgid "" +"User profile flags updated from group membership (key is user attribute " +"name, value is group DN). These are boolean fields that are matched based " +"on whether the user is a member of the given group. So far only " +"is_superuser is settable via this method. This flag is set both true and " +"false at login time based on current LDAP settings." +msgstr "" + +#: sso/conf.py:391 +msgid "LDAP Organization Map" +msgstr "" + +#: sso/conf.py:392 +msgid "" +"Mapping between organization admins/users and LDAP groups. This controls " +"what users are placed into what Tower organizations relative to their LDAP " +"group memberships. Keys are organization names. Organizations will be " +"created if not present. Values are dictionaries defining the options for " +"each organization's membership. For each organization it is possible to " +"specify what groups are automatically users of the organization and also " +"what groups can administer the organization.\n" +"\n" +" - admins: None, True/False, string or list of strings.\n" +" If None, organization admins will not be updated based on LDAP values.\n" +" If True, all users in LDAP will automatically be added as admins of the " +"organization.\n" +" If False, no LDAP users will be automatically added as admins of the " +"organization.\n" +" If a string or list of strings, specifies the group DN(s) that will be " +"added as admins of the organization if they match any of the specified groups.\n" +" - remove_admins: True/False. Defaults to True.\n" +" If True, a user who is not a member of the given groups will be removed " +"from the organization's administrative list.\n" +" - users: None, True/False, string or list of strings. Same rules apply as " +"for admins.\n" +" - remove_users: True/False. Defaults to True. Same rules apply as for " +"remove_admins."
+msgstr "" + +#: sso/conf.py:440 +msgid "LDAP Team Map" +msgstr "" + +#: sso/conf.py:441 +msgid "" +"Mapping between team members (users) and LDAP groups. Keys are team names " +"(will be created if not present). Values are dictionaries of options for " +"each team's membership, where each can contain the following parameters:\n" +"\n" +" - organization: string. The name of the organization to which the team " +"belongs. The team will be created if the combination of organization and " +"team name does not exist. The organization will first be created if it does " +"not exist.\n" +" - users: None, True/False, string or list of strings.\n" +" If None, team members will not be updated.\n" +" If True/False, all LDAP users will be added/removed as team members.\n" +" If a string or list of strings, specifies the group DN(s). User will be " +"added as a team member if the user is a member of ANY of these groups.\n" +"- remove: True/False. Defaults to True. If True, a user who is not a member " +"of the given groups will be removed from the team." +msgstr "" + +#: sso/conf.py:484 +msgid "RADIUS Server" +msgstr "" + +#: sso/conf.py:485 +msgid "" +"Hostname/IP of RADIUS server. RADIUS authentication will be disabled if this " +"setting is empty." +msgstr "" + +#: sso/conf.py:487 sso/conf.py:501 sso/conf.py:513 +msgid "RADIUS" +msgstr "" + +#: sso/conf.py:499 +msgid "RADIUS Port" +msgstr "" + +#: sso/conf.py:500 +msgid "Port of RADIUS server." +msgstr "" + +#: sso/conf.py:511 +msgid "RADIUS Secret" +msgstr "" + +#: sso/conf.py:512 +msgid "Shared secret for authenticating to RADIUS server." +msgstr "" + +#: sso/conf.py:528 +msgid "Google OAuth2 Callback URL" +msgstr "" + +#: sso/conf.py:529 +msgid "" +"Create a project at https://console.developers.google.com/ to obtain an " +"OAuth2 key and secret for a web application. Ensure that the Google+ API is " +"enabled. Provide this URL as the callback URL for your application." 
+msgstr "" + +#: sso/conf.py:533 sso/conf.py:544 sso/conf.py:555 sso/conf.py:568 +#: sso/conf.py:582 sso/conf.py:594 sso/conf.py:606 +msgid "Google OAuth2" +msgstr "" + +#: sso/conf.py:542 +msgid "Google OAuth2 Key" +msgstr "" + +#: sso/conf.py:543 +msgid "" +"The OAuth2 key from your web application at https://console.developers." +"google.com/." +msgstr "" + +#: sso/conf.py:553 +msgid "Google OAuth2 Secret" +msgstr "" + +#: sso/conf.py:554 +msgid "" +"The OAuth2 secret from your web application at https://console.developers." +"google.com/." +msgstr "" + +#: sso/conf.py:565 +msgid "Google OAuth2 Whitelisted Domains" +msgstr "" + +#: sso/conf.py:566 +msgid "" +"Update this setting to restrict the domains that are allowed to login using " +"Google OAuth2." +msgstr "" + +#: sso/conf.py:577 +msgid "Google OAuth2 Extra Arguments" +msgstr "" + +#: sso/conf.py:578 +msgid "" +"Extra arguments for Google OAuth2 login. When only allowing a single domain " +"to authenticate, set to `{\"hd\": \"yourdomain.com\"}` and Google will not " +"display any other accounts even if the user is logged in with multiple " +"Google accounts." +msgstr "" + +#: sso/conf.py:592 +msgid "Google OAuth2 Organization Map" +msgstr "" + +#: sso/conf.py:604 +msgid "Google OAuth2 Team Map" +msgstr "" + +#: sso/conf.py:620 +msgid "GitHub OAuth2 Callback URL" +msgstr "" + +#: sso/conf.py:621 +msgid "" +"Create a developer application at https://github.com/settings/developers to " +"obtain an OAuth2 key (Client ID) and secret (Client Secret). Provide this " +"URL as the callback URL for your application." +msgstr "" + +#: sso/conf.py:625 sso/conf.py:636 sso/conf.py:646 sso/conf.py:658 +#: sso/conf.py:670 +msgid "GitHub OAuth2" +msgstr "" + +#: sso/conf.py:634 +msgid "GitHub OAuth2 Key" +msgstr "" + +#: sso/conf.py:635 +msgid "The OAuth2 key (Client ID) from your GitHub developer application." 
+msgstr "" + +#: sso/conf.py:644 +msgid "GitHub OAuth2 Secret" +msgstr "" + +#: sso/conf.py:645 +msgid "" +"The OAuth2 secret (Client Secret) from your GitHub developer application." +msgstr "" + +#: sso/conf.py:656 +msgid "GitHub OAuth2 Organization Map" +msgstr "" + +#: sso/conf.py:668 +msgid "GitHub OAuth2 Team Map" +msgstr "" + +#: sso/conf.py:684 +msgid "GitHub Organization OAuth2 Callback URL" +msgstr "" + +#: sso/conf.py:685 sso/conf.py:760 +msgid "" +"Create an organization-owned application at https://github.com/organizations/" +"/settings/applications and obtain an OAuth2 key (Client ID) and " +"secret (Client Secret). Provide this URL as the callback URL for your " +"application." +msgstr "" + +#: sso/conf.py:689 sso/conf.py:700 sso/conf.py:710 sso/conf.py:722 +#: sso/conf.py:733 sso/conf.py:745 +msgid "GitHub Organization OAuth2" +msgstr "" + +#: sso/conf.py:698 +msgid "GitHub Organization OAuth2 Key" +msgstr "" + +#: sso/conf.py:699 sso/conf.py:774 +msgid "The OAuth2 key (Client ID) from your GitHub organization application." +msgstr "" + +#: sso/conf.py:708 +msgid "GitHub Organization OAuth2 Secret" +msgstr "" + +#: sso/conf.py:709 sso/conf.py:784 +msgid "" +"The OAuth2 secret (Client Secret) from your GitHub organization application." +msgstr "" + +#: sso/conf.py:719 +msgid "GitHub Organization Name" +msgstr "" + +#: sso/conf.py:720 +msgid "" +"The name of your GitHub organization, as used in your organization's URL: " +"https://github.com//." 
+msgstr "" + +#: sso/conf.py:731 +msgid "GitHub Organization OAuth2 Organization Map" +msgstr "" + +#: sso/conf.py:743 +msgid "GitHub Organization OAuth2 Team Map" +msgstr "" + +#: sso/conf.py:759 +msgid "GitHub Team OAuth2 Callback URL" +msgstr "" + +#: sso/conf.py:764 sso/conf.py:775 sso/conf.py:785 sso/conf.py:797 +#: sso/conf.py:808 sso/conf.py:820 +msgid "GitHub Team OAuth2" +msgstr "" + +#: sso/conf.py:773 +msgid "GitHub Team OAuth2 Key" +msgstr "" + +#: sso/conf.py:783 +msgid "GitHub Team OAuth2 Secret" +msgstr "" + +#: sso/conf.py:794 +msgid "GitHub Team ID" +msgstr "" + +#: sso/conf.py:795 +msgid "" +"Find the numeric team ID using the Github API: http://fabian-kostadinov." +"github.io/2015/01/16/how-to-find-a-github-team-id/." +msgstr "" + +#: sso/conf.py:806 +msgid "GitHub Team OAuth2 Organization Map" +msgstr "" + +#: sso/conf.py:818 +msgid "GitHub Team OAuth2 Team Map" +msgstr "" + +#: sso/conf.py:834 +msgid "Azure AD OAuth2 Callback URL" +msgstr "" + +#: sso/conf.py:835 +msgid "" +"Register an Azure AD application as described by https://msdn.microsoft.com/" +"en-us/library/azure/dn132599.aspx and obtain an OAuth2 key (Client ID) and " +"secret (Client Secret). Provide this URL as the callback URL for your " +"application." +msgstr "" + +#: sso/conf.py:839 sso/conf.py:850 sso/conf.py:860 sso/conf.py:872 +#: sso/conf.py:884 +msgid "Azure AD OAuth2" +msgstr "" + +#: sso/conf.py:848 +msgid "Azure AD OAuth2 Key" +msgstr "" + +#: sso/conf.py:849 +msgid "The OAuth2 key (Client ID) from your Azure AD application." +msgstr "" + +#: sso/conf.py:858 +msgid "Azure AD OAuth2 Secret" +msgstr "" + +#: sso/conf.py:859 +msgid "The OAuth2 secret (Client Secret) from your Azure AD application." 
+msgstr "" + +#: sso/conf.py:870 +msgid "Azure AD OAuth2 Organization Map" +msgstr "" + +#: sso/conf.py:882 +msgid "Azure AD OAuth2 Team Map" +msgstr "" + +#: sso/conf.py:903 +msgid "SAML Service Provider Callback URL" +msgstr "" + +#: sso/conf.py:904 +msgid "" +"Register Tower as a service provider (SP) with each identity provider (IdP) " +"you have configured. Provide your SP Entity ID and this callback URL for " +"your application." +msgstr "" + +#: sso/conf.py:907 sso/conf.py:921 sso/conf.py:934 sso/conf.py:948 +#: sso/conf.py:962 sso/conf.py:980 sso/conf.py:1002 sso/conf.py:1021 +#: sso/conf.py:1041 sso/conf.py:1075 sso/conf.py:1088 +msgid "SAML" +msgstr "" + +#: sso/conf.py:918 +msgid "SAML Service Provider Metadata URL" +msgstr "" + +#: sso/conf.py:919 +msgid "" +"If your identity provider (IdP) allows uploading an XML metadata file, you " +"can download one from this URL." +msgstr "" + +#: sso/conf.py:931 +msgid "SAML Service Provider Entity ID" +msgstr "" + +#: sso/conf.py:932 +msgid "" +"The application-defined unique identifier used as the audience of the SAML " +"service provider (SP) configuration." +msgstr "" + +#: sso/conf.py:945 +msgid "SAML Service Provider Public Certificate" +msgstr "" + +#: sso/conf.py:946 +msgid "" +"Create a keypair for Tower to use as a service provider (SP) and include the " +"certificate content here." +msgstr "" + +#: sso/conf.py:959 +msgid "SAML Service Provider Private Key" +msgstr "" + +#: sso/conf.py:960 +msgid "" +"Create a keypair for Tower to use as a service provider (SP) and include the " +"private key content here." +msgstr "" + +#: sso/conf.py:978 +msgid "SAML Service Provider Organization Info" +msgstr "" + +#: sso/conf.py:979 +msgid "Configure this setting with information about your app." +msgstr "" + +#: sso/conf.py:1000 +msgid "SAML Service Provider Technical Contact" +msgstr "" + +#: sso/conf.py:1001 sso/conf.py:1020 +msgid "Configure this setting with your contact information." 
+msgstr "" + +#: sso/conf.py:1019 +msgid "SAML Service Provider Support Contact" +msgstr "" + +#: sso/conf.py:1034 +msgid "SAML Enabled Identity Providers" +msgstr "" + +#: sso/conf.py:1035 +msgid "" +"Configure the Entity ID, SSO URL and certificate for each identity provider " +"(IdP) in use. Multiple SAML IdPs are supported. Some IdPs may provide user " +"data using attribute names that differ from the default OIDs (https://github." +"com/omab/python-social-auth/blob/master/social/backends/saml.py#L16). " +"Attribute names may be overridden for each IdP." +msgstr "" + +#: sso/conf.py:1073 +msgid "SAML Organization Map" +msgstr "" + +#: sso/conf.py:1086 +msgid "SAML Team Map" +msgstr "" + +#: sso/fields.py:123 +msgid "Invalid connection option(s): {invalid_options}." +msgstr "" + +#: sso/fields.py:182 +msgid "Base" +msgstr "" + +#: sso/fields.py:183 +msgid "One Level" +msgstr "" + +#: sso/fields.py:184 +msgid "Subtree" +msgstr "" + +#: sso/fields.py:202 +msgid "Expected a list of three items but got {length} instead." +msgstr "" + +#: sso/fields.py:203 +msgid "Expected an instance of LDAPSearch but got {input_type} instead." +msgstr "" + +#: sso/fields.py:239 +msgid "" +"Expected an instance of LDAPSearch or LDAPSearchUnion but got {input_type} " +"instead." +msgstr "" + +#: sso/fields.py:266 +msgid "Invalid user attribute(s): {invalid_attrs}." +msgstr "" + +#: sso/fields.py:283 +msgid "Expected an instance of LDAPGroupType but got {input_type} instead." +msgstr "" + +#: sso/fields.py:308 +msgid "Invalid user flag: \"{invalid_flag}\"." +msgstr "" + +#: sso/fields.py:324 sso/fields.py:491 +msgid "" +"Expected None, True, False, a string or list of strings but got {input_type} " +"instead." +msgstr "" + +#: sso/fields.py:360 +msgid "Missing key(s): {missing_keys}." +msgstr "" + +#: sso/fields.py:361 +msgid "Invalid key(s): {invalid_keys}." +msgstr "" + +#: sso/fields.py:410 sso/fields.py:527 +msgid "Invalid key(s) for organization map: {invalid_keys}." 
+msgstr "" + +#: sso/fields.py:428 +msgid "Missing required key for team map: {invalid_keys}." +msgstr "" + +#: sso/fields.py:429 sso/fields.py:546 +msgid "Invalid key(s) for team map: {invalid_keys}." +msgstr "" + +#: sso/fields.py:545 +msgid "Missing required key for team map: {missing_keys}." +msgstr "" + +#: sso/fields.py:563 +msgid "Missing required key(s) for org info record: {missing_keys}." +msgstr "" + +#: sso/fields.py:576 +msgid "Invalid language code(s) for org info: {invalid_lang_codes}." +msgstr "" + +#: sso/fields.py:595 +msgid "Missing required key(s) for contact: {missing_keys}." +msgstr "" + +#: sso/fields.py:607 +msgid "Missing required key(s) for IdP: {missing_keys}." +msgstr "" + +#: sso/pipeline.py:24 +msgid "An account cannot be found for {0}" +msgstr "" + +#: sso/pipeline.py:30 +msgid "Your account is inactive" +msgstr "" + +#: sso/validators.py:19 sso/validators.py:44 +#, python-format +msgid "DN must include \"%%(user)s\" placeholder for username: %s" +msgstr "" + +#: sso/validators.py:26 +#, python-format +msgid "Invalid DN: %s" +msgstr "" + +#: sso/validators.py:56 +#, python-format +msgid "Invalid filter: %s" +msgstr "" + +#: templates/error.html:4 ui/templates/ui/index.html:8 +msgid "Ansible Tower" +msgstr "" + +#: templates/rest_framework/api.html:39 +msgid "Ansible Tower API Guide" +msgstr "" + +#: templates/rest_framework/api.html:40 +msgid "Back to Ansible Tower" +msgstr "" + +#: templates/rest_framework/api.html:41 +msgid "Resize" +msgstr "" + +#: templates/rest_framework/base.html:78 templates/rest_framework/base.html:92 +#, python-format +msgid "Make a GET request on the %(name)s resource" +msgstr "" + +#: templates/rest_framework/base.html:80 +msgid "Specify a format for the GET request" +msgstr "" + +#: templates/rest_framework/base.html:86 +#, python-format +msgid "" +"Make a GET request on the %(name)s resource with the format set to `" +"%(format)s`" +msgstr "" + +#: templates/rest_framework/base.html:100 +#, python-format 
+msgid "Make an OPTIONS request on the %(name)s resource" +msgstr "" + +#: templates/rest_framework/base.html:106 +#, python-format +msgid "Make a DELETE request on the %(name)s resource" +msgstr "" + +#: templates/rest_framework/base.html:113 +msgid "Filters" +msgstr "" + +#: templates/rest_framework/base.html:172 +#: templates/rest_framework/base.html:186 +#, python-format +msgid "Make a POST request on the %(name)s resource" +msgstr "" + +#: templates/rest_framework/base.html:216 +#: templates/rest_framework/base.html:230 +#, python-format +msgid "Make a PUT request on the %(name)s resource" +msgstr "" + +#: templates/rest_framework/base.html:233 +#, python-format +msgid "Make a PATCH request on the %(name)s resource" +msgstr "" + +#: ui/apps.py:9 ui/conf.py:22 ui/conf.py:38 ui/conf.py:53 +msgid "UI" +msgstr "" + +#: ui/conf.py:16 +msgid "Off" +msgstr "" + +#: ui/conf.py:17 +msgid "Anonymous" +msgstr "" + +#: ui/conf.py:18 +msgid "Detailed" +msgstr "" + +#: ui/conf.py:20 +msgid "Analytics Tracking State" +msgstr "" + +#: ui/conf.py:21 +msgid "Enable or Disable Analytics Tracking." +msgstr "" + +#: ui/conf.py:31 +msgid "Custom Login Info" +msgstr "" + +#: ui/conf.py:32 +msgid "" +"If needed, you can add specific information (such as a legal notice or a " +"disclaimer) to a text box in the login modal using this setting. Any content " +"added must be in plain text, as custom HTML or other markup languages are " +"not supported. If multiple paragraphs of text are needed, new lines " +"(paragraphs) must be escaped as `\\n` within the block of text." +msgstr "" + +#: ui/conf.py:48 +msgid "Custom Logo" +msgstr "" + +#: ui/conf.py:49 +msgid "" +"To set up a custom logo, provide a file that you create. For the custom logo " +"to look its best, use a `.png` file with a transparent background. GIF, PNG " +"and JPEG formats are supported." +msgstr "" + +#: ui/fields.py:29 +msgid "" +"Invalid format for custom logo. 
Must be a data URL with a base64-encoded " +"GIF, PNG or JPEG image." +msgstr "" + +#: ui/fields.py:30 +msgid "Invalid base64-encoded data in data URL." +msgstr "" + +#: ui/templates/ui/index.html:49 +msgid "" +"Your session will expire in 60 seconds. Would you like to continue?" +msgstr "" + +#: ui/templates/ui/index.html:64 +msgid "CANCEL" +msgstr "" + +#: ui/templates/ui/index.html:116 +msgid "Set how many days of data should be retained." +msgstr "" + +#: ui/templates/ui/index.html:122 +msgid "" +"Please enter an integer that is not " +"negative and is lower than 9999." +msgstr "" + +#: ui/templates/ui/index.html:127 +msgid "" +"For facts collected older than the time period specified, save one fact scan " +"(snapshot) per time window (frequency). For example, facts older than 30 " +"days are purged, while one weekly fact scan is kept.\n" +"
\n" +"
CAUTION: Setting both numerical variables to \"0\" " +"will delete all facts.\n" +"
\n" +"
" +msgstr "" + +#: ui/templates/ui/index.html:136 +msgid "Select a time period after which to remove old facts" +msgstr "" + +#: ui/templates/ui/index.html:150 +msgid "" +"Please enter an integer " +"that is not negative " +"and is lower than 9999." +msgstr "" + +#: ui/templates/ui/index.html:155 +msgid "Select a frequency for snapshot retention" +msgstr "" + +#: ui/templates/ui/index.html:169 +msgid "" +"Please enter an integer that is not negative and is " +"lower than 9999." +msgstr "" + +#: ui/templates/ui/index.html:175 +msgid "working..." +msgstr "" diff --git a/awx/locale/fr/LC_MESSAGES/django.po b/awx/locale/fr/LC_MESSAGES/django.po new file mode 100644 index 0000000000..720fe8b149 --- /dev/null +++ b/awx/locale/fr/LC_MESSAGES/django.po @@ -0,0 +1,4053 @@ +# Corina Roe , 2017. #zanata +# Sam Friedmann , 2017. #zanata +msgid "" +msgstr "" +"Project-Id-Version: PACKAGE VERSION\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2017-01-27 17:35+0000\n" +"PO-Revision-Date: 2017-01-18 10:49+0000\n" +"Last-Translator: Corina Roe \n" +"Language-Team: French\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Language: fr\n" +"Plural-Forms: nplurals=2; plural=(n > 1)\n" +"X-Generator: Zanata 3.9.6\n" + +#: api/authentication.py:67 +msgid "Invalid token header. No credentials provided." +msgstr "" +"En-tête de token non valide. Aucune information d'identification fournie." + +#: api/authentication.py:70 +msgid "Invalid token header. Token string should not contain spaces." +msgstr "" +"En-tête de token non valide. La chaîne token ne doit pas contenir d'espaces."
+ +#: api/authentication.py:105 +msgid "User inactive or deleted" +msgstr "Utilisateur inactif ou supprimé" + +#: api/authentication.py:161 +msgid "Invalid task token" +msgstr "Token de tâche non valide" + +#: api/conf.py:12 +msgid "Idle Time Force Log Out" +msgstr "Temps d'inactivité - Forcer la déconnexion" + +#: api/conf.py:13 +msgid "" +"Number of seconds that a user is inactive before they will need to login " +"again." +msgstr "" +"Délai en secondes pendant lequel un utilisateur peut rester inactif avant de" +" devoir se reconnecter." + +#: api/conf.py:14 api/conf.py:24 api/conf.py:33 sso/conf.py:124 +#: sso/conf.py:135 sso/conf.py:147 sso/conf.py:162 +msgid "Authentication" +msgstr "Authentification" + +#: api/conf.py:22 +msgid "Maximum number of simultaneous logins" +msgstr "Nombre maximal de connexions simultanées" + +#: api/conf.py:23 +msgid "" +"Maximum number of simultaneous logins a user may have. To disable enter -1." +msgstr "" +"Nombre maximal de connexions simultanées dont un utilisateur peut disposer. " +"Pour désactiver cette option, entrez -1." + +#: api/conf.py:31 +msgid "Enable HTTP Basic Auth" +msgstr "Activer l'authentification HTTP de base" + +#: api/conf.py:32 +msgid "Enable HTTP Basic Auth for the API Browser." +msgstr "Activer l'authentification HTTP de base pour le navigateur d'API." + +#: api/generics.py:462 +msgid "\"id\" is required to disassociate" +msgstr "\"id\" est nécessaire pour dissocier" + +#: api/metadata.py:50 +msgid "Database ID for this {}." +msgstr "ID de base de données pour ce {}." + +#: api/metadata.py:51 +msgid "Name of this {}." +msgstr "Nom de ce {}." + +#: api/metadata.py:52 +msgid "Optional description of this {}." +msgstr "Description facultative de ce {}." + +#: api/metadata.py:53 +msgid "Data type for this {}." +msgstr "Type de données pour ce {}." + +#: api/metadata.py:54 +msgid "URL for this {}." +msgstr "URL de ce {}." + +#: api/metadata.py:55 +msgid "Data structure with URLs of related resources." 
+msgstr "Structure de données avec URL des ressources associées." + +#: api/metadata.py:56 +msgid "Data structure with name/description for related resources." +msgstr "Structure de données avec nom/description des ressources associées." + +#: api/metadata.py:57 +msgid "Timestamp when this {} was created." +msgstr "Horodatage lors de la création de ce {}." + +#: api/metadata.py:58 +msgid "Timestamp when this {} was last modified." +msgstr "Horodatage lors de la dernière modification de ce {}." + +#: api/parsers.py:31 +#, python-format +msgid "JSON parse error - %s" +msgstr "Erreur d'analyse JSON - %s" + +#: api/serializers.py:248 +msgid "Playbook Run" +msgstr "Exécution du playbook" + +#: api/serializers.py:249 +msgid "Command" +msgstr "Commande" + +#: api/serializers.py:250 +msgid "SCM Update" +msgstr "Mise à jour SCM" + +#: api/serializers.py:251 +msgid "Inventory Sync" +msgstr "Synchronisation des inventaires" + +#: api/serializers.py:252 +msgid "Management Job" +msgstr "Tâche de gestion" + +#: api/serializers.py:253 +msgid "Workflow Job" +msgstr "Tâche de workflow" + +#: api/serializers.py:254 +msgid "Workflow Template" +msgstr "Modèle de workflow" + +#: api/serializers.py:656 api/serializers.py:714 api/views.py:3805 +#, python-format +msgid "" +"Standard Output too large to display (%(text_size)d bytes), only download " +"supported for sizes over %(supported_size)d bytes" +msgstr "" +"Sortie standard trop grande pour pouvoir s'afficher (%(text_size)d octets). " +"Le téléchargement est pris en charge seulement pour une taille supérieure à " +"%(supported_size)d octets" + +#: api/serializers.py:729 +msgid "Write-only field used to change the password." +msgstr "Champ en écriture seule servant à modifier le mot de passe." + +#: api/serializers.py:731 +msgid "Set if the account is managed by an external service" +msgstr "À définir si le compte est géré par un service externe" + +#: api/serializers.py:755 +msgid "Password required for new User."
+msgstr "Mot de passe requis pour le nouvel utilisateur." + +#: api/serializers.py:839 +#, python-format +msgid "Unable to change %s on user managed by LDAP." +msgstr "Impossible de redéfinir %s sur un utilisateur géré par LDAP." + +#: api/serializers.py:991 +msgid "Organization is missing" +msgstr "L'organisation est manquante" + +#: api/serializers.py:997 +msgid "Array of playbooks available within this project." +msgstr "Tableau des playbooks disponibles dans ce projet." + +#: api/serializers.py:1179 +#, python-format +msgid "Invalid port specification: %s" +msgstr "Spécification de port non valide : %s" + +#: api/serializers.py:1207 main/validators.py:193 +msgid "Must be valid JSON or YAML." +msgstr "Syntaxe JSON ou YAML valide exigée." + +#: api/serializers.py:1264 +msgid "Invalid group name." +msgstr "Nom de groupe incorrect." + +#: api/serializers.py:1339 +msgid "" +"Script must begin with a hashbang sequence: i.e.... #!/usr/bin/env python" +msgstr "" +"Le script doit commencer par une séquence hashbang : c.-à-d. ... " +"#!/usr/bin/env python" + +#: api/serializers.py:1392 +msgid "If 'source' is 'custom', 'source_script' must be provided." +msgstr "Si la valeur 'source' est 'custom', 'source_script' doit être défini." + +#: api/serializers.py:1396 +msgid "" +"The 'source_script' does not belong to the same organization as the " +"inventory." +msgstr "" +"Le 'source_script' n'appartient pas à la même organisation que l'inventaire." + +#: api/serializers.py:1398 +msgid "'source_script' doesn't exist." +msgstr "'source_script' n'existe pas." + +#: api/serializers.py:1757 +msgid "" +"Write-only field used to add user to owner role. If provided, do not give " +"either team or organization. Only valid for creation." +msgstr "" +"Champ en écriture seule qui sert à ajouter un utilisateur au rôle de " +"propriétaire. Si vous le définissez, n'entrez ni équipe ni organisation. " +"Seulement valable pour la création." 
+ +#: api/serializers.py:1762 +msgid "" +"Write-only field used to add team to owner role. If provided, do not give " +"either user or organization. Only valid for creation." +msgstr "" +"Champ en écriture seule qui sert à ajouter une équipe au rôle de " +"propriétaire. Si vous le définissez, n'entrez ni utilisateur ni " +"organisation. Seulement valable pour la création." + +#: api/serializers.py:1767 +msgid "" +"Inherit permissions from organization roles. If provided on creation, do not" +" give either user or team." +msgstr "" +"Hériter des permissions à partir des rôles d'organisation. Si vous le " +"définissez lors de la création, n'entrez ni utilisateur ni équipe." + +#: api/serializers.py:1783 +msgid "Missing 'user', 'team', or 'organization'." +msgstr "Valeur 'utilisateur', 'équipe' ou 'organisation' manquante." + +#: api/serializers.py:1796 +msgid "" +"Credential organization must be set and match before assigning to a team" +msgstr "" +"L'organisation des informations d'identification doit être définie et mise " +"en correspondance avant de l'attribuer à une équipe" + +#: api/serializers.py:1888 +msgid "This field is required." +msgstr "Ce champ est obligatoire." + +#: api/serializers.py:1890 api/serializers.py:1892 +msgid "Playbook not found for project." +msgstr "Playbook introuvable pour le projet." + +#: api/serializers.py:1894 +msgid "Must select playbook for project." +msgstr "Un playbook doit être sélectionné pour le projet." + +#: api/serializers.py:1958 main/models/jobs.py:278 +msgid "Scan jobs must be assigned a fixed inventory." +msgstr "Un inventaire fixe doit être assigné aux tâches de scan." + +#: api/serializers.py:1960 main/models/jobs.py:281 +msgid "Job types 'run' and 'check' must have assigned a project." +msgstr "Un projet doit être assigné aux types de tâche 'run' et 'check'." + +#: api/serializers.py:1963 +msgid "Survey Enabled cannot be used with scan jobs."
+msgstr "" +"L'option Questionnaire activé ne peut pas être utilisée avec les tâches de " +"scan." + +#: api/serializers.py:2023 +msgid "Invalid job template." +msgstr "Modèle de tâche non valide." + +#: api/serializers.py:2108 +msgid "Credential not found or deleted." +msgstr "Informations d'identification introuvables ou supprimées." + +#: api/serializers.py:2110 +msgid "Job Template Project is missing or undefined." +msgstr "Le projet de modèle de tâche est manquant ou non défini." + +#: api/serializers.py:2112 +msgid "Job Template Inventory is missing or undefined." +msgstr "L'inventaire de modèle de tâche est manquant ou non défini." + +#: api/serializers.py:2397 +#, python-format +msgid "%(job_type)s is not a valid job type. The choices are %(choices)s." +msgstr "" +"%(job_type)s n'est pas un type de tâche valide. Les choix sont %(choices)s." + +#: api/serializers.py:2402 +msgid "Workflow job template is missing during creation." +msgstr "Le modèle de tâche Workflow est manquant lors de la création." + +#: api/serializers.py:2407 +#, python-format +msgid "Cannot nest a %s inside a WorkflowJobTemplate" +msgstr "Impossible d'imbriquer %s dans un modèle de tâche Workflow." + +#: api/serializers.py:2645 +#, python-format +msgid "Job Template '%s' is missing or undefined." +msgstr "Le modèle de tâche '%s' est manquant ou non défini." + +#: api/serializers.py:2671 +msgid "Must be a valid JSON or YAML dictionary." +msgstr "Dictionnaire JSON ou YAML valide exigé." + +#: api/serializers.py:2813 +msgid "" +"Missing required fields for Notification Configuration: notification_type" +msgstr "" +"Champs obligatoires manquants pour la configuration des notifications : " +"notification_type" + +#: api/serializers.py:2836 +msgid "No values specified for field '{}'" +msgstr "Aucune valeur spécifiée pour le champ '{}'" + +#: api/serializers.py:2841 +msgid "Missing required fields for Notification Configuration: {}."
+msgstr "" +"Champs obligatoires manquants pour la configuration des notifications : {}." + +#: api/serializers.py:2844 +msgid "Configuration field '{}' incorrect type, expected {}." +msgstr "Type de champ de configuration '{}' incorrect, {} attendu." + +#: api/serializers.py:2897 +msgid "Inventory Source must be a cloud resource." +msgstr "La source d'inventaire doit être une ressource cloud." + +#: api/serializers.py:2899 +msgid "Manual Project can not have a schedule set." +msgstr "Le projet manuel ne peut pas avoir de calendrier défini." + +#: api/serializers.py:2921 +msgid "" +"DTSTART required in rrule. Value should match: DTSTART:YYYYMMDDTHHMMSSZ" +msgstr "" +"DTSTART obligatoire dans rrule. La valeur doit correspondre à : " +"DTSTART:YYYYMMDDTHHMMSSZ" + +#: api/serializers.py:2923 +msgid "Multiple DTSTART is not supported." +msgstr "Une seule valeur DTSTART est prise en charge." + +#: api/serializers.py:2925 +msgid "RRULE require in rrule." +msgstr "RRULE obligatoire dans rrule." + +#: api/serializers.py:2927 +msgid "Multiple RRULE is not supported." +msgstr "Une seule valeur RRULE est prise en charge." + +#: api/serializers.py:2929 +msgid "INTERVAL required in rrule." +msgstr "INTERVAL obligatoire dans rrule." + +#: api/serializers.py:2931 +msgid "TZID is not supported." +msgstr "TZID n'est pas pris en charge." + +#: api/serializers.py:2933 +msgid "SECONDLY is not supported." +msgstr "SECONDLY n'est pas pris en charge." + +#: api/serializers.py:2935 +msgid "Multiple BYMONTHDAYs not supported." +msgstr "Une seule valeur BYMONTHDAY est prise en charge." + +#: api/serializers.py:2937 +msgid "Multiple BYMONTHs not supported." +msgstr "Une seule valeur BYMONTH est prise en charge." + +#: api/serializers.py:2939 +msgid "BYDAY with numeric prefix not supported." +msgstr "BYDAY avec un préfixe numérique non pris en charge." + +#: api/serializers.py:2941 +msgid "BYYEARDAY not supported." +msgstr "BYYEARDAY non pris en charge." 
+ +#: api/serializers.py:2943 +msgid "BYWEEKNO not supported." +msgstr "BYWEEKNO non pris en charge." + +#: api/serializers.py:2947 +msgid "COUNT > 999 is unsupported." +msgstr "COUNT > 999 non pris en charge." + +#: api/serializers.py:2951 +msgid "rrule parsing failed validation." +msgstr "L'analyse rrule n'a pas pu être validée." + +#: api/serializers.py:2969 +msgid "" +"A summary of the new and changed values when an object is created, updated, " +"or deleted" +msgstr "" +"Un récapitulatif des valeurs nouvelles et modifiées lorsqu'un objet est " +"créé, mis à jour ou supprimé" + +#: api/serializers.py:2971 +msgid "" +"For create, update, and delete events this is the object type that was " +"affected. For associate and disassociate events this is the object type " +"associated or disassociated with object2." +msgstr "" +"Pour les événements de création, de mise à jour et de suppression, il " +"s'agit du type d'objet affecté. Pour les événements d'association et de " +"dissociation, il s'agit du type d'objet associé à object2 ou dissocié de " +"celui-ci." + +#: api/serializers.py:2974 +msgid "" +"Unpopulated for create, update, and delete events. For associate and " +"disassociate events this is the object type that object1 is being associated" +" with." +msgstr "" +"Non renseigné pour les événements de création, de mise à jour et de " +"suppression. Pour les événements d'association et de dissociation, il " +"s'agit du type d'objet auquel object1 est associé." + +#: api/serializers.py:2977 +msgid "The action taken with respect to the given object(s)." +msgstr "Action appliquée par rapport à l'objet ou aux objets donnés." + +#: api/serializers.py:3077 +msgid "Unable to login with provided credentials." +msgstr "Connexion impossible avec les informations d'identification fournies." + +#: api/serializers.py:3079 +msgid "Must include \"username\" and \"password\"." +msgstr "Doit inclure \"username\" et \"password\"."
+ +#: api/views.py:99 +msgid "Your license does not allow use of the activity stream." +msgstr "Votre licence ne permet pas l'utilisation du flux d'activité." + +#: api/views.py:109 +msgid "Your license does not permit use of system tracking." +msgstr "Votre licence ne permet pas l'utilisation du suivi du système." + +#: api/views.py:119 +msgid "Your license does not allow use of workflows." +msgstr "Votre licence ne permet pas l'utilisation de workflows." + +#: api/views.py:127 templates/rest_framework/api.html:28 +msgid "REST API" +msgstr "API REST" + +#: api/views.py:134 templates/rest_framework/api.html:4 +msgid "Ansible Tower REST API" +msgstr "API REST Ansible Tower" + +#: api/views.py:150 +msgid "Version 1" +msgstr "Version 1" + +#: api/views.py:201 +msgid "Ping" +msgstr "Ping" + +#: api/views.py:230 conf/apps.py:12 +msgid "Configuration" +msgstr "Configuration" + +#: api/views.py:283 +msgid "Invalid license data" +msgstr "Données de licence non valides" + +#: api/views.py:285 +msgid "Missing 'eula_accepted' property" +msgstr "Propriété 'eula_accepted' manquante" + +#: api/views.py:289 +msgid "'eula_accepted' value is invalid" +msgstr "La valeur 'eula_accepted' n'est pas valide" + +#: api/views.py:292 +msgid "'eula_accepted' must be True" +msgstr "La valeur 'eula_accepted' doit être True" + +#: api/views.py:299 +msgid "Invalid JSON" +msgstr "Syntaxe JSON non valide" + +#: api/views.py:307 +msgid "Invalid License" +msgstr "Licence non valide" + +#: api/views.py:317 +msgid "Invalid license" +msgstr "Licence non valide" + +#: api/views.py:325 +#, python-format +msgid "Failed to remove license (%s)" +msgstr "Suppression de la licence (%s) impossible" + +#: api/views.py:330 +msgid "Dashboard" +msgstr "Tableau de bord" + +#: api/views.py:436 +msgid "Dashboard Jobs Graphs" +msgstr "Graphiques de tâches du tableau de bord" + +#: api/views.py:472 +#, python-format +msgid "Unknown period \"%s\"" +msgstr "Période \"%s\" inconnue" + +#: api/views.py:486 +msgid 
"Schedules" +msgstr "Calendriers" + +#: api/views.py:505 +msgid "Schedule Jobs List" +msgstr "Liste des tâches planifiées" + +#: api/views.py:715 +msgid "Your Tower license only permits a single organization to exist." +msgstr "Votre licence Tower permet l'existence d'une seule organisation." + +#: api/views.py:940 api/views.py:1299 +msgid "Role 'id' field is missing." +msgstr "Le champ \"id\" du rôle est manquant." + +#: api/views.py:946 api/views.py:4081 +msgid "You cannot assign an Organization role as a child role for a Team." +msgstr "" +"Vous ne pouvez pas attribuer un rôle Organisation en tant que rôle enfant " +"pour une équipe." + +#: api/views.py:950 api/views.py:4095 +msgid "You cannot grant system-level permissions to a team." +msgstr "" +"Vous ne pouvez pas accorder de permissions au niveau système à une équipe." + +#: api/views.py:957 api/views.py:4087 +msgid "" +"You cannot grant credential access to a team when the Organization field " +"isn't set, or belongs to a different organization" +msgstr "" +"Vous ne pouvez pas accorder d'accès par informations d'identification à une " +"équipe lorsque le champ Organisation n'est pas défini ou qu'elle appartient " +"à une organisation différente" + +#: api/views.py:1047 +msgid "Cannot delete project." +msgstr "Suppression du projet impossible." + +#: api/views.py:1076 +msgid "Project Schedules" +msgstr "Calendriers des projets" + +#: api/views.py:1180 api/views.py:2270 api/views.py:3276 +msgid "Cannot delete job resource when associated workflow job is running." +msgstr "" +"Impossible de supprimer les ressources de tâche lorsqu'une tâche de workflow" +" associée est en cours d'exécution." + +#: api/views.py:1257 +msgid "Me" +msgstr "Moi-même" + +#: api/views.py:1303 api/views.py:4036 +msgid "You may not perform any action with your own admin_role." +msgstr "Vous ne pouvez pas effectuer d'action avec votre propre admin_role."
+ +#: api/views.py:1309 api/views.py:4040 +msgid "You may not change the membership of a users admin_role" +msgstr "" +"Vous ne pouvez pas modifier l'appartenance de l'admin_role d'un utilisateur" + +#: api/views.py:1314 api/views.py:4045 +msgid "" +"You cannot grant credential access to a user not in the credentials' " +"organization" +msgstr "" +"Vous ne pouvez pas accorder d'accès par informations d'identification à un " +"utilisateur ne figurant pas dans l'organisation d'informations " +"d'identification." + +#: api/views.py:1318 api/views.py:4049 +msgid "You cannot grant private credential access to another user" +msgstr "" +"Vous ne pouvez pas accorder d'accès privé par informations d'identification " +"à un autre utilisateur" + +#: api/views.py:1416 +#, python-format +msgid "Cannot change %s." +msgstr "Impossible de modifier %s." + +#: api/views.py:1422 +msgid "Cannot delete user." +msgstr "Impossible de supprimer l'utilisateur." + +#: api/views.py:1570 +msgid "Cannot delete inventory script." +msgstr "Impossible de supprimer le script d'inventaire." + +#: api/views.py:1805 +msgid "Fact not found." +msgstr "Fait introuvable." + +#: api/views.py:2125 +msgid "Inventory Source List" +msgstr "Liste des sources d'inventaire" + +#: api/views.py:2153 +msgid "Cannot delete inventory source." +msgstr "Impossible de supprimer la source d'inventaire." + +#: api/views.py:2161 +msgid "Inventory Source Schedules" +msgstr "Calendriers des sources d'inventaire" + +#: api/views.py:2191 +msgid "Notification Templates can only be assigned when source is one of {}." +msgstr "" +"Les modèles de notification ne peuvent être attribués que lorsque la source " +"est l'une des {}." + +#: api/views.py:2402 +msgid "Job Template Schedules" +msgstr "Calendriers des modèles de tâche" + +#: api/views.py:2422 api/views.py:2438 +msgid "Your license does not allow adding surveys." +msgstr "Votre licence ne permet pas l'ajout de questionnaires." 
+ +#: api/views.py:2445 +msgid "'name' missing from survey spec." +msgstr "'name' manquant dans la spécification du questionnaire." + +#: api/views.py:2447 +msgid "'description' missing from survey spec." +msgstr "'description' manquante dans la spécification du questionnaire." + +#: api/views.py:2449 +msgid "'spec' missing from survey spec." +msgstr "'spec' manquante dans la spécification du questionnaire." + +#: api/views.py:2451 +msgid "'spec' must be a list of items." +msgstr "'spec' doit être une liste d'éléments" + +#: api/views.py:2453 +msgid "'spec' doesn't contain any items." +msgstr "'spec' ne contient aucun élément." + +#: api/views.py:2459 +#, python-format +msgid "Survey question %s is not a json object." +msgstr "La question %s n'est pas un objet json." + +#: api/views.py:2461 +#, python-format +msgid "'type' missing from survey question %s." +msgstr "'type' est manquant dans la question %s." + +#: api/views.py:2463 +#, python-format +msgid "'question_name' missing from survey question %s." +msgstr "'question_name' est manquant dans la question %s." + +#: api/views.py:2465 +#, python-format +msgid "'variable' missing from survey question %s." +msgstr "'variable' est manquant dans la question %s." + +#: api/views.py:2467 +#, python-format +msgid "'variable' '%(item)s' duplicated in survey question %(survey)s." +msgstr "'variable' '%(item)s' en double dans la question %(survey)s." + +#: api/views.py:2472 +#, python-format +msgid "'required' missing from survey question %s." +msgstr "'required' est manquant dans la question %s." + +#: api/views.py:2683 +msgid "No matching host could be found!" +msgstr "Aucun hôte correspondant n'a été trouvé." + +#: api/views.py:2686 +msgid "Multiple hosts matched the request!" +msgstr "Plusieurs hôtes correspondent à la requête." + +#: api/views.py:2691 +msgid "Cannot start automatically, user input required!" +msgstr "" +"Impossible de démarrer automatiquement, saisie de l'utilisateur obligatoire." 
+ +#: api/views.py:2698 +msgid "Host callback job already pending." +msgstr "La tâche de rappel de l'hôte est déjà en attente." + +#: api/views.py:2711 +msgid "Error starting job!" +msgstr "Erreur lors du démarrage de la tâche." + +#: api/views.py:3040 +msgid "Workflow Job Template Schedules" +msgstr "Calendriers des modèles de tâche Workflow" + +#: api/views.py:3175 api/views.py:3714 +msgid "Superuser privileges needed." +msgstr "Privilèges de superutilisateur requis." + +#: api/views.py:3207 +msgid "System Job Template Schedules" +msgstr "Calendriers des modèles de tâche Système" + +#: api/views.py:3399 +msgid "Job Host Summaries List" +msgstr "Liste récapitulative des hôtes de la tâche" + +#: api/views.py:3441 +msgid "Job Event Children List" +msgstr "Liste des enfants d'événement de la tâche" + +#: api/views.py:3450 +msgid "Job Event Hosts List" +msgstr "Liste des hôtes d'événement de la tâche" + +#: api/views.py:3459 +msgid "Job Events List" +msgstr "Liste des événements de la tâche" + +#: api/views.py:3668 +msgid "Ad Hoc Command Events List" +msgstr "Liste d'événements de la commande ad hoc" + +#: api/views.py:3862 +#, python-format +msgid "Error generating stdout download file: %s" +msgstr "Erreur lors de la génération du fichier de téléchargement stdout : %s" + +#: api/views.py:3907 +msgid "Delete not allowed while there are pending notifications" +msgstr "Suppression non autorisée tant que des notifications sont en attente" + +#: api/views.py:3914 +msgid "Notification Template Test" +msgstr "Test de modèle de notification" + +#: api/views.py:4030 +msgid "User 'id' field is missing." +msgstr "Le champ \"id\" de l'utilisateur est manquant." + +#: api/views.py:4073 +msgid "Team 'id' field is missing." +msgstr "Le champ \"id\" de l'équipe est manquant."
+ +#: conf/conf.py:20 +msgid "Bud Frogs" +msgstr "Bud Frogs" + +#: conf/conf.py:21 +msgid "Bunny" +msgstr "Bunny" + +#: conf/conf.py:22 +msgid "Cheese" +msgstr "Cheese" + +#: conf/conf.py:23 +msgid "Daemon" +msgstr "Daemon" + +#: conf/conf.py:24 +msgid "Default Cow" +msgstr "Default Cow" + +#: conf/conf.py:25 +msgid "Dragon" +msgstr "Dragon" + +#: conf/conf.py:26 +msgid "Elephant in Snake" +msgstr "Elephant in Snake" + +#: conf/conf.py:27 +msgid "Elephant" +msgstr "Elephant" + +#: conf/conf.py:28 +msgid "Eyes" +msgstr "Eyes" + +#: conf/conf.py:29 +msgid "Hello Kitty" +msgstr "Hello Kitty" + +#: conf/conf.py:30 +msgid "Kitty" +msgstr "Kitty" + +#: conf/conf.py:31 +msgid "Luke Koala" +msgstr "Luke Koala" + +#: conf/conf.py:32 +msgid "Meow" +msgstr "Meow" + +#: conf/conf.py:33 +msgid "Milk" +msgstr "Milk" + +#: conf/conf.py:34 +msgid "Moofasa" +msgstr "Moofasa" + +#: conf/conf.py:35 +msgid "Moose" +msgstr "Moose" + +#: conf/conf.py:36 +msgid "Ren" +msgstr "Ren" + +#: conf/conf.py:37 +msgid "Sheep" +msgstr "Sheep" + +#: conf/conf.py:38 +msgid "Small Cow" +msgstr "Small Cow" + +#: conf/conf.py:39 +msgid "Stegosaurus" +msgstr "Stegosaurus" + +#: conf/conf.py:40 +msgid "Stimpy" +msgstr "Stimpy" + +#: conf/conf.py:41 +msgid "Super Milker" +msgstr "Super Milker" + +#: conf/conf.py:42 +msgid "Three Eyes" +msgstr "Three Eyes" + +#: conf/conf.py:43 +msgid "Turkey" +msgstr "Turkey" + +#: conf/conf.py:44 +msgid "Turtle" +msgstr "Turtle" + +#: conf/conf.py:45 +msgid "Tux" +msgstr "Tux" + +#: conf/conf.py:46 +msgid "Udder" +msgstr "Udder" + +#: conf/conf.py:47 +msgid "Vader Koala" +msgstr "Vader Koala" + +#: conf/conf.py:48 +msgid "Vader" +msgstr "Vader" + +#: conf/conf.py:49 +msgid "WWW" +msgstr "WWW" + +#: conf/conf.py:52 +msgid "Cow Selection" +msgstr "Sélection cow" + +#: conf/conf.py:53 +msgid "Select which cow to use with cowsay when running jobs." +msgstr "" +"Sélectionnez quel cow utiliser avec cowsay lors de l'exécution de tâches." 
+ +#: conf/conf.py:54 conf/conf.py:75 +msgid "Cows" +msgstr "Cows" + +#: conf/conf.py:73 +msgid "Example Read-Only Setting" +msgstr "Exemple de paramètre en lecture seule" + +#: conf/conf.py:74 +msgid "Example setting that cannot be changed." +msgstr "Exemple de paramètre qui ne peut pas être modifié." + +#: conf/conf.py:93 +msgid "Example Setting" +msgstr "Exemple de paramètre" + +#: conf/conf.py:94 +msgid "Example setting which can be different for each user." +msgstr "Exemple de paramètre qui peut être différent pour chaque utilisateur." + +#: conf/conf.py:95 conf/registry.py:67 conf/views.py:46 +msgid "User" +msgstr "Utilisateur" + +#: conf/fields.py:38 +msgid "Enter a valid URL" +msgstr "Entrez une URL valide" + +#: conf/license.py:19 +msgid "Your Tower license does not allow that." +msgstr "Votre licence Tower ne vous y autorise pas." + +#: conf/management/commands/migrate_to_database_settings.py:41 +msgid "Only show which settings would be commented/migrated." +msgstr "" +"Afficher seulement les paramètres qui pourraient être commentés/migrés." + +#: conf/management/commands/migrate_to_database_settings.py:48 +msgid "" +"Skip over settings that would raise an error when commenting/migrating." +msgstr "" +"Ignorer les paramètres qui pourraient provoquer une erreur lors de la saisie" +" de commentaires/de la migration." + +#: conf/management/commands/migrate_to_database_settings.py:55 +msgid "Skip commenting out settings in files." +msgstr "Ignorer la saisie de commentaires de paramètres dans les fichiers." + +#: conf/management/commands/migrate_to_database_settings.py:61 +msgid "Backup existing settings files with this suffix." +msgstr "Sauvegardez les fichiers de paramètres existants avec ce suffixe."
+ +#: conf/registry.py:55 +msgid "All" +msgstr "Tous" + +#: conf/registry.py:56 +msgid "Changed" +msgstr "Modifié" + +#: conf/registry.py:68 +msgid "User-Defaults" +msgstr "Paramètres utilisateur par défaut" + +#: conf/views.py:38 +msgid "Setting Categories" +msgstr "Catégories de paramètre" + +#: conf/views.py:61 +msgid "Setting Detail" +msgstr "Détails du paramètre" + +#: main/access.py:255 +#, python-format +msgid "Bad data found in related field %s." +msgstr "Données incorrectes trouvées dans le champ %s associé." + +#: main/access.py:296 +msgid "License is missing." +msgstr "La licence est manquante." + +#: main/access.py:298 +msgid "License has expired." +msgstr "La licence est arrivée à expiration." + +#: main/access.py:303 +#, python-format +msgid "License count of %s instances has been reached." +msgstr "Le nombre de %s instances autorisé par la licence a été atteint." + +#: main/access.py:305 +#, python-format +msgid "License count of %s instances has been exceeded." +msgstr "Le nombre de %s instances autorisé par la licence a été dépassé." + +#: main/access.py:307 +msgid "Host count exceeds available instances." +msgstr "Le nombre d'hôtes dépasse celui des instances disponibles." + +#: main/access.py:311 +#, python-format +msgid "Feature %s is not enabled in the active license." +msgstr "La fonctionnalité %s n'est pas activée dans la licence active." + +#: main/access.py:313 +msgid "Features not found in active license." +msgstr "Fonctionnalités introuvables dans la licence active." + +#: main/access.py:511 main/access.py:578 main/access.py:698 main/access.py:961 +#: main/access.py:1200 main/access.py:1597 +msgid "Resource is being used by running jobs" +msgstr "La ressource est utilisée par des tâches en cours d'exécution" + +#: main/access.py:622 +msgid "Unable to change inventory on a host." +msgstr "Impossible de modifier l'inventaire sur un hôte." + +#: main/access.py:634 main/access.py:679 +msgid "Cannot associate two items from different inventories."
+msgstr "Impossible d'associer deux éléments d'inventaires différents." + +#: main/access.py:667 +msgid "Unable to change inventory on a group." +msgstr "Impossible de modifier l'inventaire sur un groupe." + +#: main/access.py:881 +msgid "Unable to change organization on a team." +msgstr "Impossible de modifier l'organisation d'une équipe." + +#: main/access.py:894 +msgid "The {} role cannot be assigned to a team" +msgstr "Le rôle {} ne peut pas être attribué à une équipe" + +#: main/access.py:896 +msgid "The admin_role for a User cannot be assigned to a team" +msgstr "L'admin_role d'un utilisateur ne peut pas être attribué à une équipe" + +#: main/access.py:1670 +msgid "" +"You do not have permission to the workflow job resources required for " +"relaunch." +msgstr "" +"Vous n'avez pas la permission d'accéder aux ressources de tâche de workflow " +"requises pour la relance." + +#: main/apps.py:9 +msgid "Main" +msgstr "Principal" + +#: main/conf.py:17 +msgid "Enable Activity Stream" +msgstr "Activer le flux d'activité" + +#: main/conf.py:18 +msgid "Enable capturing activity for the Tower activity stream." +msgstr "Activer la capture d'activités pour le flux d'activité Tower." + +#: main/conf.py:19 main/conf.py:29 main/conf.py:39 main/conf.py:48 +#: main/conf.py:60 main/conf.py:78 main/conf.py:103 +msgid "System" +msgstr "Système" + +#: main/conf.py:27 +msgid "Enable Activity Stream for Inventory Sync" +msgstr "Activer le flux d'activité pour la synchronisation des inventaires" + +#: main/conf.py:28 +msgid "" +"Enable capturing activity for the Tower activity stream when running " +"inventory sync." +msgstr "" +"Activer la capture d'activités pour le flux d'activité Tower lors de la " +"synchronisation des inventaires." + +#: main/conf.py:37 +msgid "All Users Visible to Organization Admins" +msgstr "" +"Tous les utilisateurs visibles pour les administrateurs de l'organisation" + +#: main/conf.py:38 +msgid "" +"Controls whether any Organization Admin can view all users, even those not " +"associated with their Organization."
+msgstr "" +"Contrôle si un administrateur d'organisation peut ou non afficher tous les " +"utilisateurs, même ceux qui ne sont pas associés à son organisation." + +#: main/conf.py:46 +msgid "Enable Tower Administrator Alerts" +msgstr "Activer les alertes administrateur de Tower" + +#: main/conf.py:47 +msgid "" +"Allow Tower to email Admin users for system events that may require " +"attention." +msgstr "" +"Autoriser Tower à alerter les administrateurs par email concernant des " +"événements système susceptibles de mériter leur attention." + +#: main/conf.py:57 +msgid "Base URL of the Tower host" +msgstr "URL de base pour l'hôte Tower" + +#: main/conf.py:58 +msgid "" +"This setting is used by services like notifications to render a valid url to" +" the Tower host." +msgstr "" +"Ce paramètre est utilisé par des services sous la forme de notifications " +"permettant de rendre valide une URL pour l'hôte Tower." + +#: main/conf.py:67 +msgid "Remote Host Headers" +msgstr "En-têtes d'hôte distant" + +#: main/conf.py:68 +msgid "" +"HTTP headers and meta keys to search to determine remote host name or IP. Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if behind a reverse proxy.\n" +"\n" +"Note: The headers will be searched in order and the first found remote host name or IP will be used.\n" +"\n" +"In the below example 8.8.8.7 would be the chosen IP address.\n" +"X-Forwarded-For: 8.8.8.7, 192.168.2.1, 127.0.0.1\n" +"Host: 127.0.0.1\n" +"REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']" +msgstr "" +"En-têtes HTTP et méta-clés à rechercher afin de déterminer le nom ou l'adresse IP d'un hôte distant. 
Ajoutez des éléments supplémentaires à cette liste, tels que \"HTTP_X_FORWARDED_FOR\", en présence d'un proxy inverse.\n" +"\n" +"Remarque : les en-têtes seront recherchés dans l'ordre, et le premier nom ou la première adresse IP d'hôte distant trouvé(e) sera utilisé(e).\n" +"\n" +"Dans l'exemple ci-dessous, 8.8.8.7 serait l'adresse IP choisie.\n" +"X-Forwarded-For: 8.8.8.7, 192.168.2.1, 127.0.0.1\n" +"Host: 127.0.0.1\n" +"REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']" + +#: main/conf.py:99 +msgid "Tower License" +msgstr "Licence Tower" + +#: main/conf.py:100 +msgid "" +"The license controls which features and functionality are enabled in Tower. " +"Use /api/v1/config/ to update or change the license." +msgstr "" +"La licence détermine les fonctionnalités et les fonctions qui sont activées " +"dans Tower. Utilisez /api/v1/config/ pour mettre à jour ou modifier la " +"licence." + +#: main/conf.py:110 +msgid "Ansible Modules Allowed for Ad Hoc Jobs" +msgstr "Modules Ansible autorisés pour des tâches ad hoc" + +#: main/conf.py:111 +msgid "List of modules allowed to be used by ad-hoc jobs." +msgstr "Liste des modules que des tâches ad hoc sont autorisées à utiliser." + +#: main/conf.py:112 main/conf.py:121 main/conf.py:130 main/conf.py:140 +#: main/conf.py:150 main/conf.py:160 main/conf.py:170 main/conf.py:180 +#: main/conf.py:190 main/conf.py:202 main/conf.py:214 main/conf.py:226 +msgid "Jobs" +msgstr "Tâches" + +#: main/conf.py:119 +msgid "Enable job isolation" +msgstr "Activer l'isolement des tâches" + +#: main/conf.py:120 +msgid "" +"Isolates an Ansible job from protected parts of the Tower system to prevent " +"exposing sensitive information." +msgstr "" +"Permet d'isoler une tâche Ansible des parties protégées du système Tower " +"pour éviter l'exposition d'informations sensibles."
+ +#: main/conf.py:128 +msgid "Job isolation execution path" +msgstr "Chemin d'exécution pour l'isolement des tâches" + +#: main/conf.py:129 +msgid "" +"Create temporary working directories for isolated jobs in this location." +msgstr "" +"Créer des répertoires de travail temporaires pour les tâches isolées à cet " +"emplacement." + +#: main/conf.py:138 +msgid "Paths to hide from isolated jobs" +msgstr "Chemins à dissimuler des tâches isolées" + +#: main/conf.py:139 +msgid "Additional paths to hide from isolated processes." +msgstr "Chemins supplémentaires à dissimuler des processus isolés." + +#: main/conf.py:148 +msgid "Paths to expose to isolated jobs" +msgstr "Chemins à exposer aux tâches isolées" + +#: main/conf.py:149 +msgid "" +"Whitelist of paths that would otherwise be hidden to expose to isolated " +"jobs." +msgstr "" +"Liste blanche des chemins à exposer aux tâches isolées, qui seraient " +"autrement dissimulés." + +#: main/conf.py:158 +msgid "Standard Output Maximum Display Size" +msgstr "Taille d'affichage maximale pour une sortie standard" + +#: main/conf.py:159 +msgid "" +"Maximum Size of Standard Output in bytes to display before requiring the " +"output be downloaded." +msgstr "" +"Taille maximale d'une sortie standard en octets à afficher avant de demander" +" le téléchargement de la sortie." + +#: main/conf.py:168 +msgid "Job Event Standard Output Maximum Display Size" +msgstr "" +"Taille d'affichage maximale pour une sortie standard d'événement de tâche" + +#: main/conf.py:169 +msgid "" +"Maximum Size of Standard Output in bytes to display for a single job or ad " +"hoc command event. `stdout` will end with `…` when truncated." +msgstr "" +"Taille maximale de la sortie standard en octets à afficher pour une seule " +"tâche ou pour un seul événement de commande ad hoc. `stdout` se terminera " +"par `…` quand il sera tronqué." + +#: main/conf.py:178 +msgid "Maximum Scheduled Jobs" +msgstr "Nombre max. 
de tâches planifiées" + +#: main/conf.py:179 +msgid "" +"Maximum number of the same job template that can be waiting to run when " +"launching from a schedule before no more are created." +msgstr "" +"Nombre maximal du même modèle de tâche qui peut être mis en attente " +"d'exécution lors de son lancement à partir d'un calendrier, avant que " +"d'autres ne soient créés." + +#: main/conf.py:188 +msgid "Ansible Callback Plugins" +msgstr "Plug-ins de rappel Ansible" + +#: main/conf.py:189 +msgid "" +"List of paths to search for extra callback plugins to be used when running " +"jobs." +msgstr "" +"Liste des chemins servant à rechercher d'autres plug-ins de rappel qui " +"serviront lors de l'exécution de tâches." + +#: main/conf.py:199 +msgid "Default Job Timeout" +msgstr "Délai d'attente par défaut des tâches" + +#: main/conf.py:200 +msgid "" +"Maximum time to allow jobs to run. Use value of 0 to indicate that no " +"timeout should be imposed. A timeout set on an individual job template will " +"override this." +msgstr "" +"Délai maximal d'exécution des tâches. Utilisez la valeur 0 pour indiquer " +"qu'aucun délai ne doit être imposé. Un délai d'attente défini sur celui d'un" +" modèle de tâche précis écrasera cette valeur." + +#: main/conf.py:211 +msgid "Default Inventory Update Timeout" +msgstr "Délai d'attente par défaut pour la mise à jour d'inventaire" + +#: main/conf.py:212 +msgid "" +"Maximum time to allow inventory updates to run. Use value of 0 to indicate " +"that no timeout should be imposed. A timeout set on an individual inventory " +"source will override this." +msgstr "" +"Délai maximal d'exécution des mises à jour d'inventaire. Utilisez la valeur " +"0 pour indiquer qu'aucun délai ne doit être imposé. Un délai d'attente " +"défini sur celui d'une source d'inventaire précise écrasera cette valeur." 
+ +#: main/conf.py:223 +msgid "Default Project Update Timeout" +msgstr "Délai d'attente par défaut pour la mise à jour de projet" + +#: main/conf.py:224 +msgid "" +"Maximum time to allow project updates to run. Use value of 0 to indicate " +"that no timeout should be imposed. A timeout set on an individual project " +"will override this." +msgstr "" +"Délai maximal d'exécution des mises à jour de projet. Utilisez la valeur 0 " +"pour indiquer qu'aucun délai ne doit être imposé. Un délai d'attente défini " +"sur un projet précis remplacera cette valeur." + +#: main/conf.py:234 +msgid "Logging Aggregator" +msgstr "Agrégateur de journaux" + +#: main/conf.py:235 +msgid "Hostname/IP where external logs will be sent to." +msgstr "Nom d'hôte/IP où les journaux externes seront envoyés." + +#: main/conf.py:236 main/conf.py:245 main/conf.py:255 main/conf.py:264 +#: main/conf.py:274 main/conf.py:288 main/conf.py:300 main/conf.py:309 +msgid "Logging" +msgstr "Journalisation" + +#: main/conf.py:243 +msgid "Logging Aggregator Port" +msgstr "Port de l'agrégateur de journaux" + +#: main/conf.py:244 +msgid "Port on Logging Aggregator to send logs to (if required)." +msgstr "" +"Port de l'agrégateur de journaux auquel envoyer les journaux (si " +"nécessaire)." + +#: main/conf.py:253 +msgid "Logging Aggregator Type" +msgstr "Type d'agrégateur de journaux" + +#: main/conf.py:254 +msgid "Format messages for the chosen log aggregator." +msgstr "Formater les messages pour l'agrégateur de journaux choisi." + +#: main/conf.py:262 +msgid "Logging Aggregator Username" +msgstr "Nom d'utilisateur de l'agrégateur de journaux" + +#: main/conf.py:263 +msgid "Username for external log aggregator (if required)." +msgstr "" +"Nom d'utilisateur pour l'agrégateur de journaux externe (si nécessaire)." + +#: main/conf.py:272 +msgid "Logging Aggregator Password/Token" +msgstr "Mot de passe/token de l'agrégateur de journaux" + +#: main/conf.py:273 +msgid "" +"Password or authentication token for external log aggregator (if required)." 
+msgstr "" + +#: main/conf.py:281 +msgid "Loggers to send data to the log aggregator from" +msgstr "" +"Journaliseurs à partir duquel envoyer des données à l'agrégateur de journaux" + +#: main/conf.py:282 +msgid "" +"List of loggers that will send HTTP logs to the collector, these can include any or all of: \n" +"awx - Tower service logs\n" +"activity_stream - activity stream records\n" +"job_events - callback data from Ansible job events\n" +"system_tracking - facts gathered from scan jobs." +msgstr "" + +#: main/conf.py:295 +msgid "Log System Tracking Facts Individually" +msgstr "" + +#: main/conf.py:296 +msgid "" +"If set, system tracking facts will be sent for each package, service, " +"orother item found in a scan, allowing for greater search query granularity." +" If unset, facts will be sent as a single dictionary, allowing for greater " +"efficiency in fact processing." +msgstr "" + +#: main/conf.py:307 +msgid "Enable External Logging" +msgstr "" + +#: main/conf.py:308 +msgid "Enable sending logs to external log aggregator." +msgstr "" + +#: main/models/activity_stream.py:22 +msgid "Entity Created" +msgstr "Entité créée" + +#: main/models/activity_stream.py:23 +msgid "Entity Updated" +msgstr "Entité mise à jour" + +#: main/models/activity_stream.py:24 +msgid "Entity Deleted" +msgstr "Entité supprimée" + +#: main/models/activity_stream.py:25 +msgid "Entity Associated with another Entity" +msgstr "Entité associée à une autre entité" + +#: main/models/activity_stream.py:26 +msgid "Entity was Disassociated with another Entity" +msgstr "Entité dissociée d'une autre entité" + +#: main/models/ad_hoc_commands.py:96 +msgid "No valid inventory." +msgstr "Aucun inventaire valide." + +#: main/models/ad_hoc_commands.py:103 main/models/jobs.py:161 +msgid "You must provide a machine / SSH credential." +msgstr "Vous devez fournir des informations d'identification machine / SSH." 
+ +#: main/models/ad_hoc_commands.py:114 main/models/ad_hoc_commands.py:122 +msgid "Invalid type for ad hoc command" +msgstr "Type non valide pour la commande ad hoc" + +#: main/models/ad_hoc_commands.py:117 +msgid "Unsupported module for ad hoc commands." +msgstr "Module non pris en charge pour les commandes ad hoc." + +#: main/models/ad_hoc_commands.py:125 +#, python-format +msgid "No argument passed to %s module." +msgstr "Aucun argument transmis au module %s." + +#: main/models/ad_hoc_commands.py:222 main/models/jobs.py:763 +msgid "Host Failed" +msgstr "Échec de l'hôte" + +#: main/models/ad_hoc_commands.py:223 main/models/jobs.py:764 +msgid "Host OK" +msgstr "Hôte OK" + +#: main/models/ad_hoc_commands.py:224 main/models/jobs.py:767 +msgid "Host Unreachable" +msgstr "Hôte inaccessible" + +#: main/models/ad_hoc_commands.py:229 main/models/jobs.py:766 +msgid "Host Skipped" +msgstr "Hôte ignoré" + +#: main/models/ad_hoc_commands.py:239 main/models/jobs.py:794 +msgid "Debug" +msgstr "Déboguer" + +#: main/models/ad_hoc_commands.py:240 main/models/jobs.py:795 +msgid "Verbose" +msgstr "Verbeux" + +#: main/models/ad_hoc_commands.py:241 main/models/jobs.py:796 +msgid "Deprecated" +msgstr "Obsolète" + +#: main/models/ad_hoc_commands.py:242 main/models/jobs.py:797 +msgid "Warning" +msgstr "Avertissement" + +#: main/models/ad_hoc_commands.py:243 main/models/jobs.py:798 +msgid "System Warning" +msgstr "Avertissement système" + +#: main/models/ad_hoc_commands.py:244 main/models/jobs.py:799 +#: main/models/unified_jobs.py:64 +msgid "Error" +msgstr "Erreur" + +#: main/models/base.py:45 main/models/base.py:51 main/models/base.py:56 +msgid "Run" +msgstr "Exécuter" + +#: main/models/base.py:46 main/models/base.py:52 main/models/base.py:57 +msgid "Check" +msgstr "Vérifier" + +#: main/models/base.py:47 +msgid "Scan" +msgstr "Scanner" + +#: main/models/base.py:61 +msgid "Read Inventory" +msgstr "Lire l'inventaire" + +#: main/models/base.py:62 +msgid "Edit Inventory" +msgstr "Modifier 
l'inventaire" + +#: main/models/base.py:63 +msgid "Administrate Inventory" +msgstr "Administrer l'inventaire" + +#: main/models/base.py:64 +msgid "Deploy To Inventory" +msgstr "Déployer dans l'inventaire" + +#: main/models/base.py:65 +msgid "Deploy To Inventory (Dry Run)" +msgstr "Déployer dans l'inventaire (test uniquement)" + +#: main/models/base.py:66 +msgid "Scan an Inventory" +msgstr "Scanner un inventaire" + +#: main/models/base.py:67 +msgid "Create a Job Template" +msgstr "Créer un modèle de tâche" + +#: main/models/credential.py:33 +msgid "Machine" +msgstr "Machine" + +#: main/models/credential.py:34 +msgid "Network" +msgstr "Réseau" + +#: main/models/credential.py:35 +msgid "Source Control" +msgstr "Contrôle de la source" + +#: main/models/credential.py:36 +msgid "Amazon Web Services" +msgstr "Amazon Web Services" + +#: main/models/credential.py:37 +msgid "Rackspace" +msgstr "Rackspace" + +#: main/models/credential.py:38 main/models/inventory.py:713 +msgid "VMware vCenter" +msgstr "VMware vCenter" + +#: main/models/credential.py:39 main/models/inventory.py:714 +msgid "Red Hat Satellite 6" +msgstr "Red Hat Satellite 6" + +#: main/models/credential.py:40 main/models/inventory.py:715 +msgid "Red Hat CloudForms" +msgstr "Red Hat CloudForms" + +#: main/models/credential.py:41 main/models/inventory.py:710 +msgid "Google Compute Engine" +msgstr "Google Compute Engine" + +#: main/models/credential.py:42 main/models/inventory.py:711 +msgid "Microsoft Azure Classic (deprecated)" +msgstr "Microsoft Azure Classic (obsolète)" + +#: main/models/credential.py:43 main/models/inventory.py:712 +msgid "Microsoft Azure Resource Manager" +msgstr "Microsoft Azure Resource Manager" + +#: main/models/credential.py:44 main/models/inventory.py:716 +msgid "OpenStack" +msgstr "OpenStack" + +#: main/models/credential.py:48 +msgid "None" +msgstr "Aucun" + +#: main/models/credential.py:49 +msgid "Sudo" +msgstr "Sudo" + +#: main/models/credential.py:50 +msgid "Su" +msgstr "Su" + +#: 
main/models/credential.py:51 +msgid "Pbrun" +msgstr "Pbrun" + +#: main/models/credential.py:52 +msgid "Pfexec" +msgstr "Pfexec" + +#: main/models/credential.py:53 +msgid "DZDO" +msgstr "DZDO" + +#: main/models/credential.py:54 +msgid "Pmrun" +msgstr "Pmrun" + +#: main/models/credential.py:103 +msgid "Host" +msgstr "Hôte" + +#: main/models/credential.py:104 +msgid "The hostname or IP address to use." +msgstr "Nom d'hôte ou adresse IP à utiliser." + +#: main/models/credential.py:110 +msgid "Username" +msgstr "Nom d'utilisateur" + +#: main/models/credential.py:111 +msgid "Username for this credential." +msgstr "Nom d'utilisateur pour ces informations d'identification." + +#: main/models/credential.py:117 +msgid "Password" +msgstr "Mot de passe" + +#: main/models/credential.py:118 +msgid "" +"Password for this credential (or \"ASK\" to prompt the user for machine " +"credentials)." +msgstr "" +"Mot de passe pour ces informations d'identification (ou \"ASK\" pour " +"demander à l'utilisateur les informations d'identification de la machine)." + +#: main/models/credential.py:125 +msgid "Security Token" +msgstr "Token de sécurité" + +#: main/models/credential.py:126 +msgid "Security Token for this credential" +msgstr "Token de sécurité pour ces informations d'identification" + +#: main/models/credential.py:132 +msgid "Project" +msgstr "Projet" + +#: main/models/credential.py:133 +msgid "The identifier for the project." +msgstr "Identifiant du projet." + +#: main/models/credential.py:139 +msgid "Domain" +msgstr "Domaine" + +#: main/models/credential.py:140 +msgid "The identifier for the domain." +msgstr "Identifiant du domaine." + +#: main/models/credential.py:145 +msgid "SSH private key" +msgstr "Clé privée SSH" + +#: main/models/credential.py:146 +msgid "RSA or DSA private key to be used instead of password." +msgstr "Clé privée RSA ou DSA à utiliser au lieu du mot de passe." 
+ +#: main/models/credential.py:152 +msgid "SSH key unlock" +msgstr "Déverrouillage de la clé SSH" + +#: main/models/credential.py:153 +msgid "" +"Passphrase to unlock SSH private key if encrypted (or \"ASK\" to prompt the " +"user for machine credentials)." +msgstr "" +"Phrase de passe servant à déverrouiller la clé privée SSH si elle est " +"chiffrée (ou \"ASK\" pour demander à l'utilisateur les informations " +"d'identification de la machine)." + +#: main/models/credential.py:161 +msgid "Privilege escalation method." +msgstr "Méthode d'élévation des privilèges." + +#: main/models/credential.py:167 +msgid "Privilege escalation username." +msgstr "Nom d'utilisateur pour l'élévation des privilèges." + +#: main/models/credential.py:173 +msgid "Password for privilege escalation method." +msgstr "Mot de passe pour la méthode d'élévation des privilèges." + +#: main/models/credential.py:179 +msgid "Vault password (or \"ASK\" to prompt the user)." +msgstr "Mot de passe Vault (ou \"ASK\" pour le demander à l'utilisateur)." + +#: main/models/credential.py:183 +msgid "Whether to use the authorize mechanism." +msgstr "Indique s'il faut ou non utiliser le mécanisme d'autorisation." + +#: main/models/credential.py:189 +msgid "Password used by the authorize mechanism." +msgstr "Mot de passe utilisé par le mécanisme d'autorisation." 
+ +#: main/models/credential.py:195 +msgid "Client Id or Application Id for the credential" +msgstr "" +"ID du client ou de l'application pour les informations d'identification" + +#: main/models/credential.py:201 +msgid "Secret Token for this credential" +msgstr "Token secret pour ces informations d'identification" + +#: main/models/credential.py:207 +msgid "Subscription identifier for this credential" +msgstr "ID d'abonnement pour ces informations d'identification" + +#: main/models/credential.py:213 +msgid "Tenant identifier for this credential" +msgstr "ID de tenant pour ces informations d'identification" + +#: main/models/credential.py:283 +msgid "Host required for VMware credential." +msgstr "Hôte requis pour les informations d'identification VMware." + +#: main/models/credential.py:285 +msgid "Host required for OpenStack credential." +msgstr "Hôte requis pour les informations d'identification OpenStack." + +#: main/models/credential.py:294 +msgid "Access key required for AWS credential." +msgstr "Clé d'accès requise pour les informations d'identification AWS." + +#: main/models/credential.py:296 +msgid "Username required for Rackspace credential." +msgstr "" +"Nom d'utilisateur requis pour les informations d'identification Rackspace." + +#: main/models/credential.py:299 +msgid "Username required for VMware credential." +msgstr "" +"Nom d'utilisateur requis pour les informations d'identification VMware." + +#: main/models/credential.py:301 +msgid "Username required for OpenStack credential." +msgstr "" +"Nom d'utilisateur requis pour les informations d'identification OpenStack." + +#: main/models/credential.py:307 +msgid "Secret key required for AWS credential." +msgstr "Clé secrète requise pour les informations d'identification AWS." + +#: main/models/credential.py:309 +msgid "API key required for Rackspace credential." +msgstr "Clé API requise pour les informations d'identification Rackspace." 
+ +#: main/models/credential.py:311 +msgid "Password required for VMware credential." +msgstr "Mot de passe requis pour les informations d'identification VMware." + +#: main/models/credential.py:313 +msgid "Password or API key required for OpenStack credential." +msgstr "" +"Mot de passe ou clé API requis(e) pour les informations d'identification " +"OpenStack." + +#: main/models/credential.py:319 +msgid "Project name required for OpenStack credential." +msgstr "" +"Nom de projet requis pour les informations d'identification OpenStack." + +#: main/models/credential.py:346 +msgid "SSH key unlock must be set when SSH key is encrypted." +msgstr "" +"Le déverrouillage de la clé SSH doit être défini lorsque la clé SSH est " +"chiffrée." + +#: main/models/credential.py:352 +msgid "Credential cannot be assigned to both a user and team." +msgstr "" +"Les informations d'identification ne peuvent pas être attribuées à la fois à" +" un utilisateur et une équipe." + +#: main/models/fact.py:21 +msgid "Host for the facts that the fact scan captured." +msgstr "Hôte pour les faits que le scan de faits a capturés." + +#: main/models/fact.py:26 +msgid "Date and time of the corresponding fact scan gathering time." +msgstr "" +"Date et heure du scan de faits correspondant au moment de la collecte des " +"faits." + +#: main/models/fact.py:29 +msgid "" +"Arbitrary JSON structure of module facts captured at timestamp for a single " +"host." +msgstr "" +"Structure JSON arbitraire des faits de module capturés au moment de " +"l'horodatage pour un seul hôte." + +#: main/models/inventory.py:45 +msgid "inventories" +msgstr "inventaires" + +#: main/models/inventory.py:52 +msgid "Organization containing this inventory." +msgstr "Organisation contenant cet inventaire." + +#: main/models/inventory.py:58 +msgid "Inventory variables in JSON or YAML format." +msgstr "Variables d'inventaire au format JSON ou YAML." 
+ +#: main/models/inventory.py:63 +msgid "Flag indicating whether any hosts in this inventory have failed." +msgstr "Marqueur indiquant si les hôtes de cet inventaire ont échoué." + +#: main/models/inventory.py:68 +msgid "Total number of hosts in this inventory." +msgstr "Nombre total d'hôtes dans cet inventaire." + +#: main/models/inventory.py:73 +msgid "Number of hosts in this inventory with active failures." +msgstr "Nombre d'hôtes dans cet inventaire avec des échecs non résolus." + +#: main/models/inventory.py:78 +msgid "Total number of groups in this inventory." +msgstr "Nombre total de groupes dans cet inventaire." + +#: main/models/inventory.py:83 +msgid "Number of groups in this inventory with active failures." +msgstr "Nombre de groupes dans cet inventaire avec des échecs non résolus." + +#: main/models/inventory.py:88 +msgid "" +"Flag indicating whether this inventory has any external inventory sources." +msgstr "" +"Marqueur indiquant si cet inventaire contient des sources d'inventaire " +"externes." + +#: main/models/inventory.py:93 +msgid "" +"Total number of external inventory sources configured within this inventory." +msgstr "" +"Nombre total de sources d'inventaire externes configurées dans cet " +"inventaire." + +#: main/models/inventory.py:98 +msgid "Number of external inventory sources in this inventory with failures." +msgstr "" +"Nombre total de sources d'inventaire externes en échec dans cet inventaire." + +#: main/models/inventory.py:339 +msgid "Is this host online and available for running jobs?" +msgstr "Cet hôte est-il en ligne et disponible pour exécuter des tâches ?" + +#: main/models/inventory.py:345 +msgid "" +"The value used by the remote inventory source to uniquely identify the host" +msgstr "" +"Valeur utilisée par la source d'inventaire distante pour identifier l'hôte " +"de façon unique" + +#: main/models/inventory.py:350 +msgid "Host variables in JSON or YAML format." +msgstr "Variables d'hôte au format JSON ou YAML." 
+ +#: main/models/inventory.py:372 +msgid "Flag indicating whether the last job failed for this host." +msgstr "Marqueur indiquant si la dernière tâche a échoué pour cet hôte." + +#: main/models/inventory.py:377 +msgid "" +"Flag indicating whether this host was created/updated from any external " +"inventory sources." +msgstr "" +"Marqueur indiquant si cet hôte a été créé/mis à jour à partir de sources " +"d'inventaire externes." + +#: main/models/inventory.py:383 +msgid "Inventory source(s) that created or modified this host." +msgstr "Sources d'inventaire qui ont créé ou modifié cet hôte." + +#: main/models/inventory.py:474 +msgid "Group variables in JSON or YAML format." +msgstr "Variables de groupe au format JSON ou YAML." + +#: main/models/inventory.py:480 +msgid "Hosts associated directly with this group." +msgstr "Hôtes associés directement à ce groupe." + +#: main/models/inventory.py:485 +msgid "Total number of hosts directly or indirectly in this group." +msgstr "" +"Nombre total d'hôtes associés directement ou indirectement à ce groupe." + +#: main/models/inventory.py:490 +msgid "Flag indicating whether this group has any hosts with active failures." +msgstr "" +"Marqueur indiquant si ce groupe possède ou non des hôtes avec des échecs non" +" résolus." + +#: main/models/inventory.py:495 +msgid "Number of hosts in this group with active failures." +msgstr "Nombre d'hôtes dans ce groupe avec des échecs non résolus." + +#: main/models/inventory.py:500 +msgid "Total number of child groups contained within this group." +msgstr "Nombre total de groupes enfants compris dans ce groupe." + +#: main/models/inventory.py:505 +msgid "Number of child groups within this group that have active failures." +msgstr "Nombre de groupes enfants dans ce groupe avec des échecs non résolus." + +#: main/models/inventory.py:510 +msgid "" +"Flag indicating whether this group was created/updated from any external " +"inventory sources." 
+msgstr "" +"Marqueur indiquant si ce groupe a été créé/mis à jour à partir de sources " +"d'inventaire externes." + +#: main/models/inventory.py:516 +msgid "Inventory source(s) that created or modified this group." +msgstr "Sources d'inventaire qui ont créé ou modifié ce groupe." + +#: main/models/inventory.py:706 main/models/projects.py:42 +#: main/models/unified_jobs.py:402 +msgid "Manual" +msgstr "Manuel" + +#: main/models/inventory.py:707 +msgid "Local File, Directory or Script" +msgstr "Fichier local, répertoire ou script" + +#: main/models/inventory.py:708 +msgid "Rackspace Cloud Servers" +msgstr "Serveurs cloud Rackspace" + +#: main/models/inventory.py:709 +msgid "Amazon EC2" +msgstr "Amazon EC2" + +#: main/models/inventory.py:717 +msgid "Custom Script" +msgstr "Script personnalisé" + +#: main/models/inventory.py:828 +msgid "Inventory source variables in YAML or JSON format." +msgstr "Variables de source d'inventaire au format JSON ou YAML." + +#: main/models/inventory.py:847 +msgid "" +"Comma-separated list of filter expressions (EC2 only). Hosts are imported " +"when ANY of the filters match." +msgstr "" +"Liste d'expressions de filtre séparées par des virgules (EC2 uniquement). " +"Les hôtes sont importés lorsque l'UN des filtres correspondent." + +#: main/models/inventory.py:853 +msgid "Limit groups automatically created from inventory source (EC2 only)." +msgstr "" +"Limiter automatiquement les groupes créés à partir de la source d'inventaire" +" (EC2 uniquement)." + +#: main/models/inventory.py:857 +msgid "Overwrite local groups and hosts from remote inventory source." +msgstr "" +"Écraser les groupes locaux et les hôtes de la source d'inventaire distante." + +#: main/models/inventory.py:861 +msgid "Overwrite local variables from remote inventory source." +msgstr "Écraser les variables locales de la source d'inventaire distante." 
+ +#: main/models/inventory.py:893 +msgid "Availability Zone" +msgstr "Zone de disponibilité" + +#: main/models/inventory.py:894 +msgid "Image ID" +msgstr "ID d'image" + +#: main/models/inventory.py:895 +msgid "Instance ID" +msgstr "ID d'instance" + +#: main/models/inventory.py:896 +msgid "Instance Type" +msgstr "Type d'instance" + +#: main/models/inventory.py:897 +msgid "Key Name" +msgstr "Nom de la clé" + +#: main/models/inventory.py:898 +msgid "Region" +msgstr "Région" + +#: main/models/inventory.py:899 +msgid "Security Group" +msgstr "Groupe de sécurité" + +#: main/models/inventory.py:900 +msgid "Tags" +msgstr "Balises" + +#: main/models/inventory.py:901 +msgid "VPC ID" +msgstr "ID VPC" + +#: main/models/inventory.py:902 +msgid "Tag None" +msgstr "Aucune balise" + +#: main/models/inventory.py:973 +#, python-format +msgid "" +"Cloud-based inventory sources (such as %s) require credentials for the " +"matching cloud service." +msgstr "" +"Les sources d'inventaire cloud (telles que %s) requièrent des informations " +"d'identification pour le service cloud correspondant." + +#: main/models/inventory.py:980 +msgid "Credential is required for a cloud source." +msgstr "" +"Les informations d'identification sont requises pour une source cloud." + +#: main/models/inventory.py:1005 +#, python-format +msgid "Invalid %(source)s region: %(region)s" +msgstr "Région %(source)s non valide : %(region)s" + +#: main/models/inventory.py:1030 +#, python-format +msgid "Invalid filter expression: %(filter)s" +msgstr "Expression de filtre non valide : %(filter)s" + +#: main/models/inventory.py:1048 +#, python-format +msgid "Invalid group by choice: %(choice)s" +msgstr "Choix de regroupement non valide : %(choice)s" + +#: main/models/inventory.py:1195 +#, python-format +msgid "" +"Unable to configure this item for cloud sync. It is already managed by %s." +msgstr "" +"Impossible de configurer cet élément pour la synchronisation dans le cloud. " +"Il est déjà géré par %s." 
+ +#: main/models/inventory.py:1290 +msgid "Inventory script contents" +msgstr "Contenu du script d'inventaire" + +#: main/models/inventory.py:1295 +msgid "Organization owning this inventory script" +msgstr "Organisation propriétaire de ce script d'inventaire" + +#: main/models/jobs.py:169 +msgid "You must provide a network credential." +msgstr "Vous devez fournir des informations d'identification réseau." + +#: main/models/jobs.py:177 +msgid "" +"Must provide a credential for a cloud provider, such as Amazon Web Services " +"or Rackspace." +msgstr "" +"Vous devez fournir des informations d'identification pour un fournisseur de " +"services cloud, comme Amazon Web Services ou Rackspace." + +#: main/models/jobs.py:269 +msgid "Job Template must provide 'inventory' or allow prompting for it." +msgstr "" +"Le modèle de tâche doit fournir un inventaire ou permettre d'en demander un." + +#: main/models/jobs.py:273 +msgid "Job Template must provide 'credential' or allow prompting for it." +msgstr "" +"Le modèle de tâche doit fournir des informations d'identification ou " +"permettre d'en demander." + +#: main/models/jobs.py:362 +msgid "Cannot override job_type to or from a scan job." +msgstr "Impossible de remplacer job_type vers ou depuis une tâche de scan." + +#: main/models/jobs.py:365 +msgid "Inventory cannot be changed at runtime for scan jobs." +msgstr "" +"L'inventaire ne peut pas être modifié à l'exécution pour les tâches de scan." 
+ +#: main/models/jobs.py:431 main/models/projects.py:243 +msgid "SCM Revision" +msgstr "Révision SCM" + +#: main/models/jobs.py:432 +msgid "The SCM Revision from the Project used for this job, if available" +msgstr "Révision SCM du projet utilisé pour cette tâche, le cas échéant" + +#: main/models/jobs.py:440 +msgid "" +"The SCM Refresh task used to make sure the playbooks were available for the " +"job run" +msgstr "" +"Activité d'actualisation du SCM qui permet de s'assurer que les playbooks " +"étaient disponibles pour l'exécution de la tâche" + +#: main/models/jobs.py:662 +msgid "job host summaries" +msgstr "récapitulatifs des hôtes pour la tâche" + +#: main/models/jobs.py:765 +msgid "Host Failure" +msgstr "Échec de l'hôte" + +#: main/models/jobs.py:768 main/models/jobs.py:782 +msgid "No Hosts Remaining" +msgstr "Aucun hôte restant" + +#: main/models/jobs.py:769 +msgid "Host Polling" +msgstr "Interrogation de l'hôte" + +#: main/models/jobs.py:770 +msgid "Host Async OK" +msgstr "Hôte asynchrone OK" + +#: main/models/jobs.py:771 +msgid "Host Async Failure" +msgstr "Échec d'hôte asynchrone" + +#: main/models/jobs.py:772 +msgid "Item OK" +msgstr "Élément OK" + +#: main/models/jobs.py:773 +msgid "Item Failed" +msgstr "Échec de l'élément" + +#: main/models/jobs.py:774 +msgid "Item Skipped" +msgstr "Élément ignoré" + +#: main/models/jobs.py:775 +msgid "Host Retry" +msgstr "Nouvel essai de l'hôte" + +#: main/models/jobs.py:777 +msgid "File Difference" +msgstr "Écart entre les fichiers" + +#: main/models/jobs.py:778 +msgid "Playbook Started" +msgstr "Playbook démarré" + +#: main/models/jobs.py:779 +msgid "Running Handlers" +msgstr "Exécution des gestionnaires" + +#: main/models/jobs.py:780 +msgid "Including File" +msgstr "Inclusion de fichier" + +#: main/models/jobs.py:781 +msgid "No Hosts Matched" +msgstr "Aucun hôte correspondant" + +#: main/models/jobs.py:783 +msgid "Task Started" +msgstr "Tâche démarrée" + +#: main/models/jobs.py:785 +msgid 
"Variables Prompted" +msgstr "Variables demandées" + +#: main/models/jobs.py:786 +msgid "Gathering Facts" +msgstr "Collecte des faits" + +#: main/models/jobs.py:787 +msgid "internal: on Import for Host" +msgstr "interne : à l'importation pour l'hôte" + +#: main/models/jobs.py:788 +msgid "internal: on Not Import for Host" +msgstr "interne : à la non-importation pour l'hôte" + +#: main/models/jobs.py:789 +msgid "Play Started" +msgstr "Scène démarrée" + +#: main/models/jobs.py:790 +msgid "Playbook Complete" +msgstr "Playbook terminé" + +#: main/models/jobs.py:1200 +msgid "Remove jobs older than a certain number of days" +msgstr "Supprimer les tâches plus anciennes qu'un certain nombre de jours" + +#: main/models/jobs.py:1201 +msgid "Remove activity stream entries older than a certain number of days" +msgstr "" +"Supprimer les entrées du flux d'activité plus anciennes qu'un certain nombre" +" de jours" + +#: main/models/jobs.py:1202 +msgid "Purge and/or reduce the granularity of system tracking data" +msgstr "Purger et/ou réduire la granularité des données de suivi du système" + +#: main/models/label.py:29 +msgid "Organization this label belongs to." +msgstr "Organisation à laquelle appartient ce libellé." 
+ +#: main/models/notifications.py:31 +msgid "Email" +msgstr "Email" + +#: main/models/notifications.py:32 +msgid "Slack" +msgstr "Slack" + +#: main/models/notifications.py:33 +msgid "Twilio" +msgstr "Twilio" + +#: main/models/notifications.py:34 +msgid "Pagerduty" +msgstr "Pagerduty" + +#: main/models/notifications.py:35 +msgid "HipChat" +msgstr "HipChat" + +#: main/models/notifications.py:36 +msgid "Webhook" +msgstr "Webhook" + +#: main/models/notifications.py:37 +msgid "IRC" +msgstr "IRC" + +#: main/models/notifications.py:127 main/models/unified_jobs.py:59 +msgid "Pending" +msgstr "En attente" + +#: main/models/notifications.py:128 main/models/unified_jobs.py:62 +msgid "Successful" +msgstr "Réussi" + +#: main/models/notifications.py:129 main/models/unified_jobs.py:63 +msgid "Failed" +msgstr "Échec" + +#: main/models/organization.py:157 +msgid "Execute Commands on the Inventory" +msgstr "Exécuter des commandes sur l'inventaire" + +#: main/models/organization.py:211 +msgid "Token not invalidated" +msgstr "Token non invalidé" + +#: main/models/organization.py:212 +msgid "Token is expired" +msgstr "Token arrivé à expiration" + +#: main/models/organization.py:213 +msgid "" +"The maximum number of allowed sessions for this user has been exceeded." +msgstr "" +"Le nombre maximal de sessions autorisées pour cet utilisateur a été dépassé." + +#: main/models/organization.py:216 +msgid "Invalid token" +msgstr "Token non valide" + +#: main/models/organization.py:233 +msgid "Reason the auth token was invalidated." +msgstr "" +"Raison pour laquelle le token d'authentification a été rendu non valide." 
+ +#: main/models/organization.py:272 +msgid "Invalid reason specified" +msgstr "Raison spécifiée non valide" + +#: main/models/projects.py:43 +msgid "Git" +msgstr "Git" + +#: main/models/projects.py:44 +msgid "Mercurial" +msgstr "Mercurial" + +#: main/models/projects.py:45 +msgid "Subversion" +msgstr "Subversion" + +#: main/models/projects.py:71 +msgid "" +"Local path (relative to PROJECTS_ROOT) containing playbooks and related " +"files for this project." +msgstr "" +"Chemin local (relatif à PROJECTS_ROOT) contenant des playbooks et des " +"fichiers associés pour ce projet." + +#: main/models/projects.py:80 +msgid "SCM Type" +msgstr "Type de SCM" + +#: main/models/projects.py:81 +msgid "Specifies the source control system used to store the project." +msgstr "" +"Spécifie le système de contrôle des sources utilisé pour stocker le projet." + +#: main/models/projects.py:87 +msgid "SCM URL" +msgstr "URL du SCM" + +#: main/models/projects.py:88 +msgid "The location where the project is stored." +msgstr "Emplacement où le projet est stocké." + +#: main/models/projects.py:94 +msgid "SCM Branch" +msgstr "Branche SCM" + +#: main/models/projects.py:95 +msgid "Specific branch, tag or commit to checkout." +msgstr "Branche, tag ou commit spécifique à extraire." + +#: main/models/projects.py:99 +msgid "Discard any local changes before syncing the project." +msgstr "Ignorez les modifications locales avant de synchroniser le projet." + +#: main/models/projects.py:103 +msgid "Delete the project before syncing." +msgstr "Supprimez le projet avant la synchronisation." + +#: main/models/projects.py:116 +msgid "The amount of time to run before the task is canceled." +msgstr "Délai écoulé avant que la tâche ne soit annulée." + +#: main/models/projects.py:130 +msgid "Invalid SCM URL." +msgstr "URL du SCM incorrecte." + +#: main/models/projects.py:133 +msgid "SCM URL is required." +msgstr "L'URL du SCM est requise." 
+ +#: main/models/projects.py:142 +msgid "Credential kind must be 'scm'." +msgstr "Le type d'informations d'identification doit être 'scm'." + +#: main/models/projects.py:157 +msgid "Invalid credential." +msgstr "Informations d'identification non valides." + +#: main/models/projects.py:229 +msgid "Update the project when a job is launched that uses the project." +msgstr "Mettez à jour le projet lorsqu'une tâche qui l'utilise est lancée." + +#: main/models/projects.py:234 +msgid "" +"The number of seconds after the last project update ran that a newproject " +"update will be launched as a job dependency." +msgstr "" +"Délai écoulé (en secondes) entre la dernière mise à jour du projet et le " +"lancement d'une nouvelle mise à jour en tant que dépendance de la tâche." + +#: main/models/projects.py:244 +msgid "The last revision fetched by a project update" +msgstr "Dernière révision récupérée par une mise à jour du projet" + +#: main/models/projects.py:251 +msgid "Playbook Files" +msgstr "Fichiers de playbook" + +#: main/models/projects.py:252 +msgid "List of playbooks found in the project" +msgstr "Liste des playbooks trouvés dans le projet" + +#: main/models/rbac.py:122 +msgid "roles" +msgstr "rôles" + +#: main/models/rbac.py:438 +msgid "role_ancestors" +msgstr "role_ancestors" + +#: main/models/schedules.py:69 +msgid "Enables processing of this schedule by Tower." +msgstr "Active le traitement de ce calendrier par Tower." + +#: main/models/schedules.py:75 +msgid "The first occurrence of the schedule occurs on or after this time." +msgstr "" +"La première occurrence du calendrier se produit à ce moment précis ou " +"ultérieurement." + +#: main/models/schedules.py:81 +msgid "" +"The last occurrence of the schedule occurs before this time, aftewards the " +"schedule expires." +msgstr "" +"La dernière occurrence du calendrier se produit avant ce moment précis. " +"Passé ce délai, le calendrier arrive à expiration." 
+ +#: main/models/schedules.py:85 +msgid "A value representing the schedules iCal recurrence rule." +msgstr "Valeur représentant la règle de récurrence iCal des calendriers." + +#: main/models/schedules.py:91 +msgid "The next time that the scheduled action will run." +msgstr "La prochaine fois que l'action planifiée s'exécutera." + +#: main/models/unified_jobs.py:58 +msgid "New" +msgstr "Nouveau" + +#: main/models/unified_jobs.py:60 +msgid "Waiting" +msgstr "En attente" + +#: main/models/unified_jobs.py:61 +msgid "Running" +msgstr "En cours d'exécution" + +#: main/models/unified_jobs.py:65 +msgid "Canceled" +msgstr "Annulé" + +#: main/models/unified_jobs.py:69 +msgid "Never Updated" +msgstr "Jamais mis à jour" + +#: main/models/unified_jobs.py:73 ui/templates/ui/index.html:85 +#: ui/templates/ui/index.html.py:104 +msgid "OK" +msgstr "OK" + +#: main/models/unified_jobs.py:74 +msgid "Missing" +msgstr "Manquant" + +#: main/models/unified_jobs.py:78 +msgid "No External Source" +msgstr "Aucune source externe" + +#: main/models/unified_jobs.py:85 +msgid "Updating" +msgstr "Mise à jour en cours" + +#: main/models/unified_jobs.py:403 +msgid "Relaunch" +msgstr "Relancer" + +#: main/models/unified_jobs.py:404 +msgid "Callback" +msgstr "Rappel" + +#: main/models/unified_jobs.py:405 +msgid "Scheduled" +msgstr "Planifié" + +#: main/models/unified_jobs.py:406 +msgid "Dependency" +msgstr "Dépendance" + +#: main/models/unified_jobs.py:407 +msgid "Workflow" +msgstr "Workflow" + +#: main/models/unified_jobs.py:408 +msgid "Sync" +msgstr "Synchronisation" + +#: main/models/unified_jobs.py:454 +msgid "The Tower node the job executed on." +msgstr "Nœud Tower sur lequel la tâche s'est exécutée." + +#: main/models/unified_jobs.py:480 +msgid "The date and time the job was queued for starting." +msgstr "" +"Date et heure auxquelles la tâche a été mise en file d'attente pour le " +"démarrage." + +#: main/models/unified_jobs.py:486 +msgid "The date and time the job finished execution." 
+msgstr "Date et heure de fin d'exécution de la tâche." + +#: main/models/unified_jobs.py:492 +msgid "Elapsed time in seconds that the job ran." +msgstr "Délai écoulé (en secondes) pendant lequel la tâche s'est exécutée." + +#: main/models/unified_jobs.py:514 +msgid "" +"A status field to indicate the state of the job if it wasn't able to run and" +" capture stdout" +msgstr "" +"Champ d'état indiquant l'état de la tâche si elle n'a pas pu s'exécuter et " +"capturer stdout" + +#: main/notifications/base.py:17 main/notifications/email_backend.py:28 +msgid "" +"{} #{} had status {} on Ansible Tower, view details at {}\n" +"\n" +msgstr "" +"{} #{} était à l'état {} sur Ansible Tower, voir les détails sur {}\n" +"\n" + +#: main/notifications/hipchat_backend.py:46 +msgid "Error sending messages: {}" +msgstr "Erreur lors de l'envoi de messages : {}" + +#: main/notifications/hipchat_backend.py:48 +msgid "Error sending message to hipchat: {}" +msgstr "Erreur lors de l'envoi d'un message à hipchat : {}" + +#: main/notifications/irc_backend.py:54 +msgid "Exception connecting to irc server: {}" +msgstr "Exception lors de la connexion au serveur irc : {}" + +#: main/notifications/pagerduty_backend.py:39 +msgid "Exception connecting to PagerDuty: {}" +msgstr "Exception lors de la connexion à PagerDuty : {}" + +#: main/notifications/pagerduty_backend.py:48 +#: main/notifications/slack_backend.py:52 +#: main/notifications/twilio_backend.py:46 +msgid "Exception sending messages: {}" +msgstr "Exception lors de l'envoi de messages : {}" + +#: main/notifications/twilio_backend.py:36 +msgid "Exception connecting to Twilio: {}" +msgstr "Exception lors de la connexion à Twilio : {}" + +#: main/notifications/webhook_backend.py:38 +#: main/notifications/webhook_backend.py:40 +msgid "Error sending notification webhook: {}" +msgstr "Erreur lors de l'envoi d'un webhook de notification : {}" + +#: main/scheduler/__init__.py:130 +msgid "" +"Job spawned from workflow could not start because it 
was not in the right " +"state or required manual credentials" +msgstr "" +"La tâche générée à partir du workflow n'a pas pu démarrer car elle n'était " +"pas dans le bon état ou nécessitait des informations d'identification " +"manuelles" + +#: main/tasks.py:180 +msgid "Ansible Tower host usage over 90%" +msgstr "Utilisation d'hôtes Ansible Tower supérieure à 90 %" + +#: main/tasks.py:185 +msgid "Ansible Tower license will expire soon" +msgstr "La licence Ansible Tower expirera bientôt" + +#: main/tasks.py:240 +msgid "status_str must be either succeeded or failed" +msgstr "status_str doit être une réussite ou un échec" + +#: main/utils/common.py:89 +#, python-format +msgid "Unable to convert \"%s\" to boolean" +msgstr "Impossible de convertir \"%s\" en booléen" + +#: main/utils/common.py:243 +#, python-format +msgid "Unsupported SCM type \"%s\"" +msgstr "Type de SCM \"%s\" non pris en charge" + +#: main/utils/common.py:250 main/utils/common.py:262 main/utils/common.py:281 +#, python-format +msgid "Invalid %s URL" +msgstr "URL %s non valide" + +#: main/utils/common.py:252 main/utils/common.py:290 +#, python-format +msgid "Unsupported %s URL" +msgstr "URL %s non prise en charge" + +#: main/utils/common.py:292 +#, python-format +msgid "Unsupported host \"%s\" for file:// URL" +msgstr "Hôte \"%s\" non pris en charge pour l'URL file://" + +#: main/utils/common.py:294 +#, python-format +msgid "Host is required for %s URL" +msgstr "L'hôte est requis pour l'URL %s" + +#: main/utils/common.py:312 +#, python-format +msgid "Username must be \"git\" for SSH access to %s." +msgstr "Le nom d'utilisateur doit être \"git\" pour l'accès SSH à %s." + +#: main/utils/common.py:318 +#, python-format +msgid "Username must be \"hg\" for SSH access to %s." +msgstr "Le nom d'utilisateur doit être \"hg\" pour l'accès SSH à %s." + +#: main/validators.py:60 +#, python-format +msgid "Invalid certificate or key: %r..." +msgstr "Certificat ou clé non valide : %r..." 
+ +#: main/validators.py:74 +#, python-format +msgid "Invalid private key: unsupported type \"%s\"" +msgstr "Clé privée non valide : type \"%s\" non pris en charge" + +#: main/validators.py:78 +#, python-format +msgid "Unsupported PEM object type: \"%s\"" +msgstr "Type d'objet PEM non pris en charge : \"%s\"" + +#: main/validators.py:103 +msgid "Invalid base64-encoded data" +msgstr "Données codées en base64 non valides" + +#: main/validators.py:122 +msgid "Exactly one private key is required." +msgstr "Une clé privée uniquement est nécessaire." + +#: main/validators.py:124 +msgid "At least one private key is required." +msgstr "Une clé privée au moins est nécessaire." + +#: main/validators.py:126 +#, python-format +msgid "" +"At least %(min_keys)d private keys are required, only %(key_count)d " +"provided." +msgstr "" +"%(min_keys)d clés privées au moins sont requises, mais %(key_count)d " +"uniquement ont été fournies." + +#: main/validators.py:129 +#, python-format +msgid "Only one private key is allowed, %(key_count)d provided." +msgstr "Une seule clé privée est autorisée, %(key_count)d ont été fournies." + +#: main/validators.py:131 +#, python-format +msgid "" +"No more than %(max_keys)d private keys are allowed, %(key_count)d provided." +msgstr "" +"Pas plus de %(max_keys)d clés privées sont autorisées, %(key_count)d ont été" +" fournies." + +#: main/validators.py:136 +msgid "Exactly one certificate is required." +msgstr "Un certificat uniquement est nécessaire." + +#: main/validators.py:138 +msgid "At least one certificate is required." +msgstr "Un certificat au moins est nécessaire." + +#: main/validators.py:140 +#, python-format +msgid "" +"At least %(min_certs)d certificates are required, only %(cert_count)d " +"provided." +msgstr "" +"%(min_certs)d certificats au moins sont requis, mais %(cert_count)d " +"uniquement ont été fournis." + +#: main/validators.py:143 +#, python-format +msgid "Only one certificate is allowed, %(cert_count)d provided." 
+msgstr "Un seul certificat est autorisé, %(cert_count)d ont été fournis." + +#: main/validators.py:145 +#, python-format +msgid "" +"No more than %(max_certs)d certificates are allowed, %(cert_count)d " +"provided." +msgstr "" +"Pas plus de %(max_certs)d certificats sont autorisés, %(cert_count)d ont été" +" fournis." + +#: main/views.py:20 +msgid "API Error" +msgstr "Erreur API" + +#: main/views.py:49 +msgid "Bad Request" +msgstr "Requête incorrecte" + +#: main/views.py:50 +msgid "The request could not be understood by the server." +msgstr "La requête n'a pas pu être comprise par le serveur." + +#: main/views.py:57 +msgid "Forbidden" +msgstr "Interdiction" + +#: main/views.py:58 +msgid "You don't have permission to access the requested resource." +msgstr "Vous n'êtes pas autorisé à accéder à la ressource demandée." + +#: main/views.py:65 +msgid "Not Found" +msgstr "Introuvable" + +#: main/views.py:66 +msgid "The requested resource could not be found." +msgstr "Impossible de trouver la ressource demandée." + +#: main/views.py:73 +msgid "Server Error" +msgstr "Erreur serveur" + +#: main/views.py:74 +msgid "A server error has occurred." +msgstr "Une erreur serveur s'est produite." + +#: settings/defaults.py:611 +msgid "Chicago" +msgstr "Chicago" + +#: settings/defaults.py:612 +msgid "Dallas/Ft. Worth" +msgstr "Dallas/Ft. 
Worth" + +#: settings/defaults.py:613 +msgid "Northern Virginia" +msgstr "Virginie du Nord" + +#: settings/defaults.py:614 +msgid "London" +msgstr "Londres" + +#: settings/defaults.py:615 +msgid "Sydney" +msgstr "Sydney" + +#: settings/defaults.py:616 +msgid "Hong Kong" +msgstr "Hong Kong" + +#: settings/defaults.py:643 +msgid "US East (Northern Virginia)" +msgstr "Est des États-Unis (Virginie du Nord)" + +#: settings/defaults.py:644 +msgid "US East (Ohio)" +msgstr "Est des États-Unis (Ohio)" + +#: settings/defaults.py:645 +msgid "US West (Oregon)" +msgstr "Ouest des États-Unis (Oregon)" + +#: settings/defaults.py:646 +msgid "US West (Northern California)" +msgstr "Ouest des États-Unis (Nord de la Californie)" + +#: settings/defaults.py:647 +msgid "Canada (Central)" +msgstr "Canada (Centre)" + +#: settings/defaults.py:648 +msgid "EU (Frankfurt)" +msgstr "UE (Francfort)" + +#: settings/defaults.py:649 +msgid "EU (Ireland)" +msgstr "UE (Irlande)" + +#: settings/defaults.py:650 +msgid "EU (London)" +msgstr "UE (Londres)" + +#: settings/defaults.py:651 +msgid "Asia Pacific (Singapore)" +msgstr "Asie-Pacifique (Singapour)" + +#: settings/defaults.py:652 +msgid "Asia Pacific (Sydney)" +msgstr "Asie-Pacifique (Sydney)" + +#: settings/defaults.py:653 +msgid "Asia Pacific (Tokyo)" +msgstr "Asie-Pacifique (Tokyo)" + +#: settings/defaults.py:654 +msgid "Asia Pacific (Seoul)" +msgstr "Asie-Pacifique (Séoul)" + +#: settings/defaults.py:655 +msgid "Asia Pacific (Mumbai)" +msgstr "Asie-Pacifique (Mumbai)" + +#: settings/defaults.py:656 +msgid "South America (Sao Paulo)" +msgstr "Amérique du Sud (Sao Paulo)" + +#: settings/defaults.py:657 +msgid "US West (GovCloud)" +msgstr "Ouest des États-Unis (GovCloud)" + +#: settings/defaults.py:658 +msgid "China (Beijing)" +msgstr "Chine (Pékin)" + +#: settings/defaults.py:707 +msgid "US East (B)" +msgstr "Est des États-Unis (B)" + +#: settings/defaults.py:708 +msgid "US East (C)" +msgstr "Est des États-Unis (C)" + +#: settings/defaults.py:709 +msgid "US East (D)" 
+msgstr "Est des États-Unis (D)" + +#: settings/defaults.py:710 +msgid "US Central (A)" +msgstr "Centre des États-Unis (A)" + +#: settings/defaults.py:711 +msgid "US Central (B)" +msgstr "Centre des États-Unis (B)" + +#: settings/defaults.py:712 +msgid "US Central (C)" +msgstr "Centre des États-Unis (C)" + +#: settings/defaults.py:713 +msgid "US Central (F)" +msgstr "Centre des États-Unis (F)" + +#: settings/defaults.py:714 +msgid "Europe West (B)" +msgstr "Europe de l'Ouest (B)" + +#: settings/defaults.py:715 +msgid "Europe West (C)" +msgstr "Europe de l'Ouest (C)" + +#: settings/defaults.py:716 +msgid "Europe West (D)" +msgstr "Europe de l'Ouest (D)" + +#: settings/defaults.py:717 +msgid "Asia East (A)" +msgstr "Asie de l'Est (A)" + +#: settings/defaults.py:718 +msgid "Asia East (B)" +msgstr "Asie de l'Est (B)" + +#: settings/defaults.py:719 +msgid "Asia East (C)" +msgstr "Asie de l'Est (C)" + +#: settings/defaults.py:743 +msgid "US Central" +msgstr "Centre des États-Unis" + +#: settings/defaults.py:744 +msgid "US East" +msgstr "Est des États-Unis" + +#: settings/defaults.py:745 +msgid "US East 2" +msgstr "Est des États-Unis 2" + +#: settings/defaults.py:746 +msgid "US North Central" +msgstr "Centre-Nord des États-Unis" + +#: settings/defaults.py:747 +msgid "US South Central" +msgstr "Centre-Sud des États-Unis" + +#: settings/defaults.py:748 +msgid "US West" +msgstr "Ouest des États-Unis" + +#: settings/defaults.py:749 +msgid "Europe North" +msgstr "Europe du Nord" + +#: settings/defaults.py:750 +msgid "Europe West" +msgstr "Europe de l'Ouest" + +#: settings/defaults.py:751 +msgid "Asia Pacific East" +msgstr "Asie-Pacifique Est" + +#: settings/defaults.py:752 +msgid "Asia Pacific Southeast" +msgstr "Asie-Pacifique Sud-Est" + +#: settings/defaults.py:753 +msgid "Japan East" +msgstr "Est du Japon" + +#: settings/defaults.py:754 +msgid "Japan West" +msgstr "Ouest du Japon" + +#: settings/defaults.py:755 +msgid "Brazil South" +msgstr "Sud du Brésil" + +#: 
sso/apps.py:9 +msgid "Single Sign-On" +msgstr "Single Sign-On" + +#: sso/conf.py:27 +msgid "" +"Mapping to organization admins/users from social auth accounts. This setting\n" +"controls which users are placed into which Tower organizations based on\n" +"their username and email address. Dictionary keys are organization names.\n" +"organizations will be created if not present if the license allows for\n" +"multiple organizations, otherwise the single default organization is used\n" +"regardless of the key. Values are dictionaries defining the options for\n" +"each organization's membership. For each organization it is possible to\n" +"specify which users are automatically users of the organization and also\n" +"which users can administer the organization. \n" +"\n" +"- admins: None, True/False, string or list of strings.\n" +" If None, organization admins will not be updated.\n" +" If True, all users using social auth will automatically be added as admins\n" +" of the organization.\n" +" If False, no social auth users will be automatically added as admins of\n" +" the organization.\n" +" If a string or list of strings, specifies the usernames and emails for\n" +" users who will be added to the organization. Strings in the format\n" +" \"//\" will be interpreted as JavaScript regular expressions and\n" +" may also be used instead of string literals; only \"i\" and \"m\" are supported\n" +" for flags.\n" +"- remove_admins: True/False. Defaults to True.\n" +" If True, a user who does not match will be removed from the organization's\n" +" administrative list.\n" +"- users: None, True/False, string or list of strings. Same rules apply as for\n" +" admins.\n" +"- remove_users: True/False. Defaults to True. Same rules as apply for \n" +" remove_admins." +msgstr "" +"Mappage avec des administrateurs/utilisateurs d'organisation appartenant à des comptes d'authentification sociale. 
Ce paramètre\n" +"contrôle les utilisateurs qui sont placés dans les organisations Tower en fonction de\n" +"leur nom d'utilisateur et adresse électronique. Les clés de dictionnaire sont des noms d'organisation.\n" +"Des organisations seront créées si elles ne sont pas présentes dans le cas où la licence autoriserait\n" +"plusieurs organisations, sinon l'organisation par défaut est utilisée\n" +"indépendamment de la clé. Les valeurs sont des dictionnaires définissant les options\n" +"d'appartenance de chaque organisation. Pour chaque organisation, il est possible de\n" +"préciser les utilisateurs qui sont automatiquement utilisateurs de l'organisation et\n" +"ceux qui peuvent administrer l'organisation. \n" +"\n" +"- admins : None, True/False, chaîne ou liste de chaînes.\n" +" Si défini sur None, les administrateurs de l'organisation ne sont pas mis à jour.\n" +" Si défini sur True, tous les utilisateurs se servant de l'authentification sociale sont automatiquement ajoutés en tant qu'administrateurs\n" +" de l'organisation.\n" +" Si défini sur False, aucun utilisateur d'authentification sociale n'est automatiquement ajouté en tant qu'administrateur de\n" +" l'organisation.\n" +" Si une chaîne ou une liste de chaînes est entrée, elle spécifie les noms d'utilisateur et les adresses électroniques des\n" +" utilisateurs qui seront ajoutés à l'organisation. Les chaînes au format\n" +" \"//\" sont interprétées comme des expressions régulières JavaScript et\n" +" peuvent également être utilisées à la place de littéraux de chaîne ; seuls \"i\" et \"m\" sont pris en charge\n" +" pour les marqueurs.\n" +"- remove_admins : True/False. Par défaut défini sur True.\n" +" Si défini sur True, l'utilisateur qui ne correspond pas est supprimé de la liste administrative\n" +" de l'organisation.\n" +"- users : None, True/False, chaîne ou liste de chaînes. Les mêmes règles s'appliquent que pour\n" +" admins.\n" +"- remove_users : True/False. Par défaut défini sur True. 
Les mêmes règles s'appliquent que pour \n" +" remove_admins." + +#: sso/conf.py:76 +msgid "" +"Mapping of team members (users) from social auth accounts. Keys are team\n" +"names (will be created if not present). Values are dictionaries of options\n" +"for each team's membership, where each can contain the following parameters:\n" +"\n" +"- organization: string. The name of the organization to which the team\n" +" belongs. The team will be created if the combination of organization and\n" +" team name does not exist. The organization will first be created if it\n" +" does not exist. If the license does not allow for multiple organizations,\n" +" the team will always be assigned to the single default organization.\n" +"- users: None, True/False, string or list of strings.\n" +" If None, team members will not be updated.\n" +" If True/False, all social auth users will be added/removed as team\n" +" members.\n" +" If a string or list of strings, specifies expressions used to match users.\n" +" User will be added as a team member if the username or email matches.\n" +" Strings in the format \"//\" will be interpreted as JavaScript\n" +" regular expressions and may also be used instead of string literals; only \"i\"\n" +" and \"m\" are supported for flags.\n" +"- remove: True/False. Defaults to True. If True, a user who does not match\n" +" the rules above will be removed from the team." +msgstr "" +"Mappage des membres d'équipe (utilisateurs) de compte d'authentification sociale. Les clés sont des \n" +"noms d'équipe (seront créés s'ils ne sont pas présents). Les valeurs sont des dictionnaires d'options\n" +"d'appartenance à chaque équipe, où chacune peut contenir les paramètres suivants :\n" +"\n" +"-organization : chaîne. Nom de l'organisation à laquelle l'équipe\n" +" appartient. Une équipe est créée si la combinaison nom de l'organisation/nom de \n" +"l'équipe n'existe pas. L'organisation sera d'abord créée \n" +"si elle n'existe pas. 
Si la licence n'autorise pas plusieurs organisations, \n" +"l'équipe est toujours attribuée à l'organisation par défaut. \n" +"- users : None, True/False, chaîne ou liste de chaînes.\n" +" Si défini sur None, les membres de l'équipe ne sont pas mis à jour.\n" +" Si défini sur True/False, tous les utilisateurs d'authentification sociale sont ajoutés/supprimés en tant que membres\n" +" d'équipe.\n" +" Si une chaîne ou une liste de chaînes est entrée, elle spécifie les expressions utilisées pour comparer les utilisateurs. \n" +"L'utilisateur est ajouté en tant que membre d'équipe si son nom d'utilisateur ou son adresse électronique correspond.\n" +" Les chaînes au format \"//\" sont interprétées comme des expressions régulières\n" +" JavaScript et peuvent également être utilisées à la place de littéraux de chaîne ; seuls \"i\"\n" +" et \"m\" sont pris en charge pour les marqueurs.\n" +"- remove : True/False. Par défaut défini sur True. Si défini sur True, tout utilisateur qui ne correspond\n" +" pas aux règles ci-dessus est supprimé de l'équipe." + +#: sso/conf.py:119 +msgid "Authentication Backends" +msgstr "Backends d'authentification" + +#: sso/conf.py:120 +msgid "" +"List of authentication backends that are enabled based on license features " +"and other authentication settings." +msgstr "" +"Liste des backends d'authentification activés en fonction des " +"caractéristiques des licences et d'autres paramètres d'authentification." + +#: sso/conf.py:133 +msgid "Social Auth Organization Map" +msgstr "Authentification sociale - Mappage des organisations" + +#: sso/conf.py:145 +msgid "Social Auth Team Map" +msgstr "Authentification sociale - Mappage des équipes" + +#: sso/conf.py:157 +msgid "Social Auth User Fields" +msgstr "Authentification sociale - Champs d'utilisateurs" + +#: sso/conf.py:158 +msgid "" +"When set to an empty list `[]`, this setting prevents new user accounts from" +" being created. 
Only users who have previously logged in using social auth " +"or have a user account with a matching email address will be able to login." +msgstr "" +"Lorsqu'il est défini sur une liste vide `[]`, ce paramètre empêche la " +"création de nouveaux comptes d'utilisateur. Seuls les utilisateurs ayant " +"déjà ouvert une session au moyen de l'authentification sociale ou disposant " +"d'un compte utilisateur avec une adresse électronique correspondante " +"pourront se connecter." + +#: sso/conf.py:176 +msgid "LDAP Server URI" +msgstr "URI du serveur LDAP" + +#: sso/conf.py:177 +msgid "" +"URI to connect to LDAP server, such as \"ldap://ldap.example.com:389\" (non-" +"SSL) or \"ldaps://ldap.example.com:636\" (SSL). Multiple LDAP servers may be" +" specified by separating with spaces or commas. LDAP authentication is " +"disabled if this parameter is empty." +msgstr "" +"URI de connexion au serveur LDAP, tel que \"ldap://ldap.exemple.com:389\" " +"(non SSL) ou \"ldaps://ldap.exemple.com:636\" (SSL). Plusieurs serveurs LDAP" +" peuvent être définis en les séparant par des espaces ou des virgules. " +"L'authentification LDAP est désactivée si ce paramètre est vide." + +#: sso/conf.py:181 sso/conf.py:199 sso/conf.py:211 sso/conf.py:223 +#: sso/conf.py:239 sso/conf.py:258 sso/conf.py:280 sso/conf.py:296 +#: sso/conf.py:315 sso/conf.py:332 sso/conf.py:349 sso/conf.py:365 +#: sso/conf.py:382 sso/conf.py:420 sso/conf.py:461 +msgid "LDAP" +msgstr "LDAP" + +#: sso/conf.py:193 +msgid "LDAP Bind DN" +msgstr "ND de la liaison LDAP" + +#: sso/conf.py:194 +msgid "" +"DN (Distinguished Name) of user to bind for all search queries. Normally in " +"the format \"CN=Some User,OU=Users,DC=example,DC=com\" but may also be " +"specified as \"DOMAIN\\username\" for Active Directory. This is the system " +"user account we will use to login to query LDAP for other user information." +msgstr "" +"ND (nom distinctif) de l'utilisateur à lier pour toutes les requêtes de " +"recherche. 
Normalement, au format \"CN = Certains utilisateurs, OU = " +"Utilisateurs, DC = exemple, DC = com\" mais peut aussi être entré au format " +"\"DOMAINE\\nom d'utilisateur\" pour Active Directory. Il s'agit du compte " +"utilisateur système que nous utiliserons pour nous connecter afin " +"d'interroger LDAP et obtenir d'autres informations utilisateur." + +#: sso/conf.py:209 +msgid "LDAP Bind Password" +msgstr "Mot de passe de la liaison LDAP" + +#: sso/conf.py:210 +msgid "Password used to bind LDAP user account." +msgstr "Mot de passe utilisé pour lier le compte utilisateur LDAP." + +#: sso/conf.py:221 +msgid "LDAP Start TLS" +msgstr "LDAP - Lancer TLS" + +#: sso/conf.py:222 +msgid "Whether to enable TLS when the LDAP connection is not using SSL." +msgstr "Pour activer ou non TLS lorsque la connexion LDAP n'utilise pas SSL." + +#: sso/conf.py:232 +msgid "LDAP Connection Options" +msgstr "Options de connexion à LDAP" + +#: sso/conf.py:233 +msgid "" +"Additional options to set for the LDAP connection. LDAP referrals are " +"disabled by default (to prevent certain LDAP queries from hanging with AD). " +"Option names should be strings (e.g. \"OPT_REFERRALS\"). Refer to " +"https://www.python-ldap.org/doc/html/ldap.html#options for possible options " +"and values that can be set." +msgstr "" +"Options supplémentaires à définir pour la connexion LDAP. Les références " +"LDAP sont désactivées par défaut (pour empêcher certaines requêtes LDAP de " +"se bloquer avec AD). Les noms d'options doivent être des chaînes (par " +"exemple \"OPT_REFERRALS\"). Reportez-vous à https://www.python-" +"ldap.org/doc/html/ldap.html#options afin de connaître les options possibles " +"et les valeurs que vous pouvez définir." + +#: sso/conf.py:251 +msgid "LDAP User Search" +msgstr "Recherche d'utilisateurs LDAP" + +#: sso/conf.py:252 +msgid "" +"LDAP search query to find users. Any user that matches the given pattern " +"will be able to login to Tower. 
The user should also be mapped into an " +"Tower organization (as defined in the AUTH_LDAP_ORGANIZATION_MAP setting). " +"If multiple search queries need to be supported use of \"LDAPUnion\" is " +"possible. See python-ldap documentation as linked at the top of this " +"section." +msgstr "" +"Requête de recherche LDAP servant à retrouver des utilisateurs. Tout " +"utilisateur qui correspond au modèle donné pourra se connecter à Tower. " +"L'utilisateur doit également être mappé dans une organisation Tower (tel que" +" défini dans le paramètre AUTH_LDAP_ORGANIZATION_MAP). Si plusieurs requêtes" +" de recherche doivent être prises en charge, l'utilisation de \"LDAPUnion\" " +"est possible. Se reporter à la documentation sur python-ldap en suivant le " +"lien indiqué en haut de cette section." + +#: sso/conf.py:274 +msgid "LDAP User DN Template" +msgstr "Modèle de ND pour les utilisateurs LDAP" + +#: sso/conf.py:275 +msgid "" +"Alternative to user search, if user DNs are all of the same format. This " +"approach will be more efficient for user lookups than searching if it is " +"usable in your organizational environment. If this setting has a value it " +"will be used instead of AUTH_LDAP_USER_SEARCH." +msgstr "" +"Autre méthode de recherche d'utilisateurs, si les ND d'utilisateur se " +"présentent tous au même format. Cette approche est plus efficace qu'une " +"recherche d'utilisateurs si vous pouvez l'utiliser dans votre environnement " +"organisationnel. Si ce paramètre est défini, sa valeur sera utilisée à la " +"place de AUTH_LDAP_USER_SEARCH." + +#: sso/conf.py:290 +msgid "LDAP User Attribute Map" +msgstr "Mappe des attributs d'utilisateurs LDAP" + +#: sso/conf.py:291 +msgid "" +"Mapping of LDAP user schema to Tower API user attributes (key is user " +"attribute name, value is LDAP attribute name). 
The default setting is valid" +" for ActiveDirectory but users with other LDAP configurations may need to " +"change the values (not the keys) of the dictionary/hash-table." +msgstr "" +"Mappage du schéma utilisateur LDAP avec les attributs utilisateur d'API " +"Tower (la clé est le nom de l'attribut utilisateur, la valeur est le nom de " +"l'attribut LDAP). Le paramètre par défaut est valide pour ActiveDirectory, " +"mais les utilisateurs ayant d'autres configurations LDAP peuvent être amenés" +" à modifier les valeurs (et non les clés) du dictionnaire/de la table de " +"hachage." + +#: sso/conf.py:310 +msgid "LDAP Group Search" +msgstr "Recherche de groupes LDAP" + +#: sso/conf.py:311 +msgid "" +"Users in Tower are mapped to organizations based on their membership in LDAP" +" groups. This setting defines the LDAP search query to find groups. Note " +"that this, unlike the user search above, does not support LDAPSearchUnion." +msgstr "" +"Les utilisateurs de Tower sont mappés à des organisations en fonction de " +"leur appartenance à des groupes LDAP. Ce paramètre définit la requête de " +"recherche LDAP servant à rechercher des groupes. Notez que cette méthode, " +"contrairement à la recherche d'utilisateurs LDAP, ne prend pas en charge " +"LDAPSearchUnion." + +#: sso/conf.py:328 +msgid "LDAP Group Type" +msgstr "Type de groupe LDAP" + +#: sso/conf.py:329 +msgid "" +"The group type may need to be changed based on the type of the LDAP server." +" Values are listed at: http://pythonhosted.org/django-auth-ldap/groups.html" +"#types-of-groups" +msgstr "" +"Il convient parfois de modifier le type de groupe en fonction du type de " +"serveur LDAP. Les valeurs sont répertoriées à l'adresse suivante : " +"http://pythonhosted.org/django-auth-ldap/groups.html#types-of-groups" + +#: sso/conf.py:344 +msgid "LDAP Require Group" +msgstr "Groupe LDAP obligatoire" + +#: sso/conf.py:345 +msgid "" +"Group DN required to login. 
If specified, user must be a member of this " +"group to login via LDAP. If not set, everyone in LDAP that matches the user " +"search will be able to login via Tower. Only one require group is supported." +msgstr "" +"ND de groupe requis pour la connexion. S'il est spécifié, l'utilisateur " +"doit être membre de ce groupe pour pouvoir se connecter via LDAP. S'il " +"n'est pas défini, tout utilisateur LDAP qui correspond à la recherche " +"d'utilisateurs pourra se connecter via Tower. Un seul groupe requis est " +"pris en charge." + +#: sso/conf.py:361 +msgid "LDAP Deny Group" +msgstr "Groupe LDAP refusé" + +#: sso/conf.py:362 +msgid "" +"Group DN denied from login. If specified, user will not be allowed to login " +"if a member of this group. Only one deny group is supported." +msgstr "" +"ND du groupe dont la connexion est refusée. S'il est spécifié, l'utilisateur" +" n'est pas autorisé à se connecter s'il est membre de ce groupe. Un seul " +"groupe refusé est pris en charge." + +#: sso/conf.py:375 +msgid "LDAP User Flags By Group" +msgstr "Marqueurs d'utilisateur LDAP par groupe" + +#: sso/conf.py:376 +msgid "" +"User profile flags updated from group membership (key is user attribute " +"name, value is group DN). These are boolean fields that are matched based " +"on whether the user is a member of the given group. So far only " +"is_superuser is settable via this method. This flag is set both true and " +"false at login time based on current LDAP settings." +msgstr "" +"Marqueurs de profil utilisateur mis à jour selon l'appartenance au groupe " +"(la clé est le nom de l'attribut utilisateur, la valeur est le ND du " +"groupe). Il s'agit de champs booléens qui sont associés selon que " +"l'utilisateur est ou non membre du groupe donné. Jusqu'à présent, seul " +"is_superuser peut être défini avec cette méthode. Ce marqueur est défini à " +"la fois sur True et False au moment de la connexion, en fonction des " +"paramètres LDAP actifs."
+ +#: sso/conf.py:394 +msgid "LDAP Organization Map" +msgstr "Mappe d'organisations LDAP" + +#: sso/conf.py:395 +msgid "" +"Mapping between organization admins/users and LDAP groups. This controls what users are placed into what Tower organizations relative to their LDAP group memberships. Keys are organization names. Organizations will be created if not present. Values are dictionaries defining the options for each organization's membership. For each organization it is possible to specify what groups are automatically users of the organization and also what groups can administer the organization.\n" +"\n" +" - admins: None, True/False, string or list of strings.\n" +" If None, organization admins will not be updated based on LDAP values.\n" +" If True, all users in LDAP will automatically be added as admins of the organization.\n" +" If False, no LDAP users will be automatically added as admins of the organization.\n" +" If a string or list of strings, specifies the group DN(s) that will be added of the organization if they match any of the specified groups.\n" +" - remove_admins: True/False. Defaults to True.\n" +" If True, a user who is not an member of the given groups will be removed from the organization's administrative list.\n" +" - users: None, True/False, string or list of strings. Same rules apply as for admins.\n" +" - remove_users: True/False. Defaults to True. Same rules apply as for remove_admins." +msgstr "" +"Mappage entre les administrateurs/utilisateurs de l'organisation et les groupes LDAP. Ce paramètre détermine les utilisateurs qui sont placés dans les organisations Tower par rapport à leurs appartenances à un groupe LDAP. Les clés sont les noms d'organisation. Les organisations seront créées si elles ne sont pas présentes. Les valeurs sont des dictionnaires définissant les options d'appartenance à chaque organisation. 
Pour chaque organisation, il est possible de spécifier les groupes qui sont automatiquement des utilisateurs de l'organisation et ceux qui peuvent administrer l'organisation.\n" +"\n" +" - admins : None, True/False, chaîne ou liste de chaînes.\n" +"Si défini sur None, les administrateurs de l'organisation ne sont pas mis à jour en fonction des valeurs LDAP.\n" +" Si défini sur True, tous les utilisateurs LDAP sont automatiquement ajoutés en tant qu'administrateurs de l'organisation.\n" +" Si défini sur False, aucun utilisateur LDAP n'est automatiquement ajouté en tant qu'administrateur de l'organisation.\n" +" Si une chaîne ou une liste de chaînes est entrée, elle spécifie le ou les NDD de groupe qui seront ajoutés à l'organisation s'ils correspondent à l'un des groupes spécifiés.\n" +" - remove_admins : True/False. Par défaut défini sur True.\n" +" Si défini sur True, tout utilisateur qui n'est pas membre des groupes donnés est supprimé de la liste administrative de l'organisation.\n" +" - users : None, True/False, chaîne ou liste de chaînes. Les mêmes règles s'appliquent que pour admins.\n" +" - remove_users : True/False. Par défaut défini sur True. Les mêmes règles s'appliquent que pour remove_admins." + +#: sso/conf.py:443 +msgid "LDAP Team Map" +msgstr "Mappe d'équipes LDAP" + +#: sso/conf.py:444 +msgid "" +"Mapping between team members (users) and LDAP groups. Keys are team names (will be created if not present). Values are dictionaries of options for each team's membership, where each can contain the following parameters:\n" +"\n" +" - organization: string. The name of the organization to which the team belongs. The team will be created if the combination of organization and team name does not exist. 
The organization will first be created if it does not exist.\n" +" - users: None, True/False, string or list of strings.\n" +" If None, team members will not be updated.\n" +" If True/False, all LDAP users will be added/removed as team members.\n" +" If a string or list of strings, specifies the group DN(s). User will be added as a team member if the user is a member of ANY of these groups.\n" +"- remove: True/False. Defaults to True. If True, a user who is not a member of the given groups will be removed from the team." +msgstr "" +"Mappage entre les membres d'équipe (utilisateurs) et les groupes LDAP. Les clés sont des noms d'équipe (seront créés s'ils ne sont pas présents). Les valeurs sont des dictionnaires d'options d'appartenance à chaque équipe, où chacune peut contenir les paramètres suivants :\n" +"\n" +" - organization : chaîne. Nom de l'organisation à laquelle l'équipe appartient. Une équipe est créée si la combinaison nom de l'organisation/nom de l'équipe n'existe pas. L'organisation sera d'abord créée si elle n'existe pas.\n" +" - users : None, True/False, chaîne ou liste de chaînes.\n" +" Si défini sur None, les membres de l'équipe ne sont pas mis à jour.\n" +" Si défini sur True/False, tous les utilisateurs LDAP seront ajoutés/supprimés en tant que membres d'équipe.\n" +" Si une chaîne ou une liste de chaînes est entrée, elle spécifie les NDD des groupes. L'utilisateur est ajouté en tant que membre d'équipe s'il est membre de l'UN de ces groupes.\n" +"- remove : True/False. Par défaut défini sur True. Si défini sur True, tout utilisateur qui n'est pas membre des groupes donnés est supprimé de l'équipe." + +#: sso/conf.py:487 +msgid "RADIUS Server" +msgstr "Serveur RADIUS" + +#: sso/conf.py:488 +msgid "" +"Hostname/IP of RADIUS server. RADIUS authentication will be disabled if this" +" setting is empty." +msgstr "" +"Nom d'hôte/IP du serveur RADIUS. L'authentification RADIUS est désactivée si" +" ce paramètre est vide." 
+ +#: sso/conf.py:490 sso/conf.py:504 sso/conf.py:516 +msgid "RADIUS" +msgstr "RADIUS" + +#: sso/conf.py:502 +msgid "RADIUS Port" +msgstr "Port RADIUS" + +#: sso/conf.py:503 +msgid "Port of RADIUS server." +msgstr "Port du serveur RADIUS." + +#: sso/conf.py:514 +msgid "RADIUS Secret" +msgstr "Secret RADIUS" + +#: sso/conf.py:515 +msgid "Shared secret for authenticating to RADIUS server." +msgstr "Secret partagé pour l'authentification sur le serveur RADIUS." + +#: sso/conf.py:531 +msgid "Google OAuth2 Callback URL" +msgstr "URL de rappel OAuth2 pour Google" + +#: sso/conf.py:532 +msgid "" +"Create a project at https://console.developers.google.com/ to obtain an " +"OAuth2 key and secret for a web application. Ensure that the Google+ API is " +"enabled. Provide this URL as the callback URL for your application." +msgstr "" +"Créez un projet sur https://console.developers.google.com/ afin d'obtenir " +"une clé OAuth2 et un secret pour une application Web. Assurez-vous que l'API" +" Google+ est activée. Entrez cette URL comme URL de rappel de votre " +"application." + +#: sso/conf.py:536 sso/conf.py:547 sso/conf.py:558 sso/conf.py:571 +#: sso/conf.py:585 sso/conf.py:597 sso/conf.py:609 +msgid "Google OAuth2" +msgstr "OAuth2 pour Google" + +#: sso/conf.py:545 +msgid "Google OAuth2 Key" +msgstr "Clé OAuth2 pour Google" + +#: sso/conf.py:546 +msgid "" +"The OAuth2 key from your web application at " +"https://console.developers.google.com/." +msgstr "" +"Clé OAuth2 de votre application Web sur " +"https://console.developers.google.com/." + +#: sso/conf.py:556 +msgid "Google OAuth2 Secret" +msgstr "Secret OAuth2 pour Google" + +#: sso/conf.py:557 +msgid "" +"The OAuth2 secret from your web application at " +"https://console.developers.google.com/." +msgstr "" +"Secret OAuth2 de votre application Web sur " +"https://console.developers.google.com/." 
+ +#: sso/conf.py:568 +msgid "Google OAuth2 Whitelisted Domains" +msgstr "Domaines sur liste blanche OAuth2 pour Google" + +#: sso/conf.py:569 +msgid "" +"Update this setting to restrict the domains who are allowed to login using " +"Google OAuth2." +msgstr "" +"Mettez à jour ce paramètre pour limiter les domaines qui sont autorisés à se" +" connecter à l'aide de l'authentification OAuth2 avec un compte Google." + +#: sso/conf.py:580 +msgid "Google OAuth2 Extra Arguments" +msgstr "Arguments OAuth2 supplémentaires pour Google" + +#: sso/conf.py:581 +msgid "" +"Extra arguments for Google OAuth2 login. When only allowing a single domain " +"to authenticate, set to `{\"hd\": \"yourdomain.com\"}` and Google will not " +"display any other accounts even if the user is logged in with multiple " +"Google accounts." +msgstr "" +"Arguments supplémentaires pour l'authentification OAuth2 avec un compte " +"Google. Lorsque vous autorisez un seul domaine à s'authentifier, définissez " +"ce paramètre sur `{\"hd\": \"votredomaine.com\"}`. Google n'affichera aucun " +"autre compte même si l'utilisateur est connecté avec plusieurs comptes " +"Google." + +#: sso/conf.py:595 +msgid "Google OAuth2 Organization Map" +msgstr "Mappe d'organisations OAuth2 pour Google" + +#: sso/conf.py:607 +msgid "Google OAuth2 Team Map" +msgstr "Mappe d'équipes OAuth2 pour Google" + +#: sso/conf.py:623 +msgid "GitHub OAuth2 Callback URL" +msgstr "URL de rappel OAuth2 pour GitHub" + +#: sso/conf.py:624 +msgid "" +"Create a developer application at https://github.com/settings/developers to " +"obtain an OAuth2 key (Client ID) and secret (Client Secret). Provide this " +"URL as the callback URL for your application." +msgstr "" +"Créez une application de développeur sur " +"https://github.com/settings/developers pour obtenir une clé OAuth2 (ID " +"client) et un secret (secret client). Entrez cette URL comme URL de rappel " +"de votre application."
+ +#: sso/conf.py:628 sso/conf.py:639 sso/conf.py:649 sso/conf.py:661 +#: sso/conf.py:673 +msgid "GitHub OAuth2" +msgstr "OAuth2 pour GitHub" + +#: sso/conf.py:637 +msgid "GitHub OAuth2 Key" +msgstr "Clé OAuth2 pour GitHub" + +#: sso/conf.py:638 +msgid "The OAuth2 key (Client ID) from your GitHub developer application." +msgstr "Clé OAuth2 (ID client) de votre application de développeur GitHub." + +#: sso/conf.py:647 +msgid "GitHub OAuth2 Secret" +msgstr "Secret OAuth2 pour GitHub" + +#: sso/conf.py:648 +msgid "" +"The OAuth2 secret (Client Secret) from your GitHub developer application." +msgstr "" +"Secret OAuth2 (secret client) de votre application de développeur GitHub." + +#: sso/conf.py:659 +msgid "GitHub OAuth2 Organization Map" +msgstr "Mappe d'organisations OAuth2 pour GitHub" + +#: sso/conf.py:671 +msgid "GitHub OAuth2 Team Map" +msgstr "Mappe d'équipes OAuth2 pour GitHub" + +#: sso/conf.py:687 +msgid "GitHub Organization OAuth2 Callback URL" +msgstr "URL de rappel OAuth2 pour les organisations GitHub" + +#: sso/conf.py:688 sso/conf.py:763 +msgid "" +"Create an organization-owned application at " +"https://github.com/organizations//settings/applications and obtain " +"an OAuth2 key (Client ID) and secret (Client Secret). Provide this URL as " +"the callback URL for your application." +msgstr "" +"Créez une application appartenant à une organisation sur " +"https://github.com/organizations//settings/applications et obtenez" +" une clé OAuth2 (ID client) et un secret (secret client). Entrez cette URL " +"comme URL de rappel de votre application." + +#: sso/conf.py:692 sso/conf.py:703 sso/conf.py:713 sso/conf.py:725 +#: sso/conf.py:736 sso/conf.py:748 +msgid "GitHub Organization OAuth2" +msgstr "OAuth2 pour les organisations GitHub" + +#: sso/conf.py:701 +msgid "GitHub Organization OAuth2 Key" +msgstr "Clé OAuth2 pour les organisations GitHub" + +#: sso/conf.py:702 sso/conf.py:777 +msgid "The OAuth2 key (Client ID) from your GitHub organization application." 
+msgstr "Clé OAuth2 (ID client) de votre application d'organisation GitHub." + +#: sso/conf.py:711 +msgid "GitHub Organization OAuth2 Secret" +msgstr "Secret OAuth2 pour les organisations GitHub" + +#: sso/conf.py:712 sso/conf.py:787 +msgid "" +"The OAuth2 secret (Client Secret) from your GitHub organization application." +msgstr "" +"Secret OAuth2 (secret client) de votre application d'organisation GitHub." + +#: sso/conf.py:722 +msgid "GitHub Organization Name" +msgstr "Nom de l'organisation GitHub" + +#: sso/conf.py:723 +msgid "" +"The name of your GitHub organization, as used in your organization's URL: " +"https://github.com//." +msgstr "" +"Nom de votre organisation GitHub, tel qu'utilisé dans l'URL de votre " +"organisation : https://github.com//." + +#: sso/conf.py:734 +msgid "GitHub Organization OAuth2 Organization Map" +msgstr "Mappe d'organisations OAuth2 pour les organisations GitHub" + +#: sso/conf.py:746 +msgid "GitHub Organization OAuth2 Team Map" +msgstr "Mappe d'équipes OAuth2 pour les organisations GitHub" + +#: sso/conf.py:762 +msgid "GitHub Team OAuth2 Callback URL" +msgstr "URL de rappel OAuth2 pour les équipes GitHub" + +#: sso/conf.py:767 sso/conf.py:778 sso/conf.py:788 sso/conf.py:800 +#: sso/conf.py:811 sso/conf.py:823 +msgid "GitHub Team OAuth2" +msgstr "OAuth2 pour les équipes GitHub" + +#: sso/conf.py:776 +msgid "GitHub Team OAuth2 Key" +msgstr "Clé OAuth2 pour les équipes GitHub" + +#: sso/conf.py:786 +msgid "GitHub Team OAuth2 Secret" +msgstr "Secret OAuth2 pour les équipes GitHub" + +#: sso/conf.py:797 +msgid "GitHub Team ID" +msgstr "ID d'équipe GitHub" + +#: sso/conf.py:798 +msgid "" +"Find the numeric team ID using the Github API: http://fabian-" +"kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/." +msgstr "" +"Recherchez votre ID d'équipe numérique à l'aide de l'API Github : http" +"://fabian-kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/." 
+ +#: sso/conf.py:809 +msgid "GitHub Team OAuth2 Organization Map" +msgstr "Mappe d'organisations OAuth2 pour les équipes GitHub" + +#: sso/conf.py:821 +msgid "GitHub Team OAuth2 Team Map" +msgstr "Mappe d'équipes OAuth2 pour les équipes GitHub" + +#: sso/conf.py:837 +msgid "Azure AD OAuth2 Callback URL" +msgstr "URL de rappel OAuth2 pour Azure AD" + +#: sso/conf.py:838 +msgid "" +"Register an Azure AD application as described by https://msdn.microsoft.com" +"/en-us/library/azure/dn132599.aspx and obtain an OAuth2 key (Client ID) and " +"secret (Client Secret). Provide this URL as the callback URL for your " +"application." +msgstr "" +"Enregistrez une application AD Azure selon la procédure décrite sur " +"https://msdn.microsoft.com/en-us/library/azure/dn132599.aspx et obtenez une " +"clé OAuth2 (ID client) et un secret (secret client). Entrez cette URL comme " +"URL de rappel de votre application." + +#: sso/conf.py:842 sso/conf.py:853 sso/conf.py:863 sso/conf.py:875 +#: sso/conf.py:887 +msgid "Azure AD OAuth2" +msgstr "OAuth2 pour Azure AD" + +#: sso/conf.py:851 +msgid "Azure AD OAuth2 Key" +msgstr "Clé OAuth2 pour Azure AD" + +#: sso/conf.py:852 +msgid "The OAuth2 key (Client ID) from your Azure AD application." +msgstr "Clé OAuth2 (ID client) de votre application Azure AD." + +#: sso/conf.py:861 +msgid "Azure AD OAuth2 Secret" +msgstr "Secret OAuth2 pour Azure AD" + +#: sso/conf.py:862 +msgid "The OAuth2 secret (Client Secret) from your Azure AD application." +msgstr "Secret OAuth2 (secret client) de votre application Azure AD." 
+ +#: sso/conf.py:873 +msgid "Azure AD OAuth2 Organization Map" +msgstr "Mappe d'organisations OAuth2 pour Azure AD" + +#: sso/conf.py:885 +msgid "Azure AD OAuth2 Team Map" +msgstr "Mappe d'équipes OAuth2 pour Azure AD" + +#: sso/conf.py:906 +msgid "SAML Service Provider Callback URL" +msgstr "URL de rappel du fournisseur de services SAML" + +#: sso/conf.py:907 +msgid "" +"Register Tower as a service provider (SP) with each identity provider (IdP) " +"you have configured. Provide your SP Entity ID and this callback URL for " +"your application." +msgstr "" +"Enregistrez Tower en tant que fournisseur de services (SP) auprès de chaque " +"fournisseur d'identité (IdP) que vous avez configuré. Entrez votre ID " +"d'entité SP et cette URL de rappel pour votre application." + +#: sso/conf.py:910 sso/conf.py:924 sso/conf.py:937 sso/conf.py:951 +#: sso/conf.py:965 sso/conf.py:983 sso/conf.py:1005 sso/conf.py:1024 +#: sso/conf.py:1044 sso/conf.py:1078 sso/conf.py:1091 +msgid "SAML" +msgstr "SAML" + +#: sso/conf.py:921 +msgid "SAML Service Provider Metadata URL" +msgstr "URL de métadonnées du fournisseur de services SAML" + +#: sso/conf.py:922 +msgid "" +"If your identity provider (IdP) allows uploading an XML metadata file, you " +"can download one from this URL." +msgstr "" +"Si votre fournisseur d'identité (IdP) permet d'importer un fichier de " +"métadonnées XML, vous pouvez en télécharger un à partir de cette URL." + +#: sso/conf.py:934 +msgid "SAML Service Provider Entity ID" +msgstr "ID d'entité du fournisseur de services SAML" + +#: sso/conf.py:935 +msgid "" +"The application-defined unique identifier used as the audience of the SAML " +"service provider (SP) configuration." +msgstr "" +"Identifiant unique défini par l'application utilisé comme audience dans la " +"configuration du fournisseur de services (SP) SAML."
+ +#: sso/conf.py:948 +msgid "SAML Service Provider Public Certificate" +msgstr "Certificat public du fournisseur de services SAML" + +#: sso/conf.py:949 +msgid "" +"Create a keypair for Tower to use as a service provider (SP) and include the" +" certificate content here." +msgstr "" +"Créez une paire de clés pour que Tower puisse être utilisé comme fournisseur" +" de services (SP) et entrez le contenu du certificat ici." + +#: sso/conf.py:962 +msgid "SAML Service Provider Private Key" +msgstr "Clé privée du fournisseur de services SAML" + +#: sso/conf.py:963 +msgid "" +"Create a keypair for Tower to use as a service provider (SP) and include the" +" private key content here." +msgstr "" +"Créez une paire de clés pour que Tower puisse être utilisé comme fournisseur" +" de services (SP) et entrez le contenu de la clé privée ici." + +#: sso/conf.py:981 +msgid "SAML Service Provider Organization Info" +msgstr "Infos organisationnelles du fournisseur de services SAML" + +#: sso/conf.py:982 +msgid "Configure this setting with information about your app." +msgstr "" +"Configurez ce paramètre en vous servant des informations de votre " +"application." + +#: sso/conf.py:1003 +msgid "SAML Service Provider Technical Contact" +msgstr "Contact technique du fournisseur de services SAML" + +#: sso/conf.py:1004 sso/conf.py:1023 +msgid "Configure this setting with your contact information." +msgstr "Configurez ce paramètre en vous servant de vos coordonnées." + +#: sso/conf.py:1022 +msgid "SAML Service Provider Support Contact" +msgstr "Contact support du fournisseur de services SAML" + +#: sso/conf.py:1037 +msgid "SAML Enabled Identity Providers" +msgstr "Fournisseurs d'identité compatibles SAML" + +#: sso/conf.py:1038 +msgid "" +"Configure the Entity ID, SSO URL and certificate for each identity provider " +"(IdP) in use. Multiple SAML IdPs are supported. 
Some IdPs may provide user " +"data using attribute names that differ from the default OIDs " +"(https://github.com/omab/python-social-" +"auth/blob/master/social/backends/saml.py#L16). Attribute names may be " +"overridden for each IdP." +msgstr "" +"Configurez l'ID d'entité, l'URL SSO et le certificat pour chaque fournisseur" +" d'identité (IdP) utilisé. Plusieurs IdP SAML sont pris en charge. Certains " +"IdP peuvent fournir des données utilisateur à l'aide de noms d'attributs qui" +" diffèrent des OID par défaut (https://github.com/omab/python-social-" +"auth/blob/master/social/backends/saml.py#L16). Les noms d'attributs peuvent " +"être remplacés pour chaque IdP." + +#: sso/conf.py:1076 +msgid "SAML Organization Map" +msgstr "Mappe d'organisations SAML" + +#: sso/conf.py:1089 +msgid "SAML Team Map" +msgstr "Mappe d'équipes SAML" + +#: sso/fields.py:123 +msgid "Invalid connection option(s): {invalid_options}." +msgstr "Option(s) de connexion non valide(s) : {invalid_options}." + +#: sso/fields.py:194 +msgid "Base" +msgstr "Base" + +#: sso/fields.py:195 +msgid "One Level" +msgstr "Un niveau" + +#: sso/fields.py:196 +msgid "Subtree" +msgstr "Sous-arborescence" + +#: sso/fields.py:214 +msgid "Expected a list of three items but got {length} instead." +msgstr "" +"Une liste de trois éléments était attendue, mais {length} a été obtenu à la " +"place." + +#: sso/fields.py:215 +msgid "Expected an instance of LDAPSearch but got {input_type} instead." +msgstr "" +"Une instance de LDAPSearch était attendue, mais {input_type} a été obtenu à " +"la place." + +#: sso/fields.py:251 +msgid "" +"Expected an instance of LDAPSearch or LDAPSearchUnion but got {input_type} " +"instead." +msgstr "" +"Une instance de LDAPSearch ou de LDAPSearchUnion était attendue, mais " +"{input_type} a été obtenu à la place." + +#: sso/fields.py:278 +msgid "Invalid user attribute(s): {invalid_attrs}." +msgstr "Attribut(s) d'utilisateur non valide(s) : {invalid_attrs}." 
+ +#: sso/fields.py:295 +msgid "Expected an instance of LDAPGroupType but got {input_type} instead." +msgstr "" +"Une instance de LDAPGroupType était attendue, mais {input_type} a été obtenu" +" à la place." + +#: sso/fields.py:323 +msgid "Invalid user flag: \"{invalid_flag}\"." +msgstr "Marqueur d'utilisateur non valide : \"{invalid_flag}\"." + +#: sso/fields.py:339 sso/fields.py:506 +msgid "" +"Expected None, True, False, a string or list of strings but got {input_type}" +" instead." +msgstr "" +"Les valeurs None, True, False, une chaîne ou une liste de chaînes étaient " +"attendues, mais {input_type} a été obtenu à la place." + +#: sso/fields.py:375 +msgid "Missing key(s): {missing_keys}." +msgstr "Clé(s) manquante(s) : {missing_keys}." + +#: sso/fields.py:376 +msgid "Invalid key(s): {invalid_keys}." +msgstr "Clé(s) non valide(s) : {invalid_keys}." + +#: sso/fields.py:425 sso/fields.py:542 +msgid "Invalid key(s) for organization map: {invalid_keys}." +msgstr "Clé(s) non valide(s) pour la mappe d'organisations : {invalid_keys}." + +#: sso/fields.py:443 +msgid "Missing required key for team map: {invalid_keys}." +msgstr "Clé obligatoire manquante pour la mappe d'équipes : {invalid_keys}." + +#: sso/fields.py:444 sso/fields.py:561 +msgid "Invalid key(s) for team map: {invalid_keys}." +msgstr "Clé(s) non valide(s) pour la mappe d'équipes : {invalid_keys}." + +#: sso/fields.py:560 +msgid "Missing required key for team map: {missing_keys}." +msgstr "Clé obligatoire manquante pour la mappe d'équipes : {missing_keys}." + +#: sso/fields.py:578 +msgid "Missing required key(s) for org info record: {missing_keys}." +msgstr "" +"Clé(s) obligatoire(s) manquante(s) pour l'enregistrement des infos organis. " +": {missing_keys}." + +#: sso/fields.py:591 +msgid "Invalid language code(s) for org info: {invalid_lang_codes}." +msgstr "" +"Code(s) langage non valide(s) pour les infos organis. : " +"{invalid_lang_codes}." 
+ +#: sso/fields.py:610 +msgid "Missing required key(s) for contact: {missing_keys}." +msgstr "Clé(s) obligatoire(s) manquante(s) pour le contact : {missing_keys}." + +#: sso/fields.py:622 +msgid "Missing required key(s) for IdP: {missing_keys}." +msgstr "Clé(s) obligatoire(s) manquante(s) pour l'IdP : {missing_keys}." + +#: sso/pipeline.py:24 +msgid "An account cannot be found for {0}" +msgstr "Impossible de trouver un compte pour {0}" + +#: sso/pipeline.py:30 +msgid "Your account is inactive" +msgstr "Votre compte est inactif" + +#: sso/validators.py:19 sso/validators.py:44 +#, python-format +msgid "DN must include \"%%(user)s\" placeholder for username: %s" +msgstr "" +"Le ND doit inclure l'espace réservé \"%%(user)s\" pour le nom d'utilisateur" +" : %s" + +#: sso/validators.py:26 +#, python-format +msgid "Invalid DN: %s" +msgstr "ND non valide : %s" + +#: sso/validators.py:56 +#, python-format +msgid "Invalid filter: %s" +msgstr "Filtre non valide : %s" + +#: templates/error.html:4 ui/templates/ui/index.html:8 +msgid "Ansible Tower" +msgstr "Ansible Tower" + +#: templates/rest_framework/api.html:39 +msgid "Ansible Tower API Guide" +msgstr "Guide pour les API d'Ansible Tower" + +#: templates/rest_framework/api.html:40 +msgid "Back to Ansible Tower" +msgstr "Retour à Ansible Tower" + +#: templates/rest_framework/api.html:41 +msgid "Resize" +msgstr "Redimensionner" + +#: templates/rest_framework/base.html:78 templates/rest_framework/base.html:92 +#, python-format +msgid "Make a GET request on the %(name)s resource" +msgstr "Appliquez une requête GET sur la ressource %(name)s" + +#: templates/rest_framework/base.html:80 +msgid "Specify a format for the GET request" +msgstr "Spécifiez un format pour la requête GET" + +#: templates/rest_framework/base.html:86 +#, python-format +msgid "" +"Make a GET request on the %(name)s resource with the format set to " +"`%(format)s`" +msgstr "" +"Appliquez une requête GET sur la ressource %(name)s avec un format défini "
+"sur`%(format)s`" + +#: templates/rest_framework/base.html:100 +#, python-format +msgid "Make an OPTIONS request on the %(name)s resource" +msgstr "Appliquez une requête OPTIONS sur la ressource %(name)s" + +#: templates/rest_framework/base.html:106 +#, python-format +msgid "Make a DELETE request on the %(name)s resource" +msgstr "Appliquez une requête DELETE sur la ressource %(name)s" + +#: templates/rest_framework/base.html:113 +msgid "Filters" +msgstr "Filtres" + +#: templates/rest_framework/base.html:172 +#: templates/rest_framework/base.html:186 +#, python-format +msgid "Make a POST request on the %(name)s resource" +msgstr "Appliquez une requête POST sur la ressource %(name)s" + +#: templates/rest_framework/base.html:216 +#: templates/rest_framework/base.html:230 +#, python-format +msgid "Make a PUT request on the %(name)s resource" +msgstr "Appliquez une requête PUT sur la ressource %(name)s" + +#: templates/rest_framework/base.html:233 +#, python-format +msgid "Make a PATCH request on the %(name)s resource" +msgstr "Appliquez une requête PATCH sur la ressource %(name)s" + +#: ui/apps.py:9 ui/conf.py:22 ui/conf.py:38 ui/conf.py:53 +msgid "UI" +msgstr "IU" + +#: ui/conf.py:16 +msgid "Off" +msgstr "Désactivé" + +#: ui/conf.py:17 +msgid "Anonymous" +msgstr "Anonyme" + +#: ui/conf.py:18 +msgid "Detailed" +msgstr "Détaillé" + +#: ui/conf.py:20 +msgid "Analytics Tracking State" +msgstr "État du suivi analytique" + +#: ui/conf.py:21 +msgid "Enable or Disable Analytics Tracking." +msgstr "Activez ou désactivez le suivi analytique." + +#: ui/conf.py:31 +msgid "Custom Login Info" +msgstr "Infos de connexion personnalisées" + +#: ui/conf.py:32 +msgid "" +"If needed, you can add specific information (such as a legal notice or a " +"disclaimer) to a text box in the login modal using this setting. Any content" +" added must be in plain text, as custom HTML or other markup languages are " +"not supported. 
If multiple paragraphs of text are needed, new lines " +"(paragraphs) must be escaped as `\\n` within the block of text." +msgstr "" +"Si nécessaire, vous pouvez ajouter des informations particulières (telles " +"qu'une mention légale ou une clause de non-responsabilité) à une zone de " +"texte dans la fenêtre modale de connexion, grâce à ce paramètre. Tout " +"contenu ajouté doit l'être en texte brut, dans la mesure où le langage HTML " +"personnalisé et les autres langages de balisage ne sont pas pris en charge. " +"Si plusieurs paragraphes de texte sont nécessaires, les nouvelles lignes " +"(paragraphes) doivent être échappées sous la forme `\\n` dans le bloc de " +"texte." + +#: ui/conf.py:48 +msgid "Custom Logo" +msgstr "Logo personnalisé" + +#: ui/conf.py:49 +msgid "" +"To set up a custom logo, provide a file that you create. For the custom logo" +" to look its best, use a `.png` file with a transparent background. GIF, PNG" +" and JPEG formats are supported." +msgstr "" +"Pour configurer un logo personnalisé, chargez un fichier que vous avez créé." +" Pour optimiser l'affichage du logo personnalisé, utilisez un fichier `.png`" +" avec un fond transparent. Les formats GIF, PNG et JPEG sont pris en charge." + +#: ui/fields.py:29 +msgid "" +"Invalid format for custom logo. Must be a data URL with a base64-encoded " +"GIF, PNG or JPEG image." +msgstr "" +"Format de logo personnalisé non valide. Entrez une URL de données avec une " +"image GIF, PNG ou JPEG codée en base64." + +#: ui/fields.py:30 +msgid "Invalid base64-encoded data in data URL." +msgstr "Données codées en base64 non valides dans l'URL de données" + +#: ui/templates/ui/index.html:49 +msgid "" +"Your session will expire in 60 seconds, would you like to " +"continue?" +msgstr "" +"Votre session expirera dans 60 secondes, voulez-vous continuer ?" 
+ +#: ui/templates/ui/index.html:64 +msgid "CANCEL" +msgstr "ANNULER" + +#: ui/templates/ui/index.html:116 +msgid "Set how many days of data should be retained." +msgstr "" +"Définissez le nombre de jours pendant lesquels les données doivent être " +"conservées." + +#: ui/templates/ui/index.html:122 +msgid "" +"Please enter an integer that is not " +"negative that is lower than " +"9999." +msgstr "" +"Entrez un entier non négatif et inférieur à 9999." + +#: ui/templates/ui/index.html:127 +msgid "" +"For facts collected older than the time period specified, save one fact scan (snapshot) per time window (frequency). For example, facts older than 30 days are purged, while one weekly fact scan is kept.\n" +"
\n" +"
CAUTION: Setting both numerical variables to \"0\" will delete all facts.\n" +"
\n" +"
" +msgstr "" +"Pour les faits collectés en amont de la période spécifiée, enregistrez un scan des faits (instantané) par fenêtre temporelle (fréquence). Par exemple, les faits antérieurs à 30 jours sont purgés, tandis qu'un scan de faits hebdomadaire est conservé.\n" +"
\n" +"
ATTENTION : le paramétrage des deux variables numériques sur \"0\" supprime l'ensemble des faits.\n" +"
\n" +"
" + +#: ui/templates/ui/index.html:136 +msgid "Select a time period after which to remove old facts" +msgstr "" +"Sélectionnez un intervalle de temps après lequel les faits anciens pourront " +"être supprimés" + +#: ui/templates/ui/index.html:150 +msgid "" +"Please enter an integer that is not " +"negative that is lower than " +"9999." +msgstr "" +"Entrez un entier non négatif et inférieur à " +"9999
." + +#: ui/templates/ui/index.html:155 +msgid "Select a frequency for snapshot retention" +msgstr "Sélectionnez une fréquence pour la conservation des instantanés" + +#: ui/templates/ui/index.html:169 +msgid "" +"Please enter an integer that is not" +" negative that is " +"lower than 9999." +msgstr "" +"Entrez un entier non " +"négatif et " +"inférieur à 9999." + +#: ui/templates/ui/index.html:175 +msgid "working..." +msgstr "en cours..." diff --git a/awx/locale/ja/LC_MESSAGES/django.po b/awx/locale/ja/LC_MESSAGES/django.po new file mode 100644 index 0000000000..7e5cf1156c --- /dev/null +++ b/awx/locale/ja/LC_MESSAGES/django.po @@ -0,0 +1,3808 @@ +# asasaki , 2017. #zanata +msgid "" +msgstr "" +"Project-Id-Version: PACKAGE VERSION\n" +"Report-Msgid-Bugs-To: \n" +"POT-Creation-Date: 2016-12-15 12:05+0530\n" +"PO-Revision-Date: 2017-01-20 12:04+0000\n" +"Last-Translator: Copied by Zanata \n" +"Language-Team: Japanese\n" +"MIME-Version: 1.0\n" +"Content-Type: text/plain; charset=UTF-8\n" +"Content-Transfer-Encoding: 8bit\n" +"Language: ja\n" +"Plural-Forms: nplurals=1; plural=0\n" +"X-Generator: Zanata 3.9.6\n" + +#: api/authentication.py:67 +msgid "Invalid token header. No credentials provided." +msgstr "無効なトークンヘッダーです。認証情報が提供されていません。" + +#: api/authentication.py:70 +msgid "Invalid token header. Token string should not contain spaces." +msgstr "無効なトークンヘッダーです。トークン文字列にはスペースを含めることができません。" + +#: api/authentication.py:105 +msgid "User inactive or deleted" +msgstr "ユーザーが非アクティブか、または削除されています" + +#: api/authentication.py:161 +msgid "Invalid task token" +msgstr "無効なタスクトークン" + +#: api/conf.py:12 +msgid "Idle Time Force Log Out" +msgstr "アイドル時間、強制ログアウト" + +#: api/conf.py:13 +msgid "" +"Number of seconds that a user is inactive before they will need to login " +"again." 
+msgstr "ユーザーが再ログインするまでに非アクティブな状態になる秒数です。" + +#: api/conf.py:14 api/conf.py:24 api/conf.py:33 sso/conf.py:124 +#: sso/conf.py:135 sso/conf.py:147 sso/conf.py:162 +msgid "Authentication" +msgstr "認証" + +#: api/conf.py:22 +msgid "Maximum number of simultaneous logins" +msgstr "同時ログインの最大数" + +#: api/conf.py:23 +msgid "" +"Maximum number of simultaneous logins a user may have. To disable enter -1." +msgstr "ユーザーが実行できる同時ログインの最大数です。無効にするには -1 を入力します。" + +#: api/conf.py:31 +msgid "Enable HTTP Basic Auth" +msgstr "HTTP Basic 認証の有効化" + +#: api/conf.py:32 +msgid "Enable HTTP Basic Auth for the API Browser." +msgstr "API ブラウザーの HTTP Basic 認証を有効にします。" + +#: api/generics.py:462 +msgid "\"id\" is required to disassociate" +msgstr "関連付けを解除するには 「id」が必要です" + +#: api/metadata.py:50 +msgid "Database ID for this {}." +msgstr "この{}のデータベース ID。" + +#: api/metadata.py:51 +msgid "Name of this {}." +msgstr "この{}の名前。" + +#: api/metadata.py:52 +msgid "Optional description of this {}." +msgstr "この{}のオプションの説明。" + +#: api/metadata.py:53 +msgid "Data type for this {}." +msgstr "この{}のデータタイプ。" + +#: api/metadata.py:54 +msgid "URL for this {}." +msgstr "この{}の URL。" + +#: api/metadata.py:55 +msgid "Data structure with URLs of related resources." +msgstr "関連リソースの URL のあるデータ構造。" + +#: api/metadata.py:56 +msgid "Data structure with name/description for related resources." +msgstr "関連リソースの名前/説明のあるデータ構造。" + +#: api/metadata.py:57 +msgid "Timestamp when this {} was created." +msgstr "この {} の作成時のタイムスタンプ。" + +#: api/metadata.py:58 +msgid "Timestamp when this {} was last modified." 
+msgstr "この {} の最終変更時のタイムスタンプ。" + +#: api/parsers.py:31 +#, python-format +msgid "JSON parse error - %s" +msgstr "JSON パースエラー: %s" + +#: api/serializers.py:248 +msgid "Playbook Run" +msgstr "Playbook 実行" + +#: api/serializers.py:249 +msgid "Command" +msgstr "コマンド" + +#: api/serializers.py:250 +msgid "SCM Update" +msgstr "SCM 更新" + +#: api/serializers.py:251 +msgid "Inventory Sync" +msgstr "インベントリーの同期" + +#: api/serializers.py:252 +msgid "Management Job" +msgstr "管理ジョブ" + +#: api/serializers.py:253 +msgid "Workflow Job" +msgstr "ワークフロージョブ" + +#: api/serializers.py:254 +msgid "Workflow Template" +msgstr "" + +#: api/serializers.py:656 api/serializers.py:714 api/views.py:3805 +#, python-format +msgid "" +"Standard Output too large to display (%(text_size)d bytes), only download " +"supported for sizes over %(supported_size)d bytes" +msgstr "" +"標準出力が大きすぎて表示できません (%(text_size)d バイト)。サイズが %(supported_size)d " +"バイトを超える場合はダウンロードのみがサポートされます。" + +#: api/serializers.py:729 +msgid "Write-only field used to change the password." +msgstr "パスワードを変更するために使用される書き込み専用フィールド。" + +#: api/serializers.py:731 +msgid "Set if the account is managed by an external service" +msgstr "アカウントが外部サービスで管理される場合に設定されます" + +#: api/serializers.py:755 +msgid "Password required for new User." +msgstr "新規ユーザーのパスワードを入力してください。" + +#: api/serializers.py:839 +#, python-format +msgid "Unable to change %s on user managed by LDAP." +msgstr "LDAP で管理されたユーザーの %s を変更できません。" + +#: api/serializers.py:991 +msgid "Organization is missing" +msgstr "組織がありません" + +#: api/serializers.py:997 +msgid "Array of playbooks available within this project." +msgstr "このプロジェクト内で利用可能な一連の Playbook。" + +#: api/serializers.py:1179 +#, python-format +msgid "Invalid port specification: %s" +msgstr "無効なポート指定: %s" + +#: api/serializers.py:1207 main/validators.py:193 +msgid "Must be valid JSON or YAML." +msgstr "有効な JSON または YAML である必要があります。" + +#: api/serializers.py:1264 +msgid "Invalid group name." 
+msgstr "無効なグループ名。" + +#: api/serializers.py:1339 +msgid "" +"Script must begin with a hashbang sequence: i.e.... #!/usr/bin/env python" +msgstr "スクリプトは hashbang シーケンスで開始する必要があります (例: .... #!/usr/bin/env python)" + +#: api/serializers.py:1392 +msgid "If 'source' is 'custom', 'source_script' must be provided." +msgstr "「source」が「custom」である場合、「source_script」を指定する必要があります。" + +#: api/serializers.py:1396 +msgid "" +"The 'source_script' does not belong to the same organization as the " +"inventory." +msgstr "「source_script」はインベントリーと同じ組織に属しません。" + +#: api/serializers.py:1398 +msgid "'source_script' doesn't exist." +msgstr "「source_script」は存在しません。" + +#: api/serializers.py:1757 +msgid "" +"Write-only field used to add user to owner role. If provided, do not give " +"either team or organization. Only valid for creation." +msgstr "" +"ユーザーを所有者ロールに追加するために使用される書き込み専用フィールドです。提供されている場合は、チームまたは組織のいずれも指定しないでください。作成時にのみ有効です。" + +#: api/serializers.py:1762 +msgid "" +"Write-only field used to add team to owner role. If provided, do not give " +"either user or organization. Only valid for creation." +msgstr "" +"チームを所有者ロールに追加するために使用される書き込み専用フィールドです。提供されている場合は、ユーザーまたは組織のいずれも指定しないでください。作成時にのみ有効です。" + +#: api/serializers.py:1767 +msgid "" +"Inherit permissions from organization roles. If provided on creation, do not" +" give either user or team." +msgstr "組織ロールからパーミッションを継承します。作成時に提供される場合は、ユーザーまたはチームのいずれも指定しないでください。" + +#: api/serializers.py:1783 +msgid "Missing 'user', 'team', or 'organization'." +msgstr "「user」、「team」、または「organization」がありません。" + +#: api/serializers.py:1796 +msgid "" +"Credential organization must be set and match before assigning to a team" +msgstr "認証情報の組織が設定され、一致している状態でチームに割り当てる必要があります。" + +#: api/serializers.py:1888 +msgid "This field is required." +msgstr "このフィールドは必須です。" + +#: api/serializers.py:1890 api/serializers.py:1892 +msgid "Playbook not found for project." 
+msgstr "プロジェクトの Playbook が見つかりません。" + +#: api/serializers.py:1894 +msgid "Must select playbook for project." +msgstr "プロジェクトの Playbook を選択してください。" + +#: api/serializers.py:1958 main/models/jobs.py:278 +msgid "Scan jobs must be assigned a fixed inventory." +msgstr "スキャンジョブに固定インベントリーが割り当てられている必要があります。" + +#: api/serializers.py:1960 main/models/jobs.py:281 +msgid "Job types 'run' and 'check' must have assigned a project." +msgstr "ジョブタイプ「run」および「check」によりプロジェクトが割り当てられている必要があります。" + +#: api/serializers.py:1963 +msgid "Survey Enabled cannot be used with scan jobs." +msgstr "Survey Enabled はスキャンジョブで使用できません。" + +#: api/serializers.py:2023 +msgid "Invalid job template." +msgstr "無効なジョブテンプレート。" + +#: api/serializers.py:2108 +msgid "Credential not found or deleted." +msgstr "認証情報が見つからないか、または削除されました。" + +#: api/serializers.py:2110 +msgid "Job Template Project is missing or undefined." +msgstr "ジョブテンプレートプロジェクトが見つからないか、または定義されていません。" + +#: api/serializers.py:2112 +msgid "Job Template Inventory is missing or undefined." +msgstr "ジョブテンプレートインベントリーが見つからないか、または定義されていません。" + +#: api/serializers.py:2397 +#, python-format +msgid "%(job_type)s is not a valid job type. The choices are %(choices)s." +msgstr "%(job_type)s は有効なジョブタイプではありません。%(choices)s を選択できます。" + +#: api/serializers.py:2402 +msgid "Workflow job template is missing during creation." +msgstr "ワークフロージョブテンプレートが作成時に見つかりません。" + +#: api/serializers.py:2407 +#, python-format +msgid "Cannot nest a %s inside a WorkflowJobTemplate" +msgstr "ワークフロージョブテンプレート内に %s をネストできません" + +#: api/serializers.py:2645 +#, python-format +msgid "Job Template '%s' is missing or undefined." +msgstr "ジョブテンプレート「%s」が見つからない、または定義されていません。" + +#: api/serializers.py:2671 +msgid "Must be a valid JSON or YAML dictionary." 
+msgstr "有効な JSON または YAML 辞書でなければなりません。" + +#: api/serializers.py:2813 +msgid "" +"Missing required fields for Notification Configuration: notification_type" +msgstr "通知設定の必須フィールドがありません: notification_type" + +#: api/serializers.py:2836 +msgid "No values specified for field '{}'" +msgstr "フィールド '{}' に値が指定されていません" + +#: api/serializers.py:2841 +msgid "Missing required fields for Notification Configuration: {}." +msgstr "通知設定の必須フィールドがありません: {}。" + +#: api/serializers.py:2844 +msgid "Configuration field '{}' incorrect type, expected {}." +msgstr "設定フィールド '{}' のタイプが正しくありません。{} が予期されました。" + +#: api/serializers.py:2897 +msgid "Inventory Source must be a cloud resource." +msgstr "インベントリーソースはクラウドリソースでなければなりません。" + +#: api/serializers.py:2899 +msgid "Manual Project can not have a schedule set." +msgstr "手動プロジェクトにはスケジュールを設定できません。" + +#: api/serializers.py:2921 +msgid "" +"DTSTART required in rrule. Value should match: DTSTART:YYYYMMDDTHHMMSSZ" +msgstr "DTSTART が rrule で必要です。値は、DSTART:YYYYMMDDTHHMMSSZ に一致する必要があります。" + +#: api/serializers.py:2923 +msgid "Multiple DTSTART is not supported." +msgstr "複数の DTSTART はサポートされません。" + +#: api/serializers.py:2925 +msgid "RRULE require in rrule." +msgstr "RRULE が rrule で必要です。" + +#: api/serializers.py:2927 +msgid "Multiple RRULE is not supported." +msgstr "複数の RRULE はサポートされません。" + +#: api/serializers.py:2929 +msgid "INTERVAL required in rrule." +msgstr "INTERVAL が rrule で必要です。" + +#: api/serializers.py:2931 +msgid "TZID is not supported." +msgstr "TZID はサポートされません。" + +#: api/serializers.py:2933 +msgid "SECONDLY is not supported." +msgstr "SECONDLY はサポートされません。" + +#: api/serializers.py:2935 +msgid "Multiple BYMONTHDAYs not supported." +msgstr "複数の BYMONTHDAY はサポートされません。" + +#: api/serializers.py:2937 +msgid "Multiple BYMONTHs not supported." +msgstr "複数の BYMONTH はサポートされません。" + +#: api/serializers.py:2939 +msgid "BYDAY with numeric prefix not supported." 
+msgstr "数字の接頭辞のある BYDAY はサポートされません。" + +#: api/serializers.py:2941 +msgid "BYYEARDAY not supported." +msgstr "BYYEARDAY はサポートされません。" + +#: api/serializers.py:2943 +msgid "BYWEEKNO not supported." +msgstr "BYWEEKNO はサポートされません。" + +#: api/serializers.py:2947 +msgid "COUNT > 999 is unsupported." +msgstr "COUNT > 999 はサポートされません。" + +#: api/serializers.py:2951 +msgid "rrule parsing failed validation." +msgstr "rrule の構文解析で検証に失敗しました。" + +#: api/serializers.py:2969 +msgid "" +"A summary of the new and changed values when an object is created, updated, " +"or deleted" +msgstr "オブジェクトの作成、更新または削除時の新規値および変更された値の概要" + +#: api/serializers.py:2971 +msgid "" +"For create, update, and delete events this is the object type that was " +"affected. For associate and disassociate events this is the object type " +"associated or disassociated with object2." +msgstr "" +"作成、更新、および削除イベントの場合、これは影響を受けたオブジェクトタイプになります。関連付けおよび関連付け解除イベントの場合、これは object2 " +"に関連付けられたか、またはその関連付けが解除されたオブジェクトタイプになります。" + +#: api/serializers.py:2974 +msgid "" +"Unpopulated for create, update, and delete events. For associate and " +"disassociate events this is the object type that object1 is being associated" +" with." +msgstr "" +"作成、更新、および削除イベントの場合は設定されません。関連付けおよび関連付け解除イベントの場合、これは object1 " +"が関連付けられるオブジェクトタイプになります。" + +#: api/serializers.py:2977 +msgid "The action taken with respect to the given object(s)." +msgstr "指定されたオブジェクトについて実行されたアクション。" + +#: api/serializers.py:3077 +msgid "Unable to login with provided credentials." +msgstr "提供される認証情報でログインできません。" + +#: api/serializers.py:3079 +msgid "Must include \"username\" and \"password\"." +msgstr "「username」および「password」を含める必要があります。" + +#: api/views.py:99 +msgid "Your license does not allow use of the activity stream." +msgstr "お使いのライセンスではアクティビティーストリームを使用できません。" + +#: api/views.py:109 +msgid "Your license does not permit use of system tracking." +msgstr "お使いのライセンスではシステムトラッキングを使用できません。" + +#: api/views.py:119 +msgid "Your license does not allow use of workflows." 
+msgstr "お使いのライセンスではワークフローを使用できません。" + +#: api/views.py:127 templates/rest_framework/api.html:28 +msgid "REST API" +msgstr "REST API" + +#: api/views.py:134 templates/rest_framework/api.html:4 +msgid "Ansible Tower REST API" +msgstr "Ansible Tower REST API" + +#: api/views.py:150 +msgid "Version 1" +msgstr "バージョン 1" + +#: api/views.py:201 +msgid "Ping" +msgstr "Ping" + +#: api/views.py:230 conf/apps.py:12 +msgid "Configuration" +msgstr "設定" + +#: api/views.py:283 +msgid "Invalid license data" +msgstr "無効なライセンスデータ" + +#: api/views.py:285 +msgid "Missing 'eula_accepted' property" +msgstr "'eula_accepted' プロパティーがありません" + +#: api/views.py:289 +msgid "'eula_accepted' value is invalid" +msgstr "'eula_accepted' 値は無効です。" + +#: api/views.py:292 +msgid "'eula_accepted' must be True" +msgstr "'eula_accepted' は True でなければなりません" + +#: api/views.py:299 +msgid "Invalid JSON" +msgstr "無効な JSON" + +#: api/views.py:307 +msgid "Invalid License" +msgstr "無効なライセンス" + +#: api/views.py:317 +msgid "Invalid license" +msgstr "無効なライセンス" + +#: api/views.py:325 +#, python-format +msgid "Failed to remove license (%s)" +msgstr "ライセンスの削除に失敗しました (%s)" + +#: api/views.py:330 +msgid "Dashboard" +msgstr "ダッシュボード" + +#: api/views.py:436 +msgid "Dashboard Jobs Graphs" +msgstr "ダッシュボードのジョブグラフ" + +#: api/views.py:472 +#, python-format +msgid "Unknown period \"%s\"" +msgstr "不明な期間 \"%s\"" + +#: api/views.py:486 +msgid "Schedules" +msgstr "スケジュール" + +#: api/views.py:505 +msgid "Schedule Jobs List" +msgstr "スケジュールジョブの一覧" + +#: api/views.py:715 +msgid "Your Tower license only permits a single organization to exist." +msgstr "お使いの Tower ライセンスでは、単一組織のみの存在が許可されます。" + +#: api/views.py:940 api/views.py:1299 +msgid "Role 'id' field is missing." +msgstr "ロール「id」フィールドがありません。" + +#: api/views.py:946 api/views.py:4081 +msgid "You cannot assign an Organization role as a child role for a Team." 
+msgstr "組織ロールをチームの子ロールとして割り当てることができません。" + +#: api/views.py:950 api/views.py:4095 +msgid "You cannot grant system-level permissions to a team." +msgstr "システムレベルのパーミッションをチームに付与できません。" + +#: api/views.py:957 api/views.py:4087 +msgid "" +"You cannot grant credential access to a team when the Organization field " +"isn't set, or belongs to a different organization" +msgstr "組織フィールドが設定されていないか、または別の組織に属する場合に認証情報のアクセス権をチームに付与できません" + +#: api/views.py:1047 +msgid "Cannot delete project." +msgstr "プロジェクトを削除できません。" + +#: api/views.py:1076 +msgid "Project Schedules" +msgstr "プロジェクトのスケジュール" + +#: api/views.py:1180 api/views.py:2270 api/views.py:3276 +msgid "Cannot delete job resource when associated workflow job is running." +msgstr "関連付けられたワークフロージョブが実行中の場合、ジョブリソースを削除できません。" + +#: api/views.py:1257 +msgid "Me" +msgstr "自分" + +#: api/views.py:1303 api/views.py:4036 +msgid "You may not perform any action with your own admin_role." +msgstr "独自の admin_role でアクションを実行することはできません。" + +#: api/views.py:1309 api/views.py:4040 +msgid "You may not change the membership of a users admin_role" +msgstr "ユーザーの admin_role のメンバーシップを変更することはできません" + +#: api/views.py:1314 api/views.py:4045 +msgid "" +"You cannot grant credential access to a user not in the credentials' " +"organization" +msgstr "認証情報の組織に属さないユーザーに認証情報のアクセス権を付与することはできません" + +#: api/views.py:1318 api/views.py:4049 +msgid "You cannot grant private credential access to another user" +msgstr "非公開の認証情報のアクセス権を別のユーザーに付与することはできません" + +#: api/views.py:1416 +#, python-format +msgid "Cannot change %s." +msgstr "%s を変更できません。" + +#: api/views.py:1422 +msgid "Cannot delete user." +msgstr "ユーザーを削除できません。" + +#: api/views.py:1570 +msgid "Cannot delete inventory script." +msgstr "インベントリースクリプトを削除できません。" + +#: api/views.py:1805 +msgid "Fact not found." +msgstr "ファクトが見つかりませんでした。" + +#: api/views.py:2125 +msgid "Inventory Source List" +msgstr "インベントリーソース一覧" + +#: api/views.py:2153 +msgid "Cannot delete inventory source." 
+msgstr "インベントリーソースを削除できません。" + +#: api/views.py:2161 +msgid "Inventory Source Schedules" +msgstr "インベントリーソースのスケジュール" + +#: api/views.py:2191 +msgid "Notification Templates can only be assigned when source is one of {}." +msgstr "ソースが {} のいずれかである場合、通知テンプレートのみを割り当てることができます。" + +#: api/views.py:2402 +msgid "Job Template Schedules" +msgstr "ジョブテンプレートスケジュール" + +#: api/views.py:2422 api/views.py:2438 +msgid "Your license does not allow adding surveys." +msgstr "お使いのライセンスでは Survey を追加できません。" + +#: api/views.py:2445 +msgid "'name' missing from survey spec." +msgstr "Survey の指定に「name」がありません。" + +#: api/views.py:2447 +msgid "'description' missing from survey spec." +msgstr "Survey の指定に「description」がありません。" + +#: api/views.py:2449 +msgid "'spec' missing from survey spec." +msgstr "Survey の指定に「spec」がありません。" + +#: api/views.py:2451 +msgid "'spec' must be a list of items." +msgstr "「spec」は項目の一覧にする必要があります。" + +#: api/views.py:2453 +msgid "'spec' doesn't contain any items." +msgstr "「spec」には項目が含まれません。" + +#: api/views.py:2459 +#, python-format +msgid "Survey question %s is not a json object." +msgstr "Survey の質問 %s は json オブジェクトではありません。" + +#: api/views.py:2461 +#, python-format +msgid "'type' missing from survey question %s." +msgstr "Survey の質問 %s に「type」がありません。" + +#: api/views.py:2463 +#, python-format +msgid "'question_name' missing from survey question %s." +msgstr "Survey の質問 %s に「question_name」がありません。" + +#: api/views.py:2465 +#, python-format +msgid "'variable' missing from survey question %s." +msgstr "Survey の質問 %s に「variable」がありません。" + +#: api/views.py:2467 +#, python-format +msgid "'variable' '%(item)s' duplicated in survey question %(survey)s." +msgstr "Survey の質問%(survey)s で「variable」の「%(item)s」が重複しています。" + +#: api/views.py:2472 +#, python-format +msgid "'required' missing from survey question %s." +msgstr "Survey の質問 %s に「required」がありません。" + +#: api/views.py:2683 +msgid "No matching host could be found!" +msgstr "一致するホストが見つかりませんでした!" 
+ +#: api/views.py:2686 +msgid "Multiple hosts matched the request!" +msgstr "複数のホストが要求に一致しました!" + +#: api/views.py:2691 +msgid "Cannot start automatically, user input required!" +msgstr "自動的に開始できません。ユーザー入力が必要です!" + +#: api/views.py:2698 +msgid "Host callback job already pending." +msgstr "ホストのコールバックジョブがすでに保留中です。" + +#: api/views.py:2711 +msgid "Error starting job!" +msgstr "ジョブの開始時にエラーが発生しました!" + +#: api/views.py:3040 +msgid "Workflow Job Template Schedules" +msgstr "ワークフロージョブテンプレートのスケジュール" + +#: api/views.py:3175 api/views.py:3714 +msgid "Superuser privileges needed." +msgstr "スーパーユーザー権限が必要です。" + +#: api/views.py:3207 +msgid "System Job Template Schedules" +msgstr "システムジョブテンプレートのスケジュール" + +#: api/views.py:3399 +msgid "Job Host Summaries List" +msgstr "ジョブホスト概要一覧" + +#: api/views.py:3441 +msgid "Job Event Children List" +msgstr "ジョブイベント子一覧" + +#: api/views.py:3450 +msgid "Job Event Hosts List" +msgstr "ジョブイベントホスト一覧" + +#: api/views.py:3459 +msgid "Job Events List" +msgstr "ジョブイベント一覧" + +#: api/views.py:3668 +msgid "Ad Hoc Command Events List" +msgstr "アドホックコマンドイベント一覧" + +#: api/views.py:3862 +#, python-format +msgid "Error generating stdout download file: %s" +msgstr "stdout ダウンロードファイルの生成中にエラーが発生しました: %s" + +#: api/views.py:3907 +msgid "Delete not allowed while there are pending notifications" +msgstr "保留中の通知がある場合に削除は許可されません" + +#: api/views.py:3914 +msgid "Notification Template Test" +msgstr "" + +#: api/views.py:4030 +msgid "User 'id' field is missing." +msgstr "ユーザー「id」フィールドがありません。" + +#: api/views.py:4073 +msgid "Team 'id' field is missing." 
+msgstr "チーム「id」フィールドがありません。" + +#: conf/conf.py:20 +msgid "Bud Frogs" +msgstr "Bud Frogs" + +#: conf/conf.py:21 +msgid "Bunny" +msgstr "Bunny" + +#: conf/conf.py:22 +msgid "Cheese" +msgstr "Cheese" + +#: conf/conf.py:23 +msgid "Daemon" +msgstr "Daemon" + +#: conf/conf.py:24 +msgid "Default Cow" +msgstr "Default Cow" + +#: conf/conf.py:25 +msgid "Dragon" +msgstr "Dragon" + +#: conf/conf.py:26 +msgid "Elephant in Snake" +msgstr "Elephant in Snake" + +#: conf/conf.py:27 +msgid "Elephant" +msgstr "Elephant" + +#: conf/conf.py:28 +msgid "Eyes" +msgstr "Eyes" + +#: conf/conf.py:29 +msgid "Hello Kitty" +msgstr "Hello Kitty" + +#: conf/conf.py:30 +msgid "Kitty" +msgstr "Kitty" + +#: conf/conf.py:31 +msgid "Luke Koala" +msgstr "Luke Koala" + +#: conf/conf.py:32 +msgid "Meow" +msgstr "Meow" + +#: conf/conf.py:33 +msgid "Milk" +msgstr "Milk" + +#: conf/conf.py:34 +msgid "Moofasa" +msgstr "Moofasa" + +#: conf/conf.py:35 +msgid "Moose" +msgstr "Moose" + +#: conf/conf.py:36 +msgid "Ren" +msgstr "Ren" + +#: conf/conf.py:37 +msgid "Sheep" +msgstr "Sheep" + +#: conf/conf.py:38 +msgid "Small Cow" +msgstr "Small Cow" + +#: conf/conf.py:39 +msgid "Stegosaurus" +msgstr "Stegosaurus" + +#: conf/conf.py:40 +msgid "Stimpy" +msgstr "Stimpy" + +#: conf/conf.py:41 +msgid "Super Milker" +msgstr "Super Milker" + +#: conf/conf.py:42 +msgid "Three Eyes" +msgstr "Three Eyes" + +#: conf/conf.py:43 +msgid "Turkey" +msgstr "Turkey" + +#: conf/conf.py:44 +msgid "Turtle" +msgstr "Turtle" + +#: conf/conf.py:45 +msgid "Tux" +msgstr "Tux" + +#: conf/conf.py:46 +msgid "Udder" +msgstr "Udder" + +#: conf/conf.py:47 +msgid "Vader Koala" +msgstr "Vader Koala" + +#: conf/conf.py:48 +msgid "Vader" +msgstr "Vader" + +#: conf/conf.py:49 +msgid "WWW" +msgstr "WWW" + +#: conf/conf.py:52 +msgid "Cow Selection" +msgstr "Cow Selection" + +#: conf/conf.py:53 +msgid "Select which cow to use with cowsay when running jobs." 
+msgstr "ジョブの実行時に cowsay で使用する cow を選択します。" + +#: conf/conf.py:54 conf/conf.py:75 +msgid "Cows" +msgstr "Cows" + +#: conf/conf.py:73 +msgid "Example Read-Only Setting" +msgstr "読み取り専用設定の例" + +#: conf/conf.py:74 +msgid "Example setting that cannot be changed." +msgstr "変更不可能な設定例" + +#: conf/conf.py:93 +msgid "Example Setting" +msgstr "設定例" + +#: conf/conf.py:94 +msgid "Example setting which can be different for each user." +msgstr "ユーザーごとに異なる設定例" + +#: conf/conf.py:95 conf/registry.py:67 conf/views.py:46 +msgid "User" +msgstr "ユーザー" + +#: conf/fields.py:38 +msgid "Enter a valid URL" +msgstr "無効な URL の入力" + +#: conf/license.py:19 +msgid "Your Tower license does not allow that." +msgstr "お使いの Tower ライセンスではこれを許可しません。" + +#: conf/management/commands/migrate_to_database_settings.py:41 +msgid "Only show which settings would be commented/migrated." +msgstr "コメント/移行する設定についてのみ表示します。" + +#: conf/management/commands/migrate_to_database_settings.py:48 +msgid "" +"Skip over settings that would raise an error when commenting/migrating." +msgstr "コメント/移行時にエラーを発生させる設定をスキップします。" + +#: conf/management/commands/migrate_to_database_settings.py:55 +msgid "Skip commenting out settings in files." +msgstr "ファイル内の設定のコメント化をスキップします。" + +#: conf/management/commands/migrate_to_database_settings.py:61 +msgid "Backup existing settings files with this suffix." +msgstr "この接尾辞を持つ既存の設定ファイルをバックアップします。" + +#: conf/registry.py:55 +msgid "All" +msgstr "すべて" + +#: conf/registry.py:56 +msgid "Changed" +msgstr "変更済み" + +#: conf/registry.py:68 +msgid "User-Defaults" +msgstr "ユーザー設定" + +#: conf/views.py:38 +msgid "Setting Categories" +msgstr "設定カテゴリー" + +#: conf/views.py:61 +msgid "Setting Detail" +msgstr "設定の詳細" + +#: main/access.py:255 +#, python-format +msgid "Bad data found in related field %s." +msgstr "関連フィールド %s に不正データが見つかりました。" + +#: main/access.py:296 +msgid "License is missing." +msgstr "ライセンスが見つかりません。" + +#: main/access.py:298 +msgid "License has expired." 
+msgstr "ライセンスの有効期限が切れました。" + +#: main/access.py:303 +#, python-format +msgid "License count of %s instances has been reached." +msgstr "%s インスタンスのライセンス数に達しました。" + +#: main/access.py:305 +#, python-format +msgid "License count of %s instances has been exceeded." +msgstr "%s インスタンスのライセンス数を超えました。" + +#: main/access.py:307 +msgid "Host count exceeds available instances." +msgstr "ホスト数が利用可能なインスタンスの上限を上回っています。" + +#: main/access.py:311 +#, python-format +msgid "Feature %s is not enabled in the active license." +msgstr "機能 %s はアクティブなライセンスで有効にされていません。" + +#: main/access.py:313 +msgid "Features not found in active license." +msgstr "各種機能はアクティブなライセンスにありません。" + +#: main/access.py:511 main/access.py:578 main/access.py:698 main/access.py:961 +#: main/access.py:1200 main/access.py:1597 +msgid "Resource is being used by running jobs" +msgstr "リソースが実行中のジョブで使用されています" + +#: main/access.py:622 +msgid "Unable to change inventory on a host." +msgstr "ホストのインベントリーを変更できません。" + +#: main/access.py:634 main/access.py:679 +msgid "Cannot associate two items from different inventories." +msgstr "異なるインベントリーの 2 つの項目を関連付けることはできません。" + +#: main/access.py:667 +msgid "Unable to change inventory on a group." +msgstr "グループのインベントリーを変更できません。" + +#: main/access.py:881 +msgid "Unable to change organization on a team." +msgstr "チームの組織を変更できません。" + +#: main/access.py:894 +msgid "The {} role cannot be assigned to a team" +msgstr "{} ロールをチームに割り当てることができません" + +#: main/access.py:896 +msgid "The admin_role for a User cannot be assigned to a team" +msgstr "ユーザーの admin_role をチームに割り当てることができません" + +#: main/access.py:1670 +msgid "" +"You do not have permission to the workflow job resources required for " +"relaunch." +msgstr "" + +#: main/apps.py:9 +msgid "Main" +msgstr "メイン" + +#: main/conf.py:17 +msgid "Enable Activity Stream" +msgstr "アクティビティーストリームの有効化" + +#: main/conf.py:18 +msgid "Enable capturing activity for the Tower activity stream." 
+msgstr "Tower アクティビティーストリームのアクティビティーのキャプチャーを有効にします。" + +#: main/conf.py:19 main/conf.py:29 main/conf.py:39 main/conf.py:48 +#: main/conf.py:60 main/conf.py:78 main/conf.py:103 +msgid "System" +msgstr "システム" + +#: main/conf.py:27 +msgid "Enable Activity Stream for Inventory Sync" +msgstr "インベントリー同期のアクティビティティーストリームの有効化" + +#: main/conf.py:28 +msgid "" +"Enable capturing activity for the Tower activity stream when running " +"inventory sync." +msgstr "インベントリー同期の実行時に Tower アクティビティーストリームのアクティビティーのキャプチャーを有効にします。" + +#: main/conf.py:37 +msgid "All Users Visible to Organization Admins" +msgstr "組織管理者に表示されるすべてのユーザー" + +#: main/conf.py:38 +msgid "" +"Controls whether any Organization Admin can view all users, even those not " +"associated with their Organization." +msgstr "組織管理者が、それぞれの組織に関連付けられていないすべてのユーザーを閲覧できるかどうかを制御します。" + +#: main/conf.py:46 +msgid "Enable Tower Administrator Alerts" +msgstr "Tower 管理者アラートの有効化" + +#: main/conf.py:47 +msgid "" +"Allow Tower to email Admin users for system events that may require " +"attention." +msgstr "Tower から管理者ユーザーに対し、注意を要する可能性のあるシステムイベントについてのメールを送信することを許可します。" + +#: main/conf.py:57 +msgid "Base URL of the Tower host" +msgstr "Tower ホストのベース URL" + +#: main/conf.py:58 +msgid "" +"This setting is used by services like notifications to render a valid url to" +" the Tower host." +msgstr "この設定は、有効な URL を Tower ホストにレンダリングする通知などのサービスで使用されます。" + +#: main/conf.py:67 +msgid "Remote Host Headers" +msgstr "リモートホストヘッダー" + +#: main/conf.py:68 +msgid "" +"HTTP headers and meta keys to search to determine remote host name or IP. 
Add additional items to this list, such as \"HTTP_X_FORWARDED_FOR\", if behind a reverse proxy.\n" +"\n" +"Note: The headers will be searched in order and the first found remote host name or IP will be used.\n" +"\n" +"In the below example 8.8.8.7 would be the chosen IP address.\n" +"X-Forwarded-For: 8.8.8.7, 192.168.2.1, 127.0.0.1\n" +"Host: 127.0.0.1\n" +"REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']" +msgstr "" +"リモートホスト名または IP を判別するために検索する HTTP ヘッダーおよびメタキーです。リバースプロキシーの後ろの場合は、\"HTTP_X_FORWARDED_FOR\" のように項目をこの一覧に追加します。\n" +"\n" +"注: ヘッダーが順番に検索され、最初に検出されるリモートホスト名または IP が使用されます。\n" +"\n" +"以下の例では、8.8.8.7 が選択された IP アドレスになります。\n" +"X-Forwarded-For: 8.8.8.7, 192.168.2.1, 127.0.0.1\n" +"Host: 127.0.0.1\n" +"REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']" + +#: main/conf.py:99 +msgid "Tower License" +msgstr "Tower ライセンス" + +#: main/conf.py:100 +msgid "" +"The license controls which features and functionality are enabled in Tower. " +"Use /api/v1/config/ to update or change the license." +msgstr "" +"ライセンスによって、Tower で有効にされる特長および機能が制御されます。ライセンスを更新または変更するには、/api/v1/config/ " +"を使用します。" + +#: main/conf.py:110 +msgid "Ansible Modules Allowed for Ad Hoc Jobs" +msgstr "アドホックジョブで許可される Ansible モジュール" + +#: main/conf.py:111 +msgid "List of modules allowed to be used by ad-hoc jobs." +msgstr "アドホックジョブで使用できるモジュール一覧。" + +#: main/conf.py:112 main/conf.py:121 main/conf.py:130 main/conf.py:140 +#: main/conf.py:150 main/conf.py:160 main/conf.py:170 main/conf.py:180 +#: main/conf.py:190 main/conf.py:202 main/conf.py:214 main/conf.py:226 +msgid "Jobs" +msgstr "ジョブ" + +#: main/conf.py:119 +msgid "Enable job isolation" +msgstr "ジョブの分離の有効化" + +#: main/conf.py:120 +msgid "" +"Isolates an Ansible job from protected parts of the Tower system to prevent " +"exposing sensitive information." 
+msgstr "機密情報の公開を防ぐために Tower システムの保護された部分から Ansible ジョブを分離します。" + +#: main/conf.py:128 +msgid "Job isolation execution path" +msgstr "ジョブ分離の実行パス" + +#: main/conf.py:129 +msgid "" +"Create temporary working directories for isolated jobs in this location." +msgstr "この場所に分離されたジョブの一時作業ディレクトリーを作成します。" + +#: main/conf.py:138 +msgid "Paths to hide from isolated jobs" +msgstr "分離されたジョブから非表示にするパス" + +#: main/conf.py:139 +msgid "Additional paths to hide from isolated processes." +msgstr "分離されたプロセスから非表示にする追加パス。" + +#: main/conf.py:148 +msgid "Paths to expose to isolated jobs" +msgstr "分離されたジョブに公開するパス" + +#: main/conf.py:149 +msgid "" +"Whitelist of paths that would otherwise be hidden to expose to isolated " +"jobs." +msgstr "分離されたジョブに公開されないように非表示にされることがあるパスのホワイトリスト。" + +#: main/conf.py:158 +msgid "Standard Output Maximum Display Size" +msgstr "標準出力の最大表示サイズ" + +#: main/conf.py:159 +msgid "" +"Maximum Size of Standard Output in bytes to display before requiring the " +"output be downloaded." +msgstr "出力のダウンロードを要求する前に表示される標準出力の最大サイズ (バイト単位)。" + +#: main/conf.py:168 +msgid "Job Event Standard Output Maximum Display Size" +msgstr "ジョブイベントの標準出力の最大表示サイズ" + +#: main/conf.py:169 +msgid "" +"Maximum Size of Standard Output in bytes to display for a single job or ad " +"hoc command event. `stdout` will end with `…` when truncated." +msgstr "" +"単一ジョブまたはアドホックコマンドイベントについて表示される標準出力の最大サイズ (バイト単位)。`stdout` は切り捨てが実行されると `…` " +"で終了します。" + +#: main/conf.py:178 +msgid "Maximum Scheduled Jobs" +msgstr "スケジュール済みジョブの最大数" + +#: main/conf.py:179 +msgid "" +"Maximum number of the same job template that can be waiting to run when " +"launching from a schedule before no more are created." +msgstr "スケジュールからの起動時に実行を待機している同じジョブテンプレートの最大数です (これ以上作成されることはありません)。" + +#: main/conf.py:188 +msgid "Ansible Callback Plugins" +msgstr "Ansible コールバックプラグイン" + +#: main/conf.py:189 +msgid "" +"List of paths to search for extra callback plugins to be used when running " +"jobs." 
+msgstr "ジョブの実行時に使用される追加のコールバックプラグインについて検索するパスの一覧。" + +#: main/conf.py:199 +msgid "Default Job Timeout" +msgstr "デフォルトのジョブタイムアウト" + +#: main/conf.py:200 +msgid "" +"Maximum time to allow jobs to run. Use value of 0 to indicate that no " +"timeout should be imposed. A timeout set on an individual job template will " +"override this." +msgstr "" +"ジョブの実行可能な最大時間。値 0 " +"が使用されている場合はタイムアウトを設定できないことを示します。個別のジョブテンプレートに設定されるタイムアウトはこれを上書きします。" + +#: main/conf.py:211 +msgid "Default Inventory Update Timeout" +msgstr "デフォルトのインベントリー更新タイムアウト" + +#: main/conf.py:212 +msgid "" +"Maximum time to allow inventory updates to run. Use value of 0 to indicate " +"that no timeout should be imposed. A timeout set on an individual inventory " +"source will override this." +msgstr "" +"インベントリー更新の実行可能な最大時間。値 0 " +"が設定されている場合はタイムアウトを設定できないことを示します。個別のインベントリーソースに設定されるタイムアウトはこれを上書きします。" + +#: main/conf.py:223 +msgid "Default Project Update Timeout" +msgstr "デフォルトのプロジェクト更新タイムアウト" + +#: main/conf.py:224 +msgid "" +"Maximum time to allow project updates to run. Use value of 0 to indicate " +"that no timeout should be imposed. A timeout set on an individual project " +"will override this." +msgstr "" +"プロジェクト更新の実行可能な最大時間。値 0 " +"が設定されている場合はタイムアウトを設定できないことを示します。個別のプロジェクトに設定されるタイムアウトはこれを上書きします。" + +#: main/conf.py:234 +msgid "Logging Aggregator" +msgstr "" + +#: main/conf.py:235 +msgid "Hostname/IP where external logs will be sent to." +msgstr "" + +#: main/conf.py:236 main/conf.py:245 main/conf.py:255 main/conf.py:264 +#: main/conf.py:274 main/conf.py:288 main/conf.py:300 main/conf.py:309 +msgid "Logging" +msgstr "ロギング" + +#: main/conf.py:243 +msgid "Logging Aggregator Port" +msgstr "" + +#: main/conf.py:244 +msgid "Port on Logging Aggregator to send logs to (if required)." +msgstr "" + +#: main/conf.py:253 +msgid "Logging Aggregator Type" +msgstr "" + +#: main/conf.py:254 +msgid "Format messages for the chosen log aggregator." 
+msgstr "" + +#: main/conf.py:262 +msgid "Logging Aggregator Username" +msgstr "" + +#: main/conf.py:263 +msgid "Username for external log aggregator (if required)." +msgstr "" + +#: main/conf.py:272 +msgid "Logging Aggregator Password/Token" +msgstr "" + +#: main/conf.py:273 +msgid "" +"Password or authentication token for external log aggregator (if required)." +msgstr "" + +#: main/conf.py:281 +msgid "Loggers to send data to the log aggregator from" +msgstr "ログアグリゲーターにデータを送信するロガー" + +#: main/conf.py:282 +msgid "" +"List of loggers that will send HTTP logs to the collector, these can include any or all of: \n" +"awx - Tower service logs\n" +"activity_stream - activity stream records\n" +"job_events - callback data from Ansible job events\n" +"system_tracking - facts gathered from scan jobs." +msgstr "" + +#: main/conf.py:295 +msgid "Log System Tracking Facts Individually" +msgstr "" + +#: main/conf.py:296 +msgid "" +"If set, system tracking facts will be sent for each package, service, " +"orother item found in a scan, allowing for greater search query granularity." +" If unset, facts will be sent as a single dictionary, allowing for greater " +"efficiency in fact processing." +msgstr "" + +#: main/conf.py:307 +msgid "Enable External Logging" +msgstr "" + +#: main/conf.py:308 +msgid "Enable sending logs to external log aggregator." +msgstr "" + +#: main/models/activity_stream.py:22 +msgid "Entity Created" +msgstr "エンティティーの作成" + +#: main/models/activity_stream.py:23 +msgid "Entity Updated" +msgstr "エンティティーの更新" + +#: main/models/activity_stream.py:24 +msgid "Entity Deleted" +msgstr "エンティティーの削除" + +#: main/models/activity_stream.py:25 +msgid "Entity Associated with another Entity" +msgstr "エンティティーの別のエンティティーへの関連付け" + +#: main/models/activity_stream.py:26 +msgid "Entity was Disassociated with another Entity" +msgstr "エンティティーの別のエンティティーとの関連付けの解除" + +#: main/models/ad_hoc_commands.py:96 +msgid "No valid inventory." 
+msgstr "有効なインベントリーはありません。" + +#: main/models/ad_hoc_commands.py:103 main/models/jobs.py:161 +msgid "You must provide a machine / SSH credential." +msgstr "マシン/SSH 認証情報を入力してください。" + +#: main/models/ad_hoc_commands.py:114 main/models/ad_hoc_commands.py:122 +msgid "Invalid type for ad hoc command" +msgstr "アドホックコマンドの無効なタイプ" + +#: main/models/ad_hoc_commands.py:117 +msgid "Unsupported module for ad hoc commands." +msgstr "アドホックコマンドのサポートされていないモジュール。" + +#: main/models/ad_hoc_commands.py:125 +#, python-format +msgid "No argument passed to %s module." +msgstr "%s モジュールに渡される引数はありません。" + +#: main/models/ad_hoc_commands.py:222 main/models/jobs.py:763 +msgid "Host Failed" +msgstr "ホストの失敗" + +#: main/models/ad_hoc_commands.py:223 main/models/jobs.py:764 +msgid "Host OK" +msgstr "ホスト OK" + +#: main/models/ad_hoc_commands.py:224 main/models/jobs.py:767 +msgid "Host Unreachable" +msgstr "ホストに到達できません" + +#: main/models/ad_hoc_commands.py:229 main/models/jobs.py:766 +msgid "Host Skipped" +msgstr "ホストがスキップされました" + +#: main/models/ad_hoc_commands.py:239 main/models/jobs.py:794 +msgid "Debug" +msgstr "デバッグ" + +#: main/models/ad_hoc_commands.py:240 main/models/jobs.py:795 +msgid "Verbose" +msgstr "詳細" + +#: main/models/ad_hoc_commands.py:241 main/models/jobs.py:796 +msgid "Deprecated" +msgstr "非推奨" + +#: main/models/ad_hoc_commands.py:242 main/models/jobs.py:797 +msgid "Warning" +msgstr "警告" + +#: main/models/ad_hoc_commands.py:243 main/models/jobs.py:798 +msgid "System Warning" +msgstr "システム警告" + +#: main/models/ad_hoc_commands.py:244 main/models/jobs.py:799 +#: main/models/unified_jobs.py:64 +msgid "Error" +msgstr "エラー" + +#: main/models/base.py:45 main/models/base.py:51 main/models/base.py:56 +msgid "Run" +msgstr "実行" + +#: main/models/base.py:46 main/models/base.py:52 main/models/base.py:57 +msgid "Check" +msgstr "チェック" + +#: main/models/base.py:47 +msgid "Scan" +msgstr "スキャン" + +#: main/models/base.py:61 +msgid "Read Inventory" +msgstr "インベントリーの読み取り" + +#: main/models/base.py:62 
+msgid "Edit Inventory" +msgstr "インベントリーの編集" + +#: main/models/base.py:63 +msgid "Administrate Inventory" +msgstr "インベントリーの管理" + +#: main/models/base.py:64 +msgid "Deploy To Inventory" +msgstr "インベントリーへのデプロイ" + +#: main/models/base.py:65 +msgid "Deploy To Inventory (Dry Run)" +msgstr "インベントリーへのデプロイ (ドライラン)" + +#: main/models/base.py:66 +msgid "Scan an Inventory" +msgstr "インベントリーのスキャン" + +#: main/models/base.py:67 +msgid "Create a Job Template" +msgstr "ジョブテンプレートの作成" + +#: main/models/credential.py:33 +msgid "Machine" +msgstr "マシン" + +#: main/models/credential.py:34 +msgid "Network" +msgstr "ネットワーク" + +#: main/models/credential.py:35 +msgid "Source Control" +msgstr "ソースコントロール" + +#: main/models/credential.py:36 +msgid "Amazon Web Services" +msgstr "Amazon Web サービス" + +#: main/models/credential.py:37 +msgid "Rackspace" +msgstr "Rackspace" + +#: main/models/credential.py:38 main/models/inventory.py:713 +msgid "VMware vCenter" +msgstr "VMware vCenter" + +#: main/models/credential.py:39 main/models/inventory.py:714 +msgid "Red Hat Satellite 6" +msgstr "Red Hat Satellite 6" + +#: main/models/credential.py:40 main/models/inventory.py:715 +msgid "Red Hat CloudForms" +msgstr "Red Hat CloudForms" + +#: main/models/credential.py:41 main/models/inventory.py:710 +msgid "Google Compute Engine" +msgstr "Google Compute Engine" + +#: main/models/credential.py:42 main/models/inventory.py:711 +msgid "Microsoft Azure Classic (deprecated)" +msgstr "Microsoft Azure Classic (非推奨)" + +#: main/models/credential.py:43 main/models/inventory.py:712 +msgid "Microsoft Azure Resource Manager" +msgstr "Microsoft Azure Resource Manager" + +#: main/models/credential.py:44 main/models/inventory.py:716 +msgid "OpenStack" +msgstr "OpenStack" + +#: main/models/credential.py:48 +msgid "None" +msgstr "なし" + +#: main/models/credential.py:49 +msgid "Sudo" +msgstr "Sudo" + +#: main/models/credential.py:50 +msgid "Su" +msgstr "Su" + +#: main/models/credential.py:51 +msgid "Pbrun" +msgstr "Pbrun" + +#: 
main/models/credential.py:52 +msgid "Pfexec" +msgstr "Pfexec" + +#: main/models/credential.py:53 +msgid "DZDO" +msgstr "" + +#: main/models/credential.py:54 +msgid "Pmrun" +msgstr "" + +#: main/models/credential.py:103 +msgid "Host" +msgstr "ホスト" + +#: main/models/credential.py:104 +msgid "The hostname or IP address to use." +msgstr "使用するホスト名または IP アドレス。" + +#: main/models/credential.py:110 +msgid "Username" +msgstr "ユーザー名" + +#: main/models/credential.py:111 +msgid "Username for this credential." +msgstr "この認証情報のユーザー名。" + +#: main/models/credential.py:117 +msgid "Password" +msgstr "パスワード" + +#: main/models/credential.py:118 +msgid "" +"Password for this credential (or \"ASK\" to prompt the user for machine " +"credentials)." +msgstr "この認証情報のパスワード (またはマシンの認証情報を求めるプロンプトを出すには 「ASK」)。" + +#: main/models/credential.py:125 +msgid "Security Token" +msgstr "セキュリティートークン" + +#: main/models/credential.py:126 +msgid "Security Token for this credential" +msgstr "この認証情報のセキュリティートークン" + +#: main/models/credential.py:132 +msgid "Project" +msgstr "プロジェクト" + +#: main/models/credential.py:133 +msgid "The identifier for the project." +msgstr "プロジェクトの識別子。" + +#: main/models/credential.py:139 +msgid "Domain" +msgstr "ドメイン" + +#: main/models/credential.py:140 +msgid "The identifier for the domain." +msgstr "ドメインの識別子。" + +#: main/models/credential.py:145 +msgid "SSH private key" +msgstr "SSH 秘密鍵" + +#: main/models/credential.py:146 +msgid "RSA or DSA private key to be used instead of password." +msgstr "パスワードの代わりに使用される RSA または DSA 秘密鍵。" + +#: main/models/credential.py:152 +msgid "SSH key unlock" +msgstr "SSH キーのロック解除" + +#: main/models/credential.py:153 +msgid "" +"Passphrase to unlock SSH private key if encrypted (or \"ASK\" to prompt the " +"user for machine credentials)." +msgstr "" +"暗号化されている場合は SSH 秘密鍵のロックを解除するためのパスフレーズ (またはマシンの認証情報を求めるプロンプトを出すには「ASK」)。" + +#: main/models/credential.py:161 +msgid "Privilege escalation method." 
+msgstr "権限昇格メソッド。" + +#: main/models/credential.py:167 +msgid "Privilege escalation username." +msgstr "権限昇格ユーザー名。" + +#: main/models/credential.py:173 +msgid "Password for privilege escalation method." +msgstr "権限昇格メソッドのパスワード。" + +#: main/models/credential.py:179 +msgid "Vault password (or \"ASK\" to prompt the user)." +msgstr "Vault パスワード (またはユーザーにプロンプトを出すには「ASK」)。" + +#: main/models/credential.py:183 +msgid "Whether to use the authorize mechanism." +msgstr "承認メカニズムを使用するかどうか。" + +#: main/models/credential.py:189 +msgid "Password used by the authorize mechanism." +msgstr "承認メカニズムで使用されるパスワード。" + +#: main/models/credential.py:195 +msgid "Client Id or Application Id for the credential" +msgstr "認証情報のクライアント ID またはアプリケーション ID" + +#: main/models/credential.py:201 +msgid "Secret Token for this credential" +msgstr "この認証情報のシークレットトークン" + +#: main/models/credential.py:207 +msgid "Subscription identifier for this credential" +msgstr "この認証情報のサブスクリプション識別子" + +#: main/models/credential.py:213 +msgid "Tenant identifier for this credential" +msgstr "この認証情報のテナント識別子" + +#: main/models/credential.py:283 +msgid "Host required for VMware credential." +msgstr "VMware 認証情報に必要なホスト。" + +#: main/models/credential.py:285 +msgid "Host required for OpenStack credential." +msgstr "OpenStack 認証情報に必要なホスト。" + +#: main/models/credential.py:294 +msgid "Access key required for AWS credential." +msgstr "AWS 認証情報に必要なアクセスキー。" + +#: main/models/credential.py:296 +msgid "Username required for Rackspace credential." +msgstr "Rackspace 認証情報に必要なユーザー名。" + +#: main/models/credential.py:299 +msgid "Username required for VMware credential." +msgstr "VMware 認証情報に必要なユーザー名。" + +#: main/models/credential.py:301 +msgid "Username required for OpenStack credential." +msgstr "OpenStack 認証情報に必要なユーザー名。" + +#: main/models/credential.py:307 +msgid "Secret key required for AWS credential." +msgstr "AWS 認証情報に必要なシークレットキー。" + +#: main/models/credential.py:309 +msgid "API key required for Rackspace credential." 
+msgstr "Rackspace 認証情報に必要な API キー。" + +#: main/models/credential.py:311 +msgid "Password required for VMware credential." +msgstr "VMware 認証情報に必要なパスワード。" + +#: main/models/credential.py:313 +msgid "Password or API key required for OpenStack credential." +msgstr "OpenStack 認証情報に必要なパスワードまたは API キー。" + +#: main/models/credential.py:319 +msgid "Project name required for OpenStack credential." +msgstr "OpenStack 認証情報に必要なプロジェクト名。" + +#: main/models/credential.py:346 +msgid "SSH key unlock must be set when SSH key is encrypted." +msgstr "SSH キーの暗号化時に SSH キーのロック解除を設定する必要があります。" + +#: main/models/credential.py:352 +msgid "Credential cannot be assigned to both a user and team." +msgstr "認証情報はユーザーとチームの両方に割り当てることができません。" + +#: main/models/fact.py:21 +msgid "Host for the facts that the fact scan captured." +msgstr "ファクトスキャンがキャプチャーしたファクトのホスト。" + +#: main/models/fact.py:26 +msgid "Date and time of the corresponding fact scan gathering time." +msgstr "対応するファクトスキャン収集時間の日時。" + +#: main/models/fact.py:29 +msgid "" +"Arbitrary JSON structure of module facts captured at timestamp for a single " +"host." +msgstr "単一ホストのタイムスタンプでキャプチャーされるモジュールファクトの任意の JSON 構造。" + +#: main/models/inventory.py:45 +msgid "inventories" +msgstr "インベントリー" + +#: main/models/inventory.py:52 +msgid "Organization containing this inventory." +msgstr "このインベントリーを含む組織。" + +#: main/models/inventory.py:58 +msgid "Inventory variables in JSON or YAML format." +msgstr "JSON または YAML 形式のインベントリー変数。" + +#: main/models/inventory.py:63 +msgid "Flag indicating whether any hosts in this inventory have failed." +msgstr "このインベントリーのホストが失敗したかどうかを示すフラグ。" + +#: main/models/inventory.py:68 +msgid "Total number of hosts in this inventory." +msgstr "このインべントリー内のホストの合計数。" + +#: main/models/inventory.py:73 +msgid "Number of hosts in this inventory with active failures." +msgstr "アクティブなエラーのあるこのインベントリー内のホストの数。" + +#: main/models/inventory.py:78 +msgid "Total number of groups in this inventory." 
+msgstr "このインべントリー内のグループの合計数。" + +#: main/models/inventory.py:83 +msgid "Number of groups in this inventory with active failures." +msgstr "アクティブなエラーのあるこのインベントリー内のグループの数。" + +#: main/models/inventory.py:88 +msgid "" +"Flag indicating whether this inventory has any external inventory sources." +msgstr "このインベントリーに外部のインベントリーソースがあるかどうかを示すフラグ。" + +#: main/models/inventory.py:93 +msgid "" +"Total number of external inventory sources configured within this inventory." +msgstr "このインベントリー内で設定される外部インベントリーソースの合計数。" + +#: main/models/inventory.py:98 +msgid "Number of external inventory sources in this inventory with failures." +msgstr "エラーのあるこのインベントリー内の外部インベントリーソースの数。" + +#: main/models/inventory.py:339 +msgid "Is this host online and available for running jobs?" +msgstr "このホストはオンラインで、ジョブを実行するために利用できますか?" + +#: main/models/inventory.py:345 +msgid "" +"The value used by the remote inventory source to uniquely identify the host" +msgstr "ホストを一意に識別するためにリモートインベントリーソースで使用される値" + +#: main/models/inventory.py:350 +msgid "Host variables in JSON or YAML format." +msgstr "JSON または YAML 形式のホスト変数。" + +#: main/models/inventory.py:372 +msgid "Flag indicating whether the last job failed for this host." +msgstr "このホストの最後のジョブが失敗したかどうかを示すフラグ。" + +#: main/models/inventory.py:377 +msgid "" +"Flag indicating whether this host was created/updated from any external " +"inventory sources." +msgstr "このホストが外部インベントリーソースから作成/更新されたかどうかを示すフラグ。" + +#: main/models/inventory.py:383 +msgid "Inventory source(s) that created or modified this host." +msgstr "このホストを作成または変更したインベントリーソース。" + +#: main/models/inventory.py:474 +msgid "Group variables in JSON or YAML format." +msgstr "JSON または YAML 形式のグループ変数。" + +#: main/models/inventory.py:480 +msgid "Hosts associated directly with this group." +msgstr "このグループに直接関連付けられたホスト。" + +#: main/models/inventory.py:485 +msgid "Total number of hosts directly or indirectly in this group." 
+msgstr "このグループに直接的または間接的に属するホストの合計数。" + +#: main/models/inventory.py:490 +msgid "Flag indicating whether this group has any hosts with active failures." +msgstr "このグループにアクティブなエラーのあるホストがあるかどうかを示すフラグ。" + +#: main/models/inventory.py:495 +msgid "Number of hosts in this group with active failures." +msgstr "アクティブなエラーのあるこのグループ内のホストの数。" + +#: main/models/inventory.py:500 +msgid "Total number of child groups contained within this group." +msgstr "このグループに含まれる子グループの合計数。" + +#: main/models/inventory.py:505 +msgid "Number of child groups within this group that have active failures." +msgstr "アクティブなエラーのあるこのグループ内の子グループの数。" + +#: main/models/inventory.py:510 +msgid "" +"Flag indicating whether this group was created/updated from any external " +"inventory sources." +msgstr "このグループが外部インベントリーソースから作成/更新されたかどうかを示すフラグ。" + +#: main/models/inventory.py:516 +msgid "Inventory source(s) that created or modified this group." +msgstr "このグループを作成または変更したインベントリーソース。" + +#: main/models/inventory.py:706 main/models/projects.py:42 +#: main/models/unified_jobs.py:402 +msgid "Manual" +msgstr "手動" + +#: main/models/inventory.py:707 +msgid "Local File, Directory or Script" +msgstr "ローカルファイル、ディレクトリーまたはスクリプト" + +#: main/models/inventory.py:708 +msgid "Rackspace Cloud Servers" +msgstr "Rackspace クラウドサーバー" + +#: main/models/inventory.py:709 +msgid "Amazon EC2" +msgstr "Amazon EC2" + +#: main/models/inventory.py:717 +msgid "Custom Script" +msgstr "カスタムスクリプト" + +#: main/models/inventory.py:828 +msgid "Inventory source variables in YAML or JSON format." +msgstr "YAML または JSON 形式のインベントリーソース変数。" + +#: main/models/inventory.py:847 +msgid "" +"Comma-separated list of filter expressions (EC2 only). Hosts are imported " +"when ANY of the filters match." +msgstr "カンマ区切りのフィルター式の一覧 (EC2 のみ) です。ホストは、フィルターのいずれかが一致する場合にインポートされます。" + +#: main/models/inventory.py:853 +msgid "Limit groups automatically created from inventory source (EC2 only)." 
+msgstr "インベントリーソースから自動的に作成されるグループを制限します (EC2 のみ)。" + +#: main/models/inventory.py:857 +msgid "Overwrite local groups and hosts from remote inventory source." +msgstr "リモートインベントリーソースからのローカルグループおよびホストを上書きします。" + +#: main/models/inventory.py:861 +msgid "Overwrite local variables from remote inventory source." +msgstr "リモートインベントリーソースからのローカル変数を上書きします。" + +#: main/models/inventory.py:893 +msgid "Availability Zone" +msgstr "アベイラビリティーゾーン" + +#: main/models/inventory.py:894 +msgid "Image ID" +msgstr "イメージ ID" + +#: main/models/inventory.py:895 +msgid "Instance ID" +msgstr "インスタンス ID" + +#: main/models/inventory.py:896 +msgid "Instance Type" +msgstr "インスタンスタイプ" + +#: main/models/inventory.py:897 +msgid "Key Name" +msgstr "キー名" + +#: main/models/inventory.py:898 +msgid "Region" +msgstr "リージョン" + +#: main/models/inventory.py:899 +msgid "Security Group" +msgstr "セキュリティーグループ" + +#: main/models/inventory.py:900 +msgid "Tags" +msgstr "タグ" + +#: main/models/inventory.py:901 +msgid "VPC ID" +msgstr "VPC ID" + +#: main/models/inventory.py:902 +msgid "Tag None" +msgstr "タグ None" + +#: main/models/inventory.py:973 +#, python-format +msgid "" +"Cloud-based inventory sources (such as %s) require credentials for the " +"matching cloud service." +msgstr "クラウドベースのインベントリーソース (%s など) には一致するクラウドサービスの認証情報が必要です。" + +#: main/models/inventory.py:980 +msgid "Credential is required for a cloud source." +msgstr "認証情報がクラウドソースに必要です。" + +#: main/models/inventory.py:1005 +#, python-format +msgid "Invalid %(source)s region: %(region)s" +msgstr "" + +#: main/models/inventory.py:1030 +#, python-format +msgid "Invalid filter expression: %(filter)s" +msgstr "" + +#: main/models/inventory.py:1048 +#, python-format +msgid "Invalid group by choice: %(choice)s" +msgstr "" + +#: main/models/inventory.py:1195 +#, python-format +msgid "" +"Unable to configure this item for cloud sync. It is already managed by %s." 
+msgstr "クラウド同期用にこの項目を設定できません。すでに %s によって管理されています。" + +#: main/models/inventory.py:1290 +msgid "Inventory script contents" +msgstr "インベントリースクリプトの内容" + +#: main/models/inventory.py:1295 +msgid "Organization owning this inventory script" +msgstr "このインベントリースクリプトを所有する組織" + +#: main/models/jobs.py:169 +msgid "You must provide a network credential." +msgstr "ネットワーク認証情報を指定する必要があります。" + +#: main/models/jobs.py:177 +msgid "" +"Must provide a credential for a cloud provider, such as Amazon Web Services " +"or Rackspace." +msgstr "Amazon Web Services または Rackspace などのクラウドプロバイダーの認証情報を指定する必要があります。" + +#: main/models/jobs.py:269 +msgid "Job Template must provide 'inventory' or allow prompting for it." +msgstr "ジョブテンプレートは「inventory」を指定するか、このプロンプトを許可する必要があります。" + +#: main/models/jobs.py:273 +msgid "Job Template must provide 'credential' or allow prompting for it." +msgstr "ジョブテンプレートは「credential」を指定するか、このプロンプトを許可する必要があります。" + +#: main/models/jobs.py:362 +msgid "Cannot override job_type to or from a scan job." +msgstr "スキャンジョブから/への job_type の上書きは実行できません。" + +#: main/models/jobs.py:365 +msgid "Inventory cannot be changed at runtime for scan jobs." 
+msgstr "インベントリーはスキャンジョブの実行時に変更できません。" + +#: main/models/jobs.py:431 main/models/projects.py:243 +msgid "SCM Revision" +msgstr "SCM リビジョン" + +#: main/models/jobs.py:432 +msgid "The SCM Revision from the Project used for this job, if available" +msgstr "このジョブに使用されるプロジェクトからの SCM リビジョン (ある場合)" + +#: main/models/jobs.py:440 +msgid "" +"The SCM Refresh task used to make sure the playbooks were available for the " +"job run" +msgstr "SCM 更新タスクは、Playbook がジョブの実行で利用可能であったことを確認するために使用されます" + +#: main/models/jobs.py:662 +msgid "job host summaries" +msgstr "ジョブホストの概要" + +#: main/models/jobs.py:765 +msgid "Host Failure" +msgstr "ホストの失敗" + +#: main/models/jobs.py:768 main/models/jobs.py:782 +msgid "No Hosts Remaining" +msgstr "残りのホストがありません" + +#: main/models/jobs.py:769 +msgid "Host Polling" +msgstr "ホストのポーリング" + +#: main/models/jobs.py:770 +msgid "Host Async OK" +msgstr "ホストの非同期 OK" + +#: main/models/jobs.py:771 +msgid "Host Async Failure" +msgstr "ホストの非同期失敗" + +#: main/models/jobs.py:772 +msgid "Item OK" +msgstr "項目 OK" + +#: main/models/jobs.py:773 +msgid "Item Failed" +msgstr "項目の失敗" + +#: main/models/jobs.py:774 +msgid "Item Skipped" +msgstr "項目のスキップ" + +#: main/models/jobs.py:775 +msgid "Host Retry" +msgstr "ホストの再試行" + +#: main/models/jobs.py:777 +msgid "File Difference" +msgstr "ファイルの相違点" + +#: main/models/jobs.py:778 +msgid "Playbook Started" +msgstr "Playbook の開始" + +#: main/models/jobs.py:779 +msgid "Running Handlers" +msgstr "実行中のハンドラー" + +#: main/models/jobs.py:780 +msgid "Including File" +msgstr "組み込みファイル" + +#: main/models/jobs.py:781 +msgid "No Hosts Matched" +msgstr "一致するホストがありません" + +#: main/models/jobs.py:783 +msgid "Task Started" +msgstr "タスクの開始" + +#: main/models/jobs.py:785 +msgid "Variables Prompted" +msgstr "変数のプロンプト" + +#: main/models/jobs.py:786 +msgid "Gathering Facts" +msgstr "ファクトの収集" + +#: main/models/jobs.py:787 +msgid "internal: on Import for Host" +msgstr "内部: ホストのインポート時" + +#: main/models/jobs.py:788 +msgid "internal: on Not Import for Host" 
+msgstr "内部: ホストの非インポート時" + +#: main/models/jobs.py:789 +msgid "Play Started" +msgstr "プレイの開始" + +#: main/models/jobs.py:790 +msgid "Playbook Complete" +msgstr "Playbook の完了" + +#: main/models/jobs.py:1200 +msgid "Remove jobs older than a certain number of days" +msgstr "特定の日数より前のジョブを削除" + +#: main/models/jobs.py:1201 +msgid "Remove activity stream entries older than a certain number of days" +msgstr "特定の日数より前のアクティビティーストリームのエントリーを削除" + +#: main/models/jobs.py:1202 +msgid "Purge and/or reduce the granularity of system tracking data" +msgstr "システムトラッキングデータの詳細度の削除/削減" + +#: main/models/label.py:29 +msgid "Organization this label belongs to." +msgstr "このラベルが属する組織。" + +#: main/models/notifications.py:31 +msgid "Email" +msgstr "メール" + +#: main/models/notifications.py:32 +msgid "Slack" +msgstr "Slack" + +#: main/models/notifications.py:33 +msgid "Twilio" +msgstr "Twilio" + +#: main/models/notifications.py:34 +msgid "Pagerduty" +msgstr "Pagerduty" + +#: main/models/notifications.py:35 +msgid "HipChat" +msgstr "HipChat" + +#: main/models/notifications.py:36 +msgid "Webhook" +msgstr "Webhook" + +#: main/models/notifications.py:37 +msgid "IRC" +msgstr "IRC" + +#: main/models/notifications.py:127 main/models/unified_jobs.py:59 +msgid "Pending" +msgstr "保留中" + +#: main/models/notifications.py:128 main/models/unified_jobs.py:62 +msgid "Successful" +msgstr "成功" + +#: main/models/notifications.py:129 main/models/unified_jobs.py:63 +msgid "Failed" +msgstr "失敗" + +#: main/models/organization.py:157 +msgid "Execute Commands on the Inventory" +msgstr "インベントリーでのコマンドの実行" + +#: main/models/organization.py:211 +msgid "Token not invalidated" +msgstr "トークンが無効にされませんでした" + +#: main/models/organization.py:212 +msgid "Token is expired" +msgstr "トークンは期限切れです" + +#: main/models/organization.py:213 +msgid "" +"The maximum number of allowed sessions for this user has been exceeded." 
+msgstr "" + +#: main/models/organization.py:216 +msgid "Invalid token" +msgstr "無効なトークン" + +#: main/models/organization.py:233 +msgid "Reason the auth token was invalidated." +msgstr "認証トークンが無効にされた理由。" + +#: main/models/organization.py:272 +msgid "Invalid reason specified" +msgstr "無効な理由が指定されました" + +#: main/models/projects.py:43 +msgid "Git" +msgstr "Git" + +#: main/models/projects.py:44 +msgid "Mercurial" +msgstr "Mercurial" + +#: main/models/projects.py:45 +msgid "Subversion" +msgstr "Subversion" + +#: main/models/projects.py:71 +msgid "" +"Local path (relative to PROJECTS_ROOT) containing playbooks and related " +"files for this project." +msgstr "このプロジェクトの Playbook および関連するファイルを含むローカルパス (PROJECTS_ROOT との相対)。" + +#: main/models/projects.py:80 +msgid "SCM Type" +msgstr "SCM タイプ" + +#: main/models/projects.py:81 +msgid "Specifies the source control system used to store the project." +msgstr "プロジェクトを保存するために使用されるソースコントロールシステムを指定します。" + +#: main/models/projects.py:87 +msgid "SCM URL" +msgstr "SCM URL" + +#: main/models/projects.py:88 +msgid "The location where the project is stored." +msgstr "プロジェクトが保存される場所。" + +#: main/models/projects.py:94 +msgid "SCM Branch" +msgstr "SCM ブランチ" + +#: main/models/projects.py:95 +msgid "Specific branch, tag or commit to checkout." +msgstr "チェックアウトする特定のブランチ、タグまたはコミット。" + +#: main/models/projects.py:99 +msgid "Discard any local changes before syncing the project." +msgstr "ローカル変更を破棄してからプロジェクトを同期します。" + +#: main/models/projects.py:103 +msgid "Delete the project before syncing." +msgstr "プロジェクトを削除してから同期します。" + +#: main/models/projects.py:116 +msgid "The amount of time to run before the task is canceled." +msgstr "タスクが取り消される前の実行時間。" + +#: main/models/projects.py:130 +msgid "Invalid SCM URL." +msgstr "無効な SCM URL。" + +#: main/models/projects.py:133 +msgid "SCM URL is required." +msgstr "SCM URL が必要です。" + +#: main/models/projects.py:142 +msgid "Credential kind must be 'scm'." 
+msgstr "認証情報の種類は 'scm' にする必要があります。" + +#: main/models/projects.py:157 +msgid "Invalid credential." +msgstr "無効な認証情報。" + +#: main/models/projects.py:229 +msgid "Update the project when a job is launched that uses the project." +msgstr "プロジェクトを使用するジョブの起動時にプロジェクトを更新します。" + +#: main/models/projects.py:234 +msgid "" +"The number of seconds after the last project update ran that a newproject " +"update will be launched as a job dependency." +msgstr "新規プロジェクトの更新がジョブの依存関係として起動される最終プロジェクト更新後の秒数。" + +#: main/models/projects.py:244 +msgid "The last revision fetched by a project update" +msgstr "プロジェクト更新で取得される最新リビジョン" + +#: main/models/projects.py:251 +msgid "Playbook Files" +msgstr "Playbook ファイル" + +#: main/models/projects.py:252 +msgid "List of playbooks found in the project" +msgstr "プロジェクトにある Playbook の一覧" + +#: main/models/rbac.py:122 +msgid "roles" +msgstr "ロール" + +#: main/models/rbac.py:438 +msgid "role_ancestors" +msgstr "role_ancestors" + +#: main/models/schedules.py:69 +msgid "Enables processing of this schedule by Tower." +msgstr "Tower によるこのスケジュールの処理を有効にします。" + +#: main/models/schedules.py:75 +msgid "The first occurrence of the schedule occurs on or after this time." +msgstr "スケジュールの最初のオカレンスはこの時間またはこの時間の後に生じます。" + +#: main/models/schedules.py:81 +msgid "" +"The last occurrence of the schedule occurs before this time, aftewards the " +"schedule expires." +msgstr "スケジュールの最後のオカレンスはこの時間の前に生じます。その後スケジュールが期限切れになります。" + +#: main/models/schedules.py:85 +msgid "A value representing the schedules iCal recurrence rule." +msgstr "スケジュールの iCal 繰り返しルールを表す値。" + +#: main/models/schedules.py:91 +msgid "The next time that the scheduled action will run." 
+msgstr "スケジュールされたアクションが次に実行される時間。" + +#: main/models/unified_jobs.py:58 +msgid "New" +msgstr "新規" + +#: main/models/unified_jobs.py:60 +msgid "Waiting" +msgstr "待機中" + +#: main/models/unified_jobs.py:61 +msgid "Running" +msgstr "実行中" + +#: main/models/unified_jobs.py:65 +msgid "Canceled" +msgstr "取り消されました" + +#: main/models/unified_jobs.py:69 +msgid "Never Updated" +msgstr "更新されていません" + +#: main/models/unified_jobs.py:73 ui/templates/ui/index.html:85 +#: ui/templates/ui/index.html.py:104 +msgid "OK" +msgstr "OK" + +#: main/models/unified_jobs.py:74 +msgid "Missing" +msgstr "不明" + +#: main/models/unified_jobs.py:78 +msgid "No External Source" +msgstr "外部ソースがありません" + +#: main/models/unified_jobs.py:85 +msgid "Updating" +msgstr "更新中" + +#: main/models/unified_jobs.py:403 +msgid "Relaunch" +msgstr "再起動" + +#: main/models/unified_jobs.py:404 +msgid "Callback" +msgstr "コールバック" + +#: main/models/unified_jobs.py:405 +msgid "Scheduled" +msgstr "スケジュール済み" + +#: main/models/unified_jobs.py:406 +msgid "Dependency" +msgstr "依存関係" + +#: main/models/unified_jobs.py:407 +msgid "Workflow" +msgstr "ワークフロー" + +#: main/models/unified_jobs.py:408 +msgid "Sync" +msgstr "" + +#: main/models/unified_jobs.py:454 +msgid "The Tower node the job executed on." +msgstr "ジョブが実行される Tower ノード。" + +#: main/models/unified_jobs.py:480 +msgid "The date and time the job was queued for starting." +msgstr "ジョブが開始のために待機した日時。" + +#: main/models/unified_jobs.py:486 +msgid "The date and time the job finished execution." +msgstr "ジョブが実行を完了した日時。" + +#: main/models/unified_jobs.py:492 +msgid "Elapsed time in seconds that the job ran." 
+msgstr "ジョブ実行の経過時間 (秒単位)。" + +#: main/models/unified_jobs.py:514 +msgid "" +"A status field to indicate the state of the job if it wasn't able to run and" +" capture stdout" +msgstr "ジョブを実行して stdout をキャプチャーできなかった場合のジョブの状態を示すための状態フィールド" + +#: main/notifications/base.py:17 main/notifications/email_backend.py:28 +msgid "" +"{} #{} had status {} on Ansible Tower, view details at {}\n" +"\n" +msgstr "" +"{} #{} には Ansible Tower のステータス {} があります。詳細については {} で確認してください\n" +"\n" + +#: main/notifications/hipchat_backend.py:46 +msgid "Error sending messages: {}" +msgstr "メッセージの送信時のエラー: {}" + +#: main/notifications/hipchat_backend.py:48 +msgid "Error sending message to hipchat: {}" +msgstr "メッセージの hipchat への送信時のエラー: {}" + +#: main/notifications/irc_backend.py:54 +msgid "Exception connecting to irc server: {}" +msgstr "irc サーバーへの接続時の例外: {}" + +#: main/notifications/pagerduty_backend.py:39 +msgid "Exception connecting to PagerDuty: {}" +msgstr "PagerDuty への接続時の例外: {}" + +#: main/notifications/pagerduty_backend.py:48 +#: main/notifications/slack_backend.py:52 +#: main/notifications/twilio_backend.py:46 +msgid "Exception sending messages: {}" +msgstr "メッセージの送信時の例外: {}" + +#: main/notifications/twilio_backend.py:36 +msgid "Exception connecting to Twilio: {}" +msgstr "Twilio への接続時の例外: {}" + +#: main/notifications/webhook_backend.py:38 +#: main/notifications/webhook_backend.py:40 +msgid "Error sending notification webhook: {}" +msgstr "通知 webhook の送信時のエラー: {}" + +#: main/scheduler/__init__.py:130 +msgid "" +"Job spawned from workflow could not start because it was not in the right " +"state or required manual credentials" +msgstr "" + +#: main/tasks.py:180 +msgid "Ansible Tower host usage over 90%" +msgstr "Ansible Tower ホストの使用率が 90% を超えました" + +#: main/tasks.py:185 +msgid "Ansible Tower license will expire soon" +msgstr "Ansible Tower ライセンスがまもなく期限切れになります" + +#: main/tasks.py:240 +msgid "status_str must be either succeeded or failed" +msgstr "status_str は成功または失敗のいずれかである必要があります" + +#: 
main/utils/common.py:89 +#, python-format +msgid "Unable to convert \"%s\" to boolean" +msgstr "\"%s\" をブール値に変換できません" + +#: main/utils/common.py:243 +#, python-format +msgid "Unsupported SCM type \"%s\"" +msgstr "サポートされない SCM タイプ \"%s\"" + +#: main/utils/common.py:250 main/utils/common.py:262 main/utils/common.py:281 +#, python-format +msgid "Invalid %s URL" +msgstr "無効な %s URL" + +#: main/utils/common.py:252 main/utils/common.py:290 +#, python-format +msgid "Unsupported %s URL" +msgstr "サポートされていない %s URL" + +#: main/utils/common.py:292 +#, python-format +msgid "Unsupported host \"%s\" for file:// URL" +msgstr "file:// URL のサポートされていないホスト \"%s\"" + +#: main/utils/common.py:294 +#, python-format +msgid "Host is required for %s URL" +msgstr "%s URL にはホストが必要です" + +#: main/utils/common.py:312 +#, python-format +msgid "Username must be \"git\" for SSH access to %s." +msgstr "%s への SSH アクセスではユーザー名を \"git\" にする必要があります。" + +#: main/utils/common.py:318 +#, python-format +msgid "Username must be \"hg\" for SSH access to %s." +msgstr "%s への SSH アクセスではユーザー名を \"hg\" にする必要があります。" + +#: main/validators.py:60 +#, python-format +msgid "Invalid certificate or key: %r..." +msgstr "無効な証明書またはキー: %r..." + +#: main/validators.py:74 +#, python-format +msgid "Invalid private key: unsupported type \"%s\"" +msgstr "無効な秘密鍵: サポートされていないタイプ \"%s\"" + +#: main/validators.py:78 +#, python-format +msgid "Unsupported PEM object type: \"%s\"" +msgstr "サポートされていない PEM オブジェクトタイプ: \"%s\"" + +#: main/validators.py:103 +msgid "Invalid base64-encoded data" +msgstr "無効な base64 エンコードされたデータ" + +#: main/validators.py:122 +msgid "Exactly one private key is required." +msgstr "秘密鍵が 1 つのみ必要です。" + +#: main/validators.py:124 +msgid "At least one private key is required." +msgstr "1 つ以上の秘密鍵が必要です。" + +#: main/validators.py:126 +#, python-format +msgid "" +"At least %(min_keys)d private keys are required, only %(key_count)d " +"provided." 
+msgstr "%(min_keys)d 以上の秘密鍵が必要です。提供数: %(key_count)d のみ。" + +#: main/validators.py:129 +#, python-format +msgid "Only one private key is allowed, %(key_count)d provided." +msgstr "秘密鍵が 1 つのみ許可されます。提供数: %(key_count)d" + +#: main/validators.py:131 +#, python-format +msgid "" +"No more than %(max_keys)d private keys are allowed, %(key_count)d provided." +msgstr "%(max_keys)d を超える秘密鍵は許可されません。提供数: %(key_count)d " + +#: main/validators.py:136 +msgid "Exactly one certificate is required." +msgstr "証明書が 1 つのみ必要です。" + +#: main/validators.py:138 +msgid "At least one certificate is required." +msgstr "1 つ以上の証明書が必要です。" + +#: main/validators.py:140 +#, python-format +msgid "" +"At least %(min_certs)d certificates are required, only %(cert_count)d " +"provided." +msgstr "%(min_certs)d 以上の証明書が必要です。提供数: %(cert_count)d のみ。" + +#: main/validators.py:143 +#, python-format +msgid "Only one certificate is allowed, %(cert_count)d provided." +msgstr "証明書が 1 つのみ許可されます。提供数: %(cert_count)d" + +#: main/validators.py:145 +#, python-format +msgid "" +"No more than %(max_certs)d certificates are allowed, %(cert_count)d " +"provided." +msgstr "%(max_certs)d を超える証明書は許可されません。提供数: %(cert_count)d" + +#: main/views.py:20 +msgid "API Error" +msgstr "API エラー" + +#: main/views.py:49 +msgid "Bad Request" +msgstr "不正な要求です" + +#: main/views.py:50 +msgid "The request could not be understood by the server." +msgstr "要求がサーバーによって認識されませんでした。" + +#: main/views.py:57 +msgid "Forbidden" +msgstr "許可されていません" + +#: main/views.py:58 +msgid "You don't have permission to access the requested resource." +msgstr "要求されたリソースにアクセスするためのパーミッションがありません。" + +#: main/views.py:65 +msgid "Not Found" +msgstr "見つかりません" + +#: main/views.py:66 +msgid "The requested resource could not be found." +msgstr "要求されたリソースは見つかりませんでした。" + +#: main/views.py:73 +msgid "Server Error" +msgstr "サーバーエラー" + +#: main/views.py:74 +msgid "A server error has occurred." 
+msgstr "サーバーエラーが発生しました。" + +#: settings/defaults.py:611 +msgid "Chicago" +msgstr "シカゴ" + +#: settings/defaults.py:612 +msgid "Dallas/Ft. Worth" +msgstr "ダラス/フォートワース" + +#: settings/defaults.py:613 +msgid "Northern Virginia" +msgstr "北バージニア" + +#: settings/defaults.py:614 +msgid "London" +msgstr "ロンドン" + +#: settings/defaults.py:615 +msgid "Sydney" +msgstr "シドニー" + +#: settings/defaults.py:616 +msgid "Hong Kong" +msgstr "香港" + +#: settings/defaults.py:643 +msgid "US East (Northern Virginia)" +msgstr "米国東部 (バージニア北部)" + +#: settings/defaults.py:644 +msgid "US East (Ohio)" +msgstr "米国東部 (オハイオ)" + +#: settings/defaults.py:645 +msgid "US West (Oregon)" +msgstr "米国西部 (オレゴン)" + +#: settings/defaults.py:646 +msgid "US West (Northern California)" +msgstr "米国西部 (北カリフォルニア)" + +#: settings/defaults.py:647 +msgid "Canada (Central)" +msgstr "" + +#: settings/defaults.py:648 +msgid "EU (Frankfurt)" +msgstr "EU (フランクフルト)" + +#: settings/defaults.py:649 +msgid "EU (Ireland)" +msgstr "EU (アイルランド)" + +#: settings/defaults.py:650 +msgid "EU (London)" +msgstr "" + +#: settings/defaults.py:651 +msgid "Asia Pacific (Singapore)" +msgstr "アジア太平洋 (シンガポール)" + +#: settings/defaults.py:652 +msgid "Asia Pacific (Sydney)" +msgstr "アジア太平洋 (シドニー)" + +#: settings/defaults.py:653 +msgid "Asia Pacific (Tokyo)" +msgstr "アジア太平洋 (東京)" + +#: settings/defaults.py:654 +msgid "Asia Pacific (Seoul)" +msgstr "アジア太平洋 (ソウル)" + +#: settings/defaults.py:655 +msgid "Asia Pacific (Mumbai)" +msgstr "アジア太平洋 (ムンバイ)" + +#: settings/defaults.py:656 +msgid "South America (Sao Paulo)" +msgstr "南アメリカ (サンパウロ)" + +#: settings/defaults.py:657 +msgid "US West (GovCloud)" +msgstr "米国西部 (GovCloud)" + +#: settings/defaults.py:658 +msgid "China (Beijing)" +msgstr "中国 (北京)" + +#: settings/defaults.py:707 +msgid "US East (B)" +msgstr "米国東部 (B)" + +#: settings/defaults.py:708 +msgid "US East (C)" +msgstr "米国東部 (C)" + +#: settings/defaults.py:709 +msgid "US East (D)" +msgstr "米国東部 (D)" + +#: settings/defaults.py:710 +msgid "US Central 
(A)" +msgstr "米国中部 (A)" + +#: settings/defaults.py:711 +msgid "US Central (B)" +msgstr "米国中部 (B)" + +#: settings/defaults.py:712 +msgid "US Central (C)" +msgstr "米国中部 (C)" + +#: settings/defaults.py:713 +msgid "US Central (F)" +msgstr "米国中部 (F)" + +#: settings/defaults.py:714 +msgid "Europe West (B)" +msgstr "欧州西部 (B)" + +#: settings/defaults.py:715 +msgid "Europe West (C)" +msgstr "欧州西部 (C)" + +#: settings/defaults.py:716 +msgid "Europe West (D)" +msgstr "欧州西部 (D)" + +#: settings/defaults.py:717 +msgid "Asia East (A)" +msgstr "アジア東部 (A)" + +#: settings/defaults.py:718 +msgid "Asia East (B)" +msgstr "アジア東部 (B)" + +#: settings/defaults.py:719 +msgid "Asia East (C)" +msgstr "アジア東部 (C)" + +#: settings/defaults.py:743 +msgid "US Central" +msgstr "米国中部" + +#: settings/defaults.py:744 +msgid "US East" +msgstr "米国東部" + +#: settings/defaults.py:745 +msgid "US East 2" +msgstr "米国東部 2" + +#: settings/defaults.py:746 +msgid "US North Central" +msgstr "米国中北部" + +#: settings/defaults.py:747 +msgid "US South Central" +msgstr "米国中南部" + +#: settings/defaults.py:748 +msgid "US West" +msgstr "米国西部" + +#: settings/defaults.py:749 +msgid "Europe North" +msgstr "欧州北部" + +#: settings/defaults.py:750 +msgid "Europe West" +msgstr "欧州西部" + +#: settings/defaults.py:751 +msgid "Asia Pacific East" +msgstr "アジア太平洋東部" + +#: settings/defaults.py:752 +msgid "Asia Pacific Southeast" +msgstr "アジア太平洋南東部" + +#: settings/defaults.py:753 +msgid "Japan East" +msgstr "日本東部" + +#: settings/defaults.py:754 +msgid "Japan West" +msgstr "日本西部" + +#: settings/defaults.py:755 +msgid "Brazil South" +msgstr "ブラジル南部" + +#: sso/apps.py:9 +msgid "Single Sign-On" +msgstr "シングルサインオン" + +#: sso/conf.py:27 +msgid "" +"Mapping to organization admins/users from social auth accounts. This setting\n" +"controls which users are placed into which Tower organizations based on\n" +"their username and email address. 
Dictionary keys are organization names.\n" +"organizations will be created if not present if the license allows for\n" +"multiple organizations, otherwise the single default organization is used\n" +"regardless of the key. Values are dictionaries defining the options for\n" +"each organization's membership. For each organization it is possible to\n" +"specify which users are automatically users of the organization and also\n" +"which users can administer the organization. \n" +"\n" +"- admins: None, True/False, string or list of strings.\n" +" If None, organization admins will not be updated.\n" +" If True, all users using social auth will automatically be added as admins\n" +" of the organization.\n" +" If False, no social auth users will be automatically added as admins of\n" +" the organization.\n" +" If a string or list of strings, specifies the usernames and emails for\n" +" users who will be added to the organization. Strings in the format\n" +" \"//\" will be interpreted as JavaScript regular expressions and\n" +" may also be used instead of string literals; only \"i\" and \"m\" are supported\n" +" for flags.\n" +"- remove_admins: True/False. Defaults to True.\n" +" If True, a user who does not match will be removed from the organization's\n" +" administrative list.\n" +"- users: None, True/False, string or list of strings. Same rules apply as for\n" +" admins.\n" +"- remove_users: True/False. Defaults to True. Same rules as apply for \n" +" remove_admins." 
+msgstr "" +"ソーシャル認証アカウントから組織管理者/ユーザーへのマッピングです。この設定\n" +"は、ユーザーのユーザー名とメールアドレスに基づいてどのユーザーをどの Tower 組織に配置するかを制御します。\n" +"辞書キーは組織名です。\n" +"組織は、存在しない場合、ライセンスで複数の組織が許可される場合に作成されます。そうでない場合、キーとは無関係に単一のデフォルト組織が使用されます。\n" +"値は、各組織のメンバーシップのオプションを定義する辞書です。\n" +"各組織については、自動的に組織のユーザーにするユーザーと\n" +"組織を管理できるユーザーを指定できます。\n" +"\n" +"- admins: None、True/False、文字列または文字列の一覧。\n" +" None の場合、組織管理者は更新されません。\n" +" True の場合、ソーシャル認証を使用するすべてのユーザーが組織の管理者として\n" +" 自動的に追加されます。\n" +" False の場合、ソーシャル認証ユーザーは組織の管理者として自動的に\n" +" 追加されません。\n" +" 文字列または文字列の一覧の場合、組織に追加されるユーザーの\n" +" ユーザー名およびメールを指定します。\"//\" 形式の文字列\n" +" は JavaScript 正規表現として解釈され、文字列リテラルの代わりに使用できます。\n" +" \"i\" と \"m\" のみがフラグでサポートされます。\n" +" - remove_admins: True/False。デフォルトで True に設定されます。\n" +" True の場合、一致しないユーザーは組織の管理者リストから削除されます。\n" +" - users: None、True/False、文字列または文字列の一覧。管理者の場合と同じルールが\n" +" 適用されます。\n" +"- remove_users: True/False。デフォルトで True に設定されます。remove_admins の\n" +" 場合と同じルールが適用されます。" + +#: sso/conf.py:76 +msgid "" +"Mapping of team members (users) from social auth accounts. Keys are team\n" +"names (will be created if not present). Values are dictionaries of options\n" +"for each team's membership, where each can contain the following parameters:\n" +"\n" +"- organization: string. The name of the organization to which the team\n" +" belongs. The team will be created if the combination of organization and\n" +" team name does not exist. The organization will first be created if it\n" +" does not exist. 
If the license does not allow for multiple organizations,\n" +" the team will always be assigned to the single default organization.\n" +"- users: None, True/False, string or list of strings.\n" +" If None, team members will not be updated.\n" +" If True/False, all social auth users will be added/removed as team\n" +" members.\n" +" If a string or list of strings, specifies expressions used to match users.\n" +" User will be added as a team member if the username or email matches.\n" +" Strings in the format \"//\" will be interpreted as JavaScript\n" +" regular expressions and may also be used instead of string literals; only \"i\"\n" +" and \"m\" are supported for flags.\n" +"- remove: True/False. Defaults to True. If True, a user who does not match\n" +" the rules above will be removed from the team." +msgstr "" +"ソーシャル認証アカウントからチームメンバー (ユーザー) へのマッピングです。\n" +"キーはチーム名です (存在しない場合に作成されます)。値は各チームの\n" +"メンバーシップのオプションの辞書です。各値には以下のパラメーターが含まれます。\n" +"\n" +"- organization: 文字列。チームが属する組織の名前です。\n" +" チームは組織とチーム名の組み合わせが存在しない場合に作成されます。\n" +" 組織がまず作成されます (存在しない場合)。ライセンスにより複数の組織が許可\n" +" されない場合、チームは常に単一のデフォルト組織に割り当てられます。\n" +"- users: None、True/False、文字列または文字列の一覧。\n" +" None の場合、チームメンバーは更新されません。\n" +" True/False の場合、すべてのソーシャル認証ユーザーがチームメンバーとして\n" +" 追加/削除されます。\n" +" 文字列または文字列の一覧の場合、ユーザーに一致する表現を指定します。\n" +" ユーザーは、ユーザー名またはメールが一致する場合にチームメンバーとして\n" +" 追加されます。\n" +" \"//\" 形式の文字列が JavaScript 正規表現として解釈され、\n" +" 文字列リテラルの代わりに使用することもできます。\"i\"\n" +" および \"m\" のみがフラグでサポートされます。\n" +"- remove: True/False。デフォルトで True に設定されます。True の場合、上記のルール\n" +" に一致しないユーザーはチームから削除されます。" + +#: sso/conf.py:119 +msgid "Authentication Backends" +msgstr "認証バックエンド" + +#: sso/conf.py:120 +msgid "" +"List of authentication backends that are enabled based on license features " +"and other authentication settings." 
+msgstr "ライセンスの特長およびその他の認証設定に基づいて有効にされる認証バックエンドの一覧。" + +#: sso/conf.py:133 +msgid "Social Auth Organization Map" +msgstr "ソーシャル認証組織マップ" + +#: sso/conf.py:145 +msgid "Social Auth Team Map" +msgstr "ソーシャル認証チームマップ" + +#: sso/conf.py:157 +msgid "Social Auth User Fields" +msgstr "ソーシャル認証ユーザーフィールド" + +#: sso/conf.py:158 +msgid "" +"When set to an empty list `[]`, this setting prevents new user accounts from" +" being created. Only users who have previously logged in using social auth " +"or have a user account with a matching email address will be able to login." +msgstr "" +"空リスト " +"`[]`に設定される場合、この設定により新規ユーザーアカウントは作成できなくなります。ソーシャル認証を使ってログインしたことのあるユーザーまたは一致するメールアドレスのユーザーアカウントを持つユーザーのみがログインできます。" + +#: sso/conf.py:176 +msgid "LDAP Server URI" +msgstr "LDAP サーバー URI" + +#: sso/conf.py:177 +msgid "" +"URI to connect to LDAP server, such as \"ldap://ldap.example.com:389\" (non-" +"SSL) or \"ldaps://ldap.example.com:636\" (SSL). Multiple LDAP servers may be" +" specified by separating with spaces or commas. LDAP authentication is " +"disabled if this parameter is empty." +msgstr "" +"\"ldap://ldap.example.com:389\" (非 SSL) または \"ldaps://ldap.example.com:636\"" +" (SSL) などの LDAP サーバーに接続する URI です。複数の LDAP サーバーをスペースまたはカンマで区切って指定できます。LDAP " +"認証は、このパラメーターが空の場合は無効になります。" + +#: sso/conf.py:181 sso/conf.py:199 sso/conf.py:211 sso/conf.py:223 +#: sso/conf.py:239 sso/conf.py:258 sso/conf.py:280 sso/conf.py:296 +#: sso/conf.py:315 sso/conf.py:332 sso/conf.py:349 sso/conf.py:365 +#: sso/conf.py:382 sso/conf.py:420 sso/conf.py:461 +msgid "LDAP" +msgstr "LDAP" + +#: sso/conf.py:193 +msgid "LDAP Bind DN" +msgstr "LDAP バインド DN" + +#: sso/conf.py:194 +msgid "" +"DN (Distinguished Name) of user to bind for all search queries. Normally in " +"the format \"CN=Some User,OU=Users,DC=example,DC=com\" but may also be " +"specified as \"DOMAIN\\username\" for Active Directory. This is the system " +"user account we will use to login to query LDAP for other user information." 
+msgstr "" +"すべての検索クエリーについてバインドするユーザーの DN (識別名) です。通常、形式は \"CN=Some " +"User,OU=Users,DC=example,DC=com\" になりますが、Active Directory の場合 " +"\"DOMAIN\\username\" として指定することもできます。これは、他のユーザー情報についての LDAP " +"クエリー実行時のログインに使用するシステムユーザーアカウントです。" + +#: sso/conf.py:209 +msgid "LDAP Bind Password" +msgstr "LDAP バインドパスワード" + +#: sso/conf.py:210 +msgid "Password used to bind LDAP user account." +msgstr "LDAP ユーザーアカウントをバインドするために使用されるパスワード。" + +#: sso/conf.py:221 +msgid "LDAP Start TLS" +msgstr "LDAP Start TLS" + +#: sso/conf.py:222 +msgid "Whether to enable TLS when the LDAP connection is not using SSL." +msgstr "LDAP 接続が SSL を使用していない場合に TLS を有効にするかどうか。" + +#: sso/conf.py:232 +msgid "LDAP Connection Options" +msgstr "LDAP 接続オプション" + +#: sso/conf.py:233 +msgid "" +"Additional options to set for the LDAP connection. LDAP referrals are " +"disabled by default (to prevent certain LDAP queries from hanging with AD). " +"Option names should be strings (e.g. \"OPT_REFERRALS\"). Refer to " +"https://www.python-ldap.org/doc/html/ldap.html#options for possible options " +"and values that can be set." +msgstr "" +"LDAP 設定に設定する追加オプションです。LDAP 照会はデフォルトで無効にされます (特定の LDAP クエリーが AD " +"でハングすることを避けるため)。オプション名は文字列でなければなりません (例: " +"\"OPT_REFERRALS\")。可能なオプションおよび設定できる値については、https://www.python-" +"ldap.org/doc/html/ldap.html#options を参照してください。" + +#: sso/conf.py:251 +msgid "LDAP User Search" +msgstr "LDAP ユーザー検索" + +#: sso/conf.py:252 +msgid "" +"LDAP search query to find users. Any user that matches the given pattern " +"will be able to login to Tower. The user should also be mapped into an " +"Tower organization (as defined in the AUTH_LDAP_ORGANIZATION_MAP setting). " +"If multiple search queries need to be supported use of \"LDAPUnion\" is " +"possible. See python-ldap documentation as linked at the top of this " +"section." 
+msgstr "" +"ユーザーを検索するための LDAP 検索クエリーです。指定パターンに一致するユーザーは Tower にログインできます。ユーザーは Tower " +"組織にマップされている必要もあります (AUTH_LDAP_ORGANIZATION_MAP " +"設定で定義)。複数の検索クエリーがサポートされる必要がある場合、\"LDAPUnion\" を使用できます。このセクションの先頭にリンクされている " +"python-ldap ドキュメントを参照してください。" + +#: sso/conf.py:274 +msgid "LDAP User DN Template" +msgstr "LDAP ユーザー DN テンプレート" + +#: sso/conf.py:275 +msgid "" +"Alternative to user search, if user DNs are all of the same format. This " +"approach will be more efficient for user lookups than searching if it is " +"usable in your organizational environment. If this setting has a value it " +"will be used instead of AUTH_LDAP_USER_SEARCH." +msgstr "" +"ユーザー DN " +"の形式がすべて同じである場合のユーザー検索の代替法になります。この方法は、組織の環境で使用可能であるかどうかを検索する場合よりも効率的なユーザー検索方法になります。この設定に値がある場合、それが" +" AUTH_LDAP_USER_SEARCH の代わりに使用されます。" + +#: sso/conf.py:290 +msgid "LDAP User Attribute Map" +msgstr "LDAP ユーザー属性マップ" + +#: sso/conf.py:291 +msgid "" +"Mapping of LDAP user schema to Tower API user attributes (key is user " +"attribute name, value is LDAP attribute name). The default setting is valid" +" for ActiveDirectory but users with other LDAP configurations may need to " +"change the values (not the keys) of the dictionary/hash-table." +msgstr "" +"LDAP ユーザースキーマの Tower API ユーザー属性へのマッピングです (キーはユーザー属性名で、値は LDAP " +"属性名です)。デフォルト設定は ActiveDirectory で有効ですが、他の LDAP 設定を持つユーザーは、辞書/ハッシュテーブルの値 " +"(キーではない) を変更する必要ある場合があります。" + +#: sso/conf.py:310 +msgid "LDAP Group Search" +msgstr "LDAP グループ検索" + +#: sso/conf.py:311 +msgid "" +"Users in Tower are mapped to organizations based on their membership in LDAP" +" groups. This setting defines the LDAP search query to find groups. Note " +"that this, unlike the user search above, does not support LDAPSearchUnion." 
+msgstr "" +"Tower のユーザーは LDAP グループのメンバーシップに基づいて組織にマップされます。この設定は、グループを検索できるように LDAP " +"検索クエリーを定義します。上記のユーザー検索とは異なり、これは LDAPSearchUnion をサポートしないことに注意してください。" + +#: sso/conf.py:328 +msgid "LDAP Group Type" +msgstr "LDAP グループタイプ" + +#: sso/conf.py:329 +msgid "" +"The group type may need to be changed based on the type of the LDAP server." +" Values are listed at: http://pythonhosted.org/django-auth-ldap/groups.html" +"#types-of-groups" +msgstr "" +"グループタイプは LDAP サーバーのタイプに基づいて変更する必要がある場合があります。値は以下に記載されています: " +"http://pythonhosted.org/django-auth-ldap/groups.html#types-of-groups" + +#: sso/conf.py:344 +msgid "LDAP Require Group" +msgstr "LDAP 要求グループ" + +#: sso/conf.py:345 +msgid "" +"Group DN required to login. If specified, user must be a member of this " +"group to login via LDAP. If not set, everyone in LDAP that matches the user " +"search will be able to login via Tower. Only one require group is supported." +msgstr "" +"ログインに必要なグループ DN。指定されている場合、LDAP " +"経由でログインするにはユーザーはこのグループのメンバーである必要があります。設定されていない場合は、ユーザー検索に一致する LDAP " +"のすべてのユーザーが Tower 経由でログインできます。1つの要求グループのみがサポートされます。" + +#: sso/conf.py:361 +msgid "LDAP Deny Group" +msgstr "LDAP 拒否グループ" + +#: sso/conf.py:362 +msgid "" +"Group DN denied from login. If specified, user will not be allowed to login " +"if a member of this group. Only one deny group is supported." +msgstr "" +"グループ DN がログインで拒否されます。指定されている場合、ユーザーはこのグループのメンバーの場合にログインできません。1 " +"つの拒否グループのみがサポートされます。" + +#: sso/conf.py:375 +msgid "LDAP User Flags By Group" +msgstr "LDAP ユーザーフラグ (グループ別)" + +#: sso/conf.py:376 +msgid "" +"User profile flags updated from group membership (key is user attribute " +"name, value is group DN). These are boolean fields that are matched based " +"on whether the user is a member of the given group. So far only " +"is_superuser is settable via this method. This flag is set both true and " +"false at login time based on current LDAP settings." 
+msgstr "" +"グループメンバーシップから更新されるユーザープロファイルフラグです (キーはユーザー属性名、値はグループ " +"DN)。これらは、ユーザーが指定グループのメンバーであるかに基づいて一致するブール値フィールドです。is_superuser " +"のみがこのメソッドで設定可能です。このフラグは、現在の LDAP 設定に基づいてログイン時に true および false に設定されます。" + +#: sso/conf.py:394 +msgid "LDAP Organization Map" +msgstr "LDAP 組織マップ" + +#: sso/conf.py:395 +msgid "" +"Mapping between organization admins/users and LDAP groups. This controls what users are placed into what Tower organizations relative to their LDAP group memberships. Keys are organization names. Organizations will be created if not present. Values are dictionaries defining the options for each organization's membership. For each organization it is possible to specify what groups are automatically users of the organization and also what groups can administer the organization.\n" +"\n" +" - admins: None, True/False, string or list of strings.\n" +" If None, organization admins will not be updated based on LDAP values.\n" +" If True, all users in LDAP will automatically be added as admins of the organization.\n" +" If False, no LDAP users will be automatically added as admins of the organization.\n" +" If a string or list of strings, specifies the group DN(s) that will be added of the organization if they match any of the specified groups.\n" +" - remove_admins: True/False. Defaults to True.\n" +" If True, a user who is not an member of the given groups will be removed from the organization's administrative list.\n" +" - users: None, True/False, string or list of strings. Same rules apply as for admins.\n" +" - remove_users: True/False. Defaults to True. Same rules apply as for remove_admins." 
+msgstr "" +"組織管理者/ユーザーと LDAP グループ間のマッピングです。これは、LDAP グループメンバーシップと相対してどのユーザーをどの Tower 組織に配置するかを制御します。キーは組織名です。組織は存在しない場合に作成されます。値は、各組織のメンバーシップのオプションを定義する辞書です。各組織については、自動的に組織のユーザーにするユーザーと組織を管理できるグループを指定できます。\n" +"\n" +" - admins: None、True/False、文字列または文字列の一覧。\n" +" None の場合、組織管理者は LDAP 値に基づいて更新されません。\n" +" True の場合、LDAP のすべてのユーザーが組織の管理者として自動的に追加されます。\n" +" False の場合、LDAP ユーザーは組織の管理者として自動的に追加されません。\n" +" 文字列または文字列の一覧の場合、指定されるグループのいずれかに一致する場合に組織に追加されるグループ DN を指定します。\n" +" - remove_admins: True/False。デフォルトで True に設定されます。\n" +" True の場合、指定グループのメンバーでないユーザーは組織の管理者リストから削除されます。\n" +" - users: None、True/False、文字列または文字列の一覧。管理者の場合と同じルールが適用されます。\n" +" - remove_users: True/False。デフォルトで True に設定されます。remove_admins の場合と同じルールが適用されます。" + +#: sso/conf.py:443 +msgid "LDAP Team Map" +msgstr "LDAP チームマップ" + +#: sso/conf.py:444 +msgid "" +"Mapping between team members (users) and LDAP groups. Keys are team names (will be created if not present). Values are dictionaries of options for each team's membership, where each can contain the following parameters:\n" +"\n" +" - organization: string. The name of the organization to which the team belongs. The team will be created if the combination of organization and team name does not exist. The organization will first be created if it does not exist.\n" +" - users: None, True/False, string or list of strings.\n" +" If None, team members will not be updated.\n" +" If True/False, all LDAP users will be added/removed as team members.\n" +" If a string or list of strings, specifies the group DN(s). User will be added as a team member if the user is a member of ANY of these groups.\n" +"- remove: True/False. Defaults to True. If True, a user who is not a member of the given groups will be removed from the team." 
+msgstr "" +"チームメンバー (ユーザー) と LDAP グループ間のマッピングです。キーはチーム名です (存在しない場合に作成されます)。値は各チームのメンバーシップのオプションの辞書です。各値には以下のパラメーターが含まれます。\n" +"\n" +" - organization: 文字列。チームが属する組織の名前です。組織とチーム名の組み合わせ\n" +" が存在しない場合にチームが作成されます。組織がまず作成されます (存在しない場合)。\n" +" - users: None、True/False、文字列または文字列の一覧。\n" +" None の場合、チームメンバーは更新されません。\n" +" True/False の場合、すべての LDAP ユーザーがチームメンバーとして追加/削除されます。\n" +" 文字列または文字列の一覧の場合、グループ DN を指定します。\n" +" ユーザーがこれらのグループのいずれかのメンバーである場合、チームメンバーとして追加されます。\n" +"- remove: True/False。デフォルトで True に設定されます。True の場合、指定グループのメンバーでないユーザーはチームから削除されます。" + +#: sso/conf.py:487 +msgid "RADIUS Server" +msgstr "RADIUS サーバー" + +#: sso/conf.py:488 +msgid "" +"Hostname/IP of RADIUS server. RADIUS authentication will be disabled if this" +" setting is empty." +msgstr "RADIUS サーバーのホスト名/IP です。この設定が空の場合は RADIUS 認証は無効にされます。" + +#: sso/conf.py:490 sso/conf.py:504 sso/conf.py:516 +msgid "RADIUS" +msgstr "RADIUS" + +#: sso/conf.py:502 +msgid "RADIUS Port" +msgstr "RADIUS ポート" + +#: sso/conf.py:503 +msgid "Port of RADIUS server." +msgstr "RADIUS サーバーのポート。" + +#: sso/conf.py:514 +msgid "RADIUS Secret" +msgstr "RADIUS シークレット" + +#: sso/conf.py:515 +msgid "Shared secret for authenticating to RADIUS server." +msgstr "RADIUS サーバーに対して認証するための共有シークレット。" + +#: sso/conf.py:531 +msgid "Google OAuth2 Callback URL" +msgstr "Google OAuth2 コールバック URL" + +#: sso/conf.py:532 +msgid "" +"Create a project at https://console.developers.google.com/ to obtain an " +"OAuth2 key and secret for a web application. Ensure that the Google+ API is " +"enabled. Provide this URL as the callback URL for your application." 
+msgstr "" +"web アプリケーションの OAuth2 キーおよびシークレットを取得するために " +"https://console.developers.google.com/ にプロジェクトを作成します。Google+ API " +"が有効であることを確認します。この URL をアプリケーションのコールバック URL として指定します。" + +#: sso/conf.py:536 sso/conf.py:547 sso/conf.py:558 sso/conf.py:571 +#: sso/conf.py:585 sso/conf.py:597 sso/conf.py:609 +msgid "Google OAuth2" +msgstr "Google OAuth2" + +#: sso/conf.py:545 +msgid "Google OAuth2 Key" +msgstr "Google OAuth2 キー" + +#: sso/conf.py:546 +msgid "" +"The OAuth2 key from your web application at " +"https://console.developers.google.com/." +msgstr "web アプリケーションの OAuth2 キー (https://console.developers.google.com/)。" + +#: sso/conf.py:556 +msgid "Google OAuth2 Secret" +msgstr "Google OAuth2 シークレット" + +#: sso/conf.py:557 +msgid "" +"The OAuth2 secret from your web application at " +"https://console.developers.google.com/." +msgstr "web アプリケーションの OAuth2 シークレット (https://console.developers.google.com/)。" + +#: sso/conf.py:568 +msgid "Google OAuth2 Whitelisted Domains" +msgstr "Google OAuth2 ホワイトリストドメイン" + +#: sso/conf.py:569 +msgid "" +"Update this setting to restrict the domains who are allowed to login using " +"Google OAuth2." +msgstr "この設定を更新し、Google OAuth2 を使用してログインできるドメインを制限します。" + +#: sso/conf.py:580 +msgid "Google OAuth2 Extra Arguments" +msgstr "Google OAuth2 追加引数" + +#: sso/conf.py:581 +msgid "" +"Extra arguments for Google OAuth2 login. When only allowing a single domain " +"to authenticate, set to `{\"hd\": \"yourdomain.com\"}` and Google will not " +"display any other accounts even if the user is logged in with multiple " +"Google accounts." 
+msgstr "" +"Google OAuth2 ログインの追加引数です。単一ドメインの認証のみを許可する場合、`{\"hd\": \"yourdomain.com\"}` " +"に設定すると、Google はユーザーが複数の Google アカウントでログインしている場合でもその他のアカウントを表示しません。" + +#: sso/conf.py:595 +msgid "Google OAuth2 Organization Map" +msgstr "Google OAuth2 組織マップ" + +#: sso/conf.py:607 +msgid "Google OAuth2 Team Map" +msgstr "Google OAuth2 チームマップ" + +#: sso/conf.py:623 +msgid "GitHub OAuth2 Callback URL" +msgstr "GitHub OAuth2 コールバック URL" + +#: sso/conf.py:624 +msgid "" +"Create a developer application at https://github.com/settings/developers to " +"obtain an OAuth2 key (Client ID) and secret (Client Secret). Provide this " +"URL as the callback URL for your application." +msgstr "" +"OAuth2 キー (クライアント ID) およびシークレット (クライアントシークレット) を取得するために " +"https://github.com/settings/developers に開発者アプリケーションを作成します。この URL " +"をアプリケーションのコールバック URL として指定します。" + +#: sso/conf.py:628 sso/conf.py:639 sso/conf.py:649 sso/conf.py:661 +#: sso/conf.py:673 +msgid "GitHub OAuth2" +msgstr "GitHub OAuth2" + +#: sso/conf.py:637 +msgid "GitHub OAuth2 Key" +msgstr "GitHub OAuth2 キー" + +#: sso/conf.py:638 +msgid "The OAuth2 key (Client ID) from your GitHub developer application." +msgstr "GitHub 開発者アプリケーションからの OAuth2 キー (クライアント ID)。" + +#: sso/conf.py:647 +msgid "GitHub OAuth2 Secret" +msgstr "GitHub OAuth2 シークレット" + +#: sso/conf.py:648 +msgid "" +"The OAuth2 secret (Client Secret) from your GitHub developer application." +msgstr "GitHub 開発者アプリケーションからの OAuth2 シークレット (クライアントシークレット)。" + +#: sso/conf.py:659 +msgid "GitHub OAuth2 Organization Map" +msgstr "GitHub OAuth2 組織マップ" + +#: sso/conf.py:671 +msgid "GitHub OAuth2 Team Map" +msgstr "GitHub OAuth2 チームマップ" + +#: sso/conf.py:687 +msgid "GitHub Organization OAuth2 Callback URL" +msgstr "GitHub 組織 OAuth2 コールバック URL" + +#: sso/conf.py:688 sso/conf.py:763 +msgid "" +"Create an organization-owned application at " +"https://github.com/organizations//settings/applications and obtain " +"an OAuth2 key (Client ID) and secret (Client Secret). 
Provide this URL as " +"the callback URL for your application." +msgstr "" +"組織が所有するアプリケーションを " +"https://github.com/organizations//settings/applications に作成し、OAuth2" +" キー (クライアント ID) およびシークレット (クライアントシークレット) を取得します。この URL をアプリケーションのコールバック URL " +"として指定します。" + +#: sso/conf.py:692 sso/conf.py:703 sso/conf.py:713 sso/conf.py:725 +#: sso/conf.py:736 sso/conf.py:748 +msgid "GitHub Organization OAuth2" +msgstr "GitHub 組織 OAuth2" + +#: sso/conf.py:701 +msgid "GitHub Organization OAuth2 Key" +msgstr "GitHub 組織 OAuth2 キー" + +#: sso/conf.py:702 sso/conf.py:777 +msgid "The OAuth2 key (Client ID) from your GitHub organization application." +msgstr "GitHub 組織アプリケーションからの OAuth2 キー (クライアント ID)。" + +#: sso/conf.py:711 +msgid "GitHub Organization OAuth2 Secret" +msgstr "GitHub 組織 OAuth2 シークレット" + +#: sso/conf.py:712 sso/conf.py:787 +msgid "" +"The OAuth2 secret (Client Secret) from your GitHub organization application." +msgstr "GitHub 組織アプリケーションからの OAuth2 シークレット (クライアントシークレット)。" + +#: sso/conf.py:722 +msgid "GitHub Organization Name" +msgstr "GitHub 組織名" + +#: sso/conf.py:723 +msgid "" +"The name of your GitHub organization, as used in your organization's URL: " +"https://github.com//." 
+msgstr "GitHub 組織の名前で、組織の URL (https://github.com//) で使用されます。" + +#: sso/conf.py:734 +msgid "GitHub Organization OAuth2 Organization Map" +msgstr "GitHub 組織 OAuth2 組織マップ" + +#: sso/conf.py:746 +msgid "GitHub Organization OAuth2 Team Map" +msgstr "GitHub 組織 OAuth2 チームマップ" + +#: sso/conf.py:762 +msgid "GitHub Team OAuth2 Callback URL" +msgstr "GitHub チーム OAuth2 コールバック URL" + +#: sso/conf.py:767 sso/conf.py:778 sso/conf.py:788 sso/conf.py:800 +#: sso/conf.py:811 sso/conf.py:823 +msgid "GitHub Team OAuth2" +msgstr "GitHub チーム OAuth2" + +#: sso/conf.py:776 +msgid "GitHub Team OAuth2 Key" +msgstr "GitHub チーム OAuth2 キー" + +#: sso/conf.py:786 +msgid "GitHub Team OAuth2 Secret" +msgstr "GitHub チーム OAuth2 シークレット" + +#: sso/conf.py:797 +msgid "GitHub Team ID" +msgstr "GitHub チーム ID" + +#: sso/conf.py:798 +msgid "" +"Find the numeric team ID using the Github API: http://fabian-" +"kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/." +msgstr "" +"Github API を使用して数値のチーム ID を検索します: http://fabian-" +"kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/" + +#: sso/conf.py:809 +msgid "GitHub Team OAuth2 Organization Map" +msgstr "GitHub チーム OAuth2 組織マップ" + +#: sso/conf.py:821 +msgid "GitHub Team OAuth2 Team Map" +msgstr "GitHub チーム OAuth2 チームマップ" + +#: sso/conf.py:837 +msgid "Azure AD OAuth2 Callback URL" +msgstr "Azure AD OAuth2 コールバック URL" + +#: sso/conf.py:838 +msgid "" +"Register an Azure AD application as described by https://msdn.microsoft.com" +"/en-us/library/azure/dn132599.aspx and obtain an OAuth2 key (Client ID) and " +"secret (Client Secret). Provide this URL as the callback URL for your " +"application." 
+msgstr "" +"Azure AD アプリケーションを https://msdn.microsoft.com/en-" +"us/library/azure/dn132599.aspx の説明に従って登録し、OAuth2 キー (クライアント ID) およびシークレット " +"(クライアントシークレット) を取得します。この URL をアプリケーションのコールバック URL として指定します。" + +#: sso/conf.py:842 sso/conf.py:853 sso/conf.py:863 sso/conf.py:875 +#: sso/conf.py:887 +msgid "Azure AD OAuth2" +msgstr "Azure AD OAuth2" + +#: sso/conf.py:851 +msgid "Azure AD OAuth2 Key" +msgstr "Azure AD OAuth2 キー" + +#: sso/conf.py:852 +msgid "The OAuth2 key (Client ID) from your Azure AD application." +msgstr "Azure AD アプリケーションからの OAuth2 キー (クライアント ID)。" + +#: sso/conf.py:861 +msgid "Azure AD OAuth2 Secret" +msgstr "Azure AD OAuth2 シークレット" + +#: sso/conf.py:862 +msgid "The OAuth2 secret (Client Secret) from your Azure AD application." +msgstr "Azure AD アプリケーションからの OAuth2 シークレット (クライアントシークレット)。" + +#: sso/conf.py:873 +msgid "Azure AD OAuth2 Organization Map" +msgstr "Azure AD OAuth2 組織マップ" + +#: sso/conf.py:885 +msgid "Azure AD OAuth2 Team Map" +msgstr "Azure AD OAuth2 チームマップ" + +#: sso/conf.py:906 +msgid "SAML Service Provider Callback URL" +msgstr "SAML サービスプロバイダーコールバック URL" + +#: sso/conf.py:907 +msgid "" +"Register Tower as a service provider (SP) with each identity provider (IdP) " +"you have configured. Provide your SP Entity ID and this callback URL for " +"your application." +msgstr "" +"設定済みの各アイデンティティープロバイダー (IdP) で Tower をサービスプロバイダー (SP) として登録します。SP エンティティー ID " +"およびアプリケーションのこのコールバック URL を指定します。" + +#: sso/conf.py:910 sso/conf.py:924 sso/conf.py:937 sso/conf.py:951 +#: sso/conf.py:965 sso/conf.py:983 sso/conf.py:1005 sso/conf.py:1024 +#: sso/conf.py:1044 sso/conf.py:1078 sso/conf.py:1091 +msgid "SAML" +msgstr "SAML" + +#: sso/conf.py:921 +msgid "SAML Service Provider Metadata URL" +msgstr "SAML サービスプロバイダーメタデータ URL" + +#: sso/conf.py:922 +msgid "" +"If your identity provider (IdP) allows uploading an XML metadata file, you " +"can download one from this URL." 
+msgstr "" +"アイデンティティープロバイダー (IdP) が XML メタデータファイルのアップロードを許可する場合、この URL からダウンロードできます。" + +#: sso/conf.py:934 +msgid "SAML Service Provider Entity ID" +msgstr "SAML サービスプロバイダーエンティティー ID" + +#: sso/conf.py:935 +msgid "" +"The application-defined unique identifier used as the audience of the SAML " +"service provider (SP) configuration." +msgstr "SAML サービスプロバイダー (SP) 設定の対象として使用されるアプリケーションで定義される固有識別子。" + +#: sso/conf.py:948 +msgid "SAML Service Provider Public Certificate" +msgstr "SAML サービスプロバイダーの公開証明書" + +#: sso/conf.py:949 +msgid "" +"Create a keypair for Tower to use as a service provider (SP) and include the" +" certificate content here." +msgstr "サービスプロバイダー (SP) として使用するための Tower のキーペアを作成し、ここに証明書の内容を組み込みます。" + +#: sso/conf.py:962 +msgid "SAML Service Provider Private Key" +msgstr "SAML サービスプロバイダーの秘密鍵|" + +#: sso/conf.py:963 +msgid "" +"Create a keypair for Tower to use as a service provider (SP) and include the" +" private key content here." +msgstr "サービスプロバイダー (SP) として使用するための Tower のキーペアを作成し、ここに秘密鍵の内容を組み込みます。" + +#: sso/conf.py:981 +msgid "SAML Service Provider Organization Info" +msgstr "SAML サービスプロバイダーの組織情報" + +#: sso/conf.py:982 +msgid "Configure this setting with information about your app." +msgstr "アプリの情報でこの設定を行います。" + +#: sso/conf.py:1003 +msgid "SAML Service Provider Technical Contact" +msgstr "SAML サービスプロバイダーテクニカルサポートの問い合わせ先" + +#: sso/conf.py:1004 sso/conf.py:1023 +msgid "Configure this setting with your contact information." +msgstr "問い合わせ先情報で設定を行います。" + +#: sso/conf.py:1022 +msgid "SAML Service Provider Support Contact" +msgstr "SAML サービスプロバイダーサポートの問い合わせ先" + +#: sso/conf.py:1037 +msgid "SAML Enabled Identity Providers" +msgstr "SAML で有効にされたアイデンティティープロバイダー" + +#: sso/conf.py:1038 +msgid "" +"Configure the Entity ID, SSO URL and certificate for each identity provider " +"(IdP) in use. Multiple SAML IdPs are supported. 
Some IdPs may provide user " +"data using attribute names that differ from the default OIDs " +"(https://github.com/omab/python-social-" +"auth/blob/master/social/backends/saml.py#L16). Attribute names may be " +"overridden for each IdP." +msgstr "" +"使用中のそれぞれのアイデンティティープロバイダー (IdP) についてのエンティティー ID、SSO URL および証明書を設定します。複数の SAML" +" IdP がサポートされます。一部の IdP はデフォルト OID とは異なる属性名を使用してユーザーデータを提供することがあります " +"(https://github.com/omab/python-social-" +"auth/blob/master/social/backends/saml.py#L16)。それぞれの IdP の属性名を上書きできます。" + +#: sso/conf.py:1076 +msgid "SAML Organization Map" +msgstr "SAML 組織マップ" + +#: sso/conf.py:1089 +msgid "SAML Team Map" +msgstr "SAML チームマップ" + +#: sso/fields.py:123 +msgid "Invalid connection option(s): {invalid_options}." +msgstr "無効な接続オプション: {invalid_options}" + +#: sso/fields.py:194 +msgid "Base" +msgstr "ベース" + +#: sso/fields.py:195 +msgid "One Level" +msgstr "1 レベル" + +#: sso/fields.py:196 +msgid "Subtree" +msgstr "サブツリー" + +#: sso/fields.py:214 +msgid "Expected a list of three items but got {length} instead." +msgstr "3 つの項目の一覧が予期されましが、{length} が取得されました。" + +#: sso/fields.py:215 +msgid "Expected an instance of LDAPSearch but got {input_type} instead." +msgstr "LDAPSearch のインスタンスが予期されましたが、{input_type} が取得されました。" + +#: sso/fields.py:251 +msgid "" +"Expected an instance of LDAPSearch or LDAPSearchUnion but got {input_type} " +"instead." +msgstr "" +"LDAPSearch または LDAPSearchUnion のインスタンスが予期されましたが、{input_type} が取得されました。" + +#: sso/fields.py:278 +msgid "Invalid user attribute(s): {invalid_attrs}." +msgstr "無効なユーザー属性: {invalid_attrs}" + +#: sso/fields.py:295 +msgid "Expected an instance of LDAPGroupType but got {input_type} instead." +msgstr "LDAPGroupType のインスタンスが予期されましたが、{input_type} が取得されました。" + +#: sso/fields.py:323 +msgid "Invalid user flag: \"{invalid_flag}\"." +msgstr "無効なユーザーフラグ: \"{invalid_flag}\"" + +#: sso/fields.py:339 sso/fields.py:506 +msgid "" +"Expected None, True, False, a string or list of strings but got {input_type}" +" instead." 
+msgstr "None、True、False、文字列または文字列の一覧が予期されましたが、{input_type} が取得されました。" + +#: sso/fields.py:375 +msgid "Missing key(s): {missing_keys}." +msgstr "キーがありません: {missing_keys}" + +#: sso/fields.py:376 +msgid "Invalid key(s): {invalid_keys}." +msgstr "無効なキー: {invalid_keys}" + +#: sso/fields.py:425 sso/fields.py:542 +msgid "Invalid key(s) for organization map: {invalid_keys}." +msgstr "組織マップの無効なキー: {invalid_keys}" + +#: sso/fields.py:443 +msgid "Missing required key for team map: {invalid_keys}." +msgstr "チームマップの必要なキーがありません: {invalid_keys}" + +#: sso/fields.py:444 sso/fields.py:561 +msgid "Invalid key(s) for team map: {invalid_keys}." +msgstr "チームマップの無効なキー: {invalid_keys}" + +#: sso/fields.py:560 +msgid "Missing required key for team map: {missing_keys}." +msgstr "チームマップで必要なキーがありません: {missing_keys}" + +#: sso/fields.py:578 +msgid "Missing required key(s) for org info record: {missing_keys}." +msgstr "組織情報レコードで必要なキーがありません: {missing_keys}" + +#: sso/fields.py:591 +msgid "Invalid language code(s) for org info: {invalid_lang_codes}." +msgstr "組織情報の無効な言語コード: {invalid_lang_codes}" + +#: sso/fields.py:610 +msgid "Missing required key(s) for contact: {missing_keys}." +msgstr "問い合わせ先の必要なキーがありません: {missing_keys}" + +#: sso/fields.py:622 +msgid "Missing required key(s) for IdP: {missing_keys}." 
+msgstr "IdP で必要なキーがありません: {missing_keys}" + +#: sso/pipeline.py:24 +msgid "An account cannot be found for {0}" +msgstr "{0} のアカウントが見つかりません" + +#: sso/pipeline.py:30 +msgid "Your account is inactive" +msgstr "アカウントが非アクティブです" + +#: sso/validators.py:19 sso/validators.py:44 +#, python-format +msgid "DN must include \"%%(user)s\" placeholder for username: %s" +msgstr "DN にはユーザー名の \"%%(user)s\" プレースホルダーを含める必要があります: %s" + +#: sso/validators.py:26 +#, python-format +msgid "Invalid DN: %s" +msgstr "無効な DN: %s" + +#: sso/validators.py:56 +#, python-format +msgid "Invalid filter: %s" +msgstr "無効なフィルター: %s" + +#: templates/error.html:4 ui/templates/ui/index.html:8 +msgid "Ansible Tower" +msgstr "Ansible Tower" + +#: templates/rest_framework/api.html:39 +msgid "Ansible Tower API Guide" +msgstr "Ansible Tower API ガイド" + +#: templates/rest_framework/api.html:40 +msgid "Back to Ansible Tower" +msgstr "Ansible Tower に戻る" + +#: templates/rest_framework/api.html:41 +msgid "Resize" +msgstr "サイズの変更" + +#: templates/rest_framework/base.html:78 templates/rest_framework/base.html:92 +#, python-format +msgid "Make a GET request on the %(name)s resource" +msgstr "%(name)s リソースでの GET 要求" + +#: templates/rest_framework/base.html:80 +msgid "Specify a format for the GET request" +msgstr "GET 要求の形式を指定" + +#: templates/rest_framework/base.html:86 +#, python-format +msgid "" +"Make a GET request on the %(name)s resource with the format set to " +"`%(format)s`" +msgstr "形式が `%(format)s` に設定された状態での %(name)s リソースでの GET 要求" + +#: templates/rest_framework/base.html:100 +#, python-format +msgid "Make an OPTIONS request on the %(name)s resource" +msgstr "%(name)s リソースでの OPTIONS 要求" + +#: templates/rest_framework/base.html:106 +#, python-format +msgid "Make a DELETE request on the %(name)s resource" +msgstr "%(name)s リソースでの DELETE 要求" + +#: templates/rest_framework/base.html:113 +msgid "Filters" +msgstr "フィルター" + +#: templates/rest_framework/base.html:172 +#: templates/rest_framework/base.html:186 +#, 
python-format +msgid "Make a POST request on the %(name)s resource" +msgstr "%(name)s リソースでの POST 要求" + +#: templates/rest_framework/base.html:216 +#: templates/rest_framework/base.html:230 +#, python-format +msgid "Make a PUT request on the %(name)s resource" +msgstr "%(name)s リソースでの PUT 要求" + +#: templates/rest_framework/base.html:233 +#, python-format +msgid "Make a PATCH request on the %(name)s resource" +msgstr "%(name)s リソースでの PATCH 要求" + +#: ui/apps.py:9 ui/conf.py:22 ui/conf.py:38 ui/conf.py:53 +msgid "UI" +msgstr "UI" + +#: ui/conf.py:16 +msgid "Off" +msgstr "オフ" + +#: ui/conf.py:17 +msgid "Anonymous" +msgstr "匿名" + +#: ui/conf.py:18 +msgid "Detailed" +msgstr "詳細" + +#: ui/conf.py:20 +msgid "Analytics Tracking State" +msgstr "アナリティクストラッキングの状態" + +#: ui/conf.py:21 +msgid "Enable or Disable Analytics Tracking." +msgstr "アナリティクストラッキングの有効化/無効化。" + +#: ui/conf.py:31 +msgid "Custom Login Info" +msgstr "カスタムログイン情報" + +#: ui/conf.py:32 +msgid "" +"If needed, you can add specific information (such as a legal notice or a " +"disclaimer) to a text box in the login modal using this setting. Any content" +" added must be in plain text, as custom HTML or other markup languages are " +"not supported. If multiple paragraphs of text are needed, new lines " +"(paragraphs) must be escaped as `\\n` within the block of text." +msgstr "" +"必要な場合は、この設定を使ってログインモーダルのテキストボックスに特定の情報 (法律上の通知または免責事項など) " +"を追加できます。追加されるすべてのコンテンツは、カスタム HTML " +"や他のマークアップ言語がサポートされないため、プレーンテキストでなければなりません。テキストの複数のパラグラフが必要な場合、改行 (パラグラフ) " +"をテキストのブロック内の `\\n` としてエスケープする必要があります。" + +#: ui/conf.py:48 +msgid "Custom Logo" +msgstr "カスタムロゴ" + +#: ui/conf.py:49 +msgid "" +"To set up a custom logo, provide a file that you create. For the custom logo" +" to look its best, use a `.png` file with a transparent background. GIF, PNG" +" and JPEG formats are supported." 
+msgstr "" +"カスタムロゴをセットアップするには、作成するファイルを指定します。カスタムロゴを最適化するには、背景が透明の「.png」ファイルを使用します。GIF、PNG" +" および JPEG 形式がサポートされます。" + +#: ui/fields.py:29 +msgid "" +"Invalid format for custom logo. Must be a data URL with a base64-encoded " +"GIF, PNG or JPEG image." +msgstr "" +"カスタムロゴの無効な形式です。base64 エンコードされた GIF、PNG または JPEG イメージと共にデータ URL を指定する必要があります。" + +#: ui/fields.py:30 +msgid "Invalid base64-encoded data in data URL." +msgstr "データ URL の無効な base64 エンコードされたデータ。" + +#: ui/templates/ui/index.html:49 +msgid "" +"Your session will expire in 60 seconds, would you like to " +"continue?" +msgstr "" +"セッションは 60 秒後に期限切れになります。続行しますか?" + +#: ui/templates/ui/index.html:64 +msgid "CANCEL" +msgstr "取り消し" + +#: ui/templates/ui/index.html:116 +msgid "Set how many days of data should be retained." +msgstr "データの保持日数を設定します。" + +#: ui/templates/ui/index.html:122 +msgid "" +"Please enter an integer that is not " +"negative that is lower than " +"9999." +msgstr "" +"負でない 9999 " +"より値の小さい整数を入力してください。" + +#: ui/templates/ui/index.html:127 +msgid "" +"For facts collected older than the time period specified, save one fact scan (snapshot) per time window (frequency). For example, facts older than 30 days are purged, while one weekly fact scan is kept.\n" +"
\n" +"
CAUTION: Setting both numerical variables to \"0\" will delete all facts.\n" +"
\n" +"
" +msgstr "" +"指定された期間の前に収集されたファクトについては、時間枠 (頻度) ごとに 1 つのファクトスキャン (スナップショット) を保存します。たとえば、30 日間の前のファクトは削除され、1 つの週次ファクトは保持されます。\n" +"
\n" +"
注意: どちらの数値変数も「0」に設定すると、すべてのファクトが削除されます。\n" +"
\n" +"
" + +#: ui/templates/ui/index.html:136 +msgid "Select a time period after which to remove old facts" +msgstr "古いファクトを削除するまでの期間を選択" + +#: ui/templates/ui/index.html:150 +msgid "" +"Please enter an integer that is not " +"negative that is lower than " +"9999." +msgstr "" +"負でない 9999 " +"より値の小さい整数を入力してください。" + +#: ui/templates/ui/index.html:155 +msgid "Select a frequency for snapshot retention" +msgstr "スナップショットの保持頻度を選択" + +#: ui/templates/ui/index.html:169 +msgid "" +"Please enter an integer that is not" +" negative that is " +"lower than 9999." +msgstr "" +"負でない 9999 " +"よりも値の小さい
整数を入力してください。" + +#: ui/templates/ui/index.html:175 +msgid "working..." +msgstr "実行中..." diff --git a/awx/main/access.py b/awx/main/access.py index 7f28e0a7ce..9417563f75 100644 --- a/awx/main/access.py +++ b/awx/main/access.py @@ -285,7 +285,7 @@ class BaseAccess(object): return True # User has access to both, permission check passed - def check_license(self, add_host=False, feature=None, check_expiration=True): + def check_license(self, add_host_name=None, feature=None, check_expiration=True): validation_info = TaskEnhancer().validate_enhancements() if ('test' in sys.argv or 'py.test' in sys.argv[0] or 'jenkins' in sys.argv) and not os.environ.get('SKIP_LICENSE_FIXUP_FOR_TEST', ''): validation_info['free_instances'] = 99999999 @@ -299,11 +299,14 @@ class BaseAccess(object): free_instances = validation_info.get('free_instances', 0) available_instances = validation_info.get('available_instances', 0) - if add_host and free_instances == 0: - raise PermissionDenied(_("License count of %s instances has been reached.") % available_instances) - elif add_host and free_instances < 0: - raise PermissionDenied(_("License count of %s instances has been exceeded.") % available_instances) - elif not add_host and free_instances < 0: + + if add_host_name: + host_exists = Host.objects.filter(name=add_host_name).exists() + if not host_exists and free_instances == 0: + raise PermissionDenied(_("License count of %s instances has been reached.") % available_instances) + elif not host_exists and free_instances < 0: + raise PermissionDenied(_("License count of %s instances has been exceeded.") % available_instances) + elif not add_host_name and free_instances < 0: raise PermissionDenied(_("Host count exceeds available instances.")) if feature is not None: @@ -353,7 +356,7 @@ class BaseAccess(object): # Shortcuts in certain cases by deferring to earlier property if display_method == 'schedule': - user_capabilities['schedule'] = user_capabilities['edit'] + user_capabilities['schedule'] = 
user_capabilities['start'] continue elif display_method == 'delete' and not isinstance(obj, (User, UnifiedJob)): user_capabilities['delete'] = user_capabilities['edit'] @@ -363,27 +366,30 @@ class BaseAccess(object): continue # Compute permission - data = {} - access_method = getattr(self, "can_%s" % method) - if method in ['change']: # 3 args - user_capabilities[display_method] = access_method(obj, data) - elif method in ['delete', 'run_ad_hoc_commands', 'copy']: - user_capabilities[display_method] = access_method(obj) - elif method in ['start']: - user_capabilities[display_method] = access_method(obj, validate_license=False) - elif method in ['add']: # 2 args with data - user_capabilities[display_method] = access_method(data) - elif method in ['attach', 'unattach']: # parent/sub-object call - if type(parent_obj) == Team: - relationship = 'parents' - parent_obj = parent_obj.member_role - else: - relationship = 'members' - user_capabilities[display_method] = access_method( - obj, parent_obj, relationship, skip_sub_obj_read_check=True, data=data) + user_capabilities[display_method] = self.get_method_capability(method, obj, parent_obj) return user_capabilities + def get_method_capability(self, method, obj, parent_obj): + if method in ['change']: # 3 args + return self.can_change(obj, {}) + elif method in ['delete', 'run_ad_hoc_commands', 'copy']: + access_method = getattr(self, "can_%s" % method) + return access_method(obj) + elif method in ['start']: + return self.can_start(obj, validate_license=False) + elif method in ['add']: # 2 args with data + return self.can_add({}) + elif method in ['attach', 'unattach']: # parent/sub-object call + access_method = getattr(self, "can_%s" % method) + if type(parent_obj) == Team: + relationship = 'parents' + parent_obj = parent_obj.member_role + else: + relationship = 'members' + return access_method(obj, parent_obj, relationship, skip_sub_obj_read_check=True, data={}) + return False + class UserAccess(BaseAccess): ''' @@ 
-402,23 +408,24 @@ class UserAccess(BaseAccess): def get_queryset(self): if self.user.is_superuser or self.user.is_system_auditor: - return User.objects.all() + qs = User.objects.all() - if settings.ORG_ADMINS_CAN_SEE_ALL_USERS and \ + elif settings.ORG_ADMINS_CAN_SEE_ALL_USERS and \ (self.user.admin_of_organizations.exists() or self.user.auditor_of_organizations.exists()): - return User.objects.all() - - return ( - User.objects.filter( - pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members') - ) | - User.objects.filter( - pk=self.user.id - ) | - User.objects.filter( - pk__in=Role.objects.filter(singleton_name__in = [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members') - ) - ).distinct() + qs = User.objects.all() + else: + qs = ( + User.objects.filter( + pk__in=Organization.accessible_objects(self.user, 'read_role').values('member_role__members') + ) | + User.objects.filter( + pk=self.user.id + ) | + User.objects.filter( + pk__in=Role.objects.filter(singleton_name__in = [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR]).values('members') + ) + ).distinct() + return qs.prefetch_related('profile') def can_add(self, data): @@ -485,7 +492,7 @@ class OrganizationAccess(BaseAccess): def get_queryset(self): qs = self.model.accessible_objects(self.user, 'read_role') - return qs.select_related('created_by', 'modified_by').all() + return qs.prefetch_related('created_by', 'modified_by').all() @check_superuser def can_change(self, obj, data): @@ -608,7 +615,7 @@ class HostAccess(BaseAccess): return False # Check to see if we have enough licenses - self.check_license(add_host=True) + self.check_license(add_host_name=data.get('name', None)) return True def can_change(self, obj, data): @@ -616,6 +623,11 @@ class HostAccess(BaseAccess): inventory_pk = get_pk_from_dict(data, 'inventory') if obj and inventory_pk and obj.inventory.pk != inventory_pk: raise PermissionDenied(_('Unable to change 
inventory on a host.')) + + # Prevent renaming a host that might exceed license count + if 'name' in data: + self.check_license(add_host_name=data['name']) + # Checks for admin or change permission on inventory, controls whether # the user can edit variable data. return obj and self.user in obj.inventory.admin_role @@ -837,15 +849,7 @@ class CredentialAccess(BaseAccess): def can_change(self, obj, data): if not obj: return False - - # Cannot change the organization for a credential after it's been created - if data and 'organization' in data: - organization_pk = get_pk_from_dict(data, 'organization') - if (organization_pk and (not obj.organization or organization_pk != obj.organization.id)) \ - or (not organization_pk and obj.organization): - return False - - return self.user in obj.admin_role + return self.user in obj.admin_role and self.check_related('organization', Organization, data, obj=obj) def can_delete(self, obj): # Unassociated credentials may be marked deleted by anyone, though we @@ -990,8 +994,6 @@ class ProjectUpdateAccess(BaseAccess): @check_superuser def can_cancel(self, obj): - if not obj.can_cancel: - return False if self.user == obj.created_by: return True # Project updates cascade delete with project, admin role descends from org admin @@ -1048,7 +1050,7 @@ class JobTemplateAccess(BaseAccess): Project.accessible_objects(self.user, 'use_role').exists() or Inventory.accessible_objects(self.user, 'use_role').exists()) - # if reference_obj is provided, determine if it can be coppied + # if reference_obj is provided, determine if it can be copied reference_obj = data.get('reference_obj', None) if 'job_type' in data and data['job_type'] == PERM_INVENTORY_SCAN: @@ -1223,7 +1225,7 @@ class JobAccess(BaseAccess): model = Job def get_queryset(self): - qs = self.model.objects.distinct() + qs = self.model.objects qs = qs.select_related('created_by', 'modified_by', 'job_template', 'inventory', 'project', 'credential', 'cloud_credential', 'job_template') qs = 
qs.prefetch_related('unified_job_template') @@ -1370,7 +1372,6 @@ class SystemJobAccess(BaseAccess): return False # no relaunching of system jobs -# TODO: class WorkflowJobTemplateNodeAccess(BaseAccess): ''' I can see/use a WorkflowJobTemplateNode if I have read permission @@ -1401,25 +1402,23 @@ class WorkflowJobTemplateNodeAccess(BaseAccess): qs = self.model.objects.filter( workflow_job_template__in=WorkflowJobTemplate.accessible_objects( self.user, 'read_role')) - qs = qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes') + qs = qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes', + 'unified_job_template') return qs def can_use_prompted_resources(self, data): - if not self.check_related('credential', Credential, data): - return False - if not self.check_related('inventory', Inventory, data): - return False - return True + return ( + self.check_related('credential', Credential, data, role_field='use_role') and + self.check_related('inventory', Inventory, data, role_field='use_role')) @check_superuser def can_add(self, data): if not data: # So the browseable API will work return True - if not self.check_related('workflow_job_template', WorkflowJobTemplate, data, mandatory=True): - return False - if not self.can_use_prompted_resources(data): - return False - return True + return ( + self.check_related('workflow_job_template', WorkflowJobTemplate, data, mandatory=True) and + self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role') and + self.can_use_prompted_resources(data)) def wfjt_admin(self, obj): if not obj.workflow_job_template: @@ -1490,8 +1489,14 @@ class WorkflowJobNodeAccess(BaseAccess): qs = qs.prefetch_related('success_nodes', 'failure_nodes', 'always_nodes') return qs + @check_superuser def can_add(self, data): - return False + if data is None: # Hide direct creation in API browser + return False + return ( + self.check_related('unified_job_template', UnifiedJobTemplate, data, 
role_field='execute_role') and + self.check_related('credential', Credential, data, role_field='use_role') and + self.check_related('inventory', Inventory, data, role_field='use_role')) def can_change(self, obj, data): return False @@ -1540,24 +1545,30 @@ class WorkflowJobTemplateAccess(BaseAccess): def can_copy(self, obj): if self.save_messages: - wfjt_errors = {} + missing_ujt = [] + missing_credentials = [] + missing_inventories = [] qs = obj.workflow_job_template_nodes qs.select_related('unified_job_template', 'inventory', 'credential') for node in qs.all(): node_errors = {} if node.inventory and self.user not in node.inventory.use_role: - node_errors['inventory'] = 'Prompted inventory %s can not be coppied.' % node.inventory.name + missing_inventories.append(node.inventory.name) if node.credential and self.user not in node.credential.use_role: - node_errors['credential'] = 'Prompted credential %s can not be coppied.' % node.credential.name + missing_credentials.append(node.credential.name) ujt = node.unified_job_template if ujt and not self.user.can_access(UnifiedJobTemplate, 'start', ujt, validate_license=False): - node_errors['unified_job_template'] = ( - 'Prompted %s %s can not be coppied.' 
% (ujt._meta.verbose_name_raw, ujt.name)) + missing_ujt.append(ujt.name) if node_errors: wfjt_errors[node.id] = node_errors - self.messages.update(wfjt_errors) + if missing_ujt: + self.messages['templates_unable_to_copy'] = missing_ujt + if missing_credentials: + self.messages['credentials_unable_to_copy'] = missing_credentials + if missing_inventories: + self.messages['inventories_unable_to_copy'] = missing_inventories - return self.check_related('organization', Organization, {}, obj=obj, mandatory=True) + return self.check_related('organization', Organization, {'reference_obj': obj}, mandatory=True) def can_start(self, obj, validate_license=True): if validate_license: @@ -1623,22 +1634,50 @@ class WorkflowJobAccess(BaseAccess): def can_change(self, obj, data): return False + @check_superuser def can_delete(self, obj): - if obj.workflow_job_template is None: - # only superusers can delete orphaned workflow jobs - return self.user.is_superuser - return self.user in obj.workflow_job_template.admin_role + return (obj.workflow_job_template and + obj.workflow_job_template.organization and + self.user in obj.workflow_job_template.organization.admin_role) + + def get_method_capability(self, method, obj, parent_obj): + if method == 'start': + # Return simplistic permission, will perform detailed check on POST + if not obj.workflow_job_template: + return self.user.is_superuser + return self.user in obj.workflow_job_template.execute_role + return super(WorkflowJobAccess, self).get_method_capability(method, obj, parent_obj) def can_start(self, obj, validate_license=True): if validate_license: self.check_license() - if obj.survey_enabled: - self.check_license(feature='surveys') if self.user.is_superuser: return True - return (obj.workflow_job_template and self.user in obj.workflow_job_template.execute_role) + wfjt = obj.workflow_job_template + # only superusers can relaunch orphans + if not wfjt: + return False + + # execute permission to WFJT is mandatory for any relaunch + 
if self.user not in wfjt.execute_role: + return False + + # user's WFJT access doesn't guarantee permission to launch, introspect nodes + return self.can_recreate(obj) + + def can_recreate(self, obj): + node_qs = obj.workflow_job_nodes.all().prefetch_related('inventory', 'credential', 'unified_job_template') + node_access = WorkflowJobNodeAccess(user=self.user) + wj_add_perm = True + for node in node_qs: + if not node_access.can_add({'reference_obj': node}): + wj_add_perm = False + if not wj_add_perm and self.save_messages: + self.messages['workflow_job_template'] = _('You do not have permission to the workflow job ' + 'resources required for relaunch.') + return wj_add_perm def can_cancel(self, obj): if not obj.can_cancel: @@ -1766,21 +1805,15 @@ class JobEventAccess(BaseAccess): model = JobEvent def get_queryset(self): - qs = self.model.objects.all() - qs = qs.select_related('job', 'job__job_template', 'host', 'parent') - qs = qs.prefetch_related('hosts', 'children') - - # Filter certain "internal" events generated by async polling. - qs = qs.exclude(event__in=('runner_on_ok', 'runner_on_failed'), - event_data__icontains='"ansible_job_id": "', - event_data__contains='"module_name": "async_status"') + qs = self.model.objects + qs = qs.prefetch_related('hosts', 'children', 'job__job_template', 'host') if self.user.is_superuser or self.user.is_system_auditor: return qs.all() - job_qs = self.user.get_queryset(Job) - host_qs = self.user.get_queryset(Host) - return qs.filter(Q(host__isnull=True) | Q(host__in=host_qs), job__in=job_qs) + return qs.filter( + Q(host__inventory__in=Inventory.accessible_pk_qs(self.user, 'read_role')) | + Q(job__job_template__in=JobTemplate.accessible_pk_qs(self.user, 'read_role'))) def can_add(self, data): return False @@ -1795,29 +1828,29 @@ class JobEventAccess(BaseAccess): class UnifiedJobTemplateAccess(BaseAccess): ''' I can see a unified job template whenever I can see the same project, - inventory source or job template.
Unified job templates do not include - projects without SCM configured or inventory sources without a cloud - source. + inventory source, WFJT, or job template. Unified job templates do not include + inventory sources without a cloud source. ''' model = UnifiedJobTemplate def get_queryset(self): - qs = self.model.objects.all() - project_qs = self.user.get_queryset(Project).filter(scm_type__in=[s[0] for s in Project.SCM_TYPE_CHOICES]) - inventory_source_qs = self.user.get_queryset(InventorySource).filter(source__in=CLOUD_INVENTORY_SOURCES) - job_template_qs = self.user.get_queryset(JobTemplate) - system_job_template_qs = self.user.get_queryset(SystemJobTemplate) - workflow_job_template_qs = self.user.get_queryset(WorkflowJobTemplate) - qs = qs.filter(Q(Project___in=project_qs) | - Q(InventorySource___in=inventory_source_qs) | - Q(JobTemplate___in=job_template_qs) | - Q(systemjobtemplate__in=system_job_template_qs) | - Q(workflowjobtemplate__in=workflow_job_template_qs)) + if self.user.is_superuser or self.user.is_system_auditor: + qs = self.model.objects.all() + else: + qs = self.model.objects.filter( + Q(pk__in=self.model.accessible_pk_qs(self.user, 'read_role')) | + Q(inventorysource__inventory__id__in=Inventory._accessible_pk_qs( + Inventory, self.user, 'read_role'))) + qs = qs.exclude(inventorysource__source="") + qs = qs.select_related( 'created_by', 'modified_by', 'next_schedule', + ) + # prefetch last/current jobs so we get the real instance + qs = qs.prefetch_related( 'last_job', 'current_job', ) @@ -1849,25 +1882,23 @@ class UnifiedJobAccess(BaseAccess): model = UnifiedJob def get_queryset(self): - qs = self.model.objects.all() - project_update_qs = self.user.get_queryset(ProjectUpdate) - inventory_update_qs = self.user.get_queryset(InventoryUpdate).filter(source__in=CLOUD_INVENTORY_SOURCES) - job_qs = self.user.get_queryset(Job) - ad_hoc_command_qs = self.user.get_queryset(AdHocCommand) - system_job_qs = self.user.get_queryset(SystemJob) - workflow_job_qs 
= self.user.get_queryset(WorkflowJob) - qs = qs.filter(Q(ProjectUpdate___in=project_update_qs) | - Q(InventoryUpdate___in=inventory_update_qs) | - Q(Job___in=job_qs) | - Q(AdHocCommand___in=ad_hoc_command_qs) | - Q(SystemJob___in=system_job_qs) | - Q(WorkflowJob___in=workflow_job_qs)) - qs = qs.select_related( + if self.user.is_superuser or self.user.is_system_auditor: + qs = self.model.objects.all() + else: + inv_pk_qs = Inventory._accessible_pk_qs(Inventory, self.user, 'read_role') + org_auditor_qs = Organization.objects.filter( + Q(admin_role__members=self.user) | Q(auditor_role__members=self.user)) + qs = self.model.objects.filter( + Q(unified_job_template_id__in=UnifiedJobTemplate.accessible_pk_qs(self.user, 'read_role')) | + Q(inventoryupdate__inventory_source__inventory__id__in=inv_pk_qs) | + Q(adhoccommand__inventory__id__in=inv_pk_qs) | + Q(job__inventory__organization__in=org_auditor_qs) | + Q(job__project__organization__in=org_auditor_qs) + ) + qs = qs.prefetch_related( 'created_by', 'modified_by', 'unified_job_node__workflow_job', - ) - qs = qs.prefetch_related( 'unified_job_template', ) @@ -1923,11 +1954,17 @@ class ScheduleAccess(BaseAccess): @check_superuser def can_add(self, data): - return self.check_related('unified_job_template', UnifiedJobTemplate, data, mandatory=True) + return self.check_related('unified_job_template', UnifiedJobTemplate, data, role_field='execute_role', mandatory=True) @check_superuser def can_change(self, obj, data): - return self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, mandatory=True) + if self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, mandatory=True): + return True + # Users with execute role can modify the schedules they created + return ( + obj.created_by == self.user and + self.check_related('unified_job_template', UnifiedJobTemplate, data, obj=obj, role_field='execute_role', mandatory=True)) + def can_delete(self, obj): return self.can_change(obj, {}) @@ 
-1989,9 +2026,9 @@ class NotificationAccess(BaseAccess): model = Notification def get_queryset(self): - qs = self.model.objects.all() + qs = self.model.objects.prefetch_related('notification_template') if self.user.is_superuser or self.user.is_system_auditor: - return qs + return qs.all() return self.model.objects.filter( Q(notification_template__organization__in=self.user.admin_of_organizations) | Q(notification_template__organization__in=self.user.auditor_of_organizations) @@ -2067,11 +2104,12 @@ class ActivityStreamAccess(BaseAccess): - custom inventory scripts ''' qs = self.model.objects.all() - qs = qs.select_related('actor') qs = qs.prefetch_related('organization', 'user', 'inventory', 'host', 'group', 'inventory_source', 'inventory_update', 'credential', 'team', 'project', 'project_update', - 'permission', 'job_template', 'job', 'ad_hoc_command', - 'notification_template', 'notification', 'label', 'role') + 'job_template', 'job', 'ad_hoc_command', + 'notification_template', 'notification', 'label', 'role', 'actor', + 'schedule', 'custom_inventory_script', 'unified_job_template', + 'workflow_job_template', 'workflow_job') if self.user.is_superuser or self.user.is_system_auditor: return qs.all() @@ -2084,6 +2122,7 @@ class ActivityStreamAccess(BaseAccess): project_set = Project.accessible_objects(self.user, 'read_role') jt_set = JobTemplate.accessible_objects(self.user, 'read_role') team_set = Team.accessible_objects(self.user, 'read_role') + wfjt_set = WorkflowJobTemplate.accessible_objects(self.user, 'read_role') return qs.filter( Q(ad_hoc_command__inventory__in=inventory_set) | @@ -2101,6 +2140,9 @@ class ActivityStreamAccess(BaseAccess): Q(project_update__project__in=project_set) | Q(job_template__in=jt_set) | Q(job__job_template__in=jt_set) | + Q(workflow_job_template__in=wfjt_set) | + Q(workflow_job_template_node__workflow_job_template__in=wfjt_set) | + Q(workflow_job__workflow_job_template__in=wfjt_set) | 
Q(notification_template__organization__in=auditing_orgs) | Q(notification__notification_template__organization__in=auditing_orgs) | Q(label__organization__in=auditing_orgs) | diff --git a/awx/main/conf.py b/awx/main/conf.py index 229e8aaf93..2361a54e3e 100644 --- a/awx/main/conf.py +++ b/awx/main/conf.py @@ -134,6 +134,7 @@ register( register( 'AWX_PROOT_HIDE_PATHS', field_class=fields.StringListField, + required=False, label=_('Paths to hide from isolated jobs'), help_text=_('Additional paths to hide from isolated processes.'), category=_('Jobs'), @@ -143,6 +144,7 @@ register( register( 'AWX_PROOT_SHOW_PATHS', field_class=fields.StringListField, + required=False, label=_('Paths to expose to isolated jobs'), help_text=_('Whitelist of paths that would otherwise be hidden to expose to isolated jobs.'), category=_('Jobs'), @@ -182,6 +184,7 @@ register( register( 'AWX_ANSIBLE_CALLBACK_PLUGINS', field_class=fields.StringListField, + required=False, label=_('Ansible Callback Plugins'), help_text=_('List of paths to search for extra callback plugins to be used when running jobs.'), category=_('Jobs'), @@ -228,8 +231,8 @@ register( 'LOG_AGGREGATOR_HOST', field_class=fields.CharField, allow_null=True, - label=_('Logging Aggregator Receiving Host'), - help_text=_('External host maintain a log collector to send logs to'), + label=_('Logging Aggregator'), + help_text=_('Hostname/IP where external logs will be sent to.'), category=_('Logging'), category_slug='logging', ) @@ -237,8 +240,8 @@ register( 'LOG_AGGREGATOR_PORT', field_class=fields.IntegerField, allow_null=True, - label=_('Logging Aggregator Receiving Port'), - help_text=_('Port that the log collector is listening on'), + label=_('Logging Aggregator Port'), + help_text=_('Port on Logging Aggregator to send logs to (if required).'), category=_('Logging'), category_slug='logging', ) @@ -247,8 +250,8 @@ register( field_class=fields.ChoiceField, choices=['logstash', 'splunk', 'loggly', 'sumologic', 'other'], 
allow_null=True, - label=_('Logging Aggregator Type: Logstash, Loggly, Datadog, etc'), - help_text=_('The type of log aggregator service to format messages for'), + label=_('Logging Aggregator Type'), + help_text=_('Format messages for the chosen log aggregator.'), category=_('Logging'), category_slug='logging', ) @@ -256,8 +259,8 @@ register( 'LOG_AGGREGATOR_USERNAME', field_class=fields.CharField, allow_null=True, - label=_('Logging Aggregator Username to Authenticate With'), - help_text=_('Username for Logstash or others (basic auth)'), + label=_('Logging Aggregator Username'), + help_text=_('Username for external log aggregator (if required).'), category=_('Logging'), category_slug='logging', ) @@ -265,8 +268,9 @@ register( 'LOG_AGGREGATOR_PASSWORD', field_class=fields.CharField, allow_null=True, - label=_('Logging Aggregator Password to Authenticate With'), - help_text=_('Password for Logstash or others (basic auth)'), + encrypted=True, + label=_('Logging Aggregator Password/Token'), + help_text=_('Password or authentication token for external log aggregator (if required).'), category=_('Logging'), category_slug='logging', ) @@ -277,11 +281,10 @@ register( label=_('Loggers to send data to the log aggregator from'), help_text=_('List of loggers that will send HTTP logs to the collector, these can ' 'include any or all of: \n' - 'activity_stream - logs duplicate to records entered in activity stream\n' + 'awx - Tower service logs\n' + 'activity_stream - activity stream records\n' 'job_events - callback data from Ansible job events\n' - 'system_tracking - data generated from scan jobs\n' - 'Sending generic Tower logs must be configured through local_settings.py' - 'instead of this mechanism.'), + 'system_tracking - facts gathered from scan jobs.'), category=_('Logging'), category_slug='logging', ) @@ -289,10 +292,11 @@ register( 'LOG_AGGREGATOR_INDIVIDUAL_FACTS', field_class=fields.BooleanField, default=False, - label=_('Flag denoting to send individual messages 
for each fact in system tracking'), - help_text=_('If not set, the data from system tracking will be sent inside ' - 'of a single dictionary, but if set, separate requests will be sent ' - 'for each package, service, etc. that is found in the scan.'), + label=_('Log System Tracking Facts Individually'), + help_text=_('If set, system tracking facts will be sent for each package, service, or ' + 'other item found in a scan, allowing for greater search query granularity. ' + 'If unset, facts will be sent as a single dictionary, allowing for greater ' + 'efficiency in fact processing.'), category=_('Logging'), category_slug='logging', ) @@ -300,8 +304,8 @@ register( 'LOG_AGGREGATOR_ENABLED', field_class=fields.BooleanField, default=False, - label=_('Flag denoting whether to use the external logger system'), - help_text=_('If not set, only normal settings data will be used to configure loggers.'), + label=_('Enable External Logging'), + help_text=_('Enable sending logs to external log aggregator.'), category=_('Logging'), category_slug='logging', ) diff --git a/awx/main/consumers.py b/awx/main/consumers.py index a8c56a264d..2cb1f450f2 100644 --- a/awx/main/consumers.py +++ b/awx/main/consumers.py @@ -6,6 +6,7 @@ from channels import Group from channels.sessions import channel_session from django.contrib.auth.models import User +from django.core.serializers.json import DjangoJSONEncoder from awx.main.models.organization import AuthToken @@ -86,4 +87,4 @@ def ws_receive(message): def emit_channel_notification(group, payload): - Group(group).send({"text": json.dumps(payload)}) + Group(group).send({"text": json.dumps(payload, cls=DjangoJSONEncoder)}) diff --git a/awx/main/management/commands/cleanup_facts.py b/awx/main/management/commands/cleanup_facts.py index a709f81c1a..f6b3c76b26 100644 --- a/awx/main/management/commands/cleanup_facts.py +++ b/awx/main/management/commands/cleanup_facts.py @@ -96,12 +96,12 @@ class Command(BaseCommand): option_list =
BaseCommand.option_list + ( make_option('--older_than', dest='older_than', - default=None, - help='Specify the relative time to consider facts older than (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y).'), + default='30d', + help='Specify the relative time to consider facts older than (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y). Defaults to 30d.'), make_option('--granularity', dest='granularity', - default=None, - help='Window duration to group same hosts by for deletion (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y).'), + default='1w', + help='Window duration to group same hosts by for deletion (w)eek (d)ay or (y)ear (i.e. 5d, 2w, 1y). Defaults to 1w.'), make_option('--module', dest='module', default=None, diff --git a/awx/main/management/commands/cleanup_jobs.py b/awx/main/management/commands/cleanup_jobs.py index 3f7270c9b2..ead5ef50d9 100644 --- a/awx/main/management/commands/cleanup_jobs.py +++ b/awx/main/management/commands/cleanup_jobs.py @@ -12,7 +12,7 @@ from django.db import transaction from django.utils.timezone import now # AWX -from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob +from awx.main.models import Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob, WorkflowJob, Notification class Command(NoArgsCommand): @@ -30,19 +30,25 @@ class Command(NoArgsCommand): 'be removed)'), make_option('--jobs', dest='only_jobs', action='store_true', default=False, - help='Only remove jobs'), + help='Remove jobs'), make_option('--ad-hoc-commands', dest='only_ad_hoc_commands', action='store_true', default=False, - help='Only remove ad hoc commands'), + help='Remove ad hoc commands'), make_option('--project-updates', dest='only_project_updates', action='store_true', default=False, - help='Only remove project updates'), + help='Remove project updates'), make_option('--inventory-updates', dest='only_inventory_updates', action='store_true', default=False, - help='Only remove inventory updates'), + help='Remove inventory updates'), 
make_option('--management-jobs', default=False, action='store_true', dest='only_management_jobs', - help='Only remove management jobs') + help='Remove management jobs'), + make_option('--notifications', dest='only_notifications', + action='store_true', default=False, + help='Remove notifications'), + make_option('--workflow-jobs', default=False, + action='store_true', dest='only_workflow_jobs', + help='Remove workflow jobs') ) def cleanup_jobs(self): @@ -169,6 +175,50 @@ class Command(NoArgsCommand): self.logger.addHandler(handler) self.logger.propagate = False + def cleanup_workflow_jobs(self): + skipped, deleted = 0, 0 + for workflow_job in WorkflowJob.objects.all(): + workflow_job_display = '"{}" (started {}, {} nodes)'.format( + unicode(workflow_job), unicode(workflow_job.created), + workflow_job.workflow_nodes.count()) + if workflow_job.status in ('pending', 'waiting', 'running'): + action_text = 'would skip' if self.dry_run else 'skipping' + self.logger.debug('%s %s job %s', action_text, workflow_job.status, workflow_job_display) + skipped += 1 + elif workflow_job.created >= self.cutoff: + action_text = 'would skip' if self.dry_run else 'skipping' + self.logger.debug('%s %s', action_text, workflow_job_display) + skipped += 1 + else: + action_text = 'would delete' if self.dry_run else 'deleting' + self.logger.info('%s %s', action_text, workflow_job_display) + if not self.dry_run: + workflow_job.delete() + deleted += 1 + return skipped, deleted + + def cleanup_notifications(self): + skipped, deleted = 0, 0 + for notification in Notification.objects.all(): + notification_display = '"{}" (started {}, {} type, {} sent)'.format( + unicode(notification), unicode(notification.created), + notification.notification_type, notification.notifications_sent) + if notification.status in ('pending',): + action_text = 'would skip' if self.dry_run else 'skipping' + self.logger.debug('%s %s notification %s', action_text, notification.status, notification_display) + skipped += 1 
+ elif notification.created >= self.cutoff: + action_text = 'would skip' if self.dry_run else 'skipping' + self.logger.debug('%s %s', action_text, notification_display) + skipped += 1 + else: + action_text = 'would delete' if self.dry_run else 'deleting' + self.logger.info('%s %s', action_text, notification_display) + if not self.dry_run: + notification.delete() + deleted += 1 + return skipped, deleted + @transaction.atomic def handle_noargs(self, **options): self.verbosity = int(options.get('verbosity', 1)) @@ -179,7 +229,8 @@ class Command(NoArgsCommand): self.cutoff = now() - datetime.timedelta(days=self.days) except OverflowError: raise CommandError('--days specified is too large. Try something less than 99999 (about 270 years).') - model_names = ('jobs', 'ad_hoc_commands', 'project_updates', 'inventory_updates', 'management_jobs') + model_names = ('jobs', 'ad_hoc_commands', 'project_updates', 'inventory_updates', + 'management_jobs', 'workflow_jobs', 'notifications') models_to_cleanup = set() for m in model_names: if options.get('only_%s' % m, False): diff --git a/awx/main/management/commands/inventory_import.py b/awx/main/management/commands/inventory_import.py index 7f87694cbe..c1399e1a11 100644 --- a/awx/main/management/commands/inventory_import.py +++ b/awx/main/management/commands/inventory_import.py @@ -64,7 +64,7 @@ class MemObject(object): all_vars = {} files_found = 0 for suffix in ('', '.yml', '.yaml', '.json'): - path = ''.join([base_path, suffix]) + path = ''.join([base_path, suffix]).encode("utf-8") if not os.path.exists(path): continue if not os.path.isfile(path): @@ -462,7 +462,7 @@ class ExecutableJsonLoader(BaseLoader): # to set their variables for k,v in self.all_group.all_hosts.iteritems(): if 'hostvars' not in _meta: - data = self.command_to_json([self.source, '--host', k]) + data = self.command_to_json([self.source, '--host', k.encode("utf-8")]) else: data = _meta['hostvars'].get(k, {}) if isinstance(data, dict): @@ -482,6 +482,7 @@ def 
load_inventory_source(source, all_group=None, group_filter_re=None, # good naming conventions source = source.replace('azure.py', 'windows_azure.py') source = source.replace('satellite6.py', 'foreman.py') + source = source.replace('vmware.py', 'vmware_inventory.py') logger.debug('Analyzing type of source: %s', source) original_all_group = all_group if not os.path.exists(source): @@ -1191,7 +1192,7 @@ class Command(NoArgsCommand): def check_license(self): license_info = TaskEnhancer().validate_enhancements() - if not license_info or len(license_info) == 0: + if license_info.get('license_key', 'UNLICENSED') == 'UNLICENSED': self.logger.error(LICENSE_NON_EXISTANT_MESSAGE) raise CommandError('No Tower license found!') available_instances = license_info.get('available_instances', 0) @@ -1253,6 +1254,12 @@ class Command(NoArgsCommand): except re.error: raise CommandError('invalid regular expression for --host-filter') + ''' + TODO: Remove this deprecation when we remove support for rax.py + ''' + if self.source == "rax.py": + self.logger.info("Rackspace inventory sync is deprecated in Tower 3.1.0 and support for Rackspace will be removed in a future release.") + begin = time.time() self.load_inventory_from_database() diff --git a/awx/main/management/commands/run_callback_receiver.py b/awx/main/management/commands/run_callback_receiver.py index c0105b2587..9da8ac9bd0 100644 --- a/awx/main/management/commands/run_callback_receiver.py +++ b/awx/main/management/commands/run_callback_receiver.py @@ -3,6 +3,11 @@ # Python import logging +import signal +from uuid import UUID +from multiprocessing import Process +from multiprocessing import Queue as MPQueue +from Queue import Empty as QueueEmpty from kombu import Connection, Exchange, Queue from kombu.mixins import ConsumerMixin @@ -10,7 +15,9 @@ from kombu.mixins import ConsumerMixin # Django from django.conf import settings from django.core.management.base import NoArgsCommand +from django.db import connection as 
django_connection from django.db import DatabaseError +from django.core.cache import cache as django_cache # AWX from awx.main.models import * # noqa @@ -19,8 +26,40 @@ logger = logging.getLogger('awx.main.commands.run_callback_receiver') class CallbackBrokerWorker(ConsumerMixin): - def __init__(self, connection): + def __init__(self, connection, use_workers=True): self.connection = connection + self.worker_queues = [] + self.total_messages = 0 + self.init_workers(use_workers) + + def init_workers(self, use_workers=True): + def shutdown_handler(active_workers): + def _handler(signum, frame): + try: + for active_worker in active_workers: + active_worker.terminate() + signal.signal(signum, signal.SIG_DFL) + os.kill(os.getpid(), signum) # Rethrow signal, this time without catching it + except Exception: + # TODO: LOG + pass + return _handler + + if use_workers: + django_connection.close() + django_cache.close() + for idx in range(settings.JOB_EVENT_WORKERS): + queue_actual = MPQueue(settings.JOB_EVENT_MAX_QUEUE_SIZE) + w = Process(target=self.callback_worker, args=(queue_actual, idx,)) + w.start() + if settings.DEBUG: + logger.info('Started worker %s' % str(idx)) + self.worker_queues.append([0, queue_actual, w]) + elif settings.DEBUG: + logger.warn('Started callback receiver (no workers)') + + signal.signal(signal.SIGINT, shutdown_handler([p[2] for p in self.worker_queues])) + signal.signal(signal.SIGTERM, shutdown_handler([p[2] for p in self.worker_queues])) def get_consumers(self, Consumer, channel): return [Consumer(queues=[Queue(settings.CALLBACK_QUEUE, @@ -30,27 +69,57 @@ class CallbackBrokerWorker(ConsumerMixin): callbacks=[self.process_task])] def process_task(self, body, message): - try: - if 'event' not in body: - raise Exception('Payload does not have an event') - if 'job_id' not in body and 'ad_hoc_command_id' not in body: - raise Exception('Payload does not have a job_id or ad_hoc_command_id') - if settings.DEBUG: - logger.info('Body: {}'.format(body)) - 
logger.info('Message: {}'.format(message)) - try: - if 'job_id' in body: - JobEvent.create_from_data(**body) - elif 'ad_hoc_command_id' in body: - AdHocCommandEvent.create_from_data(**body) - except DatabaseError as e: - logger.error('Database Error Saving Job Event: {}'.format(e)) - except Exception as exc: - import traceback - traceback.print_exc() - logger.error('Callback Task Processor Raised Exception: %r', exc) + if "uuid" in body: + queue = UUID(body['uuid']).int % settings.JOB_EVENT_WORKERS + else: + queue = self.total_messages % settings.JOB_EVENT_WORKERS + self.write_queue_worker(queue, body) + self.total_messages += 1 message.ack() + def write_queue_worker(self, preferred_queue, body): + queue_order = sorted(range(settings.JOB_EVENT_WORKERS), cmp=lambda x, y: -1 if x==preferred_queue else 0) + for queue_actual in queue_order: + try: + worker_actual = self.worker_queues[queue_actual] + worker_actual[1].put(body, block=True, timeout=5) + worker_actual[0] += 1 + return queue_actual + except Exception: + import traceback + tb = traceback.format_exc() + logger.warn("Could not write to queue %s" % preferred_queue) + logger.warn("Detail: {}".format(tb)) + continue + return None + + def callback_worker(self, queue_actual, idx): + while True: + try: + body = queue_actual.get(block=True, timeout=1) + except QueueEmpty: + continue + except Exception as e: + logger.error("Exception on worker thread, restarting: " + str(e)) + continue + try: + if 'job_id' not in body and 'ad_hoc_command_id' not in body: + raise Exception('Payload does not have a job_id or ad_hoc_command_id') + if settings.DEBUG: + logger.info('Body: {}'.format(body)) + try: + if 'job_id' in body: + JobEvent.create_from_data(**body) + elif 'ad_hoc_command_id' in body: + AdHocCommandEvent.create_from_data(**body) + except DatabaseError as e: + logger.error('Database Error Saving Job Event: {}'.format(e)) + except Exception as exc: + import traceback + tb = traceback.format_exc() + 
logger.error('Callback Task Processor Raised Exception: %r', exc) + logger.error('Detail: {}'.format(tb)) + class Command(NoArgsCommand): ''' diff --git a/awx/main/management/commands/run_socketio_service.py b/awx/main/management/commands/run_socketio_service.py deleted file mode 100644 index 9b7e5a61d2..0000000000 --- a/awx/main/management/commands/run_socketio_service.py +++ /dev/null @@ -1,293 +0,0 @@ -# Copyright (c) 2015 Ansible, Inc. -# All Rights Reserved. - -# Python -import os -import logging -import urllib -import weakref -from optparse import make_option -from threading import Thread - -# Django -from django.conf import settings -from django.core.management.base import NoArgsCommand - -# AWX -import awx -from awx.main.models import * # noqa -from awx.main.socket_queue import Socket - -# socketio -from socketio import socketio_manage -from socketio.server import SocketIOServer -from socketio.namespace import BaseNamespace - -logger = logging.getLogger('awx.main.commands.run_socketio_service') - - -class SocketSession(object): - def __init__(self, session_id, token_key, socket): - self.socket = weakref.ref(socket) - self.session_id = session_id - self.token_key = token_key - self._valid = True - - def is_valid(self): - return bool(self._valid) - - def invalidate(self): - self._valid = False - - def is_db_token_valid(self): - auth_token = AuthToken.objects.filter(key=self.token_key, reason='') - if not auth_token.exists(): - return False - auth_token = auth_token[0] - return bool(not auth_token.is_expired()) - - -class SocketSessionManager(object): - def __init__(self): - self.SESSIONS_MAX = 1000 - self.socket_sessions = [] - self.socket_session_token_key_map = {} - - def _prune(self): - if len(self.socket_sessions) > self.SESSIONS_MAX: - session = self.socket_sessions[0] - entries = self.socket_session_token_key_map[session.token_key] - del entries[session.session_id] - if len(entries) == 0: - del self.socket_session_token_key_map[session.token_key] - 
self.socket_sessions.pop(0) - - ''' - Returns an dict of sessions - ''' - def lookup(self, token_key=None): - if not token_key: - raise ValueError("token_key required") - return self.socket_session_token_key_map.get(token_key, None) - - def add_session(self, session): - self.socket_sessions.append(session) - entries = self.socket_session_token_key_map.get(session.token_key, None) - if not entries: - entries = {} - self.socket_session_token_key_map[session.token_key] = entries - entries[session.session_id] = session - self._prune() - return session - - -class SocketController(object): - def __init__(self, SocketSessionManager): - self.server = None - self.SocketSessionManager = SocketSessionManager - - def add_session(self, session): - return self.SocketSessionManager.add_session(session) - - def broadcast_packet(self, packet): - # Broadcast message to everyone at endpoint - # Loop over the 'raw' list of sockets (don't trust our list) - for session_id, socket in list(self.server.sockets.iteritems()): - socket_session = socket.session.get('socket_session', None) - if socket_session and socket_session.is_valid(): - try: - socket.send_packet(packet) - except Exception as e: - logger.error("Error sending client packet to %s: %s" % (str(session_id), str(packet))) - logger.error("Error was: " + str(e)) - - def send_packet(self, packet, token_key): - if not token_key: - raise ValueError("token_key is required") - socket_sessions = self.SocketSessionManager.lookup(token_key=token_key) - # We may not find the socket_session if the user disconnected - # (it's actually more compliciated than that because of our prune logic) - if not socket_sessions: - return None - for session_id, socket_session in socket_sessions.iteritems(): - logger.warn("Maybe sending packet to %s" % session_id) - if socket_session and socket_session.is_valid(): - logger.warn("Sending packet to %s" % session_id) - socket = socket_session.socket() - if socket: - try: - socket.send_packet(packet) - except 
Exception as e: - logger.error("Error sending client packet to %s: %s" % (str(socket_session.session_id), str(packet))) - logger.error("Error was: " + str(e)) - - def set_server(self, server): - self.server = server - return server - - -socketController = SocketController(SocketSessionManager()) - - -# -# Socket session is attached to self.session['socket_session'] -# self.session and self.socket.session point to the same dict -# -class TowerBaseNamespace(BaseNamespace): - def get_allowed_methods(self): - return ['recv_disconnect'] - - def get_initial_acl(self): - request_token = self._get_request_token() - if request_token: - # (1) This is the first time the socket has been seen (first - # namespace joined). - # (2) This socket has already been seen (already joined and maybe - # left a namespace) - # - # Note: Assume that the user token is valid if the session is found - socket_session = self.session.get('socket_session', None) - if not socket_session: - socket_session = SocketSession(self.socket.sessid, request_token, self.socket) - if socket_session.is_db_token_valid(): - self.session['socket_session'] = socket_session - socketController.add_session(socket_session) - else: - socket_session.invalidate() - - return set(['recv_connect'] + self.get_allowed_methods()) - else: - logger.warn("Authentication Failure validating user") - self.emit("connect_failed", "Authentication failed") - return set(['recv_connect']) - - def _get_request_token(self): - if 'QUERY_STRING' not in self.environ: - return False - - try: - k, v = self.environ['QUERY_STRING'].split("=") - if k == "Token": - token_actual = urllib.unquote_plus(v).decode().replace("\"","") - return token_actual - except Exception as e: - logger.error("Exception validating user: " + str(e)) - return False - return False - - def recv_connect(self): - socket_session = self.session.get('socket_session', None) - if socket_session and not socket_session.is_valid(): - self.disconnect(silent=False) - - -class 
TestNamespace(TowerBaseNamespace): - def recv_connect(self): - logger.info("Received client connect for test namespace from %s" % str(self.environ['REMOTE_ADDR'])) - self.emit('test', "If you see this then you attempted to connect to the test socket endpoint") - super(TestNamespace, self).recv_connect() - - -class JobNamespace(TowerBaseNamespace): - def recv_connect(self): - logger.info("Received client connect for job namespace from %s" % str(self.environ['REMOTE_ADDR'])) - super(JobNamespace, self).recv_connect() - - -class JobEventNamespace(TowerBaseNamespace): - def recv_connect(self): - logger.info("Received client connect for job event namespace from %s" % str(self.environ['REMOTE_ADDR'])) - super(JobEventNamespace, self).recv_connect() - - -class AdHocCommandEventNamespace(TowerBaseNamespace): - def recv_connect(self): - logger.info("Received client connect for ad hoc command event namespace from %s" % str(self.environ['REMOTE_ADDR'])) - super(AdHocCommandEventNamespace, self).recv_connect() - - -class ScheduleNamespace(TowerBaseNamespace): - def get_allowed_methods(self): - parent_allowed = super(ScheduleNamespace, self).get_allowed_methods() - return parent_allowed + ["schedule_changed"] - - def recv_connect(self): - logger.info("Received client connect for schedule namespace from %s" % str(self.environ['REMOTE_ADDR'])) - super(ScheduleNamespace, self).recv_connect() - - -# Catch-all namespace. 
-# Deliver 'global' events over this namespace -class ControlNamespace(TowerBaseNamespace): - def recv_connect(self): - logger.warn("Received client connect for control namespace from %s" % str(self.environ['REMOTE_ADDR'])) - super(ControlNamespace, self).recv_connect() - - -class TowerSocket(object): - def __call__(self, environ, start_response): - path = environ['PATH_INFO'].strip('/') or 'index.html' - if path.startswith('socket.io'): - socketio_manage(environ, {'/socket.io/test': TestNamespace, - '/socket.io/jobs': JobNamespace, - '/socket.io/job_events': JobEventNamespace, - '/socket.io/ad_hoc_command_events': AdHocCommandEventNamespace, - '/socket.io/schedules': ScheduleNamespace, - '/socket.io/control': ControlNamespace}) - else: - logger.warn("Invalid connect path received: " + path) - start_response('404 Not Found', []) - return ['Tower version %s' % awx.__version__] - - -def notification_handler(server): - with Socket('websocket', 'r') as websocket: - for message in websocket.listen(): - packet = { - 'args': message, - 'endpoint': message['endpoint'], - 'name': message['event'], - 'type': 'event', - } - - if 'token_key' in message: - # Best practice not to send the token over the socket - socketController.send_packet(packet, message.pop('token_key')) - else: - socketController.broadcast_packet(packet) - - -class Command(NoArgsCommand): - ''' - SocketIO event emitter Tower service - Receives notifications from other services destined for UI notification - ''' - - help = 'Launch the SocketIO event emitter service' - - option_list = NoArgsCommand.option_list + ( - make_option('--receive_port', dest='receive_port', type='int', default=5559, - help='Port to listen for new events that will be destined for a client'), - make_option('--socketio_port', dest='socketio_port', type='int', default=8080, - help='Port to accept socketio requests from clients'),) - - def handle_noargs(self, **options): - socketio_listen_port = settings.SOCKETIO_LISTEN_PORT - - try: - if 
os.path.exists('/etc/tower/tower.cert') and os.path.exists('/etc/tower/tower.key'): - logger.info('Listening on port https://0.0.0.0:' + str(socketio_listen_port)) - server = SocketIOServer(('0.0.0.0', socketio_listen_port), TowerSocket(), resource='socket.io', - keyfile='/etc/tower/tower.key', certfile='/etc/tower/tower.cert') - else: - logger.info('Listening on port http://0.0.0.0:' + str(socketio_listen_port)) - server = SocketIOServer(('0.0.0.0', socketio_listen_port), TowerSocket(), resource='socket.io') - - socketController.set_server(server) - handler_thread = Thread(target=notification_handler, args=(server,)) - handler_thread.daemon = True - handler_thread.start() - - server.serve_forever() - except KeyboardInterrupt: - pass diff --git a/awx/main/middleware.py b/awx/main/middleware.py index 1c8f8fc4d5..0e0e4c748c 100644 --- a/awx/main/middleware.py +++ b/awx/main/middleware.py @@ -68,7 +68,6 @@ class ActivityStreamMiddleware(threading.local): if user.exists(): user = user[0] instance.actor = user - instance.save(update_fields=['actor']) else: if instance.id not in self.instance_ids: self.instance_ids.append(instance.id) diff --git a/awx/main/migrations/0002_squashed_v300_release.py b/awx/main/migrations/0002_squashed_v300_release.py new file mode 100644 index 0000000000..c398d18468 --- /dev/null +++ b/awx/main/migrations/0002_squashed_v300_release.py @@ -0,0 +1,742 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2016 Ansible, Inc. +# All Rights Reserved. + +from __future__ import unicode_literals + +import awx.main.fields + +from django.db import migrations, models +import django.db.models.deletion +from django.conf import settings +from django.utils.timezone import now + +import jsonfield.fields +import jsonbfield.fields +import taggit.managers + + +def create_system_job_templates(apps, schema_editor): + ''' + Create default system job templates if not present. Create default schedules + only if new system job templates were created (i.e. new database). 
+ ''' + + SystemJobTemplate = apps.get_model('main', 'SystemJobTemplate') + Schedule = apps.get_model('main', 'Schedule') + ContentType = apps.get_model('contenttypes', 'ContentType') + sjt_ct = ContentType.objects.get_for_model(SystemJobTemplate) + now_dt = now() + now_str = now_dt.strftime('%Y%m%dT%H%M%SZ') + + sjt, created = SystemJobTemplate.objects.get_or_create( + job_type='cleanup_jobs', + defaults=dict( + name='Cleanup Job Details', + description='Remove job history', + created=now_dt, + modified=now_dt, + polymorphic_ctype=sjt_ct, + ), + ) + if created: + sched = Schedule( + name='Cleanup Job Schedule', + rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=SU' % now_str, + description='Automatically Generated Schedule', + enabled=True, + extra_data={'days': '120'}, + created=now_dt, + modified=now_dt, + ) + sched.unified_job_template = sjt + sched.save() + + existing_cd_jobs = SystemJobTemplate.objects.filter(job_type='cleanup_deleted') + Schedule.objects.filter(unified_job_template__in=existing_cd_jobs).delete() + existing_cd_jobs.delete() + + sjt, created = SystemJobTemplate.objects.get_or_create( + job_type='cleanup_activitystream', + defaults=dict( + name='Cleanup Activity Stream', + description='Remove activity stream history', + created=now_dt, + modified=now_dt, + polymorphic_ctype=sjt_ct, + ), + ) + if created: + sched = Schedule( + name='Cleanup Activity Schedule', + rrule='DTSTART:%s RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=TU' % now_str, + description='Automatically Generated Schedule', + enabled=True, + extra_data={'days': '355'}, + created=now_dt, + modified=now_dt, + ) + sched.unified_job_template = sjt + sched.save() + + sjt, created = SystemJobTemplate.objects.get_or_create( + job_type='cleanup_facts', + defaults=dict( + name='Cleanup Fact Details', + description='Remove system tracking history', + created=now_dt, + modified=now_dt, + polymorphic_ctype=sjt_ct, + ), + ) + if created: + sched = Schedule( + name='Cleanup Fact Schedule', + 
rrule='DTSTART:%s RRULE:FREQ=MONTHLY;INTERVAL=1;BYMONTHDAY=1' % now_str, + description='Automatically Generated Schedule', + enabled=True, + extra_data={'older_than': '120d', 'granularity': '1w'}, + created=now_dt, + modified=now_dt, + ) + sched.unified_job_template = sjt + sched.save() + + +class Migration(migrations.Migration): + replaces = [(b'main', '0002_v300_tower_settings_changes'), + (b'main', '0003_v300_notification_changes'), + (b'main', '0004_v300_fact_changes'), + (b'main', '0005_v300_migrate_facts'), + (b'main', '0006_v300_active_flag_cleanup'), + (b'main', '0007_v300_active_flag_removal'), + (b'main', '0008_v300_rbac_changes'), + (b'main', '0009_v300_rbac_migrations'), + (b'main', '0010_v300_create_system_job_templates'), + (b'main', '0011_v300_credential_domain_field'), + (b'main', '0012_v300_create_labels'), + (b'main', '0013_v300_label_changes'), + (b'main', '0014_v300_invsource_cred'), + (b'main', '0015_v300_label_changes'), + (b'main', '0016_v300_prompting_changes'), + (b'main', '0017_v300_prompting_migrations'), + (b'main', '0018_v300_host_ordering'), + (b'main', '0019_v300_new_azure_credential'),] + + dependencies = [ + ('taggit', '0002_auto_20150616_2121'), + ('contenttypes', '0002_remove_content_type_name'), + migrations.swappable_dependency(settings.AUTH_USER_MODEL), + ('main', '0001_initial'), + ] + + operations = [ + # Tower settings changes + migrations.CreateModel( + name='TowerSettings', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('created', models.DateTimeField(default=None, editable=False)), + ('modified', models.DateTimeField(default=None, editable=False)), + ('key', models.CharField(unique=True, max_length=255)), + ('description', models.TextField()), + ('category', models.CharField(max_length=128)), + ('value', models.TextField(blank=True)), + ('value_type', models.CharField(max_length=12, choices=[(b'string', 'String'), (b'int', 'Integer'), (b'float', 
'Decimal'), (b'json', 'JSON'), (b'bool', 'Boolean'), (b'password', 'Password'), (b'list', 'List')])), + ('user', models.ForeignKey(related_name='settings', default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)), + ], + ), + # Notification changes + migrations.CreateModel( + name='Notification', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('created', models.DateTimeField(default=None, editable=False)), + ('modified', models.DateTimeField(default=None, editable=False)), + ('status', models.CharField(default=b'pending', max_length=20, editable=False, choices=[(b'pending', 'Pending'), (b'successful', 'Successful'), (b'failed', 'Failed')])), + ('error', models.TextField(default=b'', editable=False, blank=True)), + ('notifications_sent', models.IntegerField(default=0, editable=False)), + ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 'Webhook'), (b'irc', 'IRC')])), + ('recipients', models.TextField(default=b'', editable=False, blank=True)), + ('subject', models.TextField(default=b'', editable=False, blank=True)), + ('body', jsonfield.fields.JSONField(default=dict, blank=True)), + ], + options={ + 'ordering': ('pk',), + }, + ), + migrations.CreateModel( + name='NotificationTemplate', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('created', models.DateTimeField(default=None, editable=False)), + ('modified', models.DateTimeField(default=None, editable=False)), + ('description', models.TextField(default=b'', blank=True)), + ('name', models.CharField(unique=True, max_length=512)), + ('notification_type', models.CharField(max_length=32, choices=[(b'email', 'Email'), (b'slack', 'Slack'), (b'twilio', 'Twilio'), (b'pagerduty', 'Pagerduty'), (b'hipchat', 'HipChat'), (b'webhook', 
'Webhook'), (b'irc', 'IRC')])), + ('notification_configuration', jsonfield.fields.JSONField(default=dict)), + ('created_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)), + ('modified_by', models.ForeignKey(related_name="{u'class': 'notificationtemplate', u'app_label': 'main'}(class)s_modified+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)), + ('organization', models.ForeignKey(related_name='notification_templates', on_delete=django.db.models.deletion.SET_NULL, to='main.Organization', null=True)), + ('tags', taggit.managers.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags')), + ], + ), + migrations.AddField( + model_name='notification', + name='notification_template', + field=models.ForeignKey(related_name='notifications', editable=False, to='main.NotificationTemplate'), + ), + migrations.AddField( + model_name='activitystream', + name='notification', + field=models.ManyToManyField(to='main.Notification', blank=True), + ), + migrations.AddField( + model_name='activitystream', + name='notification_template', + field=models.ManyToManyField(to='main.NotificationTemplate', blank=True), + ), + migrations.AddField( + model_name='organization', + name='notification_templates_any', + field=models.ManyToManyField(related_name='organization_notification_templates_for_any', to='main.NotificationTemplate', blank=True), + ), + migrations.AddField( + model_name='organization', + name='notification_templates_error', + field=models.ManyToManyField(related_name='organization_notification_templates_for_errors', to='main.NotificationTemplate', blank=True), + ), + migrations.AddField( + model_name='organization', + 
name='notification_templates_success', + field=models.ManyToManyField(related_name='organization_notification_templates_for_success', to='main.NotificationTemplate', blank=True), + ), + migrations.AddField( + model_name='unifiedjob', + name='notifications', + field=models.ManyToManyField(related_name='unifiedjob_notifications', editable=False, to='main.Notification'), + ), + migrations.AddField( + model_name='unifiedjobtemplate', + name='notification_templates_any', + field=models.ManyToManyField(related_name='unifiedjobtemplate_notification_templates_for_any', to='main.NotificationTemplate', blank=True), + ), + migrations.AddField( + model_name='unifiedjobtemplate', + name='notification_templates_error', + field=models.ManyToManyField(related_name='unifiedjobtemplate_notification_templates_for_errors', to='main.NotificationTemplate', blank=True), + ), + migrations.AddField( + model_name='unifiedjobtemplate', + name='notification_templates_success', + field=models.ManyToManyField(related_name='unifiedjobtemplate_notification_templates_for_success', to='main.NotificationTemplate', blank=True), + ), + # Fact changes + migrations.CreateModel( + name='Fact', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('timestamp', models.DateTimeField(default=None, help_text='Date and time of the corresponding fact scan gathering time.', editable=False)), + ('module', models.CharField(max_length=128)), + ('facts', jsonbfield.fields.JSONField(default={}, help_text='Arbitrary JSON structure of module facts captured at timestamp for a single host.', blank=True)), + ('host', models.ForeignKey(related_name='facts', to='main.Host', help_text='Host for the facts that the fact scan captured.')), + ], + ), + migrations.AlterIndexTogether( + name='fact', + index_together=set([('timestamp', 'module', 'host')]), + ), + # Active flag removal + migrations.RemoveField( + model_name='credential', + name='active', + ), + 
migrations.RemoveField( + model_name='custominventoryscript', + name='active', + ), + migrations.RemoveField( + model_name='group', + name='active', + ), + migrations.RemoveField( + model_name='host', + name='active', + ), + migrations.RemoveField( + model_name='inventory', + name='active', + ), + migrations.RemoveField( + model_name='organization', + name='active', + ), + migrations.RemoveField( + model_name='permission', + name='active', + ), + migrations.RemoveField( + model_name='schedule', + name='active', + ), + migrations.RemoveField( + model_name='team', + name='active', + ), + migrations.RemoveField( + model_name='unifiedjob', + name='active', + ), + migrations.RemoveField( + model_name='unifiedjobtemplate', + name='active', + ), + + # RBAC Changes + # ############ + migrations.RenameField( + 'Organization', + 'admins', + 'deprecated_admins', + ), + migrations.RenameField( + 'Organization', + 'users', + 'deprecated_users', + ), + migrations.RenameField( + 'Team', + 'users', + 'deprecated_users', + ), + migrations.RenameField( + 'Team', + 'projects', + 'deprecated_projects', + ), + migrations.AddField( + model_name='project', + name='organization', + field=models.ForeignKey(related_name='projects', to='main.Organization', blank=True, null=True), + ), + migrations.AlterField( + model_name='team', + name='deprecated_projects', + field=models.ManyToManyField(related_name='deprecated_teams', to='main.Project', blank=True), + ), + migrations.RenameField( + model_name='organization', + old_name='projects', + new_name='deprecated_projects', + ), + migrations.AlterField( + model_name='organization', + name='deprecated_projects', + field=models.ManyToManyField(related_name='deprecated_organizations', to='main.Project', blank=True), + ), + migrations.RenameField( + 'Credential', + 'team', + 'deprecated_team', + ), + migrations.RenameField( + 'Credential', + 'user', + 'deprecated_user', + ), + migrations.AlterField( + model_name='organization', + 
name='deprecated_admins', + field=models.ManyToManyField(related_name='deprecated_admin_of_organizations', to=settings.AUTH_USER_MODEL, blank=True), + ), + migrations.AlterField( + model_name='organization', + name='deprecated_users', + field=models.ManyToManyField(related_name='deprecated_organizations', to=settings.AUTH_USER_MODEL, blank=True), + ), + migrations.AlterField( + model_name='team', + name='deprecated_users', + field=models.ManyToManyField(related_name='deprecated_teams', to=settings.AUTH_USER_MODEL, blank=True), + ), + migrations.AlterUniqueTogether( + name='credential', + unique_together=set([]), + ), + migrations.AddField( + model_name='credential', + name='organization', + field=models.ForeignKey(related_name='credentials', default=None, blank=True, to='main.Organization', null=True), + ), + + # + # New RBAC models and fields + # + migrations.CreateModel( + name='Role', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('role_field', models.TextField()), + ('singleton_name', models.TextField(default=None, unique=True, null=True, db_index=True)), + ('members', models.ManyToManyField(related_name='roles', to=settings.AUTH_USER_MODEL)), + ('parents', models.ManyToManyField(related_name='children', to='main.Role')), + ('implicit_parents', models.TextField(default=b'[]')), + ('content_type', models.ForeignKey(default=None, to='contenttypes.ContentType', null=True)), + ('object_id', models.PositiveIntegerField(default=None, null=True)), + + ], + options={ + 'db_table': 'main_rbac_roles', + 'verbose_name_plural': 'roles', + }, + ), + migrations.CreateModel( + name='RoleAncestorEntry', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('role_field', models.TextField()), + ('content_type_id', models.PositiveIntegerField()), + ('object_id', models.PositiveIntegerField()), + ('ancestor', models.ForeignKey(related_name='+', 
to='main.Role')), + ('descendent', models.ForeignKey(related_name='+', to='main.Role')), + ], + options={ + 'db_table': 'main_rbac_role_ancestors', + 'verbose_name_plural': 'role_ancestors', + }, + ), + migrations.AddField( + model_name='role', + name='ancestors', + field=models.ManyToManyField(related_name='descendents', through='main.RoleAncestorEntry', to='main.Role'), + ), + migrations.AlterIndexTogether( + name='role', + index_together=set([('content_type', 'object_id')]), + ), + migrations.AlterIndexTogether( + name='roleancestorentry', + index_together=set([('ancestor', 'content_type_id', 'object_id'), ('ancestor', 'content_type_id', 'role_field'), ('ancestor', 'descendent')]), + ), + migrations.AddField( + model_name='credential', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='credential', + name='use_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='credential', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='custominventoryscript', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='custominventoryscript', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'organization.member_role', b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='inventory', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', 
parent_role=b'organization.admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='inventory', + name='adhoc_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='inventory', + name='update_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='inventory', + name='use_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'adhoc_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='inventory', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'update_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='jobtemplate', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'project.organization.admin_role', b'inventory.organization.admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='jobtemplate', + name='execute_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='jobtemplate', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'project.organization.auditor_role', b'inventory.organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='organization', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='organization', + name='auditor_role', + 
field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_auditor', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='organization', + name='member_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='organization', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'member_role', b'auditor_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='project', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.admin_role', b'singleton:system_administrator'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='project', + name='use_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='project', + name='update_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='project', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'singleton:system_auditor', b'use_role', b'update_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='team', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'organization.admin_role', to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='team', + name='member_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=None, to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='team', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role', b'organization.auditor_role', 
b'member_role'], to='main.Role', null=b'True'), + ), + + # System Job Templates + migrations.RunPython(create_system_job_templates, migrations.RunPython.noop), + migrations.AlterField( + model_name='systemjob', + name='job_type', + field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]), + ), + migrations.AlterField( + model_name='systemjobtemplate', + name='job_type', + field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'cleanup_jobs', 'Remove jobs older than a certain number of days'), (b'cleanup_activitystream', 'Remove activity stream entries older than a certain number of days'), (b'cleanup_facts', 'Purge and/or reduce the granularity of system tracking data')]), + ), + # Credential domain field + migrations.AddField( + model_name='credential', + name='domain', + field=models.CharField(default=b'', help_text='The identifier for the domain.', max_length=100, verbose_name='Domain', blank=True), + ), + # Create Labels + migrations.CreateModel( + name='Label', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('created', models.DateTimeField(default=None, editable=False)), + ('modified', models.DateTimeField(default=None, editable=False)), + ('description', models.TextField(default=b'', blank=True)), + ('name', models.CharField(max_length=512)), + ('created_by', models.ForeignKey(related_name="{u'class': 'label', u'app_label': 'main'}(class)s_created+", on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)), + ('modified_by', models.ForeignKey(related_name="{u'class': 'label', u'app_label': 'main'}(class)s_modified+", 
on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to=settings.AUTH_USER_MODEL, null=True)), + ('organization', models.ForeignKey(related_name='labels', to='main.Organization', help_text='Organization this label belongs to.')), + ('tags', taggit.managers.TaggableManager(to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags')), + ], + options={ + 'ordering': ('organization', 'name'), + }, + ), + migrations.AddField( + model_name='activitystream', + name='label', + field=models.ManyToManyField(to='main.Label', blank=True), + ), + migrations.AddField( + model_name='job', + name='labels', + field=models.ManyToManyField(related_name='job_labels', to='main.Label', blank=True), + ), + migrations.AddField( + model_name='jobtemplate', + name='labels', + field=models.ManyToManyField(related_name='jobtemplate_labels', to='main.Label', blank=True), + ), + migrations.AlterUniqueTogether( + name='label', + unique_together=set([('name', 'organization')]), + ), + # Label changes + migrations.AlterField( + model_name='label', + name='organization', + field=models.ForeignKey(related_name='labels', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Organization', help_text='Organization this label belongs to.', null=True), + ), + migrations.AlterField( + model_name='label', + name='organization', + field=models.ForeignKey(related_name='labels', to='main.Organization', help_text='Organization this label belongs to.'), + ), + # InventorySource Credential + migrations.AddField( + model_name='job', + name='network_credential', + field=models.ForeignKey(related_name='jobs_as_network_credential+', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True), + ), + migrations.AddField( + model_name='jobtemplate', + name='network_credential', + field=models.ForeignKey(related_name='jobtemplates_as_network_credential+', 
on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True), + ), + migrations.AddField( + model_name='credential', + name='authorize', + field=models.BooleanField(default=False, help_text='Whether to use the authorize mechanism.'), + ), + migrations.AddField( + model_name='credential', + name='authorize_password', + field=models.CharField(default=b'', help_text='Password used by the authorize mechanism.', max_length=1024, blank=True), + ), + migrations.AlterField( + model_name='credential', + name='deprecated_team', + field=models.ForeignKey(related_name='deprecated_credentials', default=None, blank=True, to='main.Team', null=True), + ), + migrations.AlterField( + model_name='credential', + name='deprecated_user', + field=models.ForeignKey(related_name='deprecated_credentials', default=None, blank=True, to=settings.AUTH_USER_MODEL, null=True), + ), + migrations.AlterField( + model_name='credential', + name='kind', + field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'openstack', 'OpenStack')]), + ), + migrations.AlterField( + model_name='inventorysource', + name='source', + field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]), + ), + migrations.AlterField( + model_name='inventoryupdate', + name='source', + 
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]), + ), + migrations.AlterField( + model_name='team', + name='deprecated_projects', + field=models.ManyToManyField(related_name='deprecated_teams', to='main.Project', blank=True), + ), + # Prompting changes + migrations.AddField( + model_name='jobtemplate', + name='ask_limit_on_launch', + field=models.BooleanField(default=False), + ), + migrations.AddField( + model_name='jobtemplate', + name='ask_inventory_on_launch', + field=models.BooleanField(default=False), + ), + migrations.AddField( + model_name='jobtemplate', + name='ask_credential_on_launch', + field=models.BooleanField(default=False), + ), + migrations.AddField( + model_name='jobtemplate', + name='ask_job_type_on_launch', + field=models.BooleanField(default=False), + ), + migrations.AddField( + model_name='jobtemplate', + name='ask_tags_on_launch', + field=models.BooleanField(default=False), + ), + migrations.AlterField( + model_name='job', + name='inventory', + field=models.ForeignKey(related_name='jobs', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True), + ), + migrations.AlterField( + model_name='jobtemplate', + name='inventory', + field=models.ForeignKey(related_name='jobtemplates', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True), + ), + # Host ordering + migrations.AlterModelOptions( + name='host', + options={'ordering': ('name',)}, + ), + # New Azure credential + migrations.AddField( + model_name='credential', + name='client', + field=models.CharField(default=b'', 
help_text='Client Id or Application Id for the credential', max_length=128, blank=True), + ), + migrations.AddField( + model_name='credential', + name='secret', + field=models.CharField(default=b'', help_text='Secret Token for this credential', max_length=1024, blank=True), + ), + migrations.AddField( + model_name='credential', + name='subscription', + field=models.CharField(default=b'', help_text='Subscription identifier for this credential', max_length=1024, blank=True), + ), + migrations.AddField( + model_name='credential', + name='tenant', + field=models.CharField(default=b'', help_text='Tenant identifier for this credential', max_length=1024, blank=True), + ), + migrations.AlterField( + model_name='credential', + name='kind', + field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'openstack', 'OpenStack')]), + ), + migrations.AlterField( + model_name='host', + name='instance_id', + field=models.CharField(default=b'', max_length=1024, blank=True), + ), + migrations.AlterField( + model_name='inventorysource', + name='source', + field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]), + ), + migrations.AlterField( + model_name='inventoryupdate', + name='source', + 
field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Satellite 6'), (b'cloudforms', 'CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]), + ), + ] diff --git a/awx/main/migrations/0003_squashed_v300_v303_updates.py b/awx/main/migrations/0003_squashed_v300_v303_updates.py new file mode 100644 index 0000000000..82d781ec85 --- /dev/null +++ b/awx/main/migrations/0003_squashed_v300_v303_updates.py @@ -0,0 +1,156 @@ +# -*- coding: utf-8 -*- + +# Copyright (c) 2016 Ansible, Inc. +# All Rights Reserved. + +from __future__ import unicode_literals + +from django.db import migrations, models +from django.conf import settings +import awx.main.fields +import jsonfield.fields + + +def update_dashed_host_variables(apps, schema_editor): + Host = apps.get_model('main', 'Host') + for host in Host.objects.filter(variables='---'): + host.variables = '' + host.save() + + +class Migration(migrations.Migration): + replaces = [(b'main', '0020_v300_labels_changes'), + (b'main', '0021_v300_activity_stream'), + (b'main', '0022_v300_adhoc_extravars'), + (b'main', '0023_v300_activity_stream_ordering'), + (b'main', '0024_v300_jobtemplate_allow_simul'), + (b'main', '0025_v300_update_rbac_parents'), + (b'main', '0026_v300_credential_unique'), + (b'main', '0027_v300_team_migrations'), + (b'main', '0028_v300_org_team_cascade'), + (b'main', '0029_v302_add_ask_skip_tags'), + (b'main', '0030_v302_job_survey_passwords'), + (b'main', '0031_v302_migrate_survey_passwords'), + (b'main', '0032_v302_credential_permissions_update'), + (b'main', '0033_v303_v245_host_variable_fix'),] + + + dependencies = [ + migrations.swappable_dependency(settings.AUTH_USER_MODEL), 
+ ('main', '0002_squashed_v300_release'), + ] + + operations = [ + # Labels Changes + migrations.RemoveField( + model_name='job', + name='labels', + ), + migrations.RemoveField( + model_name='jobtemplate', + name='labels', + ), + migrations.AddField( + model_name='unifiedjob', + name='labels', + field=models.ManyToManyField(related_name='unifiedjob_labels', to='main.Label', blank=True), + ), + migrations.AddField( + model_name='unifiedjobtemplate', + name='labels', + field=models.ManyToManyField(related_name='unifiedjobtemplate_labels', to='main.Label', blank=True), + ), + # Activity Stream + migrations.AddField( + model_name='activitystream', + name='role', + field=models.ManyToManyField(to='main.Role', blank=True), + ), + migrations.AlterModelOptions( + name='activitystream', + options={'ordering': ('pk',)}, + ), + # Adhoc extra vars + migrations.AddField( + model_name='adhoccommand', + name='extra_vars', + field=models.TextField(default=b'', blank=True), + ), + migrations.AlterField( + model_name='credential', + name='kind', + field=models.CharField(default=b'ssh', max_length=32, choices=[(b'ssh', 'Machine'), (b'net', 'Network'), (b'scm', 'Source Control'), (b'aws', 'Amazon Web Services'), (b'rax', 'Rackspace'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'openstack', 'OpenStack')]), + ), + migrations.AlterField( + model_name='inventorysource', + name='source', + field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), 
(b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]), + ), + migrations.AlterField( + model_name='inventoryupdate', + name='source', + field=models.CharField(default=b'', max_length=32, blank=True, choices=[(b'', 'Manual'), (b'file', 'Local File, Directory or Script'), (b'rax', 'Rackspace Cloud Servers'), (b'ec2', 'Amazon EC2'), (b'gce', 'Google Compute Engine'), (b'azure', 'Microsoft Azure Classic (deprecated)'), (b'azure_rm', 'Microsoft Azure Resource Manager'), (b'vmware', 'VMware vCenter'), (b'satellite6', 'Red Hat Satellite 6'), (b'cloudforms', 'Red Hat CloudForms'), (b'openstack', 'OpenStack'), (b'custom', 'Custom Script')]), + ), + # jobtemplate allow simul + migrations.AddField( + model_name='jobtemplate', + name='allow_simultaneous', + field=models.BooleanField(default=False), + ), + # RBAC update parents + migrations.AlterField( + model_name='credential', + name='use_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.admin_role', b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AlterField( + model_name='team', + name='member_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'admin_role', to='main.Role', null=b'True'), + ), + migrations.AlterField( + model_name='team', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'organization.auditor_role', b'member_role'], to='main.Role', null=b'True'), + ), + # Unique credential + migrations.AlterUniqueTogether( + name='credential', + unique_together=set([('organization', 'name', 'kind')]), + ), + migrations.AlterField( + model_name='credential', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'use_role', b'admin_role'], to='main.Role', null=b'True'), + ), + # Team cascade + migrations.AlterField( + model_name='team', + 
name='organization', + field=models.ForeignKey(related_name='teams', to='main.Organization'), + preserve_default=False, + ), + # add ask skip tags + migrations.AddField( + model_name='jobtemplate', + name='ask_skip_tags_on_launch', + field=models.BooleanField(default=False), + ), + # job survey passwords + migrations.AddField( + model_name='job', + name='survey_passwords', + field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), + ), + # RBAC credential permission updates + migrations.AlterField( + model_name='credential', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'), + ), + migrations.AlterField( + model_name='credential', + name='use_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'), + ), + ] diff --git a/awx/main/migrations/0034_v310_add_workflows.py b/awx/main/migrations/0034_v310_add_workflows.py deleted file mode 100644 index 4dfb84177a..0000000000 --- a/awx/main/migrations/0034_v310_add_workflows.py +++ /dev/null @@ -1,109 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models -import awx.main.models.notifications -import django.db.models.deletion -import awx.main.models.workflow -import awx.main.fields - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0033_v303_v245_host_variable_fix'), - ] - - operations = [ - migrations.AlterField( - model_name='unifiedjob', - name='launch_type', - field=models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency'), (b'workflow', 'Workflow')]), - ), - migrations.CreateModel( - name='WorkflowJob', - fields=[ - ('unifiedjob_ptr', 
models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')), - ('extra_vars', models.TextField(default=b'', blank=True)), - ], - options={ - 'ordering': ('id',), - }, - bases=('main.unifiedjob', models.Model, awx.main.models.notifications.JobNotificationMixin, awx.main.models.workflow.WorkflowJobInheritNodesMixin), - ), - migrations.CreateModel( - name='WorkflowJobNode', - fields=[ - ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), - ('created', models.DateTimeField(default=None, editable=False)), - ('modified', models.DateTimeField(default=None, editable=False)), - ('always_nodes', models.ManyToManyField(related_name='workflowjobnodes_always', to='main.WorkflowJobNode', blank=True)), - ('failure_nodes', models.ManyToManyField(related_name='workflowjobnodes_failure', to='main.WorkflowJobNode', blank=True)), - ('job', models.OneToOneField(related_name='unified_job_node', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJob', null=True)), - ('success_nodes', models.ManyToManyField(related_name='workflowjobnodes_success', to='main.WorkflowJobNode', blank=True)), - ], - options={ - 'abstract': False, - }, - ), - migrations.CreateModel( - name='WorkflowJobTemplate', - fields=[ - ('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')), - ('extra_vars', models.TextField(default=b'', blank=True)), - ('admin_role', awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True')), - ], - bases=('main.unifiedjobtemplate', models.Model), - ), - migrations.CreateModel( - name='WorkflowJobTemplateNode', - fields=[ - ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), - ('created', models.DateTimeField(default=None, editable=False)), 
- ('modified', models.DateTimeField(default=None, editable=False)), - ('always_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_always', to='main.WorkflowJobTemplateNode', blank=True)), - ('failure_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_failure', to='main.WorkflowJobTemplateNode', blank=True)), - ('success_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_success', to='main.WorkflowJobTemplateNode', blank=True)), - ('unified_job_template', models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True)), - ('workflow_job_template', models.ForeignKey(related_name='workflow_job_template_nodes', default=None, blank=True, to='main.WorkflowJobTemplate', null=True)), - ], - options={ - 'abstract': False, - }, - ), - migrations.AddField( - model_name='workflowjobnode', - name='unified_job_template', - field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True), - ), - migrations.AddField( - model_name='workflowjobnode', - name='workflow_job', - field=models.ForeignKey(related_name='workflow_job_nodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJob', null=True), - ), - migrations.AddField( - model_name='workflowjob', - name='workflow_job_template', - field=models.ForeignKey(related_name='workflow_jobs', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJobTemplate', null=True), - ), - migrations.AddField( - model_name='activitystream', - name='workflow_job', - field=models.ManyToManyField(to='main.WorkflowJob', blank=True), - ), - migrations.AddField( - model_name='activitystream', - name='workflow_job_node', - field=models.ManyToManyField(to='main.WorkflowJobNode', blank=True), - ), - 
migrations.AddField( - model_name='activitystream', - name='workflow_job_template', - field=models.ManyToManyField(to='main.WorkflowJobTemplate', blank=True), - ), - migrations.AddField( - model_name='activitystream', - name='workflow_job_template_node', - field=models.ManyToManyField(to='main.WorkflowJobTemplateNode', blank=True), - ), - ] diff --git a/awx/main/migrations/0034_v310_release.py b/awx/main/migrations/0034_v310_release.py new file mode 100644 index 0000000000..d23843a5fe --- /dev/null +++ b/awx/main/migrations/0034_v310_release.py @@ -0,0 +1,614 @@ +# -*- coding: utf-8 -*- +from __future__ import unicode_literals + +from django.db import migrations, models +import awx.main.models.notifications +import jsonfield.fields +import django.db.models.deletion +import awx.main.models.workflow +import awx.main.fields + + +class Migration(migrations.Migration): + + dependencies = [ + ('main', '0033_v303_v245_host_variable_fix'), + ] + + operations = [ + # Create ChannelGroup table + migrations.CreateModel( + name='ChannelGroup', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('group', models.CharField(unique=True, max_length=200)), + ('channels', models.TextField()), + ], + ), + # Allow simultaneous Job + migrations.AddField( + model_name='job', + name='allow_simultaneous', + field=models.BooleanField(default=False), + ), + # Job Event UUID + migrations.AddField( + model_name='jobevent', + name='uuid', + field=models.CharField(default=b'', max_length=1024, editable=False), + ), + # Job Parent Event UUID + migrations.AddField( + model_name='jobevent', + name='parent_uuid', + field=models.CharField(default=b'', max_length=1024, editable=False), + ), + # Modify the HA Instance + migrations.RemoveField( + model_name='instance', + name='primary', + ), + migrations.AlterField( + model_name='instance', + name='uuid', + field=models.CharField(max_length=40), + ), + migrations.AlterField( + 
model_name='credential', + name='become_method', + field=models.CharField(default=b'', help_text='Privilege escalation method.', max_length=32, blank=True, choices=[(b'', 'None'), (b'sudo', 'Sudo'), (b'su', 'Su'), (b'pbrun', 'Pbrun'), (b'pfexec', 'Pfexec'), (b'dzdo', 'DZDO'), (b'pmrun', 'Pmrun')]), + ), + # Add Workflows + migrations.AlterField( + model_name='unifiedjob', + name='launch_type', + field=models.CharField(default=b'manual', max_length=20, editable=False, choices=[(b'manual', 'Manual'), (b'relaunch', 'Relaunch'), (b'callback', 'Callback'), (b'scheduled', 'Scheduled'), (b'dependency', 'Dependency'), (b'workflow', 'Workflow'), (b'sync', 'Sync')]), + ), + migrations.CreateModel( + name='WorkflowJob', + fields=[ + ('unifiedjob_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJob')), + ('extra_vars', models.TextField(default=b'', blank=True)), + ], + options={ + 'ordering': ('id',), + }, + bases=('main.unifiedjob', models.Model, awx.main.models.notifications.JobNotificationMixin), + ), + migrations.CreateModel( + name='WorkflowJobNode', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('created', models.DateTimeField(default=None, editable=False)), + ('modified', models.DateTimeField(default=None, editable=False)), + ('always_nodes', models.ManyToManyField(related_name='workflowjobnodes_always', to='main.WorkflowJobNode', blank=True)), + ('failure_nodes', models.ManyToManyField(related_name='workflowjobnodes_failure', to='main.WorkflowJobNode', blank=True)), + ('job', models.OneToOneField(related_name='unified_job_node', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJob', null=True)), + ('success_nodes', models.ManyToManyField(related_name='workflowjobnodes_success', to='main.WorkflowJobNode', blank=True)), + ], + options={ + 'abstract': False, + }, + ), + migrations.CreateModel( + 
name='WorkflowJobTemplate', + fields=[ + ('unifiedjobtemplate_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='main.UnifiedJobTemplate')), + ('extra_vars', models.TextField(default=b'', blank=True)), + ('admin_role', awx.main.fields.ImplicitRoleField(related_name='+', parent_role=b'singleton:system_administrator', to='main.Role', null=b'True')), + ], + bases=('main.unifiedjobtemplate', models.Model), + ), + migrations.CreateModel( + name='WorkflowJobTemplateNode', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('created', models.DateTimeField(default=None, editable=False)), + ('modified', models.DateTimeField(default=None, editable=False)), + ('always_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_always', to='main.WorkflowJobTemplateNode', blank=True)), + ('failure_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_failure', to='main.WorkflowJobTemplateNode', blank=True)), + ('success_nodes', models.ManyToManyField(related_name='workflowjobtemplatenodes_success', to='main.WorkflowJobTemplateNode', blank=True)), + ('unified_job_template', models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True)), + ('workflow_job_template', models.ForeignKey(related_name='workflow_job_template_nodes', default=None, blank=True, to='main.WorkflowJobTemplate', null=True)), + ], + options={ + 'abstract': False, + }, + ), + migrations.AddField( + model_name='workflowjobnode', + name='unified_job_template', + field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.UnifiedJobTemplate', null=True), + ), + migrations.AddField( + model_name='workflowjobnode', + name='workflow_job', + 
field=models.ForeignKey(related_name='workflow_job_nodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJob', null=True), + ), + migrations.AddField( + model_name='workflowjob', + name='workflow_job_template', + field=models.ForeignKey(related_name='workflow_jobs', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.WorkflowJobTemplate', null=True), + ), + migrations.AddField( + model_name='activitystream', + name='workflow_job', + field=models.ManyToManyField(to='main.WorkflowJob', blank=True), + ), + migrations.AddField( + model_name='activitystream', + name='workflow_job_node', + field=models.ManyToManyField(to='main.WorkflowJobNode', blank=True), + ), + migrations.AddField( + model_name='activitystream', + name='workflow_job_template', + field=models.ManyToManyField(to='main.WorkflowJobTemplate', blank=True), + ), + migrations.AddField( + model_name='activitystream', + name='workflow_job_template_node', + field=models.ManyToManyField(to='main.WorkflowJobTemplateNode', blank=True), + ), + # Workflow RBAC prompts + migrations.AddField( + model_name='workflowjobnode', + name='char_prompts', + field=jsonfield.fields.JSONField(default={}, blank=True), + ), + migrations.AddField( + model_name='workflowjobnode', + name='credential', + field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True), + ), + migrations.AddField( + model_name='workflowjobnode', + name='inventory', + field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True), + ), + migrations.AddField( + model_name='workflowjobtemplate', + name='execute_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + 
model_name='workflowjobtemplate', + name='organization', + field=models.ForeignKey(related_name='workflows', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='main.Organization', null=True), + ), + migrations.AddField( + model_name='workflowjobtemplate', + name='read_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'), + ), + migrations.AddField( + model_name='workflowjobtemplatenode', + name='char_prompts', + field=jsonfield.fields.JSONField(default={}, blank=True), + ), + migrations.AddField( + model_name='workflowjobtemplatenode', + name='credential', + field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True), + ), + migrations.AddField( + model_name='workflowjobtemplatenode', + name='inventory', + field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True), + ), + migrations.AlterField( + model_name='workflowjobnode', + name='unified_job_template', + field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True), + ), + migrations.AlterField( + model_name='workflowjobnode', + name='workflow_job', + field=models.ForeignKey(related_name='workflow_job_nodes', default=None, blank=True, to='main.WorkflowJob', null=True), + ), + migrations.AlterField( + model_name='workflowjobtemplate', + name='admin_role', + field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'), + ), + migrations.AlterField( + model_name='workflowjobtemplatenode', + name='unified_job_template', + 
field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True), + ), + # Job artifacts + migrations.AddField( + model_name='job', + name='artifacts', + field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AddField( + model_name='workflowjobnode', + name='ancestor_artifacts', + field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), + ), + # Job timeout settings + migrations.AddField( + model_name='inventorysource', + name='timeout', + field=models.IntegerField(default=0, blank=True), + ), + migrations.AddField( + model_name='inventoryupdate', + name='timeout', + field=models.IntegerField(default=0, blank=True), + ), + migrations.AddField( + model_name='job', + name='timeout', + field=models.IntegerField(default=0, blank=True), + ), + migrations.AddField( + model_name='jobtemplate', + name='timeout', + field=models.IntegerField(default=0, blank=True), + ), + migrations.AddField( + model_name='project', + name='timeout', + field=models.IntegerField(default=0, blank=True), + ), + migrations.AddField( + model_name='projectupdate', + name='timeout', + field=models.IntegerField(default=0, blank=True), + ), + # Execution Node + migrations.AddField( + model_name='unifiedjob', + name='execution_node', + field=models.TextField(default=b'', editable=False, blank=True), + ), + # SCM Revision + migrations.AddField( + model_name='project', + name='scm_revision', + field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The last revision fetched by a project update', verbose_name='SCM Revision'), + ), + migrations.AddField( + model_name='projectupdate', + name='job_type', + field=models.CharField(default=b'check', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check')]), + ), + migrations.AddField( + model_name='job', + name='scm_revision', + field=models.CharField(default=b'', 
editable=False, max_length=1024, blank=True, help_text='The SCM Revision from the Project used for this job, if available', verbose_name='SCM Revision'), + ), + # Project Playbook Files + migrations.AddField( + model_name='project', + name='playbook_files', + field=jsonfield.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True), + ), + # Job events to stdout + migrations.AddField( + model_name='adhoccommandevent', + name='end_line', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AddField( + model_name='adhoccommandevent', + name='start_line', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AddField( + model_name='adhoccommandevent', + name='stdout', + field=models.TextField(default=b'', editable=False), + ), + migrations.AddField( + model_name='adhoccommandevent', + name='uuid', + field=models.CharField(default=b'', max_length=1024, editable=False), + ), + migrations.AddField( + model_name='adhoccommandevent', + name='verbosity', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AddField( + model_name='jobevent', + name='end_line', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AddField( + model_name='jobevent', + name='playbook', + field=models.CharField(default=b'', max_length=1024, editable=False), + ), + migrations.AddField( + model_name='jobevent', + name='start_line', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AddField( + model_name='jobevent', + name='stdout', + field=models.TextField(default=b'', editable=False), + ), + migrations.AddField( + model_name='jobevent', + name='verbosity', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AlterField( + model_name='adhoccommandevent', + name='counter', + field=models.PositiveIntegerField(default=0, editable=False), + ), + 
migrations.AlterField( + model_name='adhoccommandevent', + name='event', + field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_skipped', 'Host Skipped'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]), + ), + migrations.AlterField( + model_name='jobevent', + name='counter', + field=models.PositiveIntegerField(default=0, editable=False), + ), + migrations.AlterField( + model_name='jobevent', + name='event', + field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_item_on_ok', 'Item OK'), (b'runner_item_on_failed', 'Item Failed'), (b'runner_item_on_skipped', 'Item Skipped'), (b'runner_retry', 'Host Retry'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_include', 'Including File'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), 
(b'system_warning', 'System Warning'), (b'error', 'Error')]), + ), + migrations.AlterUniqueTogether( + name='adhoccommandevent', + unique_together=set([]), + ), + migrations.AlterIndexTogether( + name='adhoccommandevent', + index_together=set([('ad_hoc_command', 'event'), ('ad_hoc_command', 'uuid'), ('ad_hoc_command', 'end_line'), ('ad_hoc_command', 'start_line')]), + ), + migrations.AlterIndexTogether( + name='jobevent', + index_together=set([('job', 'event'), ('job', 'parent_uuid'), ('job', 'start_line'), ('job', 'uuid'), ('job', 'end_line')]), + ), + # Tower state + migrations.CreateModel( + name='TowerScheduleState', + fields=[ + ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), + ('schedule_last_run', models.DateTimeField(auto_now_add=True)), + ], + options={ + 'abstract': False, + }, + ), + # Tower instance capacity + migrations.AddField( + model_name='instance', + name='capacity', + field=models.PositiveIntegerField(default=100, editable=False), + ), + # Workflow surveys + migrations.AddField( + model_name='workflowjob', + name='survey_passwords', + field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AddField( + model_name='workflowjobtemplate', + name='survey_enabled', + field=models.BooleanField(default=False), + ), + migrations.AddField( + model_name='workflowjobtemplate', + name='survey_spec', + field=jsonfield.fields.JSONField(default={}, blank=True), + ), + # JSON field changes + migrations.AlterField( + model_name='adhoccommandevent', + name='event_data', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + migrations.AlterField( + model_name='job', + name='artifacts', + field=awx.main.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AlterField( + model_name='job', + name='survey_passwords', + field=awx.main.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AlterField( + model_name='jobevent', + 
name='event_data', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + migrations.AlterField( + model_name='jobtemplate', + name='survey_spec', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + migrations.AlterField( + model_name='notification', + name='body', + field=awx.main.fields.JSONField(default=dict, blank=True), + ), + migrations.AlterField( + model_name='notificationtemplate', + name='notification_configuration', + field=awx.main.fields.JSONField(default=dict), + ), + migrations.AlterField( + model_name='project', + name='playbook_files', + field=awx.main.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True), + ), + migrations.AlterField( + model_name='schedule', + name='extra_data', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + migrations.AlterField( + model_name='unifiedjob', + name='job_env', + field=awx.main.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AlterField( + model_name='workflowjob', + name='survey_passwords', + field=awx.main.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AlterField( + model_name='workflowjobnode', + name='ancestor_artifacts', + field=awx.main.fields.JSONField(default={}, editable=False, blank=True), + ), + migrations.AlterField( + model_name='workflowjobnode', + name='char_prompts', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + migrations.AlterField( + model_name='workflowjobtemplate', + name='survey_spec', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + migrations.AlterField( + model_name='workflowjobtemplatenode', + name='char_prompts', + field=awx.main.fields.JSONField(default={}, blank=True), + ), + # Job Project Update + migrations.AddField( + model_name='job', + name='project_update', + field=models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, 
to='main.ProjectUpdate', help_text='The SCM Refresh task used to make sure the playbooks were available for the job run', null=True), + ), + # Inventory, non-unique name + migrations.AlterField( + model_name='inventory', + name='name', + field=models.CharField(max_length=512), + ), + # Text and has schedules + migrations.RemoveField( + model_name='unifiedjobtemplate', + name='has_schedules', + ), + migrations.AlterField( + model_name='host', + name='instance_id', + field=models.CharField(default=b'', help_text='The value used by the remote inventory source to uniquely identify the host', max_length=1024, blank=True), + ), + migrations.AlterField( + model_name='project', + name='scm_clean', + field=models.BooleanField(default=False, help_text='Discard any local changes before syncing the project.'), + ), + migrations.AlterField( + model_name='project', + name='scm_delete_on_update', + field=models.BooleanField(default=False, help_text='Delete the project before syncing.'), + ), + migrations.AlterField( + model_name='project', + name='scm_type', + field=models.CharField(default=b'', choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'), + ), + migrations.AlterField( + model_name='project', + name='scm_update_cache_timeout', + field=models.PositiveIntegerField(default=0, help_text='The number of seconds after the last project update ran that a new project update will be launched as a job dependency.', blank=True), + ), + migrations.AlterField( + model_name='project', + name='scm_update_on_launch', + field=models.BooleanField(default=False, help_text='Update the project when a job is launched that uses the project.'), + ), + migrations.AlterField( + model_name='project', + name='scm_url', + field=models.CharField(default=b'', help_text='The location where the project is stored.', max_length=1024, 
verbose_name='SCM URL', blank=True), + ), + migrations.AlterField( + model_name='project', + name='timeout', + field=models.IntegerField(default=0, help_text='The amount of time to run before the task is canceled.', blank=True), + ), + migrations.AlterField( + model_name='projectupdate', + name='scm_clean', + field=models.BooleanField(default=False, help_text='Discard any local changes before syncing the project.'), + ), + migrations.AlterField( + model_name='projectupdate', + name='scm_delete_on_update', + field=models.BooleanField(default=False, help_text='Delete the project before syncing.'), + ), + migrations.AlterField( + model_name='projectupdate', + name='scm_type', + field=models.CharField(default=b'', choices=[(b'', 'Manual'), (b'git', 'Git'), (b'hg', 'Mercurial'), (b'svn', 'Subversion')], max_length=8, blank=True, help_text='Specifies the source control system used to store the project.', verbose_name='SCM Type'), + ), + migrations.AlterField( + model_name='projectupdate', + name='scm_url', + field=models.CharField(default=b'', help_text='The location where the project is stored.', max_length=1024, verbose_name='SCM URL', blank=True), + ), + migrations.AlterField( + model_name='projectupdate', + name='timeout', + field=models.IntegerField(default=0, help_text='The amount of time to run before the task is canceled.', blank=True), + ), + migrations.AlterField( + model_name='schedule', + name='dtend', + field=models.DateTimeField(default=None, help_text='The last occurrence of the schedule occurs before this time, afterwards the schedule expires.', null=True, editable=False), + ), + migrations.AlterField( + model_name='schedule', + name='dtstart', + field=models.DateTimeField(default=None, help_text='The first occurrence of the schedule occurs on or after this time.', null=True, editable=False), + ), + migrations.AlterField( + model_name='schedule', + name='enabled', + field=models.BooleanField(default=True, help_text='Enables processing of this schedule by 
Tower.'), + ), + migrations.AlterField( + model_name='schedule', + name='next_run', + field=models.DateTimeField(default=None, help_text='The next time that the scheduled action will run.', null=True, editable=False), + ), + migrations.AlterField( + model_name='schedule', + name='rrule', + field=models.CharField(help_text="A value representing the schedule's iCal recurrence rule.", max_length=255), + ), + migrations.AlterField( + model_name='unifiedjob', + name='elapsed', + field=models.DecimalField(help_text='Elapsed time in seconds that the job ran.', editable=False, max_digits=12, decimal_places=3), + ), + migrations.AlterField( + model_name='unifiedjob', + name='execution_node', + field=models.TextField(default=b'', help_text='The Tower node the job executed on.', editable=False, blank=True), + ), + migrations.AlterField( + model_name='unifiedjob', + name='finished', + field=models.DateTimeField(default=None, help_text='The date and time the job finished execution.', null=True, editable=False), + ), + migrations.AlterField( + model_name='unifiedjob', + name='job_explanation', + field=models.TextField(default=b'', help_text="A status field to indicate the state of the job if it wasn't able to run and capture stdout", editable=False, blank=True), + ), + migrations.AlterField( + model_name='unifiedjob', + name='started', + field=models.DateTimeField(default=None, help_text='The date and time the job was queued for starting.', null=True, editable=False), + ), + + ] diff --git a/awx/main/migrations/0035_v310_modify_ha_instance.py b/awx/main/migrations/0035_v310_modify_ha_instance.py deleted file mode 100644 index fa58ec094c..0000000000 --- a/awx/main/migrations/0035_v310_modify_ha_instance.py +++ /dev/null @@ -1,23 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0034_v310_add_workflows'), - ] - - operations = [ - 
migrations.RemoveField( - model_name='instance', - name='primary', - ), - migrations.AlterField( - model_name='instance', - name='uuid', - field=models.CharField(max_length=40), - ), - ] diff --git a/awx/main/migrations/0037_v310_remove_tower_settings.py b/awx/main/migrations/0035_v310_remove_tower_settings.py similarity index 69% rename from awx/main/migrations/0037_v310_remove_tower_settings.py rename to awx/main/migrations/0035_v310_remove_tower_settings.py index 00ee17f098..e92dfe605c 100644 --- a/awx/main/migrations/0037_v310_remove_tower_settings.py +++ b/awx/main/migrations/0035_v310_remove_tower_settings.py @@ -1,17 +1,17 @@ # -*- coding: utf-8 -*- from __future__ import unicode_literals -from django.db import migrations, models +from django.db import migrations class Migration(migrations.Migration): dependencies = [ - ('main', '0036_v310_jobevent_uuid'), + ('main', '0034_v310_release'), ] - # These settings are now in the separate awx.conf app. operations = [ + # Remove Tower settings, these settings are now in separate awx.conf app. 
migrations.RemoveField( model_name='towersettings', name='user', diff --git a/awx/main/migrations/0036_v310_jobevent_uuid.py b/awx/main/migrations/0036_v310_jobevent_uuid.py deleted file mode 100644 index b097c2d1f1..0000000000 --- a/awx/main/migrations/0036_v310_jobevent_uuid.py +++ /dev/null @@ -1,19 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0035_v310_modify_ha_instance'), - ] - - operations = [ - migrations.AddField( - model_name='jobevent', - name='uuid', - field=models.CharField(default=b'', max_length=1024, editable=False), - ), - ] diff --git a/awx/main/migrations/0038_v310_job_allow_simultaneous.py b/awx/main/migrations/0038_v310_job_allow_simultaneous.py deleted file mode 100644 index 1ec3412fb4..0000000000 --- a/awx/main/migrations/0038_v310_job_allow_simultaneous.py +++ /dev/null @@ -1,19 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0037_v310_remove_tower_settings'), - ] - - operations = [ - migrations.AddField( - model_name='job', - name='allow_simultaneous', - field=models.BooleanField(default=False), - ), - ] diff --git a/awx/main/migrations/0039_v310_workflow_rbac_prompts.py b/awx/main/migrations/0039_v310_workflow_rbac_prompts.py deleted file mode 100644 index 35db9c5575..0000000000 --- a/awx/main/migrations/0039_v310_workflow_rbac_prompts.py +++ /dev/null @@ -1,82 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models -import jsonfield.fields -import django.db.models.deletion -import awx.main.fields - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0038_v310_job_allow_simultaneous'), - ] - - operations = [ - migrations.AddField( - model_name='workflowjobnode', - 
name='char_prompts', - field=jsonfield.fields.JSONField(default={}, blank=True), - ), - migrations.AddField( - model_name='workflowjobnode', - name='credential', - field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True), - ), - migrations.AddField( - model_name='workflowjobnode', - name='inventory', - field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True), - ), - migrations.AddField( - model_name='workflowjobtemplate', - name='execute_role', - field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'admin_role'], to='main.Role', null=b'True'), - ), - migrations.AddField( - model_name='workflowjobtemplate', - name='organization', - field=models.ForeignKey(related_name='workflows', on_delete=django.db.models.deletion.SET_NULL, blank=True, to='main.Organization', null=True), - ), - migrations.AddField( - model_name='workflowjobtemplate', - name='read_role', - field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_auditor', b'organization.auditor_role', b'execute_role', b'admin_role'], to='main.Role', null=b'True'), - ), - migrations.AddField( - model_name='workflowjobtemplatenode', - name='char_prompts', - field=jsonfield.fields.JSONField(default={}, blank=True), - ), - migrations.AddField( - model_name='workflowjobtemplatenode', - name='credential', - field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Credential', null=True), - ), - migrations.AddField( - model_name='workflowjobtemplatenode', - name='inventory', - field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.Inventory', null=True), - ), - migrations.AlterField( - 
model_name='workflowjobnode', - name='unified_job_template', - field=models.ForeignKey(related_name='workflowjobnodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True), - ), - migrations.AlterField( - model_name='workflowjobnode', - name='workflow_job', - field=models.ForeignKey(related_name='workflow_job_nodes', default=None, blank=True, to='main.WorkflowJob', null=True), - ), - migrations.AlterField( - model_name='workflowjobtemplate', - name='admin_role', - field=awx.main.fields.ImplicitRoleField(related_name='+', parent_role=[b'singleton:system_administrator', b'organization.admin_role'], to='main.Role', null=b'True'), - ), - migrations.AlterField( - model_name='workflowjobtemplatenode', - name='unified_job_template', - field=models.ForeignKey(related_name='workflowjobtemplatenodes', on_delete=django.db.models.deletion.SET_NULL, default=None, to='main.UnifiedJobTemplate', null=True), - ), - ] diff --git a/awx/main/migrations/0040_v310_channelgroup.py b/awx/main/migrations/0040_v310_channelgroup.py deleted file mode 100644 index 51f2016926..0000000000 --- a/awx/main/migrations/0040_v310_channelgroup.py +++ /dev/null @@ -1,22 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0039_v310_workflow_rbac_prompts'), - ] - - operations = [ - migrations.CreateModel( - name='ChannelGroup', - fields=[ - ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), - ('group', models.CharField(unique=True, max_length=200)), - ('channels', models.TextField()), - ], - ), - ] diff --git a/awx/main/migrations/0041_v310_artifacts.py b/awx/main/migrations/0041_v310_artifacts.py deleted file mode 100644 index f54cff411b..0000000000 --- a/awx/main/migrations/0041_v310_artifacts.py +++ /dev/null @@ -1,25 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ 
import unicode_literals - -from django.db import migrations, models -import jsonfield.fields - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0040_v310_channelgroup'), - ] - - operations = [ - migrations.AddField( - model_name='job', - name='artifacts', - field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AddField( - model_name='workflowjobnode', - name='ancestor_artifacts', - field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), - ), - ] diff --git a/awx/main/migrations/0042_v310_job_timeout.py b/awx/main/migrations/0042_v310_job_timeout.py deleted file mode 100644 index 4d49e5841a..0000000000 --- a/awx/main/migrations/0042_v310_job_timeout.py +++ /dev/null @@ -1,44 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0041_v310_artifacts'), - ] - - operations = [ - migrations.AddField( - model_name='inventorysource', - name='timeout', - field=models.PositiveIntegerField(default=0, blank=True), - ), - migrations.AddField( - model_name='inventoryupdate', - name='timeout', - field=models.PositiveIntegerField(default=0, blank=True), - ), - migrations.AddField( - model_name='job', - name='timeout', - field=models.PositiveIntegerField(default=0, blank=True), - ), - migrations.AddField( - model_name='jobtemplate', - name='timeout', - field=models.PositiveIntegerField(default=0, blank=True), - ), - migrations.AddField( - model_name='project', - name='timeout', - field=models.PositiveIntegerField(default=0, blank=True), - ), - migrations.AddField( - model_name='projectupdate', - name='timeout', - field=models.PositiveIntegerField(default=0, blank=True), - ), - ] diff --git a/awx/main/migrations/0043_v310_executionnode.py b/awx/main/migrations/0043_v310_executionnode.py deleted file mode 100644 index bab47ad032..0000000000 --- 
a/awx/main/migrations/0043_v310_executionnode.py +++ /dev/null @@ -1,19 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0042_v310_job_timeout'), - ] - - operations = [ - migrations.AddField( - model_name='unifiedjob', - name='execution_node', - field=models.TextField(default=b'', editable=False, blank=True), - ), - ] diff --git a/awx/main/migrations/0044_v310_scm_revision.py b/awx/main/migrations/0044_v310_scm_revision.py deleted file mode 100644 index 40ee1f8596..0000000000 --- a/awx/main/migrations/0044_v310_scm_revision.py +++ /dev/null @@ -1,30 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0043_v310_executionnode'), - ] - - operations = [ - migrations.AddField( - model_name='project', - name='scm_revision', - field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The last revision fetched by a project update', verbose_name='SCM Revision'), - ), - migrations.AddField( - model_name='projectupdate', - name='job_type', - field=models.CharField(default=b'check', max_length=64, choices=[(b'run', 'Run'), (b'check', 'Check')]), - ), - migrations.AddField( - model_name='job', - name='scm_revision', - field=models.CharField(default=b'', editable=False, max_length=1024, blank=True, help_text='The SCM Revision from the Project used for this job, if available', verbose_name='SCM Revision'), - ), - - ] diff --git a/awx/main/migrations/0045_v310_project_playbook_files.py b/awx/main/migrations/0045_v310_project_playbook_files.py deleted file mode 100644 index 77c7bfc8b7..0000000000 --- a/awx/main/migrations/0045_v310_project_playbook_files.py +++ /dev/null @@ -1,20 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from 
django.db import migrations, models -import jsonfield.fields - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0044_v310_scm_revision'), - ] - - operations = [ - migrations.AddField( - model_name='project', - name='playbook_files', - field=jsonfield.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True), - ), - ] diff --git a/awx/main/migrations/0046_v310_job_event_stdout.py b/awx/main/migrations/0046_v310_job_event_stdout.py deleted file mode 100644 index 7ff2ed8ade..0000000000 --- a/awx/main/migrations/0046_v310_job_event_stdout.py +++ /dev/null @@ -1,96 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0045_v310_project_playbook_files'), - ] - - operations = [ - migrations.AddField( - model_name='adhoccommandevent', - name='end_line', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AddField( - model_name='adhoccommandevent', - name='start_line', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AddField( - model_name='adhoccommandevent', - name='stdout', - field=models.TextField(default=b'', editable=False), - ), - migrations.AddField( - model_name='adhoccommandevent', - name='uuid', - field=models.CharField(default=b'', max_length=1024, editable=False), - ), - migrations.AddField( - model_name='adhoccommandevent', - name='verbosity', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AddField( - model_name='jobevent', - name='end_line', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AddField( - model_name='jobevent', - name='playbook', - field=models.CharField(default=b'', max_length=1024, editable=False), - ), - migrations.AddField( - model_name='jobevent', - name='start_line', 
- field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AddField( - model_name='jobevent', - name='stdout', - field=models.TextField(default=b'', editable=False), - ), - migrations.AddField( - model_name='jobevent', - name='verbosity', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AlterField( - model_name='adhoccommandevent', - name='counter', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AlterField( - model_name='adhoccommandevent', - name='event', - field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_skipped', 'Host Skipped'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]), - ), - migrations.AlterField( - model_name='jobevent', - name='counter', - field=models.PositiveIntegerField(default=0, editable=False), - ), - migrations.AlterField( - model_name='jobevent', - name='event', - field=models.CharField(max_length=100, choices=[(b'runner_on_failed', 'Host Failed'), (b'runner_on_ok', 'Host OK'), (b'runner_on_error', 'Host Failure'), (b'runner_on_skipped', 'Host Skipped'), (b'runner_on_unreachable', 'Host Unreachable'), (b'runner_on_no_hosts', 'No Hosts Remaining'), (b'runner_on_async_poll', 'Host Polling'), (b'runner_on_async_ok', 'Host Async OK'), (b'runner_on_async_failed', 'Host Async Failure'), (b'runner_item_on_ok', 'Item OK'), (b'runner_item_on_failed', 'Item Failed'), (b'runner_item_on_skipped', 'Item Skipped'), (b'runner_retry', 'Host Retry'), (b'runner_on_file_diff', 'File Difference'), (b'playbook_on_start', 'Playbook Started'), (b'playbook_on_notify', 'Running Handlers'), (b'playbook_on_include', 'Including File'), (b'playbook_on_no_hosts_matched', 'No Hosts Matched'), (b'playbook_on_no_hosts_remaining', 'No Hosts 
Remaining'), (b'playbook_on_task_start', 'Task Started'), (b'playbook_on_vars_prompt', 'Variables Prompted'), (b'playbook_on_setup', 'Gathering Facts'), (b'playbook_on_import_for_host', 'internal: on Import for Host'), (b'playbook_on_not_import_for_host', 'internal: on Not Import for Host'), (b'playbook_on_play_start', 'Play Started'), (b'playbook_on_stats', 'Playbook Complete'), (b'debug', 'Debug'), (b'verbose', 'Verbose'), (b'deprecated', 'Deprecated'), (b'warning', 'Warning'), (b'system_warning', 'System Warning'), (b'error', 'Error')]), - ), - migrations.AlterUniqueTogether( - name='adhoccommandevent', - unique_together=set([]), - ), - migrations.AlterIndexTogether( - name='adhoccommandevent', - index_together=set([('ad_hoc_command', 'event'), ('ad_hoc_command', 'uuid'), ('ad_hoc_command', 'end_line'), ('ad_hoc_command', 'start_line')]), - ), - migrations.AlterIndexTogether( - name='jobevent', - index_together=set([('job', 'event'), ('job', 'parent'), ('job', 'start_line'), ('job', 'uuid'), ('job', 'end_line')]), - ), - ] diff --git a/awx/main/migrations/0047_v310_tower_state.py b/awx/main/migrations/0047_v310_tower_state.py deleted file mode 100644 index 941dfd0ba2..0000000000 --- a/awx/main/migrations/0047_v310_tower_state.py +++ /dev/null @@ -1,24 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0046_v310_job_event_stdout'), - ] - - operations = [ - migrations.CreateModel( - name='TowerScheduleState', - fields=[ - ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)), - ('schedule_last_run', models.DateTimeField(auto_now_add=True)), - ], - options={ - 'abstract': False, - }, - ), - ] diff --git a/awx/main/migrations/0048_v310_instance_capacity.py b/awx/main/migrations/0048_v310_instance_capacity.py deleted file mode 100644 index 5f0795a4fd..0000000000 --- 
a/awx/main/migrations/0048_v310_instance_capacity.py +++ /dev/null @@ -1,19 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0047_v310_tower_state'), - ] - - operations = [ - migrations.AddField( - model_name='instance', - name='capacity', - field=models.PositiveIntegerField(default=100, editable=False), - ), - ] diff --git a/awx/main/migrations/0049_v310_workflow_surveys.py b/awx/main/migrations/0049_v310_workflow_surveys.py deleted file mode 100644 index bc3865d33c..0000000000 --- a/awx/main/migrations/0049_v310_workflow_surveys.py +++ /dev/null @@ -1,30 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models -import jsonfield.fields - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0048_v310_instance_capacity'), - ] - - operations = [ - migrations.AddField( - model_name='workflowjob', - name='survey_passwords', - field=jsonfield.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AddField( - model_name='workflowjobtemplate', - name='survey_enabled', - field=models.BooleanField(default=False), - ), - migrations.AddField( - model_name='workflowjobtemplate', - name='survey_spec', - field=jsonfield.fields.JSONField(default={}, blank=True), - ), - ] diff --git a/awx/main/migrations/0050_v310_JSONField_changes.py b/awx/main/migrations/0050_v310_JSONField_changes.py deleted file mode 100644 index b2f3f2534c..0000000000 --- a/awx/main/migrations/0050_v310_JSONField_changes.py +++ /dev/null @@ -1,90 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models -import awx.main.fields - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0049_v310_workflow_surveys'), - ] - - operations = [ - migrations.AlterField( - 
model_name='adhoccommandevent', - name='event_data', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - migrations.AlterField( - model_name='job', - name='artifacts', - field=awx.main.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AlterField( - model_name='job', - name='survey_passwords', - field=awx.main.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AlterField( - model_name='jobevent', - name='event_data', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - migrations.AlterField( - model_name='jobtemplate', - name='survey_spec', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - migrations.AlterField( - model_name='notification', - name='body', - field=awx.main.fields.JSONField(default=dict, blank=True), - ), - migrations.AlterField( - model_name='notificationtemplate', - name='notification_configuration', - field=awx.main.fields.JSONField(default=dict), - ), - migrations.AlterField( - model_name='project', - name='playbook_files', - field=awx.main.fields.JSONField(default=[], help_text='List of playbooks found in the project', verbose_name='Playbook Files', editable=False, blank=True), - ), - migrations.AlterField( - model_name='schedule', - name='extra_data', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - migrations.AlterField( - model_name='unifiedjob', - name='job_env', - field=awx.main.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AlterField( - model_name='workflowjob', - name='survey_passwords', - field=awx.main.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AlterField( - model_name='workflowjobnode', - name='ancestor_artifacts', - field=awx.main.fields.JSONField(default={}, editable=False, blank=True), - ), - migrations.AlterField( - model_name='workflowjobnode', - name='char_prompts', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - migrations.AlterField( - 
model_name='workflowjobtemplate', - name='survey_spec', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - migrations.AlterField( - model_name='workflowjobtemplatenode', - name='char_prompts', - field=awx.main.fields.JSONField(default={}, blank=True), - ), - ] diff --git a/awx/main/migrations/0051_v310_job_project_update.py b/awx/main/migrations/0051_v310_job_project_update.py deleted file mode 100644 index 2732f6973d..0000000000 --- a/awx/main/migrations/0051_v310_job_project_update.py +++ /dev/null @@ -1,20 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models -import django.db.models.deletion - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0050_v310_JSONField_changes'), - ] - - operations = [ - migrations.AddField( - model_name='job', - name='project_update', - field=models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, default=None, blank=True, to='main.ProjectUpdate', help_text='The SCM Refresh task used to make sure the playbooks were available for the job run', null=True), - ), - ] diff --git a/awx/main/migrations/0052_v310_inventory_name_non_unique.py b/awx/main/migrations/0052_v310_inventory_name_non_unique.py deleted file mode 100644 index 83bf44619e..0000000000 --- a/awx/main/migrations/0052_v310_inventory_name_non_unique.py +++ /dev/null @@ -1,19 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0051_v310_job_project_update'), - ] - - operations = [ - migrations.AlterField( - model_name='inventory', - name='name', - field=models.CharField(max_length=512), - ), - ] diff --git a/awx/main/migrations/0053_v310_update_timeout_field_type.py b/awx/main/migrations/0053_v310_update_timeout_field_type.py deleted file mode 100644 index 9365a4156a..0000000000 --- 
a/awx/main/migrations/0053_v310_update_timeout_field_type.py +++ /dev/null @@ -1,44 +0,0 @@ -# -*- coding: utf-8 -*- -from __future__ import unicode_literals - -from django.db import migrations, models - - -class Migration(migrations.Migration): - - dependencies = [ - ('main', '0052_v310_inventory_name_non_unique'), - ] - - operations = [ - migrations.AlterField( - model_name='inventorysource', - name='timeout', - field=models.IntegerField(default=0, blank=True), - ), - migrations.AlterField( - model_name='inventoryupdate', - name='timeout', - field=models.IntegerField(default=0, blank=True), - ), - migrations.AlterField( - model_name='job', - name='timeout', - field=models.IntegerField(default=0, blank=True), - ), - migrations.AlterField( - model_name='jobtemplate', - name='timeout', - field=models.IntegerField(default=0, blank=True), - ), - migrations.AlterField( - model_name='project', - name='timeout', - field=models.IntegerField(default=0, blank=True), - ), - migrations.AlterField( - model_name='projectupdate', - name='timeout', - field=models.IntegerField(default=0, blank=True), - ), - ] diff --git a/awx/main/models/__init__.py b/awx/main/models/__init__.py index b962456e48..3f7e309940 100644 --- a/awx/main/models/__init__.py +++ b/awx/main/models/__init__.py @@ -76,7 +76,12 @@ User.add_to_class('auditor_of_organizations', user_get_auditor_of_organizations) @property def user_is_system_auditor(user): if not hasattr(user, '_is_system_auditor'): - user._is_system_auditor = Role.objects.filter(role_field='system_auditor', id=user.id).exists() + if user.pk: + user._is_system_auditor = user.roles.filter( + singleton_name='system_auditor', role_field='system_auditor').exists() + else: + # Odd case where user is unsaved; this should never be relied on + return False return user._is_system_auditor diff --git a/awx/main/models/ad_hoc_commands.py b/awx/main/models/ad_hoc_commands.py index 27d8754aa6..057924eda7 100644 --- a/awx/main/models/ad_hoc_commands.py +++ 
b/awx/main/models/ad_hoc_commands.py @@ -20,7 +20,7 @@ from django.core.urlresolvers import reverse # AWX from awx.main.models.base import * # noqa from awx.main.models.unified_jobs import * # noqa -from awx.main.models.notifications import JobNotificationMixin +from awx.main.models.notifications import JobNotificationMixin, NotificationTemplate from awx.main.fields import JSONField logger = logging.getLogger('awx.main.models.ad_hoc_commands') @@ -157,18 +157,20 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin): @property def notification_templates(self): - all_inventory_sources = set() + all_orgs = set() for h in self.hosts.all(): - for invsrc in h.inventory_sources.all(): - all_inventory_sources.add(invsrc) + all_orgs.add(h.inventory.organization) active_templates = dict(error=set(), success=set(), any=set()) - for invsrc in all_inventory_sources: - notifications_dict = invsrc.notification_templates - for notification_type in active_templates.keys(): - for templ in notifications_dict[notification_type]: - active_templates[notification_type].add(templ) + base_notification_templates = NotificationTemplate.objects + for org in all_orgs: + for templ in base_notification_templates.filter(organization_notification_templates_for_errors=org): + active_templates['error'].add(templ) + for templ in base_notification_templates.filter(organization_notification_templates_for_success=org): + active_templates['success'].add(templ) + for templ in base_notification_templates.filter(organization_notification_templates_for_any=org): + active_templates['any'].add(templ) active_templates['error'] = list(active_templates['error']) active_templates['any'] = list(active_templates['any']) active_templates['success'] = list(active_templates['success']) diff --git a/awx/main/models/credential.py b/awx/main/models/credential.py index aa0bf3243c..a7f77e87c2 100644 --- a/awx/main/models/credential.py +++ b/awx/main/models/credential.py @@ -50,6 +50,8 @@ class 
Credential(PasswordFieldsModel, CommonModelNameNotUnique, ResourceMixin): ('su', _('Su')), ('pbrun', _('Pbrun')), ('pfexec', _('Pfexec')), + ('dzdo', _('DZDO')), + ('pmrun', _('Pmrun')), #('runas', _('Runas')), ] diff --git a/awx/main/models/inventory.py b/awx/main/models/inventory.py index c16b89bcb2..b01d44802c 100644 --- a/awx/main/models/inventory.py +++ b/awx/main/models/inventory.py @@ -342,6 +342,7 @@ class Host(CommonModelNameNotUnique): max_length=1024, blank=True, default='', + help_text=_('The value used by the remote inventory source to uniquely identify the host'), ) variables = models.TextField( blank=True, @@ -1001,9 +1002,8 @@ class InventorySourceOptions(BaseModel): if r not in valid_regions and r not in invalid_regions: invalid_regions.append(r) if invalid_regions: - raise ValidationError(_('Invalid %(source)s region%(plural)s: %(region)s') % { - 'source': self.source, 'plural': '' if len(invalid_regions) == 1 else 's', - 'region': ', '.join(invalid_regions)}) + raise ValidationError(_('Invalid %(source)s region: %(region)s') % { + 'source': self.source, 'region': ', '.join(invalid_regions)}) return ','.join(regions) source_vars_dict = VarsDictProperty('source_vars') @@ -1027,9 +1027,8 @@ class InventorySourceOptions(BaseModel): if instance_filter_name not in self.INSTANCE_FILTER_NAMES: invalid_filters.append(instance_filter) if invalid_filters: - raise ValidationError(_('Invalid filter expression%(plural)s: %(filter)s') % - {'plural': '' if len(invalid_filters) == 1 else 's', - 'filter': ', '.join(invalid_filters)}) + raise ValidationError(_('Invalid filter expression: %(filter)s') % + {'filter': ', '.join(invalid_filters)}) return instance_filters def clean_group_by(self): @@ -1046,9 +1045,8 @@ class InventorySourceOptions(BaseModel): if c not in valid_choices and c not in invalid_choices: invalid_choices.append(c) if invalid_choices: - raise ValidationError(_('Invalid group by choice%(plural)s: %(choice)s') % - {'plural': '' if 
len(invalid_choices) == 1 else 's', - 'choice': ', '.join(invalid_choices)}) + raise ValidationError(_('Invalid group by choice: %(choice)s') % + {'choice': ', '.join(invalid_choices)}) return ','.join(choices) diff --git a/awx/main/models/jobs.py b/awx/main/models/jobs.py index b7acf21775..64ad7f7791 100644 --- a/awx/main/models/jobs.py +++ b/awx/main/models/jobs.py @@ -4,14 +4,12 @@ # Python import datetime import hmac -import json import logging import time from urlparse import urljoin # Django from django.conf import settings -from django.core.cache import cache from django.db import models from django.db.models import Q, Count from django.utils.dateparse import parse_datetime @@ -589,22 +587,10 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin): hosts=all_hosts)) return data - def handle_extra_data(self, extra_data): - extra_vars = {} - if isinstance(extra_data, dict): - extra_vars = extra_data - elif extra_data is None: - return - else: - if extra_data == "": - return - try: - extra_vars = json.loads(extra_data) - except Exception as e: - logger.warn("Exception deserializing extra vars: " + str(e)) - evars = self.extra_vars_dict - evars.update(extra_vars) - self.update_fields(extra_vars=json.dumps(evars)) + def _resources_sufficient_for_launch(self): + if self.job_type == PERM_INVENTORY_SCAN: + return self.inventory_id is not None + return not (self.inventory_id is None or self.project_id is None) def display_artifacts(self): ''' @@ -654,6 +640,16 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin): def get_notification_friendly_name(self): return "Job" + ''' + Canceling a job also cancels the implicit project update with launch_type + run. 
+ ''' + def cancel(self): + res = super(Job, self).cancel() + if self.project_update: + self.project_update.cancel() + return res + class JobHostSummary(CreatedModifiedModel): ''' @@ -814,7 +810,7 @@ class JobEvent(CreatedModifiedModel): ('job', 'uuid'), ('job', 'start_line'), ('job', 'end_line'), - ('job', 'parent'), + ('job', 'parent_uuid'), ] job = models.ForeignKey( @@ -890,6 +886,11 @@ class JobEvent(CreatedModifiedModel): on_delete=models.SET_NULL, editable=False, ) + parent_uuid = models.CharField( + max_length=1024, + default='', + editable=False, + ) counter = models.PositiveIntegerField( default=0, editable=False, @@ -970,28 +971,6 @@ class JobEvent(CreatedModifiedModel): pass return msg - def _find_parent_id(self): - # Find the (most likely) parent event for this event. - parent_events = set() - if self.event in ('playbook_on_play_start', 'playbook_on_stats', - 'playbook_on_vars_prompt'): - parent_events.add('playbook_on_start') - elif self.event in ('playbook_on_notify', 'playbook_on_setup', - 'playbook_on_task_start', - 'playbook_on_no_hosts_matched', - 'playbook_on_no_hosts_remaining', - 'playbook_on_import_for_host', - 'playbook_on_not_import_for_host'): - parent_events.add('playbook_on_play_start') - elif self.event.startswith('runner_on_'): - parent_events.add('playbook_on_setup') - parent_events.add('playbook_on_task_start') - if parent_events: - qs = JobEvent.objects.filter(job_id=self.job_id, event__in=parent_events).order_by('-pk') - if self.pk: - qs = qs.filter(pk__lt=self.pk) - return qs.only('id').values_list('id', flat=True).first() - def _update_from_event_data(self): # Update job event model fields from event data. updated_fields = set() @@ -1033,20 +1012,14 @@ class JobEvent(CreatedModifiedModel): updated_fields.add(field) return updated_fields - def _update_parent_failed_and_changed(self): - # Propagate failed and changed flags to parent events. 
- if self.parent_id: - parent = self.parent - update_fields = [] - if self.failed and not parent.failed: - parent.failed = True - update_fields.append('failed') - if self.changed and not parent.changed: - parent.changed = True - update_fields.append('changed') - if update_fields: - parent.save(update_fields=update_fields, from_parent_update=True) - parent._update_parent_failed_and_changed() + def _update_parents_failed_and_changed(self): + # Update parent events to reflect failed, changed + runner_events = JobEvent.objects.filter(job=self.job, + event__startswith='runner_on') + changed_events = runner_events.filter(changed=True) + failed_events = runner_events.filter(failed=True) + JobEvent.objects.filter(uuid__in=changed_events.values_list('parent_uuid', flat=True)).update(changed=True) + JobEvent.objects.filter(uuid__in=failed_events.values_list('parent_uuid', flat=True)).update(failed=True) def _update_hosts(self, extra_host_pks=None): # Update job event hosts m2m from host_name, propagate to parent events. @@ -1066,15 +1039,18 @@ class JobEvent(CreatedModifiedModel): qs = qs.exclude(job_events__pk=self.id).only('id') for host in qs: self.hosts.add(host) - if self.parent_id: - self.parent._update_hosts(qs.values_list('id', flat=True)) + if self.parent_uuid: + parent = JobEvent.objects.filter(uuid=self.parent_uuid) + if parent.exists(): + parent = parent[0] + parent._update_hosts(qs.values_list('id', flat=True)) def _update_host_summary_from_stats(self): from awx.main.models.inventory import Host hostnames = set() try: - for v in self.event_data.values(): - hostnames.update(v.keys()) + for stat in ('changed', 'dark', 'failures', 'ok', 'processed', 'skipped'): + hostnames.update(self.event_data.get(stat, {}).keys()) except AttributeError: # In case event_data or v isn't a dict. 
pass with ignore_inventory_computed_fields(): @@ -1126,21 +1102,13 @@ class JobEvent(CreatedModifiedModel): self.host_id = host_id if 'host_id' not in update_fields: update_fields.append('host_id') - # Update parent related field if not set. - if self.parent_id is None: - self.parent_id = self._find_parent_id() - if self.parent_id and 'parent_id' not in update_fields: - update_fields.append('parent_id') super(JobEvent, self).save(*args, **kwargs) # Update related objects after this event is saved. if not from_parent_update: - if self.parent_id: - self._update_parent_failed_and_changed() - # FIXME: The update_hosts() call (and its queries) are the current - # performance bottleneck.... if getattr(settings, 'CAPTURE_JOB_EVENT_HOSTS', False): self._update_hosts() if self.event == 'playbook_on_stats': + self._update_parents_failed_and_changed() self._update_host_summary_from_stats() @classmethod @@ -1162,48 +1130,40 @@ class JobEvent(CreatedModifiedModel): except (KeyError, ValueError): kwargs.pop('created', None) - # Save UUID and parent UUID for determining parent-child relationship. - job_event_uuid = kwargs.get('uuid', None) - parent_event_uuid = kwargs.get('parent_uuid', None) - artifact_dict = kwargs.get('artifact_data', None) - # Sanity check: Don't honor keys that we don't recognize. valid_keys = {'job_id', 'event', 'event_data', 'playbook', 'play', 'role', 'task', 'created', 'counter', 'uuid', 'stdout', - 'start_line', 'end_line', 'verbosity'} + 'parent_uuid', 'start_line', 'end_line', 'verbosity'} for key in kwargs.keys(): if key not in valid_keys: kwargs.pop(key) - # Try to find a parent event based on UUID. 
- if parent_event_uuid: - cache_key = '{}_{}'.format(kwargs['job_id'], parent_event_uuid) - parent_id = cache.get(cache_key) - if parent_id is None: - parent_id = JobEvent.objects.filter(job_id=kwargs['job_id'], uuid=parent_event_uuid).only('id').values_list('id', flat=True).first() - if parent_id: - print("Settings cache: {} with value {}".format(cache_key, parent_id)) - cache.set(cache_key, parent_id, 300) - if parent_id: - kwargs['parent_id'] = parent_id + event_data = kwargs.get('event_data', None) + artifact_dict = None + if event_data: + artifact_dict = event_data.pop('artifact_data', None) analytics_logger.info('Job event data saved.', extra=dict(event_model_data=kwargs)) job_event = JobEvent.objects.create(**kwargs) - # Cache this job event ID vs. UUID for future parent lookups. - if job_event_uuid: - cache_key = '{}_{}'.format(kwargs['job_id'], job_event_uuid) - cache.set(cache_key, job_event.id, 300) - # Save artifact data to parent job (if provided). if artifact_dict: - event_data = kwargs.get('event_data', None) if event_data and isinstance(event_data, dict): - res = event_data.get('res', None) - if res and isinstance(res, dict): - if res.get('_ansible_no_log', False): - artifact_dict['_ansible_no_log'] = True + # Note: Core has not added support for marking artifacts as + # sensitive yet. Going forward, core will not use + # _ansible_no_log to denote sensitive set_stats calls. + # Instead, they plan to add a flag outside of the traditional + # no_log mechanism. no_log will not work for this feature, + # in core, because sensitive data is scrubbed before sending + # data to the callback. The playbook_on_stats is the callback + # in which the set_stats data is used. + + # Again, the sensitive artifact feature has not yet landed in + # core. 
The below is how we mark artifacts payload as + # sensitive + # artifact_dict['_ansible_no_log'] = True + # parent_job = Job.objects.filter(pk=kwargs['job_id']).first() if parent_job and parent_job.artifacts != artifact_dict: parent_job.artifacts = artifact_dict @@ -1328,23 +1288,6 @@ class SystemJob(UnifiedJob, SystemJobOptions, JobNotificationMixin): def get_ui_url(self): return urljoin(settings.TOWER_URL_BASE, "/#/management_jobs/{}".format(self.pk)) - def handle_extra_data(self, extra_data): - extra_vars = {} - if isinstance(extra_data, dict): - extra_vars = extra_data - elif extra_data is None: - return - else: - if extra_data == "": - return - try: - extra_vars = json.loads(extra_data) - except Exception as e: - logger.warn("Exception deserializing extra vars: " + str(e)) - evars = self.extra_vars_dict - evars.update(extra_vars) - self.update_fields(extra_vars=json.dumps(evars)) - @property def task_impact(self): return 150 diff --git a/awx/main/models/mixins.py b/awx/main/models/mixins.py index 07a346964b..80eabf3495 100644 --- a/awx/main/models/mixins.py +++ b/awx/main/models/mixins.py @@ -37,8 +37,12 @@ class ResourceMixin(models.Model): ''' return ResourceMixin._accessible_objects(cls, accessor, role_field) + @classmethod + def accessible_pk_qs(cls, accessor, role_field): + return ResourceMixin._accessible_pk_qs(cls, accessor, role_field) + @staticmethod - def _accessible_objects(cls, accessor, role_field): + def _accessible_pk_qs(cls, accessor, role_field, content_types=None): if type(accessor) == User: ancestor_roles = accessor.roles.all() elif type(accessor) == Role: @@ -47,14 +51,22 @@ class ResourceMixin(models.Model): accessor_type = ContentType.objects.get_for_model(accessor) ancestor_roles = Role.objects.filter(content_type__pk=accessor_type.id, object_id=accessor.id) - qs = cls.objects.filter(pk__in = - RoleAncestorEntry.objects.filter( - ancestor__in=ancestor_roles, - content_type_id = ContentType.objects.get_for_model(cls).id, - role_field = 
role_field - ).values_list('object_id').distinct() - ) - return qs + + if content_types is None: + ct_kwarg = dict(content_type_id = ContentType.objects.get_for_model(cls).id) + else: + ct_kwarg = dict(content_type_id__in = content_types) + + return RoleAncestorEntry.objects.filter( + ancestor__in = ancestor_roles, + role_field = role_field, + **ct_kwarg + ).values_list('object_id').distinct() + + + @staticmethod + def _accessible_objects(cls, accessor, role_field): + return cls.objects.filter(pk__in = ResourceMixin._accessible_pk_qs(cls, accessor, role_field)) def get_permissions(self, accessor): @@ -105,12 +117,6 @@ class SurveyJobTemplateMixin(models.Model): # Job Template extra_vars extra_vars = self.extra_vars_dict - # Overwrite with job template extra vars with survey default vars - if self.survey_enabled and 'spec' in self.survey_spec: - for survey_element in self.survey_spec.get("spec", []): - if 'default' in survey_element and survey_element['default']: - extra_vars[survey_element['variable']] = survey_element['default'] - # transform to dict if 'extra_vars' in kwargs: kwargs_extra_vars = kwargs['extra_vars'] @@ -118,6 +124,18 @@ class SurveyJobTemplateMixin(models.Model): else: kwargs_extra_vars = {} + # Overwrite with job template extra vars with survey default vars + if self.survey_enabled and 'spec' in self.survey_spec: + for survey_element in self.survey_spec.get("spec", []): + default = survey_element['default'] + variable_key = survey_element['variable'] + if survey_element.get('type') == 'password': + if variable_key in kwargs_extra_vars: + kw_value = kwargs_extra_vars[variable_key] + if kw_value.startswith('$encrypted$') and kw_value != default: + kwargs_extra_vars[variable_key] = default + extra_vars[variable_key] = default + # Overwrite job template extra vars with explicit job extra vars # and add on job extra vars extra_vars.update(kwargs_extra_vars) diff --git a/awx/main/models/organization.py b/awx/main/models/organization.py index 
8f96d1656c..c2fe3b1c4f 100644 --- a/awx/main/models/organization.py +++ b/awx/main/models/organization.py @@ -210,7 +210,7 @@ class AuthToken(BaseModel): REASON_CHOICES = [ ('', _('Token not invalidated')), ('timeout_reached', _('Token is expired')), - ('limit_reached', _('Maximum per-user sessions reached')), + ('limit_reached', _('The maximum number of allowed sessions for this user has been exceeded.')), # invalid_token is not a used data-base value, but is returned by the # api when a token is not found ('invalid_token', _('Invalid token')), diff --git a/awx/main/models/projects.py b/awx/main/models/projects.py index 1b56ae72a3..6c76143f40 100644 --- a/awx/main/models/projects.py +++ b/awx/main/models/projects.py @@ -78,12 +78,14 @@ class ProjectOptions(models.Model): blank=True, default='', verbose_name=_('SCM Type'), + help_text=_("Specifies the source control system used to store the project."), ) scm_url = models.CharField( max_length=1024, blank=True, default='', verbose_name=_('SCM URL'), + help_text=_("The location where the project is stored."), ) scm_branch = models.CharField( max_length=256, @@ -94,9 +96,11 @@ class ProjectOptions(models.Model): ) scm_clean = models.BooleanField( default=False, + help_text=_('Discard any local changes before syncing the project.'), ) scm_delete_on_update = models.BooleanField( default=False, + help_text=_('Delete the project before syncing.'), ) credential = models.ForeignKey( 'Credential', @@ -109,6 +113,7 @@ class ProjectOptions(models.Model): timeout = models.IntegerField( blank=True, default=0, + help_text=_("The amount of time to run before the task is canceled."), ) def clean_scm_type(self): @@ -221,10 +226,13 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin): ) scm_update_on_launch = models.BooleanField( default=False, + help_text=_('Update the project when a job is launched that uses the project.'), ) scm_update_cache_timeout = models.PositiveIntegerField( default=0, blank=True, + 
help_text=_('The number of seconds after the last project update ran that a new ' + 'project update will be launched as a job dependency.'), ) scm_revision = models.CharField( diff --git a/awx/main/models/rbac.py b/awx/main/models/rbac.py index 7f8e4813df..9e40846b42 100644 --- a/awx/main/models/rbac.py +++ b/awx/main/models/rbac.py @@ -427,6 +427,9 @@ class Role(models.Model): def is_ancestor_of(self, role): return role.ancestors.filter(id=self.id).exists() + def is_singleton(self): + return self.singleton_name in [ROLE_SINGLETON_SYSTEM_ADMINISTRATOR, ROLE_SINGLETON_SYSTEM_AUDITOR] + class RoleAncestorEntry(models.Model): diff --git a/awx/main/models/schedules.py b/awx/main/models/schedules.py index 786e788aa3..21ecf49916 100644 --- a/awx/main/models/schedules.py +++ b/awx/main/models/schedules.py @@ -10,6 +10,7 @@ import dateutil.rrule from django.db import models from django.db.models.query import QuerySet from django.utils.timezone import now, make_aware, get_default_timezone +from django.utils.translation import ugettext_lazy as _ # AWX from awx.main.models.base import * # noqa @@ -65,24 +66,29 @@ class Schedule(CommonModel): ) enabled = models.BooleanField( default=True, + help_text=_("Enables processing of this schedule by Tower.") ) dtstart = models.DateTimeField( null=True, default=None, editable=False, + help_text=_("The first occurrence of the schedule occurs on or after this time.") ) dtend = models.DateTimeField( null=True, default=None, editable=False, + help_text=_("The last occurrence of the schedule occurs before this time; afterwards, the schedule expires.") ) rrule = models.CharField( max_length=255, + help_text=_("A value representing the schedule's iCal recurrence rule.") ) next_run = models.DateTimeField( null=True, default=None, editable=False, + help_text=_("The next time that the scheduled action will run.") ) extra_data = JSONField( blank=True, diff --git a/awx/main/models/unified_jobs.py b/awx/main/models/unified_jobs.py index
63c3e3196a..fdbe566fc2 100644 --- a/awx/main/models/unified_jobs.py +++ b/awx/main/models/unified_jobs.py @@ -20,6 +20,7 @@ from django.utils.translation import ugettext_lazy as _ from django.utils.timezone import now from django.utils.encoding import smart_text from django.apps import apps +from django.contrib.contenttypes.models import ContentType # Django-Polymorphic from polymorphic import PolymorphicModel @@ -30,6 +31,7 @@ from djcelery.models import TaskMeta # AWX from awx.main.models.base import * # noqa from awx.main.models.schedules import Schedule +from awx.main.models.mixins import ResourceMixin from awx.main.utils import ( decrypt_field, _inventory_updates, copy_model_by_class, copy_m2m_relationships @@ -122,10 +124,6 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio default=None, editable=False, ) - has_schedules = models.BooleanField( - default=False, - editable=False, - ) #on_missed_schedule = models.CharField( # max_length=32, # choices=[], @@ -170,6 +168,20 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio else: return super(UnifiedJobTemplate, self).unique_error_message(model_class, unique_check) + @classmethod + def accessible_pk_qs(cls, accessor, role_field): + ''' + A re-implementation of the accessible pk queryset for the "normal" unified JTs. + Does not return inventory sources or system JTs; these should + be handled inside of get_queryset where it is utilized. + ''' + ujt_names = [c.__name__.lower() for c in cls.__subclasses__() + if c.__name__.lower() not in ['inventorysource', 'systemjobtemplate']] + subclass_content_types = list(ContentType.objects.filter( + model__in=ujt_names).values_list('id', flat=True)) + + return ResourceMixin._accessible_pk_qs(cls, accessor, role_field, content_types=subclass_content_types) + def _perform_unique_checks(self, unique_checks): # Handle the list of unique fields returned above.
Replace with an # appropriate error message for the remaining field(s) in the unique @@ -353,6 +365,10 @@ class UnifiedJobTypeStringMixin(object): def _underscore_to_camel(cls, word): return ''.join(x.capitalize() or '_' for x in word.split('_')) + @classmethod + def _camel_to_underscore(cls, word): + return re.sub('(?!^)([A-Z]+)', r'_\1', word).lower() + @classmethod def _model_type(cls, job_type): # Django >= 1.9 @@ -371,6 +387,9 @@ class UnifiedJobTypeStringMixin(object): return None return model.objects.get(id=job_id) + def model_to_str(self): + return UnifiedJobTypeStringMixin._camel_to_underscore(self.__class__.__name__) + class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique, UnifiedJobTypeStringMixin): ''' @@ -386,6 +405,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique ('scheduled', _('Scheduled')), # Job was started from a schedule. ('dependency', _('Dependency')), # Job was started as a dependency of another job. ('workflow', _('Workflow')), # Job was started from a workflow job. + ('sync', _('Sync')), # Job was started from a project sync. 
] PASSWORD_FIELDS = ('start_args',) @@ -431,6 +451,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique blank=True, default='', editable=False, + help_text=_("The Tower node the job executed on."), ) notifications = models.ManyToManyField( 'Notification', @@ -456,16 +477,19 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique null=True, default=None, editable=False, + help_text=_("The date and time the job was queued for starting."), ) finished = models.DateTimeField( null=True, default=None, editable=False, + help_text=_("The date and time the job finished execution."), ) elapsed = models.DecimalField( max_digits=12, decimal_places=3, editable=False, + help_text=_("Elapsed time in seconds that the job ran."), ) job_args = models.TextField( blank=True, @@ -487,6 +511,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique blank=True, default='', editable=False, + help_text=_("A status field to indicate the state of the job if it wasn't able to run and capture stdout"), ) start_args = models.TextField( blank=True, @@ -553,6 +578,9 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique "Override in child classes, None value indicates this is not configurable" return None + def _resources_sufficient_for_launch(self): + return True + def __unicode__(self): return u'%s-%s-%s' % (self.created, self.id, self.status) @@ -780,13 +808,19 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique @property def workflow_job_id(self): if self.spawned_by_workflow: - return self.unified_job_node.workflow_job.pk + try: + return self.unified_job_node.workflow_job.pk + except UnifiedJob.unified_job_node.RelatedObjectDoesNotExist: + pass return None @property def workflow_node_id(self): if self.spawned_by_workflow: - return self.unified_job_node.pk + try: + return self.unified_job_node.pk + except 
UnifiedJob.unified_job_node.RelatedObjectDoesNotExist: + pass return None @property @@ -801,7 +835,22 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique return [] def handle_extra_data(self, extra_data): - return + if hasattr(self, 'extra_vars'): + extra_vars = {} + if isinstance(extra_data, dict): + extra_vars = extra_data + elif extra_data is None: + return + else: + if extra_data == "": + return + try: + extra_vars = json.loads(extra_data) + except Exception as e: + logger.warn("Exception deserializing extra vars: " + str(e)) + evars = self.extra_vars_dict + evars.update(extra_vars) + self.update_fields(extra_vars=json.dumps(evars)) @property def can_start(self): diff --git a/awx/main/models/workflow.py b/awx/main/models/workflow.py index 1cd4d44348..7e6f73f085 100644 --- a/awx/main/models/workflow.py +++ b/awx/main/models/workflow.py @@ -125,23 +125,16 @@ class WorkflowNodeBase(CreatedModifiedModel): return {} accepted_fields, ignored_fields = ujt_obj._accept_or_ignore_job_kwargs(**prompts_dict) - ask_for_vars_dict = ujt_obj._ask_for_vars_dict() ignored_dict = {} - missing_dict = {} for fd in ignored_fields: ignored_dict[fd] = 'Workflow node provided field, but job template is not set to ask on launch' scan_errors = ujt_obj._extra_job_type_errors(accepted_fields) ignored_dict.update(scan_errors) - for fd in ['inventory', 'credential']: - if getattr(ujt_obj, fd) is None and not (ask_for_vars_dict.get(fd, False) and fd in prompts_dict): - missing_dict[fd] = 'Job Template does not have this field and workflow node does not provide it' data = {} if ignored_dict: data['ignored'] = ignored_dict - if missing_dict: - data['missing'] = missing_dict return data def get_parent_nodes(self): @@ -245,8 +238,7 @@ class WorkflowJobNode(WorkflowNodeBase): accepted_fields, ignored_fields = ujt_obj._accept_or_ignore_job_kwargs(**self.prompts_dict()) for fd in ujt_obj._extra_job_type_errors(accepted_fields): accepted_fields.pop(fd) - 
data.update(accepted_fields) - # TODO: decide what to do in the event of missing fields + data.update(accepted_fields) # missing fields are handled in the scheduler # build ancestor artifacts, save them to node model for later aa_dict = {} for parent_node in self.get_parent_nodes(): @@ -366,7 +358,9 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl @classmethod def _get_unified_jt_copy_names(cls): - return (super(WorkflowJobTemplate, cls)._get_unified_jt_copy_names() + + base_list = super(WorkflowJobTemplate, cls)._get_unified_jt_copy_names() + base_list.remove('labels') + return (base_list + ['survey_spec', 'survey_enabled', 'organization']) def get_absolute_url(self): @@ -390,8 +384,8 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl success=list(success_notification_templates), any=list(any_notification_templates)) - def create_workflow_job(self, **kwargs): - workflow_job = self.create_unified_job(**kwargs) + def create_unified_job(self, **kwargs): + workflow_job = super(WorkflowJobTemplate, self).create_unified_job(**kwargs) workflow_job.copy_nodes_from_original(original=self) return workflow_job @@ -416,18 +410,22 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl def can_start_without_user_input(self): '''Return whether WFJT can be launched without survey passwords.''' - return not bool(self.variables_needed_to_start) + return not bool( + self.variables_needed_to_start or + self.node_templates_missing() or + self.node_prompts_rejected()) - def get_warnings(self): - warning_data = {} - for node in self.workflow_job_template_nodes.all(): - if node.unified_job_template is None: - warning_data[node.pk] = 'Node is missing a linked unified_job_template' - continue + def node_templates_missing(self): + return [node.pk for node in self.workflow_job_template_nodes.filter( + unified_job_template__isnull=True).all()] + + def node_prompts_rejected(self): + node_list = [] 
+ for node in self.workflow_job_template_nodes.prefetch_related('unified_job_template').all(): node_prompts_warnings = node.get_prompts_warnings() if node_prompts_warnings: - warning_data[node.pk] = node_prompts_warnings - return warning_data + node_list.append(node.pk) + return node_list def user_copy(self, user): new_wfjt = self.copy_unified_jt() @@ -435,11 +433,6 @@ class WorkflowJobTemplate(UnifiedJobTemplate, WorkflowJobOptions, SurveyJobTempl return new_wfjt -# Stub in place because of old migrations, can remove if migrations are squashed -class WorkflowJobInheritNodesMixin(object): - pass - - class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificationMixin): class Meta: app_label = 'main' @@ -488,10 +481,6 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio result['body'] = '\n'.join(str_arr) return result - # TODO: Ask UI if this is needed ? - #def get_ui_url(self): - # return urlparse.urljoin(tower_settings.TOWER_URL_BASE, "/#/workflow_jobs/{}".format(self.pk)) - @property def task_impact(self): return 0 diff --git a/awx/main/scheduler/__init__.py b/awx/main/scheduler/__init__.py index 8569fb5cfc..bb98cec776 100644 --- a/awx/main/scheduler/__init__.py +++ b/awx/main/scheduler/__init__.py @@ -10,6 +10,7 @@ from sets import Set from django.conf import settings from django.db import transaction, connection from django.db.utils import DatabaseError +from django.utils.translation import ugettext_lazy as _ # AWX from awx.main.models import * # noqa @@ -114,14 +115,20 @@ class TaskManager(): dag = WorkflowDAG(workflow_job) spawn_nodes = dag.bfs_nodes_to_run() for spawn_node in spawn_nodes: + if spawn_node.unified_job_template is None: + continue kv = spawn_node.get_job_kwargs() job = spawn_node.unified_job_template.create_unified_job(**kv) spawn_node.job = job spawn_node.save() - can_start = job.signal_start(**kv) + if job._resources_sufficient_for_launch(): + can_start = job.signal_start(**kv) + else: + 
can_start = False if not can_start: job.status = 'failed' - job.job_explanation = "Workflow job could not start because it was not in the right state or required manual credentials" + job.job_explanation = _("Job spawned from workflow could not start because it " + "was not in the right state or required manual credentials") job.save(update_fields=['status', 'job_explanation']) connection.on_commit(lambda: job.websocket_emit_status('failed')) @@ -166,6 +173,9 @@ class TaskManager(): return (active_task_queues, active_tasks) + def get_dependent_jobs_for_inv_and_proj_update(self, job_obj): + return [{'type': j.model_to_str(), 'id': j.id} for j in job_obj.dependent_jobs.all()] + def start_task(self, task, dependent_tasks=[]): from awx.main.tasks import handle_work_error, handle_work_success @@ -179,6 +189,17 @@ class TaskManager(): success_handler = handle_work_success.s(task_actual=task_actual) job_obj = task.get_full() + ''' + This is to account for when there isn't enough capacity to execute all + dependent jobs (i.e. proj or inv update) within the same schedule() + call. + + Subsequent calls to schedule() need to reconstruct the proj or inv + update -> job failure dependency. The call below reconstructs that + failure dependency. + ''' + if len(dependencies) == 0: + dependencies = self.get_dependent_jobs_for_inv_and_proj_update(job_obj) job_obj.status = 'waiting' (start_status, opts) = job_obj.pre_start() @@ -230,16 +251,41 @@ class TaskManager(): return inventory_task + ''' + Since we are dealing with partial objects, we don't get to take advantage + of Django to resolve the type of the related Many to Many field dependent_job. + + Hence the potential double query in this method.
+ def get_related_dependent_jobs_as_patials(self, job_ids): + dependent_partial_jobs = [] + for id in job_ids: + if ProjectUpdate.objects.filter(id=id).exists(): + dependent_partial_jobs.append(ProjectUpdateDict({"id": id}).refresh_partial()) + elif InventoryUpdate.objects.filter(id=id).exists(): + dependent_partial_jobs.append(InventoryUpdateDict({"id": id}).refresh_partial()) + return dependent_partial_jobs + + def capture_chain_failure_dependencies(self, task, dependencies): + for dep in dependencies: + dep_obj = dep.get_full() + dep_obj.dependent_jobs.add(task['id']) + dep_obj.save() + def generate_dependencies(self, task): dependencies = [] # TODO: What if the project is null ? if type(task) is JobDict: + if task['project__scm_update_on_launch'] is True and \ self.graph.should_update_related_project(task): project_task = self.create_project_update(task) dependencies.append(project_task) # Inventory created 2 seconds behind job + ''' + Inventory may have already been synced from a provision callback. + ''' inventory_sources_already_updated = task.get_inventory_sources_already_updated() for inventory_source_task in self.graph.get_inventory_sources(task['inventory_id']): @@ -248,6 +294,8 @@ class TaskManager(): if self.graph.should_update_related_inventory_source(task, inventory_source_task['id']): inventory_task = self.create_inventory_update(task, inventory_source_task) dependencies.append(inventory_task) + + self.capture_chain_failure_dependencies(task, dependencies) return dependencies def process_latest_project_updates(self, latest_project_updates): @@ -305,6 +353,7 @@ class TaskManager(): 'Celery, so it has been marked as failed.', )) task_obj.save() + _send_notification_templates(task_obj, 'failed') connection.on_commit(lambda: task_obj.websocket_emit_status('failed')) logger.error("Task %s appears orphaned...
marking as failed" % task) diff --git a/awx/main/scheduler/dag_workflow.py b/awx/main/scheduler/dag_workflow.py index c765b48678..5fc716584a 100644 --- a/awx/main/scheduler/dag_workflow.py +++ b/awx/main/scheduler/dag_workflow.py @@ -67,6 +67,8 @@ class WorkflowDAG(SimpleDAG): obj = n['node_object'] job = obj.job + if obj.unified_job_template is None: + continue if not job: return False # Job is about to run or is running. Hold our horses and wait for diff --git a/awx/main/scheduler/dependency_graph.py b/awx/main/scheduler/dependency_graph.py index 7b66a8fe1b..846a194b27 100644 --- a/awx/main/scheduler/dependency_graph.py +++ b/awx/main/scheduler/dependency_graph.py @@ -117,10 +117,6 @@ class DependencyGraph(object): if not latest_inventory_update: return True - # TODO: Other finished, failed cases? i.e. error ? - if latest_inventory_update['status'] in ['failed', 'canceled']: - return True - ''' This is a bit of fuzzy logic. If the latest inventory update has a created time == job_created_time-2 @@ -138,7 +134,11 @@ class DependencyGraph(object): timeout_seconds = timedelta(seconds=latest_inventory_update['inventory_source__update_cache_timeout']) if (latest_inventory_update['finished'] + timeout_seconds) < now: return True - + + if latest_inventory_update['inventory_source__update_on_launch'] is True and \ + latest_inventory_update['status'] in ['failed', 'canceled', 'error']: + return True + return False def mark_system_job(self): diff --git a/awx/main/scheduler/partial.py b/awx/main/scheduler/partial.py index d16634f369..c28992d7f3 100644 --- a/awx/main/scheduler/partial.py +++ b/awx/main/scheduler/partial.py @@ -1,6 +1,7 @@ # Python import json +import itertools # AWX from awx.main.utils import decrypt_field_value @@ -61,13 +62,35 @@ class PartialModelDict(object): def task_impact(self): raise RuntimeError("Inherit and implement me") + @classmethod + def merge_values(cls, values): + grouped_results = itertools.groupby(values, key=lambda value: value['id']) + + 
merged_values = [] + for k, g in grouped_results: + groups = list(g) + merged_value = {} + for group in groups: + for key, val in group.iteritems(): + if not merged_value.get(key): + merged_value[key] = val + elif val != merged_value[key]: + if isinstance(merged_value[key], list): + if val not in merged_value[key]: + merged_value[key].append(val) + else: + old_val = merged_value[key] + merged_value[key] = [old_val, val] + merged_values.append(merged_value) + return merged_values + class JobDict(PartialModelDict): FIELDS = ( 'id', 'status', 'job_template_id', 'inventory_id', 'project_id', 'launch_type', 'limit', 'allow_simultaneous', 'created', 'job_type', 'celery_task_id', 'project__scm_update_on_launch', - 'forks', 'start_args', + 'forks', 'start_args', 'dependent_jobs__id', ) model = Job @@ -85,6 +108,14 @@ class JobDict(PartialModelDict): start_args = start_args or {} return start_args.get('inventory_sources_already_updated', []) + @classmethod + def filter_partial(cls, status=[]): + kv = { + 'status__in': status + } + merged = PartialModelDict.merge_values(cls.model.objects.filter(**kv).values(*cls.get_db_values())) + return [cls(o) for o in merged] + class ProjectUpdateDict(PartialModelDict): FIELDS = ( @@ -134,7 +165,8 @@ class InventoryUpdateDict(PartialModelDict): #'inventory_source__update_on_launch', #'inventory_source__update_cache_timeout', FIELDS = ( - 'id', 'status', 'created', 'celery_task_id', 'inventory_source_id', 'inventory_source__inventory_id', + 'id', 'status', 'created', 'celery_task_id', 'inventory_source_id', + 'inventory_source__inventory_id', ) model = InventoryUpdate @@ -151,6 +183,7 @@ class InventoryUpdateLatestDict(InventoryUpdateDict): FIELDS = ( 'id', 'status', 'created', 'celery_task_id', 'inventory_source_id', 'finished', 'inventory_source__update_cache_timeout', 'launch_type', + 'inventory_source__update_on_launch', ) model = InventoryUpdate @@ -217,7 +250,7 @@ class SystemJobDict(PartialModelDict): class 
AdHocCommandDict(PartialModelDict): FIELDS = ( - 'id', 'created', 'status', 'inventory_id', + 'id', 'created', 'status', 'inventory_id', 'dependent_jobs__id', 'celery_task_id', ) model = AdHocCommand diff --git a/awx/main/scheduler/tasks.py b/awx/main/scheduler/tasks.py index 5c4c821606..6e169224b7 100644 --- a/awx/main/scheduler/tasks.py +++ b/awx/main/scheduler/tasks.py @@ -34,11 +34,13 @@ def run_job_complete(job_id): @task def run_task_manager(): + logger.debug("Running Tower task manager.") TaskManager().schedule() @task def run_fail_inconsistent_running_jobs(): + logger.debug("Running task to fail inconsistent running jobs.") with transaction.atomic(): # Lock try: diff --git a/awx/main/socket_queue.py b/awx/main/socket_queue.py deleted file mode 100644 index 40dba76366..0000000000 --- a/awx/main/socket_queue.py +++ /dev/null @@ -1,169 +0,0 @@ -# Copyright (c) 2015 Ansible, Inc. -# All Rights Reserved. - -import os - -import zmq - -from django.conf import settings - - -class Socket(object): - """An abstraction class implemented for a dumb OS socket. - - Intended to allow alteration of backend details in a single, consistent - way throughout the Tower application. - """ - def __init__(self, bucket, rw, debug=0, logger=None, nowait=False): - """Instantiate a Socket object, which uses ZeroMQ to actually perform - passing a message back and forth. - - Designed to be used as a context manager: - - with Socket('callbacks', 'w') as socket: - socket.publish({'message': 'foo bar baz'}) - - If listening for messages through a socket, the `listen` method - is a simple generator: - - with Socket('callbacks', 'r') as socket: - for message in socket.listen(): - [...] 
- """ - self._bucket = bucket - self._rw = { - 'r': zmq.REP, - 'w': zmq.REQ, - }[rw.lower()] - - self._connection_pid = None - self._context = None - self._socket = None - - self._debug = debug - self._logger = logger - self._nowait = nowait - - def __enter__(self): - self.connect() - return self - - def __exit__(self, *args, **kwargs): - self.close() - - @property - def is_connected(self): - if self._socket: - return True - return False - - @property - def port(self): - return { - 'callbacks': os.environ.get('CALLBACK_CONSUMER_PORT', - getattr(settings, 'CALLBACK_CONSUMER_PORT', 'tcp://127.0.0.1:5557')), - 'task_commands': settings.TASK_COMMAND_PORT, - 'websocket': settings.SOCKETIO_NOTIFICATION_PORT, - 'fact_cache': settings.FACT_CACHE_PORT, - }[self._bucket] - - def connect(self): - """Connect to ZeroMQ.""" - - # Make sure that we are clearing everything out if there is - # a problem; PID crossover can cause bad news. - active_pid = os.getpid() - if self._connection_pid is None: - self._connection_pid = active_pid - if self._connection_pid != active_pid: - self._context = None - self._socket = None - self._connection_pid = active_pid - - # If the port is an integer, convert it into tcp:// - port = self.port - if isinstance(port, int): - port = 'tcp://127.0.0.1:%d' % port - - # If the port is None, then this is an intentional dummy; - # honor this. (For testing.) - if not port: - return - - # Okay, create the connection. 
- if self._context is None: - self._context = zmq.Context() - self._socket = self._context.socket(self._rw) - if self._nowait: - self._socket.setsockopt(zmq.RCVTIMEO, 2000) - self._socket.setsockopt(zmq.LINGER, 1000) - if self._rw == zmq.REQ: - self._socket.connect(port) - else: - self._socket.bind(port) - - def close(self): - """Disconnect and tear down.""" - if self._socket: - self._socket.close() - self._socket = None - self._context = None - - def publish(self, message): - """Publish a message over the socket.""" - - # If the port is None, no-op. - if self.port is None: - return - - # If we are not connected, whine. - if not self.is_connected: - raise RuntimeError('Cannot publish a message when not connected ' - 'to the socket.') - - # If we are in the wrong mode, whine. - if self._rw != zmq.REQ: - raise RuntimeError('This socket is not opened for writing.') - - # If we are in debug mode; provide the PID. - if self._debug: - message.update({'pid': os.getpid(), - 'connection_pid': self._connection_pid}) - - # Send the message. - for retry in xrange(4): - try: - self._socket.send_json(message) - self._socket.recv() - break - except Exception as ex: - if self._logger: - self._logger.error('Publish Exception: %r; retry=%d', - ex, retry, exc_info=True) - if retry >= 3: - raise - - def listen(self): - """Retrieve a single message from the subcription channel - and return it. - """ - # If the port is None, no-op. - if self.port is None: - raise StopIteration - - # If we are not connected, whine. - if not self.is_connected: - raise RuntimeError('Cannot publish a message when not connected ' - 'to the socket.') - - # If we are in the wrong mode, whine. - if self._rw != zmq.REP: - raise RuntimeError('This socket is not opened for reading.') - - # Actually listen to the socket. 
- while True: - try: - message = self._socket.recv_json() - yield message - finally: - self._socket.send('1') diff --git a/awx/main/tasks.py b/awx/main/tasks.py index 5e78750f3a..216a4f4534 100644 --- a/awx/main/tasks.py +++ b/awx/main/tasks.py @@ -32,7 +32,8 @@ import pexpect # Celery from celery import Task, task -from celery.signals import celeryd_init +from celery.signals import celeryd_init, worker_ready +from celery import current_app # Django from django.conf import settings @@ -43,7 +44,6 @@ from django.core.mail import send_mail from django.contrib.auth.models import User from django.utils.translation import ugettext_lazy as _ from django.core.cache import cache -from django.utils.log import configure_logging # AWX from awx.main.constants import CLOUD_PROVIDERS @@ -76,7 +76,8 @@ logger = logging.getLogger('awx.main.tasks') def celery_startup(conf=None, **kwargs): # Re-init all schedules # NOTE: Rework this during the Rampart work - logger.info("Syncing Tower Schedules") + startup_logger = logging.getLogger('awx.main.tasks') + startup_logger.info("Syncing Tower Schedules") for sch in Schedule.objects.all(): try: sch.update_computed_fields() @@ -85,19 +86,58 @@ def celery_startup(conf=None, **kwargs): logger.error("Failed to rebuild schedule {}: {}".format(sch, e)) -@task(queue='broadcast_all') -def clear_cache_keys(cache_keys): - set_of_keys = set([key for key in cache_keys]) +def _setup_tower_logger(): + global logger + from django.utils.log import configure_logging + LOGGING_DICT = settings.LOGGING + if settings.LOG_AGGREGATOR_ENABLED: + LOGGING_DICT['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler' + LOGGING_DICT['handlers']['http_receiver']['async'] = False + if 'awx' in settings.LOG_AGGREGATOR_LOGGERS: + if 'http_receiver' not in LOGGING_DICT['loggers']['awx']['handlers']: + LOGGING_DICT['loggers']['awx']['handlers'] += ['http_receiver'] + configure_logging(settings.LOGGING_CONFIG, LOGGING_DICT) + logger = 
 logging.getLogger('awx.main.tasks')
+
+
+@worker_ready.connect
+def task_set_logger_pre_run(*args, **kwargs):
+    cache.close()
+    if settings.LOG_AGGREGATOR_ENABLED:
+        _setup_tower_logger()
+        logger.debug('Custom Tower logger configured for worker process.')
+
+
+def _uwsgi_reload():
+    # http://uwsgi-docs.readthedocs.io/en/latest/MasterFIFO.html#available-commands
+    logger.warn('Initiating uWSGI chain reload of server')
+    TRIGGER_CHAIN_RELOAD = 'c'
+    with open('/var/lib/awx/awxfifo', 'w') as awxfifo:
+        awxfifo.write(TRIGGER_CHAIN_RELOAD)
+
+
+def _reset_celery_logging():
+    # Worker logger reloaded, now send signal to restart pool
+    app = current_app._get_current_object()
+    app.control.broadcast('pool_restart', arguments={'reload': True},
+                          destination=['celery@{}'.format(settings.CLUSTER_HOST_ID)], reply=False)
+
+
+def _clear_cache_keys(set_of_keys):
     logger.debug('cache delete_many(%r)', set_of_keys)
     cache.delete_many(set_of_keys)
+
+
+@task(queue='broadcast_all')
+def process_cache_changes(cache_keys):
+    logger.warn('Processing cache changes, task args: {0.args!r} kwargs: {0.kwargs!r}'.format(
+        process_cache_changes.request))
+    set_of_keys = set([key for key in cache_keys])
+    _clear_cache_keys(set_of_keys)
     for setting_key in set_of_keys:
         if setting_key.startswith('LOG_AGGREGATOR_'):
-            LOGGING = settings.LOGGING
-            if settings.LOG_AGGREGATOR_ENABLED:
-                LOGGING['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSHandler'
-            else:
-                LOGGING['handlers']['http_receiver']['class'] = 'awx.main.utils.handlers.HTTPSNullHandler'
-            configure_logging(settings.LOGGING_CONFIG, LOGGING)
+            _uwsgi_reload()
+            _reset_celery_logging()
             break
@@ -107,8 +147,12 @@ def send_notifications(notification_list, job_id=None):
         raise TypeError("notification_list should be of type list")
     if job_id is not None:
         job_actual = UnifiedJob.objects.get(id=job_id)
-    for notification_id in notification_list:
-        notification = Notification.objects.get(id=notification_id)
+
+    notifications = Notification.objects.filter(id__in=notification_list)
+    if job_id is not None:
+        job_actual.notifications.add(*notifications)
+
+    for notification in notifications:
         try:
             sent = notification.notification_template.send(notification.subject, notification.body)
             notification.status = "successful"
@@ -119,12 +163,11 @@ def send_notifications(notification_list, job_id=None):
             notification.error = smart_str(e)
         finally:
             notification.save()
-        if job_id is not None:
-            job_actual.notifications.add(notification)


 @task(bind=True, queue='default')
 def run_administrative_checks(self):
+    logger.warn("Running administrative checks.")
     if not settings.TOWER_ADMIN_ALERTS:
         return
     validation_info = TaskEnhancer().validate_enhancements()
@@ -146,11 +189,13 @@ def run_administrative_checks(self):

 @task(bind=True, queue='default')
 def cleanup_authtokens(self):
+    logger.warn("Cleaning up expired authtokens.")
     AuthToken.objects.filter(expires__lt=now()).delete()


 @task(bind=True)
 def cluster_node_heartbeat(self):
+    logger.debug("Cluster node heartbeat task.")
     inst = Instance.objects.filter(hostname=settings.CLUSTER_HOST_ID)
     if inst.exists():
         inst = inst[0]
@@ -370,9 +415,12 @@ class BaseTask(Task):
                 data += '\n'
             # For credentials used with ssh-add, write to a named pipe which
             # will be read then closed, instead of leaving the SSH key on disk.
-            if name in ('credential', 'network_credential', 'scm_credential', 'ad_hoc_credential') and not ssh_too_old:
+            if name in ('credential', 'scm_credential', 'ad_hoc_credential') and not ssh_too_old:
                 path = os.path.join(kwargs.get('private_data_dir', tempfile.gettempdir()), name)
                 self.open_fifo_write(path, data)
+            # Ansible network modules do not yet support ssh-agent.
+            # Instead, ssh private key file is explicitly passed via an
+            # env variable.
             else:
                 handle, path = tempfile.mkstemp(dir=kwargs.get('private_data_dir', None))
                 f = os.fdopen(handle, 'w')
@@ -452,7 +500,7 @@ class BaseTask(Task):
         for k,v in env.items():
             if k in ('REST_API_URL', 'AWS_ACCESS_KEY', 'AWS_ACCESS_KEY_ID'):
                 continue
-            elif k.startswith('ANSIBLE_'):
+            elif k.startswith('ANSIBLE_') and not k.startswith('ANSIBLE_NET'):
                 continue
             elif hidden_re.search(k):
                 env[k] = HIDDEN_PASSWORD
@@ -616,7 +664,7 @@ class BaseTask(Task):
                     for child_proc in child_procs:
                         os.kill(child_proc.pid, signal.SIGKILL)
                     os.kill(main_proc.pid, signal.SIGKILL)
-                except TypeError:
+                except (TypeError, psutil.Error):
                     os.kill(job.pid, signal.SIGKILL)
             else:
                 os.kill(job.pid, signal.SIGTERM)
@@ -706,6 +754,11 @@ class BaseTask(Task):
                 stdout_handle.close()
             except Exception:
                 pass
+
+        instance = self.update_model(pk)
+        if instance.cancel_flag:
+            status = 'canceled'
+
         instance = self.update_model(pk, status=status, result_traceback=tb,
                                      output_replacements=output_replacements,
                                      **extra_update_fields)
@@ -807,8 +860,10 @@ class RunJob(BaseTask):
         env['REST_API_URL'] = settings.INTERNAL_API_URL
         env['REST_API_TOKEN'] = job.task_auth_token or ''
         env['TOWER_HOST'] = settings.TOWER_URL_BASE
+        env['MAX_EVENT_RES'] = str(settings.MAX_EVENT_RES_DATA)
         env['CALLBACK_QUEUE'] = settings.CALLBACK_QUEUE
         env['CALLBACK_CONNECTION'] = settings.BROKER_URL
+        env['CACHE'] = settings.CACHES['default']['LOCATION'] if 'LOCATION' in settings.CACHES['default'] else ''
         if getattr(settings, 'JOB_CALLBACK_DEBUG', False):
             env['JOB_CALLBACK_DEBUG'] = '2'
         elif settings.DEBUG:
@@ -865,10 +920,14 @@ class RunJob(BaseTask):
             env['ANSIBLE_NET_USERNAME'] = network_cred.username
             env['ANSIBLE_NET_PASSWORD'] = decrypt_field(network_cred, 'password')

+            ssh_keyfile = kwargs.get('private_data_files', {}).get('network_credential', '')
+            if ssh_keyfile:
+                env['ANSIBLE_NET_SSH_KEYFILE'] = ssh_keyfile
+
             authorize = network_cred.authorize
             env['ANSIBLE_NET_AUTHORIZE'] = unicode(int(authorize))
             if authorize:
-                env['ANSIBLE_NET_AUTHORIZE_PASSWORD'] = decrypt_field(network_cred, 'authorize_password')
+                env['ANSIBLE_NET_AUTH_PASS'] = decrypt_field(network_cred, 'authorize_password')

         # Set environment variables related to scan jobs
         if job.job_type == PERM_INVENTORY_SCAN:
@@ -984,21 +1043,23 @@ class RunJob(BaseTask):

     def get_password_prompts(self):
         d = super(RunJob, self).get_password_prompts()
-        d[re.compile(r'^Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
-        d[re.compile(r'^Bad passphrase, try again for .*:\s*?$', re.M)] = ''
-        d[re.compile(r'^sudo password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^SUDO password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^su password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^SU password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^PBRUN password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^pbrun password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^PFEXEC password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^pfexec password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^RUNAS password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^runas password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^SSH password:\s*?$', re.M)] = 'ssh_password'
-        d[re.compile(r'^Password:\s*?$', re.M)] = 'ssh_password'
-        d[re.compile(r'^Vault password:\s*?$', re.M)] = 'vault_password'
+        d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
+        d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
+        d[re.compile(r'sudo password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'SUDO password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'su password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'SU password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'PBRUN password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'pbrun password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'PFEXEC password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'pfexec password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'RUNAS password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'runas password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'DZDO password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'dzdo password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
+        d[re.compile(r'Password:\s*?$', re.M)] = 'ssh_password'
+        d[re.compile(r'Vault password:\s*?$', re.M)] = 'vault_password'
         return d

     def get_stdout_handle(self, instance):
@@ -1012,6 +1073,10 @@ class RunJob(BaseTask):
             def job_event_callback(event_data):
                 event_data.setdefault('job_id', instance.id)
+                if 'uuid' in event_data:
+                    cache_event = cache.get('ev-{}'.format(event_data['uuid']), None)
+                    if cache_event is not None:
+                        event_data.update(cache_event)
                 dispatcher.dispatch(event_data)
         else:
             def job_event_callback(event_data):
@@ -1027,8 +1092,15 @@ class RunJob(BaseTask):
         private_data_files = kwargs.get('private_data_files', {})
         if 'credential' in private_data_files:
             return private_data_files.get('credential')
-        elif 'network_credential' in private_data_files:
-            return private_data_files.get('network_credential')
+        '''
+        Note: Don't inject network ssh key data into ssh-agent for network
+        credentials because the ansible modules do not yet support it.
+        We will want to add back in support when/if Ansible network modules
+        support this.
+        '''
+        #elif 'network_credential' in private_data_files:
+        #    return private_data_files.get('network_credential')
+        return ''

     def should_use_proot(self, instance, **kwargs):
@@ -1042,17 +1114,18 @@ class RunJob(BaseTask):
             local_project_sync = job.project.create_project_update(launch_type="sync")
             local_project_sync.job_type = 'run'
             local_project_sync.save()
+            # save the associated project update before calling run() so that a
+            # cancel() call on the job can cancel the project update
+            job = self.update_model(job.pk, project_update=local_project_sync)
+
             project_update_task = local_project_sync._get_task_class()
             try:
                 project_update_task().run(local_project_sync.id)
-                job.scm_revision = job.project.scm_revision
-                job.project_update = local_project_sync
-                job.save()
+                job = self.update_model(job.pk, scm_revision=job.project.scm_revision)
             except Exception:
-                job.status = 'failed'
-                job.job_explanation = 'Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' % \
-                    ('project_update', local_project_sync.name, local_project_sync.id)
-                job.save()
+                job = self.update_model(job.pk, status='failed',
+                                        job_explanation=('Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' %
+                                                         ('project_update', local_project_sync.name, local_project_sync.id)))
                 raise

     def post_run_hook(self, job, status, **kwargs):
@@ -1222,12 +1295,12 @@ class RunProjectUpdate(BaseTask):

     def get_password_prompts(self):
         d = super(RunProjectUpdate, self).get_password_prompts()
-        d[re.compile(r'^Username for.*:\s*?$', re.M)] = 'scm_username'
-        d[re.compile(r'^Password for.*:\s*?$', re.M)] = 'scm_password'
-        d[re.compile(r'^Password:\s*?$', re.M)] = 'scm_password'
-        d[re.compile(r'^\S+?@\S+?\'s\s+?password:\s*?$', re.M)] = 'scm_password'
-        d[re.compile(r'^Enter passphrase for .*:\s*?$', re.M)] = 'scm_key_unlock'
-        d[re.compile(r'^Bad passphrase, try again for .*:\s*?$', re.M)] = ''
+        d[re.compile(r'Username for.*:\s*?$', re.M)] = 'scm_username'
+        d[re.compile(r'Password for.*:\s*?$', re.M)] = 'scm_password'
+        d[re.compile(r'Password:\s*?$', re.M)] = 'scm_password'
+        d[re.compile(r'\S+?@\S+?\'s\s+?password:\s*?$', re.M)] = 'scm_password'
+        d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'scm_key_unlock'
+        d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
         # FIXME: Configure whether we should auto accept host keys?
         d[re.compile(r'^Are you sure you want to continue connecting \(yes/no\)\?\s*?$', re.M)] = 'yes'
         return d
@@ -1338,10 +1411,22 @@ class RunInventoryUpdate(BaseTask):
                                            'password'))
         # Allow custom options to vmware inventory script.
         elif inventory_update.source == 'vmware':
-            section = 'defaults'
+            credential = inventory_update.credential
+
+            section = 'vmware'
             cp.add_section(section)
+            cp.set('vmware', 'cache_max_age', 0)
+
+            cp.set('vmware', 'username', credential.username)
+            cp.set('vmware', 'password', decrypt_field(credential, 'password'))
+            cp.set('vmware', 'server', credential.host)
+
             vmware_opts = dict(inventory_update.source_vars_dict.items())
-            vmware_opts.setdefault('guests_only', 'True')
+            if inventory_update.instance_filters:
+                vmware_opts.setdefault('host_filters', inventory_update.instance_filters)
+            if inventory_update.group_by:
+                vmware_opts.setdefault('groupby_patterns', inventory_update.groupby_patterns)
+
             for k,v in vmware_opts.items():
                 cp.set(section, k, unicode(v))
@@ -1362,7 +1447,9 @@ class RunInventoryUpdate(BaseTask):
             section = 'ansible'
             cp.add_section(section)
-            cp.set(section, 'group_patterns', '["{app}-{tier}-{color}", "{app}-{color}", "{app}", "{tier}"]')
+            cp.set(section, 'group_patterns', os.environ.get('SATELLITE6_GROUP_PATTERNS', []))
+            cp.set(section, 'want_facts', True)
+            cp.set(section, 'group_prefix', os.environ.get('SATELLITE6_GROUP_PREFIX', 'foreman_'))

             section = 'cache'
             cp.add_section(section)
@@ -1459,10 +1546,7 @@ class RunInventoryUpdate(BaseTask):
             # complain about not being able to determine its version number.
             env['PBR_VERSION'] = '0.5.21'
         elif inventory_update.source == 'vmware':
-            env['VMWARE_INI'] = cloud_credential
-            env['VMWARE_HOST'] = passwords.get('source_host', '')
-            env['VMWARE_USER'] = passwords.get('source_username', '')
-            env['VMWARE_PASSWORD'] = passwords.get('source_password', '')
+            env['VMWARE_INI_PATH'] = cloud_credential
         elif inventory_update.source == 'azure':
            env['AZURE_SUBSCRIPTION_ID'] = passwords.get('source_username', '')
            env['AZURE_CERT_PATH'] = cloud_credential
@@ -1647,6 +1731,7 @@ class RunAdHocCommand(BaseTask):
         env['CALLBACK_QUEUE'] = settings.CALLBACK_QUEUE
         env['CALLBACK_CONNECTION'] = settings.BROKER_URL
         env['ANSIBLE_SFTP_BATCH_MODE'] = 'False'
+        env['CACHE'] = settings.CACHES['default']['LOCATION'] if 'LOCATION' in settings.CACHES['default'] else ''
         if getattr(settings, 'JOB_CALLBACK_DEBUG', False):
             env['JOB_CALLBACK_DEBUG'] = '2'
         elif settings.DEBUG:
@@ -1722,20 +1807,22 @@ class RunAdHocCommand(BaseTask):

     def get_password_prompts(self):
         d = super(RunAdHocCommand, self).get_password_prompts()
-        d[re.compile(r'^Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
-        d[re.compile(r'^Bad passphrase, try again for .*:\s*?$', re.M)] = ''
-        d[re.compile(r'^sudo password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^SUDO password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^su password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^SU password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^PBRUN password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^pbrun password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^PFEXEC password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^pfexec password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^RUNAS password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^runas password.*:\s*?$', re.M)] = 'become_password'
-        d[re.compile(r'^SSH password:\s*?$', re.M)] = 'ssh_password'
-        d[re.compile(r'^Password:\s*?$', re.M)] = 'ssh_password'
+        d[re.compile(r'Enter passphrase for .*:\s*?$', re.M)] = 'ssh_key_unlock'
+        d[re.compile(r'Bad passphrase, try again for .*:\s*?$', re.M)] = ''
+        d[re.compile(r'sudo password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'SUDO password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'su password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'SU password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'PBRUN password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'pbrun password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'PFEXEC password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'pfexec password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'RUNAS password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'runas password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'DZDO password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'dzdo password.*:\s*?$', re.M)] = 'become_password'
+        d[re.compile(r'SSH password:\s*?$', re.M)] = 'ssh_password'
+        d[re.compile(r'Password:\s*?$', re.M)] = 'ssh_password'
         return d

     def get_stdout_handle(self, instance):
@@ -1749,6 +1836,10 @@ class RunAdHocCommand(BaseTask):
             def ad_hoc_command_event_callback(event_data):
                 event_data.setdefault('ad_hoc_command_id', instance.id)
+                if 'uuid' in event_data:
+                    cache_event = cache.get('ev-{}'.format(event_data['uuid']), None)
+                    if cache_event is not None:
+                        event_data.update(cache_event)
                 dispatcher.dispatch(event_data)
         else:
             def ad_hoc_command_event_callback(event_data):
@@ -1788,7 +1879,9 @@ class RunSystemJob(BaseTask):
             if 'days' in json_vars and system_job.job_type != 'cleanup_facts':
                 args.extend(['--days', str(json_vars.get('days', 60))])
             if system_job.job_type == 'cleanup_jobs':
-                args.extend(['--jobs', '--project-updates', '--inventory-updates', '--management-jobs', '--ad-hoc-commands'])
+                args.extend(['--jobs', '--project-updates', '--inventory-updates',
+                             '--management-jobs', '--ad-hoc-commands', '--workflow-jobs',
+                             '--notifications'])
             if system_job.job_type == 'cleanup_facts':
                 if 'older_than' in json_vars:
                     args.extend(['--older_than', str(json_vars['older_than'])])
diff --git a/awx/main/tests/functional/api/test_create_attach_views.py b/awx/main/tests/functional/api/test_create_attach_views.py
index 48f3aadc7b..b80cb4fa2c 100644
--- a/awx/main/tests/functional/api/test_create_attach_views.py
+++ b/awx/main/tests/functional/api/test_create_attach_views.py
@@ -45,3 +45,18 @@ def test_role_team_view_access(rando, team, inventory, mocker, post):
     mock_access.assert_called_once_with(
         inventory.admin_role, team, 'member_role.parents', data,
         skip_sub_obj_read_check=False)
+
+
+@pytest.mark.django_db
+def test_org_associate_with_junk_data(rando, admin_user, organization, post):
+    """
+    Assure that post-hoc enforcement of auditor role
+    will turn off if the action is an association
+    """
+    user_data = {'is_system_auditor': True, 'id': rando.pk}
+    post(url=reverse('api:organization_users_list', args=(organization.pk,)),
+         data=user_data, expect=204, user=admin_user)
+    # assure user is now an org member
+    assert rando in organization.member_role
+    # assure that this did not also make them a system auditor
+    assert not rando.is_system_auditor
diff --git a/awx/main/tests/functional/api/test_credential.py b/awx/main/tests/functional/api/test_credential.py
index bd6cd25841..8f596cdac9 100644
--- a/awx/main/tests/functional/api/test_credential.py
+++ b/awx/main/tests/functional/api/test_credential.py
@@ -339,39 +339,6 @@ def test_list_created_org_credentials(post, get, organization, org_admin, org_me
     assert response.data['count'] == 0


-@pytest.mark.django_db
-def test_cant_change_organization(patch, credential, organization, org_admin):
-    credential.organization = organization
-    credential.save()
-
-    response = patch(reverse('api:credential_detail', args=(credential.id,)), {
-        'name': 'Some new name',
-    }, org_admin)
-    assert response.status_code == 200
-
-    response = patch(reverse('api:credential_detail', args=(credential.id,)), {
-        'name': 'Some new name2',
-        'organization': organization.id,  # fine for it to be the same
-    }, org_admin)
-    assert response.status_code == 200
-
-    response = patch(reverse('api:credential_detail', args=(credential.id,)), {
-        'name': 'Some new name3',
-        'organization': None
-    }, org_admin)
-    assert response.status_code == 403
-
-
-@pytest.mark.django_db
-def test_cant_add_organization(patch, credential, organization, org_admin):
-    assert credential.organization is None
-    response = patch(reverse('api:credential_detail', args=(credential.id,)), {
-        'name': 'Some new name',
-        'organization': organization.id
-    }, org_admin)
-    assert response.status_code == 403
-
-
 #
 # Openstack Credentials
 #
diff --git a/awx/main/tests/functional/api/test_job_template.py b/awx/main/tests/functional/api/test_job_template.py
index 4983aaac69..ec4286176e 100644
--- a/awx/main/tests/functional/api/test_job_template.py
+++ b/awx/main/tests/functional/api/test_job_template.py
@@ -65,6 +65,17 @@ def test_edit_sensitive_fields(patch, job_template_factory, alice, grant_project
     }, alice, expect=expect)


+@pytest.mark.django_db
+def test_reject_dict_extra_vars_patch(patch, job_template_factory, admin_user):
+    # Expect a string for extra_vars, raise 400 in this case that would
+    # otherwise have been saved incorrectly
+    jt = job_template_factory(
+        'jt', organization='org1', project='prj', inventory='inv', credential='cred'
+    ).job_template
+    patch(reverse('api:job_template_detail', args=(jt.id,)),
+          {'extra_vars': {'foo': 5}}, admin_user, expect=400)
+
+
 @pytest.mark.django_db
 def test_edit_playbook(patch, job_template_factory, alice):
     objs = job_template_factory('jt', organization='org1', project='prj', inventory='inv', credential='cred')
diff --git a/awx/main/tests/functional/api/test_settings.py b/awx/main/tests/functional/api/test_settings.py
index 94aa316aea..5753841bd2 100644
--- a/awx/main/tests/functional/api/test_settings.py
+++ b/awx/main/tests/functional/api/test_settings.py
@@ -44,6 +44,27 @@ def test_license_cannot_be_removed_via_system_settings(mock_no_license_file, get
     assert response.data['LICENSE']


+@pytest.mark.django_db
+def test_jobs_settings(get, put, patch, delete, admin):
+    url = reverse('api:setting_singleton_detail', args=('jobs',))
+    get(url, user=admin, expect=200)
+    delete(url, user=admin, expect=204)
+    response = get(url, user=admin, expect=200)
+    data = dict(response.data.items())
+    put(url, user=admin, data=data, expect=200)
+    patch(url, user=admin, data={'AWX_PROOT_HIDE_PATHS': ['/home']}, expect=200)
+    response = get(url, user=admin, expect=200)
+    assert response.data['AWX_PROOT_HIDE_PATHS'] == ['/home']
+    data.pop('AWX_PROOT_HIDE_PATHS')
+    data.pop('AWX_PROOT_SHOW_PATHS')
+    data.pop('AWX_ANSIBLE_CALLBACK_PLUGINS')
+    put(url, user=admin, data=data, expect=200)
+    response = get(url, user=admin, expect=200)
+    assert response.data['AWX_PROOT_HIDE_PATHS'] == []
+    assert response.data['AWX_PROOT_SHOW_PATHS'] == []
+    assert response.data['AWX_ANSIBLE_CALLBACK_PLUGINS'] == []
+
+
 @pytest.mark.django_db
 def test_ldap_settings(get, put, patch, delete, admin, enterprise_license):
     url = reverse('api:setting_singleton_detail', args=('ldap',))
@@ -65,6 +86,26 @@ def test_ldap_settings(get, put, patch, delete, admin, enterprise_license):
     patch(url, user=admin, data={'AUTH_LDAP_SERVER_URI': 'ldap://ldap.example.com, ldap://ldap2.example.com'}, expect=200)


+@pytest.mark.parametrize('setting', [
+    'AUTH_LDAP_USER_DN_TEMPLATE',
+    'AUTH_LDAP_REQUIRE_GROUP',
+    'AUTH_LDAP_DENY_GROUP',
+])
+@pytest.mark.django_db
+def test_empty_ldap_dn(get, put, patch, delete, admin, enterprise_license,
+                       setting):
+    url = reverse('api:setting_singleton_detail', args=('ldap',))
+    Setting.objects.create(key='LICENSE', value=enterprise_license)
+
+    patch(url, user=admin, data={setting: ''}, expect=200)
+    resp = get(url, user=admin, expect=200)
+    assert resp.data[setting] is None
+
+    patch(url, user=admin, data={setting: None}, expect=200)
+    resp = get(url, user=admin, expect=200)
+    assert resp.data[setting] is None
+
+
 @pytest.mark.django_db
 def test_radius_settings(get, put, patch, delete, admin, enterprise_license, settings):
     url = reverse('api:setting_singleton_detail', args=('radius',))
diff --git a/awx/main/tests/functional/api/test_user.py b/awx/main/tests/functional/api/test_user.py
index fb4c7a33e0..e3b7b4145c 100644
--- a/awx/main/tests/functional/api/test_user.py
+++ b/awx/main/tests/functional/api/test_user.py
@@ -7,65 +7,41 @@ from django.core.urlresolvers import reverse
 # user creation
 #

+EXAMPLE_USER_DATA = {
+    "username": "affable",
+    "first_name": "a",
+    "last_name": "a",
+    "email": "a@a.com",
+    "is_superuser": False,
+    "password": "r$TyKiOCb#ED"
+}
+

 @pytest.mark.django_db
 def test_user_create(post, admin):
-    response = post(reverse('api:user_list'), {
-        "username": "affable",
-        "first_name": "a",
-        "last_name": "a",
-        "email": "a@a.com",
-        "is_superuser": False,
-        "password": "fo0m4nchU"
-    }, admin)
+    response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
     assert response.status_code == 201
+    assert not response.data['is_superuser']
+    assert not response.data['is_system_auditor']


 @pytest.mark.django_db
 def test_fail_double_create_user(post, admin):
-    response = post(reverse('api:user_list'), {
-        "username": "affable",
-        "first_name": "a",
-        "last_name": "a",
-        "email": "a@a.com",
-        "is_superuser": False,
-        "password": "fo0m4nchU"
-    }, admin)
+    response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
     assert response.status_code == 201
-    response = post(reverse('api:user_list'), {
-        "username": "affable",
-        "first_name": "a",
-        "last_name": "a",
-        "email": "a@a.com",
-        "is_superuser": False,
-        "password": "fo0m4nchU"
-    }, admin)
+    response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
     assert response.status_code == 400


 @pytest.mark.django_db
 def test_create_delete_create_user(post, delete, admin):
-    response = post(reverse('api:user_list'), {
-        "username": "affable",
-        "first_name": "a",
-        "last_name": "a",
-        "email": "a@a.com",
-        "is_superuser": False,
-        "password": "fo0m4nchU"
-    }, admin)
+    response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
     assert response.status_code == 201
     response = delete(reverse('api:user_detail', args=(response.data['id'],)), admin)
     assert response.status_code == 204
-    response = post(reverse('api:user_list'), {
-        "username": "affable",
-        "first_name": "a",
-        "last_name": "a",
-        "email": "a@a.com",
-        "is_superuser": False,
-        "password": "fo0m4nchU"
-    }, admin)
+    response = post(reverse('api:user_list'), EXAMPLE_USER_DATA, admin)
     print(response.data)
     assert response.status_code == 201
diff --git a/awx/main/tests/functional/conftest.py b/awx/main/tests/functional/conftest.py
index 05f1941fab..165dfed0d6 100644
--- a/awx/main/tests/functional/conftest.py
+++ b/awx/main/tests/functional/conftest.py
@@ -41,7 +41,7 @@ from awx.main.models.organization import (
     Permission,
     Team,
 )
-
+from awx.main.models.rbac import Role
 from awx.main.models.notifications import (
     NotificationTemplate,
     Notification
@@ -262,6 +262,13 @@ def admin(user):
     return user('admin', True)


+@pytest.fixture
+def system_auditor(user):
+    u = user(False)
+    Role.singleton('system_auditor').members.add(u)
+    return u
+
+
 @pytest.fixture
 def alice(user):
     return user('alice', False)
diff --git a/awx/main/tests/functional/test_projects.py b/awx/main/tests/functional/test_projects.py
index 8d93fcf6d9..8b66c396bd 100644
--- a/awx/main/tests/functional/test_projects.py
+++ b/awx/main/tests/functional/test_projects.py
@@ -1,3 +1,5 @@
+# -*- coding: utf-8 -*-
+
 import mock # noqa
 import pytest

@@ -22,6 +24,84 @@ def team_project_list(organization_factory):
     return objects


+@pytest.mark.django_db
+def test_user_project_paged_list(get, organization_factory):
+    'Test project listing that spans multiple pages'
+
+    # 3 total projects, 1 per page, 3 pages
+    objects = organization_factory(
+        'org1',
+        projects=['project-%s' % i for i in range(3)],
+        users=['alice'],
+        roles=['project-%s.admin_role:alice' % i for i in range(3)],
+    )
+
+    # first page has first project and no previous page
+    pk = objects.users.alice.pk
+    url = reverse('api:user_projects_list', args=(pk,))
+    results = get(url, objects.users.alice, QUERY_STRING='page_size=1').data
+    assert results['count'] == 3
+    assert len(results['results']) == 1
+    assert results['previous'] is None
+    assert results['next'] == (
+        '/api/v1/users/%s/projects/?page=2&page_size=1' % pk
+    )
+
+    # second page has one more, a previous and next page
+    results = get(url, objects.users.alice,
+                  QUERY_STRING='page=2&page_size=1').data
+    assert len(results['results']) == 1
+    assert results['previous'] == (
+        '/api/v1/users/%s/projects/?page=1&page_size=1' % pk
+    )
+    assert results['next'] == (
+        '/api/v1/users/%s/projects/?page=3&page_size=1' % pk
+    )
+
+    # third page has last project and a previous page
+    results = get(url, objects.users.alice,
+                  QUERY_STRING='page=3&page_size=1').data
+    assert len(results['results']) == 1
+    assert results['previous'] == (
+        '/api/v1/users/%s/projects/?page=2&page_size=1' % pk
+    )
+    assert results['next'] is None
+
+
+@pytest.mark.django_db
+def test_user_project_paged_list_with_unicode(get, organization_factory):
+    'Test project listing that contains unicode chars in the next/prev links'
+
+    # Create 2 projects that contain a "cloud" unicode character, make sure we
+    # can search it and properly generate next/previous page links
+    objects = organization_factory(
+        'org1',
+        projects=['project-☁-1','project-☁-2'],
+        users=['alice'],
+        roles=['project-☁-1.admin_role:alice','project-☁-2.admin_role:alice'],
+    )
+    pk = objects.users.alice.pk
+    url = reverse('api:user_projects_list', args=(pk,))
+
+    # first on first page, next page link contains unicode char
+    results = get(url, objects.users.alice,
+                  QUERY_STRING='page_size=1&search=%E2%98%81').data
+    assert results['count'] == 2
+    assert len(results['results']) == 1
+    assert results['next'] == (
+        '/api/v1/users/%s/projects/?page=2&page_size=1&search=%%E2%%98%%81' % pk  # noqa
+    )
+
+    # second project on second page, previous page link contains unicode char
+    results = get(url, objects.users.alice,
+                  QUERY_STRING='page=2&page_size=1&search=%E2%98%81').data
+    assert results['count'] == 2
+    assert len(results['results']) == 1
+    assert results['previous'] == (
+        '/api/v1/users/%s/projects/?page=1&page_size=1&search=%%E2%%98%%81' % pk  # noqa
+    )
+
+
 @pytest.mark.django_db
 def test_user_project_list(get, organization_factory):
     'List of projects a user has access to, filtered by projects you can also see'
diff --git a/awx/main/tests/functional/test_rbac_job_templates.py b/awx/main/tests/functional/test_rbac_job_templates.py
index ac545dafee..13e0da8e8c 100644
--- a/awx/main/tests/functional/test_rbac_job_templates.py
+++ b/awx/main/tests/functional/test_rbac_job_templates.py
@@ -259,22 +259,37 @@ def test_associate_label(label, user, job_template):


 @pytest.mark.django_db
-def test_move_schedule_to_JT_no_access(job_template, rando):
-    schedule = Schedule.objects.create(
-        unified_job_template=job_template,
-        rrule='DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1')
-    job_template.admin_role.members.add(rando)
-    jt2 = JobTemplate.objects.create(name="other-jt")
-    access = ScheduleAccess(rando)
-    assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))
+class TestJobTemplateSchedules:
+
+    rrule = 'DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1'
+    rrule2 = 'DTSTART:20151117T050000Z RRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=1'
+
+    @pytest.fixture
+    def jt2(self):
+        return JobTemplate.objects.create(name="other-jt")
+
+    def test_move_schedule_to_JT_no_access(self, job_template, rando, jt2):
+        schedule = Schedule.objects.create(unified_job_template=job_template, rrule=self.rrule)
+        job_template.admin_role.members.add(rando)
+        access = ScheduleAccess(rando)
+        assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))


-@pytest.mark.django_db
-def test_move_schedule_from_JT_no_access(job_template, rando):
-    schedule = Schedule.objects.create(
-        unified_job_template=job_template,
-        rrule='DTSTART:20151117T050000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1')
-    jt2 = JobTemplate.objects.create(name="other-jt")
-    jt2.admin_role.members.add(rando)
-    access = ScheduleAccess(rando)
-    assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))
+    def test_move_schedule_from_JT_no_access(self, job_template, rando, jt2):
+        schedule = Schedule.objects.create(unified_job_template=job_template, rrule=self.rrule)
+        jt2.admin_role.members.add(rando)
+        access = ScheduleAccess(rando)
+        assert not access.can_change(schedule, data=dict(unified_job_template=jt2.pk))
+
+
+    def test_can_create_schedule_with_execute(self, job_template, rando):
+        job_template.execute_role.members.add(rando)
+        access = ScheduleAccess(rando)
+        assert access.can_add({'unified_job_template': job_template})
+
+
+    def test_can_modify_ones_own_schedule(self, job_template, rando):
+        job_template.execute_role.members.add(rando)
+        schedule = Schedule.objects.create(unified_job_template=job_template, rrule=self.rrule, created_by=rando)
+        access = ScheduleAccess(rando)
+        assert access.can_change(schedule, {'rrule': self.rrule2})
diff --git a/awx/main/tests/functional/test_rbac_user.py b/awx/main/tests/functional/test_rbac_user.py
index 0b43ce2f1c..c7eaa8c0e9 100644
--- a/awx/main/tests/functional/test_rbac_user.py
+++ b/awx/main/tests/functional/test_rbac_user.py
@@ -9,7 +9,7 @@ from awx.main.models import Role, User, Organization, Inventory


 @pytest.mark.django_db
-class TestSysAuditor(TransactionTestCase):
+class TestSysAuditorTransactional(TransactionTestCase):
     def rando(self):
         return User.objects.create(username='rando', password='rando', email='rando@com.com')

@@ -41,6 +41,10 @@ class TestSysAuditor(TransactionTestCase):
         assert not rando.is_system_auditor


+@pytest.mark.django_db
+def test_system_auditor_is_system_auditor(system_auditor):
+    assert system_auditor.is_system_auditor
+

 @pytest.mark.django_db
 def test_user_admin(user_project, project, user):
diff --git a/awx/main/tests/functional/test_rbac_workflow.py b/awx/main/tests/functional/test_rbac_workflow.py
index 80eae5af8b..8d363305d5 100644
--- a/awx/main/tests/functional/test_rbac_workflow.py
+++ b/awx/main/tests/functional/test_rbac_workflow.py
@@ -51,19 +51,50 @@ class TestWorkflowJobTemplateAccess:

 @pytest.mark.django_db
 class TestWorkflowJobTemplateNodeAccess:

-    def test_jt_access_to_edit(self, wfjt_node, org_admin):
+    def test_no_jt_access_to_edit(self, wfjt_node, org_admin):
+        # without access to the related job template, admin to the WFJT can
+        # not change the prompted parameters
         access = WorkflowJobTemplateNodeAccess(org_admin)
         assert not access.can_change(wfjt_node, {'job_type': 'scan'})

+    def test_add_JT_no_start_perm(self, wfjt, job_template, rando):
+        wfjt.admin_role.members.add(rando)
+        access = WorkflowJobTemplateNodeAccess(rando)
+        job_template.read_role.members.add(rando)
+        assert not access.can_add({
+            'workflow_job_template': wfjt,
+            'unified_job_template': job_template})
+
+    def test_add_node_with_minimum_permissions(self, wfjt, job_template, inventory, rando):
+        wfjt.admin_role.members.add(rando)
+        access = WorkflowJobTemplateNodeAccess(rando)
+        job_template.execute_role.members.add(rando)
+        inventory.use_role.members.add(rando)
+        assert access.can_add({
+            'workflow_job_template': wfjt,
+            'inventory': inventory,
+            'unified_job_template': job_template})
+
+    def test_remove_unwanted_foreign_node(self, wfjt_node, job_template, rando):
+        wfjt = wfjt_node.workflow_job_template
+        wfjt.admin_role.members.add(rando)
+        wfjt_node.unified_job_template = job_template
+        access = WorkflowJobTemplateNodeAccess(rando)
+        assert access.can_delete(wfjt_node)
+

 @pytest.mark.django_db
 class TestWorkflowJobAccess:

-    def test_wfjt_admin_delete(self, wfjt, workflow_job, rando):
-        wfjt.admin_role.members.add(rando)
-        access = WorkflowJobAccess(rando)
+    def test_org_admin_can_delete_workflow_job(self, workflow_job, org_admin):
+        access = WorkflowJobAccess(org_admin)
         assert access.can_delete(workflow_job)

+    def test_wfjt_admin_can_delete_workflow_job(self, workflow_job, rando):
+        workflow_job.workflow_job_template.admin_role.members.add(rando)
+        access = WorkflowJobAccess(rando)
+        assert not access.can_delete(workflow_job)
+
     def test_cancel_your_own_job(self, wfjt, workflow_job, rando):
         wfjt.execute_role.members.add(rando)
         workflow_job.created_by = rando
@@ -71,6 +102,19 @@ class TestWorkflowJobAccess:
         access = WorkflowJobAccess(rando)
         assert access.can_cancel(workflow_job)

+    def test_copy_permissions_org_admin(self, wfjt, org_admin, org_member):
+        admin_access = WorkflowJobTemplateAccess(org_admin)
+        assert admin_access.can_copy(wfjt)
+
+    def test_copy_permissions_user(self, wfjt, org_admin, org_member):
+        '''
+        Only org admins are able to add WFJTs, only org admins
+        are able to copy them
+        '''
+        wfjt.admin_role.members.add(org_member)
+        member_access = WorkflowJobTemplateAccess(org_member)
+        assert not member_access.can_copy(wfjt)
+
     def test_workflow_copy_warnings_inv(self, wfjt, rando, inventory):
         '''
         The user `rando` does not have access to the prompted inventory in a
@@ -80,13 +124,11 @@ class TestWorkflowJobAccess:
         access = WorkflowJobTemplateAccess(rando, save_messages=True)
         assert not access.can_copy(wfjt)
         warnings = access.messages
-        assert 1 in warnings
-        assert 'inventory' in warnings[1]
+        assert 'inventories_unable_to_copy' in warnings

     def test_workflow_copy_warnings_jt(self, wfjt, rando, job_template):
         wfjt.workflow_job_template_nodes.create(unified_job_template=job_template)
         access = WorkflowJobTemplateAccess(rando, save_messages=True)
         assert not access.can_copy(wfjt)
         warnings = access.messages
-        assert 1 in warnings
-        assert 'unified_job_template' in warnings[1]
+        assert 'templates_unable_to_copy' in warnings
diff --git a/awx/main/tests/unit/api/decorator_paginated.py b/awx/main/tests/unit/api/decorator_paginated.py
deleted file mode 100644
index 71344e92ba..0000000000
--- a/awx/main/tests/unit/api/decorator_paginated.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# Copyright (c) 2015 Ansible, Inc.
-# All Rights Reserved.
-
-import json
-
-from django.test import TestCase
-
-from rest_framework.permissions import AllowAny
-from rest_framework.test import APIRequestFactory
-from rest_framework.views import APIView
-
-from awx.api.utils.decorators import paginated
-
-
-class PaginatedDecoratorTests(TestCase):
-    """A set of tests for ensuring that the "paginated" decorator works
-    in the way we expect.
-    """
-    def setUp(self):
-        self.rf = APIRequestFactory()
-
-        # Define an uninteresting view that we can use to test
-        # that the paginator wraps in the way we expect.
-        class View(APIView):
-            permission_classes = (AllowAny,)
-
-            @paginated
-            def get(self, request, limit, ordering, offset):
-                return ['a', 'b', 'c', 'd', 'e'], 26, None
-        self.view = View.as_view()
-
-    def test_implicit_first_page(self):
-        """Establish that if we get an implicit request for the first page
-        (e.g. no page provided), that it is returned appropriately.
-        """
-        # Create a request, and run the paginated function.
-        request = self.rf.get('/dummy/', {'page_size': 5})
-        response = self.view(request)
-
-        # Ensure the response looks like what it should.
-        r = json.loads(response.rendered_content)
-        self.assertEqual(r['count'], 26)
-        self.assertIn(r['next'],
-                      (u'/dummy/?page=2&page_size=5',
-                       u'/dummy/?page_size=5&page=2'))
-        self.assertEqual(r['previous'], None)
-        self.assertEqual(r['results'], ['a', 'b', 'c', 'd', 'e'])
-
-    def test_mid_page(self):
-        """Establish that if we get a request for a page in the middle, that
-        the paginator causes next and prev to be set appropriately.
-        """
-        # Create a request, and run the paginated function.
-        request = self.rf.get('/dummy/', {'page': 3, 'page_size': 5})
-        response = self.view(request)
-
-        # Ensure the response looks like what it should.
-        r = json.loads(response.rendered_content)
-        self.assertEqual(r['count'], 26)
-        self.assertIn(r['next'],
-                      (u'/dummy/?page=4&page_size=5',
-                       u'/dummy/?page_size=5&page=4'))
-        self.assertIn(r['previous'],
-                      (u'/dummy/?page=2&page_size=5',
-                       u'/dummy/?page_size=5&page=2'))
-        self.assertEqual(r['results'], ['a', 'b', 'c', 'd', 'e'])
-
-    def test_last_page(self):
-        """Establish that if we get a request for the last page, that the
-        paginator picks up on it and sets `next` to None.
-        """
-        # Create a request, and run the paginated function.
-        request = self.rf.get('/dummy/', {'page': 6, 'page_size': 5})
-        response = self.view(request)
-
-        # Ensure the response looks like what it should.
-        r = json.loads(response.rendered_content)
-        self.assertEqual(r['count'], 26)
-        self.assertEqual(r['next'], None)
-        self.assertIn(r['previous'],
-                      (u'/dummy/?page=5&page_size=5',
-                       u'/dummy/?page_size=5&page=5'))
-        self.assertEqual(r['results'], ['a', 'b', 'c', 'd', 'e'])
diff --git a/awx/main/tests/unit/api/serializers/test_job_serializers.py b/awx/main/tests/unit/api/serializers/test_job_serializers.py
index 603b72892c..fc1ae86a8d 100644
--- a/awx/main/tests/unit/api/serializers/test_job_serializers.py
+++ b/awx/main/tests/unit/api/serializers/test_job_serializers.py
@@ -53,8 +53,6 @@ def jobs(mocker):
 class TestJobSerializerGetRelated():
     @pytest.mark.parametrize("related_resource_name", [
         'job_events',
-        'job_plays',
-        'job_tasks',
         'relaunch',
         'labels',
     ])
diff --git a/awx/main/tests/unit/api/serializers/test_workflow_serializers.py b/awx/main/tests/unit/api/serializers/test_workflow_serializers.py
index b444531206..b8697db71f 100644
--- a/awx/main/tests/unit/api/serializers/test_workflow_serializers.py
+++ b/awx/main/tests/unit/api/serializers/test_workflow_serializers.py
@@
-125,6 +125,7 @@ class TestWorkflowJobTemplateNodeSerializerCharPrompts(): serializer = WorkflowJobTemplateNodeSerializer() node = WorkflowJobTemplateNode(pk=1) node.char_prompts = {'limit': 'webservers'} + serializer.instance = node view = FakeView(node) view.request = FakeRequest() view.request.method = "PATCH" diff --git a/awx/main/tests/unit/api/test_generics.py b/awx/main/tests/unit/api/test_generics.py index b10b1c6c54..579440b201 100644 --- a/awx/main/tests/unit/api/test_generics.py +++ b/awx/main/tests/unit/api/test_generics.py @@ -6,9 +6,16 @@ import mock # DRF from rest_framework import status from rest_framework.response import Response +from rest_framework.exceptions import PermissionDenied # AWX -from awx.api.generics import ParentMixin, SubListCreateAttachDetachAPIView, DeleteLastUnattachLabelMixin +from awx.api.generics import ( + ParentMixin, + SubListCreateAttachDetachAPIView, + DeleteLastUnattachLabelMixin, + ResourceAccessList +) +from awx.main.models import Organization @pytest.fixture @@ -29,6 +36,11 @@ def mock_response_new(mocker): return m +@pytest.fixture +def mock_organization(): + return Organization(pk=4, name="Unsaved Org") + + @pytest.fixture def parent_relationship_factory(mocker): def rf(serializer_class, relationship_name, relationship_value=mocker.Mock()): @@ -178,3 +190,37 @@ class TestParentMixin: get_object_or_404.assert_called_with(parent_mixin.parent_model, **parent_mixin.kwargs) assert get_object_or_404.return_value == return_value + + +class TestResourceAccessList: + + def mock_request(self): + return mock.MagicMock( + user=mock.MagicMock( + is_anonymous=mock.MagicMock(return_value=False), + is_superuser=False + ), method='GET') + + + def mock_view(self): + view = ResourceAccessList() + view.parent_model = Organization + view.kwargs = {'pk': 4} + return view + + + def test_parent_access_check_failed(self, mocker, mock_organization): + with mocker.patch('awx.api.permissions.get_object_or_400', return_value=mock_organization): 
+ mock_access = mocker.MagicMock(__name__='for logger', return_value=False) + with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access): + with pytest.raises(PermissionDenied): + self.mock_view().check_permissions(self.mock_request()) + mock_access.assert_called_once_with(mock_organization) + + + def test_parent_access_check_worked(self, mocker, mock_organization): + with mocker.patch('awx.api.permissions.get_object_or_400', return_value=mock_organization): + mock_access = mocker.MagicMock(__name__='for logger', return_value=True) + with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access): + self.mock_view().check_permissions(self.mock_request()) + mock_access.assert_called_once_with(mock_organization) diff --git a/awx/main/tests/unit/models/test_workflow_unit.py b/awx/main/tests/unit/models/test_workflow_unit.py index 6fa494e49d..ce288dced1 100644 --- a/awx/main/tests/unit/models/test_workflow_unit.py +++ b/awx/main/tests/unit/models/test_workflow_unit.py @@ -239,10 +239,3 @@ class TestWorkflowWarnings: assert 'job_type' in job_node_with_prompts.get_prompts_warnings()['ignored'] assert 'inventory' in job_node_with_prompts.get_prompts_warnings()['ignored'] assert len(job_node_with_prompts.get_prompts_warnings()['ignored']) == 2 - - def test_warn_missing_fields(self, job_node_no_prompts): - job_node_no_prompts.inventory = None - assert 'missing' in job_node_no_prompts.get_prompts_warnings() - assert 'inventory' in job_node_no_prompts.get_prompts_warnings()['missing'] - assert 'credential' in job_node_no_prompts.get_prompts_warnings()['missing'] - assert len(job_node_no_prompts.get_prompts_warnings()['missing']) == 2 diff --git a/awx/main/tests/unit/scheduler/conftest.py b/awx/main/tests/unit/scheduler/conftest.py index f04ba12a0b..40e221d0cc 100644 --- a/awx/main/tests/unit/scheduler/conftest.py +++ b/awx/main/tests/unit/scheduler/conftest.py @@ -36,6 +36,7 @@ def scheduler_factory(mocker, epoch): def no_create_project_update(task): raise 
RuntimeError("create_project_update should not be called") + mocker.patch.object(sched, 'capture_chain_failure_dependencies') mocker.patch.object(sched, 'get_tasks', return_value=tasks) mocker.patch.object(sched, 'get_running_workflow_jobs', return_value=[]) mocker.patch.object(sched, 'get_inventory_source_tasks', return_value=inventory_sources) diff --git a/awx/main/tests/unit/scheduler/test_dag.py b/awx/main/tests/unit/scheduler/test_dag.py index 54e7de4fa8..932f4436ec 100644 --- a/awx/main/tests/unit/scheduler/test_dag.py +++ b/awx/main/tests/unit/scheduler/test_dag.py @@ -5,7 +5,7 @@ import pytest # AWX from awx.main.scheduler.dag_simple import SimpleDAG from awx.main.scheduler.dag_workflow import WorkflowDAG -from awx.main.models import Job +from awx.main.models import Job, JobTemplate from awx.main.models.workflow import WorkflowJobNode @@ -72,6 +72,7 @@ def factory_node(): if status: j = Job(status=status) wfn.job = j + wfn.unified_job_template = JobTemplate(name='JT{}'.format(id)) return wfn return fn diff --git a/awx/main/tests/unit/scheduler/test_scheduler_inventory_update.py b/awx/main/tests/unit/scheduler/test_scheduler_inventory_update.py index 5e49eec729..acffff3f8d 100644 --- a/awx/main/tests/unit/scheduler/test_scheduler_inventory_update.py +++ b/awx/main/tests/unit/scheduler/test_scheduler_inventory_update.py @@ -26,6 +26,22 @@ def successful_inventory_update_latest_cache_expired(inventory_update_latest_fac return iu +@pytest.fixture +def failed_inventory_update_latest_cache_zero(failed_inventory_update_latest): + iu = failed_inventory_update_latest + iu['inventory_source__update_cache_timeout'] = 0 + iu['inventory_source__update_on_launch'] = True + iu['finished'] = iu['created'] + timedelta(seconds=2) + iu['status'] = 'failed' + return iu + + +@pytest.fixture +def failed_inventory_update_latest_cache_non_zero(failed_inventory_update_latest_cache_zero): + failed_inventory_update_latest_cache_zero['inventory_source__update_cache_timeout'] = 
10000000 + return failed_inventory_update_latest_cache_zero + + class TestStartInventoryUpdate(): def test_pending(self, scheduler_factory, pending_inventory_update): scheduler = scheduler_factory(tasks=[pending_inventory_update]) @@ -79,11 +95,38 @@ class TestCreateDependentInventoryUpdate(): scheduler.start_task.assert_called_with(waiting_inventory_update, [pending_job]) - def test_last_update_failed(self, scheduler_factory, pending_job, failed_inventory_update, failed_inventory_update_latest, waiting_inventory_update, inventory_id_sources): + def test_last_update_timeout_zero_failed(self, scheduler_factory, pending_job, failed_inventory_update, failed_inventory_update_latest_cache_zero, waiting_inventory_update, inventory_id_sources): scheduler = scheduler_factory(tasks=[failed_inventory_update, pending_job], - latest_inventory_updates=[failed_inventory_update_latest], + latest_inventory_updates=[failed_inventory_update_latest_cache_zero], create_inventory_update=waiting_inventory_update, inventory_sources=inventory_id_sources) scheduler._schedule() scheduler.start_task.assert_called_with(waiting_inventory_update, [pending_job]) + + def test_last_update_timeout_non_zero_failed(self, scheduler_factory, pending_job, failed_inventory_update, failed_inventory_update_latest_cache_non_zero, waiting_inventory_update, inventory_id_sources): + scheduler = scheduler_factory(tasks=[failed_inventory_update, pending_job], + latest_inventory_updates=[failed_inventory_update_latest_cache_non_zero], + create_inventory_update=waiting_inventory_update, + inventory_sources=inventory_id_sources) + scheduler._schedule() + + scheduler.start_task.assert_called_with(waiting_inventory_update, [pending_job]) + + +class TestCaptureChainFailureDependencies(): + @pytest.fixture + def inventory_id_sources(self, inventory_source_factory): + return [ + (1, [inventory_source_factory(id=1)]), + ] + + def test(self, scheduler_factory, pending_job, waiting_inventory_update, inventory_id_sources): 
+ scheduler = scheduler_factory(tasks=[pending_job], + create_inventory_update=waiting_inventory_update, + inventory_sources=inventory_id_sources) + + scheduler._schedule() + + scheduler.capture_chain_failure_dependencies.assert_called_with(pending_job, [waiting_inventory_update]) + diff --git a/awx/main/tests/unit/test_access.py b/awx/main/tests/unit/test_access.py index 8a6687ba2f..05199fd5e3 100644 --- a/awx/main/tests/unit/test_access.py +++ b/awx/main/tests/unit/test_access.py @@ -1,9 +1,11 @@ import pytest import mock +import os from django.contrib.auth.models import User from django.forms.models import model_to_dict from rest_framework.exceptions import ParseError +from rest_framework.exceptions import PermissionDenied from awx.main.access import ( BaseAccess, @@ -14,7 +16,14 @@ from awx.main.access import ( ) from awx.conf.license import LicenseForbids -from awx.main.models import Credential, Inventory, Project, Role, Organization, Instance +from awx.main.models import ( + Credential, + Inventory, + Project, + Role, + Organization, + Instance, +) @pytest.fixture @@ -247,6 +256,41 @@ class TestWorkflowAccessMethods: assert access.can_add({'organization': 1}) +class TestCheckLicense: + @pytest.fixture + def validate_enhancements_mocker(self, mocker): + os.environ['SKIP_LICENSE_FIXUP_FOR_TEST'] = '1' + + def fn(available_instances=1, free_instances=0, host_exists=False): + + class MockFilter: + def exists(self): + return host_exists + + mocker.patch('awx.main.tasks.TaskEnhancer.validate_enhancements', return_value={'free_instances': free_instances, 'available_instances': available_instances, 'date_warning': True}) + + mock_filter = MockFilter() + mocker.patch('awx.main.models.Host.objects.filter', return_value=mock_filter) + + return fn + + def test_check_license_add_host_duplicate(self, validate_enhancements_mocker, user_unit): + validate_enhancements_mocker(available_instances=1, free_instances=0, host_exists=True) + + 
BaseAccess(None).check_license(add_host_name='blah', check_expiration=False) + + def test_check_license_add_host_new_exceed_license(self, validate_enhancements_mocker, user_unit, mocker): + validate_enhancements_mocker(available_instances=1, free_instances=0, host_exists=False) + exception = None + + try: + BaseAccess(None).check_license(add_host_name='blah', check_expiration=False) + except PermissionDenied as e: + exception = e + + assert "License count of 1 instances has been reached." == str(exception) + + def test_user_capabilities_method(): """Unit test to verify that the user_capabilities method will defer to the appropriate sub-class methods of the access classes. diff --git a/awx/main/tests/unit/test_network_credential.py b/awx/main/tests/unit/test_network_credential.py index 6517c14a89..8cdf720af3 100644 --- a/awx/main/tests/unit/test_network_credential.py +++ b/awx/main/tests/unit/test_network_credential.py @@ -1,3 +1,5 @@ +import pytest + from awx.main.models.credential import Credential from awx.main.models.jobs import Job from awx.main.models.inventory import Inventory @@ -36,50 +38,84 @@ def test_net_cred_parse(mocker): 'password':'test', 'authorize': True, 'authorize_password': 'passwd', + 'ssh_key_data': """-----BEGIN PRIVATE KEY-----\nstuff==\n-----END PRIVATE KEY-----""", + } + private_data_files = { + 'network_credential': '/tmp/this_file_does_not_exist_during_test_but_the_path_is_real', } job.network_credential = Credential(**options) run_job = RunJob() mocker.patch.object(run_job, 'should_use_proot', return_value=False) - env = run_job.build_env(job, private_data_dir='/tmp') + env = run_job.build_env(job, private_data_dir='/tmp', private_data_files=private_data_files) assert env['ANSIBLE_NET_USERNAME'] == options['username'] assert env['ANSIBLE_NET_PASSWORD'] == options['password'] assert env['ANSIBLE_NET_AUTHORIZE'] == '1' - assert env['ANSIBLE_NET_AUTHORIZE_PASSWORD'] == options['authorize_password'] + assert env['ANSIBLE_NET_AUTH_PASS'] ==
options['authorize_password'] + assert env['ANSIBLE_NET_SSH_KEYFILE'] == private_data_files['network_credential'] -def test_net_cred_ssh_agent(mocker, get_ssh_version): - with mocker.patch('django.db.ConnectionRouter.db_for_write'): - run_job = RunJob() +@pytest.fixture +def mock_job(mocker): + options = { + 'username':'test', + 'password':'test', + 'ssh_key_data': """-----BEGIN PRIVATE KEY-----\nstuff==\n-----END PRIVATE KEY-----""", + 'authorize': True, + 'authorize_password': 'passwd', + } - options = { - 'username':'test', - 'password':'test', - 'ssh_key_data': """-----BEGIN PRIVATE KEY-----\nstuff==\n-----END PRIVATE KEY-----""", - 'authorize': True, - 'authorize_password': 'passwd', - } - mock_job_attrs = {'forks': False, 'id': 1, 'cancel_flag': False, 'status': 'running', 'job_type': 'normal', - 'credential': None, 'cloud_credential': None, 'network_credential': Credential(**options), - 'become_enabled': False, 'become_method': None, 'become_username': None, - 'inventory': mocker.MagicMock(spec=Inventory, id=2), 'force_handlers': False, - 'limit': None, 'verbosity': None, 'job_tags': None, 'skip_tags': None, - 'start_at_task': None, 'pk': 1, 'launch_type': 'normal', 'job_template':None, - 'created_by': None, 'extra_vars_dict': None, 'project':None, 'playbook': 'test.yml'} - mock_job = mocker.MagicMock(spec=Job, **mock_job_attrs) + mock_job_attrs = {'forks': False, 'id': 1, 'cancel_flag': False, 'status': 'running', 'job_type': 'normal', + 'credential': None, 'cloud_credential': None, 'network_credential': Credential(**options), + 'become_enabled': False, 'become_method': None, 'become_username': None, + 'inventory': mocker.MagicMock(spec=Inventory, id=2), 'force_handlers': False, + 'limit': None, 'verbosity': None, 'job_tags': None, 'skip_tags': None, + 'start_at_task': None, 'pk': 1, 'launch_type': 'normal', 'job_template':None, + 'created_by': None, 'extra_vars_dict': None, 'project':None, 'playbook': 'test.yml'} + mock_job = mocker.MagicMock(spec=Job, 
**mock_job_attrs) + return mock_job - mocker.patch.object(run_job, 'update_model', return_value=mock_job) - mocker.patch.object(run_job, 'build_cwd', return_value='/tmp') - mocker.patch.object(run_job, 'should_use_proot', return_value=False) - mocker.patch.object(run_job, 'run_pexpect', return_value=('successful', 0)) - mocker.patch.object(run_job, 'open_fifo_write', return_value=None) - mocker.patch.object(run_job, 'post_run_hook', return_value=None) - run_job.run(mock_job.id) - assert run_job.update_model.call_count == 3 +@pytest.fixture +def run_job_net_cred(mocker, get_ssh_version, mock_job): + mocker.patch('django.db.ConnectionRouter.db_for_write') + run_job = RunJob() + + mocker.patch.object(run_job, 'update_model', return_value=mock_job) + mocker.patch.object(run_job, 'build_cwd', return_value='/tmp') + mocker.patch.object(run_job, 'should_use_proot', return_value=False) + mocker.patch.object(run_job, 'run_pexpect', return_value=('successful', 0)) + mocker.patch.object(run_job, 'open_fifo_write', return_value=None) + mocker.patch.object(run_job, 'post_run_hook', return_value=None) + + return run_job + + +@pytest.mark.skip(reason="Note: Ansible network modules don't yet support ssh-agent added keys.") +def test_net_cred_ssh_agent(run_job_net_cred, mock_job): + run_job = run_job_net_cred + run_job.run(mock_job.id) + + assert run_job.update_model.call_count == 4 + + job_args = run_job.update_model.call_args_list[1][1].get('job_args') + assert 'ssh-add' in job_args + assert 'ssh-agent' in job_args + assert 'network_credential' in job_args + + +def test_net_cred_job_model_env(run_job_net_cred, mock_job): + run_job = run_job_net_cred + run_job.run(mock_job.id) + + assert run_job.update_model.call_count == 4 + + job_args = run_job.update_model.call_args_list[1][1].get('job_env') + assert 'ANSIBLE_NET_USERNAME' in job_args + assert 'ANSIBLE_NET_PASSWORD' in job_args + assert 'ANSIBLE_NET_AUTHORIZE' in job_args + assert 'ANSIBLE_NET_AUTH_PASS' in job_args + assert 
'ANSIBLE_NET_SSH_KEYFILE' in job_args + - job_args = run_job.update_model.call_args_list[1][1].get('job_args') - assert 'ssh-add' in job_args - assert 'ssh-agent' in job_args - assert 'network_credential' in job_args diff --git a/awx/main/tests/unit/test_tasks.py b/awx/main/tests/unit/test_tasks.py index b83772bb59..2cefe0007b 100644 --- a/awx/main/tests/unit/test_tasks.py +++ b/awx/main/tests/unit/test_tasks.py @@ -38,17 +38,17 @@ def test_send_notifications_list(mocker): mock_job = mocker.MagicMock(spec=UnifiedJob) patches.append(mocker.patch('awx.main.models.UnifiedJob.objects.get', return_value=mock_job)) - mock_notification = mocker.MagicMock(spec=Notification, subject="test", body={'hello': 'world'}) - patches.append(mocker.patch('awx.main.models.Notification.objects.get', return_value=mock_notification)) + mock_notifications = [mocker.MagicMock(spec=Notification, subject="test", body={'hello': 'world'})] + patches.append(mocker.patch('awx.main.models.Notification.objects.filter', return_value=mock_notifications)) with apply_patches(patches): send_notifications([1,2], job_id=1) - assert Notification.objects.get.call_count == 2 - assert mock_notification.status == "successful" - assert mock_notification.save.called + assert Notification.objects.filter.call_count == 1 + assert mock_notifications[0].status == "successful" + assert mock_notifications[0].save.called assert mock_job.notifications.add.called - assert mock_job.notifications.add.called_with(mock_notification) + assert mock_job.notifications.add.called_with(*mock_notifications) @pytest.mark.parametrize("current_instances,call_count", [(91, 2), (89,1)]) diff --git a/awx/main/utils/common.py b/awx/main/utils/common.py index 00937a84c1..bed0588287 100644 --- a/awx/main/utils/common.py +++ b/awx/main/utils/common.py @@ -43,7 +43,8 @@ __all__ = ['get_object_or_400', 'get_object_or_403', 'camelcase_to_underscore', 'copy_m2m_relationships' ,'cache_list_capabilities', 'to_python_boolean', 
'ignore_inventory_computed_fields', 'ignore_inventory_group_removal', '_inventory_updates', 'get_pk_from_dict', 'getattrd', 'NoDefaultProvided', - 'get_current_apps', 'set_current_apps', 'OutputEventFilter'] + 'get_current_apps', 'set_current_apps', 'OutputEventFilter', + 'callback_filter_out_ansible_extra_vars',] def get_object_or_400(klass, *args, **kwargs): @@ -824,3 +825,12 @@ class OutputEventFilter(object): self._current_event_data = next_event_data else: self._current_event_data = None + + +def callback_filter_out_ansible_extra_vars(extra_vars): + extra_vars_redacted = {} + for key, value in extra_vars.iteritems(): + if not key.startswith('ansible_'): + extra_vars_redacted[key] = value + return extra_vars_redacted + diff --git a/awx/main/utils/handlers.py b/awx/main/utils/handlers.py index 28b7c26af7..71176cbb1a 100644 --- a/awx/main/utils/handlers.py +++ b/awx/main/utils/handlers.py @@ -30,6 +30,7 @@ PARAM_NAMES = { 'password': 'LOG_AGGREGATOR_PASSWORD', 'enabled_loggers': 'LOG_AGGREGATOR_LOGGERS', 'indv_facts': 'LOG_AGGREGATOR_INDIVIDUAL_FACTS', + 'enabled_flag': 'LOG_AGGREGATOR_ENABLED', } @@ -48,6 +49,7 @@ class HTTPSHandler(logging.Handler): def __init__(self, fqdn=False, **kwargs): super(HTTPSHandler, self).__init__() self.fqdn = fqdn + self.async = kwargs.get('async', True) for fd in PARAM_NAMES: # settings values take precedence over the input params settings_name = PARAM_NAMES[fd] @@ -100,11 +102,21 @@ class HTTPSHandler(logging.Handler): payload_str = json.dumps(payload_input) else: payload_str = payload_input - return dict(data=payload_str, background_callback=unused_callback) + if self.async: + return dict(data=payload_str, background_callback=unused_callback) + else: + return dict(data=payload_str) + + def skip_log(self, logger_name): + if self.host == '' or (not self.enabled_flag): + return True + if not logger_name.startswith('awx.analytics'): + # Tower log emission is only turned off by enablement setting + return False + return 
self.enabled_loggers is None or logger_name.split('.')[-1] not in self.enabled_loggers def emit(self, record): - if (self.host == '' or self.enabled_loggers is None or - record.name.split('.')[-1] not in self.enabled_loggers): + if self.skip_log(record.name): return try: payload = self.format(record) @@ -123,7 +135,10 @@ class HTTPSHandler(logging.Handler): self.session.post(host, **self.get_post_kwargs(fact_payload)) return - self.session.post(host, **self.get_post_kwargs(payload)) + if self.async: + self.session.post(host, **self.get_post_kwargs(payload)) + else: + requests.post(host, auth=requests.auth.HTTPBasicAuth(self.username, self.password), **self.get_post_kwargs(payload)) except (KeyboardInterrupt, SystemExit): raise except: diff --git a/awx/main/validators.py b/awx/main/validators.py index 1c92d9a645..c045e936cb 100644 --- a/awx/main/validators.py +++ b/awx/main/validators.py @@ -185,8 +185,9 @@ def vars_validate_or_raise(vars_str): except ValueError: pass try: - yaml.safe_load(vars_str) - return vars_str + r = yaml.safe_load(vars_str) + if not (isinstance(r, basestring) and r.startswith('OrderedDict(')): + return vars_str except yaml.YAMLError: pass raise RestValidationError(_('Must be valid JSON or YAML.')) diff --git a/awx/playbooks/project_update.yml b/awx/playbooks/project_update.yml index 30eff5f6bc..0f4d354ff7 100644 --- a/awx/playbooks/project_update.yml +++ b/awx/playbooks/project_update.yml @@ -115,6 +115,12 @@ chdir: "{{project_path|quote}}/roles" when: doesRequirementsExist.stat.exists and scm_full_checkout|bool + # format provided by ansible is ["Revision: 12345", "URL: ..."] + - name: parse subversion version string properly + set_fact: + scm_version: "{{scm_version|regex_replace('^.*Revision: ([0-9]+).*$', '\\1')}}" + when: scm_type == 'svn' + - name: Repository Version debug: msg="Repository Version {{ scm_version }}" when: scm_version is defined diff --git a/awx/plugins/inventory/azure_rm.py b/awx/plugins/inventory/azure_rm.py index 
f3c9e7c28d..8545967c37 100755 --- a/awx/plugins/inventory/azure_rm.py +++ b/awx/plugins/inventory/azure_rm.py @@ -1,4 +1,4 @@ -#!/usr/bin/python +#!/usr/bin/env python # # Copyright (c) 2016 Matt Davis, # Chris Houseknecht, @@ -786,11 +786,11 @@ class AzureInventory(object): def main(): if not HAS_AZURE: - sys.exit("The Azure python sdk is not installed (try 'pip install azure==2.0.0rc5') - {0}".format(HAS_AZURE_EXC)) + sys.exit("The Azure python sdk is not installed (try 'pip install azure>=2.0.0rc5') - {0}".format(HAS_AZURE_EXC)) - if LooseVersion(azure_compute_version) != LooseVersion(AZURE_MIN_VERSION): + if LooseVersion(azure_compute_version) < LooseVersion(AZURE_MIN_VERSION): sys.exit("Expecting azure.mgmt.compute.__version__ to be {0}. Found version {1} " - "Do you have Azure == 2.0.0rc5 installed?".format(AZURE_MIN_VERSION, azure_compute_version)) + "Do you have Azure >= 2.0.0rc5 installed?".format(AZURE_MIN_VERSION, azure_compute_version)) AzureInventory() diff --git a/awx/plugins/inventory/cloudforms.py b/awx/plugins/inventory/cloudforms.py index 65d95853d5..69c149bfc5 100755 --- a/awx/plugins/inventory/cloudforms.py +++ b/awx/plugins/inventory/cloudforms.py @@ -1,4 +1,4 @@ -#!/usr/bin/python +#!/usr/bin/env python # vim: set fileencoding=utf-8 : # # Copyright (C) 2016 Guido Günther @@ -459,4 +459,3 @@ class CloudFormsInventory(object): return json.dumps(data) CloudFormsInventory() - diff --git a/awx/plugins/inventory/ec2.ini.example b/awx/plugins/inventory/ec2.ini.example index 1d7428b2ed..2b9f089135 100644 --- a/awx/plugins/inventory/ec2.ini.example +++ b/awx/plugins/inventory/ec2.ini.example @@ -29,23 +29,41 @@ regions_exclude = us-gov-west-1,cn-north-1 # in the event of a collision. destination_variable = public_dns_name +# This allows you to override the inventory_name with an ec2 variable, instead +# of using the destination_variable above. Addressing (aka ansible_ssh_host) +# will still use destination_variable. 
Tags should be written as 'tag_TAGNAME'. +#hostname_variable = tag_Name + # For server inside a VPC, using DNS names may not make sense. When an instance # has 'subnet_id' set, this variable is used. If the subnet is public, setting # this to 'ip_address' will return the public IP address. For instances in a # private subnet, this should be set to 'private_ip_address', and Ansible must # be run from within EC2. The key of an EC2 tag may optionally be used; however # the boto instance variables hold precedence in the event of a collision. -# WARNING: - instances that are in the private vpc, _without_ public ip address -# will not be listed in the inventory untill You set: -# vpc_destination_variable = 'private_ip_address' +# WARNING: - instances that are in the private vpc, _without_ public ip address +# will not be listed in the inventory until You set: +# vpc_destination_variable = private_ip_address vpc_destination_variable = ip_address +# The following two settings allow flexible ansible host naming based on a +# python format string and a comma-separated list of ec2 tags. Note that: +# +# 1) If the tags referenced are not present for some instances, empty strings +# will be substituted in the format string. +# 2) This overrides both destination_variable and vpc_destination_variable. +# +#destination_format = {0}.{1}.example.com +#destination_format_tags = Name,environment + # To tag instances on EC2 with the resource records that point to them from # Route53, uncomment and set 'route53' to True. route53 = False # To exclude RDS instances from the inventory, uncomment and set to False. -#rds = False +rds = False + +# To exclude ElastiCache instances from the inventory, uncomment and set to False. +elasticache = False # Additionally, you can specify the list of zones to exclude looking up in # 'route53_excluded_zones' as a comma-separated list. @@ -55,10 +73,30 @@ route53 = False # 'all_instances' to True to return all instances regardless of state. 
all_instances = False +# By default, only EC2 instances in the 'running' state are returned. Specify +# EC2 instance states to return as a comma-separated list. This +# option is overridden when 'all_instances' is True. +# instance_states = pending, running, shutting-down, terminated, stopping, stopped + # By default, only RDS instances in the 'available' state are returned. Set # 'all_rds_instances' to True return all RDS instances regardless of state. all_rds_instances = False +# Include RDS cluster information (Aurora etc.) +include_rds_clusters = False + +# By default, only ElastiCache clusters and nodes in the 'available' state +# are returned. Set 'all_elasticache_clusters' and/or 'all_elasticache_nodes' +# to True to return all ElastiCache clusters and nodes, regardless of state. +# +# Note that all_elasticache_nodes only applies to listed clusters. That means +# if you set all_elasticache_clusters to false, no node will be returned from +# unavailable clusters, regardless of the state and what you set for +# all_elasticache_nodes. +all_elasticache_replication_groups = False +all_elasticache_clusters = False +all_elasticache_nodes = False + # API calls to EC2 are slow. For this reason, we cache the results of an API # call. Set this to the path you want cache files to be written to. Two files # will be written to this directory: @@ -69,11 +107,18 @@ cache_path = ~/.ansible/tmp # The number of seconds a cache file is considered valid. After this many # seconds, a new API call will be made, and the cache file will be updated. # To disable the cache, set this value to 0 -cache_max_age = 300 +cache_max_age = 0 # Organize groups into a nested/hierarchy instead of a flat namespace. nested_groups = False +# Replace - tags when creating groups to avoid issues with ansible +replace_dash_in_groups = True + +# If set to true, any tag of the form "a,b,c" is expanded into a list +# and the results are used to create additional tag_* inventory groups.
+expand_csv_tags = True + # The EC2 inventory output can become very large. To manage its size, # configure which groups should be created. group_by_instance_id = True @@ -89,6 +134,10 @@ group_by_tag_none = True group_by_route53_names = True group_by_rds_engine = True group_by_rds_parameter_group = True +group_by_elasticache_engine = True +group_by_elasticache_cluster = True +group_by_elasticache_parameter_group = True +group_by_elasticache_replication_group = True # If you only want to include hosts that match a certain regular expression # pattern_include = staging-* @@ -113,5 +162,28 @@ group_by_rds_parameter_group = True # You can use wildcards in filter values also. Below will list instances which # tag Name value matches webservers1* -# (ex. webservers15, webservers1a, webservers123 etc) +# (ex. webservers15, webservers1a, webservers123 etc) # instance_filters = tag:Name=webservers1* + +# A boto configuration profile may be used to separate out credentials +# see http://boto.readthedocs.org/en/latest/boto_config_tut.html +# boto_profile = some-boto-profile-name + + +[credentials] + +# The AWS credentials can optionally be specified here. Credentials specified +# here are ignored if the environment variable AWS_ACCESS_KEY_ID or +# AWS_PROFILE is set, or if the boto_profile property above is set. +# +# Supplying AWS credentials here is not recommended, as it introduces +# non-trivial security concerns. When going down this route, please make sure +# to set access permissions for this file correctly, e.g. handle it the same +# way as you would a private SSH key. +# +# Unlike the boto and AWS configure files, this section does not support +# profiles. 
+# +# aws_access_key_id = AXXXXXXXXXXXXXX +# aws_secret_access_key = XXXXXXXXXXXXXXXXXXX +# aws_security_token = XXXXXXXXXXXXXXXXXXXXXXXXXXXX diff --git a/awx/plugins/inventory/ec2.py b/awx/plugins/inventory/ec2.py index 6068df901f..dcc369e124 100755 --- a/awx/plugins/inventory/ec2.py +++ b/awx/plugins/inventory/ec2.py @@ -37,6 +37,7 @@ When run against a specific host, this script returns the following variables: - ec2_attachTime - ec2_attachment - ec2_attachmentId + - ec2_block_devices - ec2_client_token - ec2_deleteOnTermination - ec2_description @@ -131,6 +132,15 @@ from boto import elasticache from boto import route53 import six +from ansible.module_utils import ec2 as ec2_utils + +HAS_BOTO3 = False +try: + import boto3 + HAS_BOTO3 = True +except ImportError: + pass + from six.moves import configparser from collections import defaultdict @@ -265,6 +275,12 @@ class Ec2Inventory(object): if config.has_option('ec2', 'rds'): self.rds_enabled = config.getboolean('ec2', 'rds') + # Include RDS cluster instances? + if config.has_option('ec2', 'include_rds_clusters'): + self.include_rds_clusters = config.getboolean('ec2', 'include_rds_clusters') + else: + self.include_rds_clusters = False + # Include ElastiCache instances? 
self.elasticache_enabled = True if config.has_option('ec2', 'elasticache'): @@ -474,6 +490,8 @@ class Ec2Inventory(object): if self.elasticache_enabled: self.get_elasticache_clusters_by_region(region) self.get_elasticache_replication_groups_by_region(region) + if self.include_rds_clusters: + self.include_rds_clusters_by_region(region) self.write_to_cache(self.inventory, self.cache_path_cache) self.write_to_cache(self.index, self.cache_path_index) @@ -527,6 +545,7 @@ class Ec2Inventory(object): instance_ids = [] for reservation in reservations: instance_ids.extend([instance.id for instance in reservation.instances]) + max_filter_value = 199 tags = [] for i in range(0, len(instance_ids), max_filter_value): @@ -573,6 +592,65 @@ class Ec2Inventory(object): error = "Looks like AWS RDS is down:\n%s" % e.message self.fail_with_error(error, 'getting RDS instances') + def include_rds_clusters_by_region(self, region): + if not HAS_BOTO3: + self.fail_with_error("Working with RDS clusters requires boto3 - please install boto3 and try again", + "getting RDS clusters") + + client = ec2_utils.boto3_inventory_conn('client', 'rds', region, **self.credentials) + + marker, clusters = '', [] + while marker is not None: + resp = client.describe_db_clusters(Marker=marker) + clusters.extend(resp["DBClusters"]) + marker = resp.get('Marker', None) + + account_id = boto.connect_iam().get_user().arn.split(':')[4] + c_dict = {} + for c in clusters: + # remove these datetime objects as there is no serialisation to json + # currently in place and we don't need the data yet + if 'EarliestRestorableTime' in c: + del c['EarliestRestorableTime'] + if 'LatestRestorableTime' in c: + del c['LatestRestorableTime'] + + if self.ec2_instance_filters == {}: + matches_filter = True + else: + matches_filter = False + + try: + # arn:aws:rds:::: + tags = client.list_tags_for_resource( + ResourceName='arn:aws:rds:' + region + ':' + account_id + ':cluster:' + c['DBClusterIdentifier']) + c['Tags'] = 
tags['TagList'] + + if self.ec2_instance_filters: + for filter_key, filter_values in self.ec2_instance_filters.items(): + # get AWS tag key e.g. tag:env will be 'env' + tag_name = filter_key.split(":", 1)[1] + # Filter values is a list (if you put multiple values for the same tag name) + matches_filter = any(d['Key'] == tag_name and d['Value'] in filter_values for d in c['Tags']) + + if matches_filter: + # it matches a filter, so stop looking for further matches + break + + except Exception as e: + if e.message.find('DBInstanceNotFound') >= 0: + # An AWS RDS bug (2016-01-06) means deletion does not fully complete and leaves an 'empty' cluster. + # Ignore errors when trying to find tags for these + pass + + # ignore empty clusters caused by AWS bug + if len(c['DBClusterMembers']) == 0: + continue + elif matches_filter: + c_dict[c['DBClusterIdentifier']] = c + + self.inventory['db_clusters'] = c_dict + def get_elasticache_clusters_by_region(self, region): ''' Makes an AWS API call to get the list of ElastiCache clusters (with nodes' info) in a particular region.''' @@ -1235,7 +1313,7 @@ class Ec2Inventory(object): elif key == 'ec2_tags': for k, v in value.items(): if self.expand_csv_tags and ',' in v: - v = map(lambda x: x.strip(), v.split(',')) + v = list(map(lambda x: x.strip(), v.split(','))) key = self.to_safe('ec2_tag_' + k) instance_vars[key] = v elif key == 'ec2_groups': @@ -1246,6 +1324,10 @@ class Ec2Inventory(object): group_names.append(group.name) instance_vars["ec2_security_group_ids"] = ','.join([str(i) for i in group_ids]) instance_vars["ec2_security_group_names"] = ','.join([str(i) for i in group_names]) + elif key == 'ec2_block_device_mapping': + instance_vars["ec2_block_devices"] = {} + for k, v in value.items(): + instance_vars["ec2_block_devices"][os.path.basename(k)] = v.volume_id else: pass # TODO Product codes if someone finds them useful diff --git a/awx/plugins/inventory/foreman.ini.example b/awx/plugins/inventory/foreman.ini.example index
d5cd56e441..42312dac6c 100644 --- a/awx/plugins/inventory/foreman.ini.example +++ b/awx/plugins/inventory/foreman.ini.example @@ -9,6 +9,9 @@ group_patterns = ["{app}-{tier}-{color}", "{app}-{color}", "{app}", "{tier}"] +group_prefix = foreman_ +# Whether to fetch facts from Foreman and store them on the host +want_facts = True [cache] path = . diff --git a/awx/plugins/inventory/foreman.py b/awx/plugins/inventory/foreman.py index ddcb912fd5..8bff5a8ece 100755 --- a/awx/plugins/inventory/foreman.py +++ b/awx/plugins/inventory/foreman.py @@ -18,14 +18,22 @@ # # This is somewhat based on cobbler inventory +from __future__ import print_function + import argparse -import ConfigParser import copy import os import re -from time import time import requests from requests.auth import HTTPBasicAuth +import sys +from time import time + +try: + import ConfigParser +except ImportError: + import configparser as ConfigParser + try: import json @@ -34,19 +42,34 @@ except ImportError: class ForemanInventory(object): + config_paths = [ + "/etc/ansible/foreman.ini", + os.path.dirname(os.path.realpath(__file__)) + '/foreman.ini', + ] + def __init__(self): - """ Main execution path """ self.inventory = dict() # A list of groups and the hosts in that group - self.cache = dict() # Details about hosts in the inventory - self.params = dict() # Params of each host - self.facts = dict() # Facts of each host + self.cache = dict() # Details about hosts in the inventory + self.params = dict() # Params of each host + self.facts = dict() # Facts of each host self.hostgroups = dict() # host groups + self.session = None # Requests session + def run(self): + if not self._read_settings(): + return False + self._get_inventory() + self._print_data() + return True + + def _read_settings(self): # Read settings and parse CLI arguments - self.read_settings() + if not self.read_settings(): + return False self.parse_cli_args() + return True - # Cache + def _get_inventory(self): if self.args.refresh_cache: 
self.update_cache() elif not self.is_cache_valid(): @@ -57,9 +80,8 @@ class ForemanInventory(object): self.load_facts_from_cache() self.load_cache_from_cache() + def _print_data(self): data_to_print = "" - - # Data to print if self.args.host: data_to_print += self.get_host_info() else: @@ -77,38 +99,36 @@ class ForemanInventory(object): print(data_to_print) def is_cache_valid(self): - """ Determines if the cache files have expired, or if it is still valid """ - + """Determines if the cache is still valid""" if os.path.isfile(self.cache_path_cache): mod_time = os.path.getmtime(self.cache_path_cache) current_time = time() if (mod_time + self.cache_max_age) > current_time: if (os.path.isfile(self.cache_path_inventory) and os.path.isfile(self.cache_path_params) and - os.path.isfile(self.cache_path_facts)): + os.path.isfile(self.cache_path_facts)): return True return False def read_settings(self): - """ Reads the settings from the foreman.ini file """ + """Reads the settings from the foreman.ini file""" config = ConfigParser.SafeConfigParser() - config_paths = [ - "/etc/ansible/foreman.ini", - os.path.dirname(os.path.realpath(__file__)) + '/foreman.ini', - ] - env_value = os.environ.get('FOREMAN_INI_PATH') if env_value is not None: - config_paths.append(os.path.expanduser(os.path.expandvars(env_value))) + self.config_paths.append(os.path.expanduser(os.path.expandvars(env_value))) - config.read(config_paths) + config.read(self.config_paths) # Foreman API related - self.foreman_url = config.get('foreman', 'url') - self.foreman_user = config.get('foreman', 'user') - self.foreman_pw = config.get('foreman', 'password') - self.foreman_ssl_verify = config.getboolean('foreman', 'ssl_verify') + try: + self.foreman_url = config.get('foreman', 'url') + self.foreman_user = config.get('foreman', 'user') + self.foreman_pw = config.get('foreman', 'password') + self.foreman_ssl_verify = config.getboolean('foreman', 'ssl_verify') + except (ConfigParser.NoOptionError, 
ConfigParser.NoSectionError) as e: + print("Error parsing configuration: %s" % e, file=sys.stderr) + return False # Ansible related try: @@ -138,10 +158,14 @@ class ForemanInventory(object): self.cache_path_inventory = cache_path + "/%s.index" % script self.cache_path_params = cache_path + "/%s.params" % script self.cache_path_facts = cache_path + "/%s.facts" % script - self.cache_max_age = config.getint('cache', 'max_age') + try: + self.cache_max_age = config.getint('cache', 'max_age') + except (ConfigParser.NoOptionError, ConfigParser.NoSectionError): + self.cache_max_age = 60 + return True def parse_cli_args(self): - """ Command line argument processing """ + """Command line argument processing""" parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on foreman') parser.add_argument('--list', action='store_true', default=True, help='List instances (default: True)') @@ -150,26 +174,39 @@ class ForemanInventory(object): help='Force refresh of cache by making API requests to foreman (default: False - use cache files)') self.args = parser.parse_args() + def _get_session(self): + if not self.session: + self.session = requests.session() + self.session.auth = HTTPBasicAuth(self.foreman_user, self.foreman_pw) + self.session.verify = self.foreman_ssl_verify + return self.session + def _get_json(self, url, ignore_errors=None): page = 1 results = [] + s = self._get_session() while True: - ret = requests.get(url, - auth=HTTPBasicAuth(self.foreman_user, self.foreman_pw), - verify=self.foreman_ssl_verify, - params={'page': page, 'per_page': 250}) + ret = s.get(url, params={'page': page, 'per_page': 250}) if ignore_errors and ret.status_code in ignore_errors: break ret.raise_for_status() json = ret.json() - if not json.has_key('results'): + # /hosts/:id has no 'results' key + if 'results' not in json: return json - if type(json['results']) == type({}): + # Facts are returned as a dict in 'results', not a list + if isinstance(json['results'], dict):
return json['results'] + # The list of all hosts is returned paginated results = results + json['results'] if len(results) >= json['total']: break page += 1 + if len(json['results']) == 0: + print("Did not make any progress during loop. " + "Expected %d, got %d" % (json['total'], len(results)), + file=sys.stderr) + break return results def _get_hosts(self): @@ -184,7 +221,8 @@ class ForemanInventory(object): def _get_all_params_by_id(self, hid): url = "%s/api/v2/hosts/%s" % (self.foreman_url, hid) ret = self._get_json(url, [404]) - if ret == []: ret = {} + if ret == []: + ret = {} return ret.get('all_parameters', {}) def _get_facts_by_id(self, hid): @@ -192,9 +230,7 @@ class ForemanInventory(object): return self._get_json(url) def _resolve_params(self, host): - """ - Fetch host params and convert to dict - """ + """Fetch host params and convert to dict""" params = {} for param in self._get_all_params_by_id(host['id']): @@ -204,9 +240,7 @@ class ForemanInventory(object): return params def _get_facts(self, host): - """ - Fetch all host facts of the host - """ + """Fetch all host facts of the host""" if not self.want_facts: return {} @@ -214,7 +248,7 @@ class ForemanInventory(object): if len(ret.values()) == 0: facts = {} elif len(ret.values()) == 1: - facts = ret.values()[0] + facts = list(ret.values())[0] else: raise ValueError("More than one set of facts returned for '%s'" % host) return facts @@ -228,8 +262,15 @@ class ForemanInventory(object): for host in self._get_hosts(): dns_name = host['name'] - # Create ansible groups for hostgroup, environment, location and organization - for group in ['hostgroup', 'environment', 'location', 'organization']: + # Create ansible groups for hostgroup + group = 'hostgroup' + val = host.get('%s_title' % group) or host.get('%s_name' % group) + if val: + safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower())) + self.push(self.inventory, safe_key, dns_name) + + # Create ansible groups for environment, location and
organization + for group in ['environment', 'location', 'organization']: val = host.get('%s_name' % group) if val: safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower())) @@ -247,7 +288,7 @@ class ForemanInventory(object): # attributes. groupby = copy.copy(params) for k, v in host.items(): - if isinstance(v, basestring): + if isinstance(v, str): groupby[k] = self.to_safe(v) elif isinstance(v, int): groupby[k] = v @@ -264,14 +305,16 @@ class ForemanInventory(object): self.params[dns_name] = params self.facts[dns_name] = self._get_facts(host) self.push(self.inventory, 'all', dns_name) + self._write_cache() + def _write_cache(self): self.write_to_cache(self.cache, self.cache_path_cache) self.write_to_cache(self.inventory, self.cache_path_inventory) self.write_to_cache(self.params, self.cache_path_params) self.write_to_cache(self.facts, self.cache_path_facts) def get_host_info(self): - """ Get variables about a specific host """ + """Get variables about a specific host""" if not self.cache or len(self.cache) == 0: # Need to load index from cache @@ -294,21 +337,21 @@ class ForemanInventory(object): d[k] = [v] def load_inventory_from_cache(self): - """ Reads the index from the cache file sets self.index """ + """Read the index from the cache file sets self.index""" cache = open(self.cache_path_inventory, 'r') json_inventory = cache.read() self.inventory = json.loads(json_inventory) def load_params_from_cache(self): - """ Reads the index from the cache file sets self.index """ + """Read the index from the cache file sets self.index""" cache = open(self.cache_path_params, 'r') json_params = cache.read() self.params = json.loads(json_params) def load_facts_from_cache(self): - """ Reads the index from the cache file sets self.index """ + """Read the index from the cache file sets self.facts""" if not self.want_facts: return cache = open(self.cache_path_facts, 'r') @@ -316,26 +359,33 @@ class ForemanInventory(object): self.facts = json.loads(json_facts) 
def load_cache_from_cache(self): - """ Reads the cache from the cache file sets self.cache """ + """Read the cache from the cache file sets self.cache""" cache = open(self.cache_path_cache, 'r') json_cache = cache.read() self.cache = json.loads(json_cache) def write_to_cache(self, data, filename): - """ Writes data in JSON format to a file """ + """Write data in JSON format to a file""" json_data = self.json_format_dict(data, True) cache = open(filename, 'w') cache.write(json_data) cache.close() - def to_safe(self, word): - ''' Converts 'bad' characters in a string to underscores so they can be used as Ansible groups ''' + @staticmethod + def to_safe(word): + '''Converts 'bad' characters in a string to underscores + + so they can be used as Ansible groups + + >>> ForemanInventory.to_safe("foo-bar baz") + 'foo_barbaz' + ''' regex = "[^A-Za-z0-9\_]" return re.sub(regex, "_", word.replace(" ", "")) def json_format_dict(self, data, pretty=False): - """ Converts a dict to a JSON object and dumps it as a formatted string """ + """Converts a dict to a JSON object and dumps it as a formatted string""" if pretty: return json.dumps(data, sort_keys=True, indent=2) @@ -343,6 +393,5 @@ class ForemanInventory(object): return json.dumps(data) if __name__ == '__main__': - ForemanInventory() - - + inv = ForemanInventory() + sys.exit(not inv.run()) diff --git a/awx/plugins/inventory/gce.py b/awx/plugins/inventory/gce.py index 498511d635..87f1e8e811 100755 --- a/awx/plugins/inventory/gce.py +++ b/awx/plugins/inventory/gce.py @@ -69,7 +69,8 @@ Examples: $ contrib/inventory/gce.py --host my_instance Author: Eric Johnson -Version: 0.0.1 +Contributors: Matt Hite , Tom Melendez +Version: 0.0.3 ''' __requires__ = ['pycrypto>=2.6'] @@ -83,13 +84,19 @@ except ImportError: pass USER_AGENT_PRODUCT="Ansible-gce_inventory_plugin" -USER_AGENT_VERSION="v1" +USER_AGENT_VERSION="v2" import sys import os import argparse + +from time import time + import ConfigParser +import logging 
+logging.getLogger('libcloud.common.google').addHandler(logging.NullHandler()) + try: import json except ImportError: @@ -100,33 +107,103 @@ try: from libcloud.compute.providers import get_driver _ = Provider.GCE except: - print("GCE inventory script requires libcloud >= 0.13") - sys.exit(1) + sys.exit("GCE inventory script requires libcloud >= 0.13") + + +class CloudInventoryCache(object): + def __init__(self, cache_name='ansible-cloud-cache', cache_path='/tmp', + cache_max_age=300): + cache_dir = os.path.expanduser(cache_path) + if not os.path.exists(cache_dir): + os.makedirs(cache_dir) + self.cache_path_cache = os.path.join(cache_dir, cache_name) + + self.cache_max_age = cache_max_age + + def is_valid(self, max_age=None): + ''' Determines if the cache files have expired, or if it is still valid ''' + + if max_age is None: + max_age = self.cache_max_age + + if os.path.isfile(self.cache_path_cache): + mod_time = os.path.getmtime(self.cache_path_cache) + current_time = time() + if (mod_time + max_age) > current_time: + return True + + return False + + def get_all_data_from_cache(self, filename=''): + ''' Reads the JSON inventory from the cache file. Returns Python dictionary. ''' + + data = '' + if not filename: + filename = self.cache_path_cache + with open(filename, 'r') as cache: + data = cache.read() + return json.loads(data) + + def write_to_cache(self, data, filename=''): + ''' Writes data to file as JSON. Returns True. 
''' + if not filename: + filename = self.cache_path_cache + json_data = json.dumps(data) + with open(filename, 'w') as cache: + cache.write(json_data) + return True class GceInventory(object): def __init__(self): + # Cache object + self.cache = None + # dictionary containing inventory read from disk + self.inventory = {} + # Read settings and parse CLI arguments self.parse_cli_args() + self.config = self.get_config() self.driver = self.get_gce_driver() + self.ip_type = self.get_inventory_options() + if self.ip_type: + self.ip_type = self.ip_type.lower() + + # Cache management + start_inventory_time = time() + cache_used = False + if self.args.refresh_cache or not self.cache.is_valid(): + self.do_api_calls_update_cache() + else: + self.load_inventory_from_cache() + cache_used = True + self.inventory['_meta']['stats'] = {'use_cache': True} + self.inventory['_meta']['stats'] = { + 'inventory_load_time': time() - start_inventory_time, + 'cache_used': cache_used + } # Just display data for specific host if self.args.host: - print(self.json_format_dict(self.node_to_dict( - self.get_instance(self.args.host)), - pretty=self.args.pretty)) - sys.exit(0) - - zones = self.parse_env_zones() - - # Otherwise, assume user wants all instances grouped - print(self.json_format_dict(self.group_instances(zones), - pretty=self.args.pretty)) + print(self.json_format_dict( + self.inventory['_meta']['hostvars'][self.args.host], + pretty=self.args.pretty)) + else: + # Otherwise, assume user wants all instances grouped + zones = self.parse_env_zones() + print(self.json_format_dict(self.inventory, + pretty=self.args.pretty)) sys.exit(0) - def get_gce_driver(self): - """Determine the GCE authorization settings and return a - libcloud driver. + def get_config(self): + """ + Reads the settings from the gce.ini file. + + Populates a SafeConfigParser object with defaults and + attempts to read an .ini-style configuration from the filename + specified in GCE_INI_PATH. 
If the environment variable is + not present, the filename defaults to gce.ini in the current + working directory. """ gce_ini_default_path = os.path.join( os.path.dirname(os.path.realpath(__file__)), "gce.ini") @@ -141,14 +218,57 @@ class GceInventory(object): 'gce_service_account_pem_file_path': '', 'gce_project_id': '', 'libcloud_secrets': '', + 'inventory_ip_type': '', + 'cache_path': '~/.ansible/tmp', + 'cache_max_age': '300' }) if 'gce' not in config.sections(): config.add_section('gce') + if 'inventory' not in config.sections(): + config.add_section('inventory') + if 'cache' not in config.sections(): + config.add_section('cache') + config.read(gce_ini_path) + ######### + # Section added for processing ini settings + ######### + + # Set the instance_states filter based on config file options + self.instance_states = [] + if config.has_option('gce', 'instance_states'): + states = config.get('gce', 'instance_states') + # Ignore if instance_states is an empty string. + if states: + self.instance_states = states.split(',') + + # Caching + cache_path = config.get('cache', 'cache_path') + cache_max_age = config.getint('cache', 'cache_max_age') + # TODO(supertom): support project-specific caches + cache_name = 'ansible-gce.cache' + self.cache = CloudInventoryCache(cache_path=cache_path, + cache_max_age=cache_max_age, + cache_name=cache_name) + return config + + def get_inventory_options(self): + """Determine inventory options. Environment variables always + take precedence over configuration files.""" + ip_type = self.config.get('inventory', 'inventory_ip_type') + # If the appropriate environment variables are set, they override + # other configuration + ip_type = os.environ.get('INVENTORY_IP_TYPE', ip_type) + return ip_type + + def get_gce_driver(self): + """Determine the GCE authorization settings and return a + libcloud driver. + """ # Attempt to get GCE params from a configuration file, if one # exists.
- secrets_path = config.get('gce', 'libcloud_secrets') + secrets_path = self.config.get('gce', 'libcloud_secrets') secrets_found = False try: import secrets @@ -162,8 +282,7 @@ class GceInventory(object): if not secrets_path.endswith('secrets.py'): err = "Must specify libcloud secrets file as " err += "/absolute/path/to/secrets.py" - print(err) - sys.exit(1) + sys.exit(err) sys.path.append(os.path.dirname(secrets_path)) try: import secrets @@ -174,10 +293,10 @@ class GceInventory(object): pass if not secrets_found: args = [ - config.get('gce','gce_service_account_email_address'), - config.get('gce','gce_service_account_pem_file_path') + self.config.get('gce','gce_service_account_email_address'), + self.config.get('gce','gce_service_account_pem_file_path') ] - kwargs = {'project': config.get('gce', 'gce_project_id')} + kwargs = {'project': self.config.get('gce', 'gce_project_id')} # If the appropriate environment variables are set, they override # other configuration; process those into our args and kwargs. 
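The precedence rule used by get_inventory_options above (environment variables override gce.ini) is just os.environ.get with the ini value as fallback. A self-contained sketch (the function name is ours):

```python
import os

def get_ip_type(config_value):
    # INVENTORY_IP_TYPE from the environment wins over the ini setting;
    # the result is lowercased as GceInventory.__init__ does.
    ip_type = os.environ.get('INVENTORY_IP_TYPE', config_value)
    return ip_type.lower() if ip_type else ip_type
```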
@@ -211,6 +330,9 @@ class GceInventory(object): help='Get all information about an instance') parser.add_argument('--pretty', action='store_true', default=False, help='Pretty format (default: False)') + parser.add_argument( + '--refresh-cache', action='store_true', default=False, + help='Force refresh of cache by making API requests (default: False - use cache files)') self.args = parser.parse_args() @@ -220,11 +342,17 @@ class GceInventory(object): if inst is None: return {} - if inst.extra['metadata'].has_key('items'): + if 'items' in inst.extra['metadata']: for entry in inst.extra['metadata']['items']: md[entry['key']] = entry['value'] net = inst.extra['networkInterfaces'][0]['network'].split('/')[-1] + # default to external IP unless the user has specified they prefer internal + if self.ip_type == 'internal': + ssh_host = inst.private_ips[0] + else: + ssh_host = inst.public_ips[0] if len(inst.public_ips) >= 1 else inst.private_ips[0] + return { 'gce_uuid': inst.uuid, 'gce_id': inst.id, @@ -240,15 +368,36 @@ class GceInventory(object): 'gce_metadata': md, 'gce_network': net, # Hosts don't have a public name, so we add an IP - 'ansible_ssh_host': inst.public_ips[0] if len(inst.public_ips) >= 1 else inst.private_ips[0] + 'ansible_ssh_host': ssh_host } - def get_instance(self, instance_name): - '''Gets details about a specific instance ''' + def load_inventory_from_cache(self): + ''' Loads inventory from JSON on disk. ''' + try: - return self.driver.ex_get_node(instance_name) + self.inventory = self.cache.get_all_data_from_cache() + hosts = self.inventory['_meta']['hostvars'] except Exception as e: - return None + print( + "Invalid inventory file %s. Please rebuild with the --refresh-cache option." + % (self.cache.cache_path_cache)) + raise + + def do_api_calls_update_cache(self): + ''' Do API calls and save data in cache.
''' + zones = self.parse_env_zones() + data = self.group_instances(zones) + self.cache.write_to_cache(data) + self.inventory = data + + def list_nodes(self): + all_nodes = [] + params, more_results = {'maxResults': 500}, True + while more_results: + self.driver.connection.gce_params=params + all_nodes.extend(self.driver.list_nodes()) + more_results = 'pageToken' in params + return all_nodes def group_instances(self, zones=None): '''Group all instances''' @@ -256,7 +405,18 @@ class GceInventory(object): meta = {} meta["hostvars"] = {} - for node in self.driver.list_nodes(): + for node in self.list_nodes(): + + # This check filters on the desired instance states defined in the + # config file with the instance_states config option. + # + # If the instance_states list is _empty_ then _ALL_ states are returned. + # + # If the instance_states list is _populated_ then check the current + # state against the instance_states list + if self.instance_states and not node.extra['status'] in self.instance_states: + continue + name = node.name meta["hostvars"][name] = self.node_to_dict(node) @@ -268,7 +428,7 @@ class GceInventory(object): if zones and zone not in zones: continue - if groups.has_key(zone): groups[zone].append(name) + if zone in groups: groups[zone].append(name) else: groups[zone] = [name] tags = node.extra['tags'] @@ -277,25 +437,25 @@ class GceInventory(object): tag = t[6:] else: tag = 'tag_%s' % t - if groups.has_key(tag): groups[tag].append(name) + if tag in groups: groups[tag].append(name) else: groups[tag] = [name] net = node.extra['networkInterfaces'][0]['network'].split('/')[-1] net = 'network_%s' % net - if groups.has_key(net): groups[net].append(name) + if net in groups: groups[net].append(name) else: groups[net] = [name] machine_type = node.size - if groups.has_key(machine_type): groups[machine_type].append(name) + if machine_type in groups: groups[machine_type].append(name) else: groups[machine_type] = [name] image = node.image and node.image or 
'persistent_disk' - if groups.has_key(image): groups[image].append(name) + if image in groups: groups[image].append(name) else: groups[image] = [name] status = node.extra['status'] stat = 'status_%s' % status.lower() - if groups.has_key(stat): groups[stat].append(name) + if stat in groups: groups[stat].append(name) else: groups[stat] = [name] groups["_meta"] = meta @@ -311,6 +471,6 @@ class GceInventory(object): else: return json.dumps(data) - # Run the script -GceInventory() +if __name__ == '__main__': + GceInventory() diff --git a/awx/plugins/inventory/openstack.py b/awx/plugins/inventory/openstack.py index 103be1bee0..6679a2cc3b 100755 --- a/awx/plugins/inventory/openstack.py +++ b/awx/plugins/inventory/openstack.py @@ -2,7 +2,8 @@ # Copyright (c) 2012, Marco Vito Moscaritolo # Copyright (c) 2013, Jesse Keating -# Copyright (c) 2014, Hewlett-Packard Development Company, L.P. +# Copyright (c) 2015, Hewlett-Packard Development Company, L.P. +# Copyright (c) 2016, Rackspace Australia # # This module is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by @@ -18,7 +19,7 @@ # along with this software. If not, see . # The OpenStack Inventory module uses os-client-config for configuration. -# https://github.com/stackforge/os-client-config +# https://github.com/openstack/os-client-config # This means it will either: # - Respect normal OS_* environment variables like other OpenStack tools # - Read values from a clouds.yaml file. @@ -32,12 +33,24 @@ # all of them and present them as one contiguous inventory. # # See the adjacent openstack.yml file for an example config file +# There are two ansible inventory specific options that can be set in +# the inventory section. 
+# expand_hostvars controls whether or not the inventory will make extra API +# calls to fill out additional information about each server +# use_hostnames changes the behavior from registering every host with its UUID +# and making a group of its hostname to only doing this if the +# hostname in question has more than one server +# fail_on_errors causes the inventory to fail and return no hosts if one cloud +# has failed (for example, bad credentials or being offline). +# When set to False, the inventory will return hosts from +# whichever other clouds it can contact. (Default: True) import argparse import collections import os import sys import time +from distutils.version import StrictVersion try: import json @@ -46,89 +59,137 @@ except: import os_client_config import shade +import shade.inventory + +CONFIG_FILES = ['/etc/ansible/openstack.yaml', '/etc/ansible/openstack.yml'] -class OpenStackInventory(object): +def get_groups_from_server(server_vars, namegroup=True): + groups = [] - def __init__(self, private=False, refresh=False): - config_files = os_client_config.config.CONFIG_FILES - config_files.append('/etc/ansible/openstack.yml') - self.openstack_config = os_client_config.config.OpenStackConfig( - config_files) - self.clouds = shade.openstack_clouds(self.openstack_config) - self.private = private - self.refresh = refresh + region = server_vars['region'] + cloud = server_vars['cloud'] + metadata = server_vars.get('metadata', {}) - self.cache_max_age = self.openstack_config.get_cache_max_age() - cache_path = self.openstack_config.get_cache_path() + # Create a group for the cloud + groups.append(cloud) - # Cache related - if not os.path.exists(cache_path): - os.makedirs(cache_path) - self.cache_file = os.path.join(cache_path, "ansible-inventory.cache") + # Create a group on region + groups.append(region) - def is_cache_stale(self): - ''' Determines if cache file has expired, or if it is still valid ''' - if os.path.isfile(self.cache_file): - mod_time = 
os.path.getmtime(self.cache_file) - current_time = time.time() - if (mod_time + self.cache_max_age) > current_time: - return False - return True + # And one by cloud_region + groups.append("%s_%s" % (cloud, region)) - def get_host_groups(self): - if self.refresh or self.is_cache_stale(): - groups = self.get_host_groups_from_cloud() - self.write_cache(groups) + # Check if group metadata key in servers' metadata + if 'group' in metadata: + groups.append(metadata['group']) + + for extra_group in metadata.get('groups', '').split(','): + if extra_group: + groups.append(extra_group.strip()) + + groups.append('instance-%s' % server_vars['id']) + if namegroup: + groups.append(server_vars['name']) + + for key in ('flavor', 'image'): + if 'name' in server_vars[key]: + groups.append('%s-%s' % (key, server_vars[key]['name'])) + + for key, value in iter(metadata.items()): + groups.append('meta-%s_%s' % (key, value)) + + az = server_vars.get('az', None) + if az: + # Make groups for az, region_az and cloud_region_az + groups.append(az) + groups.append('%s_%s' % (region, az)) + groups.append('%s_%s_%s' % (cloud, region, az)) + return groups + + +def get_host_groups(inventory, refresh=False): + (cache_file, cache_expiration_time) = get_cache_settings() + if is_cache_stale(cache_file, cache_expiration_time, refresh=refresh): + groups = to_json(get_host_groups_from_cloud(inventory)) + open(cache_file, 'w').write(groups) + else: + groups = open(cache_file, 'r').read() + return groups + + +def append_hostvars(hostvars, groups, key, server, namegroup=False): + hostvars[key] = dict( + ansible_ssh_host=server['interface_ip'], + openstack=server) + for group in get_groups_from_server(server, namegroup=namegroup): + groups[group].append(key) + + +def get_host_groups_from_cloud(inventory): + groups = collections.defaultdict(list) + firstpass = collections.defaultdict(list) + hostvars = {} + list_args = {} + if hasattr(inventory, 'extra_config'): + use_hostnames = 
inventory.extra_config['use_hostnames'] + list_args['expand'] = inventory.extra_config['expand_hostvars'] + if StrictVersion(shade.__version__) >= StrictVersion("1.6.0"): + list_args['fail_on_cloud_config'] = \ + inventory.extra_config['fail_on_errors'] + else: + use_hostnames = False + + for server in inventory.list_hosts(**list_args): + + if 'interface_ip' not in server: + continue + firstpass[server['name']].append(server) + for name, servers in firstpass.items(): + if len(servers) == 1 and use_hostnames: + append_hostvars(hostvars, groups, name, servers[0]) else: - return json.load(open(self.cache_file, 'r')) - return groups + server_ids = set() + # Trap for duplicate results + for server in servers: + server_ids.add(server['id']) + if len(server_ids) == 1 and use_hostnames: + append_hostvars(hostvars, groups, name, servers[0]) + else: + for server in servers: + append_hostvars( + hostvars, groups, server['id'], server, + namegroup=True) + groups['_meta'] = {'hostvars': hostvars} + return groups - def write_cache(self, groups): - with open(self.cache_file, 'w') as cache_file: - cache_file.write(self.json_format_dict(groups)) - def get_host_groups_from_cloud(self): - groups = collections.defaultdict(list) - hostvars = collections.defaultdict(dict) +def is_cache_stale(cache_file, cache_expiration_time, refresh=False): + ''' Determines if cache file has expired, or if it is still valid ''' + if refresh: + return True + if os.path.isfile(cache_file) and os.path.getsize(cache_file) > 0: + mod_time = os.path.getmtime(cache_file) + current_time = time.time() + if (mod_time + cache_expiration_time) > current_time: + return False + return True - for cloud in self.clouds: - cloud.private = cloud.private or self.private - # Cycle on servers - for server in cloud.list_servers(): +def get_cache_settings(): + config = os_client_config.config.OpenStackConfig( + config_files=os_client_config.config.CONFIG_FILES + CONFIG_FILES) + # For inventory-wide caching + 
cache_expiration_time = config.get_cache_expiration_time() + cache_path = config.get_cache_path() + if not os.path.exists(cache_path): + os.makedirs(cache_path) + cache_file = os.path.join(cache_path, 'ansible-inventory.cache') + return (cache_file, cache_expiration_time) - meta = cloud.get_server_meta(server) - if 'interface_ip' not in meta['server_vars']: - # skip this host if it doesn't have a network address - continue - - server_vars = meta['server_vars'] - hostvars[server.name][ - 'ansible_ssh_host'] = server_vars['interface_ip'] - hostvars[server.name]['openstack'] = server_vars - - for group in meta['groups']: - groups[group].append(server.name) - - if hostvars: - groups['_meta'] = {'hostvars': hostvars} - return groups - - def json_format_dict(self, data): - return json.dumps(data, sort_keys=True, indent=2) - - def list_instances(self): - groups = self.get_host_groups() - # Return server list - print(self.json_format_dict(groups)) - - def get_host(self, hostname): - groups = self.get_host_groups() - hostvars = groups['_meta']['hostvars'] - if hostname in hostvars: - print(self.json_format_dict(hostvars[hostname])) +def to_json(in_dict): + return json.dumps(in_dict, sort_keys=True, indent=2) def parse_args(): @@ -138,21 +199,43 @@ def parse_args(): help='Use private address for ansible host') parser.add_argument('--refresh', action='store_true', help='Refresh cached information') + parser.add_argument('--debug', action='store_true', default=False, + help='Enable debug output') group = parser.add_mutually_exclusive_group(required=True) group.add_argument('--list', action='store_true', help='List active servers') group.add_argument('--host', help='List details about the specific host') + return parser.parse_args() def main(): args = parse_args() try: - inventory = OpenStackInventory(args.private, args.refresh) + config_files = os_client_config.config.CONFIG_FILES + CONFIG_FILES + shade.simple_logging(debug=args.debug) + inventory_args = dict( + 
refresh=args.refresh, + config_files=config_files, + private=args.private, + ) + if hasattr(shade.inventory.OpenStackInventory, 'extra_config'): + inventory_args.update(dict( + config_key='ansible', + config_defaults={ + 'use_hostnames': False, + 'expand_hostvars': True, + 'fail_on_errors': True, + } + )) + + inventory = shade.inventory.OpenStackInventory(**inventory_args) + if args.list: - inventory.list_instances() + output = get_host_groups(inventory, refresh=args.refresh) elif args.host: - inventory.get_host(args.host) + output = to_json(inventory.get_host(args.host)) + print(output) except shade.OpenStackCloudException as e: sys.stderr.write('%s\n' % e.message) sys.exit(1) diff --git a/awx/plugins/inventory/openstack.yml b/awx/plugins/inventory/openstack.yml index a99bb02058..3687b1f399 100644 --- a/awx/plugins/inventory/openstack.yml +++ b/awx/plugins/inventory/openstack.yml @@ -26,3 +26,7 @@ clouds: username: stack password: stack project_name: stack +ansible: + use_hostnames: False + expand_hostvars: True + fail_on_errors: True diff --git a/awx/plugins/inventory/vmware.py b/awx/plugins/inventory/vmware.py deleted file mode 100755 index 8f723a638d..0000000000 --- a/awx/plugins/inventory/vmware.py +++ /dev/null @@ -1,436 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -''' -VMware Inventory Script -======================= - -Retrieve information about virtual machines from a vCenter server or -standalone ESX host. When `group_by=false` (in the INI file), host systems -are also returned in addition to VMs. - -This script will attempt to read configuration from an INI file with the same -base filename if present, or `vmware.ini` if not. 
It is possible to create -symlinks to the inventory script to support multiple configurations, e.g.: - -* `vmware.py` (this script) -* `vmware.ini` (default configuration, will be read by `vmware.py`) -* `vmware_test.py` (symlink to `vmware.py`) -* `vmware_test.ini` (test configuration, will be read by `vmware_test.py`) -* `vmware_other.py` (symlink to `vmware.py`, will read `vmware.ini` since no - `vmware_other.ini` exists) - -The path to an INI file may also be specified via the `VMWARE_INI` environment -variable, in which case the filename matching rules above will not apply. - -Host and authentication parameters may be specified via the `VMWARE_HOST`, -`VMWARE_USER` and `VMWARE_PASSWORD` environment variables; these options will -take precedence over options present in the INI file. An INI file is not -required if these options are specified using environment variables. -''' - -from __future__ import print_function - -import collections -import json -import logging -import optparse -import os -import sys -import time -import ConfigParser - -from six import text_type - -# Disable logging message trigged by pSphere/suds. 
-try: - from logging import NullHandler -except ImportError: - from logging import Handler - class NullHandler(Handler): - def emit(self, record): - pass -logging.getLogger('psphere').addHandler(NullHandler()) -logging.getLogger('suds').addHandler(NullHandler()) - -from psphere.client import Client -from psphere.errors import ObjectNotFoundError -from psphere.managedobjects import HostSystem, VirtualMachine, ManagedObject, Network -from suds.sudsobject import Object as SudsObject - - -class VMwareInventory(object): - - def __init__(self, guests_only=None): - self.config = ConfigParser.SafeConfigParser() - if os.environ.get('VMWARE_INI', ''): - config_files = [os.environ['VMWARE_INI']] - else: - config_files = [os.path.abspath(sys.argv[0]).rstrip('.py') + '.ini', 'vmware.ini'] - for config_file in config_files: - if os.path.exists(config_file): - self.config.read(config_file) - break - - # Retrieve only guest VMs, or include host systems? - if guests_only is not None: - self.guests_only = guests_only - elif self.config.has_option('defaults', 'guests_only'): - self.guests_only = self.config.getboolean('defaults', 'guests_only') - else: - self.guests_only = True - - # Read authentication information from VMware environment variables - # (if set), otherwise from INI file. - auth_host = os.environ.get('VMWARE_HOST') - if not auth_host and self.config.has_option('auth', 'host'): - auth_host = self.config.get('auth', 'host') - auth_user = os.environ.get('VMWARE_USER') - if not auth_user and self.config.has_option('auth', 'user'): - auth_user = self.config.get('auth', 'user') - auth_password = os.environ.get('VMWARE_PASSWORD') - if not auth_password and self.config.has_option('auth', 'password'): - auth_password = self.config.get('auth', 'password') - - # Create the VMware client connection. - self.client = Client(auth_host, auth_user, auth_password) - - def _put_cache(self, name, value): - ''' - Saves the value to cache with the name given. 
- ''' - if self.config.has_option('defaults', 'cache_dir'): - cache_dir = os.path.expanduser(self.config.get('defaults', 'cache_dir')) - if not os.path.exists(cache_dir): - os.makedirs(cache_dir) - cache_file = os.path.join(cache_dir, name) - with open(cache_file, 'w') as cache: - json.dump(value, cache) - - def _get_cache(self, name, default=None): - ''' - Retrieves the value from cache for the given name. - ''' - if self.config.has_option('defaults', 'cache_dir'): - cache_dir = self.config.get('defaults', 'cache_dir') - cache_file = os.path.join(cache_dir, name) - if os.path.exists(cache_file): - if self.config.has_option('defaults', 'cache_max_age'): - cache_max_age = self.config.getint('defaults', 'cache_max_age') - else: - cache_max_age = 0 - cache_stat = os.stat(cache_file) - if (cache_stat.st_mtime + cache_max_age) >= time.time(): - with open(cache_file) as cache: - return json.load(cache) - return default - - def _flatten_dict(self, d, parent_key='', sep='_'): - ''' - Flatten nested dicts by combining keys with a separator. Lists with - only string items are included as is; any other lists are discarded. - ''' - items = [] - for k, v in d.items(): - if k.startswith('_'): - continue - new_key = parent_key + sep + k if parent_key else k - if isinstance(v, collections.MutableMapping): - items.extend(self._flatten_dict(v, new_key, sep).items()) - elif isinstance(v, (list, tuple)): - if all([isinstance(x, basestring) for x in v]): - items.append((new_key, v)) - else: - items.append((new_key, v)) - return dict(items) - - def _get_obj_info(self, obj, depth=99, seen=None): - ''' - Recursively build a data structure for the given pSphere object (depth - only applies to ManagedObject instances). 
- ''' - seen = seen or set() - if isinstance(obj, ManagedObject): - try: - obj_unicode = text_type(getattr(obj, 'name')) - except AttributeError: - obj_unicode = () - if obj in seen: - return obj_unicode - seen.add(obj) - if depth <= 0: - return obj_unicode - d = {} - for attr in dir(obj): - if attr.startswith('_'): - continue - try: - val = getattr(obj, attr) - obj_info = self._get_obj_info(val, depth - 1, seen) - if obj_info != (): - d[attr] = obj_info - except Exception as e: - pass - return d - elif isinstance(obj, SudsObject): - d = {} - for key, val in iter(obj): - obj_info = self._get_obj_info(val, depth, seen) - if obj_info != (): - d[key] = obj_info - return d - elif isinstance(obj, (list, tuple)): - l = [] - for val in iter(obj): - obj_info = self._get_obj_info(val, depth, seen) - if obj_info != (): - l.append(obj_info) - return l - elif isinstance(obj, (type(None), bool, int, long, float, basestring)): - return obj - else: - return () - - def _get_host_info(self, host, prefix='vmware'): - ''' - Return a flattened dict with info about the given host system. - ''' - host_info = { - 'name': host.name, - } - for attr in ('datastore', 'network', 'vm'): - try: - value = getattr(host, attr) - host_info['%ss' % attr] = self._get_obj_info(value, depth=0) - except AttributeError: - host_info['%ss' % attr] = [] - for k, v in self._get_obj_info(host.summary, depth=0).items(): - if isinstance(v, collections.MutableMapping): - for k2, v2 in v.items(): - host_info[k2] = v2 - elif k != 'host': - host_info[k] = v - try: - host_info['ipAddress'] = host.config.network.vnic[0].spec.ip.ipAddress - except Exception as e: - print(e, file=sys.stderr) - host_info = self._flatten_dict(host_info, prefix) - if ('%s_ipAddress' % prefix) in host_info: - host_info['ansible_ssh_host'] = host_info['%s_ipAddress' % prefix] - return host_info - - def _get_vm_info(self, vm, prefix='vmware'): - ''' - Return a flattened dict with info about the given virtual machine. 
- ''' - vm_info = { - 'name': vm.name, - } - for attr in ('datastore', 'network'): - try: - value = getattr(vm, attr) - vm_info['%ss' % attr] = self._get_obj_info(value, depth=0) - except AttributeError: - vm_info['%ss' % attr] = [] - try: - vm_info['resourcePool'] = self._get_obj_info(vm.resourcePool, depth=0) - except AttributeError: - vm_info['resourcePool'] = '' - try: - vm_info['guestState'] = vm.guest.guestState - except AttributeError: - vm_info['guestState'] = '' - for k, v in self._get_obj_info(vm.summary, depth=0).items(): - if isinstance(v, collections.MutableMapping): - for k2, v2 in v.items(): - if k2 == 'host': - k2 = 'hostSystem' - vm_info[k2] = v2 - elif k != 'vm': - vm_info[k] = v - vm_info = self._flatten_dict(vm_info, prefix) - if ('%s_ipAddress' % prefix) in vm_info: - vm_info['ansible_ssh_host'] = vm_info['%s_ipAddress' % prefix] - return vm_info - - def _add_host(self, inv, parent_group, host_name): - ''' - Add the host to the parent group in the given inventory. - ''' - p_group = inv.setdefault(parent_group, []) - if isinstance(p_group, dict): - group_hosts = p_group.setdefault('hosts', []) - else: - group_hosts = p_group - if host_name not in group_hosts: - group_hosts.append(host_name) - - def _add_child(self, inv, parent_group, child_group): - ''' - Add a child group to a parent group in the given inventory. - ''' - if parent_group != 'all': - p_group = inv.setdefault(parent_group, {}) - if not isinstance(p_group, dict): - inv[parent_group] = {'hosts': p_group} - p_group = inv[parent_group] - group_children = p_group.setdefault('children', []) - if child_group not in group_children: - group_children.append(child_group) - inv.setdefault(child_group, []) - - def get_inventory(self, meta_hostvars=True): - ''' - Reads the inventory from cache or VMware API via pSphere. - ''' - # Use different cache names for guests only vs. all hosts. 
- if self.guests_only: - cache_name = '__inventory_guests__' - else: - cache_name = '__inventory_all__' - - inv = self._get_cache(cache_name, None) - if inv is not None: - return inv - - inv = {'all': {'hosts': []}} - if meta_hostvars: - inv['_meta'] = {'hostvars': {}} - - default_group = os.path.basename(sys.argv[0]).rstrip('.py') - - if not self.guests_only: - if self.config.has_option('defaults', 'hw_group'): - hw_group = self.config.get('defaults', 'hw_group') - else: - hw_group = default_group + '_hw' - - if self.config.has_option('defaults', 'vm_group'): - vm_group = self.config.get('defaults', 'vm_group') - else: - vm_group = default_group + '_vm' - - if self.config.has_option('defaults', 'prefix_filter'): - prefix_filter = self.config.get('defaults', 'prefix_filter') - else: - prefix_filter = None - - # Loop through physical hosts: - for host in HostSystem.all(self.client): - - if not self.guests_only: - self._add_host(inv, 'all', host.name) - self._add_host(inv, hw_group, host.name) - host_info = self._get_host_info(host) - if meta_hostvars: - inv['_meta']['hostvars'][host.name] = host_info - self._put_cache(host.name, host_info) - - # Loop through all VMs on physical host. - for vm in host.vm: - if prefix_filter: - if vm.name.startswith( prefix_filter ): - continue - self._add_host(inv, 'all', vm.name) - self._add_host(inv, vm_group, vm.name) - vm_info = self._get_vm_info(vm) - if meta_hostvars: - inv['_meta']['hostvars'][vm.name] = vm_info - self._put_cache(vm.name, vm_info) - - # Group by resource pool. - vm_resourcePool = vm_info.get('vmware_resourcePool', None) - if vm_resourcePool: - self._add_child(inv, vm_group, 'resource_pools') - self._add_child(inv, 'resource_pools', vm_resourcePool) - self._add_host(inv, vm_resourcePool, vm.name) - - # Group by datastore. 
- for vm_datastore in vm_info.get('vmware_datastores', []): - self._add_child(inv, vm_group, 'datastores') - self._add_child(inv, 'datastores', vm_datastore) - self._add_host(inv, vm_datastore, vm.name) - - # Group by network. - for vm_network in vm_info.get('vmware_networks', []): - self._add_child(inv, vm_group, 'networks') - self._add_child(inv, 'networks', vm_network) - self._add_host(inv, vm_network, vm.name) - - # Group by guest OS. - vm_guestId = vm_info.get('vmware_guestId', None) - if vm_guestId: - self._add_child(inv, vm_group, 'guests') - self._add_child(inv, 'guests', vm_guestId) - self._add_host(inv, vm_guestId, vm.name) - - # Group all VM templates. - vm_template = vm_info.get('vmware_template', False) - if vm_template: - self._add_child(inv, vm_group, 'templates') - self._add_host(inv, 'templates', vm.name) - - self._put_cache(cache_name, inv) - return inv - - def get_host(self, hostname): - ''' - Read info about a specific host or VM from cache or VMware API. - ''' - inv = self._get_cache(hostname, None) - if inv is not None: - return inv - - if not self.guests_only: - try: - host = HostSystem.get(self.client, name=hostname) - inv = self._get_host_info(host) - except ObjectNotFoundError: - pass - - if inv is None: - try: - vm = VirtualMachine.get(self.client, name=hostname) - inv = self._get_vm_info(vm) - except ObjectNotFoundError: - pass - - if inv is not None: - self._put_cache(hostname, inv) - return inv or {} - - -def main(): - parser = optparse.OptionParser() - parser.add_option('--list', action='store_true', dest='list', - default=False, help='Output inventory groups and hosts') - parser.add_option('--host', dest='host', default=None, metavar='HOST', - help='Output variables only for the given hostname') - # Additional options for use when running the script standalone, but never - # used by Ansible. 
- parser.add_option('--pretty', action='store_true', dest='pretty', - default=False, help='Output nicely-formatted JSON') - parser.add_option('--include-host-systems', action='store_true', - dest='include_host_systems', default=False, - help='Include host systems in addition to VMs') - parser.add_option('--no-meta-hostvars', action='store_false', - dest='meta_hostvars', default=True, - help='Exclude [\'_meta\'][\'hostvars\'] with --list') - options, args = parser.parse_args() - - if options.include_host_systems: - vmware_inventory = VMwareInventory(guests_only=False) - else: - vmware_inventory = VMwareInventory() - if options.host is not None: - inventory = vmware_inventory.get_host(options.host) - else: - inventory = vmware_inventory.get_inventory(options.meta_hostvars) - - json_kwargs = {} - if options.pretty: - json_kwargs.update({'indent': 4, 'sort_keys': True}) - json.dump(inventory, sys.stdout, **json_kwargs) - - -if __name__ == '__main__': - main() diff --git a/awx/plugins/inventory/vmware_inventory.py b/awx/plugins/inventory/vmware_inventory.py new file mode 100755 index 0000000000..84979dc270 --- /dev/null +++ b/awx/plugins/inventory/vmware_inventory.py @@ -0,0 +1,723 @@ +#!/usr/bin/env python + +# Requirements +# - pyvmomi >= 6.0.0.2016.4 + +# TODO: +# * more jq examples +# * optional folder heriarchy + +""" +$ jq '._meta.hostvars[].config' data.json | head +{ + "alternateguestname": "", + "instanceuuid": "5035a5cd-b8e8-d717-e133-2d383eb0d675", + "memoryhotaddenabled": false, + "guestfullname": "Red Hat Enterprise Linux 7 (64-bit)", + "changeversion": "2016-05-16T18:43:14.977925Z", + "uuid": "4235fc97-5ddb-7a17-193b-9a3ac97dc7b4", + "cpuhotremoveenabled": false, + "vpmcenabled": false, + "firmware": "bios", +""" + +from __future__ import print_function + +import argparse +import atexit +import datetime +import getpass +import jinja2 +import os +import six +import ssl +import sys +import uuid + +from collections import defaultdict +from six.moves import 
configparser +from time import time + +HAS_PYVMOMI = False +try: + from pyVmomi import vim + from pyVim.connect import SmartConnect, Disconnect + + HAS_PYVMOMI = True +except ImportError: + pass + +try: + import json +except ImportError: + import simplejson as json + +hasvcr = False +try: + import vcr + + hasvcr = True +except ImportError: + pass + + +class VMwareMissingHostException(Exception): + pass + + +class VMWareInventory(object): + __name__ = 'VMWareInventory' + + guest_props = False + instances = [] + debug = False + load_dumpfile = None + write_dumpfile = None + maxlevel = 1 + lowerkeys = True + config = None + cache_max_age = None + cache_path_cache = None + cache_path_index = None + cache_dir = None + server = None + port = None + username = None + password = None + validate_certs = True + host_filters = [] + skip_keys = [] + groupby_patterns = [] + + if sys.version_info > (3, 0): + safe_types = [int, bool, str, float, None] + else: + safe_types = [int, long, bool, str, float, None] + iter_types = [dict, list] + + bad_types = ['Array', 'disabledMethod', 'declaredAlarmState'] + + vimTableMaxDepth = { + "vim.HostSystem": 2, + "vim.VirtualMachine": 2, + } + + custom_fields = {} + + # translation table for attributes to fetch for known vim types + if not HAS_PYVMOMI: + vimTable = {} + else: + vimTable = { + vim.Datastore: ['_moId', 'name'], + vim.ResourcePool: ['_moId', 'name'], + vim.HostSystem: ['_moId', 'name'], + } + + @staticmethod + def _empty_inventory(): + return {"_meta": {"hostvars": {}}} + + def __init__(self, load=True): + self.inventory = VMWareInventory._empty_inventory() + + if load: + # Read settings and parse CLI arguments + self.parse_cli_args() + self.read_settings() + + # Check the cache + cache_valid = self.is_cache_valid() + + # Handle Cache + if self.args.refresh_cache or not cache_valid: + self.do_api_calls_update_cache() + else: + self.debugl('loading inventory from cache') + self.inventory = self.get_inventory_from_cache() + + def 
debugl(self, text): + if self.args.debug: + try: + text = str(text) + except UnicodeEncodeError: + text = text.encode('ascii', 'ignore') + print('%s %s' % (datetime.datetime.now(), text)) + + def show(self): + # Data to print + self.debugl('dumping results') + data_to_print = None + if self.args.host: + data_to_print = self.get_host_info(self.args.host) + elif self.args.list: + # Display list of instances for inventory + data_to_print = self.inventory + return json.dumps(data_to_print, indent=2) + + def is_cache_valid(self): + + ''' Determines if the cache files have expired, or if it is still valid ''' + + valid = False + + if os.path.isfile(self.cache_path_cache): + mod_time = os.path.getmtime(self.cache_path_cache) + current_time = time() + if (mod_time + self.cache_max_age) > current_time: + valid = True + + return valid + + def do_api_calls_update_cache(self): + + ''' Get instances and cache the data ''' + + self.inventory = self.instances_to_inventory(self.get_instances()) + self.write_to_cache(self.inventory) + + def write_to_cache(self, data): + + ''' Dump inventory to json file ''' + + with open(self.cache_path_cache, 'wb') as f: + f.write(json.dumps(data)) + + def get_inventory_from_cache(self): + + ''' Read in jsonified inventory ''' + + jdata = None + with open(self.cache_path_cache, 'rb') as f: + jdata = f.read() + return json.loads(jdata) + + def read_settings(self): + + ''' Reads the settings from the vmware_inventory.ini file ''' + + scriptbasename = __file__ + scriptbasename = os.path.basename(scriptbasename) + scriptbasename = scriptbasename.replace('.py', '') + + defaults = {'vmware': { + 'server': '', + 'port': 443, + 'username': '', + 'password': '', + 'validate_certs': True, + 'ini_path': os.path.join(os.path.dirname(__file__), '%s.ini' % scriptbasename), + 'cache_name': 'ansible-vmware', + 'cache_path': '~/.ansible/tmp', + 'cache_max_age': 3600, + 'max_object_level': 1, + 'skip_keys': 'declaredalarmstate,' + 'disabledmethod,' + 
'dynamicproperty,' + 'dynamictype,' + 'environmentbrowser,' + 'managedby,' + 'parent,' + 'childtype,' + 'resourceconfig', + 'alias_pattern': '{{ config.name + "_" + config.uuid }}', + 'host_pattern': '{{ guest.ipaddress }}', + 'host_filters': '{{ guest.gueststate == "running" }}', + 'groupby_patterns': '{{ guest.guestid }},{{ "templates" if config.template else "guests"}}', + 'lower_var_keys': True, + 'custom_field_group_prefix': 'vmware_tag_', + 'groupby_custom_field': False} + } + + if six.PY3: + config = configparser.ConfigParser() + else: + config = configparser.SafeConfigParser() + + # where is the config? + vmware_ini_path = os.environ.get('VMWARE_INI_PATH', defaults['vmware']['ini_path']) + vmware_ini_path = os.path.expanduser(os.path.expandvars(vmware_ini_path)) + config.read(vmware_ini_path) + + # apply defaults + for k, v in defaults['vmware'].items(): + if not config.has_option('vmware', k): + config.set('vmware', k, str(v)) + + # where is the cache? + self.cache_dir = os.path.expanduser(config.get('vmware', 'cache_path')) + if self.cache_dir and not os.path.exists(self.cache_dir): + os.makedirs(self.cache_dir) + + # set the cache filename and max age + cache_name = config.get('vmware', 'cache_name') + self.cache_path_cache = self.cache_dir + "/%s.cache" % cache_name + self.debugl('cache path is %s' % self.cache_path_cache) + self.cache_max_age = int(config.getint('vmware', 'cache_max_age')) + + # mark the connection info + self.server = os.environ.get('VMWARE_SERVER', config.get('vmware', 'server')) + self.debugl('server is %s' % self.server) + self.port = int(os.environ.get('VMWARE_PORT', config.get('vmware', 'port'))) + self.username = os.environ.get('VMWARE_USERNAME', config.get('vmware', 'username')) + self.debugl('username is %s' % self.username) + self.password = os.environ.get('VMWARE_PASSWORD', config.get('vmware', 'password')) + self.validate_certs = os.environ.get('VMWARE_VALIDATE_CERTS', config.get('vmware', 'validate_certs')) + if 
self.validate_certs in ['no', 'false', 'False', False]: + self.validate_certs = False + + self.debugl('cert validation is %s' % self.validate_certs) + + # behavior control + self.maxlevel = int(config.get('vmware', 'max_object_level')) + self.debugl('max object level is %s' % self.maxlevel) + self.lowerkeys = config.get('vmware', 'lower_var_keys') + if type(self.lowerkeys) != bool: + if str(self.lowerkeys).lower() in ['yes', 'true', '1']: + self.lowerkeys = True + else: + self.lowerkeys = False + self.debugl('lower keys is %s' % self.lowerkeys) + self.skip_keys = list(config.get('vmware', 'skip_keys').split(',')) + self.debugl('skip keys is %s' % self.skip_keys) + self.host_filters = list(config.get('vmware', 'host_filters').split(',')) + self.debugl('host filters are %s' % self.host_filters) + self.groupby_patterns = list(config.get('vmware', 'groupby_patterns').split(',')) + self.debugl('groupby patterns are %s' % self.groupby_patterns) + + # Special feature to disable the brute force serialization of the + # virtulmachine objects. The key name for these properties does not + # matter because the values are just items for a larger list. 
+ if config.has_section('properties'): + self.guest_props = [] + for prop in config.items('properties'): + self.guest_props.append(prop[1]) + + # save the config + self.config = config + + def parse_cli_args(self): + + ''' Command line argument processing ''' + + parser = argparse.ArgumentParser(description='Produce an Ansible Inventory file based on PyVmomi') + parser.add_argument('--debug', action='store_true', default=False, + help='show debug info') + parser.add_argument('--list', action='store_true', default=True, + help='List instances (default: True)') + parser.add_argument('--host', action='store', + help='Get all the variables about a specific instance') + parser.add_argument('--refresh-cache', action='store_true', default=False, + help='Force refresh of cache by making API requests to VSphere (default: False - use cache files)') + parser.add_argument('--max-instances', default=None, type=int, + help='maximum number of instances to retrieve') + self.args = parser.parse_args() + + def get_instances(self): + + ''' Get a list of vm instances with pyvmomi ''' + kwargs = {'host': self.server, + 'user': self.username, + 'pwd': self.password, + 'port': int(self.port)} + + if hasattr(ssl, 'SSLContext') and not self.validate_certs: + context = ssl.SSLContext(ssl.PROTOCOL_SSLv23) + context.verify_mode = ssl.CERT_NONE + kwargs['sslContext'] = context + + return self._get_instances(kwargs) + + def _get_instances(self, inkwargs): + + ''' Make API calls ''' + + instances = [] + si = SmartConnect(**inkwargs) + + self.debugl('retrieving all instances') + if not si: + print("Could not connect to the specified host using specified " + "username and password") + return -1 + atexit.register(Disconnect, si) + content = si.RetrieveContent() + + # Create a search container for virtualmachines + self.debugl('creating containerview for virtualmachines') + container = content.rootFolder + viewType = [vim.VirtualMachine] + recursive = True + containerView = 
content.viewManager.CreateContainerView(container, viewType, recursive) + children = containerView.view + for child in children: + # If requested, limit the total number of instances + if self.args.max_instances: + if len(instances) >= self.args.max_instances: + break + instances.append(child) + self.debugl("%s total instances in container view" % len(instances)) + + if self.args.host: + instances = [x for x in instances if x.name == self.args.host] + + instance_tuples = [] + for instance in sorted(instances): + if self.guest_props: + ifacts = self.facts_from_proplist(instance) + else: + ifacts = self.facts_from_vobj(instance) + instance_tuples.append((instance, ifacts)) + self.debugl('facts collected for all instances') + + cfm = content.customFieldsManager + if cfm is not None and cfm.field: + for f in cfm.field: + if f.managedObjectType == vim.VirtualMachine: + self.custom_fields[f.key] = f.name; + self.debugl('%d custom fieds collected' % len(self.custom_fields)) + return instance_tuples + + def instances_to_inventory(self, instances): + + ''' Convert a list of vm objects into a json compliant inventory ''' + + self.debugl('re-indexing instances based on ini settings') + inventory = VMWareInventory._empty_inventory() + inventory['all'] = {} + inventory['all']['hosts'] = [] + for idx, instance in enumerate(instances): + # make a unique id for this object to avoid vmware's + # numerous uuid's which aren't all unique. 
+ thisid = str(uuid.uuid4()) + idata = instance[1] + + # Put it in the inventory + inventory['all']['hosts'].append(thisid) + inventory['_meta']['hostvars'][thisid] = idata.copy() + inventory['_meta']['hostvars'][thisid]['ansible_uuid'] = thisid + + # Make a map of the uuid to the alias the user wants + name_mapping = self.create_template_mapping( + inventory, + self.config.get('vmware', 'alias_pattern') + ) + + # Make a map of the uuid to the ssh hostname the user wants + host_mapping = self.create_template_mapping( + inventory, + self.config.get('vmware', 'host_pattern') + ) + + # Reset the inventory keys + for k, v in name_mapping.items(): + + if not host_mapping or not k in host_mapping: + continue + + # set ansible_host (2.x) + try: + inventory['_meta']['hostvars'][k]['ansible_host'] = host_mapping[k] + # 1.9.x backwards compliance + inventory['_meta']['hostvars'][k]['ansible_ssh_host'] = host_mapping[k] + except Exception: + continue + + if k == v: + continue + + # add new key + inventory['all']['hosts'].append(v) + inventory['_meta']['hostvars'][v] = inventory['_meta']['hostvars'][k] + + # cleanup old key + inventory['all']['hosts'].remove(k) + inventory['_meta']['hostvars'].pop(k, None) + + self.debugl('pre-filtered hosts:') + for i in inventory['all']['hosts']: + self.debugl(' * %s' % i) + # Apply host filters + for hf in self.host_filters: + if not hf: + continue + self.debugl('filter: %s' % hf) + filter_map = self.create_template_mapping(inventory, hf, dtype='boolean') + for k, v in filter_map.items(): + if not v: + # delete this host + inventory['all']['hosts'].remove(k) + inventory['_meta']['hostvars'].pop(k, None) + + self.debugl('post-filter hosts:') + for i in inventory['all']['hosts']: + self.debugl(' * %s' % i) + + # Create groups + for gbp in self.groupby_patterns: + groupby_map = self.create_template_mapping(inventory, gbp) + for k, v in groupby_map.items(): + if v not in inventory: + inventory[v] = {} + inventory[v]['hosts'] = [] + if k not in 
inventory[v]['hosts']: + inventory[v]['hosts'].append(k) + + if self.config.get('vmware', 'groupby_custom_field'): + for k, v in inventory['_meta']['hostvars'].items(): + if 'customvalue' in v: + for tv in v['customvalue']: + if not isinstance(tv['value'], str) and not isinstance(tv['value'], unicode): + continue + + newkey = None + field_name = self.custom_fields[tv['key']] if tv['key'] in self.custom_fields else tv['key'] + values = [] + keylist = map(lambda x: x.strip(), tv['value'].split(',')) + for kl in keylist: + try: + newkey = self.config.get('vmware', 'custom_field_group_prefix') + field_name + '_' + kl + newkey = newkey.strip() + except Exception as e: + self.debugl(e) + values.append(newkey) + for tag in values: + if not tag: + continue + if tag not in inventory: + inventory[tag] = {} + inventory[tag]['hosts'] = [] + if k not in inventory[tag]['hosts']: + inventory[tag]['hosts'].append(k) + + return inventory + + def create_template_mapping(self, inventory, pattern, dtype='string'): + + ''' Return a hash of uuid to templated string from pattern ''' + + mapping = {} + for k, v in inventory['_meta']['hostvars'].items(): + t = jinja2.Template(pattern) + newkey = None + try: + newkey = t.render(v) + newkey = newkey.strip() + except Exception as e: + self.debugl(e) + if not newkey: + continue + elif dtype == 'integer': + newkey = int(newkey) + elif dtype == 'boolean': + if newkey.lower() == 'false': + newkey = False + elif newkey.lower() == 'true': + newkey = True + elif dtype == 'string': + pass + mapping[k] = newkey + return mapping + + def facts_from_proplist(self, vm): + '''Get specific properties instead of serializing everything''' + + rdata = {} + for prop in self.guest_props: + self.debugl('getting %s property for %s' % (prop, vm.name)) + key = prop + if self.lowerkeys: + key = key.lower() + + if '.' 
not in prop: + # props without periods are direct attributes of the parent + rdata[key] = getattr(vm, prop) + else: + # props with periods are subkeys of parent attributes + parts = prop.split('.') + total = len(parts) - 1 + + # pointer to the current object + val = None + # pointer to the current result key + lastref = rdata + + for idx, x in enumerate(parts): + + # if the val wasn't set yet, get it from the parent + if not val: + val = getattr(vm, x) + else: + # in a subkey, get the subprop from the previous attrib + try: + val = getattr(val, x) + except AttributeError as e: + self.debugl(e) + + # lowercase keys if requested + if self.lowerkeys: + x = x.lower() + + # change the pointer or set the final value + if idx != total: + if x not in lastref: + lastref[x] = {} + lastref = lastref[x] + else: + lastref[x] = val + + return rdata + + def facts_from_vobj(self, vobj, level=0): + + ''' Traverse a VM object and return a json compliant data structure ''' + + # pyvmomi objects are not yet serializable, but may be one day ... + # https://github.com/vmware/pyvmomi/issues/21 + + # WARNING: + # Accessing an object attribute will trigger a SOAP call to the remote. + # Increasing the attributes collected or the depth of recursion greatly + # increases runtime duration and potentially memory+network utilization. 
+ + if level == 0: + try: + self.debugl("get facts for %s" % vobj.name) + except Exception as e: + self.debugl(e) + + rdata = {} + + methods = dir(vobj) + methods = [str(x) for x in methods if not x.startswith('_')] + methods = [x for x in methods if x not in self.bad_types] + methods = [x for x in methods if not x.lower() in self.skip_keys] + methods = sorted(methods) + + for method in methods: + # Attempt to get the method, skip on fail + try: + methodToCall = getattr(vobj, method) + except Exception as e: + continue + + # Skip callable methods + if callable(methodToCall): + continue + + if self.lowerkeys: + method = method.lower() + + rdata[method] = self._process_object_types( + methodToCall, + thisvm=vobj, + inkey=method, + ) + + return rdata + + def _process_object_types(self, vobj, thisvm=None, inkey=None, level=0): + ''' Serialize an object ''' + rdata = {} + + if type(vobj).__name__ in self.vimTableMaxDepth and level >= self.vimTableMaxDepth[type(vobj).__name__]: + return rdata + + if vobj is None: + rdata = None + elif type(vobj) in self.vimTable: + rdata = {} + for key in self.vimTable[type(vobj)]: + rdata[key] = getattr(vobj, key) + + elif issubclass(type(vobj), str) or isinstance(vobj, str): + if vobj.isalnum(): + rdata = vobj + else: + rdata = vobj.decode('ascii', 'ignore') + elif issubclass(type(vobj), bool) or isinstance(vobj, bool): + rdata = vobj + elif issubclass(type(vobj), int) or isinstance(vobj, int): + rdata = vobj + elif issubclass(type(vobj), float) or isinstance(vobj, float): + rdata = vobj + elif issubclass(type(vobj), long) or isinstance(vobj, long): + rdata = vobj + elif issubclass(type(vobj), list) or issubclass(type(vobj), tuple): + rdata = [] + try: + vobj = sorted(vobj) + except Exception: + pass + + for idv, vii in enumerate(vobj): + if level + 1 <= self.maxlevel: + vid = self._process_object_types( + vii, + thisvm=thisvm, + inkey=inkey + '[' + str(idv) + ']', + level=(level + 1) + ) + + if vid: + rdata.append(vid) + + elif 
issubclass(type(vobj), dict): + pass + + elif issubclass(type(vobj), object): + methods = dir(vobj) + methods = [str(x) for x in methods if not x.startswith('_')] + methods = [x for x in methods if x not in self.bad_types] + methods = [x for x in methods if not inkey + '.' + x.lower() in self.skip_keys] + methods = sorted(methods) + + for method in methods: + # Attempt to get the method, skip on fail + try: + methodToCall = getattr(vobj, method) + except Exception as e: + continue + + if callable(methodToCall): + continue + + if self.lowerkeys: + method = method.lower() + if level + 1 <= self.maxlevel: + rdata[method] = self._process_object_types( + methodToCall, + thisvm=thisvm, + inkey=inkey + '.' + method, + level=(level + 1) + ) + else: + pass + + return rdata + + def get_host_info(self, host): + + ''' Return hostvars for a single host ''' + + if host in self.inventory['_meta']['hostvars']: + return self.inventory['_meta']['hostvars'][host] + elif self.args.host and self.inventory['_meta']['hostvars']: + match = None + for k, v in self.inventory['_meta']['hostvars'].items(): + if self.inventory['_meta']['hostvars'][k]['name'] == self.args.host: + match = k + break + if match: + return self.inventory['_meta']['hostvars'][match] + else: + raise VMwareMissingHostException('%s not found' % host) + else: + raise VMwareMissingHostException('%s not found' % host) + + +if __name__ == "__main__": + # Run the script + print(VMWareInventory().show()) + + diff --git a/awx/plugins/library/scan_packages.py b/awx/plugins/library/scan_packages.py index 13b28542f6..d5aafc66e6 100755 --- a/awx/plugins/library/scan_packages.py +++ b/awx/plugins/library/scan_packages.py @@ -22,19 +22,19 @@ EXAMPLES = ''' # { # "source": "apt", # "version": "1.0.6-5", -# "architecture": "amd64", +# "arch": "amd64", # "name": "libbz2-1.0" # }, # { # "source": "apt", # "version": "2.7.1-4ubuntu1", -# "architecture": "amd64", +# "arch": "amd64", # "name": "patch" # }, # { # "source": "apt", # "version": 
"4.8.2-19ubuntu1", -# "architecture": "amd64", +# "arch": "amd64", # "name": "gcc-4.8-base" # }, ... ] } } ''' @@ -64,7 +64,7 @@ def deb_package_list(): ac_pkg = apt_cache[package].installed package_details = dict(name=package, version=ac_pkg.version, - architecture=ac_pkg.architecture, + arch=ac_pkg.architecture, source='apt') installed_packages.append(package_details) return installed_packages diff --git a/awx/settings/defaults.py b/awx/settings/defaults.py index 096d756959..9f1585c072 100644 --- a/awx/settings/defaults.py +++ b/awx/settings/defaults.py @@ -73,7 +73,7 @@ DATABASES = { # timezone as the operating system. # If running in a Windows environment this must be set to the same as your # system time zone. -TIME_ZONE = 'America/New_York' +TIME_ZONE = None # Language code for this installation. All choices can be found here: # http://www.i18nguy.com/unicode/language-identifiers.html @@ -152,9 +152,30 @@ REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST'] # Note: This setting may be overridden by database settings. STDOUT_MAX_BYTES_DISPLAY = 1048576 +# Returned in the header on event API lists as a recommendation to the UI +# on how many events to display before truncating/hiding +RECOMMENDED_MAX_EVENTS_DISPLAY_HEADER = 4000 + +# The maximum size of the ansible callback event's res data structure; +# beyond this limit the value will be removed +MAX_EVENT_RES_DATA = 700000 + # Note: This setting may be overridden by database settings. 
EVENT_STDOUT_MAX_BYTES_DISPLAY = 1024 +JOB_EVENT_WORKERS = 4 + +JOB_EVENT_MAX_QUEUE_SIZE = 10000 + +# Disallow sending session cookies over insecure connections +SESSION_COOKIE_SECURE = True + +# Disallow sending csrf cookies over insecure connections +CSRF_COOKIE_SECURE = True + +# Limit CSRF cookies to browser sessions +CSRF_COOKIE_AGE = None + TEMPLATE_CONTEXT_PROCESSORS = ( # NOQA 'django.contrib.auth.context_processors.auth', 'django.core.context_processors.debug', @@ -223,6 +244,7 @@ INSTALLED_APPS = ( INTERNAL_IPS = ('127.0.0.1',) +MAX_PAGE_SIZE = 200 REST_FRAMEWORK = { 'DEFAULT_PAGINATION_CLASS': 'awx.api.pagination.Pagination', 'PAGE_SIZE': 25, @@ -359,7 +381,7 @@ os.environ.setdefault('DJANGO_LIVE_TEST_SERVER_ADDRESS', 'localhost:9013-9199') # Initialize Django-Celery. djcelery.setup_loader() -BROKER_URL = 'redis://localhost/' +BROKER_URL = 'amqp://guest:guest@localhost:5672//' CELERY_DEFAULT_QUEUE = 'default' CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' @@ -367,6 +389,7 @@ CELERY_ACCEPT_CONTENT = ['json'] CELERY_TRACK_STARTED = True CELERYD_TASK_TIME_LIMIT = None CELERYD_TASK_SOFT_TIME_LIMIT = None +CELERYD_POOL_RESTARTS = True CELERYBEAT_SCHEDULER = 'celery.beat.PersistentScheduler' CELERYBEAT_MAX_LOOP_INTERVAL = 60 CELERY_RESULT_BACKEND = 'djcelery.backends.database:DatabaseBackend' @@ -491,6 +514,9 @@ SOCIAL_AUTH_GITHUB_TEAM_SECRET = '' SOCIAL_AUTH_GITHUB_TEAM_ID = '' SOCIAL_AUTH_GITHUB_TEAM_SCOPE = ['user:email', 'read:org'] +SOCIAL_AUTH_AZUREAD_OAUTH2_KEY = '' +SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET = '' + SOCIAL_AUTH_SAML_SP_ENTITY_ID = '' SOCIAL_AUTH_SAML_SP_PUBLIC_CERT = '' SOCIAL_AUTH_SAML_SP_PRIVATE_KEY = '' @@ -518,23 +544,12 @@ ANSIBLE_FORCE_COLOR = True # the celery task. 
AWX_TASK_ENV = {} -# Maximum number of job events processed by the callback receiver worker process -# before it recycles -JOB_EVENT_RECYCLE_THRESHOLD = 3000 - -# Number of workers used to proecess job events in parallel -JOB_EVENT_WORKERS = 4 - -# Maximum number of job events that can be waiting on a single worker queue before -# it can be skipped as too busy -JOB_EVENT_MAX_QUEUE_SIZE = 100 - # Flag to enable/disable updating hosts M2M when saving job events. CAPTURE_JOB_EVENT_HOSTS = False # Enable bubblewrap support for running jobs (playbook runs only). # Note: This setting may be overridden by database settings. -AWX_PROOT_ENABLED = False +AWX_PROOT_ENABLED = True # Command/path to bubblewrap. AWX_PROOT_CMD = 'bwrap' @@ -629,8 +644,10 @@ EC2_REGION_NAMES = { 'us-east-2': _('US East (Ohio)'), 'us-west-2': _('US West (Oregon)'), 'us-west-1': _('US West (Northern California)'), + 'ca-central-1': _('Canada (Central)'), 'eu-central-1': _('EU (Frankfurt)'), 'eu-west-1': _('EU (Ireland)'), + 'eu-west-2': _('EU (London)'), 'ap-southeast-1': _('Asia Pacific (Singapore)'), 'ap-southeast-2': _('Asia Pacific (Sydney)'), 'ap-northeast-1': _('Asia Pacific (Tokyo)'), @@ -666,11 +683,11 @@ VMWARE_REGIONS_BLACKLIST = [] # Inventory variable name/values for determining whether a host is # active in vSphere. -VMWARE_ENABLED_VAR = 'vmware_powerState' -VMWARE_ENABLED_VALUE = 'poweredOn' +VMWARE_ENABLED_VAR = 'guest.gueststate' +VMWARE_ENABLED_VALUE = 'running' # Inventory variable name containing the unique instance ID. -VMWARE_INSTANCE_ID_VAR = 'vmware_uuid' +VMWARE_INSTANCE_ID_VAR = 'config.instanceuuid' # Filter for allowed group and host names when importing inventory # from VMware. 
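
The VMware defaults above now point at dotted fact paths (`guest.gueststate`, `config.instanceuuid`) instead of flat `vmware_*` keys, matching the nested hostvars the rewritten inventory script emits. A minimal sketch of how a dotted variable name can be resolved against those nested hostvars during import — `lookup` and `is_enabled` are illustrative helpers, not Tower APIs:

```python
# Illustrative sketch: resolving dotted enabled/instance-id variables against
# the nested hostvars produced by the new VMware inventory script.
ENABLED_VAR = 'guest.gueststate'      # mirrors VMWARE_ENABLED_VAR above
ENABLED_VALUE = 'running'             # mirrors VMWARE_ENABLED_VALUE above


def lookup(hostvars, dotted):
    """Walk a dotted path through nested dicts; None if any key is missing."""
    value = hostvars
    for part in dotted.split('.'):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value


def is_enabled(hostvars):
    return lookup(hostvars, ENABLED_VAR) == ENABLED_VALUE


vm = {'guest': {'gueststate': 'running'}, 'config': {'instanceuuid': 'abc-123'}}
print(is_enabled(vm))                       # True
print(lookup(vm, 'config.instanceuuid'))    # abc-123
```

A powered-off or fact-less VM simply resolves to `None` and is treated as disabled, which is why the dotted form needs no special-casing for missing keys.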
@@ -780,6 +797,8 @@ SATELLITE6_GROUP_FILTER = r'^.+$' SATELLITE6_HOST_FILTER = r'^.+$' SATELLITE6_EXCLUDE_EMPTY_GROUPS = True SATELLITE6_INSTANCE_ID_VAR = 'foreman.id' +SATELLITE6_GROUP_PREFIX = 'foreman_' +SATELLITE6_GROUP_PATTERNS = ["{app}-{tier}-{color}", "{app}-{color}", "{app}", "{tier}"] # --------------------- # ----- CloudForms ----- @@ -873,7 +892,7 @@ LOGGING = { }, 'http_receiver': { 'class': 'awx.main.utils.handlers.HTTPSNullHandler', - 'level': 'INFO', + 'level': 'DEBUG', 'formatter': 'json', 'host': '', }, @@ -972,7 +991,7 @@ LOGGING = { 'handlers': ['callback_receiver'], }, 'awx.main.tasks': { - 'handlers': ['task_system'] + 'handlers': ['task_system'], }, 'awx.main.scheduler': { 'handlers': ['task_system'], @@ -996,22 +1015,10 @@ LOGGING = { 'propagate': False, }, 'awx.analytics': { - 'handlers': ['null'], + 'handlers': ['http_receiver'], 'level': 'INFO', 'propagate': False }, - 'awx.analytics.job_events': { - 'handlers': ['null'], - 'level': 'INFO' - }, - 'awx.analytics.activity_stream': { - 'handlers': ['null'], - 'level': 'INFO' - }, - 'awx.analytics.system_tracking': { - 'handlers': ['null'], - 'level': 'INFO' - }, 'django_auth_ldap': { 'handlers': ['console', 'file', 'tower_warnings'], 'level': 'DEBUG', diff --git a/awx/settings/development.py b/awx/settings/development.py index f2d72a1113..1326c12814 100644 --- a/awx/settings/development.py +++ b/awx/settings/development.py @@ -24,11 +24,11 @@ ALLOWED_HOSTS = ['*'] mimetypes.add_type("image/svg+xml", ".svg", True) mimetypes.add_type("image/svg+xml", ".svgz", True) -MONGO_HOST = '127.0.0.1' -MONGO_PORT = 27017 -MONGO_USERNAME = None -MONGO_PASSWORD = None -MONGO_DB = 'system_tracking_dev' +# Allow sending session cookies over insecure connections in development +SESSION_COOKIE_SECURE = False + +# Allow sending csrf cookies over insecure connections in development +CSRF_COOKIE_SECURE = False # Override django.template.loaders.cached.Loader in defaults.py TEMPLATE_LOADERS = ( @@ -82,14 +82,13 @@ PASSWORD_HASHERS = ( # 
Configure a default UUID for development only. SYSTEM_UUID = '00000000-0000-0000-0000-000000000000' -# Store a snapshot of default settings at this point (only for migrating from -# file to database settings). -if 'migrate_to_database_settings' in sys.argv: - DEFAULTS_SNAPSHOT = {} - this_module = sys.modules[__name__] - for setting in dir(this_module): - if setting == setting.upper(): - DEFAULTS_SNAPSHOT[setting] = copy.deepcopy(getattr(this_module, setting)) +# Store a snapshot of default settings at this point before loading any +# customizable config files. +DEFAULTS_SNAPSHOT = {} +this_module = sys.modules[__name__] +for setting in dir(this_module): + if setting == setting.upper(): + DEFAULTS_SNAPSHOT[setting] = copy.deepcopy(getattr(this_module, setting)) # If there is an `/etc/tower/settings.py`, include it. # If there is a `/etc/tower/conf.d/*.py`, include them. diff --git a/awx/settings/local_settings.py.docker_compose b/awx/settings/local_settings.py.docker_compose index a439d17989..1202b1cbe1 100644 --- a/awx/settings/local_settings.py.docker_compose +++ b/awx/settings/local_settings.py.docker_compose @@ -114,7 +114,7 @@ SYSTEM_UUID = '00000000-0000-0000-0000-000000000000' # timezone as the operating system. # If running in a Windows environment this must be set to the same as your # system time zone. -TIME_ZONE = 'America/New_York' +TIME_ZONE = None # Language code for this installation. All choices can be found here: # http://www.i18nguy.com/unicode/language-identifiers.html diff --git a/awx/settings/local_settings.py.example b/awx/settings/local_settings.py.example index 20217fa538..2996a8a28e 100644 --- a/awx/settings/local_settings.py.example +++ b/awx/settings/local_settings.py.example @@ -48,7 +48,7 @@ if is_testing(sys.argv): MONGO_DB = 'system_tracking_test' # Celery AMQP configuration. 
-BROKER_URL = 'redis://localhost/' +BROKER_URL = 'amqp://guest:guest@localhost:5672' # Set True to enable additional logging from the job_event_callback plugin JOB_CALLBACK_DEBUG = False @@ -71,7 +71,7 @@ SYSTEM_UUID = '00000000-0000-0000-0000-000000000000' # timezone as the operating system. # If running in a Windows environment this must be set to the same as your # system time zone. -TIME_ZONE = 'America/New_York' +TIME_ZONE = None # Language code for this installation. All choices can be found here: # http://www.i18nguy.com/unicode/language-identifiers.html diff --git a/awx/settings/production.py b/awx/settings/production.py index 103f775d86..f056a4ea31 100644 --- a/awx/settings/production.py +++ b/awx/settings/production.py @@ -57,14 +57,13 @@ LOGGING['handlers']['fact_receiver']['filename'] = '/var/log/tower/fact_receiver LOGGING['handlers']['system_tracking_migrations']['filename'] = '/var/log/tower/tower_system_tracking_migrations.log' LOGGING['handlers']['rbac_migrations']['filename'] = '/var/log/tower/tower_rbac_migrations.log' -# Store a snapshot of default settings at this point (only for migrating from -# file to database settings). -if 'migrate_to_database_settings' in sys.argv: - DEFAULTS_SNAPSHOT = {} - this_module = sys.modules[__name__] - for setting in dir(this_module): - if setting == setting.upper(): - DEFAULTS_SNAPSHOT[setting] = copy.deepcopy(getattr(this_module, setting)) +# Store a snapshot of default settings at this point before loading any +# customizable config files. +DEFAULTS_SNAPSHOT = {} +this_module = sys.modules[__name__] +for setting in dir(this_module): + if setting == setting.upper(): + DEFAULTS_SNAPSHOT[setting] = copy.deepcopy(getattr(this_module, setting)) # Load settings from any .py files in the global conf.d directory specified in # the environment, defaulting to /etc/tower/conf.d/. 
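
Both development.py and production.py above now take the settings snapshot unconditionally, before any `/etc/tower/` config files are loaded, so later code can tell file defaults apart from database-driven overrides. The pattern is simple enough to sketch in isolation — the settings here are stand-ins; only the UPPERCASE-name filter and the deepcopy matter:

```python
import copy
import sys

# Sketch of the DEFAULTS_SNAPSHOT pattern: snapshot every UPPERCASE
# module-level setting before site-specific config is applied. This module
# is a stand-in for awx.settings.production.
this_module = sys.modules[__name__]

TIME_ZONE = None                                     # example settings
BROKER_URL = 'amqp://guest:guest@localhost:5672//'
_helper = 'not a setting'                            # lowercase: skipped

DEFAULTS_SNAPSHOT = {}
for setting in dir(this_module):
    if setting == setting.upper():
        DEFAULTS_SNAPSHOT[setting] = copy.deepcopy(getattr(this_module, setting))

# Later, an overridden value no longer matches its snapshotted default:
BROKER_URL = 'amqp://user:pass@rabbit:5672//'
print(BROKER_URL != DEFAULTS_SNAPSHOT['BROKER_URL'])   # True
```

The `deepcopy` matters for mutable settings (dicts like `LOGGING`): a shallow reference would silently track in-place mutations and make the snapshot useless for diffing.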
diff --git a/awx/sso/conf.py b/awx/sso/conf.py index 4bde08e55a..355e4d6af2 100644 --- a/awx/sso/conf.py +++ b/awx/sso/conf.py @@ -269,6 +269,7 @@ register( 'AUTH_LDAP_USER_DN_TEMPLATE', field_class=fields.LDAPDNWithUserField, allow_blank=True, + allow_null=True, default='', label=_('LDAP User DN Template'), help_text=_('Alternative to user search, if user DNs are all of the same ' @@ -331,12 +332,14 @@ register( category=_('LDAP'), category_slug='ldap', feature_required='ldap', + default='MemberDNGroupType', ) register( 'AUTH_LDAP_REQUIRE_GROUP', field_class=fields.LDAPDNField, allow_blank=True, + allow_null=True, default='', label=_('LDAP Require Group'), help_text=_('Group DN required to login. If specified, user must be a member ' @@ -353,6 +356,7 @@ register( 'AUTH_LDAP_DENY_GROUP', field_class=fields.LDAPDNField, allow_blank=True, + allow_null=True, default='', label=_('LDAP Deny Group'), help_text=_('Group DN denied from login. If specified, user will not be ' @@ -924,13 +928,12 @@ register( register( 'SOCIAL_AUTH_SAML_SP_ENTITY_ID', - field_class=fields.URLField, - schemes=('http', 'https'), + field_class=fields.CharField, allow_blank=True, default='', label=_('SAML Service Provider Entity ID'), - help_text=_('Set to a URL for a domain name you own (does not need to be a ' - 'valid URL; only used as a unique ID).'), + help_text=_('The application-defined unique identifier used as the ' + 'audience of the SAML service provider (SP) configuration.'), category=_('SAML'), category_slug='saml', feature_required='enterprise_auth', diff --git a/awx/sso/fields.py b/awx/sso/fields.py index fdff68130f..5d95296e8e 100644 --- a/awx/sso/fields.py +++ b/awx/sso/fields.py @@ -153,6 +153,12 @@ class LDAPDNField(fields.CharField): super(LDAPDNField, self).__init__(**kwargs) self.validators.append(validate_ldap_dn) + def run_validation(self, data=empty): + value = super(LDAPDNField, self).run_validation(data) + # django-auth-ldap expects DN fields (like 
AUTH_LDAP_REQUIRE_GROUP) + # to be either a valid string or ``None`` (not an empty string) + return None if value == '' else value + class LDAPDNWithUserField(fields.CharField): @@ -160,6 +166,12 @@ class LDAPDNWithUserField(fields.CharField): super(LDAPDNWithUserField, self).__init__(**kwargs) self.validators.append(validate_ldap_dn_with_user) + def run_validation(self, data=empty): + value = super(LDAPDNWithUserField, self).run_validation(data) + # django-auth-ldap expects DN fields (like AUTH_LDAP_USER_DN_TEMPLATE) + # to be either a valid string or ``None`` (not an empty string) + return None if value == '' else value + class LDAPFilterField(fields.CharField): @@ -299,7 +311,10 @@ class LDAPGroupTypeField(fields.ChoiceField): data = super(LDAPGroupTypeField, self).to_internal_value(data) if not data: return None - return getattr(django_auth_ldap.config, data)() + if data.endswith('MemberDNGroupType'): + return getattr(django_auth_ldap.config, data)(member_attr='member') + else: + return getattr(django_auth_ldap.config, data)() class LDAPUserFlagsField(fields.DictField): @@ -375,7 +390,7 @@ class BaseDictWithChildField(fields.DictField): child_field = self.child_fields.get(k, None) if child_field: value[k] = child_field.to_representation(v) - elif allow_unknown_keys: + elif self.allow_unknown_keys: value[k] = v return value diff --git a/awx/sso/views.py b/awx/sso/views.py index a25aabf511..2a68deec1a 100644 --- a/awx/sso/views.py +++ b/awx/sso/views.py @@ -25,6 +25,8 @@ logger = logging.getLogger('awx.sso.views') class BaseRedirectView(RedirectView): + permanent = True + def get_redirect_url(self, *args, **kwargs): last_path = self.request.COOKIES.get('lastPath', '') last_path = urllib.quote(urllib.unquote(last_path).strip('"')) @@ -83,7 +85,11 @@ class MetadataView(View): 'saml', redirect_uri=complete_url, ) - metadata, errors = saml_backend.generate_metadata_xml() + try: + metadata, errors = saml_backend.generate_metadata_xml() + except Exception as e: + 
logger.exception('unable to generate SAML metadata') + errors = e if not errors: return HttpResponse(content=metadata, content_type='text/xml') else: diff --git a/awx/static/api/api.css b/awx/static/api/api.css index 61d51fae12..3b18c4273d 100644 --- a/awx/static/api/api.css +++ b/awx/static/api/api.css @@ -151,6 +151,9 @@ body .prettyprint .lit { body .prettyprint .str { color: #D9534F; } +body div.ansi_back { + display: inline-block; +} body .well.tab-content { padding: 20px; diff --git a/awx/templates/rest_framework/api.html b/awx/templates/rest_framework/api.html index 746521f542..3b75c4a35c 100644 --- a/awx/templates/rest_framework/api.html +++ b/awx/templates/rest_framework/api.html @@ -52,7 +52,7 @@
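
The `run_validation` overrides added to awx/sso/fields.py above both reduce to the same normalization, paired with the new `allow_null=True` registrations in awx/sso/conf.py. A framework-free sketch — `LDAPDNSetting` is an illustrative stand-in for the DRF field subclasses, which additionally run DN validators:

```python
class LDAPDNSetting:
    """Illustrative stand-in for LDAPDNField / LDAPDNWithUserField."""

    def run_validation(self, data):
        # A real DRF field would run to_internal_value() and validators here.
        value = data
        # django-auth-ldap expects DN settings such as AUTH_LDAP_REQUIRE_GROUP
        # to be either a valid DN string or None -- never the empty string the
        # API uses to represent "blank".
        return None if value == '' else value


field = LDAPDNSetting()
print(field.run_validation(''))                                   # None
print(field.run_validation('cn=tower,ou=groups,dc=example,dc=com'))
```

Collapsing `''` to `None` at the field layer keeps every consumer of the setting from having to repeat the blank-vs-unset distinction.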
diff --git a/awx/ui/README.md b/awx/ui/README.md index c6f493357a..adc8475283 100644 --- a/awx/ui/README.md +++ b/awx/ui/README.md @@ -1,68 +1,81 @@ -### Table of Contents +# Ansible Tower UI -1. [Requirements](#requirements) -2. [Usage](#usage) -3. [Testing](#testing) -4. [Adding new dependencies](#deps) -5. [Using libraries without modular exports/depedency management](#polyfill) -6. [Environment configuration](#environment) -7. [NPM scripts](#scripts) +## Requirements -### Requirements +### Node / NPM + +Tower currently requires the 6.x LTS version of Node and NPM. + +macOS installer: [https://nodejs.org/dist/latest-v6.x/node-v6.9.4.pkg](https://nodejs.org/dist/latest-v6.x/node-v6.9.4.pkg) + +RHEL / CentOS / Fedora: -* A supported version of node + npm, constrained in `package.json`. -```json -"engines": { - "node": "^6.3.1", - "npm": "^3.10.3" -} ``` -* C/C++ compiler tools, like GCC. Bundled in [Command Line Tools for Xcode](https://developer.apple.com/xcode/) -* [node-gyp](https://github.com/nodejs/node-gyp) -``` -npm install -g node-gyp +$ curl --silent --location https://rpm.nodesource.com/setup_6.x | bash - +$ yum install nodejs ``` -### Usage +### Other Dependencies -The following Makefile targets are available from the root dir of `ansible-tower` +On macOS, install the Command Line Tools: -**Native Docker** -Specify the container ID of a `tools_tower` instance. -`DOCKER_CID=containerID make ui-docker-cid` +``` +$ xcode-select --install +``` -**Docker Machine** -Specify the name of a docker-machine. -`DOCKER_MACHINE_NAME=default make ui-docker-machine` +RHEL / CentOS / Fedora: -Build a minified/uglified **release candidate** -`make ui-release` +``` +$ yum install bzip2 gcc-c++ git make +``` -### Testing +## Usage + +### Starting the UI + +First, the Tower API will need to be running. See [CONTRIBUTING.md](../../CONTRIBUTING.md). 
+ +When using Docker for Mac or native Docker on Linux: + +``` +$ make ui-docker +``` + +When using Docker Machine: + +``` +$ DOCKER_MACHINE_NAME=default make ui-docker-machine +``` + +### Running Tests Run unit tests locally, poll for changes to both source and test files, launch tests in supported browser engines: -`make ui-test` + +``` +$ make ui-test +``` Run unit tests in a CI environment (Jenkins) -`make ui-test-ci` -Run Protractor (E2E) tests in Saucelabs: -`make ui-test-saucelabs` + +``` +$ make ui-test-ci +``` -### Adding new dependencies +### Adding new dependencies -From the root dir of `ansible-tower`: -#### Add/update a bundled vendor dependency +#### Add / update a bundled vendor dependency + 1. `npm install --prefix awx/ui --save some-frontend-package@1.2.3` 2. Add `'some-package'` to `var vendorFiles` in `./grunt-tasks/webpack.js` 3. `npm --prefix awx/ui shrinkwrap` to freeze current dependency resolution -#### Add/update a dependecy in the build/test pipeline: +#### Add / update a dependency in the build/test pipeline + 1. `npm install --prefix awx/ui --save-dev some-toolchain-package@1.2.3` 2. `npm --prefix awx/ui shrinkwrap` to freeze current dependency resolution -### Polyfills, shims, patches +### Polyfills, shims, patches The Webpack pipeline will prefer module patterns in this order: CommonJS, AMD, UMD. For a comparison of supported patterns, refer to [https://webpack.github.io/docs/comparison.html](Webpack's docs). @@ -74,6 +87,7 @@ Some javascript libraries do not export their contents as a module, or depend on // Tower source code depends on the lodash library being available as _ _.uniq([1,2,3,1]) // will throw error undefined ``` + ```js // webpack.config.js plugins: [ @@ -82,6 +96,7 @@ }) ] ``` + ```js // the following requirement is inserted by webpack at build time var _ = require('lodash'); @@ -89,12 +104,12 @@ _.uniq([1,2,3,1]) ``` 2. 
Use [`imports-loader`](https://webpack.github.io/docs/shimming-modules.html#importing) to inject requirements into the namespace of vendor code at import time. Use [`exports-loader`](https://webpack.github.io/docs/shimming-modules.html#exporting) to conventionally export vendor code lacking a conventional export pattern. -3. [Apply a functional patch](https://gist.github.com/leigh-johnson/070159d3fd780d6d8da6e13625234bb3). A webpack plugin is the correct choice for a functional patch if your patch needs to access events in a build's lifecycle. A webpack loader is preferable if you need to compile and export a custom pattern of library modules. -4. [Submit patches to libraries without modular exports](https://github.com/leigh-johnson/ngToast/commit/fea95bb34d27687e414619b4f72c11735d909f93) - the internet will thank you +3. [Apply a functional patch](https://gist.github.com/leigh-johnson/070159d3fd780d6d8da6e13625234bb3). A webpack plugin is the correct choice for a functional patch if your patch needs to access events in a build's lifecycle. A webpack loader is preferable if you need to compile and export a custom pattern of library modules. +4. [Submit patches to libraries without modular exports](https://github.com/leigh-johnson/ngToast/commit/fea95bb34d27687e414619b4f72c11735d909f93) - the internet will thank you Some javascript libraries might only get one module pattern right. -### Environment configuration - used in development/test builds +### Environment configuration - used in development / test builds Build tasks are parameterized with environment variables. @@ -113,28 +128,30 @@ Environment variables can accessed in a Javascript via `PROCESS.env`. 
Example usage in `npm run build-docker-machine`: ```bash -docker-machine ssh $DOCKER_MACHINE_NAME -f -N -L ${npm_package_config_websocket_port}:localhost:${npm_package_config_websocket_port}; ip=$(docker-machine ip $DOCKER_MACHINE_NAME); echo npm set ansible-tower:django_host ${ip}; grunt dev +$ docker-machine ssh $DOCKER_MACHINE_NAME -f -N -L ${npm_package_config_websocket_port}:localhost:${npm_package_config_websocket_port}; ip=$(docker-machine ip $DOCKER_MACHINE_NAME); echo npm set ansible-tower:django_host ${ip}; grunt dev ``` Example usage in an `npm test` script target: -```bash +``` npm_package_config_websocket_port=mock_websocket_port npm_package_config_django_port=mock_api_port npm_package_config_django_host=mock_api_host npm run test:someMockIntegration ``` You'll usually want to pipe and set vars prior to running a script target: -```bash -npm set ansible-tower:websocket_host ${mock_host}; npm run script-name +``` +$ npm set ansible-tower:websocket_host ${mock_host}; npm run script-name ``` -### NPM Scripts +### NPM Scripts Examples: ```json - "scripts": { - "pretest": "echo I run immediately before 'npm test' executes", - "posttest": "echo I run immediately after 'npm test' exits", - "test": "karma start karma.conf.js" + { + "scripts": { + "pretest": "echo I run immediately before 'npm test' executes", + "posttest": "echo I run immediately after 'npm test' exits", + "test": "karma start karma.conf.js" + } } ``` diff --git a/awx/ui/client/legacy-styles/ansible-ui.less b/awx/ui/client/legacy-styles/ansible-ui.less index e925bff5d4..a511ffa33f 100644 --- a/awx/ui/client/legacy-styles/ansible-ui.less +++ b/awx/ui/client/legacy-styles/ansible-ui.less @@ -729,18 +729,6 @@ legend { .navigation { margin: 15px 0 15px 0; } - .modal-body { - .alert { - padding: 0; - border: none; - margin: 0; - } - .alert-danger { - background-color: @default-bg; - border: none; - color: @default-interface-txt; - } - } .footer-navigation { margin: 10px 0 10px 0; @@ -1104,6 
+1092,7 @@ input[type="checkbox"].checkbox-no-label { .icon-job-stopped:before, .icon-job-error:before, .icon-job-canceled:before, + .icon-job-stdout-download-tooltip:before, .icon-job-unreachable:before { content: "\f06a"; } @@ -1141,6 +1130,7 @@ input[type="checkbox"].checkbox-no-label { .icon-job-stopped, .icon-job-error, .icon-job-failed, + .icon-job-stdout-download-tooltip, .icon-job-canceled { color: @red; } @@ -1638,17 +1628,19 @@ tr td button i { } /* overrides to TB modal */ +.modal-content { + padding: 20px; +} .modal-header { color: @default-interface-txt; - margin: .1em 0; white-space: nowrap; width: 90%; overflow: hidden; text-overflow: ellipsis; width: 100%; border: none; - padding: 12px 14px 0 12px; + padding: 0; } .modal { @@ -1677,8 +1669,18 @@ tr td button i { } .modal-body { - padding: 20px 14px 7px 14px; min-height: 120px; + padding: 20px 0; + + .alert { + padding: 10px; + margin: 0; + } + .alert-danger { + background-color: @default-bg; + border: none; + color: @default-interface-txt; + } } #prompt-modal .modal-body { @@ -1690,15 +1692,15 @@ tr td button i { } .modal-footer { - padding: .3em 1em .5em .4em; + padding: 0; border: none; + margin-top: 0; .btn.btn-primary { text-transform: uppercase; background-color: @default-succ; border-color: @default-succ; padding: 5px 15px; - margin: .5em .4em .5em 0; cursor: pointer; &:hover { @@ -1720,8 +1722,7 @@ tr td button i { /* PW progress bar */ -.pw-progress { - margin-top: 10px; +.pw-progress { margin-top: 10px; li { line-height: normal; @@ -2219,10 +2220,6 @@ a:hover { font-family: 'Open Sans'; } -.modal-body .alert { - padding: 10px; -} - .WorkflowBadge{ background-color: @b7grey; border-radius: 10px; @@ -2237,3 +2234,8 @@ a:hover { padding-left: 2px; width: 14px; } + +button[disabled], +html input[disabled] { + cursor: not-allowed; +} diff --git a/awx/ui/client/legacy-styles/forms.less b/awx/ui/client/legacy-styles/forms.less index 89ff33dd03..570a096c7e 100644 --- 
a/awx/ui/client/legacy-styles/forms.less +++ b/awx/ui/client/legacy-styles/forms.less @@ -44,11 +44,10 @@ color: @list-header-txt; font-size: 14px; font-weight: bold; - padding-bottom: 25px; - min-height: 45px; word-break: break-all; max-width: 90%; word-wrap: break-word; + margin-bottom: 20px; } .Form-secondaryTitle{ diff --git a/awx/ui/client/legacy-styles/lists.less b/awx/ui/client/legacy-styles/lists.less index d1633a67d1..351bf3988b 100644 --- a/awx/ui/client/legacy-styles/lists.less +++ b/awx/ui/client/legacy-styles/lists.less @@ -43,7 +43,7 @@ table, tbody { border-top-right-radius: 5px; } -.List-tableHeader--actions { +.List-tableHeader--info, .List-tableHeader--actions { text-align: right; } @@ -387,6 +387,21 @@ table, tbody { border-left: 4px solid transparent; } +.List-infoCell { + display: flex; + justify-content: flex-end; + font-size: 0.8em; + cursor: pointer; +} + +.List-infoCell a { + color: @default-icon; +} + +.List-infoCell a:hover, .List-infoCell a:focus { + color: @default-interface-txt; +} + @media (max-width: 991px) { .List-searchWidget + .List-searchWidget { margin-top: 20px; diff --git a/awx/ui/client/src/about/about.partial.html b/awx/ui/client/src/about/about.partial.html index bcb2a5cd33..867a5f6035 100644 --- a/awx/ui/client/src/about/about.partial.html +++ b/awx/ui/client/src/about/about.partial.html @@ -11,19 +11,19 @@
  ________________
-/  Tower {{version_str}} \\
-\\{{version}}/
+/  Tower {{version_str}} \
+\{{version}}/
  ----------------
-        \\   ^__^
-         \\  (oo)\\_______
-            (__)      A )\\/\\
+        \   ^__^
+         \  (oo)\_______
+            (__)      A )\/\
                 ||----w |
                 ||     ||
 
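The about.partial.html hunk above halves the backslashes in the cowsay art: the partial is rendered as raw HTML text, which has no escape layer, so the doubled `\\` (a convention borrowed from string literals) showed up on screen as two literal backslashes. A minimal sketch of the distinction — variable names here are illustrative, not from the codebase:

```javascript
// Why the cow art needed single backslashes: escaping belongs to the
// layer that interprets it. In a JavaScript string literal, "\\" is a
// single backslash character after escape processing:
const fromJsLiteral = '\\   ^__^';

// Raw text (what an HTML partial contains) treats every character
// literally; String.raw models that by disabling escape processing:
const fromRawText = String.raw`\   ^__^`;

console.log(fromJsLiteral === fromRawText); // true: both are `\   ^__^`

// A doubled backslash in raw text stays doubled -- which is exactly
// what the partial used to render before this fix:
console.log('\\\\'.length); // 2 (two literal backslashes)
```

The takeaway: `\\` is only an escape for one backslash inside a string literal; in the template's raw text there is nothing to escape, so single backslashes render correctly.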
diff --git a/awx/ui/client/src/access/add-rbac-resource/main.js b/awx/ui/client/src/access/add-rbac-resource/main.js new file mode 100644 index 0000000000..346e6106c6 --- /dev/null +++ b/awx/ui/client/src/access/add-rbac-resource/main.js @@ -0,0 +1,12 @@ +/************************************************* + * Copyright (c) 2015 Ansible, Inc. + * + * All Rights Reserved + *************************************************/ + +import addRbacResourceDirective from './rbac-resource.directive'; +import rbacMultiselect from '../rbac-multiselect/main'; + +export default + angular.module('AddRbacResourceModule', [rbacMultiselect.name]) + .directive('addRbacResource', addRbacResourceDirective); diff --git a/awx/ui/client/src/access/addPermissions/addPermissions.controller.js b/awx/ui/client/src/access/add-rbac-resource/rbac-resource.controller.js similarity index 89% rename from awx/ui/client/src/access/addPermissions/addPermissions.controller.js rename to awx/ui/client/src/access/add-rbac-resource/rbac-resource.controller.js index d5774e7c79..e40589a3f6 100644 --- a/awx/ui/client/src/access/addPermissions/addPermissions.controller.js +++ b/awx/ui/client/src/access/add-rbac-resource/rbac-resource.controller.js @@ -18,16 +18,7 @@ export default ['$rootScope', '$scope', 'GetBasePath', 'Rest', '$q', 'Wait', 'Pr // the object permissions are being added to scope.object = scope.resourceData.data; // array for all possible roles for the object - scope.roles = Object - .keys(scope.object.summary_fields.object_roles) - .map(function(key) { - return { - value: scope.object.summary_fields - .object_roles[key].id, - label: scope.object.summary_fields - .object_roles[key].name - }; - }); + scope.roles = scope.object.summary_fields.object_roles; // TODO: get working with api // array w roles and descriptions for key @@ -44,6 +35,11 @@ export default ['$rootScope', '$scope', 'GetBasePath', 'Rest', '$q', 'Wait', 'Pr scope.showKeyPane = false; + scope.removeObject = function(obj){ + 
_.remove(scope.allSelected, {id: obj.id}); + obj.isSelected = false; + }; + scope.toggleKeyPane = function() { scope.showKeyPane = !scope.showKeyPane; }; @@ -88,7 +84,7 @@ export default ['$rootScope', '$scope', 'GetBasePath', 'Rest', '$q', 'Wait', 'Pr .map(function(role) { return { url: url, - id: role.value + id: role.value || role.id }; }); })); diff --git a/awx/ui/client/src/access/addPermissions/addPermissions.directive.js b/awx/ui/client/src/access/add-rbac-resource/rbac-resource.directive.js similarity index 78% rename from awx/ui/client/src/access/addPermissions/addPermissions.directive.js rename to awx/ui/client/src/access/add-rbac-resource/rbac-resource.directive.js index 284110b0ce..e84b78face 100644 --- a/awx/ui/client/src/access/addPermissions/addPermissions.directive.js +++ b/awx/ui/client/src/access/add-rbac-resource/rbac-resource.directive.js @@ -3,7 +3,7 @@ * * All Rights Reserved *************************************************/ -import addPermissionsController from './addPermissions.controller'; +import controller from './rbac-resource.controller'; /* jshint unused: vars */ export default ['templateUrl', '$state', @@ -15,9 +15,11 @@ export default ['templateUrl', '$state', usersDataset: '=', teamsDataset: '=', resourceData: '=', + withoutTeamPermissions: '@', + title: '@' }, - controller: addPermissionsController, - templateUrl: templateUrl('access/addPermissions/addPermissions'), + controller: controller, + templateUrl: templateUrl('access/add-rbac-resource/rbac-resource'), link: function(scope, element, attrs) { scope.toggleFormTabs('users'); $('#add-permissions-modal').modal('show'); diff --git a/awx/ui/client/src/access/addPermissions/addPermissions.partial.html b/awx/ui/client/src/access/add-rbac-resource/rbac-resource.partial.html similarity index 88% rename from awx/ui/client/src/access/addPermissions/addPermissions.partial.html rename to awx/ui/client/src/access/add-rbac-resource/rbac-resource.partial.html index 264a4cf834..5cd8e19b1e 
100644 --- a/awx/ui/client/src/access/addPermissions/addPermissions.partial.html +++ b/awx/ui/client/src/access/add-rbac-resource/rbac-resource.partial.html @@ -6,9 +6,9 @@
- {{ object.name }} + {{ object.name || object.username }}
- Add Permissions + {{ title }}
@@ -39,17 +39,16 @@
+ ng-class="{'is-selected': teamsSelected }"> Teams
-
- +
+
-
- +
+
Please assign roles to the selected users/teams -
Key @@ -91,8 +90,8 @@ {{ obj.type }}
- - + + +
+
+
+ +
+ +
+ + 1 + +
+ Please select resources from the lists below. +
+
+ +
+
+ Job Templates +
+
+ Workflow Templates +
+
+ Projects +
+
+ Inventories +
+
+ Credentials +
+
+ +
+ +
+
+ +
+
+ +
+
+ +
+
+ +
+ + + +
+
+
+ + 2 + + Please assign roles to the selected resources +
+ Key +
+
+
+
+ Job Templates +
+
+ Workflow Templates +
+
+ Projects +
+
+ Inventories +
+
+ Credentials +
+
+
+
+
+ {{ key.name }} +
+
+ {{ key.description || "No description provided" }} +
+
+
+ + +
+ +
+ + + +
+ + +
+ +
+ +
+ + + + + + diff --git a/awx/ui/client/src/access/addPermissions/addPermissions.block.less b/awx/ui/client/src/access/add-rbac.block.less similarity index 94% rename from awx/ui/client/src/access/addPermissions/addPermissions.block.less rename to awx/ui/client/src/access/add-rbac.block.less index 590d97d269..0bc6617ba4 100644 --- a/awx/ui/client/src/access/addPermissions/addPermissions.block.less +++ b/awx/ui/client/src/access/add-rbac.block.less @@ -1,4 +1,4 @@ -@import "../../shared/branding/colors.default.less"; +@import "../shared/branding/colors.default.less"; /** @define AddPermissions */ @@ -51,6 +51,10 @@ padding-top: 20px; } +.AddPermissions-list { + margin-bottom: 20px; +} + .AddPermissions-list .List-searchRow { height: 0px; } @@ -168,13 +172,14 @@ .AddPermissions-keyToggle { margin-left: auto; text-transform: uppercase; - padding: 3px 9px; - font-size: 12px; background-color: @default-bg; border-radius: 5px; color: @default-interface-txt; border: 1px solid @d7grey; cursor: pointer; + width: 70px; + height: 34px; + line-height: 20px; } .AddPermissions-keyToggle:hover { @@ -185,6 +190,9 @@ background-color: @default-link; border-color: @default-link; color: @default-bg; + &:hover{ + background-color: @default-link-hov; + } } .AddPermissions-keyPane { diff --git a/awx/ui/client/src/access/addPermissions/addPermissionsList/addPermissionsList.directive.js b/awx/ui/client/src/access/addPermissions/addPermissionsList/addPermissionsList.directive.js deleted file mode 100644 index 8051371436..0000000000 --- a/awx/ui/client/src/access/addPermissions/addPermissionsList/addPermissionsList.directive.js +++ /dev/null @@ -1,47 +0,0 @@ -/************************************************* - * Copyright (c) 2015 Ansible, Inc. 
- * - * All Rights Reserved - *************************************************/ - -/* jshint unused: vars */ -export default ['addPermissionsTeamsList', 'addPermissionsUsersList', '$compile', 'generateList', 'GetBasePath', 'SelectionInit', function(addPermissionsTeamsList, - addPermissionsUsersList, $compile, generateList, - GetBasePath, SelectionInit) { - return { - restrict: 'E', - scope: { - allSelected: '=', - view: '@', - dataset: '=' - }, - template: "
", - link: function(scope, element, attrs, ctrl) { - let listMap, list, list_html; - - listMap = {Teams: addPermissionsTeamsList, Users: addPermissionsUsersList}; - list = listMap[scope.view]; - list_html = generateList.build({ - mode: 'edit', - list: list - }); - - scope.list = listMap[scope.view]; - scope[`${list.iterator}_dataset`] = scope.dataset.data; - scope[`${list.name}`] = scope[`${list.iterator}_dataset`].results; - - scope.$watch(list.name, function(){ - _.forEach(scope[`${list.name}`], isSelected); - }); - - function isSelected(item){ - if(_.find(scope.allSelected, {id: item.id})){ - item.isSelected = true; - } - return item; - } - element.append(list_html); - $compile(element.contents())(scope); - } - }; -}]; diff --git a/awx/ui/client/src/access/addPermissions/main.js b/awx/ui/client/src/access/addPermissions/main.js deleted file mode 100644 index ca627908de..0000000000 --- a/awx/ui/client/src/access/addPermissions/main.js +++ /dev/null @@ -1,14 +0,0 @@ -/************************************************* - * Copyright (c) 2015 Ansible, Inc. 
- * - * All Rights Reserved - *************************************************/ - -import addPermissionsDirective from './addPermissions.directive'; -import roleSelect from './roleSelect.directive'; -import addPermissionsList from './addPermissionsList/main'; - -export default - angular.module('AddPermissions', [addPermissionsList.name]) - .directive('addPermissions', addPermissionsDirective) - .directive('roleSelect', roleSelect); diff --git a/awx/ui/client/src/access/main.js b/awx/ui/client/src/access/main.js index 084fe5ef87..eedfe0db8c 100644 --- a/awx/ui/client/src/access/main.js +++ b/awx/ui/client/src/access/main.js @@ -4,9 +4,13 @@ * All Rights Reserved *************************************************/ -import roleList from './roleList.directive'; -import addPermissions from './addPermissions/main'; +import roleList from './rbac-role-column/roleList.directive'; +import addRbacResource from './add-rbac-resource/main'; +import addRbacUserTeam from './add-rbac-user-team/main'; export default - angular.module('access', [addPermissions.name]) + angular.module('RbacModule', [ + addRbacResource.name, + addRbacUserTeam.name + ]) .directive('roleList', roleList); diff --git a/awx/ui/client/src/access/addPermissions/addPermissionsList/main.js b/awx/ui/client/src/access/rbac-multiselect/main.js similarity index 55% rename from awx/ui/client/src/access/addPermissions/addPermissionsList/main.js rename to awx/ui/client/src/access/rbac-multiselect/main.js index c523ca2032..c5bc4f030f 100644 --- a/awx/ui/client/src/access/addPermissions/addPermissionsList/main.js +++ b/awx/ui/client/src/access/rbac-multiselect/main.js @@ -4,12 +4,14 @@ * All Rights Reserved *************************************************/ -import addPermissionsListDirective from './addPermissionsList.directive'; +import rbacMultiselectList from './rbac-multiselect-list.directive'; +import rbacMultiselectRole from './rbac-multiselect-role.directive'; import teamsList from './permissionsTeams.list'; 
import usersList from './permissionsUsers.list'; export default - angular.module('addPermissionsListModule', []) - .directive('addPermissionsList', addPermissionsListDirective) + angular.module('rbacMultiselectModule', []) + .directive('rbacMultiselectList', rbacMultiselectList) + .directive('rbacMultiselectRole', rbacMultiselectRole) .factory('addPermissionsTeamsList', teamsList) .factory('addPermissionsUsersList', usersList); diff --git a/awx/ui/client/src/access/addPermissions/addPermissionsList/permissionsTeams.list.js b/awx/ui/client/src/access/rbac-multiselect/permissionsTeams.list.js similarity index 92% rename from awx/ui/client/src/access/addPermissions/addPermissionsList/permissionsTeams.list.js rename to awx/ui/client/src/access/rbac-multiselect/permissionsTeams.list.js index ae7bdd3d5d..8986478e85 100644 --- a/awx/ui/client/src/access/addPermissions/addPermissionsList/permissionsTeams.list.js +++ b/awx/ui/client/src/access/rbac-multiselect/permissionsTeams.list.js @@ -25,8 +25,7 @@ label: 'organization', ngBind: 'team.summary_fields.organization.name', sourceModel: 'organization', - sourceField: 'name', - searchable: true + sourceField: 'name' } } diff --git a/awx/ui/client/src/access/addPermissions/addPermissionsList/permissionsUsers.list.js b/awx/ui/client/src/access/rbac-multiselect/permissionsUsers.list.js similarity index 94% rename from awx/ui/client/src/access/addPermissions/addPermissionsList/permissionsUsers.list.js rename to awx/ui/client/src/access/rbac-multiselect/permissionsUsers.list.js index 8955d30aa0..58a5605281 100644 --- a/awx/ui/client/src/access/addPermissions/addPermissionsList/permissionsUsers.list.js +++ b/awx/ui/client/src/access/rbac-multiselect/permissionsUsers.list.js @@ -34,7 +34,7 @@ username: { key: true, label: 'Username', - columnClass: 'col-md-3 col-sm-3 col-xs-9' + columnClass: 'col-md-5 col-sm-5 col-xs-11' }, }, diff --git a/awx/ui/client/src/access/rbac-multiselect/rbac-multiselect-list.directive.js 
b/awx/ui/client/src/access/rbac-multiselect/rbac-multiselect-list.directive.js new file mode 100644 index 0000000000..2f0d790317 --- /dev/null +++ b/awx/ui/client/src/access/rbac-multiselect/rbac-multiselect-list.directive.js @@ -0,0 +1,133 @@ +/************************************************* + * Copyright (c) 2015 Ansible, Inc. + * + * All Rights Reserved + *************************************************/ + +/* jshint unused: vars */ +export default ['addPermissionsTeamsList', 'addPermissionsUsersList', 'TemplateList', 'ProjectList', + 'InventoryList', 'CredentialList', '$compile', 'generateList', 'GetBasePath', 'SelectionInit', + function(addPermissionsTeamsList, addPermissionsUsersList, TemplateList, ProjectList, + InventoryList, CredentialList, $compile, generateList, GetBasePath, SelectionInit) { + return { + restrict: 'E', + scope: { + allSelected: '=', + view: '@', + dataset: '=' + }, + template: "
", + link: function(scope, element, attrs, ctrl) { + let listMap, list, list_html; + + listMap = { + Teams: addPermissionsTeamsList, + Users: addPermissionsUsersList, + Projects: ProjectList, + JobTemplates: TemplateList, + WorkflowTemplates: TemplateList, + Inventories: InventoryList, + Credentials: CredentialList + }; + list = _.cloneDeep(listMap[scope.view]); + list.multiSelect = true; + list.multiSelectExtended = true; + list.listTitleBadge = false; + delete list.actions; + delete list.fieldActions; + + switch(scope.view){ + + case 'Projects': + list.fields = { + name: list.fields.name, + scm_type: list.fields.scm_type + }; + list.fields.name.columnClass = 'col-md-6 col-sm-6 col-xs-11'; + list.fields.scm_type.columnClass = 'col-md-5 col-sm-5 hidden-xs'; + break; + + case 'Inventories': + list.fields = { + name: list.fields.name, + organization: list.fields.organization + }; + list.fields.name.columnClass = 'col-md-6 col-sm-6 col-xs-11'; + list.fields.organization.columnClass = 'col-md-5 col-sm-5 hidden-xs'; + break; + + case 'JobTemplates': + list.name = 'job_templates'; + list.iterator = 'job_template'; + list.fields = { + name: list.fields.name, + description: list.fields.description + }; + list.fields.name.columnClass = 'col-md-6 col-sm-6 col-xs-11'; + list.fields.description.columnClass = 'col-md-5 col-sm-5 hidden-xs'; + break; + + case 'WorkflowTemplates': + list.name = 'workflow_templates'; + list.iterator = 'workflow_template'; + list.basePath = 'workflow_job_templates'; + list.fields = { + name: list.fields.name, + description: list.fields.description + }; + list.fields.name.columnClass = 'col-md-6 col-sm-6 col-xs-11'; + list.fields.description.columnClass = 'col-md-5 col-sm-5 hidden-xs'; + break; + case 'Users': + list.fields = { + username: list.fields.username, + first_name: list.fields.first_name, + last_name: list.fields.last_name + }; + list.fields.username.columnClass = 'col-md-5 col-sm-5 col-xs-11'; + list.fields.first_name.columnClass = 
'col-md-3 col-sm-3 hidden-xs'; + list.fields.last_name.columnClass = 'col-md-3 col-sm-3 hidden-xs'; + break; + case 'Teams': + list.fields = { + name: list.fields.name, + organization: list.fields.organization, + }; + list.fields.name.columnClass = 'col-md-6 col-sm-6 col-xs-11'; + list.fields.organization.columnClass = 'col-md-5 col-sm-5 hidden-xs'; + break; + default: + list.fields = { + name: list.fields.name, + description: list.fields.description + }; + list.fields.name.columnClass = 'col-md-6 col-sm-6 col-xs-11'; + list.fields.description.columnClass = 'col-md-5 col-sm-5 hidden-xs'; + } + + list_html = generateList.build({ + mode: 'edit', + list: list, + related: false, + title: false + }); + + scope.list = list; + scope[`${list.iterator}_dataset`] = scope.dataset.data; + scope[`${list.name}`] = scope[`${list.iterator}_dataset`].results; + + scope.$watch(list.name, function(){ + _.forEach(scope[`${list.name}`], isSelected); + }); + + function isSelected(item){ + if(_.find(scope.allSelected, {id: item.id})){ + item.isSelected = true; + } + return item; + } + element.append(list_html); + $compile(element.contents())(scope); + } + }; +}]; diff --git a/awx/ui/client/src/access/addPermissions/roleSelect.directive.js b/awx/ui/client/src/access/rbac-multiselect/rbac-multiselect-role.directive.js similarity index 67% rename from awx/ui/client/src/access/addPermissions/roleSelect.directive.js rename to awx/ui/client/src/access/rbac-multiselect/rbac-multiselect-role.directive.js index 53bc191a6d..99e3a1d8ed 100644 --- a/awx/ui/client/src/access/addPermissions/roleSelect.directive.js +++ b/awx/ui/client/src/access/rbac-multiselect/rbac-multiselect-role.directive.js @@ -11,8 +11,12 @@ export default function(CreateSelect2) { return { restrict: 'E', - scope: false, - template: '', + scope: { + roles: '=', + model: '=' + }, + // @issue why is the read-only role omitted from this selection? 
+ template: '', link: function(scope, element, attrs, ctrl) { CreateSelect2({ element: '.roleSelect2', diff --git a/awx/ui/client/src/access/roleList.block.less b/awx/ui/client/src/access/rbac-role-column/roleList.block.less similarity index 96% rename from awx/ui/client/src/access/roleList.block.less rename to awx/ui/client/src/access/rbac-role-column/roleList.block.less index 5cacfb1814..40b76717a3 100644 --- a/awx/ui/client/src/access/roleList.block.less +++ b/awx/ui/client/src/access/rbac-role-column/roleList.block.less @@ -1,5 +1,5 @@ /** @define RoleList */ -@import "../shared/branding/colors.default.less"; +@import "../../shared/branding/colors.default.less"; .RoleList { display: flex; diff --git a/awx/ui/client/src/access/rbac-role-column/roleList.directive.js b/awx/ui/client/src/access/rbac-role-column/roleList.directive.js new file mode 100644 index 0000000000..10b589cf7c --- /dev/null +++ b/awx/ui/client/src/access/rbac-role-column/roleList.directive.js @@ -0,0 +1,87 @@ +/* jshint unused: vars */ +export default + [ 'templateUrl', 'Wait', 'GetBasePath', 'Rest', '$state', 'ProcessErrors', 'Prompt', '$filter', '$rootScope', + function(templateUrl, Wait, GetBasePath, Rest, $state, ProcessErrors, Prompt, $filter, $rootScope) { + return { + restrict: 'E', + scope: { + 'deleteTarget': '=' + }, + templateUrl: templateUrl('access/rbac-role-column/roleList'), + link: function(scope, element, attrs) { + // given a list of roles (things like "project + // auditor") which are pulled from two different + // places in summary fields, and creates a + // concatenated/sorted list + scope.access_list = [] + .concat(scope.deleteTarget.summary_fields + .direct_access.map((i) => { + i.role.explicit = true; + return i.role; + })) + .concat(scope.deleteTarget.summary_fields + .indirect_access.map((i) => { + i.role.explicit = false; + return i.role; + })) + .filter((role) => { + return Boolean(attrs.teamRoleList) === Boolean(role.team_id); + }) + .sort((a, b) => { + if (a.name 
+ .toLowerCase() > b.name + .toLowerCase()) { + return 1; + } else { + return -1; + } + }); + + scope.deletePermission = function(user, accessListEntry) { + let entry = accessListEntry; + + let action = function() { + $('#prompt-modal').modal('hide'); + Wait('start'); + + let url; + if (entry.team_id) { + url = GetBasePath("teams") + entry.team_id + "/roles/"; + } else { + url = GetBasePath("users") + user.id + "/roles/"; + } + + Rest.setUrl(url); + Rest.post({ "disassociate": true, "id": entry.id }) + .success(function() { + Wait('stop'); + $state.go('.', null, { reload: true }); + }) + .error(function(data, status) { + ProcessErrors($rootScope, data, status, null, { + hdr: 'Error!', + msg: 'Failed to remove access. Call to ' + url + ' failed. DELETE returned status: ' + status + }); + }); + }; + + if (accessListEntry.team_id) { + Prompt({ + hdr: `Team access removal`, + body: `
Please confirm that you would like to remove ${entry.name} access from the team ${$filter('sanitize')(entry.team_name)}. This will affect all members of the team. If you would like to only remove access for this particular user, please remove them from the team.
`, + action: action, + actionText: 'REMOVE TEAM ACCESS' + }); + } else { + Prompt({ + hdr: `User access removal`, + body: `
Please confirm that you would like to remove ${entry.name} access from ${user.username}.
`, + action: action, + actionText: 'REMOVE' + }); + } + }; + } + }; + } + ]; diff --git a/awx/ui/client/src/access/roleList.partial.html b/awx/ui/client/src/access/rbac-role-column/roleList.partial.html similarity index 73% rename from awx/ui/client/src/access/roleList.partial.html rename to awx/ui/client/src/access/rbac-role-column/roleList.partial.html index 365a20f061..799edbb408 100644 --- a/awx/ui/client/src/access/roleList.partial.html +++ b/awx/ui/client/src/access/rbac-role-column/roleList.partial.html @@ -2,14 +2,14 @@ ng-repeat="entry in access_list">
+ ng-click="deletePermission(deleteTarget, entry)">
+ aw-tool-tip='
Organization: {{ entry.team_organization_name | sanitize }}
Team: {{entry.team_name | sanitize}}
' aw-tip-placement='bottom'> {{ entry.name }}
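The access-list construction carried over into rbac-role-column/roleList.directive.js above can be exercised in isolation. The sketch below extracts that logic into a plain function; the `summary_fields` shape mirrors the directive, while `buildAccessList` and the sample data are illustrative, not part of the diff:

```javascript
// Standalone sketch of how the role-column directive assembles its
// access_list: direct-access roles are tagged explicit, indirect ones
// are not, then the merged list is filtered by team membership and
// sorted case-insensitively by role name.
function buildAccessList(summaryFields, teamRoleList) {
    return []
        .concat(summaryFields.direct_access.map((i) => {
            i.role.explicit = true;
            return i.role;
        }))
        .concat(summaryFields.indirect_access.map((i) => {
            i.role.explicit = false;
            return i.role;
        }))
        // team columns keep only team-granted roles; user columns only user-granted
        .filter((role) => Boolean(teamRoleList) === Boolean(role.team_id))
        .sort((a, b) => (a.name.toLowerCase() > b.name.toLowerCase()) ? 1 : -1);
}

// Hypothetical data in the summary_fields shape the directive expects:
const fields = {
    direct_access: [{ role: { name: 'Admin', team_id: null } }],
    indirect_access: [{ role: { name: 'auditor', team_id: null } }]
};
const list = buildAccessList(fields, false);
// sorted case-insensitively: 'Admin' precedes 'auditor'
```

Passing `teamRoleList = true` would instead keep only roles carrying a `team_id`, which is how the partial renders separate user and team role columns.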
diff --git a/awx/ui/client/src/access/roleList.directive.js b/awx/ui/client/src/access/roleList.directive.js deleted file mode 100644 index 7bdd1b29d4..0000000000 --- a/awx/ui/client/src/access/roleList.directive.js +++ /dev/null @@ -1,40 +0,0 @@ -/* jshint unused: vars */ -export default - [ 'templateUrl', - function(templateUrl) { - return { - restrict: 'E', - scope: true, - templateUrl: templateUrl('access/roleList'), - link: function(scope, element, attrs) { - // given a list of roles (things like "project - // auditor") which are pulled from two different - // places in summary fields, and creates a - // concatenated/sorted list - scope.access_list = [] - .concat(scope.permission.summary_fields - .direct_access.map((i) => { - i.role.explicit = true; - return i.role; - })) - .concat(scope.permission.summary_fields - .indirect_access.map((i) => { - i.role.explicit = false; - return i.role; - })) - .filter((role) => { - return Boolean(attrs.teamRoleList) === Boolean(role.team_id); - }) - .sort((a, b) => { - if (a.name - .toLowerCase() > b.name - .toLowerCase()) { - return 1; - } else { - return -1; - } - }); - } - }; - } - ]; diff --git a/awx/ui/client/src/activity-stream/activitystream.controller.js b/awx/ui/client/src/activity-stream/activitystream.controller.js index 05609d3bc5..ada72d572c 100644 --- a/awx/ui/client/src/activity-stream/activitystream.controller.js +++ b/awx/ui/client/src/activity-stream/activitystream.controller.js @@ -12,6 +12,7 @@ function activityStreamController($scope, $state, subTitle, Stream, GetTargetTitle, list, Dataset) { init(); + initOmitSmartTags(); function init() { // search init @@ -33,6 +34,20 @@ function activityStreamController($scope, $state, subTitle, Stream, GetTargetTit }); } + // Specification of smart-tags omission from the UI is done in the route/state init. + // A limitation is that this specification is static and the key to be omitted from + // the smart-tags must be known at that time. 
+ // In the case of activity stream, we want to dynamically omit the resource for which we are + // displaying the activity stream, e.g. 'project', 'credential', etc. + function initOmitSmartTags() { + let defaults, route = _.find($state.$current.path, (step) => { + return step.params.hasOwnProperty('activity_search'); + }); + if (route && $state.params.target !== undefined) { + defaults = route.params.activity_search.config.value; + defaults[$state.params.target] = null; + } + } } export default ['$scope', '$state', 'subTitle', 'Stream', 'GetTargetTitle', 'StreamList', 'Dataset', activityStreamController]; diff --git a/awx/ui/client/src/activity-stream/activitystream.route.js b/awx/ui/client/src/activity-stream/activitystream.route.js index d4af2bd0aa..73877d5f1b 100644 --- a/awx/ui/client/src/activity-stream/activitystream.route.js +++ b/awx/ui/client/src/activity-stream/activitystream.route.js @@ -16,8 +16,8 @@ export default { value: { // default params will not generate search tags order_by: '-timestamp', - or__object1: null, - or__object2: null + or__object1__in: null, + or__object2__in: null } } }, @@ -46,7 +46,16 @@ export default { Dataset: ['StreamList', 'QuerySet', '$stateParams', 'GetBasePath', function(list, qs, $stateParams, GetBasePath) { let path = GetBasePath(list.basePath) || GetBasePath(list.name); - return qs.search(path, $stateParams[`${list.iterator}_search`]); + let stateParams = $stateParams[`${list.iterator}_search`]; + // Sending or__object1__in=null will result in an API error response, so let's strip + // these out. This should only be null when hitting the All Activity page. 
+ if(stateParams.or__object1__in === null) { + delete stateParams.or__object1__in; + } + if(stateParams.or__object2__in === null) { + delete stateParams.or__object2__in; + } + return qs.search(path, stateParams); } ], features: ['FeaturesService', 'ProcessErrors', '$state', '$rootScope', diff --git a/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.block.less b/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.block.less index 6b00dc76f5..9e0cc73720 100644 --- a/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.block.less +++ b/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.block.less @@ -35,5 +35,6 @@ margin-bottom: 0; max-height: 200px; overflow: scroll; + overflow-x: auto; color: @as-detail-changes-txt; } diff --git a/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.partial.html b/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.partial.html index f5d5acf553..67e4452ebc 100644 --- a/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.partial.html +++ b/awx/ui/client/src/activity-stream/streamDetailModal/streamDetailModal.partial.html @@ -22,7 +22,7 @@ diff --git a/awx/ui/client/src/activity-stream/streamDropdownNav/stream-dropdown-nav.directive.js b/awx/ui/client/src/activity-stream/streamDropdownNav/stream-dropdown-nav.directive.js index 8e01af25e5..dc6c4a819d 100644 --- a/awx/ui/client/src/activity-stream/streamDropdownNav/stream-dropdown-nav.directive.js +++ b/awx/ui/client/src/activity-stream/streamDropdownNav/stream-dropdown-nav.directive.js @@ -20,12 +20,12 @@ export default ['templateUrl', function(templateUrl) { {label: 'Hosts', value: 'host'}, {label: 'Inventories', value: 'inventory'}, {label: 'Inventory Scripts', value: 'inventory_script'}, - {label: 'Job Templates', value: 'job_template'}, {label: 'Jobs', value: 'job'}, {label: 'Organizations', 
value: 'organization'}, {label: 'Projects', value: 'project'}, {label: 'Schedules', value: 'schedule'}, {label: 'Teams', value: 'team'}, + {label: 'Templates', value: 'template'}, {label: 'Users', value: 'user'} ]; @@ -37,12 +37,12 @@ export default ['templateUrl', function(templateUrl) { $scope.changeStreamTarget = function(){ if($scope.streamTarget && $scope.streamTarget === 'dashboard') { // Just navigate to the base activity stream - $state.go('activityStream'); + $state.go('activityStream', {target: null, activity_search: {page_size:"20", order_by: '-timestamp'}}); } else { let search = _.merge($stateParams.activity_search, { - or__object1: $scope.streamTarget, - or__object2: $scope.streamTarget + or__object1__in: $scope.streamTarget && $scope.streamTarget === 'template' ? 'job_template,workflow_job_template' : $scope.streamTarget, + or__object2__in: $scope.streamTarget && $scope.streamTarget === 'template' ? 'job_template,workflow_job_template' : $scope.streamTarget }); // Attach the target to the query parameters $state.go('activityStream', {target: $scope.streamTarget, activity_search: search}); diff --git a/awx/ui/client/src/app.js b/awx/ui/client/src/app.js index d903776765..577ce739b8 100644 --- a/awx/ui/client/src/app.js +++ b/awx/ui/client/src/app.js @@ -149,7 +149,6 @@ var tower = angular.module('Tower', [ 'InventoryHostsDefinition', 'HostsHelper', 'AWFilters', - 'ScanJobsListDefinition', 'HostFormDefinition', 'HostListDefinition', 'GroupFormDefinition', @@ -385,6 +384,10 @@ var tower = angular.module('Tower', [ }; $rootScope.$stateParams = $stateParams; + $state.defaultErrorHandler(function(error) { + $log.debug(`$state.defaultErrorHandler: ${error}`); + }); + I18NInit(); $stateExtender.addState({ name: 'dashboard', @@ -464,21 +467,6 @@ var tower = angular.module('Tower', [ } }); - - $stateExtender.addState({ - name: 'teamUsers', - url: '/teams/:team_id/users', - templateUrl: urlPrefix + 'partials/teams.html', - controller: UsersList, - resolve: { - 
Users: ['UsersList', 'QuerySet', '$stateParams', 'GetBasePath', (list, qs, $stateParams, GetBasePath) => { - let path = GetBasePath(list.basePath) || GetBasePath(list.name); - return qs.search(path, $stateParams[`${list.iterator}_search`]); - }] - } - }); - - $stateExtender.addState({ name: 'userCredentials', url: '/users/:user_id/credentials', @@ -510,58 +498,6 @@ var tower = angular.module('Tower', [ } }); - $rootScope.addPermission = function(scope) { - $compile("")(scope); - }; - $rootScope.addPermissionWithoutTeamTab = function(scope) { - $compile("")(scope); - }; - - $rootScope.deletePermission = function(user, accessListEntry) { - let entry = accessListEntry; - - let action = function() { - $('#prompt-modal').modal('hide'); - Wait('start'); - - let url; - if (entry.team_id) { - url = GetBasePath("teams") + entry.team_id + "/roles/"; - } else { - url = GetBasePath("users") + user.id + "/roles/"; - } - - Rest.setUrl(url); - Rest.post({ "disassociate": true, "id": entry.id }) - .success(function() { - Wait('stop'); - $state.go('.', null, { reload: true }); - }) - .error(function(data, status) { - ProcessErrors($rootScope, data, status, null, { - hdr: 'Error!', - msg: 'Failed to remove access. Call to ' + url + ' failed. DELETE returned status: ' + status - }); - }); - }; - - if (accessListEntry.team_id) { - Prompt({ - hdr: `Team access removal`, - body: `
Please confirm that you would like to remove ${entry.name} access from the team ${$filter('sanitize')(entry.team_name)}. This will affect all members of the team. If you would like to only remove access for this particular user, please remove them from the team.
`, - action: action, - actionText: 'REMOVE TEAM ACCESS' - }); - } else { - Prompt({ - hdr: `User access removal`, - body: `
Please confirm that you would like to remove ${entry.name} access from ${user.username}.
`, - action: action, - actionText: 'REMOVE' - }); - } - }; - $rootScope.deletePermissionFromUser = function(userId, userName, roleName, roleType, url) { var action = function() { $('#prompt-modal').modal('hide'); @@ -681,9 +617,52 @@ var tower = angular.module('Tower', [ $rootScope.crumbCache = []; - // $rootScope.$on('$stateChangeStart', function(event, toState, toParams, fromState) { - // SocketService.subscribe(toState, toParams); - // }); + $rootScope.$on("$stateChangeStart", function (event, next) { + // Remove any lingering intervals + // except on jobDetails.* states + var jobDetailStates = [ + 'jobDetail', + 'jobDetail.host-summary', + 'jobDetail.host-event.details', + 'jobDetail.host-event.json', + 'jobDetail.host-events', + 'jobDetail.host-event.stdout' + ]; + if ($rootScope.jobDetailInterval && !_.includes(jobDetailStates, next.name) ) { + window.clearInterval($rootScope.jobDetailInterval); + } + if ($rootScope.jobStdOutInterval && !_.includes(jobDetailStates, next.name) ) { + window.clearInterval($rootScope.jobStdOutInterval); + } + + // On each navigation request, check that the user is logged in + if (!/^\/(login|logout)/.test($location.path())) { + // capture most recent URL, excluding login/logout + $rootScope.lastPath = $location.path(); + $rootScope.enteredPath = $location.path(); + $cookieStore.put('lastPath', $location.path()); + } + + if (Authorization.isUserLoggedIn() === false) { + if (next.name !== "signIn") { + $state.go('signIn'); + } + } else if ($rootScope && $rootScope.sessionTimer && $rootScope.sessionTimer.isExpired()) { + // gets here on timeout + if (next.name !== "signIn") { + $state.go('signIn'); + } + } else { + if ($rootScope.current_user === undefined || $rootScope.current_user === null) { + Authorization.restoreUserInfo(); //user must have hit browser refresh + } + if (next && (next.name !== "signIn" && next.name !== "signOut" && next.name !== "license")) { + // if not headed to /login or /logout, then check the license + 
CheckLicense.test(event); + } + } + activateTab(); + }); $rootScope.$on('$stateChangeSuccess', function(event, toState, toParams, fromState) { diff --git a/awx/ui/client/src/bread-crumb/bread-crumb.block.less b/awx/ui/client/src/bread-crumb/bread-crumb.block.less index ac8e0792c9..e0abee7832 100644 --- a/awx/ui/client/src/bread-crumb/bread-crumb.block.less +++ b/awx/ui/client/src/bread-crumb/bread-crumb.block.less @@ -66,7 +66,6 @@ display: inline-block; color: @default-interface-txt; text-transform: uppercase; - max-width: 200px; white-space: nowrap; overflow: hidden; text-overflow: ellipsis; diff --git a/awx/ui/client/src/bread-crumb/bread-crumb.directive.js b/awx/ui/client/src/bread-crumb/bread-crumb.directive.js index 36fe3901ee..3c64a0b701 100644 --- a/awx/ui/client/src/bread-crumb/bread-crumb.directive.js +++ b/awx/ui/client/src/bread-crumb/bread-crumb.directive.js @@ -1,6 +1,6 @@ export default - ['templateUrl', '$state', 'FeaturesService', 'ProcessErrors','$rootScope', 'Store', 'Empty', - function(templateUrl, $state, FeaturesService, ProcessErrors, $rootScope, Store, Empty) { + ['templateUrl', '$state', 'FeaturesService', 'ProcessErrors','$rootScope', 'Store', 'Empty', '$window', 'BreadCrumbService', + function(templateUrl, $state, FeaturesService, ProcessErrors, $rootScope, Store, Empty, $window, BreadCrumbService) { return { restrict: 'E', templateUrl: templateUrl('bread-crumb/bread-crumb'), @@ -8,9 +8,25 @@ export default var streamConfig = {}, originalRoute; - scope.showActivityStreamButton = false; - scope.showRefreshButton = false; - scope.loadingLicense = true; + function init() { + + scope.showActivityStreamButton = false; + scope.showRefreshButton = false; + scope.loadingLicense = true; + + function onResize(){ + BreadCrumbService.truncateCrumbs(); + } + + function cleanUp() { + angular.element($window).off('resize', onResize); + } + + angular.element($window).on('resize', onResize); + scope.$on('$destroy', cleanUp); + } + + init(); scope.refresh 
= function() { $state.go($state.current, {}, {reload: true}); @@ -26,11 +42,14 @@ export default if(streamConfig.activityStreamTarget) { stateGoParams.target = streamConfig.activityStreamTarget; stateGoParams.activity_search = { - or__object1: streamConfig.activityStreamTarget, - or__object2: streamConfig.activityStreamTarget, + or__object1__in: streamConfig.activityStreamTarget === 'template' ? 'job_template,workflow_job_template' : streamConfig.activityStreamTarget, + or__object2__in: streamConfig.activityStreamTarget === 'template' ? 'job_template,workflow_job_template' : streamConfig.activityStreamTarget, order_by: '-timestamp', page_size: '20', }; + if (streamConfig.activityStreamTarget && streamConfig.activityStreamId) { + stateGoParams.activity_search[streamConfig.activityStreamTarget] = $state.params[streamConfig.activityStreamId]; + } } else { stateGoParams.activity_search = { diff --git a/awx/ui/client/src/bread-crumb/bread-crumb.partial.html b/awx/ui/client/src/bread-crumb/bread-crumb.partial.html index 4500556e96..563e6c4f79 100644 --- a/awx/ui/client/src/bread-crumb/bread-crumb.partial.html +++ b/awx/ui/client/src/bread-crumb/bread-crumb.partial.html @@ -30,4 +30,5 @@ + diff --git a/awx/ui/client/src/bread-crumb/bread-crumb.service.js b/awx/ui/client/src/bread-crumb/bread-crumb.service.js new file mode 100644 index 0000000000..b3d510047d --- /dev/null +++ b/awx/ui/client/src/bread-crumb/bread-crumb.service.js @@ -0,0 +1,80 @@ +export default + [function(){ + return { + truncateCrumbs: function(){ + let breadCrumbBarWidth = $('#bread_crumb').outerWidth(); + let menuLinkWidth = $('.BreadCrumb-menuLinkHolder').outerWidth(); + let availableWidth = breadCrumbBarWidth - menuLinkWidth; + let $breadcrumbClone = $('.BreadCrumb-list').clone().appendTo('#bread_crumb_width_checker'); + let $breadcrumbCloneItems = $breadcrumbClone.find('.BreadCrumb-item'); + // 40px for the padding on the breadcrumb bar and a few extra pixels for rounding + let breadcrumbBarPadding 
= 45; + let expandedBreadcrumbWidth = breadcrumbBarPadding; + let crumbs = []; + $breadcrumbCloneItems.css('max-width', 'none'); + $breadcrumbCloneItems.each(function(index, item){ + let crumbWidth = $(item).outerWidth(); + expandedBreadcrumbWidth += crumbWidth; + crumbs.push({ + index: index, + origWidth: crumbWidth + }); + }); + // Remove the clone from the dom + $breadcrumbClone.remove(); + if(expandedBreadcrumbWidth > availableWidth) { + let widthToTrim = expandedBreadcrumbWidth - availableWidth; + // Sort the crumbs from biggest to smallest + let sortedCrumbs = _.sortByOrder(crumbs, ["origWidth"], ["desc"]); + let maxWidth; + for(let i=0; i < sortedCrumbs.length; i++) { + let potentialCrumbsToTrim = i+1; + if(sortedCrumbs[i+1]) { + if(maxWidth) { + if((maxWidth - sortedCrumbs[i+1].origWidth) * potentialCrumbsToTrim > widthToTrim) { + // If we trim down the biggest (i+1) crumbs equally then we can make it fit + maxWidth = maxWidth - (widthToTrim/potentialCrumbsToTrim); + break; + } + else { + // Trim this biggest crumb down to the next biggest + widthToTrim = widthToTrim - (sortedCrumbs[i].origWidth - sortedCrumbs[i+1].origWidth); + maxWidth = sortedCrumbs[i].origWidth; + } + } + else { + // This is the biggest crumb + if(sortedCrumbs[i].origWidth - widthToTrim > sortedCrumbs[i+1].origWidth) { + maxWidth = sortedCrumbs[i].origWidth - widthToTrim; + break; + } + else { + // Trim this biggest crumb down to the next biggest + widthToTrim = widthToTrim - (sortedCrumbs[i].origWidth - sortedCrumbs[i+1].origWidth); + maxWidth = sortedCrumbs[i+1].origWidth; + } + } + } + else { + // This is the smallest crumb + if(sortedCrumbs[i-1]) { + // We've gotten all the way down to the smallest crumb without being able to reasonably trim + // the previous crumbs. Go ahead and trim all of them equally.
+ maxWidth = (availableWidth-breadcrumbBarPadding)/(i+1); + } + else { + // There's only one breadcrumb so trim this one down + maxWidth = sortedCrumbs[i].origWidth - widthToTrim; + } + } + } + $('.BreadCrumb-item').css('max-width', maxWidth); + } + else { + $('.BreadCrumb-item').css('max-width', 'none'); + } + } + }; + }]; diff --git a/awx/ui/client/src/bread-crumb/main.js b/awx/ui/client/src/bread-crumb/main.js index fde7a7991c..6369beda61 100644 --- a/awx/ui/client/src/bread-crumb/main.js +++ b/awx/ui/client/src/bread-crumb/main.js @@ -1,5 +1,7 @@ import breadCrumb from './bread-crumb.directive'; +import breadCrumbService from './bread-crumb.service'; export default angular.module('breadCrumb', []) + .service('BreadCrumbService', breadCrumbService) .directive('breadCrumb', breadCrumb); diff --git a/awx/ui/client/src/configuration/auth-form/configuration-auth.controller.js b/awx/ui/client/src/configuration/auth-form/configuration-auth.controller.js index 20166f3b85..8098172162 100644 --- a/awx/ui/client/src/configuration/auth-form/configuration-auth.controller.js +++ b/awx/ui/client/src/configuration/auth-form/configuration-auth.controller.js @@ -6,6 +6,7 @@ export default [ '$scope', + '$rootScope', '$state', '$stateParams', '$timeout', @@ -22,9 +23,11 @@ export default [ 'ConfigurationUtils', 'CreateSelect2', 'GenerateForm', + 'i18n', 'ParseTypeChange', function( $scope, + $rootScope, $state, $stateParams, $timeout, @@ -41,6 +44,7 @@ export default [ ConfigurationUtils, CreateSelect2, GenerateForm, + i18n, ParseTypeChange ) { var authVm = this; @@ -60,10 +64,10 @@ export default [ authVm.activeAuthForm = authVm.dropdownValue; formTracker.setCurrentAuth(authVm.activeAuthForm); } else { - var msg = 'You have unsaved changes. Would you like to proceed without saving?'; - var title = 'Warning: Unsaved Changes'; + var msg = i18n._('You have unsaved changes. 
Would you like to proceed without saving?'); + var title = i18n._('Warning: Unsaved Changes'); var buttons = [{ - label: "Discard changes", + label: i18n._('Discard changes'), "class": "btn Form-cancelButton", "id": "formmodal-cancel-button", onClick: function() { @@ -74,7 +78,7 @@ export default [ $('#FormModal-dialog').dialog('close'); } }, { - label: "Save changes", + label: i18n._('Save changes'), onClick: function() { $scope.$parent.vm.formSave() .then(function() { @@ -94,14 +98,14 @@ export default [ }; var dropdownOptions = [ - {label: 'Azure AD', value: 'azure'}, - {label: 'Github', value: 'github'}, - {label: 'Github Org', value: 'github_org'}, - {label: 'Github Team', value: 'github_team'}, - {label: 'Google OAuth2', value: 'google_oauth'}, - {label: 'LDAP', value: 'ldap'}, - {label: 'RADIUS', value: 'radius'}, - {label: 'SAML', value: 'saml'} + {label: i18n._('Azure AD'), value: 'azure'}, + {label: i18n._('GitHub'), value: 'github'}, + {label: i18n._('GitHub Org'), value: 'github_org'}, + {label: i18n._('GitHub Team'), value: 'github_team'}, + {label: i18n._('Google OAuth2'), value: 'google_oauth'}, + {label: i18n._('LDAP'), value: 'ldap'}, + {label: i18n._('RADIUS'), value: 'radius'}, + {label: i18n._('SAML'), value: 'saml'} ]; CreateSelect2({ @@ -136,7 +140,6 @@ export default [ }, ]; var forms = _.pluck(authForms, 'formDef'); - _.each(forms, function(form) { var keys = _.keys(form.fields); _.each(keys, function(key) { @@ -154,6 +157,8 @@ export default [ } addFieldInfo(form, key); }); + // Disable the save button for system auditors + form.buttons.save.disabled = $rootScope.user_is_system_auditor; }); function addFieldInfo(form, key) { @@ -165,7 +170,10 @@ export default [ dataPlacement: 'top', placeholder: ConfigurationUtils.formatPlaceholder($scope.$parent.configDataResolve[key].placeholder, key) || null, dataTitle: $scope.$parent.configDataResolve[key].label, - required: $scope.$parent.configDataResolve[key].required + required: 
$scope.$parent.configDataResolve[key].required, + ngDisabled: $rootScope.user_is_system_auditor, + disabled: $scope.$parent.configDataResolve[key].disabled || null, + readonly: $scope.$parent.configDataResolve[key].readonly || null, }); } @@ -197,7 +205,8 @@ export default [ scope: $scope.$parent, variable: field.name, parse_variable: 'parseType', - field_id: form.formDef.name + '_' + field.name + field_id: form.formDef.name + '_' + field.name, + readonly: true, }); } }); @@ -217,7 +226,7 @@ export default [ CreateSelect2({ element: '#configuration_ldap_template_AUTH_LDAP_GROUP_TYPE', multiple: false, - placeholder: 'Select group types', + placeholder: i18n._('Select group types'), opts: opts }); // Fix for bug where adding selected opts causes form to be $dirty and triggering modal diff --git a/awx/ui/client/src/configuration/auth-form/configuration-auth.partial.html b/awx/ui/client/src/configuration/auth-form/configuration-auth.partial.html index 5efeeed532..71192e17c6 100644 --- a/awx/ui/client/src/configuration/auth-form/configuration-auth.partial.html +++ b/awx/ui/client/src/configuration/auth-form/configuration-auth.partial.html @@ -1,7 +1,7 @@
- -
+
+
Sub Category
+
+
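The form definitions touched by this patch are converted from plain factory functions to AngularJS array-annotated factories so that user-facing labels can pass through `i18n._()`. A minimal standalone sketch of that pattern, assuming a gettext-style `_()` method on the `i18n` service (the stub and form names here are illustrative, not Tower's real implementations):

```javascript
// Stand-in for the injected i18n service (assumption: gettext-style _() method).
// The identity function mimics the untranslated default locale.
const i18n = { _: (msgid) => msgid };

// Array-annotated factory, mirroring the shape of the auth sub-form definitions:
// the last element is the factory, preceded by the names of its dependencies.
const exampleAuthForm = ['i18n', function(i18n) {
    return {
        name: 'configuration_example_template',
        buttons: {
            reset: {
                ngClick: 'vm.resetAllConfirm()',
                label: i18n._('Revert all to default'),
                class: 'Form-resetAll'
            }
        }
    };
}];

// Roughly what Angular's $injector does internally: resolve each named
// dependency from a registry and apply the factory to the resolved values.
function invoke(annotated, registry) {
    const factory = annotated[annotated.length - 1];
    const deps = annotated.slice(0, -1).map((name) => registry[name]);
    return factory.apply(null, deps);
}

const formDef = invoke(exampleAuthForm, { i18n: i18n });
console.log(formDef.buttons.reset.label); // "Revert all to default" with the identity stub
```

The annotation survives minification (the dependency names live in strings, not parameter names), which is why the sub-forms gain the trailing `];` in each hunk.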
diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-azure.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-azure.form.js index bf2546ad11..95caa48439 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-azure.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-azure.form.js @@ -4,19 +4,24 @@ * All Rights Reserved *************************************************/ - export default function() { + export default ['i18n', function(i18n) { return { name: 'configuration_azure_template', showActions: true, showHeader: false, fields: { + SOCIAL_AUTH_AZUREAD_OAUTH2_CALLBACK_URL: { + type: 'text', + reset: 'SOCIAL_AUTH_AZUREAD_OAUTH2_CALLBACK_URL' + }, SOCIAL_AUTH_AZUREAD_OAUTH2_KEY: { type: 'text', reset: 'SOCIAL_AUTH_AZUREAD_OAUTH2_KEY' }, SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset: 'SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET' }, SOCIAL_AUTH_AZUREAD_OAUTH2_ORGANIZATION_MAP: { @@ -38,8 +43,8 @@ buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -51,3 +56,4 @@ } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-org.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-org.form.js index 6bc58773e3..5f29da4924 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-org.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-org.form.js @@ -4,32 +4,51 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { name: 'configuration_github_org_template', showActions: true, showHeader: false, fields: { + SOCIAL_AUTH_GITHUB_ORG_CALLBACK_URL: { + type: 'text', + reset: 
'SOCIAL_AUTH_GITHUB_ORG_CALLBACK_URL' + }, SOCIAL_AUTH_GITHUB_ORG_KEY: { type: 'text', reset: 'SOCIAL_AUTH_GITHUB_ORG_KEY' }, SOCIAL_AUTH_GITHUB_ORG_SECRET: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset: 'SOCIAL_AUTH_GITHUB_ORG_SECRET' }, SOCIAL_AUTH_GITHUB_ORG_NAME: { type: 'text', reset: 'SOCIAL_AUTH_GITHUB_ORG_NAME' + }, + SOCIAL_AUTH_GITHUB_ORG_ORGANIZATION_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GITHUB_ORG_ORGANIZATION_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' + }, + SOCIAL_AUTH_GITHUB_ORG_TEAM_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GITHUB_ORG_TEAM_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' + } }, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -41,3 +60,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-team.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-team.form.js index bad0c95627..959d4ae14d 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-team.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github-team.form.js @@ -4,32 +4,51 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { name: 'configuration_github_team_template', showActions: true, showHeader: false, fields: { + SOCIAL_AUTH_GITHUB_TEAM_CALLBACK_URL: { + type: 'text', + reset: 'SOCIAL_AUTH_GITHUB_TEAM_CALLBACK_URL' + }, SOCIAL_AUTH_GITHUB_TEAM_KEY: { type: 'text', reset: 'SOCIAL_AUTH_GITHUB_TEAM_KEY' }, SOCIAL_AUTH_GITHUB_TEAM_SECRET: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset:
'SOCIAL_AUTH_GITHUB_TEAM_SECRET' }, SOCIAL_AUTH_GITHUB_TEAM_ID: { type: 'text', reset: 'SOCIAL_AUTH_GITHUB_TEAM_ID' + }, + SOCIAL_AUTH_GITHUB_TEAM_ORGANIZATION_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GITHUB_TEAM_ORGANIZATION_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' + }, + SOCIAL_AUTH_GITHUB_TEAM_TEAM_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GITHUB_TEAM_TEAM_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' } }, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -41,3 +60,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github.form.js index ee46c53fb7..8b7b8dbc6a 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-github.form.js @@ -4,28 +4,47 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { name: 'configuration_github_template', showActions: true, showHeader: false, fields: { + SOCIAL_AUTH_GITHUB_CALLBACK_URL: { + type: 'text', + reset: 'SOCIAL_AUTH_GITHUB_CALLBACK_URL' + }, SOCIAL_AUTH_GITHUB_KEY: { type: 'text', reset: 'SOCIAL_AUTH_GITHUB_KEY' }, SOCIAL_AUTH_GITHUB_SECRET: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset: 'SOCIAL_AUTH_GITHUB_SECRET' + }, + SOCIAL_AUTH_GITHUB_ORGANIZATION_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GITHUB_ORGANIZATION_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' + }, + SOCIAL_AUTH_GITHUB_TEAM_MAP: { + type: 'textarea', + reset: 
'SOCIAL_AUTH_GITHUB_TEAM_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' } }, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -37,3 +56,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-google-oauth2.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-google-oauth2.form.js index b00748da55..3546fdbc3e 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-google-oauth2.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-google-oauth2.form.js @@ -4,19 +4,24 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { name: 'configuration_google_oauth_template', showActions: true, showHeader: false, fields: { + SOCIAL_AUTH_GOOGLE_OAUTH2_CALLBACK_URL: { + type: 'text', + reset: 'SOCIAL_AUTH_GOOGLE_OAUTH2_CALLBACK_URL' + }, SOCIAL_AUTH_GOOGLE_OAUTH2_KEY: { type: 'text', reset: 'SOCIAL_AUTH_GOOGLE_OAUTH2_KEY' }, SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset: 'SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET' }, SOCIAL_AUTH_GOOGLE_OAUTH2_WHITELISTED_DOMAINS: { @@ -30,14 +35,28 @@ export default function() { codeMirror: true, rows: 6, class: 'Form-textAreaLabel Form-formGroup--fullWidth', + }, + SOCIAL_AUTH_GOOGLE_OAUTH2_ORGANIZATION_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GOOGLE_OAUTH2_ORGANIZATION_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' + }, + SOCIAL_AUTH_GOOGLE_OAUTH2_TEAM_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_GOOGLE_OAUTH2_TEAM_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' } 
}, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -49,3 +68,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-ldap.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-ldap.form.js index 2ef6b8b5c4..0943d21b27 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-ldap.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-ldap.form.js @@ -4,7 +4,7 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { // editTitle: 'Authorization Configuration', name: 'configuration_ldap_template', @@ -21,7 +21,8 @@ export default function() { reset: 'AUTH_LDAP_BIND_DN' }, AUTH_LDAP_BIND_PASSWORD: { - type: 'password' + type: 'sensitive', + hasShowInputButton: true, }, AUTH_LDAP_USER_SEARCH: { type: 'textarea', @@ -84,8 +85,8 @@ export default function() { buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -97,3 +98,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-radius.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-radius.form.js index cd8dc68352..f8aa37b014 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-radius.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-radius.form.js @@ -4,7 +4,7 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { // editTitle: 'Authorization Configuration', 
name: 'configuration_radius_template', @@ -21,7 +21,8 @@ export default function() { reset: 'RADIUS_PORT' }, RADIUS_SECRET: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset: 'RADIUS_SECRET' } }, @@ -29,8 +30,8 @@ export default function() { buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -42,3 +43,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-saml.form.js b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-saml.form.js index 462e1373fd..ad1f7cb6d8 100644 --- a/awx/ui/client/src/configuration/auth-form/sub-forms/auth-saml.form.js +++ b/awx/ui/client/src/configuration/auth-form/sub-forms/auth-saml.form.js @@ -4,13 +4,21 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { name: 'configuration_saml_template', showActions: true, showHeader: false, fields: { + SOCIAL_AUTH_SAML_CALLBACK_URL: { + type: 'text', + reset: 'SOCIAL_AUTH_SAML_CALLBACK_URL' + }, + SOCIAL_AUTH_SAML_METADATA_URL: { + type: 'text', + reset: 'SOCIAL_AUTH_SAML_METADATA_URL' + }, SOCIAL_AUTH_SAML_SP_ENTITY_ID: { type: 'text', reset: 'SOCIAL_AUTH_SAML_SP_ENTITY_ID' @@ -20,7 +28,8 @@ export default function() { reset: 'SOCIAL_AUTH_SAML_SP_PUBLIC_CERT' }, SOCIAL_AUTH_SAML_SP_PRIVATE_KEY: { - type: 'text', + type: 'sensitive', + hasShowInputButton: true, reset: 'SOCIAL_AUTH_SAML_SP_PRIVATE_KEY' }, SOCIAL_AUTH_SAML_ORG_INFO: { @@ -50,14 +59,28 @@ export default function() { rows: 6, codeMirror: true, class: 'Form-textAreaLabel Form-formGroup--fullWidth' + }, + SOCIAL_AUTH_SAML_ORGANIZATION_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_SAML_ORGANIZATION_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel 
Form-formGroup--fullWidth' + }, + SOCIAL_AUTH_SAML_TEAM_MAP: { + type: 'textarea', + reset: 'SOCIAL_AUTH_SAML_TEAM_MAP', + rows: 6, + codeMirror: true, + class: 'Form-textAreaLabel Form-formGroup--fullWidth' } }, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -69,3 +92,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/configuration.block.less b/awx/ui/client/src/configuration/configuration.block.less index edf80e01a8..f06aa21d01 100644 --- a/awx/ui/client/src/configuration/configuration.block.less +++ b/awx/ui/client/src/configuration/configuration.block.less @@ -1,4 +1,5 @@ @import "./client/src/shared/branding/colors.default.less"; +@import "../shared/branding/colors.less"; .Form-resetValue, .Form-resetFile { text-transform: uppercase; @@ -11,6 +12,19 @@ float: right } +.Form-resetAll { + border: none; + padding: 0; + background-color: @white; + margin-right: auto; + color: @default-link; + font-size: 12px; + + &:hover { + color: @default-link-hov; + } +} + .Form-tab { min-width: 77px; } @@ -20,15 +34,36 @@ margin-left: 0; } -.Form-nav--dropdown { - width: 175px; +.Form-nav--dropdownContainer { + width: 285px; margin-top: -52px; margin-bottom: 22px; margin-left: auto; + display: flex; + justify-content: space-between; +} + +@media (max-width: 900px) { + .Form-nav--dropdownContainer { + margin: 0; + } +} + +.Form-nav--dropdown { + width: 60%; +} + +.Form-nav--dropdownLabel { + text-transform: uppercase; + color: @default-interface-txt; + font-size: 14px; + font-weight: bold; + padding-right: 5px; + padding-top: 5px; } .Form-tabRow { - width: 80%; + display: flex; } input.Form-filePicker { @@ -49,3 +84,66 @@ input#filePickerText { border-radius: 0 5px 5px 0; background-color: #fff; } + +// Messagebar for system auditor role notifications 
+.Section-messageBar { + width: 120%; + margin-left: -20px; + padding: 10px; + color: @white; + background-color: @default-link; +} + +.Section-messageBar--close { + position: absolute; + right: 0; + background: none; + border: none; + color: @info-close; +} + +.Section-messageBar--close:hover { + color: @white; +} + +//Codemirror and more disabling - you can still tab into the field with this method though +textarea[disabled="disabled"] + div[id*="-container"]{ + pointer-events: none; + cursor: not-allowed; + + .CodeMirror { + cursor: not-allowed; + } + + .CodeMirror.cm-s-default, + .CodeMirror-line { + background-color: #f6f6f6; + } + + .CodeMirror-gutter.CodeMirror-lint-markers, + .CodeMirror-gutter.CodeMirror-linenumbers { + background-color: #ebebeb; + color: @b7grey; + } + + .CodeMirror-lines { + cursor: default; + } + + .CodeMirror-cursors { + display: none; + } +} + +//Needed to show the not-allowed cursor over a Codemirror instance +.Form-formGroup--disabled { + cursor: not-allowed; + + // Filepicker and toggle disabling + .Form-filePicker--pickerButton, .Form-filePicker--textBox, + .ScheduleToggle { + pointer-events: none; + cursor: not-allowed; + } + +} diff --git a/awx/ui/client/src/configuration/configuration.controller.js b/awx/ui/client/src/configuration/configuration.controller.js index fd5ddcc017..f9319300a9 100644 --- a/awx/ui/client/src/configuration/configuration.controller.js +++ b/awx/ui/client/src/configuration/configuration.controller.js @@ -6,7 +6,7 @@ export default [ '$scope', '$rootScope', '$state', '$stateParams', '$timeout', '$q', 'Alert', 'ClearScope', - 'ConfigurationService', 'ConfigurationUtils', 'CreateDialog', 'CreateSelect2', 'ParseTypeChange', 'ProcessErrors', 'Store', + 'ConfigurationService', 'ConfigurationUtils', 'CreateDialog', 'CreateSelect2', 'i18n', 'ParseTypeChange', 'ProcessErrors', 'Store', 'Wait', 'configDataResolve', //Form definitions 'configurationAzureForm', @@ -17,12 +17,14 @@ export default [ 
'configurationLdapForm', 'configurationRadiusForm', 'configurationSamlForm', + 'systemActivityStreamForm', + 'systemLoggingForm', + 'systemMiscForm', 'ConfigurationJobsForm', - 'ConfigurationSystemForm', 'ConfigurationUiForm', function( $scope, $rootScope, $state, $stateParams, $timeout, $q, Alert, ClearScope, - ConfigurationService, ConfigurationUtils, CreateDialog, CreateSelect2, ParseTypeChange, ProcessErrors, Store, + ConfigurationService, ConfigurationUtils, CreateDialog, CreateSelect2, i18n, ParseTypeChange, ProcessErrors, Store, Wait, configDataResolve, //Form definitions configurationAzureForm, @@ -33,8 +35,10 @@ export default [ configurationLdapForm, configurationRadiusForm, configurationSamlForm, + systemActivityStreamForm, + systemLoggingForm, + systemMiscForm, ConfigurationJobsForm, - ConfigurationSystemForm, ConfigurationUiForm ) { var vm = this; @@ -48,8 +52,10 @@ export default [ 'ldap': configurationLdapForm, 'radius': configurationRadiusForm, 'saml': configurationSamlForm, + 'activity_stream': systemActivityStreamForm, + 'logging': systemLoggingForm, + 'misc': systemMiscForm, 'jobs': ConfigurationJobsForm, - 'system': ConfigurationSystemForm, 'ui': ConfigurationUiForm }; @@ -61,7 +67,18 @@ export default [ if (data[key] !== null && typeof data[key] === 'object') { if (Array.isArray(data[key])) { //handle arrays - $scope[key] = ConfigurationUtils.arrayToList(data[key], key); + //having to do this particular check b/c + // we want the options w/o a space, and + // the ConfigurationUtils.arrayToList() + // does a string.split(', ') w/ an extra space + // behind the comma. 
+ if(key === "AD_HOC_COMMANDS"){ + $scope[key] = data[key].toString(); + } + else{ + $scope[key] = ConfigurationUtils.arrayToList(data[key], key); + } + } else { //handle nested objects if(ConfigurationUtils.isEmpty(data[key])) { @@ -84,19 +101,24 @@ export default [ lastForm: '', currentForm: '', currentAuth: '', + currentSystem: '', setCurrent: function(form) { this.lastForm = this.currentForm; this.currentForm = form; }, - setCurrentAuth: function(form) { - this.currentAuth = form; - this.setCurrent(this.currentAuth); - }, getCurrent: function() { return this.currentForm; }, currentFormName: function() { return 'configuration_' + this.currentForm + '_template_form'; + }, + setCurrentAuth: function(form) { + this.currentAuth = form; + this.setCurrent(this.currentAuth); + }, + setCurrentSystem: function(form) { + this.currentSystem = form; + this.setCurrent(this.currentSystem); } }; @@ -153,10 +175,10 @@ export default [ if(!$scope[formTracker.currentFormName()].$dirty) { active(setForm); } else { - var msg = 'You have unsaved changes. Would you like to proceed without saving?'; - var title = 'Warning: Unsaved Changes'; + var msg = i18n._('You have unsaved changes. 
Would you like to proceed without saving?'); + var title = i18n._('Warning: Unsaved Changes'); var buttons = [{ - label: "Discard changes", + label: i18n._("Discard changes"), "class": "btn Form-cancelButton", "id": "formmodal-cancel-button", onClick: function() { @@ -167,7 +189,7 @@ export default [ active(setForm); } }, { - label: "Save changes", + label: i18n._("Save changes"), onClick: function() { vm.formSave(); $scope[formTracker.currentFormName()].$setPristine(); @@ -182,6 +204,7 @@ export default [ } function active(setForm) { + // Authentication and System's sub-module dropdowns handled first: if (setForm === 'auth') { // Default to 'azure' on first load if (formTracker.currentAuth === '') { @@ -190,7 +213,15 @@ export default [ // If returning to auth tab reset current form to previously viewed formTracker.setCurrentAuth(formTracker.currentAuth); } - } else { + } else if (setForm === 'system') { + if (formTracker.currentSystem === '') { + formTracker.setCurrentSystem('misc'); + } else { + // If returning to system tab reset current form to previously viewed + formTracker.setCurrentSystem(formTracker.currentSystem); + } + } + else { formTracker.setCurrent(setForm); } vm.activeTab = setForm; @@ -206,10 +237,10 @@ export default [ var formCancel = function() { if ($scope[formTracker.currentFormName()].$dirty === true) { - var msg = 'You have unsaved changes. Would you like to proceed without saving?'; - var title = 'Warning: Unsaved Changes'; + var msg = i18n._('You have unsaved changes. 
Would you like to proceed without saving?'); + var title = i18n._('Warning: Unsaved Changes'); var buttons = [{ - label: "Discard changes", + label: i18n._("Discard changes"), "class": "btn Form-cancelButton", "id": "formmodal-cancel-button", onClick: function() { @@ -217,7 +248,7 @@ export default [ $state.go('setup'); } }, { - label: "Save changes", + label: i18n._("Save changes"), onClick: function() { $scope.formSave(); $('#FormModal-dialog').dialog('close'); @@ -264,13 +295,17 @@ export default [ ConfigurationService.patchConfiguration(payload) .then(function() { $scope[key] = $scope.configDataResolve[key].default; + if(key === "AD_HOC_COMMANDS"){ + $scope.AD_HOC_COMMANDS = $scope.AD_HOC_COMMANDS.toString(); + $scope.$broadcast('adhoc_populated', null, false); + } loginUpdate(); }) .catch(function(error) { ProcessErrors($scope, error, status, formDefs[formTracker.getCurrent()], { - hdr: 'Error!', - msg: 'There was an error resetting value. Returned status: ' + error.detail + hdr: i18n._('Error!'), + msg: i18n._('There was an error resetting value. Returned status: ') + error.detail }); }) @@ -347,8 +382,8 @@ export default [ .catch(function(error, status) { ProcessErrors($scope, error, status, formDefs[formTracker.getCurrent()], { - hdr: 'Error!', - msg: 'Failed to save settings. Returned status: ' + status + hdr: i18n._('Error!'), + msg: i18n._('Failed to save settings. Returned status: ') + status }); saveDeferred.reject(error); }) @@ -362,6 +397,12 @@ export default [ $scope.toggleForm = function(key) { + if($rootScope.user_is_system_auditor) { + // Block system auditors from making changes + event.preventDefault(); + return; + } + $scope[key] = !$scope[key]; Wait('start'); var payload = {}; @@ -375,8 +416,8 @@ export default [ $scope[key] = !$scope[key]; ProcessErrors($scope, error, status, formDefs[formTracker.getCurrent()], { - hdr: 'Error!', - msg: 'Failed to save toggle settings. 
Returned status: ' + error.detail + hdr: i18n._('Error!'), + msg: i18n._('Failed to save toggle settings. Returned status: ') + error.detail }); }) .finally(function() { @@ -394,8 +435,8 @@ export default [ .catch(function(error) { ProcessErrors($scope, error, status, formDefs[formTracker.getCurrent()], { - hdr: 'Error!', - msg: 'There was an error resetting values. Returned status: ' + error.detail + hdr: i18n._('Error!'), + msg: i18n._('There was an error resetting values. Returned status: ') + error.detail }); }) .finally(function() { @@ -405,14 +446,14 @@ export default [ var resetAllConfirm = function() { var buttons = [{ - label: "Cancel", + label: i18n._("Cancel"), "class": "btn btn-default", "id": "formmodal-cancel-button", onClick: function() { $('#FormModal-dialog').dialog('close'); } }, { - label: "Confirm Reset", + label: i18n._("Confirm Reset"), onClick: function() { resetAll(); $('#FormModal-dialog').dialog('close'); @@ -420,21 +461,57 @@ export default [ "class": "btn btn-primary", "id": "formmodal-reset-button" }]; - var msg = 'This will reset all configuration values to their factory defaults. Are you sure you want to proceed?'; - var title = 'Confirm factory reset'; + var msg = i18n._('This will reset all configuration values to their factory defaults. 
Are you sure you want to proceed?'); + var title = i18n._('Confirm factory reset'); + triggerModal(msg, title, buttons); + }; + + var show_auditor_bar; + if($rootScope.user_is_system_auditor && Store('show_auditor_bar') !== false) { + show_auditor_bar = true; + } else { + show_auditor_bar = false; + } + + var updateMessageBarPrefs = function() { + vm.show_auditor_bar = false; + Store('show_auditor_bar', vm.show_auditor_bar); + }; + + var closeMessageBar = function() { + var msg = 'Are you sure you want to hide the notification bar?'; + var title = 'Warning: Closing notification bar'; + var buttons = [{ + label: "Cancel", + "class": "btn Form-cancelButton", + "id": "formmodal-cancel-button", + onClick: function() { + $('#FormModal-dialog').dialog('close'); + } + }, { + label: "OK", + onClick: function() { + $('#FormModal-dialog').dialog('close'); + updateMessageBarPrefs(); + }, + "class": "btn btn-primary", + "id": "formmodal-save-button" + }]; triggerModal(msg, title, buttons); }; angular.extend(vm, { activeTab: activeTab, activeTabCheck: activeTabCheck, + closeMessageBar: closeMessageBar, currentForm: currentForm, formCancel: formCancel, formTracker: formTracker, formSave: formSave, populateFromApi: populateFromApi, resetAllConfirm: resetAllConfirm, - triggerModal: triggerModal + show_auditor_bar: show_auditor_bar, + triggerModal: triggerModal, }); } ]; diff --git a/awx/ui/client/src/configuration/configuration.partial.html b/awx/ui/client/src/configuration/configuration.partial.html index 31a14d9e51..42beab808c 100644 --- a/awx/ui/client/src/configuration/configuration.partial.html +++ b/awx/ui/client/src/configuration/configuration.partial.html @@ -1,3 +1,9 @@ +
+ [markup lost in extraction: this hunk adds a notification bar reading "System auditors have read-only permissions in this section." — per the controller changes below, it is shown when vm.show_auditor_bar is true and dismissed via vm.closeMessageBar()]
diff --git a/awx/ui/client/src/configuration/configuration.route.js b/awx/ui/client/src/configuration/configuration.route.js index f6303966e6..7bf829ab53 100644 --- a/awx/ui/client/src/configuration/configuration.route.js +++ b/awx/ui/client/src/configuration/configuration.route.js @@ -25,6 +25,7 @@ }, ncyBreadcrumb: { + parent: 'setup', label: "Edit Configuration" }, controller: ConfigurationController, diff --git a/awx/ui/client/src/configuration/configuration.service.js b/awx/ui/client/src/configuration/configuration.service.js index 5c86f3beae..c770de2400 100644 --- a/awx/ui/client/src/configuration/configuration.service.js +++ b/awx/ui/client/src/configuration/configuration.service.js @@ -4,19 +4,35 @@ * All Rights Reserved *************************************************/ -export default ['GetBasePath', 'ProcessErrors', '$q', '$http', 'Rest', - function(GetBasePath, ProcessErrors, $q, $http, Rest) { +export default ['$rootScope', 'GetBasePath', 'ProcessErrors', '$q', '$http', 'Rest', + function($rootScope, GetBasePath, ProcessErrors, $q, $http, Rest) { var url = GetBasePath('settings'); return { getConfigurationOptions: function() { var deferred = $q.defer(); + var returnData = {}; + Rest.setUrl(url + '/all'); Rest.options() .success(function(data) { - var returnData = data.actions.PUT; - //LICENSE is read only, returning here explicitly for display - returnData.LICENSE = data.actions.GET.LICENSE; + // Compare GET actions with PUT actions and flag discrepancies + // for disabling in the UI + var getActions = data.actions.GET; + var getKeys = _.keys(getActions); + var putActions = data.actions.PUT; + + _.each(getKeys, function(key) { + if(putActions[key]) { + returnData[key] = putActions[key]; + } else { + returnData[key] = _.extend(getActions[key], { + required: false, + disabled: true + }); + } + }); + deferred.resolve(returnData); }) .error(function(error) { diff --git a/awx/ui/client/src/configuration/jobs-form/configuration-jobs.controller.js 
b/awx/ui/client/src/configuration/jobs-form/configuration-jobs.controller.js index ea2945ab75..708ea3c28c 100644 --- a/awx/ui/client/src/configuration/jobs-form/configuration-jobs.controller.js +++ b/awx/ui/client/src/configuration/jobs-form/configuration-jobs.controller.js @@ -6,6 +6,7 @@ export default [ '$scope', + '$rootScope', '$state', '$timeout', 'ConfigurationJobsForm', @@ -13,15 +14,18 @@ export default [ 'ConfigurationUtils', 'CreateSelect2', 'GenerateForm', + 'i18n', function( $scope, + $rootScope, $state, $timeout, ConfigurationJobsForm, ConfigurationService, ConfigurationUtils, CreateSelect2, - GenerateForm + GenerateForm, + i18n ) { var jobsVm = this; var generator = GenerateForm; @@ -35,6 +39,9 @@ export default [ }); }); + // Disable the save button for system auditors + form.buttons.save.disabled = $rootScope.user_is_system_auditor; + var keys = _.keys(form.fields); _.each(keys, function(key) { addFieldInfo(form, key); @@ -48,7 +55,10 @@ export default [ toggleSource: key, dataPlacement: 'top', dataTitle: $scope.$parent.configDataResolve[key].label, - required: $scope.$parent.configDataResolve[key].required + required: $scope.$parent.configDataResolve[key].required, + ngDisabled: $rootScope.user_is_system_auditor, + disabled: $scope.$parent.configDataResolve[key].disabled || null, + readonly: $scope.$parent.configDataResolve[key].readonly || null, }); } @@ -62,26 +72,33 @@ export default [ // Flag to avoid re-rendering and breaking Select2 dropdowns on tab switching var dropdownRendered = false; - $scope.$on('populated', function() { - var opts = []; - _.each(ConfigurationUtils.listToArray($scope.$parent.AD_HOC_COMMANDS), function(command) { - opts.push({ - id: command, - text: command - }); - }); + + function populateAdhocCommand(flag){ + var ad_hoc_commands = $scope.$parent.AD_HOC_COMMANDS.split(','); + $scope.$parent.AD_HOC_COMMANDS = _.map(ad_hoc_commands, (item) => _.find($scope.$parent.AD_HOC_COMMANDS_options, { value: item })); + + if(flag 
!== undefined){ + dropdownRendered = flag; + } if(!dropdownRendered) { dropdownRendered = true; CreateSelect2({ element: '#configuration_jobs_template_AD_HOC_COMMANDS', multiple: true, - placeholder: 'Select commands', - opts: opts + placeholder: i18n._('Select commands') }); } + } + $scope.$on('adhoc_populated', function(e, data, flag) { + populateAdhocCommand(flag); }); + + $scope.$on('populated', function(e, data, flag) { + populateAdhocCommand(flag); + }); + // Fix for bug where adding selected opts causes form to be $dirty and triggering modal // TODO Find better solution for this bug $timeout(function(){ diff --git a/awx/ui/client/src/configuration/jobs-form/configuration-jobs.form.js b/awx/ui/client/src/configuration/jobs-form/configuration-jobs.form.js index db84e4233d..99e52498c2 100644 --- a/awx/ui/client/src/configuration/jobs-form/configuration-jobs.form.js +++ b/awx/ui/client/src/configuration/jobs-form/configuration-jobs.form.js @@ -4,7 +4,7 @@ * All Rights Reserved *************************************************/ - export default function() { + export default ['i18n', function(i18n) { return { showHeader: false, name: 'configuration_jobs_template', @@ -46,14 +46,26 @@ }, AWX_PROOT_ENABLED: { type: 'toggleSwitch', - } + }, + DEFAULT_JOB_TIMEOUT: { + type: 'text', + reset: 'DEFAULT_JOB_TIMEOUT', + }, + DEFAULT_INVENTORY_UPDATE_TIMEOUT: { + type: 'text', + reset: 'DEFAULT_INVENTORY_UPDATE_TIMEOUT', + }, + DEFAULT_PROJECT_UPDATE_TIMEOUT: { + type: 'text', + reset: 'DEFAULT_PROJECT_UPDATE_TIMEOUT', + }, }, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -64,4 +76,4 @@ } } }; - } + }]; diff --git a/awx/ui/client/src/configuration/main.js b/awx/ui/client/src/configuration/main.js index 74dce80c8f..f37bab3d84 100644 --- a/awx/ui/client/src/configuration/main.js +++ 
b/awx/ui/client/src/configuration/main.js @@ -20,8 +20,12 @@ import configurationLdapForm from './auth-form/sub-forms/auth-ldap.form.js'; import configurationRadiusForm from './auth-form/sub-forms/auth-radius.form.js'; import configurationSamlForm from './auth-form/sub-forms/auth-saml.form'; +//system sub-forms +import systemActivityStreamForm from './system-form/sub-forms/system-activity-stream.form.js'; +import systemLoggingForm from './system-form/sub-forms/system-logging.form.js'; +import systemMiscForm from './system-form/sub-forms/system-misc.form.js'; + import configurationJobsForm from './jobs-form/configuration-jobs.form'; -import configurationSystemForm from './system-form/configuration-system.form'; import configurationUiForm from './ui-form/configuration-ui.form'; export default @@ -36,10 +40,15 @@ angular.module('configuration', []) .factory('configurationLdapForm', configurationLdapForm) .factory('configurationRadiusForm', configurationRadiusForm) .factory('configurationSamlForm', configurationSamlForm) + //system forms + .factory('systemActivityStreamForm', systemActivityStreamForm) + .factory('systemLoggingForm', systemLoggingForm) + .factory('systemMiscForm', systemMiscForm) + //other forms .factory('ConfigurationJobsForm', configurationJobsForm) - .factory('ConfigurationSystemForm', configurationSystemForm) .factory('ConfigurationUiForm', configurationUiForm) + //helpers and services .factory('ConfigurationUtils', ConfigurationUtils) .service('ConfigurationService', configurationService) diff --git a/awx/ui/client/src/configuration/system-form/configuration-system.controller.js b/awx/ui/client/src/configuration/system-form/configuration-system.controller.js index 340e452870..3d6ebef0de 100644 --- a/awx/ui/client/src/configuration/system-form/configuration-system.controller.js +++ b/awx/ui/client/src/configuration/system-form/configuration-system.controller.js @@ -5,17 +5,119 @@ *************************************************/ export default [ - 
'$scope', '$state', 'AngularCodeMirror', 'ConfigurationSystemForm', 'ConfigurationService', 'ConfigurationUtils', 'GenerateForm', 'ParseTypeChange', + '$rootScope', '$scope', '$state', '$stateParams', '$timeout', + 'AngularCodeMirror', + 'systemActivityStreamForm', + 'systemLoggingForm', + 'systemMiscForm', + 'ConfigurationService', + 'ConfigurationUtils', + 'CreateSelect2', + 'GenerateForm', + 'i18n', function( - $scope, $state, AngularCodeMirror, ConfigurationSystemForm, ConfigurationService, ConfigurationUtils, GenerateForm, ParseTypeChange + $rootScope, $scope, $state, $stateParams, $timeout, + AngularCodeMirror, + systemActivityStreamForm, + systemLoggingForm, + systemMiscForm, + ConfigurationService, + ConfigurationUtils, + CreateSelect2, + GenerateForm, + i18n ) { var systemVm = this; - var generator = GenerateForm; - var form = ConfigurationSystemForm; - var keys = _.keys(form.fields); - _.each(keys, function(key) { - addFieldInfo(form, key); + var generator = GenerateForm; + var formTracker = $scope.$parent.vm.formTracker; + var dropdownValue = 'misc'; + var activeSystemForm = 'misc'; + + if ($stateParams.currentTab === 'system') { + formTracker.setCurrentSystem(activeSystemForm); + } + + var activeForm = function() { + if(!$scope.$parent[formTracker.currentFormName()].$dirty) { + systemVm.activeSystemForm = systemVm.dropdownValue; + formTracker.setCurrentSystem(systemVm.activeSystemForm); + } else { + var msg = i18n._('You have unsaved changes. 
Would you like to proceed without saving?'); + var title = i18n._('Warning: Unsaved Changes'); + var buttons = [{ + label: i18n._('Discard changes'), + "class": "btn Form-cancelButton", + "id": "formmodal-cancel-button", + onClick: function() { + $scope.$parent.vm.populateFromApi(); + $scope.$parent[formTracker.currentFormName()].$setPristine(); + systemVm.activeSystemForm = systemVm.dropdownValue; + formTracker.setCurrentSystem(systemVm.activeSystemForm); + $('#FormModal-dialog').dialog('close'); + } + }, { + label: i18n._('Save changes'), + onClick: function() { + $scope.$parent.vm.formSave() + .then(function() { + $scope.$parent[formTracker.currentFormName()].$setPristine(); + $scope.$parent.vm.populateFromApi(); + systemVm.activeSystemForm = systemVm.dropdownValue; + formTracker.setCurrentSystem(systemVm.activeSystemForm); + $('#FormModal-dialog').dialog('close'); + }); + }, + "class": "btn btn-primary", + "id": "formmodal-save-button" + }]; + $scope.$parent.vm.triggerModal(msg, title, buttons); + } + formTracker.setCurrentSystem(systemVm.activeSystemForm); + }; + + var dropdownOptions = [ + {label: i18n._('Misc. 
System'), value: 'misc'}, + {label: i18n._('Activity Stream'), value: 'activity_stream'}, + {label: i18n._('Logging'), value: 'logging'}, + ]; + + CreateSelect2({ + element: '#system-configure-dropdown-nav', + multiple: false, + }); + + var systemForms = [{ + formDef: systemLoggingForm, + id: 'system-logging-form' + }, { + formDef: systemActivityStreamForm, + id: 'system-activity-stream-form' + }, { + formDef: systemMiscForm, + id: 'system-misc-form' + }]; + + var forms = _.pluck(systemForms, 'formDef'); + _.each(forms, function(form) { + var keys = _.keys(form.fields); + _.each(keys, function(key) { + if($scope.$parent.configDataResolve[key].type === 'choice') { + // Create options for dropdowns + var optionsGroup = key + '_options'; + $scope.$parent[optionsGroup] = []; + _.each($scope.$parent.configDataResolve[key].choices, function(choice){ + $scope.$parent[optionsGroup].push({ + name: choice[0], + label: choice[1], + value: choice[0] + }); + }); + } + addFieldInfo(form, key); + }); + // Disable the save button for system auditors + form.buttons.save.disabled = $rootScope.user_is_system_auditor; }); function addFieldInfo(form, key) { @@ -25,45 +127,64 @@ export default [ name: key, toggleSource: key, dataPlacement: 'top', + placeholder: ConfigurationUtils.formatPlaceholder($scope.$parent.configDataResolve[key].placeholder, key) || null, dataTitle: $scope.$parent.configDataResolve[key].label, - required: $scope.$parent.configDataResolve[key].required + required: $scope.$parent.configDataResolve[key].required, + ngDisabled: $rootScope.user_is_system_auditor, + disabled: $scope.$parent.configDataResolve[key].disabled || null, + readonly: $scope.$parent.configDataResolve[key].readonly || null, }); } - generator.inject(form, { - id: 'configure-system-form', - mode: 'edit', - scope: $scope.$parent, - related: true - }); + $scope.$parent.parseType = 'json'; - - $scope.$on('populated', function() { - - // var fld = 'LICENSE'; - // var readOnly = true; - // 
$scope.$parent[fld + 'codeMirror'] = AngularCodeMirror(readOnly); - // $scope.$parent[fld + 'codeMirror'].addModes($AnsibleConfig.variable_edit_modes); - // $scope.$parent[fld + 'codeMirror'].showTextArea({ - // scope: $scope.$parent, - // model: fld, - // element: "configuration_system_template_LICENSE", - // lineNumbers: true, - // mode: 'json', - // }); - - $scope.$parent.parseType = 'json'; - ParseTypeChange({ + _.each(systemForms, function(form) { + generator.inject(form.formDef, { + id: form.id, + mode: 'edit', scope: $scope.$parent, - variable: 'LICENSE', - parse_variable: 'parseType', - field_id: 'configuration_system_template_LICENSE', - readOnly: true + related: true }); }); - angular.extend(systemVm, { + var dropdownRendered = false; + $scope.$on('populated', function() { + + var opts = []; + if($scope.$parent.LOG_AGGREGATOR_TYPE !== null) { + _.each(ConfigurationUtils.listToArray($scope.$parent.LOG_AGGREGATOR_TYPE), function(type) { + opts.push({ + id: type, + text: type + }); + }); + } + + if(!dropdownRendered) { + dropdownRendered = true; + CreateSelect2({ + element: '#configuration_logging_template_LOG_AGGREGATOR_TYPE', + multiple: true, + placeholder: i18n._('Select types'), + opts: opts + }); + } + + }); + + // Fix for bug where adding selected opts causes form to be $dirty and triggering modal + // TODO Find better solution for this bug + $timeout(function(){ + $scope.$parent.configuration_logging_template_form.$setPristine(); + }, 1000); + + angular.extend(systemVm, { + activeForm: activeForm, + activeSystemForm: activeSystemForm, + dropdownOptions: dropdownOptions, + dropdownValue: dropdownValue, + systemForms: systemForms }); } ]; diff --git a/awx/ui/client/src/configuration/system-form/configuration-system.partial.html b/awx/ui/client/src/configuration/system-form/configuration-system.partial.html index 7ff91b4441..0b039e761b 100644 --- a/awx/ui/client/src/configuration/system-form/configuration-system.partial.html +++ 
b/awx/ui/client/src/configuration/system-form/configuration-system.partial.html @@ -1,9 +1,34 @@
+ [markup lost in extraction: this hunk replaces the single system-form container with a "Sub Category" dropdown and three conditionally shown sub-form containers — per the controller changes below, the dropdown is #system-configure-dropdown-nav (options from systemVm.dropdownOptions, switching via systemVm.activeForm()), and the containers are #system-misc-form, #system-activity-stream-form, and #system-logging-form]
diff --git a/awx/ui/client/src/configuration/system-form/sub-forms/system-activity-stream.form.js b/awx/ui/client/src/configuration/system-form/sub-forms/system-activity-stream.form.js new file mode 100644 index 0000000000..3dc7fd89f7 --- /dev/null +++ b/awx/ui/client/src/configuration/system-form/sub-forms/system-activity-stream.form.js @@ -0,0 +1,38 @@ +/************************************************* + * Copyright (c) 2016 Ansible, Inc. + * + * All Rights Reserved + *************************************************/ + + export default ['i18n', function(i18n) { + return { + name: 'configuration_activity_stream_template', + showActions: true, + showHeader: false, + + fields: { + ACTIVITY_STREAM_ENABLED: { + type: 'toggleSwitch', + }, + ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC: { + type: 'toggleSwitch' + } + }, + + buttons: { + reset: { + ngClick: 'vm.resetAllConfirm()', + label: i18n._('Revert all to default'), + class: 'Form-resetAll' + }, + cancel: { + ngClick: 'vm.formCancel()', + }, + save: { + ngClick: 'vm.formSave()', + ngDisabled: true + } + } + }; + } +]; diff --git a/awx/ui/client/src/configuration/system-form/sub-forms/system-logging.form.js b/awx/ui/client/src/configuration/system-form/sub-forms/system-logging.form.js new file mode 100644 index 0000000000..f2ed9f54e3 --- /dev/null +++ b/awx/ui/client/src/configuration/system-form/sub-forms/system-logging.form.js @@ -0,0 +1,65 @@ +/************************************************* + * Copyright (c) 2016 Ansible, Inc. 
+ * + * All Rights Reserved + *************************************************/ + + export default ['i18n', function(i18n) { + return { + name: 'configuration_logging_template', + showActions: true, + showHeader: false, + + fields: { + LOG_AGGREGATOR_HOST: { + type: 'text', + reset: 'LOG_AGGREGATOR_HOST' + }, + LOG_AGGREGATOR_PORT: { + type: 'text', + reset: 'LOG_AGGREGATOR_PORT' + }, + LOG_AGGREGATOR_TYPE: { + type: 'select', + reset: 'LOG_AGGREGATOR_TYPE', + ngOptions: 'type.label for type in LOG_AGGREGATOR_TYPE_options track by type.value', + multiSelect: true + }, + LOG_AGGREGATOR_USERNAME: { + type: 'text', + reset: 'LOG_AGGREGATOR_USERNAME' + }, + LOG_AGGREGATOR_PASSWORD: { + type: 'sensitive', + hasShowInputButton: true, + reset: 'LOG_AGGREGATOR_PASSWORD' + }, + LOG_AGGREGATOR_LOGGERS: { + type: 'textarea', + reset: 'LOG_AGGREGATOR_LOGGERS' + }, + LOG_AGGREGATOR_INDIVIDUAL_FACTS: { + type: 'toggleSwitch', + }, + LOG_AGGREGATOR_ENABLED: { + type: 'toggleSwitch', + } + }, + + buttons: { + reset: { + ngClick: 'vm.resetAllConfirm()', + label: i18n._('Revert all to default'), + class: 'Form-resetAll' + }, + cancel: { + ngClick: 'vm.formCancel()', + }, + save: { + ngClick: 'vm.formSave()', + ngDisabled: true + } + } + }; + } +]; diff --git a/awx/ui/client/src/configuration/system-form/configuration-system.form.js b/awx/ui/client/src/configuration/system-form/sub-forms/system-misc.form.js similarity index 60% rename from awx/ui/client/src/configuration/system-form/configuration-system.form.js rename to awx/ui/client/src/configuration/system-form/sub-forms/system-misc.form.js index d0e4cc9d2b..892dfd0bc0 100644 --- a/awx/ui/client/src/configuration/system-form/configuration-system.form.js +++ b/awx/ui/client/src/configuration/system-form/sub-forms/system-misc.form.js @@ -4,10 +4,10 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { showHeader: false, - name: 
'configuration_system_template', + name: 'configuration_misc_template', showActions: true, fields: { @@ -18,28 +18,16 @@ export default function() { TOWER_ADMIN_ALERTS: { type: 'toggleSwitch', }, - ACTIVITY_STREAM_ENABLED: { - type: 'toggleSwitch', - }, - ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC: { - type: 'toggleSwitch' - }, ORG_ADMINS_CAN_SEE_ALL_USERS: { type: 'toggleSwitch', - }, - LICENSE: { - type: 'textarea', - rows: 6, - codeMirror: true, - class: 'Form-textAreaLabel Form-formGroup--fullWidth' } }, buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -51,3 +39,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/configuration/ui-form/configuration-ui.controller.js b/awx/ui/client/src/configuration/ui-form/configuration-ui.controller.js index 103c9b8040..2ae540625e 100644 --- a/awx/ui/client/src/configuration/ui-form/configuration-ui.controller.js +++ b/awx/ui/client/src/configuration/ui-form/configuration-ui.controller.js @@ -6,20 +6,24 @@ export default [ '$scope', + '$rootScope', '$state', '$timeout', 'ConfigurationUiForm', 'ConfigurationService', 'CreateSelect2', 'GenerateForm', + 'i18n', function( $scope, + $rootScope, $state, $timeout, ConfigurationUiForm, ConfigurationService, CreateSelect2, - GenerateForm + GenerateForm, + i18n ) { var uiVm = this; var generator = GenerateForm; @@ -43,6 +47,9 @@ addFieldInfo(form, key); }); + // Disable the save button for system auditors + form.buttons.save.disabled = $rootScope.user_is_system_auditor; + function addFieldInfo(form, key) { _.extend(form.fields[key], { awPopOver: $scope.$parent.configDataResolve[key].help_text, @@ -51,7 +58,10 @@ toggleSource: key, dataPlacement: 'top', dataTitle: $scope.$parent.configDataResolve[key].label, - required: $scope.$parent.configDataResolve[key].required + required: 
$scope.$parent.configDataResolve[key].required, + ngDisabled: $rootScope.user_is_system_auditor, + disabled: $scope.$parent.configDataResolve[key].disabled || null, + readonly: $scope.$parent.configDataResolve[key].readonly || null, }); } @@ -71,7 +81,7 @@ CreateSelect2({ element: '#configuration_ui_template_PENDO_TRACKING_STATE', multiple: false, - placeholder: 'Select commands', + placeholder: i18n._('Select commands'), opts: [{ id: $scope.$parent.PENDO_TRACKING_STATE, text: $scope.$parent.PENDO_TRACKING_STATE diff --git a/awx/ui/client/src/configuration/ui-form/configuration-ui.form.js b/awx/ui/client/src/configuration/ui-form/configuration-ui.form.js index eb61885d95..e566c076d1 100644 --- a/awx/ui/client/src/configuration/ui-form/configuration-ui.form.js +++ b/awx/ui/client/src/configuration/ui-form/configuration-ui.form.js @@ -4,7 +4,7 @@ * All Rights Reserved *************************************************/ -export default function() { +export default ['i18n', function(i18n) { return { showHeader: false, name: 'configuration_ui_template', @@ -32,8 +32,8 @@ export default function() { buttons: { reset: { ngClick: 'vm.resetAllConfirm()', - label: 'Reset All', - class: 'Form-button--left Form-cancelButton' + label: i18n._('Revert all to default'), + class: 'Form-resetAll' }, cancel: { ngClick: 'vm.formCancel()', @@ -45,3 +45,4 @@ export default function() { } }; } +]; diff --git a/awx/ui/client/src/controllers/Credentials.js b/awx/ui/client/src/controllers/Credentials.js index 1681205992..8fc0d4e55f 100644 --- a/awx/ui/client/src/controllers/Credentials.js +++ b/awx/ui/client/src/controllers/Credentials.js @@ -36,6 +36,35 @@ export function CredentialsList($scope, $rootScope, $location, $log, $scope.selected = []; } + $scope.$on(`${list.iterator}_options`, function(event, data){ + $scope.options = data.data.actions.GET; + optionsRequestDataProcessing(); + }); + + $scope.$watchCollection(`${$scope.list.name}`, function() { + optionsRequestDataProcessing(); + } 
+ ); + // iterate over the list and add fields like type label, after the + // OPTIONS request returns, or the list is sorted/paginated/searched + function optionsRequestDataProcessing(){ + $scope[list.name].forEach(function(item, item_idx) { + var itm = $scope[list.name][item_idx]; + + // Set the item type label + if (list.fields.kind && $scope.options && + $scope.options.hasOwnProperty('kind')) { + $scope.options.kind.choices.every(function(choice) { + if (choice[0] === item.kind) { + itm.kind_label = choice[1]; + return false; + } + return true; + }); + } + }); + } + $scope.addCredential = function() { $state.go('credentials.add'); }; @@ -84,7 +113,7 @@ CredentialsList.$inject = ['$scope', '$rootScope', '$location', '$log', export function CredentialsAdd($scope, $rootScope, $compile, $location, $log, $stateParams, CredentialForm, GenerateForm, Rest, Alert, ProcessErrors, - ClearScope, GetBasePath, GetChoices, Empty, KindChange, + ClearScope, GetBasePath, GetChoices, Empty, KindChange, BecomeMethodChange, OwnerChange, FormSave, $state, CreateSelect2) { ClearScope(); @@ -192,6 +221,10 @@ export function CredentialsAdd($scope, $rootScope, $compile, $location, $log, KindChange({ scope: $scope, form: form, reset: true }); }; + $scope.becomeMethodChange = function() { + BecomeMethodChange({ scope: $scope }); + }; + // Save $scope.formSave = function() { if ($scope[form.name + '_form'].$valid) { @@ -247,14 +280,14 @@ export function CredentialsAdd($scope, $rootScope, $compile, $location, $log, CredentialsAdd.$inject = ['$scope', '$rootScope', '$compile', '$location', '$log', '$stateParams', 'CredentialForm', 'GenerateForm', 'Rest', 'Alert', - 'ProcessErrors', 'ClearScope', 'GetBasePath', 'GetChoices', 'Empty', 'KindChange', + 'ProcessErrors', 'ClearScope', 'GetBasePath', 'GetChoices', 'Empty', 'KindChange', 'BecomeMethodChange', 'OwnerChange', 'FormSave', '$state', 'CreateSelect2' ]; export function CredentialsEdit($scope, $rootScope, $compile, $location, $log, 
$stateParams, CredentialForm, Rest, Alert, ProcessErrors, ClearScope, Prompt, - GetBasePath, GetChoices, KindChange, Empty, OwnerChange, FormSave, Wait, - $state, CreateSelect2, Authorization) { + GetBasePath, GetChoices, KindChange, BecomeMethodChange, Empty, OwnerChange, FormSave, Wait, + $state, CreateSelect2, Authorization, i18n) { ClearScope(); @@ -307,19 +340,15 @@ export function CredentialsEdit($scope, $rootScope, $compile, $location, $log, }); } - // if the credential is assigned to an organization, allow permission delegation - // do NOT use $scope.organization in a view directive to determine if a credential is associated with an org - // @todo why not? ^ and what is this type check for a number doing - should this be a type check for undefined? - $scope.disablePermissionAssignment = typeof($scope.organization) === 'number' ? false : true; - if ($scope.disablePermissionAssignment) { - $scope.permissionsTooltip = 'Credentials are only shared within an organization. Assign credentials to an organization to delegate credential permissions. The organization cannot be edited after credentials are assigned.'; - } - setAskCheckboxes(); - KindChange({ - scope: $scope, - form: form, - reset: false + $scope.$watch('organization', function(val) { + if (val === undefined) { + $scope.permissionsTooltip = i18n._('Credentials are only shared within an organization. Assign credentials to an organization to delegate credential permissions. 
The organization cannot be edited after credentials are assigned.'); + } else { + $scope.permissionsTooltip = ''; + } }); + + setAskCheckboxes(); OwnerChange({ scope: $scope }); $scope.$watch("ssh_key_data", function(val) { if (val === "" || val === null || val === undefined) { @@ -424,6 +453,13 @@ export function CredentialsEdit($scope, $rootScope, $compile, $location, $log, break; } } + + KindChange({ + scope: $scope, + form: form, + reset: false + }); + master.kind = $scope.kind; CreateSelect2({ @@ -489,6 +525,10 @@ export function CredentialsEdit($scope, $rootScope, $compile, $location, $log, KindChange({ scope: $scope, form: form, reset: true }); }; + $scope.becomeMethodChange = function() { + BecomeMethodChange({ scope: $scope }); + }; + $scope.formCancel = function() { $state.transitionTo('credentials'); }; @@ -583,6 +623,6 @@ export function CredentialsEdit($scope, $rootScope, $compile, $location, $log, CredentialsEdit.$inject = ['$scope', '$rootScope', '$compile', '$location', '$log', '$stateParams', 'CredentialForm', 'Rest', 'Alert', 'ProcessErrors', 'ClearScope', 'Prompt', 'GetBasePath', 'GetChoices', - 'KindChange', 'Empty', 'OwnerChange', - 'FormSave', 'Wait', '$state', 'CreateSelect2', 'Authorization' + 'KindChange', 'BecomeMethodChange', 'Empty', 'OwnerChange', + 'FormSave', 'Wait', '$state', 'CreateSelect2', 'Authorization', 'i18n', ]; diff --git a/awx/ui/client/src/controllers/Jobs.js b/awx/ui/client/src/controllers/Jobs.js index 9dce01c863..0015216389 100644 --- a/awx/ui/client/src/controllers/Jobs.js +++ b/awx/ui/client/src/controllers/Jobs.js @@ -13,7 +13,7 @@ export function JobsListController($state, $rootScope, $log, $scope, $compile, $stateParams, - ClearScope, Find, DeleteJob, RelaunchJob, AllJobsList, ScheduledJobsList, GetBasePath, Dataset, GetChoices) { + ClearScope, Find, DeleteJob, RelaunchJob, AllJobsList, ScheduledJobsList, GetBasePath, Dataset) { ClearScope(); @@ -28,37 +28,44 @@ export function JobsListController($state, 
$rootScope, $log, $scope, $compile, $ $scope[list.name] = $scope[`${list.iterator}_dataset`].results; $scope.showJobType = true; + } - _.forEach($scope[list.name], buildTooltips); - if ($scope.removeChoicesReady) { - $scope.removeChoicesReady(); + $scope.$on(`${list.iterator}_options`, function(event, data){ + $scope.options = data.data.actions.GET; + optionsRequestDataProcessing(); + }); + + $scope.$watchCollection(`${$scope.list.name}`, function() { + optionsRequestDataProcessing(); } - $scope.removeChoicesReady = $scope.$on('choicesReady', function() { - $scope[list.name].forEach(function(item, item_idx) { - var itm = $scope[list.name][item_idx]; + ); - // Set the item type label - if (list.fields.type) { - $scope.type_choices.every(function(choice) { - if (choice.value === item.type) { - itm.type_label = choice.label; + // iterate over the list and add fields like type label, after the + // OPTIONS request returns, or the list is sorted/paginated/searched + function optionsRequestDataProcessing(){ + + $scope[list.name].forEach(function(item, item_idx) { + var itm = $scope[list.name][item_idx]; + + if(item.summary_fields && item.summary_fields.source_workflow_job && + item.summary_fields.source_workflow_job.id){ + item.workflow_result_link = `/#/workflows/${item.summary_fields.source_workflow_job.id}`; + } + + // Set the item type label + if (list.fields.type && $scope.options && + $scope.options.hasOwnProperty('type')) { + $scope.options.type.choices.every(function(choice) { + if (choice[0] === item.type) { + itm.type_label = choice[1]; return false; } return true; }); } - }); - }); - - GetChoices({ - scope: $scope, - url: GetBasePath('unified_jobs'), - field: 'type', - variable: 'type_choices', - callback: 'choicesReady' + buildTooltips(itm); }); } - function buildTooltips(job) { job.status_tip = 'Job ' + job.status + ". 
Click for details."; } @@ -80,7 +87,7 @@ export function JobsListController($state, $rootScope, $log, $scope, $compile, $ typeId = job.inventory_source; } else if (job.type === 'project_update') { typeId = job.project; - } else if (job.type === 'job' || job.type === "system_job" || job.type === 'ad_hoc_command') { + } else if (job.type === 'job' || job.type === "system_job" || job.type === 'ad_hoc_command' || job.type === 'workflow_job') { typeId = job.id; } RelaunchJob({ scope: $scope, id: typeId, type: job.type, name: job.name }); @@ -128,5 +135,5 @@ export function JobsListController($state, $rootScope, $log, $scope, $compile, $ } JobsListController.$inject = ['$state', '$rootScope', '$log', '$scope', '$compile', '$stateParams', - 'ClearScope', 'Find', 'DeleteJob', 'RelaunchJob', 'AllJobsList', 'ScheduledJobsList', 'GetBasePath', 'Dataset', 'GetChoices' + 'ClearScope', 'Find', 'DeleteJob', 'RelaunchJob', 'AllJobsList', 'ScheduledJobsList', 'GetBasePath', 'Dataset' ]; diff --git a/awx/ui/client/src/controllers/Projects.js b/awx/ui/client/src/controllers/Projects.js index 429338cd5f..5bfbb39273 100644 --- a/awx/ui/client/src/controllers/Projects.js +++ b/awx/ui/client/src/controllers/Projects.js @@ -14,7 +14,7 @@ export function ProjectsList($scope, $rootScope, $location, $log, $stateParams, Rest, Alert, ProjectList, Prompt, ReturnToCaller, ClearScope, ProcessErrors, GetBasePath, ProjectUpdate, Wait, GetChoices, Empty, Find, GetProjectIcon, - GetProjectToolTip, $filter, $state, rbacUiControlService, Dataset, i18n) { + GetProjectToolTip, $filter, $state, rbacUiControlService, Dataset, i18n, qs) { var list = ProjectList, defaultUrl = GetBasePath('projects'); @@ -38,10 +38,39 @@ export function ProjectsList($scope, $rootScope, $location, $log, $stateParams, $rootScope.flashMessage = null; } - $scope.$watch(`${list.name}`, function() { - _.forEach($scope[list.name], buildTooltips); + $scope.$on(`${list.iterator}_options`, function(event, data){ + $scope.options = 
data.data.actions.GET; + optionsRequestDataProcessing(); }); + $scope.$watchCollection(`${$scope.list.name}`, function() { + optionsRequestDataProcessing(); + } + ); + + // iterate over the list and add fields like type label, after the + // OPTIONS request returns, or the list is sorted/paginated/searched + function optionsRequestDataProcessing(){ + $scope[list.name].forEach(function(item, item_idx) { + var itm = $scope[list.name][item_idx]; + + // Set the item type label + if (list.fields.scm_type && $scope.options && + $scope.options.hasOwnProperty('scm_type')) { + $scope.options.scm_type.choices.every(function(choice) { + if (choice[0] === item.scm_type) { + itm.type_label = choice[1]; + return false; + } + return true; + }); + } + + buildTooltips(itm); + + }); + } + function buildTooltips(project) { project.statusIcon = GetProjectIcon(project.status); project.statusTip = GetProjectToolTip(project.status); @@ -66,6 +95,15 @@ export function ProjectsList($scope, $rootScope, $location, $log, $stateParams, } } + $scope.reloadList = function(){ + let path = GetBasePath(list.basePath) || GetBasePath(list.name); + qs.search(path, $stateParams[`${list.iterator}_search`]) + .then(function(searchResponse) { + $scope[`${list.iterator}_dataset`] = searchResponse.data; + $scope[list.name] = $scope[`${list.iterator}_dataset`].results; + }); + }; + $scope.$on(`ws-jobs`, function(e, data) { var project; $log.debug(data); @@ -77,7 +115,7 @@ export function ProjectsList($scope, $rootScope, $location, $log, $stateParams, $log.debug('Received event for project: ' + project.name); $log.debug('Status changed to: ' + data.status); if (data.status === 'successful' || data.status === 'failed') { - $state.go('.', null, { reload: true }); + $scope.reloadList(); } else { project.scm_update_tooltip = "SCM update currently running"; project.scm_type_class = "btn-disabled"; @@ -147,13 +185,15 @@ export function ProjectsList($scope, $rootScope, $location, $log, $stateParams, if 
(parseInt($state.params.project_id) === id) { $state.go("^", null, { reload: true }); } else { - // @issue: OLD SEARCH - // $scope.search(list.iterator); + $state.go('.', null, {reload: true}); } }) .error(function (data, status) { ProcessErrors($scope, data, status, null, { hdr: i18n._('Error!'), msg: i18n.sprintf(i18n._('Call to %s failed. DELETE returned status: '), url) + status }); + }) + .finally(function() { + Wait('stop'); }); }; @@ -261,7 +301,7 @@ export function ProjectsList($scope, $rootScope, $location, $log, $stateParams, ProjectsList.$inject = ['$scope', '$rootScope', '$location', '$log', '$stateParams', 'Rest', 'Alert', 'ProjectList', 'Prompt', 'ReturnToCaller', 'ClearScope', 'ProcessErrors', 'GetBasePath', 'ProjectUpdate', 'Wait', 'GetChoices', 'Empty', 'Find', 'GetProjectIcon', - 'GetProjectToolTip', '$filter', '$state', 'rbacUiControlService', 'Dataset', 'i18n' + 'GetProjectToolTip', '$filter', '$state', 'rbacUiControlService', 'Dataset', 'i18n', 'QuerySet' ]; export function ProjectsAdd($scope, $rootScope, $compile, $location, $log, @@ -281,7 +321,7 @@ export function ProjectsAdd($scope, $rootScope, $compile, $location, $log, .success(function(data) { if (!data.actions.POST) { $state.go("^"); - Alert('Permission Error', 'You do not have permission to add a project.', 'alert-info'); + Alert(i18n._('Permission Error'), i18n._('You do not have permission to add a project.'), 'alert-info'); } }); @@ -465,7 +505,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, }); $scope.project_local_paths = opts; $scope.local_path = $scope.project_local_paths[0]; - $scope.base_dir = 'You do not have access to view this property'; + $scope.base_dir = i18n._('You do not have access to view this property'); $scope.$emit('pathsReady'); } @@ -555,7 +595,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, }) .error(function (data, status) { ProcessErrors($scope, data, status, form, { hdr: i18n._('Error!'), - 
msg: i18n._('Failed to retrieve project: ') + id + i18n._('. GET status: ') + status + msg: i18n.sprintf(i18n._('Failed to retrieve project: %s. GET status: '), id) + status }); }); }); @@ -620,7 +660,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, $state.go($state.current, {}, { reload: true }); }) .error(function(data, status) { - ProcessErrors($scope, data, status, form, { hdr: 'Error!', msg: 'Failed to update project: ' + id + '. PUT status: ' + status }); + ProcessErrors($scope, data, status, form, { hdr: i18n._('Error!'), msg: i18n.sprintf(i18n._('Failed to update project: %s. PUT status: '), id) + status }); }); }; @@ -638,7 +678,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, }) .error(function(data, status) { $('#prompt-modal').modal('hide'); - ProcessErrors($scope, data, status, null, { hdr: 'Error!', msg: 'Call to ' + url + ' failed. POST returned status: ' + status }); + ProcessErrors($scope, data, status, null, { hdr: i18n._('Error!'), msg: i18n.sprintf(i18n._('Call to %s failed. POST returned status: '), url) + status }); }); }; @@ -646,7 +686,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, hdr: i18n._('Delete'), body: '
' + i18n.sprintf(i18n._('Are you sure you want to remove the %s below from %s?'), title, $scope.name) + '
' + '
' + name + '
', action: action, - actionText: 'DELETE' + actionText: i18n._('DELETE') }); }; @@ -654,7 +694,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, if ($scope.scm_type) { $scope.pathRequired = ($scope.scm_type.value === 'manual') ? true : false; $scope.scmRequired = ($scope.scm_type.value !== 'manual') ? true : false; - $scope.scmBranchLabel = ($scope.scm_type.value === 'svn') ? 'Revision #' : 'SCM Branch'; + $scope.scmBranchLabel = ($scope.scm_type.value === 'svn') ? i18n._('Revision #') : i18n._('SCM Branch'); } // Dynamically update popover values @@ -690,7 +730,7 @@ export function ProjectsEdit($scope, $rootScope, $compile, $location, $log, if ($scope.project_obj.scm_type === "Manual" || Empty($scope.project_obj.scm_type)) { // ignore } else if ($scope.project_obj.status === 'updating' || $scope.project_obj.status === 'running' || $scope.project_obj.status === 'pending') { - Alert('Update in Progress', i18n._('The SCM update process is running.'), 'alert-info'); + Alert(i18n._('Update in Progress'), i18n._('The SCM update process is running.'), 'alert-info'); } else { ProjectUpdate({ scope: $scope, project_id: $scope.project_obj.id }); } diff --git a/awx/ui/client/src/controllers/Teams.js b/awx/ui/client/src/controllers/Teams.js index 0e3a47d855..464a10ee10 100644 --- a/awx/ui/client/src/controllers/Teams.js +++ b/awx/ui/client/src/controllers/Teams.js @@ -217,6 +217,7 @@ export function TeamsEdit($scope, $rootScope, $stateParams, $rootScope.flashMessage = null; if ($scope[form.name + '_form'].$valid) { var data = processNewData(form.fields); + Rest.setUrl(defaultUrl); Rest.put(data).success(function() { $state.go($state.current, null, { reload: true }); }) diff --git a/awx/ui/client/src/controllers/Users.js b/awx/ui/client/src/controllers/Users.js index 61ffb69d2e..a305a3e133 100644 --- a/awx/ui/client/src/controllers/Users.js +++ b/awx/ui/client/src/controllers/Users.js @@ -91,17 +91,17 @@ export function UsersList($scope, 
$rootScope, $stateParams, }) .error(function(data, status) { ProcessErrors($scope, data, status, null, { - hdr: 'Error!', - msg: 'Call to ' + url + ' failed. DELETE returned status: ' + status + hdr: i18n._('Error!'), + msg: i18n.sprintf(i18n._('Call to %s failed. DELETE returned status: '), url) + status }); }); }; Prompt({ - hdr: 'Delete', - body: '
Are you sure you want to delete the user below?
' + $filter('sanitize')(name) + '
', + hdr: i18n._('Delete'), + body: '
' + i18n._('Are you sure you want to delete the user below?') + '
' + $filter('sanitize')(name) + '
', action: action, - actionText: 'DELETE' + actionText: i18n._('DELETE') }); }; } @@ -114,7 +114,7 @@ UsersList.$inject = ['$scope', '$rootScope', '$stateParams', export function UsersAdd($scope, $rootScope, $stateParams, UserForm, GenerateForm, Rest, Alert, ProcessErrors, ReturnToCaller, ClearScope, - GetBasePath, ResetForm, Wait, CreateSelect2, $state, $location) { + GetBasePath, ResetForm, Wait, CreateSelect2, $state, $location, i18n) { ClearScope(); @@ -138,7 +138,7 @@ export function UsersAdd($scope, $rootScope, $stateParams, UserForm, .success(function(data) { if (!data.actions.POST) { $state.go("^"); - Alert('Permission Error', 'You do not have permission to add a user.', 'alert-info'); + Alert(i18n._('Permission Error'), i18n._('You do not have permission to add a user.'), 'alert-info'); } }); @@ -171,7 +171,7 @@ export function UsersAdd($scope, $rootScope, $stateParams, UserForm, .success(function(data) { var base = $location.path().replace(/^\//, '').split('/')[0]; if (base === 'users') { - $rootScope.flashMessage = 'New user successfully created!'; + $rootScope.flashMessage = i18n._('New user successfully created!'); $rootScope.$broadcast("EditIndicatorChange", "users", data.id); $state.go('users.edit', { user_id: data.id }, { reload: true }); } else { @@ -179,10 +179,10 @@ export function UsersAdd($scope, $rootScope, $stateParams, UserForm, } }) .error(function(data, status) { - ProcessErrors($scope, data, status, form, { hdr: 'Error!', msg: 'Failed to add new user. POST returned status: ' + status }); + ProcessErrors($scope, data, status, form, { hdr: i18n._('Error!'), msg: i18n._('Failed to add new user. 
POST returned status: ') + status }); }); } else { - $scope.organization_name_api_error = 'A value is required'; + $scope.organization_name_api_error = i18n._('A value is required'); } } }; @@ -201,7 +201,7 @@ export function UsersAdd($scope, $rootScope, $stateParams, UserForm, UsersAdd.$inject = ['$scope', '$rootScope', '$stateParams', 'UserForm', 'GenerateForm', 'Rest', 'Alert', 'ProcessErrors', 'ReturnToCaller', 'ClearScope', 'GetBasePath', - 'ResetForm', 'Wait', 'CreateSelect2', '$state', '$location' + 'ResetForm', 'Wait', 'CreateSelect2', '$state', '$location', 'i18n' ]; export function UsersEdit($scope, $rootScope, $location, @@ -221,9 +221,12 @@ export function UsersEdit($scope, $rootScope, $location, init(); function init() { + $scope.hidePagination = false; + $scope.hideSmartSearch = false; $scope.user_type_options = user_type_options; $scope.user_type = user_type_options[0]; $scope.$watch('user_type', user_type_sync($scope)); + $scope.$watch('is_superuser', hidePermissionsTabSmartSearchAndPaginationIfSuperUser($scope)); Rest.setUrl(defaultUrl); Wait('start'); Rest.get(defaultUrl).success(function(data) { @@ -247,6 +250,7 @@ export function UsersEdit($scope, $rootScope, $location, } $scope.user_obj = data; + $scope.name = data.username; CreateSelect2({ element: '#user_user_type', @@ -264,13 +268,26 @@ export function UsersEdit($scope, $rootScope, $location, }) .error(function(data, status) { ProcessErrors($scope, data, status, null, { - hdr: 'Error!', - msg: 'Failed to retrieve user: ' + - $stateParams.id + '. GET status: ' + status + hdr: i18n._('Error!'), + msg: i18n.sprintf(i18n._('Failed to retrieve user: %s. 
GET status: '), $stateParams.id) + status }); }); } + // Organizations and Teams tab pagination is hidden through other mechanism + function hidePermissionsTabSmartSearchAndPaginationIfSuperUser(scope) { + return function(isSuperuserNewValue) { + let shouldHide = isSuperuserNewValue; + if (shouldHide === true) { + scope.hidePagination = true; + scope.hideSmartSearch = true; + } else if (shouldHide === false) { + scope.hidePagination = false; + scope.hideSmartSearch = false; + } + }; + } + function setScopeFields(data) { _(data) @@ -312,16 +329,15 @@ export function UsersEdit($scope, $rootScope, $location, $scope.formSave = function() { $rootScope.flashMessage = null; if ($scope[form.name + '_form'].$valid) { - Rest.setUrl(defaultUrl + id + '/'); + Rest.setUrl(defaultUrl + '/'); var data = processNewData(form.fields); Rest.put(data).success(function() { $state.go($state.current, null, { reload: true }); }) .error(function(data, status) { ProcessErrors($scope, data, status, null, { - hdr: 'Error!', - msg: 'Failed to retrieve user: ' + - $stateParams.id + '. GET status: ' + status + hdr: i18n._('Error!'), + msg: i18n.sprintf(i18n._('Failed to retrieve user: %s. 
GET status: '), $stateParams.id) + status }); }); } diff --git a/awx/ui/client/src/credentials/ownerList.block.less b/awx/ui/client/src/credentials/ownerList.block.less new file mode 100644 index 0000000000..64f76db17b --- /dev/null +++ b/awx/ui/client/src/credentials/ownerList.block.less @@ -0,0 +1,36 @@ +/** @define OwnerList */ +@import "./client/src/shared/branding/colors.default.less"; + +.OwnerList { + display: flex; + flex-wrap: wrap; + align-items: flex-start; +} + +.OwnerList-seeBase { + display: flex; + max-width: 100%; + + color: @default-link; + text-transform: uppercase; + padding: 2px 15px; + cursor: pointer; + border-radius: 5px; + font-size: 11px; +} + +.OwnerList-seeBase:hover { + color: @default-link-hov; +} + +.OwnerList-seeLess { + .OwnerList-seeBase; +} + +.OwnerList-seeMore { + .OwnerList-seeBase; +} + +.OwnerList-Container { + margin-right: 5px; +} diff --git a/awx/ui/client/src/credentials/ownerList.partial.html b/awx/ui/client/src/credentials/ownerList.partial.html index 1ed46c081b..5c73ac4d15 100644 --- a/awx/ui/client/src/credentials/ownerList.partial.html +++ b/awx/ui/client/src/credentials/ownerList.partial.html @@ -1,5 +1,12 @@ - + \ No newline at end of file diff --git a/awx/ui/client/src/dashboard/hosts/dashboard-hosts-edit.controller.js b/awx/ui/client/src/dashboard/hosts/dashboard-hosts-edit.controller.js index f77764903b..3edbe6de90 100644 --- a/awx/ui/client/src/dashboard/hosts/dashboard-hosts-edit.controller.js +++ b/awx/ui/client/src/dashboard/hosts/dashboard-hosts-edit.controller.js @@ -7,9 +7,7 @@ export default ['$scope', '$state', '$stateParams', 'DashboardHostsForm', 'GenerateForm', 'ParseTypeChange', 'DashboardHostService', 'host', function($scope, $state, $stateParams, DashboardHostsForm, GenerateForm, ParseTypeChange, DashboardHostService, host){ - var generator = GenerateForm, - form = DashboardHostsForm; - $scope.parseType = 'yaml'; + $scope.parseType = 'yaml'; $scope.formCancel = function(){ $state.go('^', null, 
{reload: true}); }; @@ -33,11 +31,10 @@ }; var init = function(){ - $scope.host = host; - generator.inject(form, {mode: 'edit', related: false, scope: $scope}); - $scope.name = host.name; - $scope.description = host.description; - $scope.variables = host.variables === '' ? '---' : host.variables; + $scope.host = host.data; + $scope.name = host.data.name; + $scope.description = host.data.description; + $scope.variables = getVars(host.data.variables); ParseTypeChange({ scope: $scope, field_id: 'host_variables', @@ -45,5 +42,35 @@ }); }; + // Adding this function because sometimes extra vars are returned to the + UI as a string (ex: "foo: bar"), and other times as a + json-object-string (ex: "{"foo": "bar"}"). CodeMirror wouldn't know + how to prettify the latter. The latter occurs when host vars were + system generated and not user-input (such as adding a cloud host). + function getVars(str){ + + // Quick function to test if the host vars are a json-object-string, + by testing if they can be converted to a JSON object without error. 
+ function IsJsonString(str) { + try { + JSON.parse(str); + } catch (e) { + return false; + } + return true; + } + + if(str === ''){ + return '---'; + } + else if(IsJsonString(str)){ + str = JSON.parse(str); + return jsyaml.safeDump(str); + } + else if(!IsJsonString(str)){ + return str; + } + } + init(); }]; diff --git a/awx/ui/client/src/dashboard/hosts/dashboard-hosts-list.controller.js b/awx/ui/client/src/dashboard/hosts/dashboard-hosts-list.controller.js index d07c544942..3ebf7dd370 100644 --- a/awx/ui/client/src/dashboard/hosts/dashboard-hosts-list.controller.js +++ b/awx/ui/client/src/dashboard/hosts/dashboard-hosts-list.controller.js @@ -39,7 +39,7 @@ export default ['$scope', '$state', '$stateParams', 'GetBasePath', 'DashboardHos } $scope.editHost = function(id) { - $state.go('dashboardHosts.edit', { id: id }); + $state.go('dashboardHosts.edit', { host_id: id }); }; $scope.toggleHostEnabled = function(host) { diff --git a/awx/ui/client/src/dashboard/hosts/dashboard-hosts.form.js b/awx/ui/client/src/dashboard/hosts/dashboard-hosts.form.js index 6af1326980..659ccdbe44 100644 --- a/awx/ui/client/src/dashboard/hosts/dashboard-hosts.form.js +++ b/awx/ui/client/src/dashboard/hosts/dashboard-hosts.form.js @@ -12,6 +12,7 @@ export default function(){ formLabelSize: 'col-lg-3', formFieldSize: 'col-lg-9', iterator: 'host', + basePath: 'hosts', headerFields:{ enabled: { //flag: 'host.enabled', diff --git a/awx/ui/client/src/dashboard/hosts/dashboard-hosts.list.js b/awx/ui/client/src/dashboard/hosts/dashboard-hosts.list.js index 0c3c0adb9d..dd4531c98e 100644 --- a/awx/ui/client/src/dashboard/hosts/dashboard-hosts.list.js +++ b/awx/ui/client/src/dashboard/hosts/dashboard-hosts.list.js @@ -38,21 +38,20 @@ export default [ 'i18n', function(i18n){ ngClick: 'editHost(host.id)' }, inventory_name: { - label: 'Inventory', + label: i18n._('Inventory'), sourceModel: 'inventory', sourceField: 'name', columnClass: 'col-lg-5 col-md-4 col-sm-4 hidden-xs elllipsis', - linkTo: "{{ 
'/#/inventories/' + host.inventory_id }}", - searchable: false + linkTo: "{{ '/#/inventories/' + host.inventory_id }}" }, enabled: { - label: 'Status', + label: i18n._('Status'), columnClass: 'List-staticColumn--toggle', type: 'toggle', ngClick: 'toggleHostEnabled(host)', nosort: true, - awToolTip: "

Indicates if a host is available and should be included in running jobs.

For hosts that are part of an external inventory, this flag cannot be changed. It will be set by the inventory sync process.

", - dataTitle: 'Host Enabled', + awToolTip: "

" + i18n._("Indicates if a host is available and should be included in running jobs.") + "

" + i18n._("For hosts that are part of an external inventory, this flag cannot be changed. It will be set by the inventory sync process.") + "

", + dataTitle: i18n._('Host Enabled'), } }, @@ -60,10 +59,10 @@ export default [ 'i18n', function(i18n){ columnClass: 'col-lg-2 col-md-3 col-sm-3 col-xs-4', edit: { - label: 'Edit', + label: i18n._('Edit'), ngClick: 'editHost(host.id)', icon: 'icon-edit', - awToolTip: 'Edit host', + awToolTip: i18n._('Edit host'), dataPlacement: 'top' } }, diff --git a/awx/ui/client/src/dashboard/hosts/main.js b/awx/ui/client/src/dashboard/hosts/main.js index 935dec2d49..8b383ed5bb 100644 --- a/awx/ui/client/src/dashboard/hosts/main.js +++ b/awx/ui/client/src/dashboard/hosts/main.js @@ -23,7 +23,9 @@ angular.module('dashboardHosts', []) name: 'dashboardHosts', url: '/home/hosts', lazyLoad: () => stateDefinitions.generateTree({ - url: '/home/hosts', + urls: { + list: '/home/hosts' + }, parent: 'dashboardHosts', modes: ['edit'], list: 'DashboardHostsList', @@ -32,6 +34,17 @@ angular.module('dashboardHosts', []) list: listController, edit: editController }, + resolve: { + edit: { + host: ['Rest', '$stateParams', 'GetBasePath', + function(Rest, $stateParams, GetBasePath) { + let path = GetBasePath('hosts') + $stateParams.host_id; + Rest.setUrl(path); + return Rest.get(); + } + ] + } + }, data: { activityStream: true, activityStreamTarget: 'host' diff --git a/awx/ui/client/src/dashboard/lists/dashboard-list.block.less b/awx/ui/client/src/dashboard/lists/dashboard-list.block.less index 102188034d..0903b01c74 100644 --- a/awx/ui/client/src/dashboard/lists/dashboard-list.block.less +++ b/awx/ui/client/src/dashboard/lists/dashboard-list.block.less @@ -32,34 +32,21 @@ } .DashboardList-viewAll { - color: @btn-txt; - background-color: @btn-bg; - font-size: 12px; - border: 1px solid @default-icon-hov; - border-radius: 5px; + font-size: 11px; margin-right: 15px; - margin-top: 10px; + margin-top: 13px; margin-bottom: 10px; padding-left: 10px; padding-right: 10px; padding-bottom: 5px; padding-top: 5px; - transition: background-color 0.2s; -} - -.DashboardList-viewAll:hover { - color: @btn-txt; - 
background-color: @btn-bg-hov; -} - -.DashboardList-viewAll:focus { - color: @btn-txt; } .DashboardList-container { flex: 1; width: 100%; padding: 20px; + padding-top: 0; } .DashboardList-tableHeader--name { diff --git a/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.directive.js b/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.directive.js index 928f03a1c4..82aca5b377 100644 --- a/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.directive.js +++ b/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.directive.js @@ -47,7 +47,7 @@ export default }; scope.editJobTemplate = function (jobTemplateId) { - $state.go('templates.editJobTemplate', {id: jobTemplateId}); + $state.go('templates.editJobTemplate', {job_template_id: jobTemplateId}); }; } }]; diff --git a/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.partial.html b/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.partial.html index 77ba2f41ac..8b404d9076 100644 --- a/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.partial.html +++ b/awx/ui/client/src/dashboard/lists/job-templates/job-templates-list.partial.html @@ -54,6 +54,6 @@

No job templates were recently used.
- You can create a job template here.

+ You can create a job template here.

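Nearly every hunk in this patch applies the same internationalization pattern: wrap user-facing strings in `i18n._()` and build parameterized messages with `i18n.sprintf()` instead of string concatenation. A minimal sketch of the idea, using a stand-in `i18n` object (the real AWX service is backed by a translation catalog; the shapes below are assumptions for illustration only):

```javascript
// Stand-in for the i18n service injected throughout these controllers.
// i18n._() marks a string for extraction/translation (identity here);
// i18n.sprintf() interpolates values into a single format string.
const i18n = {
    _: (s) => s,
    sprintf: (fmt, ...args) => {
        let i = 0;
        return fmt.replace(/%s/g, () => String(args[i++]));
    }
};

// Before: concatenation splits the sentence into fragments that
// translators see out of context and cannot reorder.
const before = i18n._('Failed to retrieve project: ') + 42 +
    i18n._('. GET status: ') + 500;

// After: one complete format string, as used throughout this patch.
const after = i18n.sprintf(
    i18n._('Failed to retrieve project: %s. GET status: '), 42) + 500;
```

Keeping the sentence whole in a single format string lets translations reorder the `%s` placeholder as their grammar requires, which is why the hunks above convert `'Failed to update project: ' + id + '. PUT status: '`-style concatenations into `i18n.sprintf(i18n._('…%s…'), id)` calls.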
diff --git a/awx/ui/client/src/footer/footer.partial.html b/awx/ui/client/src/footer/footer.partial.html index 9aaaeb7f75..4a34bde28a 100644 --- a/awx/ui/client/src/footer/footer.partial.html +++ b/awx/ui/client/src/footer/footer.partial.html @@ -1,3 +1,3 @@ diff --git a/awx/ui/client/src/forms/Credentials.js b/awx/ui/client/src/forms/Credentials.js index 7845d44daf..ccea4dca07 100644 --- a/awx/ui/client/src/forms/Credentials.js +++ b/awx/ui/client/src/forms/Credentials.js @@ -43,11 +43,11 @@ export default ngDisabled: '!(credential_obj.summary_fields.user_capabilities.edit || canAdd)' }, organization: { - // interpolated with $rootScope - basePath: "{{$rootScope.current_user.is_superuser ? 'api/v1/organizations' : $rootScope.current_user.url + 'admin_of_organizations'}}", + basePath: 'organizations', ngShow: 'canShareCredential', label: i18n._('Organization'), type: 'lookup', + autopopulateLookup: false, list: 'OrganizationList', sourceModel: 'organization', sourceField: 'name', @@ -231,7 +231,8 @@ export default subCheckbox: { variable: 'ssh_password_ask', text: i18n._('Ask at runtime?'), - ngChange: 'ask(\'ssh_password\', \'undefined\')' + ngChange: 'ask(\'ssh_password\', \'undefined\')', + ngDisabled: false, }, hasShowInputButton: true, autocomplete: false, @@ -263,7 +264,7 @@ export default "ssh_key_unlock": { label: i18n._('Private Key Passphrase'), type: 'sensitive', - ngShow: "kind.value == 'ssh' || kind.value == 'scm'", + ngShow: "kind.value === 'ssh' || kind.value === 'scm' || kind.value === 'net'", ngDisabled: "keyEntered === false || ssh_key_unlock_ask || !(credential_obj.summary_fields.user_capabilities.edit || canAdd)", subCheckbox: { variable: 'ssh_key_unlock_ask', @@ -288,7 +289,8 @@ export default dataPlacement: 'right', dataContainer: "body", subForm: 'credentialSubForm', - ngDisabled: '!(credential_obj.summary_fields.user_capabilities.edit || canAdd)' + ngDisabled: '!(credential_obj.summary_fields.user_capabilities.edit || canAdd)', + ngChange: 
'becomeMethodChange()', }, "become_username": { labelBind: 'becomeUsernameLabel', @@ -308,7 +310,8 @@ export default subCheckbox: { variable: 'become_password_ask', text: i18n._('Ask at runtime?'), - ngChange: 'ask(\'become_password\', \'undefined\')' + ngChange: 'ask(\'become_password\', \'undefined\')', + ngDisabled: false, }, hasShowInputButton: true, autocomplete: false, @@ -393,7 +396,8 @@ export default subCheckbox: { variable: 'vault_password_ask', text: i18n._('Ask at runtime?'), - ngChange: 'ask(\'vault_password\', \'undefined\')' + ngChange: 'ask(\'vault_password\', \'undefined\')', + ngDisabled: false, }, hasShowInputButton: true, autocomplete: false, @@ -420,9 +424,12 @@ export default related: { permissions: { - disabled: 'disablePermissionAssignment', + disabled: '(organization === undefined ? true : false)', + // Do not transition the state if organization is undefined + ngClick: `(organization === undefined ? true : false)||$state.go('credentials.edit.permissions')`, awToolTip: '{{permissionsTooltip}}', dataTipWatch: 'permissionsTooltip', + awToolTipTabEnabledInEditMode: true, dataPlacement: 'top', basePath: 'api/v1/credentials/{{$stateParams.credential_id}}/access_list/', search: { @@ -454,15 +461,13 @@ export default label: i18n._('Role'), type: 'role', noSort: true, - class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4', - searchable: false + class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4' }, team_roles: { label: i18n._('Team Roles'), type: 'team_roles', noSort: true, - class: 'col-lg-5 col-md-5 col-sm-5 col-xs-4', - searchable: false + class: 'col-lg-5 col-md-5 col-sm-5 col-xs-4' } } } diff --git a/awx/ui/client/src/forms/Groups.js b/awx/ui/client/src/forms/Groups.js index 9f9828a10f..af6936f663 100644 --- a/awx/ui/client/src/forms/Groups.js +++ b/awx/ui/client/src/forms/Groups.js @@ -43,7 +43,7 @@ export default label: 'Variables', type: 'textarea', class: 'Form-textAreaLabel Form-formGroup--fullWidth', - rows: 12, + rows: 6, 'default': '---', dataTitle: 
'Group Variables', dataPlacement: 'right', @@ -69,6 +69,11 @@ export default ngModel: 'source' }, credential: { + // initializes a default value for this search param + // search params with default values set will not generate user-interactable search tags + search: { + kind: null + }, label: 'Cloud Credential', type: 'lookup', list: 'CredentialList', @@ -81,7 +86,8 @@ export default reqExpression: "cloudCredentialRequired", init: "false" }, - ngDisabled: '!(group_obj.summary_fields.user_capabilities.edit || canAdd)' + ngDisabled: '!(group_obj.summary_fields.user_capabilities.edit || canAdd)', + watchBasePath: "credentialBasePath" }, source_regions: { label: 'Regions', @@ -212,8 +218,8 @@ export default dataTitle: "Source Variables", dataPlacement: 'right', awPopOver: "

Override variables found in vmware.ini and used by the inventory update script. For a detailed description of these variables " + - "" + - "view vmware.ini in the Ansible github repo.

" + + "" + + "view vmware_inventory.ini in the Ansible github repo.

" + "

Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two.

" + "JSON:
\n" + "
{
 \"somevar\": \"somevalue\",
 \"password\": \"magic\"
}
\n" + @@ -354,6 +360,7 @@ export default GroupFormObject.related[itm] = angular.copy(NotificationsList); GroupFormObject.related[itm].generateList = true; GroupFormObject.related[itm].disabled = "source === undefined || source.value === ''"; + GroupFormObject.related[itm].ngClick = "$state.go('inventoryManage.editGroup.notifications')"; } } return GroupFormObject; diff --git a/awx/ui/client/src/forms/Inventories.js b/awx/ui/client/src/forms/Inventories.js index e7106b8277..bde0d2d712 100644 --- a/awx/ui/client/src/forms/Inventories.js +++ b/awx/ui/client/src/forms/Inventories.js @@ -11,8 +11,8 @@ */ export default -angular.module('InventoryFormDefinition', ['ScanJobsListDefinition']) - .factory('InventoryFormObject', ['i18n', function(i18n) { +angular.module('InventoryFormDefinition', []) + .factory('InventoryForm', ['i18n', function(i18n) { return { addTitle: i18n._('New Inventory'), @@ -78,7 +78,7 @@ angular.module('InventoryFormDefinition', ['ScanJobsListDefinition']) }, close: { ngClick: 'formCancel()', - ngHide: '(inventory_obj.summary_fields.user_capabilities.edit || canAdd)' + ngShow: '!(inventory_obj.summary_fields.user_capabilities.edit || canAdd)' }, save: { ngClick: 'formSave()', @@ -103,7 +103,7 @@ angular.module('InventoryFormDefinition', ['ScanJobsListDefinition']) add: { label: i18n._('Add'), ngClick: "$state.go('.add')", - awToolTip: 'Add a permission', + awToolTip: i18n._('Add a permission'), actionClass: 'btn List-buttonSubmit', buttonContent: '+ ADD', ngShow: '(inventory_obj.summary_fields.user_capabilities.edit || canAdd)' @@ -181,19 +181,4 @@ angular.module('InventoryFormDefinition', ['ScanJobsListDefinition']) }; } - };}]) - - .factory('InventoryForm', ['InventoryFormObject', 'ScanJobsList', - function(InventoryFormObject, ScanJobsList) { - return function() { - var itm; - for (itm in InventoryFormObject.related) { - if (InventoryFormObject.related[itm].include === "ScanJobsList") { - InventoryFormObject.related[itm] = ScanJobsList; - 
InventoryFormObject.related[itm].generateList = true; // tell form generator to call list generator and inject a list - } - } - return InventoryFormObject; - }; - } - ]); + };}]); diff --git a/awx/ui/client/src/forms/JobTemplates.js b/awx/ui/client/src/forms/JobTemplates.js index 9fe64f0469..28462577ae 100644 --- a/awx/ui/client/src/forms/JobTemplates.js +++ b/awx/ui/client/src/forms/JobTemplates.js @@ -20,7 +20,7 @@ export default addTitle: i18n._('New Job Template'), editTitle: '{{ name }}', name: 'job_template', - breadcrumbName: 'JOB TEMPLATE', + breadcrumbName: i18n._('JOB TEMPLATE'), basePath: 'job_templates', // the top-most node of generated state tree stateTree: 'templates', @@ -76,11 +76,12 @@ export default list: 'InventoryList', sourceModel: 'inventory', sourceField: 'name', + autopopulateLookup: false, awRequiredWhen: { reqExpression: '!ask_inventory_on_launch', alwaysShowAsterisk: true }, - requiredErrorMsg: "Please select an Inventory or check the Prompt on launch option.", + requiredErrorMsg: i18n._("Please select an Inventory or check the Prompt on launch option."), column: 1, awPopOver: "
<p>
" + i18n._("Select the inventory containing the hosts you want this job to manage.") + "
</p>
", dataTitle: i18n._('Inventory'), @@ -96,7 +97,7 @@ export default project: { label: i18n._('Project'), labelAction: { - label: 'RESET', + label: i18n._('RESET'), ngClick: 'resetProjectToDefault()', 'class': "{{!(job_type.value === 'scan' && project_name !== 'Default') ? 'hidden' : ''}}", }, @@ -138,6 +139,7 @@ export default type: 'lookup', list: 'CredentialList', basePath: 'credentials', + autopopulateLookup: false, search: { kind: 'ssh' }, @@ -147,7 +149,7 @@ export default reqExpression: '!ask_credential_on_launch', alwaysShowAsterisk: true }, - requiredErrorMsg: "Please select a Machine Credential or check the Prompt on launch option.", + requiredErrorMsg: i18n._("Please select a Machine Credential or check the Prompt on launch option."), column: 1, awPopOver: "
<p>
" + i18n._("Select the credential you want the job to use when accessing the remote hosts. Choose the credential containing " + " the username and SSH key or password that Ansible will need to log into the remote hosts.") + "
</p>
", @@ -307,6 +309,17 @@ export default dataContainer: "body", labelClass: 'stack-inline', ngDisabled: '!(job_template_obj.summary_fields.user_capabilities.edit || canAddJobTemplate)' + }, { + name: 'allow_simultaneous', + label: i18n._('Enable Concurrent Jobs'), + type: 'checkbox', + column: 2, + awPopOver: "
<p>
" + i18n._("If enabled, simultaneous runs of this job template will be allowed.") + "
</p>
", + dataPlacement: 'right', + dataTitle: i18n._('Enable Concurrent Jobs'), + dataContainer: "body", + labelClass: 'stack-inline', + ngDisabled: '!(job_template_obj.summary_fields.user_capabilities.edit || canAddJobTemplate)' }] }, callback_url: { @@ -409,9 +422,9 @@ export default add: { ngClick: "$state.go('.add')", label: 'Add', - awToolTip: 'Add a permission', + awToolTip: i18n._('Add a permission'), actionClass: 'btn List-buttonSubmit', - buttonContent: '+ ADD', + buttonContent: '+ ' + i18n._('ADD'), ngShow: '(job_template_obj.summary_fields.user_capabilities.edit || canAddJobTemplate)' } }, diff --git a/awx/ui/client/src/forms/Organizations.js b/awx/ui/client/src/forms/Organizations.js index 011ad90907..d8f12ea371 100644 --- a/awx/ui/client/src/forms/Organizations.js +++ b/awx/ui/client/src/forms/Organizations.js @@ -68,7 +68,7 @@ export default searchType: 'select', actions: { add: { - ngClick: "addPermission", + ngClick: "$state.go('.add')", label: i18n._('Add'), awToolTip: i18n._('Add a permission'), actionClass: 'btn List-buttonSubmit', @@ -88,15 +88,13 @@ export default label: i18n._('Role'), type: 'role', noSort: true, - class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4', - searchable: false + class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4' }, team_roles: { label: i18n._('Team Roles'), type: 'team_roles', noSort: true, - class: 'col-lg-5 col-md-5 col-sm-5 col-xs-4', - searchable: false + class: 'col-lg-5 col-md-5 col-sm-5 col-xs-4' } } }, diff --git a/awx/ui/client/src/forms/Projects.js b/awx/ui/client/src/forms/Projects.js index a1d65a82d5..54c7209dea 100644 --- a/awx/ui/client/src/forms/Projects.js +++ b/awx/ui/client/src/forms/Projects.js @@ -79,7 +79,7 @@ angular.module('ProjectFormDefinition', ['SchedulesListDefinition']) ngShow: "scm_type.value == 'manual' " , awPopOver: '
<p>
' + i18n._('Base path used for locating playbooks. Directories found inside this path will be listed in the playbook directory drop-down. ' + 'Together the base path and selected playbook directory provide the full path used to locate playbooks.') + '
</p>
' + - '
<p>
' + i18n.sprintf(i18n._('Use %s in your environment settings file to determine the base path value.'), 'PROJECTS_ROOT') + '
</p>
', + '
<p>
' + i18n.sprintf(i18n._('Change %s under "Configure Tower" to change this location.'), 'PROJECTS_ROOT') + '
</p>
', dataTitle: i18n._('Project Base Path'), dataContainer: 'body', dataPlacement: 'right', @@ -95,9 +95,8 @@ angular.module('ProjectFormDefinition', ['SchedulesListDefinition']) init: false }, ngShow: "scm_type.value == 'manual' && !showMissingPlaybooksAlert", - awPopOver: '
<p>
' + i18n._('Select from the list of directories found in the base path.' + - 'Together the base path and the playbook directory provide the full path used to locate playbooks.') + '
</p>
' + - '
<p>
' + i18n.sprintf(i18n._('Use %s in your environment settings file to determine the base path value.'), 'PROJECTS_ROOT') + '
</p>
', + awPopOver: '
<p>
' + i18n._('Select from the list of directories found in the Project Base Path. ' + + 'Together the base path and the playbook directory provide the full path used to locate playbooks.') + '
</p>
', dataTitle: i18n._('Project Path'), dataContainer: 'body', dataPlacement: 'right', @@ -186,7 +185,7 @@ angular.module('ProjectFormDefinition', ['SchedulesListDefinition']) type: 'number', integer: true, min: 0, - ngShow: "scm_update_on_launch && projectSelected && scm_type.value !== 'manual'", + ngShow: "scm_update_on_launch && scm_type.value !== 'manual'", spinner: true, "default": '0', awPopOver: '
<p>
' + i18n._('Time in seconds to consider a project to be current. During job runs and callbacks the task system will ' + @@ -242,18 +241,18 @@ angular.module('ProjectFormDefinition', ['SchedulesListDefinition']) fields: { username: { - label: 'User', + label: i18n._('User'), uiSref: 'users({user_id: field.id})', class: 'col-lg-3 col-md-3 col-sm-3 col-xs-4' }, role: { - label: 'Role', + label: i18n._('Role'), type: 'role', noSort: true, class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4', }, team_roles: { - label: 'Team Roles', + label: i18n._('Team Roles'), type: 'team_roles', noSort: true, class: 'col-lg-5 col-md-5 col-sm-5 col-xs-4', diff --git a/awx/ui/client/src/forms/Teams.js b/awx/ui/client/src/forms/Teams.js index a7f03a490b..3154dd42ed 100644 --- a/awx/ui/client/src/forms/Teams.js +++ b/awx/ui/client/src/forms/Teams.js @@ -38,7 +38,7 @@ export default organization: { label: i18n._('Organization'), type: 'lookup', - list: 'OrganizationsList', + list: 'OrganizationList', sourceModel: 'organization', basePath: 'organizations', sourceField: 'name', @@ -64,7 +64,7 @@ export default }, related: { - permissions: { + users: { dataPlacement: 'top', awToolTip: i18n._('Please save before adding users'), basePath: 'api/v1/teams/{{$stateParams.team_id}}/access_list/', @@ -73,15 +73,14 @@ export default }, type: 'collection', title: i18n._('Users'), - iterator: 'permission', + iterator: 'user', index: false, open: false, actions: { add: { - // @issue https://github.com/ansible/ansible-tower/issues/3487 - //ngClick: "addPermissionWithoutTeamTab", - label: 'Add', - awToolTip: i18n._('Add user to team'), + ngClick: "$state.go('.add')", + label: i18n._('Add'), + awToolTip: i18n._('Add User'), actionClass: 'btn List-buttonSubmit', buttonContent: '+ ' + i18n._('ADD'), ngShow: '(team_obj.summary_fields.user_capabilities.edit || canAdd)' @@ -99,50 +98,48 @@ export default label: i18n._('Role'), type: 'role', noSort: true, - class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4', - searchable: 
false + class: 'col-lg-4 col-md-4 col-sm-4 col-xs-4' } } }, - roles: { - hideSearchAndActions: true, - dataPlacement: 'top', - awToolTip: i18n._('Please save before assigning permissions'), + permissions: { basePath: 'api/v1/teams/{{$stateParams.team_id}}/roles/', search: { page_size: '10', // @todo ask about name field / serializer on this endpoint order_by: 'id' }, + awToolTip: i18n._('Please save before assigning permissions'), + dataPlacement: 'top', + hideSearchAndActions: true, type: 'collection', - title: i18n._('Granted Permissions'), - iterator: 'role', + title: i18n._('Permissions'), + iterator: 'permission', open: false, index: false, - actions: {}, emptyListText: i18n._('No permissions have been granted'), fields: { name: { label: i18n._('Name'), - ngBind: 'role.summary_fields.resource_name', - linkTo: '{{convertApiUrl(role.related[role.summary_fields.resource_type])}}', + ngBind: 'permission.summary_fields.resource_name', + linkTo: '{{convertApiUrl(permission.related[permission.summary_fields.resource_type])}}', noSort: true }, type: { label: i18n._('Type'), - ngBind: 'role.summary_fields.resource_type_display_name', + ngBind: 'permission.summary_fields.resource_type_display_name', noSort: true }, role: { label: i18n._('Role'), - ngBind: 'role.name', + ngBind: 'permission.name', noSort: true } }, fieldActions: { "delete": { label: i18n._('Remove'), - ngClick: 'deletePermissionFromTeam(team_id, team_obj.name, role.name, role.summary_fields.resource_name, role.related.teams)', + ngClick: 'deletePermissionFromTeam(team_id, team_obj.name, permission.name, permission.summary_fields.resource_name, permission.related.teams)', 'class': "List-actionButton--delete", iconClass: 'fa fa-times', awToolTip: i18n._('Dissasociate permission from team'), @@ -150,7 +147,16 @@ export default ngShow: 'permission.summary_fields.user_capabilities.unattach' } }, - //hideOnSuperuser: true // defunct with RBAC + actions: { + add: { + ngClick: "$state.go('.add')", + label: 
'Add', + awToolTip: i18n._('Grant Permission'), + actionClass: 'btn List-buttonSubmit', + buttonContent: '+ ' + i18n._('ADD PERMISSIONS'), + ngShow: '(team_obj.summary_fields.user_capabilities.edit || canAdd)' + } + } } }, };}]); //InventoryForm diff --git a/awx/ui/client/src/forms/Users.js b/awx/ui/client/src/forms/Users.js index 5e95b6166f..0e6c1a6aa2 100644 --- a/awx/ui/client/src/forms/Users.js +++ b/awx/ui/client/src/forms/Users.js @@ -121,6 +121,7 @@ export default organizations: { awToolTip: i18n._('Please save before assigning to organizations'), basePath: 'api/v1/users/{{$stateParams.user_id}}/organizations', + emptyListText: i18n._('Please add user to an Organization.'), search: { page_size: '10' }, @@ -136,10 +137,10 @@ export default fields: { name: { key: true, - label: 'Name' + label: i18n._('Name') }, description: { - label: 'Description' + label: i18n._('Description') } }, //hideOnSuperuser: true // RBAC defunct @@ -157,14 +158,14 @@ export default open: false, index: false, actions: {}, - emptyListText: 'This user is not a member of any teams', + emptyListText: i18n._('This user is not a member of any teams'), fields: { name: { key: true, - label: 'Name' + label: i18n._('Name') }, description: { - label: 'Description' + label: i18n._('Description') } }, //hideOnSuperuser: true // RBAC defunct @@ -173,14 +174,13 @@ export default basePath: 'api/v1/users/{{$stateParams.user_id}}/roles/', search: { page_size: '10', - // @todo ask about name field / serializer on this endpoint order_by: 'id' }, awToolTip: i18n._('Please save before assigning to organizations'), dataPlacement: 'top', hideSearchAndActions: true, type: 'collection', - title: i18n._('Granted permissions'), + title: i18n._('Permissions'), iterator: 'permission', open: false, index: false, @@ -203,12 +203,16 @@ export default noSort: true }, }, - // @issue https://github.com/ansible/ansible-tower/issues/3487 - // actions: { - // add: { - - // } - // } + actions: { + add: { + ngClick: 
"$state.go('.add')", + label: 'Add', + awToolTip: i18n._('Grant Permission'), + actionClass: 'btn List-buttonSubmit', + buttonContent: '+ ' + i18n._('ADD PERMISSIONS'), + ngShow: '(!is_superuser && (user_obj.summary_fields.user_capabilities.edit || canAdd))' + } + }, fieldActions: { "delete": { label: i18n._('Remove'), diff --git a/awx/ui/client/src/forms/WorkflowMaker.js b/awx/ui/client/src/forms/WorkflowMaker.js index 9b1c369e32..9ed3c69b65 100644 --- a/awx/ui/client/src/forms/WorkflowMaker.js +++ b/awx/ui/client/src/forms/WorkflowMaker.js @@ -33,27 +33,27 @@ export default edgeType: { label: i18n._('Type'), type: 'radio_group', - ngShow: 'selectedTemplate && showTypeOptions', + ngShow: 'selectedTemplate && edgeFlags.showTypeOptions', ngDisabled: '!canAddWorkflowJobTemplate', options: [ { label: i18n._('On Success'), value: 'success', - ngShow: '!edgeTypeRestriction || edgeTypeRestriction === "successFailure"' + ngShow: '!edgeFlags.typeRestriction || edgeFlags.typeRestriction === "successFailure"' }, { label: i18n._('On Failure'), value: 'failure', - ngShow: '!edgeTypeRestriction || edgeTypeRestriction === "successFailure"' + ngShow: '!edgeFlags.typeRestriction || edgeFlags.typeRestriction === "successFailure"' }, { label: i18n._('Always'), value: 'always', - ngShow: '!edgeTypeRestriction || edgeTypeRestriction === "always"' + ngShow: '!edgeFlags.typeRestriction || edgeFlags.typeRestriction === "always"' } ], awRequiredWhen: { - reqExpression: 'showTypeOptions' + reqExpression: 'edgeFlags.showTypeOptions' } }, credential: { @@ -170,7 +170,7 @@ export default ngClick: 'cancelNodeForm()', ngShow: '!canAddWorkflowJobTemplate' }, - save: { + select: { ngClick: 'saveNodeForm()', ngDisabled: "workflow_maker_form.$invalid || !selectedTemplate", ngShow: 'canAddWorkflowJobTemplate' diff --git a/awx/ui/client/src/forms/Workflows.js b/awx/ui/client/src/forms/Workflows.js index d281ae0e0b..a5a76d0bbd 100644 --- a/awx/ui/client/src/forms/Workflows.js +++ 
b/awx/ui/client/src/forms/Workflows.js @@ -16,9 +16,10 @@ export default .factory('WorkflowFormObject', ['i18n', function(i18n) { return { - addTitle: i18n._('New Workflow'), + addTitle: i18n._('New Workflow Job Template'), editTitle: '{{ name }}', name: 'workflow_job_template', + breadcrumbName: i18n._('WORKFLOW'), base: 'workflow', basePath: 'workflow_job_templates', // the top-most node of generated state tree @@ -120,10 +121,10 @@ export default actions: { add: { ngClick: "$state.go('.add')", - label: 'Add', - awToolTip: 'Add a permission', + label: i18n._('Add'), + awToolTip: i18n._('Add a permission'), actionClass: 'btn List-buttonSubmit', - buttonContent: '+ ADD', + buttonContent: '+ '+ i18n._('ADD'), ngShow: '(workflow_job_template_obj.summary_fields.user_capabilities.edit || canAddWorkflowJobTemplate)' } }, diff --git a/awx/ui/client/src/helpers/ActivityStream.js b/awx/ui/client/src/helpers/ActivityStream.js index 215eb00b7e..37da4f2857 100644 --- a/awx/ui/client/src/helpers/ActivityStream.js +++ b/awx/ui/client/src/helpers/ActivityStream.js @@ -25,9 +25,6 @@ export default case 'inventory': rtnTitle = 'INVENTORIES'; break; - case 'job_template': - rtnTitle = 'JOB TEMPLATES'; - break; case 'credential': rtnTitle = 'CREDENTIALS'; break; @@ -52,6 +49,9 @@ export default case 'host': rtnTitle = 'HOSTS'; break; + case 'template': + rtnTitle = 'TEMPLATES'; + break; } return rtnTitle; diff --git a/awx/ui/client/src/helpers/ApiModel.js b/awx/ui/client/src/helpers/ApiModel.js index ff1923ce0a..06544675c5 100644 --- a/awx/ui/client/src/helpers/ApiModel.js +++ b/awx/ui/client/src/helpers/ApiModel.js @@ -48,6 +48,9 @@ export default case 'inventory_script': basePathKey = 'inventory_scripts'; break; + case 'workflow_job_template': + basePathKey = 'workflow_job_templates'; + break; } return basePathKey; diff --git a/awx/ui/client/src/helpers/Credentials.js b/awx/ui/client/src/helpers/Credentials.js index 375a556be0..007756d17c 100644 --- 
a/awx/ui/client/src/helpers/Credentials.js +++ b/awx/ui/client/src/helpers/Credentials.js @@ -74,6 +74,7 @@ angular.module('CredentialsHelper', ['Utilities']) scope.projectPopOver = "
<p>
" + i18n._("The project value") + "
</p>
"; scope.hostPopOver = "
<p>
" + i18n._("The host value") + "
</p>
"; scope.ssh_key_data_api_error = ''; + if (!Empty(scope.kind)) { // Apply kind specific settings switch (scope.kind.value) { @@ -146,18 +147,20 @@ angular.module('CredentialsHelper', ['Utilities']) scope.password_required = true; scope.passwordLabel = i18n._('Password'); scope.host_required = true; - scope.hostLabel = i18n._("Satellite 6 Host"); - scope.hostPopOver = i18n.sprintf(i18n._("Enter the hostname or IP address name which %s" + - "corresponds to your Red Hat Satellite 6 server."), "
<br />"); + scope.hostLabel = i18n._("Satellite 6 URL"); + scope.hostPopOver = i18n.sprintf(i18n._("Enter the URL which corresponds to your %s" + + "Red Hat Satellite 6 server. %s" + + "For example, %s"), "
<br />", "<br />
", "https://satellite.example.org"); break; case 'cloudforms': scope.username_required = true; scope.password_required = true; scope.passwordLabel = i18n._('Password'); scope.host_required = true; - scope.hostLabel = i18n._("CloudForms Host"); - scope.hostPopOver = i18n.sprintf(i18n._("Enter the hostname or IP address for the virtual %s" + - " machine which is hosting the CloudForm appliance."), "
<br />"); + scope.hostLabel = i18n._("CloudForms URL"); + scope.hostPopOver = i18n.sprintf(i18n._("Enter the URL for the virtual machine which %s" + + "corresponds to your CloudForm instance. %s" + + "For example, %s"), "
<br />", "<br />
", "https://cloudforms.example.org"); break; case 'net': scope.username_required = true; @@ -202,6 +205,111 @@ angular.module('CredentialsHelper', ['Utilities']) } ]) +.factory('BecomeMethodChange', ['Empty', 'i18n', + function (Empty, i18n) { + return function (params) { + console.log('become method has changed'); + var scope = params.scope; + + if (!Empty(scope.kind)) { + // Apply kind specific settings + switch (scope.kind.value) { + case 'aws': + scope.aws_required = true; + break; + case 'rax': + scope.rackspace_required = true; + scope.username_required = true; + break; + case 'ssh': + scope.usernameLabel = i18n._('Username'); //formally 'SSH Username' + scope.becomeUsernameLabel = i18n._('Privilege Escalation Username'); + scope.becomePasswordLabel = i18n._('Privilege Escalation Password'); + break; + case 'scm': + scope.sshKeyDataLabel = i18n._('SCM Private Key'); + scope.passwordLabel = i18n._('Password'); + break; + case 'gce': + scope.usernameLabel = i18n._('Service Account Email Address'); + scope.sshKeyDataLabel = i18n._('RSA Private Key'); + scope.email_required = true; + scope.key_required = true; + scope.project_required = true; + scope.key_description = i18n._('Paste the contents of the PEM file associated with the service account email.'); + scope.projectLabel = i18n._("Project"); + scope.project_required = false; + scope.projectPopOver = "
<p>
" + i18n._("The Project ID is the " + + "GCE assigned identification. It is constructed as " + + "two words followed by a three digit number. Such " + + "as: ") + "
<br />
adjective-noun-000
</p>
"; + break; + case 'azure': + scope.sshKeyDataLabel = i18n._('Management Certificate'); + scope.subscription_required = true; + scope.key_required = true; + scope.key_description = i18n._("Paste the contents of the PEM file that corresponds to the certificate you uploaded in the Microsoft Azure console."); + break; + case 'azure_rm': + scope.usernameLabel = i18n._("Username"); + scope.subscription_required = true; + scope.passwordLabel = i18n._('Password'); + scope.azure_rm_required = true; + break; + case 'vmware': + scope.username_required = true; + scope.host_required = true; + scope.password_required = true; + scope.hostLabel = "vCenter Host"; + scope.passwordLabel = i18n._('Password'); + scope.hostPopOver = i18n._("Enter the hostname or IP address which corresponds to your VMware vCenter."); + break; + case 'openstack': + scope.hostLabel = i18n._("Host (Authentication URL)"); + scope.projectLabel = i18n._("Project (Tenant Name)"); + scope.domainLabel = i18n._("Domain Name"); + scope.password_required = true; + scope.project_required = true; + scope.host_required = true; + scope.username_required = true; + scope.projectPopOver = "
<p>
" + i18n._("This is the tenant name. " + + " This value is usually the same " + + " as the username.") + "
</p>
"; + scope.hostPopOver = "
<p>
" + i18n._("The host to authenticate with.") + + "<br />
" + i18n.sprintf(i18n._("For example, %s"), "https://openstack.business.com/v2.0/"); + break; + case 'satellite6': + scope.username_required = true; + scope.password_required = true; + scope.passwordLabel = i18n._('Password'); + scope.host_required = true; + scope.hostLabel = i18n._("Satellite 6 URL"); + scope.hostPopOver = i18n.sprintf(i18n._("Enter the URL which corresponds to your %s" + + "Red Hat Satellite 6 server. %s" + + "For example, %s"), "
<br />", "<br />
", "https://satellite.example.org"); + break; + case 'cloudforms': + scope.username_required = true; + scope.password_required = true; + scope.passwordLabel = i18n._('Password'); + scope.host_required = true; + scope.hostLabel = i18n._("CloudForms URL"); + scope.hostPopOver = i18n.sprintf(i18n._("Enter the URL for the virtual machine which %s" + + "corresponds to your CloudForm instance. %s" + + "For example, %s"), "
<br />", "<br />
", "https://cloudforms.example.org"); + break; + case 'net': + scope.username_required = true; + scope.password_required = false; + scope.passwordLabel = i18n._('Password'); + scope.sshKeyDataLabel = i18n._('SSH Key'); + break; + } + } + }; + } +]) + .factory('OwnerChange', [ function () { @@ -223,8 +331,8 @@ angular.module('CredentialsHelper', ['Utilities']) } ]) -.factory('FormSave', ['$rootScope', '$location', 'Alert', 'Rest', 'ProcessErrors', 'Empty', 'GetBasePath', 'CredentialForm', 'ReturnToCaller', 'Wait', '$state', - function ($rootScope, $location, Alert, Rest, ProcessErrors, Empty, GetBasePath, CredentialForm, ReturnToCaller, Wait, $state) { +.factory('FormSave', ['$rootScope', '$location', 'Alert', 'Rest', 'ProcessErrors', 'Empty', 'GetBasePath', 'CredentialForm', 'ReturnToCaller', 'Wait', '$state', 'i18n', + function ($rootScope, $location, Alert, Rest, ProcessErrors, Empty, GetBasePath, CredentialForm, ReturnToCaller, Wait, $state, i18n) { return function (params) { var scope = params.scope, mode = params.mode, @@ -289,14 +397,12 @@ angular.module('CredentialsHelper', ['Utilities']) Wait('stop'); var base = $location.path().replace(/^\//, '').split('/')[0]; - if (base === 'credentials') { - ReturnToCaller(); + if (base === 'credentials') { + $state.go('credentials.edit', {credential_id: data.id}, {reload: true}); } else { ReturnToCaller(1); } - $state.go('credentials.edit', {credential_id: data.id}, {reload: true}); - }) .error(function (data, status) { Wait('stop'); @@ -305,12 +411,12 @@ angular.module('CredentialsHelper', ['Utilities']) // the error there. The ssh_key_unlock field is not shown when the kind of credential is gce/azure and as a result the // error is never shown. In the future, the API will hopefully either behave or respond differently. 
if(status && status === 400 && data && data.ssh_key_unlock && (scope.kind.value === 'gce' || scope.kind.value === 'azure')) { - scope.ssh_key_data_api_error = "Encrypted credentials are not supported."; + scope.ssh_key_data_api_error = i18n._("Encrypted credentials are not supported."); } else { ProcessErrors(scope, data, status, form, { - hdr: 'Error!', - msg: 'Failed to create new Credential. POST status: ' + status + hdr: i18n._('Error!'), + msg: i18n._('Failed to create new Credential. POST status: ') + status }); } }); @@ -325,8 +431,8 @@ angular.module('CredentialsHelper', ['Utilities']) .error(function (data, status) { Wait('stop'); ProcessErrors(scope, data, status, form, { - hdr: 'Error!', - msg: 'Failed to update Credential. PUT status: ' + status + hdr: i18n._('Error!'), + msg: i18n._('Failed to update Credential. PUT status: ') + status }); }); } diff --git a/awx/ui/client/src/helpers/Jobs.js b/awx/ui/client/src/helpers/Jobs.js index 19a206a22c..d7649d1c33 100644 --- a/awx/ui/client/src/helpers/Jobs.js +++ b/awx/ui/client/src/helpers/Jobs.js @@ -29,8 +29,8 @@ export default else if (type === 'ad_hoc_command') { RelaunchAdhoc({ scope: scope, id: id, name: name }); } - else if (type === 'job' || type === 'system_job') { - RelaunchPlaybook({ scope: scope, id: id, name: name }); + else if (type === 'job' || type === 'system_job' || type === 'workflow_job') { + RelaunchPlaybook({ scope: scope, id: id, name: name, job_type: type }); } else if (type === 'project_update') { RelaunchSCM({ scope: scope, id: id }); @@ -233,7 +233,7 @@ export default hdr: hdr, body: (action_label === 'cancel' || job.status === 'new') ? cancelBody : deleteBody, action: action, - actionText: (action_label === 'cancel' || job.status === 'new') ? "YES" : "DELETE" + actionText: (action_label === 'cancel' || job.status === 'new') ? 
"OK" : "DELETE" }); }); @@ -289,8 +289,9 @@ export default .factory('RelaunchPlaybook', ['InitiatePlaybookRun', function(InitiatePlaybookRun) { return function(params) { var scope = params.scope, - id = params.id; - InitiatePlaybookRun({ scope: scope, id: id, relaunch: true }); + id = params.id, + job_type = params.job_type; + InitiatePlaybookRun({ scope: scope, id: id, relaunch: true, job_type: job_type }); }; }]) diff --git a/awx/ui/client/src/helpers/LoadConfig.js b/awx/ui/client/src/helpers/LoadConfig.js index a472c600a9..93d381079b 100644 --- a/awx/ui/client/src/helpers/LoadConfig.js +++ b/awx/ui/client/src/helpers/LoadConfig.js @@ -99,7 +99,8 @@ angular.module('LoadConfigHelper', ['Utilities']) configInit(); }).error(function(error) { - console.log(error); + $log.debug(error); + configInit(); }); }; diff --git a/awx/ui/client/src/helpers/Schedules.js b/awx/ui/client/src/helpers/Schedules.js index c166e5c81c..90c1589e5b 100644 --- a/awx/ui/client/src/helpers/Schedules.js +++ b/awx/ui/client/src/helpers/Schedules.js @@ -146,6 +146,11 @@ export default Rest.get() .success(function(data) { schedule = data; + try { + schedule.extra_data = JSON.parse(schedule.extra_data); + } catch(e) { + // do nothing + } scope.extraVars = data.extra_data === '' ? 
'---' : '---\n' + jsyaml.safeDump(data.extra_data); if(schedule.extra_data.hasOwnProperty('granularity')){ @@ -176,7 +181,10 @@ export default callback= params.callback, base = params.base || $location.path().replace(/^\//, '').split('/')[0], url = params.url || null, - scheduler; + scheduler, + job_type; + + job_type = scope.parentObject.job_type; if (!Empty($stateParams.id) && base !== 'system_job_templates' && base !== 'inventories' && !url) { url = GetBasePath(base) + $stateParams.id + '/schedules/'; } @@ -201,7 +209,7 @@ export default } else if (base === 'system_job_templates') { url = GetBasePath(base) + $stateParams.id + '/schedules/'; - if($stateParams.id === 4){ + if(job_type === "cleanup_facts"){ scope.isFactCleanup = true; scope.keep_unit_choices = [{ "label" : "Days", diff --git a/awx/ui/client/src/i18n.js b/awx/ui/client/src/i18n.js index 9471e04616..b37d05a8c0 100644 --- a/awx/ui/client/src/i18n.js +++ b/awx/ui/client/src/i18n.js @@ -24,10 +24,7 @@ export default var langUrl = langInfo.replace('-', '_'); //gettextCatalog.debug = true; gettextCatalog.setCurrentLanguage(langInfo); - // TODO: the line below is commented out temporarily until - // the .po files are received from the i18n team, in order to avoid - // 404 file not found console errors in dev - // gettextCatalog.loadRemote('/static/languages/' + langUrl + '.json'); + gettextCatalog.loadRemote('/static/languages/' + langUrl + '.json'); }; }]) .factory('i18n', ['gettextCatalog', diff --git a/awx/ui/client/src/inventories/add/inventory-add.controller.js b/awx/ui/client/src/inventories/add/inventory-add.controller.js index 7f08e716a6..a20fb44891 100644 --- a/awx/ui/client/src/inventories/add/inventory-add.controller.js +++ b/awx/ui/client/src/inventories/add/inventory-add.controller.js @@ -11,10 +11,16 @@ */ function InventoriesAdd($scope, $rootScope, $compile, $location, $log, - $stateParams, GenerateForm, InventoryForm, Rest, Alert, ProcessErrors, + $stateParams, GenerateForm, InventoryForm, 
rbacUiControlService, Rest, Alert, ProcessErrors, ClearScope, GetBasePath, ParseTypeChange, Wait, ToJSON, $state) { + $scope.canAdd = false; + rbacUiControlService.canAdd(GetBasePath('inventory')) + .then(function(canAdd) { + $scope.canAdd = canAdd; + }); + Rest.setUrl(GetBasePath('inventory')); Rest.options() .success(function(data) { @@ -28,7 +34,7 @@ function InventoriesAdd($scope, $rootScope, $compile, $location, $log, // Inject dynamic view var defaultUrl = GetBasePath('inventory'), - form = InventoryForm(); + form = InventoryForm; init(); @@ -91,7 +97,7 @@ function InventoriesAdd($scope, $rootScope, $compile, $location, $log, } export default ['$scope', '$rootScope', '$compile', '$location', - '$log', '$stateParams', 'GenerateForm', 'InventoryForm', 'Rest', 'Alert', + '$log', '$stateParams', 'GenerateForm', 'InventoryForm', 'rbacUiControlService', 'Rest', 'Alert', 'ProcessErrors', 'ClearScope', 'GetBasePath', 'ParseTypeChange', 'Wait', 'ToJSON', '$state', InventoriesAdd ]; diff --git a/awx/ui/client/src/inventories/edit/inventory-edit.controller.js b/awx/ui/client/src/inventories/edit/inventory-edit.controller.js index ba6de0f183..d07a865766 100644 --- a/awx/ui/client/src/inventories/edit/inventory-edit.controller.js +++ b/awx/ui/client/src/inventories/edit/inventory-edit.controller.js @@ -14,11 +14,11 @@ function InventoriesEdit($scope, $rootScope, $compile, $location, $log, $stateParams, InventoryForm, Rest, Alert, ProcessErrors, ClearScope, GetBasePath, ParseTypeChange, Wait, ToJSON, ParseVariableString, Prompt, InitiatePlaybookRun, - TemplatesService, $state, $filter) { + TemplatesService, $state) { // Inject dynamic view var defaultUrl = GetBasePath('inventory'), - form = InventoryForm(), + form = InventoryForm, inventory_id = $stateParams.inventory_id, master = {}, fld, json_data, data; @@ -32,7 +32,7 @@ function InventoriesEdit($scope, $rootScope, $compile, $location, form.formFieldSize = null; $scope.inventory_id = inventory_id; - 
$scope.$watch('invnentory_obj.summary_fields.user_capabilities.edit', function(val) { + $scope.$watch('inventory_obj.summary_fields.user_capabilities.edit', function(val) { if (val === false) { $scope.canAdd = false; } @@ -125,54 +125,11 @@ function InventoriesEdit($scope, $rootScope, $compile, $location, $state.go('inventories'); }; - $scope.addScanJob = function() { - $location.path($location.path() + '/job_templates/add'); - }; - - $scope.launchScanJob = function() { - InitiatePlaybookRun({ scope: $scope, id: this.scan_job_template.id }); - }; - - $scope.scheduleScanJob = function() { - $location.path('/job_templates/' + this.scan_job_template.id + '/schedules'); - }; - - $scope.editScanJob = function() { - $location.path($location.path() + '/job_templates/' + this.scan_job_template.id); - }; - - $scope.deleteScanJob = function () { - var id = this.scan_job_template.id , - action = function () { - $('#prompt-modal').modal('hide'); - Wait('start'); - TemplatesService.deleteJobTemplate(id) - .success(function () { - $('#prompt-modal').modal('hide'); - // @issue: OLD SEARCH - // $scope.search(form.related.scan_job_templates.iterator); - }) - .error(function (data) { - Wait('stop'); - ProcessErrors($scope, data, status, null, { hdr: 'Error!', - msg: 'DELETE returned status: ' + status }); - }); - }; - - Prompt({ - hdr: 'Delete', - body: '
<div class="Prompt-bodyQuery">
Are you sure you want to delete the job template below?
</div><div class="Prompt-bodyTarget">
' + $filter('sanitize')(this.scan_job_template.name) + '
</div>
', - action: action, - actionText: 'DELETE' - }); - - }; - } export default ['$scope', '$rootScope', '$compile', '$location', '$log', '$stateParams', 'InventoryForm', 'Rest', 'Alert', 'ProcessErrors', 'ClearScope', 'GetBasePath', 'ParseTypeChange', 'Wait', 'ToJSON', 'ParseVariableString', 'Prompt', 'InitiatePlaybookRun', - 'TemplatesService', '$state', '$filter', InventoriesEdit, + 'TemplatesService', '$state', InventoriesEdit, ]; diff --git a/awx/ui/client/src/inventories/main.js b/awx/ui/client/src/inventories/main.js index 191bfbb17c..ac031283cd 100644 --- a/awx/ui/client/src/inventories/main.js +++ b/awx/ui/client/src/inventories/main.js @@ -25,7 +25,7 @@ angular.module('inventory', [ // This means inventoryManage states will not be registered correctly on page refresh, unless they're registered at the same time as the inventories state tree let stateTree, inventories, addGroup, editGroup, addHost, editHost, - listSchedules, addSchedule, editSchedule, + listSchedules, addSchedule, editSchedule, adhocCredentialLookup, stateDefinitions = stateDefinitionsProvider.$get(), stateExtender = $stateExtenderProvider.$get(); @@ -66,7 +66,19 @@ angular.module('inventory', [ ], ParentObject: ['groupData', function(groupData) { return groupData; - }] + }], + UnifiedJobsOptions: ['Rest', 'GetBasePath', '$stateParams', '$q', + function(Rest, GetBasePath, $stateParams, $q) { + Rest.setUrl(GetBasePath('unified_jobs')); + var val = $q.defer(); + Rest.options() + .then(function(data) { + val.resolve(data.data); + }, function(data) { + val.reject(data); + }); + return val.promise; + }] }, views: { // clear form template when views render in this substate @@ -83,7 +95,7 @@ angular.module('inventory', [ mode: 'edit' }); html = generateList.wrapPanel(html); - return html; + return generateList.insertFormView() + html; }, controller: 'schedulerListController' } @@ -195,6 +207,60 @@ angular.module('inventory', [ }, }); + adhocCredentialLookup = { + searchPrefix: 'credential', + name: 
'inventoryManage.adhoc.credential', + url: '/credential', + data: { + formChildState: true + }, + params: { + credential_search: { + value: { + page_size: '5' + }, + squash: true, + dynamic: true + } + }, + ncyBreadcrumb: { + skip: true + }, + views: { + 'related': { + templateProvider: function(ListDefinition, generateList) { + let list_html = generateList.build({ + mode: 'lookup', + list: ListDefinition, + input_type: 'radio' + }); + return `${list_html}`; + + } + } + }, + resolve: { + ListDefinition: ['CredentialList', function(CredentialList) { + let list = _.cloneDeep(CredentialList); + list.lookupConfirmText = 'SELECT'; + return list; + }], + Dataset: ['ListDefinition', 'QuerySet', '$stateParams', 'GetBasePath', + (list, qs, $stateParams, GetBasePath) => { + let path = GetBasePath(list.name) || GetBasePath(list.basePath); + return qs.search(path, $stateParams[`${list.iterator}_search`]); + } + ] + }, + onExit: function($state) { + if ($state.transition) { + $('#form-modal').modal('hide'); + $('.modal-backdrop').remove(); + $('body').removeClass('modal-open'); + } + }, + }; + return Promise.all([ inventories, addGroup, @@ -210,6 +276,7 @@ angular.module('inventory', [ stateExtender.buildDefinition(copyMoveGroupRoute), stateExtender.buildDefinition(copyMoveHostRoute), stateExtender.buildDefinition(adHocRoute), + stateExtender.buildDefinition(adhocCredentialLookup) ]) }; diff --git a/awx/ui/client/src/inventories/manage/adhoc/adhoc.form.js b/awx/ui/client/src/inventories/manage/adhoc/adhoc.form.js index fbbbd7b6d4..c509da35d3 100644 --- a/awx/ui/client/src/inventories/manage/adhoc/adhoc.form.js +++ b/awx/ui/client/src/inventories/manage/adhoc/adhoc.form.js @@ -64,7 +64,6 @@ export default function() { basePath: 'credentials', sourceModel: 'credential', sourceField: 'name', - ngClick: 'lookUpCredential()', class: 'squeeze', awPopOver: '

Select the credential you want to use when ' + 'accessing the remote hosts to run the command. ' + diff --git a/awx/ui/client/src/inventories/manage/copy-move/copy-move-groups.controller.js b/awx/ui/client/src/inventories/manage/copy-move/copy-move-groups.controller.js index 37af28407b..9153dad34d 100644 --- a/awx/ui/client/src/inventories/manage/copy-move/copy-move-groups.controller.js +++ b/awx/ui/client/src/inventories/manage/copy-move/copy-move-groups.controller.js @@ -5,13 +5,13 @@ *************************************************/ export default - ['$scope', '$state', '$stateParams', 'generateList', 'GroupManageService', 'GetBasePath', 'CopyMoveGroupList', 'group', - function($scope, $state, $stateParams, GenerateList, GroupManageService, GetBasePath, CopyMoveGroupList, group){ - var list = CopyMoveGroupList, - view = GenerateList; + ['$scope', '$state', '$stateParams', 'GroupManageService', 'GetBasePath', 'CopyMoveGroupList', 'group', 'Dataset', + function($scope, $state, $stateParams, GroupManageService, GetBasePath, CopyMoveGroupList, group, Dataset){ + var list = CopyMoveGroupList; + $scope.item = group; $scope.submitMode = $stateParams.groups === undefined ? 'move' : 'copy'; - $scope['toggle_'+ list.iterator] = function(id){ + $scope.toggle_row = function(id){ // toggle off anything else currently selected _.forEach($scope.groups, (item) => {return item.id === id ? item.checked = 1 : item.checked = null;}); // yoink the currently selected thing @@ -58,33 +58,15 @@ $(el).prop('disabled', (idx, value) => !value); }); }; - var init = function(){ - var url = GetBasePath('inventory') + $stateParams.inventory_id + '/groups/'; - url += $stateParams.group ? '?not__id__in=' + group.id + ',' + _.last($stateParams.group) : '?not__id=' + group.id; - list.basePath = url; - $scope.atRootLevel = $stateParams.group ? 
false : true; - view.inject(list, { - mode: 'lookup', - id: 'copyMove-list', - scope: $scope, - input_type: 'radio' - }); - // @issue: OLD SEARCH - // SearchInit({ - // scope: $scope, - // set: list.name, - // list: list, - // url: url - // }); - // PaginateInit({ - // scope: $scope, - // list: list, - // url : url, - // mode: 'lookup' - // }); - // $scope.search(list.iterator, null, true, false); - // remove the current group from list - }; + function init(){ + $scope.atRootLevel = $stateParams.group ? false : true; + + // search init + $scope.list = list; + $scope[`${list.iterator}_dataset`] = Dataset.data; + $scope[list.name] = $scope[`${list.iterator}_dataset`].results; + } + init(); }]; diff --git a/awx/ui/client/src/inventories/manage/copy-move/copy-move-hosts.controller.js b/awx/ui/client/src/inventories/manage/copy-move/copy-move-hosts.controller.js index 9d278fde69..5c95523036 100644 --- a/awx/ui/client/src/inventories/manage/copy-move/copy-move-hosts.controller.js +++ b/awx/ui/client/src/inventories/manage/copy-move/copy-move-hosts.controller.js @@ -5,13 +5,13 @@ *************************************************/ export default - ['$scope', '$state', '$stateParams', 'generateList', 'HostManageService', 'GetBasePath', 'CopyMoveGroupList', 'host', - function($scope, $state, $stateParams, GenerateList, HostManageService, GetBasePath, CopyMoveGroupList, host){ - var list = CopyMoveGroupList, - view = GenerateList; + ['$scope', '$state', '$stateParams', 'generateList', 'HostManageService', 'GetBasePath', 'CopyMoveGroupList', 'host', 'Dataset', + function($scope, $state, $stateParams, GenerateList, HostManageService, GetBasePath, CopyMoveGroupList, host, Dataset){ + var list = CopyMoveGroupList; + $scope.item = host; $scope.submitMode = 'copy'; - $scope['toggle_'+ list.iterator] = function(id){ + $scope.toggle_row = function(id){ // toggle off anything else currently selected _.forEach($scope.groups, (item) => {return item.id === id ? 
item.checked = 1 : item.checked = null;}); // yoink the currently selected thing @@ -40,29 +40,11 @@ } }; var init = function(){ - var url = GetBasePath('inventory') + $stateParams.inventory_id + '/groups/'; - list.basePath = url; - view.inject(list, { - mode: 'lookup', - id: 'copyMove-list', - scope: $scope, - input_type: 'radio' - }); + // search init + $scope.list = list; + $scope[`${list.iterator}_dataset`] = Dataset.data; + $scope[list.name] = $scope[`${list.iterator}_dataset`].results; - // @issue: OLD SEARCH - // SearchInit({ - // scope: $scope, - // set: list.name, - // list: list, - // url: url - // }); - // PaginateInit({ - // scope: $scope, - // list: list, - // url : url, - // mode: 'lookup' - // }); - // $scope.search(list.iterator, null, true, false); }; init(); }]; diff --git a/awx/ui/client/src/inventories/manage/copy-move/copy-move.partial.html b/awx/ui/client/src/inventories/manage/copy-move/copy-move.partial.html index 1847837b92..030f0c7e3a 100644 --- a/awx/ui/client/src/inventories/manage/copy-move/copy-move.partial.html +++ b/awx/ui/client/src/inventories/manage/copy-move/copy-move.partial.html @@ -10,7 +10,7 @@ Move
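Both copy/move controllers now follow the same resolve-based "search init" convention: the route resolves a `Dataset`, and the controller only copies it onto scope. A minimal standalone sketch of that convention (the helper name and example shapes are illustrative, not part of the patch):

```javascript
// Hypothetical standalone sketch of the "search init" convention: the
// resolved Dataset lands on scope under `<iterator>_dataset`, and its
// result rows under the list's name.
function initSearchScope(scope, list, dataset) {
    scope.list = list;
    scope[`${list.iterator}_dataset`] = dataset.data;
    scope[list.name] = dataset.data.results;
}

// Example shapes, loosely mirroring CopyMoveGroupList and a qs.search()
// response wrapper.
const scope = {};
initSearchScope(
    scope,
    { iterator: 'copy', name: 'groups' },
    { data: { results: [{ id: 1, name: 'web' }] } }
);
```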

-
+
Use the inventory root
diff --git a/awx/ui/client/src/inventories/manage/copy-move/copy-move.route.js b/awx/ui/client/src/inventories/manage/copy-move/copy-move.route.js index 6e2c190e3f..b93e5e1e0f 100644 --- a/awx/ui/client/src/inventories/manage/copy-move/copy-move.route.js +++ b/awx/ui/client/src/inventories/manage/copy-move/copy-move.route.js @@ -10,14 +10,31 @@ import CopyMoveHostsController from './copy-move-hosts.controller'; var copyMoveGroupRoute = { name: 'inventoryManage.copyMoveGroup', - url: '/copy-move-group/{group_id}', + url: '/copy-move-group/{group_id:int}', + searchPrefix: 'copy', data: { group_id: 'group_id', }, + params: { + copy_search: { + value: { + not__id__in: null + }, + dynamic: true, + squash: '' + } + }, ncyBreadcrumb: { label: "COPY OR MOVE {{item.name}}" }, resolve: { + Dataset: ['CopyMoveGroupList', 'QuerySet', '$stateParams', 'GetBasePath', 'group', + function(list, qs, $stateParams, GetBasePath, group) { + $stateParams.copy_search.not__id__in = ($stateParams.group && $stateParams.group.length > 0 ? 
group.id + ',' + _.last($stateParams.group) : group.id.toString()); + let path = GetBasePath('inventory') + $stateParams.inventory_id + '/groups/'; + return qs.search(path, $stateParams.copy_search); + } + ], group: ['GroupManageService', '$stateParams', function(GroupManageService, $stateParams){ return GroupManageService.get({id: $stateParams.group_id}).then(res => res.data.results[0]); }] @@ -26,16 +43,33 @@ var copyMoveGroupRoute = { 'form@inventoryManage' : { controller: CopyMoveGroupsController, templateUrl: templateUrl('inventories/manage/copy-move/copy-move'), + }, + 'copyMoveList@inventoryManage.copyMoveGroup': { + templateProvider: function(CopyMoveGroupList, generateList) { + let html = generateList.build({ + list: CopyMoveGroupList, + mode: 'lookup', + input_type: 'radio' + }); + return html; + } } } }; var copyMoveHostRoute = { name: 'inventoryManage.copyMoveHost', url: '/copy-move-host/{host_id}', + searchPrefix: 'copy', ncyBreadcrumb: { label: "COPY OR MOVE {{item.name}}" }, resolve: { + Dataset: ['CopyMoveGroupList', 'QuerySet', '$stateParams', 'GetBasePath', + function(list, qs, $stateParams, GetBasePath) { + let path = GetBasePath('inventory') + $stateParams.inventory_id + '/hosts/'; + return qs.search(path, $stateParams.copy_search); + } + ], host: ['HostManageService', '$stateParams', function(HostManageService, $stateParams){ return HostManageService.get({id: $stateParams.host_id}).then(res => res.data.results[0]); }] @@ -44,6 +78,18 @@ var copyMoveHostRoute = { 'form@inventoryManage': { templateUrl: templateUrl('inventories/manage/copy-move/copy-move'), controller: CopyMoveHostsController, + }, + 'copyMoveList@inventoryManage.copyMoveHost': { + templateProvider: function(CopyMoveGroupList, generateList, $stateParams, GetBasePath) { + let list = CopyMoveGroupList; + list.basePath = GetBasePath('inventory') + $stateParams.inventory_id + '/hosts/'; + let html = generateList.build({ + list: CopyMoveGroupList, + mode: 'lookup', + input_type: 
'radio' + }); + return html; + } } } }; diff --git a/awx/ui/client/src/inventories/manage/groups/groups-add.controller.js b/awx/ui/client/src/inventories/manage/groups/groups-add.controller.js index 031cf03f8a..cc1283c582 100644 --- a/awx/ui/client/src/inventories/manage/groups/groups-add.controller.js +++ b/awx/ui/client/src/inventories/manage/groups/groups-add.controller.js @@ -31,9 +31,10 @@ export default ['$state', '$stateParams', '$scope', 'GroupForm', 'CredentialList } $scope.lookupCredential = function(){ + let kind = ($scope.source.value === "ec2") ? "aws" : $scope.source.value; $state.go('.credential', { credential_search: { - kind: $scope.source.value, + kind: kind, page_size: '5', page: '1' } @@ -111,6 +112,13 @@ export default ['$state', '$stateParams', '$scope', 'GroupForm', 'CredentialList }; $scope.sourceChange = function(source) { source = source.value; + if (source === 'custom'){ + $scope.credentialBasePath = GetBasePath('inventory_script'); + } + // equal to case 'ec2' || 'rax' || 'azure' || 'azure_rm' || 'vmware' || 'satellite6' || 'cloudforms' || 'openstack' + else{ + $scope.credentialBasePath = (source === 'ec2') ? GetBasePath('credentials') + '?kind=aws' : GetBasePath('credentials') + (source === '' ? '' : '?kind=' + (source)); + } if (source === 'ec2' || source === 'custom' || source === 'vmware' || source === 'openstack') { ParseTypeChange({ scope: $scope, @@ -119,6 +127,7 @@ export default ['$state', '$stateParams', '$scope', 'GroupForm', 'CredentialList parse_variable: 'envParseType' }); } + // reset fields $scope.group_by_choices = source === 'ec2' ? 
$scope.ec2_group_by : null; // azure_rm regions choices are keyed as "azure" in an OPTIONS request to the inventory_sources endpoint diff --git a/awx/ui/client/src/inventories/manage/groups/groups-edit.controller.js index 796dee5066..ae90cae8bf 100644 --- a/awx/ui/client/src/inventories/manage/groups/groups-edit.controller.js +++ b/awx/ui/client/src/inventories/manage/groups/groups-edit.controller.js @@ -58,9 +58,10 @@ export default ['$state', '$stateParams', '$scope', 'ToggleNotification', 'Parse }; $scope.lookupCredential = function(){ + let kind = ($scope.source.value === "ec2") ? "aws" : $scope.source.value; $state.go('.credential', { credential_search: { - kind: $scope.source.value, + kind: kind, page_size: '5', page: '1' } @@ -122,7 +123,7 @@ export default ['$state', '$stateParams', '$scope', 'ToggleNotification', 'Parse $scope.source = source; if (source.value === 'ec2' || source.value === 'custom' || source.value === 'vmware' || source.value === 'openstack') { - $scope[source.value + '_variables'] = $scope[source.value + '_variables'] === null ? '---' : $scope[source.value + '_variables']; + $scope[source.value + '_variables'] = $scope[source.value + '_variables'] === null || $scope[source.value + '_variables'] === undefined ? 
'---' : $scope[source.value + '_variables']; ParseTypeChange({ scope: $scope, field_id: source.value + '_variables', diff --git a/awx/ui/client/src/inventories/manage/groups/groups-list.controller.js b/awx/ui/client/src/inventories/manage/groups/groups-list.controller.js index 7385e2f09b..3e08aef999 100644 --- a/awx/ui/client/src/inventories/manage/groups/groups-list.controller.js +++ b/awx/ui/client/src/inventories/manage/groups/groups-list.controller.js @@ -46,6 +46,10 @@ } function buildStatusIndicators(group){ + if (group === undefined || group === null) { + group = {}; + } + let group_status, hosts_status; group_status = GetSyncStatusMsg({ @@ -73,7 +77,15 @@ $scope.groupSelect = function(id){ var group = $stateParams.group === undefined ? [id] : _($stateParams.group).concat(id).value(); - $state.go('inventoryManage', {inventory_id: $stateParams.inventory_id, group: group}, {reload: true}); + $state.go('inventoryManage', { + inventory_id: $stateParams.inventory_id, + group: group, + group_search: { + page_size: '20', + page: '1', + order_by: 'name', + } + }, {reload: true}); }; $scope.createGroup = function(){ $state.go('inventoryManage.addGroup'); @@ -142,10 +154,14 @@ $scope.$on(`ws-jobs`, function(e, data){ var group = Find({ list: $scope.groups, key: 'id', val: data.group_id }); + + if (group === undefined || group === null) { + group = {}; + } + if(data.status === 'failed' || data.status === 'successful'){ $state.reload(); - } - else{ + } else { var status = GetSyncStatusMsg({ status: data.status, has_inventory_sources: group.has_inventory_sources, diff --git a/awx/ui/client/src/inventories/manage/hosts/hosts-edit.controller.js b/awx/ui/client/src/inventories/manage/hosts/hosts-edit.controller.js index 4914cf9312..ec825bf510 100644 --- a/awx/ui/client/src/inventories/manage/hosts/hosts-edit.controller.js +++ b/awx/ui/client/src/inventories/manage/hosts/hosts-edit.controller.js @@ -19,14 +19,46 @@ $scope.parseType = 'yaml'; $scope.host = host; - 
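The credential `kind` mapping that the two group controllers above apply before jumping to the lookup state can be sketched on its own — EC2 inventory sources search for credentials of kind `aws`, while the other cloud sources reuse the source name directly. The function names here are mine; the mapping rules come from the patch:

```javascript
// Illustrative helper: AWX credentials for EC2 sources are stored with
// kind "aws", so the lookup's search kind can't just echo the source.
function credentialKind(sourceValue) {
    return sourceValue === 'ec2' ? 'aws' : sourceValue;
}

// The same rule applied to the credential base path built in
// sourceChange(): custom sources read from inventory scripts, everything
// else filters the credentials endpoint by kind.
function credentialBasePath(getBasePath, source) {
    if (source === 'custom') {
        return getBasePath('inventory_script');
    }
    const kind = credentialKind(source);
    return getBasePath('credentials') + (kind === '' ? '' : '?kind=' + kind);
}
```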
$scope.variables = host.variables === '' ? '---' : host.variables; + $scope.variables = getVars(host.variables); $scope.name = host.name; $scope.description = host.description; + ParseTypeChange({ scope: $scope, field_id: 'host_variables', }); } + + // Adding this function b/c sometimes extra vars are returned to the + // UI as a string (ex: "foo: bar"), and other times as a + // json-object-string (ex: "{"foo": "bar"}"). CodeMirror wouldn't know + // how to prettify the latter. The latter occurs when host vars were + // system generated and not user-input (such as adding a cloud host); + function getVars(str){ + + // Quick function to test if the host vars are a json-object-string, + // by testing if they can be converted to a JSON object w/o error. + function IsJsonString(str) { + try { + JSON.parse(str); + } catch (e) { + return false; + } + return true; + } + + if(str === ''){ + return '---'; + } + else if(IsJsonString(str)){ + str = JSON.parse(str); + return jsyaml.safeDump(str); + } + else if(!IsJsonString(str)){ + return str; + } + } + $scope.formCancel = function(){ $state.go('^'); }; diff --git a/awx/ui/client/src/inventories/manage/hosts/hosts-list.controller.js b/awx/ui/client/src/inventories/manage/hosts/hosts-list.controller.js index cee4abacbf..28089245f3 100644 --- a/awx/ui/client/src/inventories/manage/hosts/hosts-list.controller.js +++ b/awx/ui/client/src/inventories/manage/hosts/hosts-list.controller.js @@ -26,6 +26,12 @@ $scope[`${list.iterator}_dataset`] = hostsDataset.data; $scope[list.name] = $scope[`${list.iterator}_dataset`].results; + $scope.$watch(`${list.iterator}_dataset`, () => { + $scope.hosts + .forEach((host) => SetStatus({scope: $scope, + host: host})); + }); + // The ncy breadcrumb directive will look at this attribute when attempting to bind to the correct scope. // In this case, we don't want to incidentally bind to this scope when editing a host or a group. 
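The `getVars()` idea above can be exercised in isolation. This sketch keeps the detection logic but returns the parsed object instead of converting it to YAML, since the real controller's `jsyaml.safeDump` step needs the js-yaml library; the function names are illustrative:

```javascript
// Standalone sketch of host-variable normalization: empty vars become an
// empty YAML document, JSON-object-strings (system-generated hosts) are
// parsed, and anything else is assumed to be user-entered YAML already.
function isJsonString(str) {
    try {
        JSON.parse(str);
    } catch (e) {
        return false;
    }
    return true;
}

function normalizeHostVars(str) {
    if (str === '') {
        return '---';
    }
    if (isJsonString(str)) {
        return JSON.parse(str); // the real code feeds this to jsyaml.safeDump
    }
    return str;
}
```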
See: // https://github.com/ncuillery/angular-breadcrumb/issues/42 for a little more information on the diff --git a/awx/ui/client/src/inventories/manage/inventory-manage.route.js b/awx/ui/client/src/inventories/manage/inventory-manage.route.js index ddcbf85e1d..ca7589a3e1 100644 --- a/awx/ui/client/src/inventories/manage/inventory-manage.route.js +++ b/awx/ui/client/src/inventories/manage/inventory-manage.route.js @@ -98,9 +98,17 @@ export default { }, // target ui-views with name@inventoryManage state 'groupsList@inventoryManage': { - templateProvider: function(InventoryGroups, generateList, $templateRequest) { + templateProvider: function(InventoryGroups, generateList, $templateRequest, $stateParams, GetBasePath) { + let list = _.cloneDeep(InventoryGroups); + if($stateParams && $stateParams.group) { + list.basePath = GetBasePath('groups') + _.last($stateParams.group) + '/children'; + } + else { + //reaches here if the user is on the root level group + list.basePath = GetBasePath('inventory') + $stateParams.inventory_id + '/root_groups'; + } let html = generateList.build({ - list: InventoryGroups, + list: list, mode: 'edit' }); html = generateList.wrapPanel(html); @@ -112,9 +120,17 @@ export default { controller: GroupsListController }, 'hostsList@inventoryManage': { - templateProvider: function(InventoryHosts, generateList) { + templateProvider: function(InventoryHosts, generateList, $stateParams, GetBasePath) { + let list = _.cloneDeep(InventoryHosts); + if($stateParams && $stateParams.group) { + list.basePath = GetBasePath('groups') + _.last($stateParams.group) + '/all_hosts'; + } + else { + //reaches here if the user is on the root level group + list.basePath = GetBasePath('inventory') + $stateParams.inventory_id + '/hosts'; + } let html = generateList.build({ - list: InventoryHosts, + list: list, mode: 'edit' }); return generateList.wrapPanel(html); diff --git a/awx/ui/client/src/job-detail/host-event/host-event.block.less 
b/awx/ui/client/src/job-detail/host-event/host-event.block.less index eafa9678fd..9b31b74e87 100644 --- a/awx/ui/client/src/job-detail/host-event/host-event.block.less +++ b/awx/ui/client/src/job-detail/host-event/host-event.block.less @@ -1,150 +1,150 @@ -@import "./client/src/shared/branding/colors.less"; -@import "./client/src/shared/branding/colors.default.less"; -@import "./client/src/shared/layouts/one-plus-two.less"; - -.noselect { - -webkit-touch-callout: none; /* iOS Safari */ - -webkit-user-select: none; /* Chrome/Safari/Opera */ - -khtml-user-select: none; /* Konqueror */ - -moz-user-select: none; /* Firefox */ - -ms-user-select: none; /* Internet Explorer/Edge */ - user-select: none; /* Non-prefixed version, currently - not supported by any browser */ -} - -@media screen and (min-width: 768px){ - .HostEvent .modal-dialog{ - width: 700px; - } -} -.HostEvent .CodeMirror{ - overflow-x: hidden; -} -.HostEvent-controls button.HostEvent-close{ - color: #FFFFFF; - text-transform: uppercase; - padding-left: 15px; - padding-right: 15px; - background-color: @default-link; - border-color: @default-link; - &:hover{ - background-color: @default-link-hov; - border-color: @default-link-hov; - } -} -.HostEvent-body{ - margin-bottom: 10px; -} -.HostEvent-tab { - color: @btn-txt; - background-color: @btn-bg; - font-size: 12px; - border: 1px solid @btn-bord; - height: 30px; - border-radius: 5px; - margin-right: 20px; - padding-left: 10px; - padding-right: 10px; - padding-bottom: 5px; - padding-top: 5px; - transition: background-color 0.2s; - text-transform: uppercase; - text-align: center; - white-space: nowrap; - .noselect; -} -.HostEvent-tab:hover { - color: @btn-txt; - background-color: @btn-bg-hov; - cursor: pointer; -} -.HostEvent-tab--selected{ - color: @btn-txt-sel!important; - background-color: @default-icon!important; - border-color: @default-icon!important; -} -.HostEvent-view--container{ - width: 100%; - display: flex; - flex-direction: row; - flex-wrap: 
nowrap; - justify-content: space-between; -} -.HostEvent .modal-footer{ - border: 0; - margin-top: 0px; - padding-top: 5px; -} -.HostEvent-controls{ - float: right; - button { - margin-left: 10px; - } -} -.HostEvent-status--ok{ - color: @green; -} -.HostEvent-status--unreachable{ - color: @unreachable; -} -.HostEvent-status--changed{ - color: @changed; -} -.HostEvent-status--failed{ - color: @default-err; -} -.HostEvent-status--skipped{ - color: @skipped; -} -.HostEvent-title{ - color: @default-interface-txt; - font-weight: 600; - margin-bottom: 8px; -} -// .HostEvent .modal-body{ -// max-height: 500px; -// overflow-y: auto; -// padding: 20px; +// @import "./client/src/shared/branding/colors.less"; +// @import "./client/src/shared/branding/colors.default.less"; +// @import "./client/src/shared/layouts/one-plus-two.less"; +// +// .noselect { +// -webkit-touch-callout: none; /* iOS Safari */ +// -webkit-user-select: none; /* Chrome/Safari/Opera */ +// -khtml-user-select: none; /* Konqueror */ +// -moz-user-select: none; /* Firefox */ +// -ms-user-select: none; /* Internet Explorer/Edge */ +// user-select: none; /* Non-prefixed version, currently +// not supported by any browser */ +// } +// +// @media screen and (min-width: 768px){ +// .HostEvent .modal-dialog{ +// width: 700px; +// } +// } +// .HostEvent .CodeMirror{ +// overflow-x: hidden; +// } +// .HostEvent-controls button.HostEvent-close{ +// color: #FFFFFF; +// text-transform: uppercase; +// padding-left: 15px; +// padding-right: 15px; +// background-color: @default-link; +// border-color: @default-link; +// &:hover{ +// background-color: @default-link-hov; +// border-color: @default-link-hov; +// } +// } +// .HostEvent-body{ +// margin-bottom: 10px; +// } +// .HostEvent-tab { +// color: @btn-txt; +// background-color: @btn-bg; +// font-size: 12px; +// border: 1px solid @btn-bord; +// height: 30px; +// border-radius: 5px; +// margin-right: 20px; +// padding-left: 10px; +// padding-right: 10px; +// 
padding-bottom: 5px; +// padding-top: 5px; +// transition: background-color 0.2s; +// text-transform: uppercase; +// text-align: center; +// white-space: nowrap; +// .noselect; +// } +// .HostEvent-tab:hover { +// color: @btn-txt; +// background-color: @btn-bg-hov; +// cursor: pointer; +// } +// .HostEvent-tab--selected{ +// color: @btn-txt-sel!important; +// background-color: @default-icon!important; +// border-color: @default-icon!important; +// } +// .HostEvent-view--container{ +// width: 100%; +// display: flex; +// flex-direction: row; +// flex-wrap: nowrap; +// justify-content: space-between; +// } +// .HostEvent .modal-footer{ +// border: 0; +// margin-top: 0px; +// padding-top: 5px; +// } +// .HostEvent-controls{ +// float: right; +// button { +// margin-left: 10px; +// } +// } +// .HostEvent-status--ok{ +// color: @green; +// } +// .HostEvent-status--unreachable{ +// color: @unreachable; +// } +// .HostEvent-status--changed{ +// color: @changed; +// } +// .HostEvent-status--failed{ +// color: @default-err; +// } +// .HostEvent-status--skipped{ +// color: @skipped; +// } +// .HostEvent-title{ +// color: @default-interface-txt; +// font-weight: 600; +// margin-bottom: 8px; +// } +// // .HostEvent .modal-body{ +// // max-height: 500px; +// // overflow-y: auto; +// // padding: 20px; +// // } +// .HostEvent-nav{ +// padding-top: 12px; +// padding-bottom: 12px; +// } +// .HostEvent-field{ +// margin-bottom: 8px; +// flex: 0 1 12em; +// } +// .HostEvent-field--label{ +// text-transform: uppercase; +// flex: 0 1 80px; +// max-width: 80px; +// font-size: 12px; +// word-wrap: break-word; +// } +// .HostEvent-field{ +// .OnePlusTwo-left--detailsRow; +// } +// .HostEvent-field--content{ +// word-wrap: break-word; +// max-width: 13em; +// flex: 0 1 13em; +// } +// .HostEvent-details--left, .HostEvent-details--right{ +// flex: 1 1 47%; +// } +// .HostEvent-details--left{ +// margin-right: 40px; +// } +// .HostEvent-details--right{ +// .HostEvent-field--label{ +// flex: 
0 1 25em; +// } +// .HostEvent-field--content{ +// max-width: 15em; +// flex: 0 1 15em; +// align-self: flex-end; +// } +// } +// .HostEvent-button:disabled { +// pointer-events: all!important; // } -.HostEvent-nav{ - padding-top: 12px; - padding-bottom: 12px; -} -.HostEvent-field{ - margin-bottom: 8px; - flex: 0 1 12em; -} -.HostEvent-field--label{ - text-transform: uppercase; - flex: 0 1 80px; - max-width: 80px; - font-size: 12px; - word-wrap: break-word; -} -.HostEvent-field{ - .OnePlusTwo-left--detailsRow; -} -.HostEvent-field--content{ - word-wrap: break-word; - max-width: 13em; - flex: 0 1 13em; -} -.HostEvent-details--left, .HostEvent-details--right{ - flex: 1 1 47%; -} -.HostEvent-details--left{ - margin-right: 40px; -} -.HostEvent-details--right{ - .HostEvent-field--label{ - flex: 0 1 25em; - } - .HostEvent-field--content{ - max-width: 15em; - flex: 0 1 15em; - align-self: flex-end; - } -} -.HostEvent-button:disabled { - pointer-events: all!important; -} diff --git a/awx/ui/client/src/job-detail/host-event/host-event.route.js b/awx/ui/client/src/job-detail/host-event/host-event.route.js index 97254e883f..86e499c2b0 100644 --- a/awx/ui/client/src/job-detail/host-event/host-event.route.js +++ b/awx/ui/client/src/job-detail/host-event/host-event.route.js @@ -20,7 +20,7 @@ var hostEventModal = { return res.data.results[0]; }); }], hostResults: ['JobDetailService', '$stateParams', function(JobDetailService, $stateParams) { - return JobDetailService.getJobEventChildren($stateParams.taskId).then(res => res.data.results); + return JobDetailService.getJobEventChildren($stateParams.taskUuid).then(res => res.data.results); }] }, onExit: function() { diff --git a/awx/ui/client/src/job-detail/job-detail.service.js b/awx/ui/client/src/job-detail/job-detail.service.js index e5578349a5..8e436a3e96 100644 --- a/awx/ui/client/src/job-detail/job-detail.service.js +++ b/awx/ui/client/src/job-detail/job-detail.service.js @@ -129,9 +129,9 @@ export default msg: 'Call to ' + url 
+ '. GET returned: ' + status }); }); }, - getJobEventChildren: function(id){ + getJobEventChildren: function(uuid){ var url = GetBasePath('job_events'); - url = url + id + '/children/?order_by=host_name'; + url = `${url}?parent__uuid=${uuid}&order_by=host_name`; Rest.setUrl(url); return Rest.get() .success(function(data){ diff --git a/awx/ui/client/src/job-results/event-queue.service.js b/awx/ui/client/src/job-results/event-queue.service.js index 3737de2258..02c99ff9a5 100644 --- a/awx/ui/client/src/job-results/event-queue.service.js +++ b/awx/ui/client/src/job-results/event-queue.service.js @@ -4,72 +4,14 @@ * All Rights Reserved *************************************************/ -export default ['jobResultsService', 'parseStdoutService', '$q', function(jobResultsService, parseStdoutService, $q){ +export default ['jobResultsService', 'parseStdoutService', function(jobResultsService, parseStdoutService){ var val = {}; val = { populateDefers: {}, queue: {}, - // Get the count of the last event - getPreviousCount: function(counter, type) { - var countAttr; - - if (type === 'play') { - countAttr = 'playCount'; - } else if (type === 'task') { - countAttr = 'taskCount'; - } else { - countAttr = 'count'; - } - - var previousCount = $q.defer(); - - // iteratively find the last count - var findCount = function(counter) { - if (counter === 0) { - // if counter is 0, no count has been initialized - // initialize one! - - if (countAttr === 'count') { - previousCount.resolve({ - ok: 0, - skipped: 0, - unreachable: 0, - failures: 0, - changed: 0 - }); - } else { - previousCount.resolve(0); - } - - } else if (val.queue[counter] && val.queue[counter][countAttr] !== undefined) { - // this event has a count, resolve! 
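The `getJobEventChildren` change above swaps the nested `/<id>/children/` endpoint for a filter on the flat job-events collection. A sketch of the URL construction (the helper name is mine; the query parameters come from the patch):

```javascript
// Children of a job event are now fetched by filtering job_events on the
// parent's uuid rather than following a /<id>/children/ sub-resource.
function jobEventChildrenUrl(basePath, uuid) {
    return `${basePath}?parent__uuid=${uuid}&order_by=host_name`;
}
```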
- previousCount.resolve(_.clone(val.queue[counter][countAttr])); - } else { - // this event doesn't have a count, decrement to the - // previous event and check it - findCount(counter - 1); - } - }; - - if (val.queue[counter - 1]) { - // if the previous event has been resolved, start the iterative - // get previous count process - findCount(counter - 1); - } else if (val.populateDefers[counter - 1]){ - // if the previous event has not been resolved, wait for it to - // be and then start the iterative get previous count process - val.populateDefers[counter - 1].promise.then(function() { - findCount(counter - 1); - }); - } - - return previousCount.promise; - }, // munge the raw event from the backend into the event_queue's format munge: function(event) { - var mungedEventDefer = $q.defer(); - // basic data needed in the munged event var mungedEvent = { counter: event.counter, @@ -84,64 +26,15 @@ export default ['jobResultsService', 'parseStdoutService', '$q', function(jobRes // updates to it if (event.stdout) { mungedEvent.stdout = parseStdoutService.parseStdout(event); + mungedEvent.start_line = event.start_line + 1; mungedEvent.changes.push('stdout'); } // for different types of events, you need different types of data if (event.event_name === 'playbook_on_start') { - mungedEvent.count = { - ok: 0, - skipped: 0, - unreachable: 0, - failures: 0, - changed: 0 - }; mungedEvent.startTime = event.modified; - mungedEvent.changes.push('count'); mungedEvent.changes.push('startTime'); - } else if (event.event_name === 'playbook_on_play_start') { - val.getPreviousCount(mungedEvent.counter, "play") - .then(count => { - mungedEvent.playCount = count + 1; - mungedEvent.changes.push('playCount'); - }); - } else if (event.event_name === 'playbook_on_task_start') { - val.getPreviousCount(mungedEvent.counter, "task") - .then(count => { - mungedEvent.taskCount = count + 1; - mungedEvent.changes.push('taskCount'); - }); - } else if (event.event_name === 'runner_on_ok' || - 
event.event_name === 'runner_on_async_ok') { - val.getPreviousCount(mungedEvent.counter) - .then(count => { - mungedEvent.count = count; - mungedEvent.count.ok++; - mungedEvent.changes.push('count'); - }); - } else if (event.event_name === 'runner_on_skipped') { - val.getPreviousCount(mungedEvent.counter) - .then(count => { - mungedEvent.count = count; - mungedEvent.count.skipped++; - mungedEvent.changes.push('count'); - }); - } else if (event.event_name === 'runner_on_unreachable') { - val.getPreviousCount(mungedEvent.counter) - .then(count => { - mungedEvent.count = count; - mungedEvent.count.unreachable++; - mungedEvent.changes.push('count'); - }); - } else if (event.event_name === 'runner_on_error' || - event.event_name === 'runner_on_async_failed') { - val.getPreviousCount(mungedEvent.counter) - .then(count => { - mungedEvent.count = count; - mungedEvent.count.failed++; - mungedEvent.changes.push('count'); - }); - } else if (event.event_name === 'playbook_on_stats') { + } if (event.event_name === 'playbook_on_stats') { // get the data for populating the host status bar mungedEvent.count = jobResultsService .getCountsFromStatsEvent(event.event_data); @@ -150,10 +43,7 @@ export default ['jobResultsService', 'parseStdoutService', '$q', function(jobRes mungedEvent.changes.push('countFinished'); mungedEvent.changes.push('finishedTime'); } - - mungedEventDefer.resolve(mungedEvent); - - return mungedEventDefer.promise; + return mungedEvent; }, // reinitializes the event queue value for the job results page initialize: function() { @@ -162,88 +52,18 @@ export default ['jobResultsService', 'parseStdoutService', '$q', function(jobRes }, // populates the event queue populate: function(event) { - // if a defer hasn't been set up for the event, - // set one up now - if (!val.populateDefers[event.counter]) { - val.populateDefers[event.counter] = $q.defer(); - } + val.queue[event.counter] = val.munge(event); - // make sure not to send duplicate events over to the - // 
controller - if (val.queue[event.counter] && - val.queue[event.counter].processed) { - val.populateDefers.reject("duplicate event: " + - event); - } - - if (!val.queue[event.counter]) { - var resolvePopulation = function(event) { - // to resolve, put the event on the queue and - // then resolve the deferred value - val.queue[event.counter] = event; - val.populateDefers[event.counter].resolve(event); - }; - - if (event.counter === 1) { - // for the first event, go ahead and munge and - // resolve - val.munge(event).then(event => { - resolvePopulation(event); - }); - } else { - // for all other events, you have to do some things - // to keep the event processing in the UI synchronous - - if (!val.populateDefers[event.counter - 1]) { - // first, if the previous event doesn't have - // a defer set up (this happens when websocket - // events are coming in and you need to make - // rest calls to catch up), go ahead and set a - // defer for the previous event - val.populateDefers[event.counter - 1] = $q.defer(); - } - - // you can start the munging process... - val.munge(event).then(event => { - // ...but wait until the previous event has - // been resolved before resolving this one and - // doing stuff in the ui (that's why we - // needed that previous conditional). - val.populateDefers[event.counter - 1].promise - .then(() => { - resolvePopulation(event); - }); - }); - } + if (!val.queue[event.counter].processed) { + return val.munge(event); } else { - // don't repopulate the event if it's already been added - // and munged either by rest or by websocket event - val.populateDefers[event.counter] - .resolve(val.queue[event.counter]); + return {}; } - - return val.populateDefers[event.counter].promise; }, // the event has been processed in the view and should be marked as // completed in the queue markProcessed: function(event) { - var process = function(event) { - // the event has now done it's work in the UI, record - // that! 
- val.queue[event.counter].processed = true; - }; - - if (!val.queue[event.counter]) { - // sometimes, the process is called in the controller and - // the event queue hasn't caught up and actually added - // the event to the queue yet. Wait until that happens - val.populateDefers[event.counter].promise - .finally(function() { - process(event); - }); - } else { - process(event); - } + val.queue[event.counter].processed = true; } }; diff --git a/awx/ui/client/src/job-results/host-event/host-event-modal.partial.html b/awx/ui/client/src/job-results/host-event/host-event-modal.partial.html index 279e3672eb..6087115e45 100644 --- a/awx/ui/client/src/job-results/host-event/host-event-modal.partial.html +++ b/awx/ui/client/src/job-results/host-event/host-event-modal.partial.html @@ -59,7 +59,7 @@
- +
diff --git a/awx/ui/client/src/job-results/host-event/host-event.block.less b/awx/ui/client/src/job-results/host-event/host-event.block.less index 218792a477..6ae8f66c27 100644 --- a/awx/ui/client/src/job-results/host-event/host-event.block.less +++ b/awx/ui/client/src/job-results/host-event/host-event.block.less @@ -20,20 +20,14 @@ .HostEvent .CodeMirror{ overflow-x: hidden; } -.HostEvent-controls button.HostEvent-close{ - color: #FFFFFF; - text-transform: uppercase; - padding-left: 15px; - padding-right: 15px; - background-color: @default-link; - border-color: @default-link; - &:hover{ - background-color: @default-link-hov; - border-color: @default-link-hov; - } + +.HostEvent-close:hover{ + color: @btn-txt; + background-color: @btn-bg-hov; } + .HostEvent-body{ - margin-bottom: 10px; + margin-bottom: 20px; } .HostEvent-tab { color: @btn-txt; @@ -106,12 +100,12 @@ } .HostEvent .modal-body{ max-height: 500px; - padding: 20px; + padding: 0px!important; overflow-y: auto; } .HostEvent-nav{ padding-top: 12px; - padding-bottom: 12px; + padding-bottom: 20px; } .HostEvent-field{ margin-bottom: 8px; @@ -164,10 +158,11 @@ border-radius: 5px; border: 1px solid #ccc; font-style: normal; + background-color: @default-no-items-bord; } .HostEvent-numberColumnPreload { - background-color: @default-no-items-bord; + background-color: @default-list-header-bg; height: 198px; border-right: 1px solid #ccc; width: 30px; @@ -175,7 +170,7 @@ } .HostEvent-numberColumn { - background-color: @default-no-items-bord; + background-color: @default-list-header-bg; border-right: 1px solid #ccc; border-bottom-left-radius: 5px; color: #999; diff --git a/awx/ui/client/src/job-results/host-event/host-event.route.js b/awx/ui/client/src/job-results/host-event/host-event.route.js index 728e75467d..6dad68c5d0 100644 --- a/awx/ui/client/src/job-results/host-event/host-event.route.js +++ b/awx/ui/client/src/job-results/host-event/host-event.route.js @@ -8,7 +8,7 @@ import { templateUrl } from 
'../../shared/template-url/template-url.factory'; var hostEventModal = { name: 'jobDetail.host-event', - url: '/task/:taskId/host-event/:eventId', + url: '/host-event/:eventId', controller: 'HostEventController', templateUrl: templateUrl('job-results/host-event/host-event-modal'), 'abstract': false, diff --git a/awx/ui/client/src/job-results/host-status-bar/host-status-bar.directive.js b/awx/ui/client/src/job-results/host-status-bar/host-status-bar.directive.js index f7fbd2e8f6..778d238e47 100644 --- a/awx/ui/client/src/job-results/host-status-bar/host-status-bar.directive.js +++ b/awx/ui/client/src/job-results/host-status-bar/host-status-bar.directive.js @@ -15,7 +15,7 @@ export default [ 'templateUrl', link: function(scope) { // as count is changed by event data coming in, // update the host status bar - scope.$watch('count', function(val) { + var toDestroy = scope.$watch('count', function(val) { if (val) { Object.keys(val).forEach(key => { // reposition the hosts status bar by setting @@ -38,6 +38,10 @@ export default [ 'templateUrl', .filter(key => (val[key] > 0)).length > 0); } }); + + scope.$on('$destroy', function(){ + toDestroy(); + }); } }; }]; diff --git a/awx/ui/client/src/job-results/host-status-bar/host-status-bar.partial.html b/awx/ui/client/src/job-results/host-status-bar/host-status-bar.partial.html index 24a9170bcd..2d860cc2e6 100644 --- a/awx/ui/client/src/job-results/host-status-bar/host-status-bar.partial.html +++ b/awx/ui/client/src/job-results/host-status-bar/host-status-bar.partial.html @@ -20,7 +20,7 @@ aw-tool-tip="{{skippedCountTip}}" data-tip-watch="skippedCountTip">
diff --git a/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.block.less b/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.block.less index cc34d11c2f..7ff93d17ee 100644 --- a/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.block.less +++ b/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.block.less @@ -3,13 +3,16 @@ @breakpoint-md: 1200px; .JobResultsStdOut { - height: ~"calc(100% - 70px)"; + height: 100%; + width: 100%; + display: flex; + flex-direction: column; + align-items: stretch; } .JobResultsStdOut-toolbar { + flex: initial; display: flex; - height: 38px; - margin-top: 15px; border: 1px solid @default-list-header-bg; border-bottom: 0px; border-radius: 5px; @@ -28,7 +31,7 @@ display: flex; justify-content: space-between; width: 70px; - padding-bottom: 0px; + padding-bottom: 10px; padding-left: 8px; padding-right: 8px; padding-top: 10px; @@ -106,21 +109,18 @@ } .JobResultsStdOut-stdoutContainer { - height: ~"calc(100% - 48px)"; - background-color: @default-no-items-bord; + flex: 1; + position: relative; + background-color: #F6F6F6; overflow-y: scroll; overflow-x: hidden; } .JobResultsStdOut-numberColumnPreload { background-color: @default-list-header-bg; + position: absolute; + height: 100%; width: 70px; - position: fixed; - top: 148px; - bottom: 20px; - margin-top: 65px; - margin-bottom: 65px; - } .JobResultsStdOut-aLineOfStdOut { @@ -162,6 +162,7 @@ .JobResultsStdOut-stdoutColumn { padding-left: 20px; + padding-right: 20px; padding-top: 2px; padding-bottom: 2px; color: @default-interface-txt; @@ -171,6 +172,16 @@ width:100%; } +.JobResultsStdOut-stdoutColumn--tooMany { + font-weight: bold; + text-transform: uppercase; + color: @default-err; +} + +.JobResultsStdOut-stdoutColumn { + cursor: pointer; +} + .JobResultsStdOut-aLineOfStdOut:hover, .JobResultsStdOut-aLineOfStdOut:hover .JobResultsStdOut-lineNumberColumn { background-color: @default-bg; @@ -197,6 +208,7 @@ 
.JobResultsStdOut-followAnchor { height: 20px; width: 100%; + border-left: 70px solid @default-list-header-bg; } .JobResultsStdOut-toTop { @@ -211,6 +223,11 @@ color: @default-interface-txt; } +.JobResultsStdOut-cappedLine { + color: @b7grey; + font-style: italic; +} + @media (max-width: @breakpoint-md) { .JobResultsStdOut-numberColumnPreload { display: none; diff --git a/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.directive.js b/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.directive.js index 15d3e5e90c..14a34a607a 100644 --- a/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.directive.js +++ b/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.directive.js @@ -12,7 +12,19 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', templateUrl: templateUrl('job-results/job-results-stdout/job-results-stdout'), restrict: 'E', link: function(scope, element) { + var toDestroy = [], + resizer, + scrollWatcher; + scope.$on('$destroy', function(){ + $(window).off("resize", resizer); + $(window).off("scroll", scrollWatcher); + $(".JobResultsStdOut-stdoutContainer").off('scroll', + scrollWatcher); + toDestroy.forEach(closureFunc => closureFunc()); + }); + + scope.stdoutContainerAvailable.resolve("container available"); // utility function used to find the top visible line and // parent header in the pane // @@ -90,23 +102,14 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', var visItem, parentItem; - var containerHeight = $container.height(); - var containerTop = $container.position().top; - var containerNetHeight = containerHeight + containerTop; - // iterate through each line of standard out - $container.find('.JobResultsStdOut-aLineOfStdOut') + $container.find('.JobResultsStdOut-aLineOfStdOut:visible') .each( function () { var $this = $(this); - var lineHeight = $this.height(); - var lineTop = $this.position().top; - var lineNetHeight = 
lineHeight + lineTop; - // check to see if the line is the first visible // line in the viewport... - if (lineNetHeight > containerTop && - lineTop < containerNetHeight) { + if ($this.position().top >= 0) { // ...if it is, return the line number // for this line @@ -124,9 +127,15 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', // stop iterating over the standard out // lines once the first one has been // found + + $this = null; return false; - } - }); + } + + $this = null; + }); + + $container = null; return { visLine: visItem, @@ -140,22 +149,24 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', } else { scope.isMobile = false; } - // watch changes to the window size - $(window).resize(function() { + + resizer = function() { // and update the isMobile var accordingly if (window.innerWidth <= 1200 && !scope.isMobile) { scope.isMobile = true; } else if (window.innerWidth > 1200 & scope.isMobile) { scope.isMobile = false; } - }); + }; + // watch changes to the window size + $(window).resize(resizer); var lastScrollTop; var initScrollTop = function() { lastScrollTop = 0; }; - var scrollWatcher = function() { + scrollWatcher = function() { var st = $(this).scrollTop(); var netScroll = st + $(this).innerHeight(); var fullHeight; @@ -187,11 +198,15 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', } lastScrollTop = st; + + st = null; + netScroll = null; + fullHeight = null; }; // update scroll watchers when isMobile changes based on // window resize - scope.$watch('isMobile', function(val) { + toDestroy.push(scope.$watch('isMobile', function(val) { if (val === true) { // make sure ^ TOP always shown for mobile scope.stdoutOverflowed = true; @@ -213,7 +228,7 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', $(".JobResultsStdOut-stdoutContainer").on('scroll', scrollWatcher); } - }); + })); // called to scroll to follow anchor scope.followScroll = function() { @@ 
-246,7 +261,7 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', // if following becomes active, go ahead and get to the bottom // of the standard out pane - scope.$watch('followEngaged', function(val) { + toDestroy.push(scope.$watch('followEngaged', function(val) { // scroll to follow point if followEngaged is true if (val) { scope.followScroll(); @@ -260,7 +275,7 @@ export default [ 'templateUrl', '$timeout', '$location', '$anchorScroll', scope.followTooltip = "Click to follow standard out as it comes in."; } } - }); + })); // follow button ng-click function scope.followToggleClicked = function() { diff --git a/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.partial.html b/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.partial.html index 0ba992b146..b0b3d85f10 100644 --- a/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.partial.html +++ b/awx/ui/client/src/job-results/job-results-stdout/job-results-stdout.partial.html @@ -34,6 +34,16 @@
+
+
+ +
+
The standard output is too large to display. Please specify additional filters to narrow the standard out.
+
Too much previous output to display. Showing running standard output.
+
-1) { + // make it so that the search includes a counter less than the + // first counter from the socket + let params = _.cloneDeep($stateParams.job_event_search); + params.counter__lte = "" + counter; + + Dataset = QuerySet.search(jobData.related.job_events, + params); + + Dataset.then(function(actualDataset) { + $scope.job_event_dataset = actualDataset.data; + }); + } + })); + + // used for tag search + $scope.job_event_dataset = Dataset.data; + + // used for tag search + $scope.list = { + basePath: jobData.related.job_events, + defaultSearchParams: function(term){ + return { + or__stdout__icontains: term, + }; + }, + }; + + // used for tag search + $scope.job_events = $scope.job_event_dataset.results; + var getTowerLinks = function() { var getTowerLink = function(key) { if ($scope.job.related[key]) { @@ -15,6 +64,7 @@ export default ['jobData', 'jobDataOptions', 'jobLabels', 'jobFinished', 'count' $scope.machine_credential_link = getTowerLink('credential'); $scope.cloud_credential_link = getTowerLink('cloud_credential'); $scope.network_credential_link = getTowerLink('network_credential'); + $scope.schedule_link = getTowerLink('schedule'); }; // uses options to set scope variables to their readable string @@ -30,7 +80,6 @@ export default ['jobData', 'jobDataOptions', 'jobLabels', 'jobFinished', 'count' } }; - $scope.status_label = getTowerLabel('status'); $scope.type_label = getTowerLabel('job_type'); $scope.verbosity_label = getTowerLabel('verbosity'); }; @@ -46,19 +95,83 @@ export default ['jobData', 'jobDataOptions', 'jobLabels', 'jobFinished', 'count' $scope.labels = jobLabels; $scope.jobFinished = jobFinished; + // update label in left pane and tooltip in right pane when the job_status + // changes + toDestroy.push($scope.$watch('job_status', function(status) { + if (status) { + $scope.status_label = $scope.jobOptions.status.choices + .filter(val => val[0] === status) + .map(val => val[1])[0]; + $scope.status_tooltip = "Job " + $scope.status_label; + } + 
})); + + $scope.previousTaskFailed = false; + + toDestroy.push($scope.$watch('job.job_explanation', function(explanation) { + if (explanation && explanation.split(":")[0] === "Previous Task Failed") { + $scope.previousTaskFailed = true; + + var taskObj = JSON.parse(explanation.substring(explanation.split(":")[0].length + 1)); + // return a promise from the options request with the permission type choices (including adhoc) as a param + var fieldChoice = fieldChoices({ + $scope: $scope, + url: 'api/v1/unified_jobs/', + field: 'type' + }); + + // manipulate the choices from the options request to be set on + // scope and be usable by the list form + fieldChoice.then(function (choices) { + choices = + fieldLabels({ + choices: choices + }); + $scope.explanation_fail_type = choices[taskObj.job_type]; + $scope.explanation_fail_name = taskObj.job_name; + $scope.explanation_fail_id = taskObj.job_id; + $scope.task_detail = $scope.explanation_fail_type + " failed for " + $scope.explanation_fail_name + " with ID " + $scope.explanation_fail_id + "."; + }); + } else { + $scope.previousTaskFailed = false; + } + })); + + + // update the job_status value. Use the cached rootScope value if there + // is one. 
This is a workaround when the rest call for the jobData is + // made before some socket events come in for the job status + if ($rootScope['lastSocketStatus' + jobData.id]) { + $scope.job_status = $rootScope['lastSocketStatus' + jobData.id]; + delete $rootScope['lastSocketStatus' + jobData.id]; + } else { + $scope.job_status = jobData.status; + } + // turn related api browser routes into tower routes getTowerLinks(); + + // the links below can't be set in getTowerLinks because the + // links on the UI don't directly match the corresponding URL + // on the API browser if(jobData.summary_fields && jobData.summary_fields.job_template && jobData.summary_fields.job_template.id){ $scope.job_template_link = `/#/templates/job_template/${$scope.job.summary_fields.job_template.id}`; } if(jobData.summary_fields && jobData.summary_fields.project_update && jobData.summary_fields.project_update.status){ - $scope.project_status = jobData.summary_fields.project_update.status; + $scope.project_status = jobData.summary_fields.project_update.status; } if(jobData.summary_fields && jobData.summary_fields.project_update && jobData.summary_fields.project_update.id){ - $scope.project_update_link = `/#/scm_update/${jobData.summary_fields.project_update.id}`; + $scope.project_update_link = `/#/scm_update/${jobData.summary_fields.project_update.id}`; + } + if(jobData.summary_fields && jobData.summary_fields.source_workflow_job && + jobData.summary_fields.source_workflow_job.id){ + $scope.workflow_result_link = `/#/workflows/${jobData.summary_fields.source_workflow_job.id}`; + } + if(jobData.result_traceback) { + $scope.job.result_traceback = jobData.result_traceback.trim().split('\n').join('
'); } // use options labels to manipulate display of details @@ -75,6 +188,12 @@ export default ['jobData', 'jobDataOptions', 'jobLabels', 'jobFinished', 'count' $scope.stdoutFullScreen = false; $scope.toggleStdoutFullscreen = function() { $scope.stdoutFullScreen = !$scope.stdoutFullScreen; + + if ($scope.stdoutFullScreen === true) { + $scope.toggleStdoutFullscreenTooltip = i18n._("Collapse Output"); + } else if ($scope.stdoutFullScreen === false) { + $scope.toggleStdoutFullscreenTooltip = i18n._("Expand Output"); + } }; $scope.deleteJob = function() { @@ -121,98 +240,189 @@ export default ['jobData', 'jobDataOptions', 'jobLabels', 'jobFinished', 'count' $scope.events = {}; + // For elapsed time while a job is running, compute the difference in seconds, + // from the time the job started until now. Moment() returns the current + // time as a moment object. + var start = ($scope.job.started === null) ? moment() : moment($scope.job.started); + if(jobFinished === false){ + var elapsedInterval = setInterval(function(){ + let now = moment(); + $scope.job.elapsed = now.diff(start, 'seconds'); + }, 1000); + } + // EVENT STUFF BELOW // This is where the async updates to the UI actually happen. // Flow is event queue munging in the service -> $scope setting in here - var processEvent = function(event) { + var processEvent = function(event, context) { + // only care about filter context checking when the event comes + // as a rest call + if (context && context !== currentContext) { + return; + } // put the event in the queue - eventQueue.populate(event).then(mungedEvent => { - // make changes to ui based on the event returned from the queue - if (mungedEvent.changes) { - mungedEvent.changes.forEach(change => { - // we've got a change we need to make to the UI! 
- // update the necessary scope and make the change - if (change === 'startTime' && !$scope.job.start) { - $scope.job.start = mungedEvent.startTime; - } + var mungedEvent = eventQueue.populate(event); - if (change === 'count' && !$scope.countFinished) { - // for all events that affect the host count, - // update the status bar as well as the host - // count badge - $scope.count = mungedEvent.count; - $scope.hostCount = getTotalHostCount(mungedEvent - .count); - } + // make changes to ui based on the event returned from the queue + if (mungedEvent.changes) { + mungedEvent.changes.forEach(change => { + // we've got a change we need to make to the UI! + // update the necessary scope and make the change + if (change === 'startTime' && !$scope.job.start) { + $scope.job.start = mungedEvent.startTime; + } - if (change === 'playCount') { - $scope.playCount = mungedEvent.playCount; - } + if (change === 'count' && !$scope.countFinished) { + // for all events that affect the host count, + // update the status bar as well as the host + // count badge + $scope.count = mungedEvent.count; + $scope.hostCount = getTotalHostCount(mungedEvent + .count); + } - if (change === 'taskCount') { - $scope.taskCount = mungedEvent.taskCount; - } + if (change === 'finishedTime' && !$scope.job.finished) { + $scope.job.finished = mungedEvent.finishedTime; + $scope.jobFinished = true; + $scope.followTooltip = "Jump to last line of standard out."; + } - if (change === 'finishedTime' && !$scope.job.finished) { - $scope.job.finished = mungedEvent.finishedTime; - $scope.jobFinished = true; - $scope.followTooltip = "Jump to last line of standard out."; - } + if (change === 'countFinished') { + // the playbook_on_stats event actually lets + // us know that we don't need to iteratively + // look at event to update the host counts + // any more. 
+ $scope.countFinished = true; + } - if (change === 'countFinished') { - // the playbook_on_stats event actually lets - // us know that we don't need to iteratively - // look at event to update the host counts - // any more. - $scope.countFinished = true; - } + if(change === 'stdout'){ + // put stdout elements in stdout container - if(change === 'stdout'){ - // put stdout elements in stdout container - - // this scopes the event to that particular - // block of stdout. - // If you need to see the event a particular - // stdout block is from, you can: - // angular.element($0).scope().event - $scope.events[mungedEvent.counter] = $scope.$new(); - $scope.events[mungedEvent.counter] - .event = mungedEvent; + var appendToBottom = function(mungedEvent){ + // if we get here then the event type was either a + // header line, recap line, or one of the additional + // event types, so we append it to the bottom. + // These are the event types for captured + // stdout not directly related to playbook or runner + // events: + // (0, 'debug', _('Debug'), False), + // (0, 'verbose', _('Verbose'), False), + // (0, 'deprecated', _('Deprecated'), False), + // (0, 'warning', _('Warning'), False), + // (0, 'system_warning', _('System Warning'), False), + // (0, 'error', _('Error'), True), angular .element(".JobResultsStdOut-stdoutContainer") .append($compile(mungedEvent .stdout)($scope.events[mungedEvent .counter])); + }; - // move the followAnchor to the bottom of the - // container - $(".JobResultsStdOut-followAnchor") - .appendTo(".JobResultsStdOut-stdoutContainer"); - // if follow is engaged, - // scroll down to the followAnchor - if ($scope.followEngaged) { - if (!$scope.followScroll) { - $scope.followScroll = function() { - $log.error("follow scroll undefined, standard out directive not loaded yet?"); - }; - } - $scope.followScroll(); + // this scopes the event to that particular + // block of stdout. 
+ // If you need to see the event a particular + // stdout block is from, you can: + // angular.element($0).scope().event + $scope.events[mungedEvent.counter] = $scope.$new(); + $scope.events[mungedEvent.counter] + .event = mungedEvent; + + if (mungedEvent.stdout.indexOf("not_skeleton") > -1) { + // put non-duplicate lines into the standard + // out pane where they should go (within the + // right header section, before the next line + // as indicated by start_line) + window.$ = $; + var putIn; + var classList = $("div", + "
"+mungedEvent.stdout+"
") + .attr("class").split(" "); + if (classList + .filter(v => v.indexOf("task_") > -1) + .length) { + putIn = classList + .filter(v => v.indexOf("task_") > -1)[0]; + } else if(classList + .filter(v => v.indexOf("play_") > -1) + .length) { + putIn = classList + .filter(v => v.indexOf("play_") > -1)[0]; + } else { + appendToBottom(mungedEvent); } - } - }); - } - // the changes have been processed in the ui, mark it in the queue + var putAfter; + var isDup = false; + $(".header_" + putIn + ",." + putIn) + .each((i, v) => { + if (angular.element(v).scope() + .event.start_line < mungedEvent + .start_line) { + putAfter = v; + } else if (angular.element(v).scope() + .event.start_line === mungedEvent + .start_line) { + isDup = true; + return false; + } else if (angular.element(v).scope() + .event.start_line > mungedEvent + .start_line) { + return false; + } + }); + + if (!isDup) { + $(putAfter).after($compile(mungedEvent + .stdout)($scope.events[mungedEvent + .counter])); + } + + + classList = null; + putIn = null; + } else { + appendToBottom(mungedEvent); + } + + // move the followAnchor to the bottom of the + // container + $(".JobResultsStdOut-followAnchor") + .appendTo(".JobResultsStdOut-stdoutContainer"); + + // if follow is engaged, + // scroll down to the followAnchor + if ($scope.followEngaged) { + if (!$scope.followScroll) { + $scope.followScroll = function() { + $log.error("follow scroll undefined, standard out directive not loaded yet?"); + }; + } + $scope.followScroll(); + } + } + }); + + // the changes have been processed in the ui, mark it in the + // queue eventQueue.markProcessed(event); - }); + } }; - // PULL! 
grab completed event data and process each event - // TODO: implement retry logic in case one of these requests fails - var getEvents = function(url) { + $scope.stdoutContainerAvailable = $q.defer(); + $scope.hasSkeleton = $q.defer(); + + eventQueue.initialize(); + + $scope.playCount = 0; + $scope.taskCount = 0; + + // get header and recap lines + var skeletonPlayCount = 0; + var skeletonTaskCount = 0; + var getSkeleton = function(url) { jobResultsService.getEvents(url) .then(events => { events.results.forEach(event => { @@ -220,28 +430,219 @@ export default ['jobData', 'jobDataOptions', 'jobLabels', 'jobFinished', 'count' // coming over the websocket event.event_name = event.event; delete event.event; + + // increment play and task count + if (event.event_name === "playbook_on_play_start") { + skeletonPlayCount++; + } else if (event.event_name === "playbook_on_task_start") { + skeletonTaskCount++; + } + processEvent(event); }); if (events.next) { - getEvents(events.next); + getSkeleton(events.next); + } else { + // after the skeleton requests have completed, + // put the play and task count into the dom + $scope.playCount = skeletonPlayCount; + $scope.taskCount = skeletonTaskCount; + $scope.hasSkeleton.resolve("skeleton resolved"); } }); }; - getEvents($scope.job.related.job_events); + + $scope.stdoutContainerAvailable.promise.then(() => { + getSkeleton(jobData.related.job_events + "?order_by=id&or__event__in=playbook_on_start,playbook_on_play_start,playbook_on_task_start,playbook_on_stats"); + }); + + var getEvents; + + var processPage = function(events, context) { + // currentContext is the context of the filter when this request + // to processPage was made + // + // currentContext is the context of the filter currently + // + // if they are not the same, make sure to stop process events/ + // making rest calls for next pages/etc. 
(you can see context is + // also passed into getEvents and processEvent and similar checks + // exist in these functions) + // + // also, if the page doesn't contain results (i.e.: the response + // returns an error), don't process the page + if (context !== currentContext || events === undefined || + events.results === undefined) { + return; + } + + events.results.forEach(event => { + // get the name in the same format as the data + // coming over the websocket + event.event_name = event.event; + delete event.event; + + processEvent(event, context); + }); + if (events.next && !cancelRequests) { + getEvents(events.next, context); + } else { + // put those paused events into the pane + $scope.gotPreviouslyRanEvents.resolve(""); + } + }; + + // grab non-header recap lines + getEvents = function(url, context) { + if (context !== currentContext) { + return; + } + + jobResultsService.getEvents(url) + .then(events => { + processPage(events, context); + }); + }; + + // grab non-header recap lines + toDestroy.push($scope.$watch('job_event_dataset', function(val) { + if (val) { + eventQueue.initialize(); + + Object.keys($scope.events) + .forEach(v => { + // don't destroy scope events for skeleton lines + let name = $scope.events[v].event.name; + + if (!(name === "playbook_on_play_start" || + name === "playbook_on_task_start" || + name === "playbook_on_stats")) { + $scope.events[v].$destroy(); + $scope.events[v] = null; + delete $scope.events[v]; + } + }); + + // pause websocket events from coming in to the pane + $scope.gotPreviouslyRanEvents = $q.defer(); + currentContext += 1; + + let context = currentContext; + + $( ".JobResultsStdOut-aLineOfStdOut.not_skeleton" ).remove(); + $scope.hasSkeleton.promise.then(() => { + if (val.count > parseInt(val.maxEvents)) { + $(".header_task").hide(); + $(".header_play").hide(); + $scope.standardOutTooltip = '
' + + i18n._('The output is too large to display. Please download.') + + '
' + + '
' + + '' + + '' + + '' + + '' + + '
' + + '
'; + + if ($scope.job_status === "successful" || + $scope.job_status === "failed" || + $scope.job_status === "error" || + $scope.job_status === "canceled") { + $scope.tooManyEvents = true; + $scope.tooManyPastEvents = false; + } else { + $scope.tooManyPastEvents = true; + $scope.tooManyEvents = false; + $scope.gotPreviouslyRanEvents.resolve(""); + } + } else { + $(".header_task").show(); + $(".header_play").show(); + $scope.tooManyEvents = false; + $scope.tooManyPastEvents = false; + processPage(val, context); + } + }); + } + })); + + // Processing of job_events messages from the websocket - $scope.$on(`ws-job_events-${$scope.job.id}`, function(e, data) { - processEvent(data); - }); + toDestroy.push($scope.$on(`ws-job_events-${$scope.job.id}`, function(e, data) { + + // use the lowest counter coming over the socket to retrigger pull data + // to only be for stuff lower than that id + // + // only do this for entering the jobs page mid-run (thus the + // data.counter is 1 conditional + if (data.counter === 1) { + $scope.firstCounterFromSocket = -2; + } + + if ($scope.firstCounterFromSocket !== -2 && + $scope.firstCounterFromSocket === -1 || + data.counter < $scope.firstCounterFromSocket) { + $scope.firstCounterFromSocket = data.counter; + } + + $q.all([$scope.gotPreviouslyRanEvents.promise, + $scope.hasSkeleton.promise]).then(() => { + // put the line in the + // standard out pane (and increment play and task + // count if applicable) + if (data.event_name === "playbook_on_play_start") { + $scope.playCount++; + } else if (data.event_name === "playbook_on_task_start") { + $scope.taskCount++; + } + processEvent(data); + }); + })); // Processing of job-status messages from the websocket - $scope.$on(`ws-jobs`, function(e, data) { - if (parseInt(data.unified_job_id, 10) === parseInt($scope.job.id,10)) { - $scope.job.status = data.status; - } - if (parseInt(data.project_id, 10) === parseInt($scope.job.project,10)) { + toDestroy.push($scope.$on(`ws-jobs`, function(e, 
data) { + if (parseInt(data.unified_job_id, 10) === + parseInt($scope.job.id,10)) { + // controller is defined, so set the job_status + $scope.job_status = data.status; + if (data.status === "successful" || + data.status === "failed" || + data.status === "error" || + data.status === "canceled") { + clearInterval(elapsedInterval); + // When the job is finished retrieve the job data to + // correct anything that was out of sync from the job run + jobResultsService.getJobData($scope.job.id).then(function(data){ + $scope.job = data; + }); + } + } else if (parseInt(data.project_id, 10) === + parseInt($scope.job.project,10)) { + // this is a project status update message, so set the + // project status in the left pane $scope.project_status = data.status; - $scope.project_update_link = `/#/scm_update/${data.unified_job_id}`; + $scope.project_update_link = `/#/scm_update/${data + .unified_job_id}`; + } else { + // controller was previously defined, but is not yet defined + // for this job. cache the socket status on root scope + $rootScope['lastSocketStatus' + data.unified_job_id] = data.status; } + })); + + $scope.$on('$destroy', function(){ + $( ".JobResultsStdOut-aLineOfStdOut" ).remove(); + cancelRequests = true; + eventQueue.initialize(); + Object.keys($scope.events) + .forEach(v => { + $scope.events[v].$destroy(); + $scope.events[v] = null; + }); + $scope.events = {}; + clearInterval(elapsedInterval); + toDestroy.forEach(closureFunc => closureFunc()); }); }]; diff --git a/awx/ui/client/src/job-results/job-results.partial.html b/awx/ui/client/src/job-results/job-results.partial.html index a3bbd11cf5..016f3e745b 100644 --- a/awx/ui/client/src/job-results/job-results.partial.html +++ b/awx/ui/client/src/job-results/job-results.partial.html @@ -15,7 +15,7 @@
- RESULTS + DETAILS
@@ -36,9 +36,9 @@
+ + +
@@ -158,6 +226,7 @@
Extra Variables + +