Compare commits

..

77 Commits

Author SHA1 Message Date
Marliana Lara
2a1dffd363 Fix edit constructed inventory hanging loading state (#14343) 2023-08-21 12:36:36 -04:00
digitalbadger-uk
8c7ab8fcf2 Added required epoc time field for Splunk HEC Event Receiver (#14246)
Signed-off-by: Iain <iain@digitalbadger.com>
2023-08-21 09:44:52 -03:00
Hao Liu
3de8455960 Fix missing trailing / in PUBLIC_PATH for UI
Missing trailing `/` causes the UI to load files from an incorrect static dir location.
2023-08-17 15:16:59 -04:00
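
The effect described above can be reproduced with plain URL resolution; a quick sketch (the host and asset names are made up, this is not AWX code):

from urllib.parse import urljoin

# Relative asset paths resolve against the last "directory" of the base URL,
# so a PUBLIC_PATH without a trailing slash drops its final path segment.
print(urljoin('https://awx.example.com/static', 'js/app.js'))
# -> https://awx.example.com/js/app.js   (wrong directory)
print(urljoin('https://awx.example.com/static/', 'js/app.js'))
# -> https://awx.example.com/static/js/app.js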
Hao Liu
d832e75e99 Fix ui-next build step file path issue
Add the full path to the mv command so that it can be run from ui_next and from the project root.

Additionally, move the file rename to the src build step.
2023-08-17 15:16:59 -04:00
abwalczyk
a89e266feb Fixed task and web docs (#14350) 2023-08-17 12:22:51 -04:00
Hao Liu
8e1516eeb7 Update UI_NEXT build to set PRODUCT and PUBLIC_PATH
https://github.com/ansible/ansible-ui/pull/792 added a configurable public path (which was changed to '/' in https://github.com/ansible/ansible-ui/pull/766/files#diff-2606df06d89b38ff979770f810c3c269083e7c0fbafb27aba7f9ea0297179828L128-R157)

This PR added the variable when building ui-next
2023-08-16 18:35:12 -04:00
Hao Liu
c7f2fdbe57 Rename ui_next index.html to index_awx.html during build process
Due to a change made in https://github.com/ansible/ansible-ui/pull/766/files#diff-7ae45ad102eab3b6d7e7896acd08c427a9b25b346470d7bc6507b6481575d519R18, awx/ui_next/build/awx/index_awx.html was renamed to awx/ui_next/build/awx/index.html

This PR fixes the problem by renaming the file back
2023-08-16 18:35:12 -04:00
delinea-sagar
c75757bf22 Update python-tss-sdk dependency (#14207)
Signed-off-by: delinea-sagar <sagar.wani@c.delinea.com>
2023-08-16 20:07:35 +00:00
Kevin Pavon
b8ec7c4072 Schedule rruleset fix related #13446 (#13611)
Signed-off-by: Kevin Pavon <7450065+KaraokeKev@users.noreply.github.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-16 16:10:31 -03:00
jbreitwe-rh
bb1c155bc9 Fixed typos (#14347) 2023-08-16 15:05:23 -04:00
Jeff Bradberry
4822dd79fc Revert "Improve performance for awx cli export (#13182)" 2023-08-15 15:55:10 -04:00
jainnikhil30
4cd90163fc make the default JOB_EVENT_BUFFER_SECONDS 1 second (#14335) 2023-08-12 07:49:34 +05:30
Alan Rominger
8dc6ceffee Fix programming error in facts retry merge (#14336) 2023-08-11 13:54:18 -04:00
Alan Rominger
2c7184f9d2 Add a retry to update host facts on deadlocks (#14325) 2023-08-11 11:13:56 -04:00
Martin Slemr
5cf93febaa HostMetricSummaryMonthly: Analytics export 2023-08-11 09:38:23 -04:00
Alan Rominger
284bd8377a Integrate scheduler into dispatcher main loop (#14067)
Dispatcher refactoring to get pg_notify publish payload
  as separate method

Refactor periodic module under dispatcher entirely
  Use real numbers for schedule reference time
  Run based on due_to_run method

Review comments about naming and code comments
2023-08-10 14:43:07 -04:00
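
A minimal usage sketch of the reworked scheduler; the import path and method names come from the awx/main/dispatch/periodic.py diff later in this compare, while the schedule entries themselves are illustrative:

from datetime import timedelta

from awx.main.dispatch.periodic import Scheduler  # module reworked in this compare

# Schedule dict in the format the new Scheduler documents: only the timedelta
# is used for scheduling, the rest of each entry is for the caller.
schedule = {
    'metrics_gather': {'schedule': timedelta(seconds=20)},
    'cluster_node_heartbeat': {'schedule': timedelta(seconds=60)},
}
scheduler = Scheduler(schedule)

# In the dispatcher main loop: run whatever is due, then feed the returned
# delay back in as the select() timeout on the pg_notify connection.
for job in scheduler.get_and_mark_pending():
    print(f'{job.name} is due, data: {job.data}')
select_timeout = scheduler.time_until_next_run()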
Jeff Bradberry
14992cee17 Add in an async task to migrate the data over 2023-08-10 13:48:58 -04:00
Jeff Bradberry
6db663eacb Modify main/0185 to set aside the json fields that might be a problem
Rename them, then create a new clean field of the new jsonb type.
We'll use a task to do the data conversion.
2023-08-10 13:48:58 -04:00
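
A schematic sketch of the rename-then-add pattern the message describes (this is not the actual main/0185 migration; the model, field, and dependency names are placeholders):

from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [('main', '0184_previous')]  # placeholder dependency

    operations = [
        # Set the possibly-problematic text-backed JSON field aside under a new name...
        migrations.RenameField(model_name='job', old_name='survey_passwords', new_name='old_survey_passwords'),
        # ...and create a clean field of the new jsonb-backed type; an async
        # task (see the commit above) then copies the data over.
        migrations.AddField(model_name='job', name='survey_passwords', field=models.JSONField(blank=True, default=dict, editable=False)),
    ]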
Ivanilson Junior
87bb70bcc0 Remove extra quote from Skipped task status string (#14318)
Signed-off-by: Ivanilson Junior <ivanilsonaraujojr@gmail.com>
Co-authored-by: kialam <digitalanime@gmail.com>
2023-08-09 15:58:46 -07:00
Pablo Hess
c2d02841e8 Allow importing licenses with a missing "usage" attribute (#14326) 2023-08-09 16:41:14 -04:00
onefourfive
e5a6007bf1 fix broken link to upgrade docs. related #11313 (#14296)
Signed-off-by: onefourfive <>
Co-authored-by: onefourfive <unknown>
2023-08-09 15:06:44 -04:00
Alan Rominger
6f9ea1892b AAP-14538 Only process ansible_facts for successful jobs (#14313) 2023-08-04 17:10:14 -04:00
Sean Sullivan
abc56305cc Add Request time out option for collection (#14157)
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-08-03 15:06:04 -03:00
kialam
9bb6786a58 Wait for new label IDs before setting label prompt values. (#14283) 2023-08-03 09:46:46 -04:00
Michael Abashian
aec9a9ca56 Fix rbac around credential access add button (#14290) 2023-08-03 09:18:21 -04:00
John Westcott IV
7e4cf859f5 Added PR check to ensure JIRA links are present (#13839) 2023-08-02 15:28:13 -04:00
mcen1
90c3d8a275 Update example service-account.yml for container group in documentation (#13479)
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
Co-authored-by: Nana <35573203+masbahnana@users.noreply.github.com>
2023-08-02 15:27:18 -04:00
lucas-benedito
6d1c8de4ed Fix trial status and host limit with sub (#14237)
Co-authored-by: Lucas Benedito <lbenedit@redhat.com>
2023-08-02 10:27:20 -04:00
Seth Foster
601b62deef bump python-daemon package (#14301) 2023-08-01 01:39:17 +00:00
Seth Foster
131dd088cd fix linting (#14302) 2023-07-31 20:37:37 -04:00
Rick Elrod
445d892050 Drop unused django-taggit dependency (#14241)
This drops the django-taggit dependency and removes the relevant fields
from old migrations.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-31 10:05:27 -05:00
Michael Abashian
35a576f2dd Adds autoComplete attribute to forms that were missing it (#14080) 2023-07-28 09:49:36 -04:00
John Westcott IV
7838641215 Fixed dependencies tag in PR labeler (#14286) 2023-07-28 08:30:30 -04:00
Alan Rominger
ab5cc2e69c Simplifications for DependencyManager (#13533) 2023-07-27 15:42:29 -04:00
John Westcott IV
5a63533967 Added support to collection for named urls (#14205) 2023-07-27 10:22:41 -03:00
Christian Adams
b549ae1efa Only show the product version header when the requester is authenticated (#14135) 2023-07-26 18:38:05 -04:00
Alex Corey
bd0089fd35 fixes docs link for controller versions >= 4.3 (#14287) 2023-07-26 21:54:39 +00:00
Christian Adams
40d18e95c2 Explicitly turn off autocomplete for API login form (#14232) 2023-07-26 15:33:26 -04:00
Andrew Klychkov
191a0f7f2a docs/execution_environments.md: add a link to EE getting started guide (#14263) 2023-07-26 15:05:36 -04:00
eric-zadara
852bb0717c Return back chdir to project sync to support project-local roles/collections
Signed-off-by: eric-zadara <eric@zadarastorage.com>
2023-07-25 09:58:43 -05:00
Alan Rominger
98bfe3f43f Add missing trigger for failed-to-start nodes (#13802) 2023-07-24 12:17:46 -04:00
John Westcott IV
53a7b7818e Updating release process doc for operator hub instructions (#13564) 2023-07-24 15:29:26 +01:00
Gabriel Muniz
e7c7454a3a Remove host update code which can be non performant (#14233) 2023-07-24 09:56:40 -04:00
Homero Pawlowski
63e82aa4a3 Fix collection module docs for names, IDs, and named URLs (#14269) 2023-07-24 08:57:46 -04:00
ZitaNemeckova
fc1b74aa68 Remove extra data for AoC (#14254) 2023-07-19 11:16:53 -04:00
Alan Rominger
ea455df9f4 Only push the production images for main repo (#14261) 2023-07-19 09:51:33 -04:00
Satoe Imaishi
8e2a5ed8ae Require pyyaml >= 6.0.1 (#14262) 2023-07-18 16:25:14 -05:00
Rick Elrod
1d7e54bd39 Wrap Django RedisCache to mute exceptions (#14243)
We introduce a thin wrapper over Django's RedisCache so that the functionality of DJANGO_REDIS_IGNORE_EXCEPTIONS is retained while still being able to drop the django-redis dependency.

Credit to django-redis's implementation for the idea of using a decorator for this and abstracting out the exception handling logic.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-18 15:31:09 -05:00
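
For orientation, a minimal settings sketch showing how Django could be pointed at such a wrapper; the BACKEND dotted path matches the awx/main/cache.py file added in this compare, while the LOCATION value is only an example:

# settings sketch; the Redis URL is illustrative, not the AWX default
CACHES = {
    'default': {
        'BACKEND': 'awx.main.cache.AWXRedisCache',
        'LOCATION': 'redis://localhost:6379/1',
    }
}
# retained django-redis behavior: swallow Redis errors instead of raising
DJANGO_REDIS_IGNORE_EXCEPTIONS = True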
Cristiano Nicolai
83df056f71 Small doc fixes for workflow and task manager (#14242) 2023-07-18 19:23:48 +00:00
Rick Elrod
48edb15a03 Prevent Dispatcher deadlock when Redis disappears (#14249)
This fixes https://github.com/ansible/awx/issues/14245 which has
more information about this issue.

This change addresses both:
- A clashing signal handler (registering a callback to fire when
  the task manager times out, and hitting that callback in cases
  where we didn't expect to). Make dispatcher timeout use
  SIGUSR1, not SIGTERM.
- Metrics not being reported should not make us crash, so that is
  now fixed as well.

Signed-off-by: Rick Elrod <rick@elrod.me>
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-07-18 10:43:46 -05:00
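
A minimal sketch of the signal separation described above (generic Python, not the AWX code): SIGTERM stays reserved for shutdown, while the task-manager timeout callback is delivered on SIGUSR1 so the two handlers can no longer clash.

import signal

def on_timeout(signum, frame):
    # hypothetical timeout callback; in AWX this interrupts a stuck task manager
    raise RuntimeError('advisory-lock holder exceeded its timeout')

def on_shutdown(signum, frame):
    raise SystemExit(0)

signal.signal(signal.SIGUSR1, on_timeout)   # timeout notification only
signal.signal(signal.SIGTERM, on_shutdown)  # normal termination only

# a supervising process would then send the timeout with:
#   os.kill(worker_pid, signal.SIGUSR1)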
John Westcott IV
8ddc19a927 Changing how associations work in awx collection (#13626)
Co-authored-by: Alan Rominger <arominge@redhat.com>
Co-authored-by: Jessica Steurer <70719005+jay-steurer@users.noreply.github.com>
2023-07-17 14:16:55 -03:00
Sean Sullivan
b021ad7b28 Allow job_template collection module to set verbosity to 5 (#14244) 2023-07-17 09:48:14 -05:00
Rick Elrod
b8ba2feecd Tell Makefile and pre-commit.sh that they are bash
On some systems, /bin/sh is a bash symlink and running it will launch
bash in sh compatibility mode. However, bash-specific syntax will still
work in this mode (for example using == or pipefail).

But on systems where /bin/sh is a symlink to another shell (think:
Debian-based), those bashisms might not be available.

Set the shell in the Makefile, so that it uses bash (since it is already
depending on bash, even though it is calling it as /bin/sh by default),
and add a shebang to pre-commit.sh for the same reason.

Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-14 12:06:55 -05:00
Rick Elrod
8cfb704f86 Migrate from django-redis to Django's built-in Redis caching support (#14210)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-13 12:16:16 -05:00
John Westcott IV
efcac860de Upgrade django to 4.2.3 (#14228) 2023-07-13 08:52:50 -04:00
Martin Slemr
6c5590e0e6 HostMetricSummaryMonthly command + views + scheduled task (#13999)
Co-authored-by: Alan Rominger <arominge@redhat.com>
2023-07-12 16:40:09 -04:00
Erez Tamam
0edcd688a2 add organization column notification template list (#13998) 2023-07-12 15:11:47 -04:00
Alan Rominger
b8c48f7d50 Restore pre-upgrade pg_notify notification behavior (#14222) 2023-07-11 16:23:53 -04:00
John Westcott IV
07e30a3d5f Refined release documentation (#14221) 2023-07-10 19:45:34 +00:00
John Westcott IV
cb5a8aa194 Fix black pre-commit hook (#14212) 2023-07-06 16:36:50 -04:00
Seth Foster
8b49f910c7 Add settings.RECEPTOR_LOG_LEVEL, update work signing key path (#14098) 2023-07-06 11:39:30 -04:00
kialam
a4f808df34 Schedules form - pass time prop as string. (#14206) 2023-07-06 07:57:55 -07:00
Alan Rominger
82abd18927 Fix DELETE 500 KeyError due to eventless model events (#14172) 2023-07-05 15:37:52 -04:00
John Westcott IV
5e9d514e5e Added CSRF Origin in settings (#14062) 2023-07-05 15:18:23 -04:00
Rick Elrod
4a34ee1f1e Add optional pgbouncer to dev environment (#14083)
Signed-off-by: Rick Elrod <rick@elrod.me>
2023-07-05 13:41:47 -05:00
John Westcott IV
3624fe2cac Add combined roles/collection requirements on project sync (#14081) 2023-07-05 13:25:44 -03:00
Cesar Francisco San Nicolas Martinez
0f96d9aca2 Rename/relocate receptor crt in install bundle (#14201) 2023-07-05 14:50:55 +02:00
Shane McDonald
989b80e771 Fix selinux errors with Redis mount in dev env 2023-07-03 09:57:01 -04:00
John Westcott IV
cc64be937d Fix spelling errors in readme of awx_collection/tools
Signed-off-by: John Westcott <john.westcott.iv@redhat.com>
2023-06-30 15:41:47 -04:00
John Westcott IV
94183d602c Enhancing vault integration
Add persistent storage

Auto-create vault and awx via playbooks

Create a new pattern for custom containers where we can do initialization

Auto-install roles needed for plumbing via the Makefile
2023-06-30 10:05:15 -04:00
Vidya Nambiar
ac4ef141bf Fix filter experience when assigning access to teams (#14175) 2023-06-29 15:15:32 -04:00
jainnikhil30
86f6b54eec add the bulk api swagger topic for API reference docs (#14181) 2023-06-28 21:55:38 +05:30
Michael Abashian
bd8108b27c Fixed bug where a weekly rrule string without a BYDAY would result in the UI throwing a TypeError (#14182) 2023-06-28 11:10:49 -04:00
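
For context on the fix above: RFC 5545 allows a weekly rule with no BYDAY (it then recurs on the weekday of DTSTART), so the UI has to tolerate that shape of string. A quick check with python-dateutil (illustrative, not AWX code):

from dateutil.rrule import rrulestr

# Weekly rule with no BYDAY: recurrence falls back to the weekday of DTSTART (a Tuesday here).
rule = rrulestr('DTSTART:20230801T120000Z\nRRULE:FREQ=WEEKLY;INTERVAL=1;COUNT=3')
print(list(rule))  # three consecutive Tuesdays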
Alan Rominger
aed96fb365 Use the proper queryset to filter project update events (#14166) 2023-06-26 21:41:08 -04:00
Alan Rominger
fe2da52eec Upgrade Github actions issue labeler to fix 404 errors (#14163) 2023-06-26 17:14:53 -04:00
Alan Rominger
974465e46a Add hashivault option as docker-compose optional container (#14161)
Co-authored-by: Sarabraj Singh <singh.sarabraj@gmail.com>
2023-06-26 15:48:58 -04:00
Alan Rominger
c736986023 Try to fix CI by adding dropped coreapi lib (#14165) 2023-06-26 15:11:12 -04:00
180 changed files with 3275 additions and 1150 deletions

View File

@@ -15,5 +15,5 @@
"dependencies":
- any: ["awx/ui/package.json"]
- any: ["awx/requirements/*.txt"]
- any: ["awx/requirements/requirements.in"]
- any: ["requirements/*.txt"]
- any: ["requirements/requirements.in"]

View File

@@ -48,8 +48,11 @@ jobs:
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-dev-build
DEV_DOCKER_TAG_BASE=ghcr.io/${OWNER_LC} COMPOSE_TAG=${GITHUB_REF##*/} make awx-kube-build
- name: Push image
- name: Push development images
run: |
docker push ghcr.io/${OWNER_LC}/awx_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx_kube_devel:${GITHUB_REF##*/}
docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
- name: Push AWX k8s image, only for upstream and feature branches
run: docker push ghcr.io/${OWNER_LC}/awx:${GITHUB_REF##*/}
if: endsWith(github.repository, '/awx')

View File

@@ -17,7 +17,7 @@ jobs:
steps:
- name: Label Issue
uses: github/issue-labeler@v2.4.1
uses: github/issue-labeler@v3.1
with:
repo-token: "${{ secrets.GITHUB_TOKEN }}"
not-before: 2021-12-07T07:00:00Z

View File

@@ -0,0 +1,35 @@
---
name: Check body for reference to jira
on:
pull_request:
branches:
- release_**
jobs:
pr-check:
if: github.repository_owner == 'ansible' && github.repository != 'awx'
name: Scan PR description for JIRA links
runs-on: ubuntu-latest
permissions:
packages: write
contents: read
steps:
- name: Check for JIRA lines
env:
PR_BODY: ${{ github.event.pull_request.body }}
run: |
echo "$PR_BODY" | grep "JIRA: None" > no_jira
echo "$PR_BODY" | grep "JIRA: https://.*[0-9]+"> jira
exit 0
# We exit 0 and set the shell to prevent the returns from the greps from failing this step
# See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#exit-codes-and-error-action-preference
shell: bash {0}
- name: Check for exactly one item
run: |
if [ $(cat no_jira jira | wc -l) != 1 ] ; then
echo "The PR body must contain exactly one of [ 'JIRA: None' or 'JIRA: <one or more links>' ]"
echo "We counted $(cat no_jira jira | wc -l)"
exit 255;
else
exit 0;
fi
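
A rough Python restatement of the rule this workflow enforces, for readers who prefer it to the grep pipeline (illustrative only; CI runs the shell version above):

import re
import sys

def pr_body_is_valid(body: str) -> bool:
    # exactly one match total: either a literal "JIRA: None"
    # or a "JIRA: https://..." link
    none_lines = re.findall(r'JIRA: None', body)
    link_lines = re.findall(r'JIRA: https://\S*\d', body)
    return len(none_lines) + len(link_lines) == 1

if __name__ == '__main__':
    if not pr_body_is_valid(sys.stdin.read()):
        print("The PR body must contain exactly one of [ 'JIRA: None' or 'JIRA: <one or more links>' ]")
        sys.exit(255)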

View File

@@ -4,6 +4,6 @@
Early versions of AWX did not support seamless upgrades between major versions and required the use of a backup and restore tool to perform upgrades.
Users who wish to upgrade modern AWX installations should follow the instructions at:
As of version 18.0, `awx-operator` is the preferred install/upgrade method. Users who wish to upgrade modern AWX installations should follow the instructions at:
https://github.com/ansible/awx/blob/devel/INSTALL.md#upgrading-from-previous-versions
https://github.com/ansible/awx-operator/blob/devel/docs/upgrade/upgrading.md

View File

@@ -1,6 +1,7 @@
-include awx/ui_next/Makefile
PYTHON := $(notdir $(shell for i in python3.9 python3; do command -v $$i; done|sed 1q))
SHELL := bash
DOCKER_COMPOSE ?= docker-compose
OFFICIAL ?= no
NODE ?= node
@@ -27,6 +28,8 @@ COLLECTION_TEMPLATE_VERSION ?= false
# NOTE: This defaults the container image version to the branch that's active
COMPOSE_TAG ?= $(GIT_BRANCH)
MAIN_NODE_TYPE ?= hybrid
# If set to true docker-compose will also start a pgbouncer instance and use it
PGBOUNCER ?= false
# If set to true docker-compose will also start a keycloak instance
KEYCLOAK ?= false
# If set to true docker-compose will also start an ldap instance
@@ -37,6 +40,8 @@ SPLUNK ?= false
PROMETHEUS ?= false
# If set to true docker-compose will also start a grafana instance
GRAFANA ?= false
# If set to true docker-compose will also start a hashicorp vault instance
VAULT ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
@@ -520,15 +525,20 @@ docker-compose-sources: .git/hooks/pre-commit
-e control_plane_node_count=$(CONTROL_PLANE_NODE_COUNT) \
-e execution_node_count=$(EXECUTION_NODE_COUNT) \
-e minikube_container_group=$(MINIKUBE_CONTAINER_GROUP) \
-e enable_pgbouncer=$(PGBOUNCER) \
-e enable_keycloak=$(KEYCLOAK) \
-e enable_ldap=$(LDAP) \
-e enable_splunk=$(SPLUNK) \
-e enable_prometheus=$(PROMETHEUS) \
-e enable_grafana=$(GRAFANA) \
-e enable_vault=$(VAULT) \
-e enable_tacacs=$(TACACS) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)
docker-compose: awx/projects docker-compose-sources
ansible-galaxy install --ignore-certs -r tools/docker-compose/ansible/requirements.yml;
ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/initialize_containers.yml \
-e enable_vault=$(VAULT);
$(DOCKER_COMPOSE) -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_OPTS) up $(COMPOSE_UP_OPTS) --remove-orphans
docker-compose-credential-plugins: awx/projects docker-compose-sources
@@ -580,7 +590,7 @@ docker-clean:
-$(foreach image_id,$(shell docker images --filter=reference='*/*/*awx_devel*' --filter=reference='*/*awx_devel*' --filter=reference='*awx_devel*' -aq),docker rmi --force $(image_id);)
docker-clean-volumes: docker-compose-clean docker-compose-container-group-clean
docker volume rm -f tools_awx_db tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker volume rm -f tools_awx_db tools_vault_1 tools_grafana_storage tools_prometheus_storage $(docker volume ls --filter name=tools_redis_socket_ -q)
docker-refresh: docker-clean docker-compose

View File

@@ -232,7 +232,8 @@ class APIView(views.APIView):
response = super(APIView, self).finalize_response(request, response, *args, **kwargs)
time_started = getattr(self, 'time_started', None)
response['X-API-Product-Version'] = get_awx_version()
if request.user.is_authenticated:
response['X-API-Product-Version'] = get_awx_version()
response['X-API-Product-Name'] = server_product_name()
response['X-API-Node'] = settings.CLUSTER_HOST_ID

View File

@@ -1629,8 +1629,8 @@ class ProjectUpdateDetailSerializer(ProjectUpdateSerializer):
fields = ('*', 'host_status_counts', 'playbook_counts')
def get_playbook_counts(self, obj):
task_count = obj.project_update_events.filter(event='playbook_on_task_start').count()
play_count = obj.project_update_events.filter(event='playbook_on_play_start').count()
task_count = obj.get_event_queryset().filter(event='playbook_on_task_start').count()
play_count = obj.get_event_queryset().filter(event='playbook_on_play_start').count()
data = {'play_count': play_count, 'task_count': task_count}

View File

@@ -12,7 +12,7 @@ receptor_work_commands:
custom_worksign_public_keyfile: receptor/work_public_key.pem
custom_tls_certfile: receptor/tls/receptor.crt
custom_tls_keyfile: receptor/tls/receptor.key
custom_ca_certfile: receptor/tls/ca/receptor-ca.crt
custom_ca_certfile: receptor/tls/ca/mesh-CA.crt
receptor_protocol: 'tcp'
receptor_listener: true
receptor_port: {{ instance.listener_port }}

View File

@@ -30,7 +30,7 @@ from awx.api.views import (
OAuth2TokenList,
ApplicationOAuth2TokenList,
OAuth2ApplicationDetail,
# HostMetricSummaryMonthlyList, # It will be enabled in future version of the AWX
HostMetricSummaryMonthlyList,
)
from awx.api.views.bulk import (
@@ -123,8 +123,7 @@ v2_urls = [
re_path(r'^constructed_inventories/', include(constructed_inventory_urls)),
re_path(r'^hosts/', include(host_urls)),
re_path(r'^host_metrics/', include(host_metric_urls)),
# It will be enabled in future version of the AWX
# re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^host_metric_summary_monthly/$', HostMetricSummaryMonthlyList.as_view(), name='host_metric_summary_monthly_list'),
re_path(r'^groups/', include(group_urls)),
re_path(r'^inventory_sources/', include(inventory_source_urls)),
re_path(r'^inventory_updates/', include(inventory_update_urls)),

View File

@@ -1564,16 +1564,15 @@ class HostMetricDetail(RetrieveDestroyAPIView):
return Response(status=status.HTTP_204_NO_CONTENT)
# It will be enabled in future version of the AWX
# class HostMetricSummaryMonthlyList(ListAPIView):
# name = _("Host Metrics Summary Monthly")
# model = models.HostMetricSummaryMonthly
# serializer_class = serializers.HostMetricSummaryMonthlySerializer
# permission_classes = (IsSystemAdminOrAuditor,)
# search_fields = ('date',)
#
# def get_queryset(self):
# return self.model.objects.all()
class HostMetricSummaryMonthlyList(ListAPIView):
name = _("Host Metrics Summary Monthly")
model = models.HostMetricSummaryMonthly
serializer_class = serializers.HostMetricSummaryMonthlySerializer
permission_classes = (IsSystemAdminOrAuditor,)
search_fields = ('date',)
def get_queryset(self):
return self.model.objects.all()
class HostList(HostRelatedSearchMixin, ListCreateAPIView):

View File

@@ -1,5 +1,7 @@
from collections import OrderedDict
from django.utils.translation import gettext_lazy as _
from rest_framework.permissions import IsAuthenticated
from rest_framework.renderers import JSONRenderer
from rest_framework.reverse import reverse
@@ -18,6 +20,9 @@ from awx.api import (
class BulkView(APIView):
name = _('Bulk')
swagger_topic = 'Bulk'
permission_classes = [IsAuthenticated]
renderer_classes = [
renderers.BrowsableAPIRenderer,

View File

@@ -107,8 +107,7 @@ class ApiVersionRootView(APIView):
data['groups'] = reverse('api:group_list', request=request)
data['hosts'] = reverse('api:host_list', request=request)
data['host_metrics'] = reverse('api:host_metric_list', request=request)
# It will be enabled in future version of the AWX
# data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['host_metric_summary_monthly'] = reverse('api:host_metric_summary_monthly_list', request=request)
data['job_templates'] = reverse('api:job_template_list', request=request)
data['jobs'] = reverse('api:job_list', request=request)
data['ad_hoc_commands'] = reverse('api:ad_hoc_command_list', request=request)

View File

@@ -14,7 +14,7 @@ class ConfConfig(AppConfig):
def ready(self):
self.module.autodiscover()
if not set(sys.argv) & {'migrate', 'check_migrations'}:
if not set(sys.argv) & {'migrate', 'check_migrations', 'showmigrations'}:
from .settings import SettingsWrapper
SettingsWrapper.initialize()

View File

@@ -366,9 +366,9 @@ class BaseAccess(object):
report_violation = lambda message: None
else:
report_violation = lambda message: logger.warning(message)
if validation_info.get('trial', False) is True or validation_info['instance_count'] == 10: # basic 10 license
if validation_info.get('trial', False) is True:
def report_violation(message):
def report_violation(message): # noqa
raise PermissionDenied(message)
if check_expiration and validation_info.get('time_remaining', None) is None:

View File

@@ -613,3 +613,20 @@ def host_metric_table(since, full_path, until, **kwargs):
since.isoformat(), until.isoformat(), since.isoformat(), until.isoformat()
)
return _copy_table(table='host_metric', query=host_metric_query, path=full_path)
@register('host_metric_summary_monthly_table', '1.0', format='csv', description=_('HostMetricSummaryMonthly export, full sync'), expensive=trivial_slicing)
def host_metric_summary_monthly_table(since, full_path, **kwargs):
query = '''
COPY (SELECT main_hostmetricsummarymonthly.id,
main_hostmetricsummarymonthly.date,
main_hostmetricsummarymonthly.license_capacity,
main_hostmetricsummarymonthly.license_consumed,
main_hostmetricsummarymonthly.hosts_added,
main_hostmetricsummarymonthly.hosts_deleted,
main_hostmetricsummarymonthly.indirectly_managed_hosts
FROM main_hostmetricsummarymonthly
ORDER BY main_hostmetricsummarymonthly.id ASC) TO STDOUT WITH CSV HEADER
'''
return _copy_table(table='host_metric_summary_monthly', query=query, path=full_path)

awx/main/cache.py (new file, 87 lines)
View File

@@ -0,0 +1,87 @@
import functools

from django.conf import settings
from django.core.cache.backends.base import DEFAULT_TIMEOUT
from django.core.cache.backends.redis import RedisCache

from redis.exceptions import ConnectionError, ResponseError, TimeoutError

import socket


# This list comes from what django-redis ignores and the behavior we are trying
# to retain while dropping the dependency on django-redis.
IGNORED_EXCEPTIONS = (TimeoutError, ResponseError, ConnectionError, socket.timeout)

CONNECTION_INTERRUPTED_SENTINEL = object()


def optionally_ignore_exceptions(func=None, return_value=None):
    if func is None:
        return functools.partial(optionally_ignore_exceptions, return_value=return_value)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except IGNORED_EXCEPTIONS as e:
            if settings.DJANGO_REDIS_IGNORE_EXCEPTIONS:
                return return_value
            raise e.__cause__ or e

    return wrapper


class AWXRedisCache(RedisCache):
    """
    We just want to wrap the upstream RedisCache class so that we can ignore
    the exceptions that it raises when the cache is unavailable.
    """

    @optionally_ignore_exceptions
    def add(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
        return super().add(key, value, timeout, version)

    @optionally_ignore_exceptions(return_value=CONNECTION_INTERRUPTED_SENTINEL)
    def _get(self, key, default=None, version=None):
        return super().get(key, default, version)

    def get(self, key, default=None, version=None):
        value = self._get(key, default, version)
        if value is CONNECTION_INTERRUPTED_SENTINEL:
            return default
        return value

    @optionally_ignore_exceptions
    def set(self, key, value, timeout=DEFAULT_TIMEOUT, version=None):
        return super().set(key, value, timeout, version)

    @optionally_ignore_exceptions
    def touch(self, key, timeout=DEFAULT_TIMEOUT, version=None):
        return super().touch(key, timeout, version)

    @optionally_ignore_exceptions
    def delete(self, key, version=None):
        return super().delete(key, version)

    @optionally_ignore_exceptions
    def get_many(self, keys, version=None):
        return super().get_many(keys, version)

    @optionally_ignore_exceptions
    def has_key(self, key, version=None):
        return super().has_key(key, version)

    @optionally_ignore_exceptions
    def incr(self, key, delta=1, version=None):
        return super().incr(key, delta, version)

    @optionally_ignore_exceptions
    def set_many(self, data, timeout=DEFAULT_TIMEOUT, version=None):
        return super().set_many(data, timeout, version)

    @optionally_ignore_exceptions
    def delete_many(self, keys, version=None):
        return super().delete_many(keys, version)

    @optionally_ignore_exceptions
    def clear(self):
        return super().clear()

View File

@@ -94,6 +94,20 @@ register(
category_slug='system',
)
register(
'CSRF_TRUSTED_ORIGINS',
default=[],
field_class=fields.StringListField,
label=_('CSRF Trusted Origins List'),
help_text=_(
"If the service is behind a reverse proxy/load balancer, use this setting "
"to configure the schema://addresses from which the service should trust "
"Origin header values. "
),
category=_('System'),
category_slug='system',
)
register(
'LICENSE',
field_class=fields.DictField,
@@ -848,6 +862,15 @@ register(
category_slug='system',
)
register(
'HOST_METRIC_SUMMARY_TASK_LAST_TS',
field_class=fields.DateTimeField,
label=_('Last computing date of HostMetricSummaryMonthly'),
allow_null=True,
category=_('System'),
category_slug='system',
)
register(
'AWX_CLEANUP_PATHS',
field_class=fields.BooleanField,

View File

@@ -265,6 +265,8 @@ def kv_backend(**kwargs):
if secret_key:
try:
if (secret_key != 'data') and (secret_key not in json['data']) and ('data' in json['data']):
return json['data']['data'][secret_key]
return json['data'][secret_key]
except KeyError:
raise RuntimeError('{} is not present at {}'.format(secret_key, secret_path))

View File

@@ -1,7 +1,10 @@
from .plugin import CredentialPlugin
from django.utils.translation import gettext_lazy as _
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
try:
from delinea.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
except ImportError:
from thycotic.secrets.server import DomainPasswordGrantAuthorizer, PasswordGrantAuthorizer, SecretServer, ServerSecret
tss_inputs = {
'fields': [

View File

@@ -40,8 +40,12 @@ def get_task_queuename():
class PubSub(object):
def __init__(self, conn):
def __init__(self, conn, select_timeout=None):
self.conn = conn
if select_timeout is None:
self.select_timeout = 5
else:
self.select_timeout = select_timeout
def listen(self, channel):
with self.conn.cursor() as cur:
@@ -55,16 +59,33 @@ class PubSub(object):
with self.conn.cursor() as cur:
cur.execute('SELECT pg_notify(%s, %s);', (channel, payload))
def events(self, select_timeout=5, yield_timeouts=False):
@staticmethod
def current_notifies(conn):
"""
Altered version of .notifies method from psycopg library
This removes the outer while True loop so that we only process
queued notifications
"""
with conn.lock:
try:
ns = conn.wait(psycopg.generators.notifies(conn.pgconn))
except psycopg.errors._NO_TRACEBACK as ex:
raise ex.with_traceback(None)
enc = psycopg._encodings.pgconn_encoding(conn.pgconn)
for pgn in ns:
n = psycopg.connection.Notify(pgn.relname.decode(enc), pgn.extra.decode(enc), pgn.be_pid)
yield n
def events(self, yield_timeouts=False):
if not self.conn.autocommit:
raise RuntimeError('Listening for events can only be done in autocommit mode')
while True:
if select.select([self.conn], [], [], select_timeout) == NOT_READY:
if select.select([self.conn], [], [], self.select_timeout) == NOT_READY:
if yield_timeouts:
yield None
else:
notification_generator = self.conn.notifies()
notification_generator = self.current_notifies(self.conn)
for notification in notification_generator:
yield notification
@@ -73,7 +94,7 @@ class PubSub(object):
@contextmanager
def pg_bus_conn(new_connection=False):
def pg_bus_conn(new_connection=False, select_timeout=None):
'''
Any listeners probably want to establish a new database connection,
separate from the Django connection used for queries, because that will prevent
@@ -98,7 +119,7 @@ def pg_bus_conn(new_connection=False):
raise RuntimeError('Unexpectedly could not connect to postgres for pg_notify actions')
conn = pg_connection.connection
pubsub = PubSub(conn)
pubsub = PubSub(conn, select_timeout=select_timeout)
yield pubsub
if new_connection:
conn.close()

View File

@@ -40,6 +40,9 @@ class Control(object):
def cancel(self, task_ids, *args, **kwargs):
return self.control_with_reply('cancel', *args, extra_data={'task_ids': task_ids}, **kwargs)
def schedule(self, *args, **kwargs):
return self.control_with_reply('schedule', *args, **kwargs)
@classmethod
def generate_reply_queue_name(cls):
return f"reply_to_{str(uuid.uuid4()).replace('-','_')}"
@@ -52,14 +55,14 @@ class Control(object):
if not connection.get_autocommit():
raise RuntimeError('Control-with-reply messages can only be done in autocommit mode')
with pg_bus_conn() as conn:
with pg_bus_conn(select_timeout=timeout) as conn:
conn.listen(reply_queue)
send_data = {'control': command, 'reply_to': reply_queue}
if extra_data:
send_data.update(extra_data)
conn.notify(self.queuename, json.dumps(send_data))
for reply in conn.events(select_timeout=timeout, yield_timeouts=True):
for reply in conn.events(yield_timeouts=True):
if reply is None:
logger.error(f'{self.service} did not reply within {timeout}s')
raise RuntimeError(f"{self.service} did not reply within {timeout}s")

View File

@@ -1,57 +1,142 @@
import logging
import os
import time
from multiprocessing import Process
import yaml
from datetime import datetime
from django.conf import settings
from django.db import connections
from schedule import Scheduler
from django_guid import set_guid
from django_guid.utils import generate_guid
from awx.main.dispatch.worker import TaskWorker
from awx.main.utils.db import set_connection_name
logger = logging.getLogger('awx.main.dispatch.periodic')
class Scheduler(Scheduler):
def run_continuously(self):
idle_seconds = max(1, min(self.jobs).period.total_seconds() / 2)
class ScheduledTask:
"""
Class representing schedules, very loosely modeled after python schedule library Job
the idea of this class is to:
- only deal in relative times (time since the scheduler global start)
- only deal in integer math for target runtimes, but float for current relative time
def run():
ppid = os.getppid()
logger.warning('periodic beat started')
Missed schedule policy:
Invariant target times are maintained, meaning that if interval=10s offset=0
and it runs at t=7s, then it calls for next run in 3s.
However, if a complete interval has passed, that is counted as a missed run,
and missed runs are abandoned (no catch-up runs).
"""
set_connection_name('periodic') # set application_name to distinguish from other dispatcher processes
def __init__(self, name: str, data: dict):
# parameters need for schedule computation
self.interval = int(data['schedule'].total_seconds())
self.offset = 0 # offset relative to start time this schedule begins
self.index = 0 # number of periods of the schedule that has passed
while True:
if os.getppid() != ppid:
# if the parent PID changes, this process has been orphaned
# via e.g., segfault or sigkill, we should exit too
pid = os.getpid()
logger.warning(f'periodic beat exiting gracefully pid:{pid}')
raise SystemExit()
try:
for conn in connections.all():
# If the database connection has a hiccup, re-establish a new
# connection
conn.close_if_unusable_or_obsolete()
set_guid(generate_guid())
self.run_pending()
except Exception:
logger.exception('encountered an error while scheduling periodic tasks')
time.sleep(idle_seconds)
# parameters that do not affect scheduling logic
self.last_run = None # time of last run, only used for debug
self.completed_runs = 0 # number of times schedule is known to run
self.name = name
self.data = data # used by caller to know what to run
process = Process(target=run)
process.daemon = True
process.start()
@property
def next_run(self):
"Time until the next run with t=0 being the global_start of the scheduler class"
return (self.index + 1) * self.interval + self.offset
def due_to_run(self, relative_time):
return bool(self.next_run <= relative_time)
def expected_runs(self, relative_time):
return int((relative_time - self.offset) / self.interval)
def mark_run(self, relative_time):
self.last_run = relative_time
self.completed_runs += 1
new_index = self.expected_runs(relative_time)
if new_index > self.index + 1:
logger.warning(f'Missed {new_index - self.index - 1} schedules of {self.name}')
self.index = new_index
def missed_runs(self, relative_time):
"Number of times job was supposed to ran but failed to, only used for debug"
missed_ct = self.expected_runs(relative_time) - self.completed_runs
# if this is currently due to run do not count that as a missed run
if missed_ct and self.due_to_run(relative_time):
missed_ct -= 1
return missed_ct
def run_continuously():
scheduler = Scheduler()
for task in settings.CELERYBEAT_SCHEDULE.values():
apply_async = TaskWorker.resolve_callable(task['task']).apply_async
total_seconds = task['schedule'].total_seconds()
scheduler.every(total_seconds).seconds.do(apply_async)
scheduler.run_continuously()
class Scheduler:
def __init__(self, schedule):
"""
Expects schedule in the form of a dictionary like
{
'job1': {'schedule': timedelta(seconds=50), 'other': 'stuff'}
}
Only the schedule nearest-second value is used for scheduling,
the rest of the data is for use by the caller to know what to run.
"""
self.jobs = [ScheduledTask(name, data) for name, data in schedule.items()]
min_interval = min(job.interval for job in self.jobs)
num_jobs = len(self.jobs)
# this is intentionally oppioniated against spammy schedules
# a core goal is to spread out the scheduled tasks (for worker management)
# and high-frequency schedules just do not work with that
if num_jobs > min_interval:
raise RuntimeError(f'Number of schedules ({num_jobs}) is more than the shortest schedule interval ({min_interval} seconds).')
# even space out jobs over the base interval
for i, job in enumerate(self.jobs):
job.offset = (i * min_interval) // num_jobs
# internally times are all referenced relative to startup time, add grace period
self.global_start = time.time() + 2.0
def get_and_mark_pending(self):
relative_time = time.time() - self.global_start
to_run = []
for job in self.jobs:
if job.due_to_run(relative_time):
to_run.append(job)
logger.debug(f'scheduler found {job.name} to run, {relative_time - job.next_run} seconds after target')
job.mark_run(relative_time)
return to_run
def time_until_next_run(self):
relative_time = time.time() - self.global_start
next_job = min(self.jobs, key=lambda j: j.next_run)
delta = next_job.next_run - relative_time
if delta <= 0.1:
# careful not to give 0 or negative values to the select timeout, which has unclear interpretation
logger.warning(f'Scheduler next run of {next_job.name} is {-delta} seconds in the past')
return 0.1
elif delta > 20.0:
logger.warning(f'Scheduler next run unexpectedly over 20 seconds in future: {delta}')
return 20.0
logger.debug(f'Scheduler next run is {next_job.name} in {delta} seconds')
return delta
def debug(self, *args, **kwargs):
data = dict()
data['title'] = 'Scheduler status'
now = datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S UTC')
start_time = datetime.fromtimestamp(self.global_start).strftime('%Y-%m-%d %H:%M:%S UTC')
relative_time = time.time() - self.global_start
data['started_time'] = start_time
data['current_time'] = now
data['current_time_relative'] = round(relative_time, 3)
data['total_schedules'] = len(self.jobs)
data['schedule_list'] = dict(
[
(
job.name,
dict(
last_run_seconds_ago=round(relative_time - job.last_run, 3) if job.last_run else None,
next_run_in_seconds=round(job.next_run - relative_time, 3),
offset_in_seconds=job.offset,
completed_runs=job.completed_runs,
missed_runs=job.missed_runs(relative_time),
),
)
for job in sorted(self.jobs, key=lambda job: job.interval)
]
)
return yaml.safe_dump(data, default_flow_style=False, sort_keys=False)

View File

@@ -417,16 +417,16 @@ class AutoscalePool(WorkerPool):
# the task manager to never do more work
current_task = w.current_task
if current_task and isinstance(current_task, dict):
endings = ['tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager']
endings = ('tasks.task_manager', 'tasks.dependency_manager', 'tasks.workflow_manager')
current_task_name = current_task.get('task', '')
if any(current_task_name.endswith(e) for e in endings):
if current_task_name.endswith(endings):
if 'started' not in current_task:
w.managed_tasks[current_task['uuid']]['started'] = time.time()
age = time.time() - current_task['started']
w.managed_tasks[current_task['uuid']]['age'] = age
if age > self.task_manager_timeout:
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGTERM to {w.pid}')
os.kill(w.pid, signal.SIGTERM)
logger.error(f'{current_task_name} has held the advisory lock for {age}, sending SIGUSR1 to {w.pid}')
os.kill(w.pid, signal.SIGUSR1)
for m in orphaned:
# if all the workers are dead, spawn at least one

View File

@@ -73,15 +73,15 @@ class task:
return cls.apply_async(args, kwargs)
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
def get_async_body(cls, args=None, kwargs=None, uuid=None, **kw):
"""
Get the python dict to become JSON data in the pg_notify message
This same message gets passed over the dispatcher IPC queue to workers
If a task is submitted to a multiprocessing pool, skipping pg_notify, this might be used directly
"""
task_id = uuid or str(uuid4())
args = args or []
kwargs = kwargs or {}
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = {'uuid': task_id, 'args': args, 'kwargs': kwargs, 'task': cls.name, 'time_pub': time.time()}
guid = get_guid()
if guid:
@@ -89,6 +89,16 @@ class task:
if bind_kwargs:
obj['bind_kwargs'] = bind_kwargs
obj.update(**kw)
return obj
@classmethod
def apply_async(cls, args=None, kwargs=None, queue=None, uuid=None, **kw):
queue = queue or getattr(cls.queue, 'im_func', cls.queue)
if not queue:
msg = f'{cls.name}: Queue value required and may not be None'
logger.error(msg)
raise ValueError(msg)
obj = cls.get_async_body(args=args, kwargs=kwargs, uuid=uuid, **kw)
if callable(queue):
queue = queue()
if not is_testing():
@@ -116,4 +126,5 @@ class task:
setattr(fn, 'name', cls.name)
setattr(fn, 'apply_async', cls.apply_async)
setattr(fn, 'delay', cls.delay)
setattr(fn, 'get_async_body', cls.get_async_body)
return fn

View File

@@ -11,11 +11,13 @@ import psycopg
import time
from uuid import UUID
from queue import Empty as QueueEmpty
from datetime import timedelta
from django import db
from django.conf import settings
from awx.main.dispatch.pool import WorkerPool
from awx.main.dispatch.periodic import Scheduler
from awx.main.dispatch import pg_bus_conn
from awx.main.utils.common import log_excess_runtime
from awx.main.utils.db import set_connection_name
@@ -64,10 +66,12 @@ class AWXConsumerBase(object):
def control(self, body):
logger.warning(f'Received control signal:\n{body}')
control = body.get('control')
if control in ('status', 'running', 'cancel'):
if control in ('status', 'schedule', 'running', 'cancel'):
reply_queue = body['reply_to']
if control == 'status':
msg = '\n'.join([self.listening_on, self.pool.debug()])
if control == 'schedule':
msg = self.scheduler.debug()
elif control == 'running':
msg = []
for worker in self.pool.workers:
@@ -93,16 +97,11 @@ class AWXConsumerBase(object):
else:
logger.error('unrecognized control message: {}'.format(control))
def process_task(self, body):
def dispatch_task(self, body):
"""This will place the given body into a worker queue to run method decorated as a task"""
if isinstance(body, dict):
body['time_ack'] = time.time()
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
if len(self.pool):
if "uuid" in body and body['uuid']:
try:
@@ -116,15 +115,24 @@ class AWXConsumerBase(object):
self.pool.write(queue, body)
self.total_messages += 1
def process_task(self, body):
"""Routes the task details in body as either a control task or a task-task"""
if 'control' in body:
try:
return self.control(body)
except Exception:
logger.exception(f"Exception handling control message: {body}")
return
self.dispatch_task(body)
@log_excess_runtime(logger)
def record_statistics(self):
if time.time() - self.last_stats > 1: # buffer stat recording to once per second
try:
self.redis.set(f'awx_{self.name}_statistics', self.pool.debug())
self.last_stats = time.time()
except Exception:
logger.exception(f"encountered an error communicating with redis to store {self.name} statistics")
self.last_stats = time.time()
self.last_stats = time.time()
def run(self, *args, **kwargs):
signal.signal(signal.SIGINT, self.stop)
@@ -151,9 +159,9 @@ class AWXConsumerRedis(AWXConsumerBase):
class AWXConsumerPG(AWXConsumerBase):
def __init__(self, *args, **kwargs):
def __init__(self, *args, schedule=None, **kwargs):
super().__init__(*args, **kwargs)
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE
self.pg_max_wait = settings.DISPATCHER_DB_DOWNTIME_TOLERANCE
# if no successful loops have ran since startup, then we should fail right away
self.pg_is_down = True # set so that we fail if we get database errors on startup
init_time = time.time()
@@ -162,24 +170,53 @@ class AWXConsumerPG(AWXConsumerBase):
self.subsystem_metrics = s_metrics.Metrics(auto_pipe_execute=False)
self.last_metrics_gather = init_time
self.listen_cumulative_time = 0.0
if schedule:
schedule = schedule.copy()
else:
schedule = {}
# add control tasks to be ran at regular schedules
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
schedule['pool_cleanup'] = {'control': self.pool.cleanup, 'schedule': timedelta(seconds=60)}
# record subsystem metrics for the dispatcher
schedule['metrics_gather'] = {'control': self.record_metrics, 'schedule': timedelta(seconds=20)}
self.scheduler = Scheduler(schedule)
def record_metrics(self):
current_time = time.time()
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
def run_periodic_tasks(self):
self.record_statistics() # maintains time buffer in method
"""
Run general periodic logic, and return maximum time in seconds before
the next requested run
This may be called more often than that when events are consumed
so this should be very efficient in that
"""
try:
self.record_statistics() # maintains time buffer in method
except Exception as exc:
logger.warning(f'Failed to save dispatcher statistics {exc}')
current_time = time.time()
if current_time - self.last_cleanup > 60: # same as cluster_node_heartbeat
# NOTE: if we run out of database connections, it is important to still run cleanup
# so that we scale down workers and free up connections
self.pool.cleanup()
self.last_cleanup = current_time
for job in self.scheduler.get_and_mark_pending():
if 'control' in job.data:
try:
job.data['control']()
except Exception:
logger.exception(f'Error running control task {job.data}')
elif 'task' in job.data:
body = self.worker.resolve_callable(job.data['task']).get_async_body()
# bypasses pg_notify for scheduled tasks
self.dispatch_task(body)
# record subsystem metrics for the dispatcher
if current_time - self.last_metrics_gather > 20:
self.pool.produce_subsystem_metrics(self.subsystem_metrics)
self.subsystem_metrics.set('dispatcher_availability', self.listen_cumulative_time / (current_time - self.last_metrics_gather))
self.subsystem_metrics.pipe_execute()
self.listen_cumulative_time = 0.0
self.last_metrics_gather = current_time
self.pg_is_down = False
self.listen_start = time.time()
return self.scheduler.time_until_next_run()
def run(self, *args, **kwargs):
super(AWXConsumerPG, self).run(*args, **kwargs)
@@ -195,14 +232,15 @@ class AWXConsumerPG(AWXConsumerBase):
if init is False:
self.worker.on_start()
init = True
self.listen_start = time.time()
# run_periodic_tasks run scheduled actions and gives time until next scheduled action
# this is saved to the conn (PubSub) object in order to modify read timeout in-loop
conn.select_timeout = self.run_periodic_tasks()
# this is the main operational loop for awx-manage run_dispatcher
for e in conn.events(yield_timeouts=True):
self.listen_cumulative_time += time.time() - self.listen_start
self.listen_cumulative_time += time.time() - self.listen_start # for metrics
if e is not None:
self.process_task(json.loads(e.payload))
self.run_periodic_tasks()
self.pg_is_down = False
self.listen_start = time.time()
conn.select_timeout = self.run_periodic_tasks()
if self.should_stop:
return
except psycopg.InterfaceError:
@@ -250,8 +288,8 @@ class BaseWorker(object):
break
except QueueEmpty:
continue
except Exception as e:
logger.error("Exception on worker {}, restarting: ".format(idx) + str(e))
except Exception:
logger.exception("Exception on worker {}, reconnecting: ".format(idx))
continue
try:
for conn in db.connections.all():

View File

@@ -17,6 +17,6 @@ class Command(BaseCommand):
months_ago = options.get('months-ago') or None
if not months_ago:
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_THRESHOLD', 12)
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
HostMetric.cleanup_task(months_ago)

View File

@@ -0,0 +1,9 @@
from django.core.management.base import BaseCommand
from awx.main.tasks.host_metrics import HostMetricSummaryMonthlyTask
class Command(BaseCommand):
help = 'Computing of HostMetricSummaryMonthly'
def handle(self, *args, **options):
HostMetricSummaryMonthlyTask().execute()

View File

@@ -3,15 +3,13 @@
import logging
import yaml
from django.core.cache import cache as django_cache
from django.conf import settings
from django.core.management.base import BaseCommand
from django.db import connection as django_connection
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.control import Control
from awx.main.dispatch.pool import AutoscalePool
from awx.main.dispatch.worker import AWXConsumerPG, TaskWorker
from awx.main.dispatch import periodic
logger = logging.getLogger('awx.main.dispatch')
@@ -21,6 +19,7 @@ class Command(BaseCommand):
def add_arguments(self, parser):
parser.add_argument('--status', dest='status', action='store_true', help='print the internal state of any running dispatchers')
parser.add_argument('--schedule', dest='schedule', action='store_true', help='print the current status of schedules being ran by dispatcher')
parser.add_argument('--running', dest='running', action='store_true', help='print the UUIDs of any tasked managed by this dispatcher')
parser.add_argument(
'--reload',
@@ -42,6 +41,9 @@ class Command(BaseCommand):
if options.get('status'):
print(Control('dispatcher').status())
return
if options.get('schedule'):
print(Control('dispatcher').schedule())
return
if options.get('running'):
print(Control('dispatcher').running())
return
@@ -58,21 +60,11 @@ class Command(BaseCommand):
print(Control('dispatcher').cancel(cancel_data))
return
# It's important to close these because we're _about_ to fork, and we
# don't want the forked processes to inherit the open sockets
# for the DB and cache connections (that way lies race conditions)
django_connection.close()
django_cache.close()
# spawn a daemon thread to periodically enqueues scheduled tasks
# (like the node heartbeat)
periodic.run_continuously()
consumer = None
try:
queues = ['tower_broadcast_all', 'tower_settings_change', get_task_queuename()]
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4))
consumer = AWXConsumerPG('dispatcher', TaskWorker(), queues, AutoscalePool(min_workers=4), schedule=settings.CELERYBEAT_SCHEDULE)
consumer.run()
except KeyboardInterrupt:
logger.debug('Terminating Task Dispatcher')

View File

@@ -9,13 +9,11 @@ from django.db import migrations, models
import django.utils.timezone
import django.db.models.deletion
from django.conf import settings
import taggit.managers
import awx.main.fields
class Migration(migrations.Migration):
dependencies = [
('taggit', '0002_auto_20150616_2121'),
('contenttypes', '0002_remove_content_type_name'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
]
@@ -184,12 +182,6 @@ class Migration(migrations.Migration):
null=True,
),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
],
options={
'ordering': ('kind', 'name'),
@@ -529,12 +521,6 @@ class Migration(migrations.Migration):
null=True,
),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
('users', models.ManyToManyField(related_name='organizations', to=settings.AUTH_USER_MODEL, blank=True)),
],
options={
@@ -589,12 +575,6 @@ class Migration(migrations.Migration):
null=True,
),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
],
),
migrations.CreateModel(
@@ -644,12 +624,6 @@ class Migration(migrations.Migration):
null=True,
),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
],
options={
'ordering': ['-next_run'],
@@ -687,12 +661,6 @@ class Migration(migrations.Migration):
),
),
('organization', models.ForeignKey(related_name='teams', on_delete=django.db.models.deletion.SET_NULL, to='main.Organization', null=True)),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
('users', models.ManyToManyField(related_name='teams', to=settings.AUTH_USER_MODEL, blank=True)),
],
options={
@@ -1267,13 +1235,6 @@ class Migration(migrations.Migration):
null=True,
),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='tags',
field=taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
migrations.AddField(
model_name='unifiedjob',
name='created_by',
@@ -1319,13 +1280,6 @@ class Migration(migrations.Migration):
name='schedule',
field=models.ForeignKey(on_delete=django.db.models.deletion.SET_NULL, default=None, editable=False, to='main.Schedule', null=True),
),
migrations.AddField(
model_name='unifiedjob',
name='tags',
field=taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
migrations.AddField(
model_name='unifiedjob',
name='unified_job_template',
@@ -1370,13 +1324,6 @@ class Migration(migrations.Migration):
help_text='Organization containing this inventory.',
),
),
migrations.AddField(
model_name='inventory',
name='tags',
field=taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
migrations.AddField(
model_name='host',
name='inventory',
@@ -1407,13 +1354,6 @@ class Migration(migrations.Migration):
null=True,
),
),
migrations.AddField(
model_name='host',
name='tags',
field=taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
migrations.AddField(
model_name='group',
name='hosts',
@@ -1441,13 +1381,6 @@ class Migration(migrations.Migration):
name='parents',
field=models.ManyToManyField(related_name='children', to='main.Group', blank=True),
),
migrations.AddField(
model_name='group',
name='tags',
field=taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
migrations.AddField(
model_name='custominventoryscript',
name='organization',
@@ -1459,13 +1392,6 @@ class Migration(migrations.Migration):
null=True,
),
),
migrations.AddField(
model_name='custominventoryscript',
name='tags',
field=taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
migrations.AddField(
model_name='credential',
name='team',

View File

@@ -12,8 +12,6 @@ import django.db.models.deletion
from django.conf import settings
from django.utils.timezone import now
import taggit.managers
def create_system_job_templates(apps, schema_editor):
"""
@@ -125,7 +123,6 @@ class Migration(migrations.Migration):
]
dependencies = [
('taggit', '0002_auto_20150616_2121'),
('contenttypes', '0002_remove_content_type_name'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('main', '0001_initial'),
@@ -256,12 +253,6 @@ class Migration(migrations.Migration):
'organization',
models.ForeignKey(related_name='notification_templates', on_delete=django.db.models.deletion.SET_NULL, to='main.Organization', null=True),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
],
),
migrations.AddField(
@@ -721,12 +712,6 @@ class Migration(migrations.Migration):
help_text='Organization this label belongs to.',
),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
],
options={
'ordering': ('organization', 'name'),

View File

@@ -5,7 +5,6 @@ from __future__ import unicode_literals
# Django
from django.db import connection, migrations, models, OperationalError, ProgrammingError
from django.conf import settings
import taggit.managers
# AWX
import awx.main.fields
@@ -317,10 +316,6 @@ class Migration(migrations.Migration):
model_name='permission',
name='project',
),
migrations.RemoveField(
model_name='permission',
name='tags',
),
migrations.RemoveField(
model_name='permission',
name='team',
@@ -510,12 +505,6 @@ class Migration(migrations.Migration):
null=True,
),
),
(
'tags',
taggit.managers.TaggableManager(
to='taggit.Tag', through='taggit.TaggedItem', blank=True, help_text='A comma-separated list of tags.', verbose_name='Tags'
),
),
],
options={
'ordering': ('kind', 'name'),

View File

@@ -4,7 +4,6 @@ from __future__ import unicode_literals
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import taggit.managers
# AWX
import awx.main.fields
@@ -20,7 +19,6 @@ def setup_tower_managed_defaults(apps, schema_editor):
class Migration(migrations.Migration):
dependencies = [
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('taggit', '0002_auto_20150616_2121'),
('main', '0066_v350_inventorysource_custom_virtualenv'),
]
@@ -60,12 +58,6 @@ class Migration(migrations.Migration):
'source_credential',
models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='target_input_sources', to='main.Credential'),
),
(
'tags',
taggit.managers.TaggableManager(
blank=True, help_text='A comma-separated list of tags.', through='taggit.TaggedItem', to='taggit.Tag', verbose_name='Tags'
),
),
(
'target_credential',
models.ForeignKey(null=True, on_delete=django.db.models.deletion.CASCADE, related_name='input_sources', to='main.Credential'),

View File

@@ -4,12 +4,10 @@ from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.expressions
import taggit.managers
class Migration(migrations.Migration):
dependencies = [
('taggit', '0003_taggeditem_add_unique_index'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('main', '0123_drop_hg_support'),
]
@@ -69,12 +67,6 @@ class Migration(migrations.Migration):
to='main.Organization',
),
),
(
'tags',
taggit.managers.TaggableManager(
blank=True, help_text='A comma-separated list of tags.', through='taggit.TaggedItem', to='taggit.Tag', verbose_name='Tags'
),
),
],
options={
'ordering': (django.db.models.expressions.OrderBy(django.db.models.expressions.F('organization_id'), nulls_first=True), 'image'),

View File

@@ -1,4 +1,4 @@
# Generated by Django 4.2 on 2023-06-09 19:51
# Generated by Django 4.2.3 on 2023-08-02 13:18
import awx.main.models.notifications
from django.db import migrations, models
@@ -11,16 +11,6 @@ class Migration(migrations.Migration):
]
operations = [
migrations.AlterField(
model_name='activitystream',
name='deleted_actor',
field=models.JSONField(null=True),
),
migrations.AlterField(
model_name='activitystream',
name='setting',
field=models.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='instancegroup',
name='policy_instance_list',
@@ -28,31 +18,11 @@ class Migration(migrations.Migration):
blank=True, default=list, help_text='List of exact-match Instances that will always be automatically assigned to this group'
),
),
migrations.AlterField(
model_name='job',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.AlterField(
model_name='joblaunchconfig',
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='joblaunchconfig',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.AlterField(
model_name='jobtemplate',
name='survey_spec',
field=models.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='notification',
name='body',
field=models.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='notificationtemplate',
name='messages',
@@ -94,31 +64,6 @@ class Migration(migrations.Migration):
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.AlterField(
model_name='unifiedjob',
name='job_env',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.AlterField(
model_name='workflowjob',
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='workflowjob',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.AlterField(
model_name='workflowjobnode',
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.AlterField(
model_name='workflowjobnode',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.AlterField(
model_name='workflowjobtemplate',
name='char_prompts',
@@ -139,4 +84,194 @@ class Migration(migrations.Migration):
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
# These are potentially a problem. Move the existing fields
# aside while pretending like they've been deleted, then add
# in fresh empty fields. Make the old fields nullable where
# needed while we are at it, so that new rows don't hit
# IntegrityError. We'll do the data migration out-of-band
# using a task.
migrations.RunSQL( # Already nullable
"ALTER TABLE main_activitystream RENAME deleted_actor TO deleted_actor_old;",
state_operations=[
migrations.RemoveField(
model_name='activitystream',
name='deleted_actor',
),
],
),
migrations.AddField(
model_name='activitystream',
name='deleted_actor',
field=models.JSONField(null=True),
),
migrations.RunSQL(
"""
ALTER TABLE main_activitystream RENAME setting TO setting_old;
ALTER TABLE main_activitystream ALTER COLUMN setting_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='activitystream',
name='setting',
),
],
),
migrations.AddField(
model_name='activitystream',
name='setting',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
"""
ALTER TABLE main_job RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_job ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='job',
name='survey_passwords',
),
],
),
migrations.AddField(
model_name='job',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
"""
ALTER TABLE main_joblaunchconfig RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_joblaunchconfig ALTER COLUMN char_prompts_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='joblaunchconfig',
name='char_prompts',
),
],
),
migrations.AddField(
model_name='joblaunchconfig',
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
"""
ALTER TABLE main_joblaunchconfig RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_joblaunchconfig ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='joblaunchconfig',
name='survey_passwords',
),
],
),
migrations.AddField(
model_name='joblaunchconfig',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
"""
ALTER TABLE main_notification RENAME body TO body_old;
ALTER TABLE main_notification ALTER COLUMN body_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='notification',
name='body',
),
],
),
migrations.AddField(
model_name='notification',
name='body',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
"""
ALTER TABLE main_unifiedjob RENAME job_env TO job_env_old;
ALTER TABLE main_unifiedjob ALTER COLUMN job_env_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='unifiedjob',
name='job_env',
),
],
),
migrations.AddField(
model_name='unifiedjob',
name='job_env',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
"""
ALTER TABLE main_workflowjob RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_workflowjob ALTER COLUMN char_prompts_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='workflowjob',
name='char_prompts',
),
],
),
migrations.AddField(
model_name='workflowjob',
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
"""
ALTER TABLE main_workflowjob RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_workflowjob ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='workflowjob',
name='survey_passwords',
),
],
),
migrations.AddField(
model_name='workflowjob',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
migrations.RunSQL(
"""
ALTER TABLE main_workflowjobnode RENAME char_prompts TO char_prompts_old;
ALTER TABLE main_workflowjobnode ALTER COLUMN char_prompts_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='workflowjobnode',
name='char_prompts',
),
],
),
migrations.AddField(
model_name='workflowjobnode',
name='char_prompts',
field=models.JSONField(blank=True, default=dict),
),
migrations.RunSQL(
"""
ALTER TABLE main_workflowjobnode RENAME survey_passwords TO survey_passwords_old;
ALTER TABLE main_workflowjobnode ALTER COLUMN survey_passwords_old DROP NOT NULL;
""",
state_operations=[
migrations.RemoveField(
model_name='workflowjobnode',
name='survey_passwords',
),
],
),
migrations.AddField(
model_name='workflowjobnode',
name='survey_passwords',
field=models.JSONField(blank=True, default=dict, editable=False),
),
]
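
The operations list above repeats the same move-aside pattern once per column. A minimal sketch of that pattern is shown below; the table main_example and the field payload are placeholders for illustration only, not names from the actual migration.

from django.db import migrations, models

# One move-aside step, condensed: rename the legacy column at the SQL level,
# tell Django's migration state the field is gone, then add a fresh jsonb-backed
# field under the original name. The data copy happens later in an async task.
move_payload_aside = [
    migrations.RunSQL(
        """
        ALTER TABLE main_example RENAME payload TO payload_old;
        ALTER TABLE main_example ALTER COLUMN payload_old DROP NOT NULL;
        """,
        state_operations=[
            migrations.RemoveField(model_name='example', name='payload'),
        ],
    ),
    migrations.AddField(
        model_name='example',
        name='payload',
        field=models.JSONField(blank=True, default=dict),
    ),
]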

View File

@@ -0,0 +1,27 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations
def delete_taggit_contenttypes(apps, schema_editor):
ContentType = apps.get_model('contenttypes', 'ContentType')
ContentType.objects.filter(app_label='taggit').delete()
def delete_taggit_migration_records(apps, schema_editor):
recorder = migrations.recorder.MigrationRecorder(connection=schema_editor.connection)
recorder.migration_qs.filter(app='taggit').delete()
class Migration(migrations.Migration):
dependencies = [
('main', '0185_move_JSONBlob_to_JSONField'),
]
operations = [
migrations.RunSQL("DROP TABLE IF EXISTS taggit_tag CASCADE;"),
migrations.RunSQL("DROP TABLE IF EXISTS taggit_taggeditem CASCADE;"),
migrations.RunPython(delete_taggit_contenttypes),
migrations.RunPython(delete_taggit_migration_records),
]
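
After this migration has run, the cleanup can be double-checked from a Django shell. The snippet below is illustrative only and is not part of the migration; it assumes a PostgreSQL backend, matching the DROP TABLE statements above.

from django.contrib.contenttypes.models import ContentType
from django.db import connection

# No content types should remain for the removed taggit app...
assert not ContentType.objects.filter(app_label='taggit').exists()

# ...and the taggit tables should be gone from the database.
with connection.cursor() as cursor:
    cursor.execute(
        "SELECT count(*) FROM information_schema.tables"
        " WHERE table_name IN ('taggit_tag', 'taggit_taggeditem');"
    )
    assert cursor.fetchone()[0] == 0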

View File

@@ -3,6 +3,7 @@
# Django
from django.conf import settings # noqa
from django.db import connection
from django.db.models.signals import pre_delete # noqa
# AWX
@@ -99,6 +100,58 @@ User.add_to_class('can_access_with_errors', check_user_access_with_errors)
User.add_to_class('accessible_objects', user_accessible_objects)
def convert_jsonfields():
if connection.vendor != 'postgresql':
return
# fmt: off
fields = [
('main_activitystream', 'id', (
'deleted_actor',
'setting',
)),
('main_job', 'unifiedjob_ptr_id', (
'survey_passwords',
)),
('main_joblaunchconfig', 'id', (
'char_prompts',
'survey_passwords',
)),
('main_notification', 'id', (
'body',
)),
('main_unifiedjob', 'id', (
'job_env',
)),
('main_workflowjob', 'unifiedjob_ptr_id', (
'char_prompts',
'survey_passwords',
)),
('main_workflowjobnode', 'id', (
'char_prompts',
'survey_passwords',
)),
]
# fmt: on
with connection.cursor() as cursor:
for table, pkfield, columns in fields:
# Do the renamed old columns still exist? If so, run the task.
old_columns = ','.join(f"'{column}_old'" for column in columns)
cursor.execute(
f"""
select count(1) from information_schema.columns
where
table_name = %s and column_name in ({old_columns});
""",
(table,),
)
if cursor.fetchone()[0]:
from awx.main.tasks.system import migrate_jsonfield
migrate_jsonfield.apply_async([table, pkfield, columns])
def cleanup_created_modified_by(sender, **kwargs):
# work around a bug in django-polymorphic that doesn't properly
# handle cascades for reverse foreign keys on the polymorphic base model

View File

@@ -7,9 +7,6 @@ from django.core.exceptions import ValidationError, ObjectDoesNotExist
from django.utils.translation import gettext_lazy as _
from django.utils.timezone import now
# Django-Taggit
from taggit.managers import TaggableManager
# Django-CRUM
from crum import get_current_user
@@ -301,8 +298,6 @@ class PrimordialModel(HasEditsMixin, CreatedModifiedModel):
on_delete=models.SET_NULL,
)
tags = TaggableManager(blank=True)
def __init__(self, *args, **kwargs):
r = super(PrimordialModel, self).__init__(*args, **kwargs)
if self.pk:

View File

@@ -899,18 +899,18 @@ class HostMetric(models.Model):
last_automation_before = now() - dateutil.relativedelta.relativedelta(months=months_ago)
logger.info(f'Cleanup [HostMetric]: soft-deleting records last automated before {last_automation_before}')
logger.info(f'cleanup_host_metrics: soft-deleting records last automated before {last_automation_before}')
HostMetric.active_objects.filter(last_automation__lt=last_automation_before).update(
deleted=True, deleted_counter=models.F('deleted_counter') + 1, last_deleted=now()
)
settings.CLEANUP_HOST_METRICS_LAST_TS = now()
except (TypeError, ValueError):
logger.error(f"Cleanup [HostMetric]: months_ago({months_ago}) has to be a positive integer value")
logger.error(f"cleanup_host_metrics: months_ago({months_ago}) has to be a positive integer value")
class HostMetricSummaryMonthly(models.Model):
"""
HostMetric summaries computed by scheduled task <TODO> monthly
HostMetric summaries computed by scheduled task 'awx.main.tasks.system.host_metric_summary_monthly' monthly
"""
date = models.DateField(unique=True)

View File

@@ -661,7 +661,11 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
@property
def event_processing_finished(self):
return True
return True # workflow jobs do not have events
@property
def has_unpartitioned_events(self):
return False # workflow jobs do not have events
def _get_parent_field_name(self):
if self.job_template_id:
@@ -914,7 +918,11 @@ class WorkflowApproval(UnifiedJob, JobNotificationMixin):
@property
def event_processing_finished(self):
return True
return True # approval jobs do not have events
@property
def has_unpartitioned_events(self):
return False # approval jobs do not have events
def send_approval_notification(self, approval_status):
from awx.main.tasks.system import send_notifications # avoid circular import

View File

@@ -3,8 +3,6 @@
from django.db.models.signals import pre_save, post_save, pre_delete, m2m_changed
from taggit.managers import TaggableManager
class ActivityStreamRegistrar(object):
def __init__(self):
@@ -21,8 +19,6 @@ class ActivityStreamRegistrar(object):
pre_delete.connect(activity_stream_delete, sender=model, dispatch_uid=str(self.__class__) + str(model) + "_delete")
for m2mfield in model._meta.many_to_many:
if isinstance(m2mfield, TaggableManager):
continue # Special case for taggit app
try:
m2m_attr = getattr(model, m2mfield.name)
m2m_changed.connect(

View File

@@ -25,7 +25,6 @@ from awx.main.models import (
InventoryUpdate,
Job,
Project,
ProjectUpdate,
UnifiedJob,
WorkflowApproval,
WorkflowJob,
@@ -102,27 +101,33 @@ class TaskBase:
def record_aggregate_metrics(self, *args):
if not is_testing():
# increment task_manager_schedule_calls regardless of whether the other
# metrics are recorded
s_metrics.Metrics(auto_pipe_execute=True).inc(f"{self.prefix}__schedule_calls", 1)
# Only record metrics if the last time recording was more
# than SUBSYSTEM_METRICS_TASK_MANAGER_RECORD_INTERVAL ago.
# Prevents a short-duration task manager that runs directly after a
# long task manager from overriding useful metrics.
current_time = time.time()
time_last_recorded = current_time - self.subsystem_metrics.decode(f"{self.prefix}_recorded_timestamp")
if time_last_recorded > settings.SUBSYSTEM_METRICS_TASK_MANAGER_RECORD_INTERVAL:
logger.debug(f"recording {self.prefix} metrics, last recorded {time_last_recorded} seconds ago")
self.subsystem_metrics.set(f"{self.prefix}_recorded_timestamp", current_time)
self.subsystem_metrics.pipe_execute()
else:
logger.debug(f"skipping recording {self.prefix} metrics, last recorded {time_last_recorded} seconds ago")
try:
# increment task_manager_schedule_calls regardless of whether the other
# metrics are recorded
s_metrics.Metrics(auto_pipe_execute=True).inc(f"{self.prefix}__schedule_calls", 1)
# Only record metrics if the last time recording was more
# than SUBSYSTEM_METRICS_TASK_MANAGER_RECORD_INTERVAL ago.
# Prevents a short-duration task manager that runs directly after a
# long task manager from overriding useful metrics.
current_time = time.time()
time_last_recorded = current_time - self.subsystem_metrics.decode(f"{self.prefix}_recorded_timestamp")
if time_last_recorded > settings.SUBSYSTEM_METRICS_TASK_MANAGER_RECORD_INTERVAL:
logger.debug(f"recording {self.prefix} metrics, last recorded {time_last_recorded} seconds ago")
self.subsystem_metrics.set(f"{self.prefix}_recorded_timestamp", current_time)
self.subsystem_metrics.pipe_execute()
else:
logger.debug(f"skipping recording {self.prefix} metrics, last recorded {time_last_recorded} seconds ago")
except Exception:
logger.exception(f"Error saving metrics for {self.prefix}")
def record_aggregate_metrics_and_exit(self, *args):
self.record_aggregate_metrics()
sys.exit(1)
def schedule(self):
# Always be able to restore the original signal handler if we finish
original_sigusr1 = signal.getsignal(signal.SIGUSR1)
# Lock
with task_manager_bulk_reschedule():
with advisory_lock(f"{self.prefix}_lock", wait=False) as acquired:
@@ -131,9 +136,14 @@ class TaskBase:
logger.debug(f"Not running {self.prefix} scheduler, another task holds lock")
return
logger.debug(f"Starting {self.prefix} Scheduler")
# if sigterm due to timeout, still record metrics
signal.signal(signal.SIGTERM, self.record_aggregate_metrics_and_exit)
self._schedule()
# if sigusr1 due to timeout, still record metrics
signal.signal(signal.SIGUSR1, self.record_aggregate_metrics_and_exit)
try:
self._schedule()
finally:
# Reset the signal handler back to the original just in case anything
# else uses the same signal for other purposes
signal.signal(signal.SIGUSR1, original_sigusr1)
commit_start = time.time()
if self.prefix == "task_manager":
@@ -154,7 +164,6 @@ class WorkflowManager(TaskBase):
logger.warning("Workflow manager has reached time out while processing running workflows, exiting loop early")
ScheduleWorkflowManager().schedule()
# Do not process any more workflow jobs. Stop here.
# Maybe we should schedule another WorkflowManager run
break
dag = WorkflowDAG(workflow_job)
status_changed = False
@@ -169,8 +178,8 @@ class WorkflowManager(TaskBase):
workflow_job.save(update_fields=['status', 'start_args'])
status_changed = True
else:
workflow_nodes = dag.mark_dnr_nodes()
WorkflowJobNode.objects.bulk_update(workflow_nodes, ['do_not_run'])
dnr_nodes = dag.mark_dnr_nodes()
WorkflowJobNode.objects.bulk_update(dnr_nodes, ['do_not_run'])
# If workflow is now done, we do special things to mark it as done.
is_done = dag.is_workflow_done()
if is_done:
@@ -250,6 +259,7 @@ class WorkflowManager(TaskBase):
job.status = 'failed'
job.save(update_fields=['status', 'job_explanation'])
job.websocket_emit_status('failed')
ScheduleWorkflowManager().schedule()
# TODO: should we emit a status on the socket here similar to tasks.py awx_periodic_scheduler() ?
# emit_websocket_notification('/socket.io/jobs', '', dict(id=))
@@ -270,184 +280,115 @@ class WorkflowManager(TaskBase):
class DependencyManager(TaskBase):
def __init__(self):
super().__init__(prefix="dependency_manager")
self.all_projects = {}
self.all_inventory_sources = {}
def create_project_update(self, task, project_id=None):
if project_id is None:
project_id = task.project_id
project_task = Project.objects.get(id=project_id).create_project_update(_eager_fields=dict(launch_type='dependency'))
# Project created 1 second behind
project_task.created = task.created - timedelta(seconds=1)
project_task.status = 'pending'
project_task.save()
logger.debug('Spawned {} as dependency of {}'.format(project_task.log_format, task.log_format))
return project_task
def create_inventory_update(self, task, inventory_source_task):
inventory_task = InventorySource.objects.get(id=inventory_source_task.id).create_inventory_update(_eager_fields=dict(launch_type='dependency'))
inventory_task.created = task.created - timedelta(seconds=2)
inventory_task.status = 'pending'
inventory_task.save()
logger.debug('Spawned {} as dependency of {}'.format(inventory_task.log_format, task.log_format))
return inventory_task
def add_dependencies(self, task, dependencies):
with disable_activity_stream():
task.dependent_jobs.add(*dependencies)
def get_inventory_source_tasks(self):
def cache_projects_and_sources(self, task_list):
project_ids = set()
inventory_ids = set()
for task in self.all_tasks:
for task in task_list:
if isinstance(task, Job):
inventory_ids.add(task.inventory_id)
self.all_inventory_sources = [invsrc for invsrc in InventorySource.objects.filter(inventory_id__in=inventory_ids, update_on_launch=True)]
if task.project_id:
project_ids.add(task.project_id)
if task.inventory_id:
inventory_ids.add(task.inventory_id)
elif isinstance(task, InventoryUpdate):
if task.inventory_source and task.inventory_source.source_project_id:
project_ids.add(task.inventory_source.source_project_id)
def get_latest_inventory_update(self, inventory_source):
latest_inventory_update = InventoryUpdate.objects.filter(inventory_source=inventory_source).order_by("-created")
if not latest_inventory_update.exists():
return None
return latest_inventory_update.first()
for proj in Project.objects.filter(id__in=project_ids, scm_update_on_launch=True):
self.all_projects[proj.id] = proj
def should_update_inventory_source(self, job, latest_inventory_update):
now = tz_now()
for invsrc in InventorySource.objects.filter(inventory_id__in=inventory_ids, update_on_launch=True):
self.all_inventory_sources.setdefault(invsrc.inventory_id, [])
self.all_inventory_sources[invsrc.inventory_id].append(invsrc)
if latest_inventory_update is None:
@staticmethod
def should_update_again(update, cache_timeout):
'''
If it has never updated, we need to update
If there is already an update in progress then we do not need to create a new one
If the last update failed, we always need to try and update again
If the current time is more than cache_timeout after the last update, then we need a new one
'''
if (update is None) or (update.status in ['failed', 'canceled', 'error']):
return True
'''
If there's already an inventory update utilizing this job that's about to run
then we don't need to create one
'''
if latest_inventory_update.status in ['waiting', 'pending', 'running']:
if update.status in ['waiting', 'pending', 'running']:
return False
timeout_seconds = timedelta(seconds=latest_inventory_update.inventory_source.update_cache_timeout)
if (latest_inventory_update.finished + timeout_seconds) < now:
return True
if latest_inventory_update.inventory_source.update_on_launch is True and latest_inventory_update.status in ['failed', 'canceled', 'error']:
return True
return False
return bool(((update.finished + timedelta(seconds=cache_timeout))) < tz_now())
def get_latest_project_update(self, project_id):
latest_project_update = ProjectUpdate.objects.filter(project=project_id, job_type='check').order_by("-created")
if not latest_project_update.exists():
return None
return latest_project_update.first()
def should_update_related_project(self, job, latest_project_update):
now = tz_now()
if latest_project_update is None:
return True
if latest_project_update.status in ['failed', 'canceled']:
return True
'''
If there's already a project update utilizing this job that's about to run
then we don't need to create one
'''
if latest_project_update.status in ['waiting', 'pending', 'running']:
return False
'''
If the latest project update has a created time == job_created_time-1
then consider the project update found. This is so we don't enter an infinite loop
of updating the project when cache timeout is 0.
'''
if (
latest_project_update.project.scm_update_cache_timeout == 0
and latest_project_update.launch_type == 'dependency'
and latest_project_update.created == job.created - timedelta(seconds=1)
):
return False
'''
Normal Cache Timeout Logic
'''
timeout_seconds = timedelta(seconds=latest_project_update.project.scm_update_cache_timeout)
if (latest_project_update.finished + timeout_seconds) < now:
return True
return False
def get_or_create_project_update(self, project_id):
project = self.all_projects.get(project_id, None)
if project is not None:
latest_project_update = project.project_updates.filter(job_type='check').order_by("-created").first()
if self.should_update_again(latest_project_update, project.scm_update_cache_timeout):
project_task = project.create_project_update(_eager_fields=dict(launch_type='dependency'))
project_task.signal_start()
return [project_task]
else:
return [latest_project_update]
return []
def gen_dep_for_job(self, task):
created_dependencies = []
dependencies = []
# TODO: Can remove task.project None check after scan-job-default-playbook is removed
if task.project is not None and task.project.scm_update_on_launch is True:
latest_project_update = self.get_latest_project_update(task.project_id)
if self.should_update_related_project(task, latest_project_update):
latest_project_update = self.create_project_update(task)
created_dependencies.append(latest_project_update)
dependencies.append(latest_project_update)
dependencies = self.get_or_create_project_update(task.project_id)
# Inventory created 2 seconds behind job
try:
start_args = json.loads(decrypt_field(task, field_name="start_args"))
except ValueError:
start_args = dict()
# generator for inventory sources related to this task
task_inv_sources = (invsrc for invsrc in self.all_inventory_sources if invsrc.inventory_id == task.inventory_id)
for inventory_source in task_inv_sources:
# generator for update-on-launch inventory sources related to this task
for inventory_source in self.all_inventory_sources.get(task.inventory_id, []):
if "inventory_sources_already_updated" in start_args and inventory_source.id in start_args['inventory_sources_already_updated']:
continue
if not inventory_source.update_on_launch:
continue
latest_inventory_update = self.get_latest_inventory_update(inventory_source)
if self.should_update_inventory_source(task, latest_inventory_update):
inventory_task = self.create_inventory_update(task, inventory_source)
created_dependencies.append(inventory_task)
latest_inventory_update = inventory_source.inventory_updates.order_by("-created").first()
if self.should_update_again(latest_inventory_update, inventory_source.update_cache_timeout):
inventory_task = inventory_source.create_inventory_update(_eager_fields=dict(launch_type='dependency'))
inventory_task.signal_start()
dependencies.append(inventory_task)
else:
dependencies.append(latest_inventory_update)
if dependencies:
self.add_dependencies(task, dependencies)
return created_dependencies
return dependencies
def gen_dep_for_inventory_update(self, inventory_task):
created_dependencies = []
if inventory_task.source == "scm":
invsrc = inventory_task.inventory_source
if not invsrc.source_project.scm_update_on_launch:
return created_dependencies
latest_src_project_update = self.get_latest_project_update(invsrc.source_project_id)
if self.should_update_related_project(inventory_task, latest_src_project_update):
latest_src_project_update = self.create_project_update(inventory_task, project_id=invsrc.source_project_id)
created_dependencies.append(latest_src_project_update)
self.add_dependencies(inventory_task, [latest_src_project_update])
latest_src_project_update.scm_inventory_updates.add(inventory_task)
return created_dependencies
if invsrc:
return self.get_or_create_project_update(invsrc.source_project_id)
return []
@timeit
def generate_dependencies(self, undeped_tasks):
created_dependencies = []
dependencies = []
self.cache_projects_and_sources(undeped_tasks)
for task in undeped_tasks:
task.log_lifecycle("acknowledged")
if type(task) is Job:
created_dependencies += self.gen_dep_for_job(task)
job_deps = self.gen_dep_for_job(task)
elif type(task) is InventoryUpdate:
created_dependencies += self.gen_dep_for_inventory_update(task)
job_deps = self.gen_dep_for_inventory_update(task)
else:
continue
if job_deps:
dependencies += job_deps
with disable_activity_stream():
task.dependent_jobs.add(*dependencies)
logger.debug(f'Linked {[dep.log_format for dep in dependencies]} as dependencies of {task.log_format}')
UnifiedJob.objects.filter(pk__in=[task.pk for task in undeped_tasks]).update(dependencies_processed=True)
return created_dependencies
def process_tasks(self):
deps = self.generate_dependencies(self.all_tasks)
self.generate_dependencies(deps)
self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(self.all_tasks) + len(deps))
return dependencies
@timeit
def _schedule(self):
self.get_tasks(dict(status__in=["pending"], dependencies_processed=False))
if len(self.all_tasks) > 0:
self.get_inventory_source_tasks()
self.process_tasks()
deps = self.generate_dependencies(self.all_tasks)
undeped_deps = [dep for dep in deps if dep.dependencies_processed is False]
self.generate_dependencies(undeped_deps)
self.subsystem_metrics.inc(f"{self.prefix}_pending_processed", len(self.all_tasks) + len(undeped_deps))
ScheduleTaskManager().schedule()
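
The new should_update_again() helper above folds the separate project and inventory cache checks into three outcomes: missing or failed updates trigger a new one, in-flight updates are reused, and finished updates are reused only while they are inside the cache timeout. The sketch below is illustrative only; FakeUpdate is a hypothetical stand-in and the import path is assumed from the file shown above.

from datetime import timedelta
from django.utils.timezone import now as tz_now
from awx.main.scheduler.task_manager import DependencyManager  # path assumed

class FakeUpdate:
    def __init__(self, status, finished=None):
        self.status = status
        self.finished = finished

# Missing or failed update -> spawn a new dependency update.
assert DependencyManager.should_update_again(None, cache_timeout=60) is True
assert DependencyManager.should_update_again(FakeUpdate('failed'), 60) is True

# An update already waiting/pending/running -> reuse it.
assert DependencyManager.should_update_again(FakeUpdate('pending'), 60) is False

# Otherwise the cache timeout decides: stale updates are refreshed, fresh ones reused.
stale = FakeUpdate('successful', finished=tz_now() - timedelta(seconds=120))
fresh = FakeUpdate('successful', finished=tz_now() - timedelta(seconds=10))
assert DependencyManager.should_update_again(stale, cache_timeout=60) is True
assert DependencyManager.should_update_again(fresh, cache_timeout=60) is False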

View File

@@ -1 +1 @@
from . import jobs, receptor, system # noqa
from . import host_metrics, jobs, receptor, system # noqa

View File

@@ -29,8 +29,9 @@ class RunnerCallback:
self.safe_env = {}
self.event_ct = 0
self.model = model
self.update_attempts = int(settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE / 5)
self.update_attempts = int(settings.DISPATCHER_DB_DOWNTIME_TOLERANCE / 5)
self.wrapup_event_dispatched = False
self.artifacts_processed = False
self.extra_update_fields = {}
def update_model(self, pk, _attempt=0, **updates):
@@ -211,6 +212,9 @@ class RunnerCallback:
if result_traceback:
self.delay_update(result_traceback=result_traceback)
def artifacts_handler(self, artifact_dir):
self.artifacts_processed = True
class RunnerCallbackForProjectUpdate(RunnerCallback):
def __init__(self, *args, **kwargs):

View File

@@ -9,6 +9,7 @@ from django.conf import settings
from django.db.models.query import QuerySet
from django.utils.encoding import smart_str
from django.utils.timezone import now
from django.db import OperationalError
# AWX
from awx.main.utils.common import log_excess_runtime
@@ -57,6 +58,28 @@ def start_fact_cache(hosts, destination, log_data, timeout=None, inventory_id=No
return None
def raw_update_hosts(host_list):
Host.objects.bulk_update(host_list, ['ansible_facts', 'ansible_facts_modified'])
def update_hosts(host_list, max_tries=5):
if not host_list:
return
for i in range(max_tries):
try:
raw_update_hosts(host_list)
except OperationalError as exc:
# Deadlocks can happen if this runs at the same time as another large query.
# Inventory updates and updating last_job_host_summary are candidates for conflict,
# but these would resolve easily on a retry.
if i + 1 < max_tries:
logger.info(f'OperationalError (suspected deadlock) saving host facts retry {i}, message: {exc}')
continue
else:
raise
break
@log_excess_runtime(
logger,
debug_cutoff=0.01,
@@ -111,7 +134,6 @@ def finish_fact_cache(hosts, destination, facts_write_time, log_data, job_id=Non
system_tracking_logger.info('Facts cleared for inventory {} host {}'.format(smart_str(host.inventory.name), smart_str(host.name)))
log_data['cleared_ct'] += 1
if len(hosts_to_update) > 100:
Host.objects.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
update_hosts(hosts_to_update)
hosts_to_update = []
if hosts_to_update:
Host.objects.bulk_update(hosts_to_update, ['ansible_facts', 'ansible_facts_modified'])
update_hosts(hosts_to_update)

View File

@@ -0,0 +1,205 @@
import datetime
from dateutil.relativedelta import relativedelta
import logging
from django.conf import settings
from django.db.models import Count
from django.db.models.functions import TruncMonth
from django.utils.timezone import now
from rest_framework.fields import DateTimeField
from awx.main.dispatch import get_task_queuename
from awx.main.dispatch.publish import task
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.conf.license import get_license
logger = logging.getLogger('awx.main.tasks.host_metric_summary_monthly')
@task(queue=get_task_queuename)
def host_metric_summary_monthly():
"""Run cleanup host metrics summary monthly task each week"""
if _is_run_threshold_reached(
getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', None), getattr(settings, 'HOST_METRIC_SUMMARY_TASK_INTERVAL', 7) * 86400
):
logger.info(f"Executing host_metric_summary_monthly, last ran at {getattr(settings, 'HOST_METRIC_SUMMARY_TASK_LAST_TS', '---')}")
HostMetricSummaryMonthlyTask().execute()
logger.info("Finished host_metric_summary_monthly")
def _is_run_threshold_reached(setting, threshold_seconds):
last_time = DateTimeField().to_internal_value(setting) if setting else DateTimeField().to_internal_value('1970-01-01')
return (now() - last_time).total_seconds() > threshold_seconds
class HostMetricSummaryMonthlyTask:
"""
This task computes the last [threshold] months of the HostMetricSummaryMonthly table
[threshold] is the setting CLEANUP_HOST_METRICS_HARD_THRESHOLD
Each record in the table represents the changes in the HostMetric table in one month
It always overwrites all the months newer than [threshold], never updates older months
Algorithm:
- hosts_added are HostMetric records with first_automation in the given month
- hosts_deleted are HostMetric records with deleted=True and last_deleted in the given month
- - HostMetrics soft-deleted before [threshold] also increase hosts_deleted in their last_deleted month
- license_consumed is license_consumed(previous month) + hosts_added - hosts_deleted
- - license_consumed for HostMetricSummaryMonthly.date < [threshold] is also computed from
all HostMetrics with first_automation < [threshold]
- license_capacity is set only for the current month, and it's never updated (value taken from the current subscription)
"""
def __init__(self):
self.host_metrics = {}
self.processed_month = self._get_first_month()
self.existing_summaries = None
self.existing_summaries_idx = 0
self.existing_summaries_cnt = 0
self.records_to_create = []
self.records_to_update = []
def execute(self):
self._load_existing_summaries()
self._load_hosts_added()
self._load_hosts_deleted()
# Get first month after last hard delete
month = self._get_first_month()
license_consumed = self._get_license_consumed_before(month)
# Fill record for each month
while month <= datetime.date.today().replace(day=1):
summary = self._find_or_create_summary(month)
# Update summary and update license_consumed by hosts added/removed this month
self._update_summary(summary, month, license_consumed)
license_consumed = summary.license_consumed
month = month + relativedelta(months=1)
# Create/Update stats
HostMetricSummaryMonthly.objects.bulk_create(self.records_to_create, batch_size=1000)
HostMetricSummaryMonthly.objects.bulk_update(self.records_to_update, ['license_consumed', 'hosts_added', 'hosts_deleted'], batch_size=1000)
# Set timestamp of last run
settings.HOST_METRIC_SUMMARY_TASK_LAST_TS = now()
def _get_license_consumed_before(self, month):
license_consumed = 0
for metric_month, metric in self.host_metrics.items():
if metric_month < month:
hosts_added = metric.get('hosts_added', 0)
hosts_deleted = metric.get('hosts_deleted', 0)
license_consumed = license_consumed + hosts_added - hosts_deleted
else:
break
return license_consumed
def _load_existing_summaries(self):
"""Find all summaries newer than host metrics delete threshold"""
self.existing_summaries = HostMetricSummaryMonthly.objects.filter(date__gte=self._get_first_month()).order_by('date')
self.existing_summaries_idx = 0
self.existing_summaries_cnt = len(self.existing_summaries)
def _load_hosts_added(self):
"""Aggregates hosts added each month, by the 'first_automation' timestamp"""
#
# -- SQL translation (for better code readability)
# SELECT date_trunc('month', first_automation) as month,
# count(first_automation) AS hosts_added
# FROM main_hostmetric
# GROUP BY month
# ORDER by month;
result = (
HostMetric.objects.annotate(month=TruncMonth('first_automation'))
.values('month')
.annotate(hosts_added=Count('first_automation'))
.values('month', 'hosts_added')
.order_by('month')
)
for host_metric in list(result):
month = host_metric['month']
if month:
beginning_of_month = datetime.date(month.year, month.month, 1)
if self.host_metrics.get(beginning_of_month) is None:
self.host_metrics[beginning_of_month] = {}
self.host_metrics[beginning_of_month]['hosts_added'] = host_metric['hosts_added']
def _load_hosts_deleted(self):
"""
Aggregates hosts deleted each month, by the 'last_deleted' timestamp.
Host metrics have to be flagged as deleted now to be counted as deleted in earlier months
(by intention - statistics can change retrospectively when a previously deleted host is automated again)
"""
#
# -- SQL translation (for better code readability)
# SELECT date_trunc('month', last_deleted) as month,
# count(last_deleted) AS hosts_deleted
# FROM main_hostmetric
# WHERE deleted = True
# GROUP BY 1 # equal to "GROUP BY month"
# ORDER by month;
result = (
HostMetric.objects.annotate(month=TruncMonth('last_deleted'))
.values('month')
.annotate(hosts_deleted=Count('last_deleted'))
.values('month', 'hosts_deleted')
.filter(deleted=True)
.order_by('month')
)
for host_metric in list(result):
month = host_metric['month']
if month:
beginning_of_month = datetime.date(month.year, month.month, 1)
if self.host_metrics.get(beginning_of_month) is None:
self.host_metrics[beginning_of_month] = {}
self.host_metrics[beginning_of_month]['hosts_deleted'] = host_metric['hosts_deleted']
def _find_or_create_summary(self, month):
summary = self._find_summary(month)
if not summary:
summary = HostMetricSummaryMonthly(date=month)
self.records_to_create.append(summary)
else:
self.records_to_update.append(summary)
return summary
def _find_summary(self, month):
"""
Existing summaries are ordered by month ASC.
This method is also called with months in ascending order => a single traversal is enough
"""
summary = None
while not summary and self.existing_summaries_idx < self.existing_summaries_cnt:
tmp = self.existing_summaries[self.existing_summaries_idx]
if tmp.date < month:
self.existing_summaries_idx += 1
elif tmp.date == month:
summary = tmp
elif tmp.date > month:
break
return summary
def _update_summary(self, summary, month, license_consumed):
"""Updates the metric with hosts added and deleted and set license info for current month"""
# Get month counts from host metrics, zero if not found
hosts_added, hosts_deleted = 0, 0
if metric := self.host_metrics.get(month, None):
hosts_added = metric.get('hosts_added', 0)
hosts_deleted = metric.get('hosts_deleted', 0)
summary.license_consumed = license_consumed + hosts_added - hosts_deleted
summary.hosts_added = hosts_added
summary.hosts_deleted = hosts_deleted
# Set subscription count for current month
if month == datetime.date.today().replace(day=1):
license_info = get_license()
summary.license_capacity = license_info.get('instance_count', 0)
return summary
@staticmethod
def _get_first_month():
"""Returns first month after host metrics hard delete threshold"""
threshold = getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36)
return datetime.date.today().replace(day=1) - relativedelta(months=int(threshold) - 1)
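
The rollup arithmetic described in the class docstring (license_consumed of a month is the previous month's value plus hosts_added minus hosts_deleted) can be sanity-checked by hand. The numbers below are made up for illustration and do not come from any fixture.

# Carry license_consumed forward month by month.
months = [
    {"hosts_added": 10, "hosts_deleted": 0},
    {"hosts_added": 3, "hosts_deleted": 1},
    {"hosts_added": 0, "hosts_deleted": 4},
]
license_consumed = 0
for summary in months:
    license_consumed += summary["hosts_added"] - summary["hosts_deleted"]
    summary["license_consumed"] = license_consumed

assert [summary["license_consumed"] for summary in months] == [10, 12, 8]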

View File

@@ -112,7 +112,7 @@ class BaseTask(object):
def __init__(self):
self.cleanup_paths = []
self.update_attempts = int(settings.DISPATCHER_DB_DOWNTOWN_TOLLERANCE / 5)
self.update_attempts = int(settings.DISPATCHER_DB_DOWNTIME_TOLERANCE / 5)
self.runner_callback = self.callback_class(model=self.model)
def update_model(self, pk, _attempt=0, **updates):
@@ -1094,7 +1094,7 @@ class RunJob(SourceControlMixin, BaseTask):
# actual `run()` call; this _usually_ means something failed in
# the pre_run_hook method
return
if self.should_use_fact_cache():
if self.should_use_fact_cache() and self.runner_callback.artifacts_processed:
job.log_lifecycle("finish_job_fact_cache")
finish_fact_cache(
job.get_hosts_for_fact_cache(),

View File

@@ -464,6 +464,7 @@ class AWXReceptorJob:
event_handler=self.task.runner_callback.event_handler,
finished_callback=self.task.runner_callback.finished_callback,
status_handler=self.task.runner_callback.status_handler,
artifacts_handler=self.task.runner_callback.artifacts_handler,
**self.runner_params,
)
@@ -639,7 +640,7 @@ class AWXReceptorJob:
#
RECEPTOR_CONFIG_STARTER = (
{'local-only': None},
{'log-level': 'info'},
{'log-level': settings.RECEPTOR_LOG_LEVEL},
{'node': {'firewallrules': [{'action': 'reject', 'tonode': settings.CLUSTER_HOST_ID, 'toservice': 'control'}]}},
{'control-service': {'service': 'control', 'filename': '/var/run/receptor/receptor.sock', 'permissions': '0660'}},
{'work-command': {'worktype': 'local', 'command': 'ansible-runner', 'params': 'worker', 'allowruntimeparams': True}},

View File

@@ -2,6 +2,7 @@
from collections import namedtuple
import functools
import importlib
import itertools
import json
import logging
import os
@@ -14,7 +15,7 @@ from datetime import datetime
# Django
from django.conf import settings
from django.db import transaction, DatabaseError, IntegrityError
from django.db import connection, transaction, DatabaseError, IntegrityError
from django.db.models.fields.related import ForeignKey
from django.utils.timezone import now, timedelta
from django.utils.encoding import smart_str
@@ -48,6 +49,7 @@ from awx.main.models import (
SmartInventoryMembership,
Job,
HostMetric,
convert_jsonfields,
)
from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
@@ -86,6 +88,11 @@ def dispatch_startup():
if settings.IS_K8S:
write_receptor_config()
try:
convert_jsonfields()
except Exception:
logger.exception("Failed json field conversion, skipping.")
startup_logger.debug("Syncing Schedules")
for sch in Schedule.objects.all():
try:
@@ -129,6 +136,52 @@ def inform_cluster_of_shutdown():
logger.exception('Encountered problem with normal shutdown signal.')
@task(queue=get_task_queuename)
def migrate_jsonfield(table, pkfield, columns):
batchsize = 10000
with advisory_lock(f'json_migration_{table}', wait=False) as acquired:
if not acquired:
return
from django.db.migrations.executor import MigrationExecutor
# If Django is currently running migrations, wait until it is done.
while True:
executor = MigrationExecutor(connection)
if not executor.migration_plan(executor.loader.graph.leaf_nodes()):
break
time.sleep(120)
logger.warning(f"Migrating json fields for {table}: {', '.join(columns)}")
with connection.cursor() as cursor:
for i in itertools.count(0, batchsize):
# Are there even any rows in the table beyond this point?
cursor.execute(f"select count(1) from {table} where {pkfield} >= %s limit 1;", (i,))
if not cursor.fetchone()[0]:
break
column_expr = ', '.join(f"{colname} = {colname}_old::jsonb" for colname in columns)
# If any of the old columns have non-null values, the data needs to be cast and copied over.
empty_expr = ' or '.join(f"{colname}_old is not null" for colname in columns)
cursor.execute( # Only clobber the new fields if there is non-null data in the old ones.
f"""
update {table}
set {column_expr}
where {pkfield} >= %s and {pkfield} < %s
and {empty_expr};
""",
(i, i + batchsize),
)
rows = cursor.rowcount
logger.debug(f"Batch {i} to {i + batchsize} copied on {table}, {rows} rows affected.")
column_expr = ', '.join(f"DROP COLUMN {column}_old" for column in columns)
cursor.execute(f"ALTER TABLE {table} {column_expr};")
logger.warning(f"Migration of {table} to jsonb is finished.")
@task(queue=get_task_queuename)
def apply_cluster_membership_policies():
from awx.main.signals import disable_activity_stream
@@ -316,13 +369,8 @@ def send_notifications(notification_list, job_id=None):
@task(queue=get_task_queuename)
def gather_analytics():
from awx.conf.models import Setting
from rest_framework.fields import DateTimeField
last_gather = Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_GATHER').first()
last_time = DateTimeField().to_internal_value(last_gather.value) if last_gather and last_gather.value else None
gather_time = now()
if not last_time or ((gather_time - last_time).total_seconds() > settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
if is_run_threshold_reached(Setting.objects.filter(key='AUTOMATION_ANALYTICS_LAST_GATHER').first(), settings.AUTOMATION_ANALYTICS_GATHER_INTERVAL):
analytics.gather()
@@ -381,16 +429,25 @@ def cleanup_images_and_files():
@task(queue=get_task_queuename)
def cleanup_host_metrics():
"""Run cleanup host metrics ~each month"""
# TODO: move whole method to host_metrics in follow-up PR
from awx.conf.models import Setting
if is_run_threshold_reached(
Setting.objects.filter(key='CLEANUP_HOST_METRICS_LAST_TS').first(), getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400
):
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_SOFT_THRESHOLD', 12)
logger.info("Executing cleanup_host_metrics")
HostMetric.cleanup_task(months_ago)
logger.info("Finished cleanup_host_metrics")
def is_run_threshold_reached(setting, threshold_seconds):
from rest_framework.fields import DateTimeField
last_cleanup = Setting.objects.filter(key='CLEANUP_HOST_METRICS_LAST_TS').first()
last_time = DateTimeField().to_internal_value(last_cleanup.value) if last_cleanup and last_cleanup.value else None
last_time = DateTimeField().to_internal_value(setting.value) if setting and setting.value else DateTimeField().to_internal_value('1970-01-01')
cleanup_interval_secs = getattr(settings, 'CLEANUP_HOST_METRICS_INTERVAL', 30) * 86400
if not last_time or ((now() - last_time).total_seconds() > cleanup_interval_secs):
months_ago = getattr(settings, 'CLEANUP_HOST_METRICS_THRESHOLD', 12)
HostMetric.cleanup_task(months_ago)
return (now() - last_time).total_seconds() > threshold_seconds
@task(queue=get_task_queuename)
@@ -839,10 +896,7 @@ def delete_inventory(inventory_id, user_id, retries=5):
user = None
with ignore_inventory_computed_fields(), ignore_inventory_group_removal(), impersonate(user):
try:
i = Inventory.objects.get(id=inventory_id)
for host in i.hosts.iterator():
host.job_events_as_primary_host.update(host=None)
i.delete()
Inventory.objects.get(id=inventory_id).delete()
emit_channel_notification('inventories-status_changed', {'group_name': 'inventories', 'inventory_id': inventory_id, 'status': 'deleted'})
logger.debug('Deleted inventory {} as user {}.'.format(inventory_id, user_id))
except Inventory.DoesNotExist:
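
The extracted is_run_threshold_reached() helper above replaces the ad-hoc interval checks for analytics gathering and host metric cleanup. A small illustration of its behavior follows; the SimpleNamespace objects are hypothetical stand-ins for a Setting row (only .value is read), and the snippet assumes a configured AWX Django environment.

from datetime import timedelta
from types import SimpleNamespace
from django.utils.timezone import now
from awx.main.tasks.system import is_run_threshold_reached  # as added above

one_month = 30 * 86400  # threshold in seconds, as used by cleanup_host_metrics
recent = SimpleNamespace(value=(now() - timedelta(hours=1)).isoformat())
overdue = SimpleNamespace(value=(now() - timedelta(days=40)).isoformat())

assert is_run_threshold_reached(recent, one_month) is False
assert is_run_threshold_reached(overdue, one_month) is True
# A missing setting falls back to 1970-01-01, so the task runs.
assert is_run_threshold_reached(None, one_month) is True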

View File

@@ -1,6 +1,9 @@
import json
from django.contrib.auth.models import User
from django.core.exceptions import ValidationError
from unittest import mock
from awx.main.models import (
Organization,
@@ -20,6 +23,7 @@ from awx.main.models import (
WorkflowJobNode,
WorkflowJobTemplateNode,
)
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
# mk methods should create only a single object of a single type.
# they should also have the option of being persisted or not.
@@ -248,3 +252,42 @@ def mk_workflow_job_node(unified_job_template=None, success_nodes=None, failure_
if persisted:
workflow_node.save()
return workflow_node
def mk_host_metric(hostname, first_automation, last_automation=None, last_deleted=None, deleted=False, persisted=True):
ok, idx = False, 1
while not ok:
try:
with mock.patch("django.utils.timezone.now") as mock_now:
mock_now.return_value = first_automation
metric = HostMetric(
hostname=hostname or f"host-{first_automation}-{idx}",
first_automation=first_automation,
last_automation=last_automation or first_automation,
last_deleted=last_deleted,
deleted=deleted,
)
metric.validate_unique()
if persisted:
metric.save()
ok = True
except ValidationError as e:
# Repeat create for auto-generated hostname
if not hostname and e.message_dict.get('hostname', None):
idx += 1
else:
raise e
def mk_host_metric_summary(date, license_consumed=0, license_capacity=0, hosts_added=0, hosts_deleted=0, indirectly_managed_hosts=0, persisted=True):
summary = HostMetricSummaryMonthly(
date=date,
license_consumed=license_consumed,
license_capacity=license_capacity,
hosts_added=hosts_added,
hosts_deleted=hosts_deleted,
indirectly_managed_hosts=indirectly_managed_hosts,
)
if persisted:
summary.save()
return summary

View File

@@ -0,0 +1,382 @@
import pytest
import datetime
from dateutil.relativedelta import relativedelta
from django.conf import settings
from django.utils import timezone
from awx.main.management.commands.host_metric_summary_monthly import Command
from awx.main.models.inventory import HostMetric, HostMetricSummaryMonthly
from awx.main.tests.factories.fixtures import mk_host_metric, mk_host_metric_summary
@pytest.fixture
def threshold():
return int(getattr(settings, 'CLEANUP_HOST_METRICS_HARD_THRESHOLD', 36))
@pytest.mark.django_db
@pytest.mark.parametrize("metrics_cnt", [0, 1, 2, 3])
@pytest.mark.parametrize("mode", ["old_data", "actual_data", "all_data"])
def test_summaries_counts(threshold, metrics_cnt, mode):
assert HostMetricSummaryMonthly.objects.count() == 0
for idx in range(metrics_cnt):
if mode == "old_data" or mode == "all_data":
mk_host_metric(None, months_ago(threshold + idx, "dt"))
elif mode == "actual_data" or mode == "all_data":
mk_host_metric(None, (months_ago(threshold - idx, "dt")))
Command().handle()
# Number of records is equal to host metrics' hard cleanup months
assert HostMetricSummaryMonthly.objects.count() == threshold
# Records start with a date in the month following the threshold month
date = months_ago(threshold - 1)
for metric in list(HostMetricSummaryMonthly.objects.order_by('date').all()):
assert metric.date == date
date += relativedelta(months=1)
# Older records are untouched
mk_host_metric_summary(date=months_ago(threshold + 10))
Command().handle()
assert HostMetricSummaryMonthly.objects.count() == threshold + 1
@pytest.mark.django_db
@pytest.mark.parametrize("mode", ["old_data", "actual_data", "all_data"])
def test_summary_values(threshold, mode):
tester = {"old_data": MetricsTesterOldData(threshold), "actual_data": MetricsTesterActualData(threshold), "all_data": MetricsTesterCombinedData(threshold)}[
mode
]
for iteration in ["create_metrics", "add_old_summaries", "change_metrics", "delete_metrics", "add_metrics"]:
getattr(tester, iteration)() # call method by string
# Operation is idempotent, repeat twice
for _ in range(2):
Command().handle()
# call assert method by string
getattr(tester, f"assert_{iteration}")()
class MetricsTester:
def __init__(self, threshold, ignore_asserts=False):
self.threshold = threshold
self.expected_summaries = {}
self.ignore_asserts = ignore_asserts
def add_old_summaries(self):
"""These records don't correspond with Host metrics"""
mk_host_metric_summary(self.below(4), license_consumed=100, hosts_added=10, hosts_deleted=5)
mk_host_metric_summary(self.below(3), license_consumed=105, hosts_added=20, hosts_deleted=10)
mk_host_metric_summary(self.below(2), license_consumed=115, hosts_added=60, hosts_deleted=75)
def assert_add_old_summaries(self):
"""Old summary records should be untouched"""
self.expected_summaries[self.below(4)] = {"date": self.below(4), "license_consumed": 100, "hosts_added": 10, "hosts_deleted": 5}
self.expected_summaries[self.below(3)] = {"date": self.below(3), "license_consumed": 105, "hosts_added": 20, "hosts_deleted": 10}
self.expected_summaries[self.below(2)] = {"date": self.below(2), "license_consumed": 115, "hosts_added": 60, "hosts_deleted": 75}
self.assert_host_metric_summaries()
def assert_host_metric_summaries(self):
"""Ignore asserts when old/actual test object is used only as a helper for Combined test"""
if self.ignore_asserts:
return True
for summary in list(HostMetricSummaryMonthly.objects.order_by('date').all()):
assert self.expected_summaries.get(summary.date, None) is not None
assert self.expected_summaries[summary.date] == {
"date": summary.date,
"license_consumed": summary.license_consumed,
"hosts_added": summary.hosts_added,
"hosts_deleted": summary.hosts_deleted,
}
def below(self, months, fmt="date"):
"""months below threshold, returns first date of that month"""
date = months_ago(self.threshold + months)
if fmt == "dt":
return timezone.make_aware(datetime.datetime.combine(date, datetime.datetime.min.time()))
else:
return date
def above(self, months, fmt="date"):
"""months above threshold, returns first date of that month"""
date = months_ago(self.threshold - months)
if fmt == "dt":
return timezone.make_aware(datetime.datetime.combine(date, datetime.datetime.min.time()))
else:
return date
class MetricsTesterOldData(MetricsTester):
def create_metrics(self):
"""Creates 7 host metrics older than delete threshold"""
mk_host_metric("host_1", first_automation=self.below(3, "dt"))
mk_host_metric("host_2", first_automation=self.below(2, "dt"))
mk_host_metric("host_3", first_automation=self.below(2, "dt"), last_deleted=self.above(2, "dt"), deleted=False)
mk_host_metric("host_4", first_automation=self.below(2, "dt"), last_deleted=self.above(2, "dt"), deleted=True)
mk_host_metric("host_5", first_automation=self.below(2, "dt"), last_deleted=self.below(2, "dt"), deleted=True)
mk_host_metric("host_6", first_automation=self.below(1, "dt"), last_deleted=self.below(1, "dt"), deleted=False)
mk_host_metric("host_7", first_automation=self.below(1, "dt"))
def assert_create_metrics(self):
"""
Month 1 is computed from older host metrics,
Month 2 has deletion (host_4)
Other months are unchanged (same as month 2)
"""
self.expected_summaries = {
self.above(1): {"date": self.above(1), "license_consumed": 6, "hosts_added": 0, "hosts_deleted": 0},
self.above(2): {"date": self.above(2), "license_consumed": 5, "hosts_added": 0, "hosts_deleted": 1},
}
# no change in months 3+
idx = 3
month = self.above(idx)
while month <= beginning_of_the_month():
self.expected_summaries[self.above(idx)] = {"date": self.above(idx), "license_consumed": 5, "hosts_added": 0, "hosts_deleted": 0}
month += relativedelta(months=1)
idx += 1
self.assert_host_metric_summaries()
def add_old_summaries(self):
super().add_old_summaries()
def assert_add_old_summaries(self):
super().assert_add_old_summaries()
@staticmethod
def change_metrics():
"""Hosts 1,2 soft deleted, host_4 automated again (undeleted)"""
HostMetric.objects.filter(hostname='host_1').update(last_deleted=beginning_of_the_month("dt"), deleted=True)
HostMetric.objects.filter(hostname='host_2').update(last_deleted=timezone.now(), deleted=True)
HostMetric.objects.filter(hostname='host_4').update(deleted=False)
def assert_change_metrics(self):
"""
Summaries since month 2 were changed (host_4 restored == automated again)
Current month has 2 deletions (host_1, host_2)
"""
self.expected_summaries[self.above(2)] |= {'hosts_deleted': 0}
for idx in range(2, self.threshold):
self.expected_summaries[self.above(idx)] |= {'license_consumed': 6}
self.expected_summaries[beginning_of_the_month()] |= {'license_consumed': 4, 'hosts_deleted': 2}
self.assert_host_metric_summaries()
@staticmethod
def delete_metrics():
"""Deletes metric deleted before the threshold"""
HostMetric.objects.filter(hostname='host_5').delete()
def assert_delete_metrics(self):
"""No change"""
self.assert_host_metric_summaries()
@staticmethod
def add_metrics():
"""Adds new metrics"""
mk_host_metric("host_24", first_automation=beginning_of_the_month("dt"))
mk_host_metric("host_25", first_automation=beginning_of_the_month("dt")) # timezone.now())
def assert_add_metrics(self):
"""Summary in current month is updated"""
self.expected_summaries[beginning_of_the_month()]['license_consumed'] = 6
self.expected_summaries[beginning_of_the_month()]['hosts_added'] = 2
self.assert_host_metric_summaries()
class MetricsTesterActualData(MetricsTester):
def create_metrics(self):
"""Creates 16 host metrics newer than delete threshold"""
mk_host_metric("host_8", first_automation=self.above(1, "dt"))
mk_host_metric("host_9", first_automation=self.above(1, "dt"), last_deleted=self.above(1, "dt"))
mk_host_metric("host_10", first_automation=self.above(1, "dt"), last_deleted=self.above(1, "dt"), deleted=True)
mk_host_metric("host_11", first_automation=self.above(1, "dt"), last_deleted=self.above(2, "dt"))
mk_host_metric("host_12", first_automation=self.above(1, "dt"), last_deleted=self.above(2, "dt"), deleted=True)
mk_host_metric("host_13", first_automation=self.above(2, "dt"))
mk_host_metric("host_14", first_automation=self.above(2, "dt"), last_deleted=self.above(2, "dt"))
mk_host_metric("host_15", first_automation=self.above(2, "dt"), last_deleted=self.above(2, "dt"), deleted=True)
mk_host_metric("host_16", first_automation=self.above(2, "dt"), last_deleted=self.above(3, "dt"))
mk_host_metric("host_17", first_automation=self.above(2, "dt"), last_deleted=self.above(3, "dt"), deleted=True)
mk_host_metric("host_18", first_automation=self.above(4, "dt"))
# the next one shouldn't happen in reality (deleted=True, last_deleted = NULL)
mk_host_metric("host_19", first_automation=self.above(4, "dt"), deleted=True)
mk_host_metric("host_20", first_automation=self.above(4, "dt"), last_deleted=self.above(4, "dt"))
mk_host_metric("host_21", first_automation=self.above(4, "dt"), last_deleted=self.above(4, "dt"), deleted=True)
mk_host_metric("host_22", first_automation=self.above(4, "dt"), last_deleted=self.above(5, "dt"))
mk_host_metric("host_23", first_automation=self.above(4, "dt"), last_deleted=self.above(5, "dt"), deleted=True)
def assert_create_metrics(self):
self.expected_summaries = {
self.above(1): {"date": self.above(1), "license_consumed": 4, "hosts_added": 5, "hosts_deleted": 1},
self.above(2): {"date": self.above(2), "license_consumed": 7, "hosts_added": 5, "hosts_deleted": 2},
self.above(3): {"date": self.above(3), "license_consumed": 6, "hosts_added": 0, "hosts_deleted": 1},
self.above(4): {"date": self.above(4), "license_consumed": 11, "hosts_added": 6, "hosts_deleted": 1},
self.above(5): {"date": self.above(5), "license_consumed": 10, "hosts_added": 0, "hosts_deleted": 1},
}
# no change in months 6+
idx = 6
month = self.above(idx)
while month <= beginning_of_the_month():
self.expected_summaries[self.above(idx)] = {"date": self.above(idx), "license_consumed": 10, "hosts_added": 0, "hosts_deleted": 0}
month += relativedelta(months=1)
idx += 1
self.assert_host_metric_summaries()
def add_old_summaries(self):
super().add_old_summaries()
def assert_add_old_summaries(self):
super().assert_add_old_summaries()
@staticmethod
def change_metrics():
"""
- Hosts 12, 19, 21 were automated again (undeleted)
- Host 16 was soft deleted
- Host 17 was undeleted and soft deleted again
"""
HostMetric.objects.filter(hostname='host_12').update(deleted=False)
HostMetric.objects.filter(hostname='host_16').update(last_deleted=timezone.now(), deleted=True)
HostMetric.objects.filter(hostname='host_17').update(last_deleted=beginning_of_the_month("dt"), deleted=True)
HostMetric.objects.filter(hostname='host_19').update(deleted=False)
HostMetric.objects.filter(hostname='host_21').update(deleted=False)
def assert_change_metrics(self):
"""
Summaries since month 2 were changed
Current month has 2 deletions (host_16, host_17)
"""
self.expected_summaries[self.above(2)] |= {'license_consumed': 8, 'hosts_deleted': 1}
self.expected_summaries[self.above(3)] |= {'license_consumed': 8, 'hosts_deleted': 0}
self.expected_summaries[self.above(4)] |= {'license_consumed': 14, 'hosts_deleted': 0}
# month 5 had hosts_deleted 1 => license_consumed == 14 - 1
for idx in range(5, self.threshold):
self.expected_summaries[self.above(idx)] |= {'license_consumed': 13}
self.expected_summaries[beginning_of_the_month()] |= {'license_consumed': 11, 'hosts_deleted': 2}
self.assert_host_metric_summaries()
def delete_metrics(self):
"""Hard cleanup can't delete metrics newer than threshold. No change"""
pass
def assert_delete_metrics(self):
"""No change"""
self.assert_host_metric_summaries()
@staticmethod
def add_metrics():
"""Adds new metrics"""
mk_host_metric("host_26", first_automation=beginning_of_the_month("dt"))
mk_host_metric("host_27", first_automation=timezone.now())
def assert_add_metrics(self):
"""
Two metrics were soft-deleted in the current month by change_metrics()
and two metrics are added now,
so license_consumed equals the previous month's value (13 - 2 + 2)
"""
self.expected_summaries[beginning_of_the_month()] |= {'license_consumed': 13, 'hosts_added': 2}
self.assert_host_metric_summaries()
class MetricsTesterCombinedData(MetricsTester):
def __init__(self, threshold):
super().__init__(threshold)
self.old_data = MetricsTesterOldData(threshold, ignore_asserts=True)
self.actual_data = MetricsTesterActualData(threshold, ignore_asserts=True)
def assert_host_metric_summaries(self):
self._combine_expected_summaries()
super().assert_host_metric_summaries()
def create_metrics(self):
self.old_data.create_metrics()
self.actual_data.create_metrics()
def assert_create_metrics(self):
self.old_data.assert_create_metrics()
self.actual_data.assert_create_metrics()
self.assert_host_metric_summaries()
def add_old_summaries(self):
super().add_old_summaries()
def assert_add_old_summaries(self):
self.old_data.assert_add_old_summaries()
self.actual_data.assert_add_old_summaries()
self.assert_host_metric_summaries()
def change_metrics(self):
self.old_data.change_metrics()
self.actual_data.change_metrics()
def assert_change_metrics(self):
self.old_data.assert_change_metrics()
self.actual_data.assert_change_metrics()
self.assert_host_metric_summaries()
def delete_metrics(self):
self.old_data.delete_metrics()
self.actual_data.delete_metrics()
def assert_delete_metrics(self):
self.old_data.assert_delete_metrics()
self.actual_data.assert_delete_metrics()
self.assert_host_metric_summaries()
def add_metrics(self):
self.old_data.add_metrics()
self.actual_data.add_metrics()
def assert_add_metrics(self):
self.old_data.assert_add_metrics()
self.actual_data.assert_add_metrics()
self.assert_host_metric_summaries()
def _combine_expected_summaries(self):
"""
Expected summaries are the sum of the expected values from the old-data and actual-data tests,
except for data older than the hard delete threshold (those summaries are untouched by the task and therefore identical in all tests)
"""
for date, summary in self.old_data.expected_summaries.items():
if date <= months_ago(self.threshold):
license_consumed = summary['license_consumed']
hosts_added = summary['hosts_added']
hosts_deleted = summary['hosts_deleted']
else:
license_consumed = summary['license_consumed'] + self.actual_data.expected_summaries[date]['license_consumed']
hosts_added = summary['hosts_added'] + self.actual_data.expected_summaries[date]['hosts_added']
hosts_deleted = summary['hosts_deleted'] + self.actual_data.expected_summaries[date]['hosts_deleted']
self.expected_summaries[date] = {'date': date, 'license_consumed': license_consumed, 'hosts_added': hosts_added, 'hosts_deleted': hosts_deleted}
def months_ago(num, fmt="date"):
if num is None:
return None
return beginning_of_the_month(fmt) - relativedelta(months=num)
def beginning_of_the_month(fmt="date"):
date = datetime.date.today().replace(day=1)
if fmt == "dt":
return timezone.make_aware(datetime.datetime.combine(date, datetime.datetime.min.time()))
else:
return date


@@ -331,15 +331,13 @@ def test_single_job_dependencies_project_launch(controlplane_instance_group, job
p.save(skip_update=True)
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
dm = DependencyManager()
with mock.patch.object(DependencyManager, "create_project_update", wraps=dm.create_project_update) as mock_pu:
dm.schedule()
mock_pu.assert_called_once_with(j)
pu = [x for x in p.project_updates.all()]
assert len(pu) == 1
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(pu[0], controlplane_instance_group, instance)
pu[0].status = "successful"
pu[0].save()
dm.schedule()
pu = [x for x in p.project_updates.all()]
assert len(pu) == 1
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(pu[0], controlplane_instance_group, instance)
pu[0].status = "successful"
pu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@@ -359,15 +357,14 @@ def test_single_job_dependencies_inventory_update_launch(controlplane_instance_g
i.inventory_sources.add(ii)
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
dm = DependencyManager()
with mock.patch.object(DependencyManager, "create_inventory_update", wraps=dm.create_inventory_update) as mock_iu:
dm.schedule()
mock_iu.assert_called_once_with(j, ii)
iu = [x for x in ii.inventory_updates.all()]
assert len(iu) == 1
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(iu[0], controlplane_instance_group, instance)
iu[0].status = "successful"
iu[0].save()
dm.schedule()
assert ii.inventory_updates.count() == 1
iu = [x for x in ii.inventory_updates.all()]
assert len(iu) == 1
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(iu[0], controlplane_instance_group, instance)
iu[0].status = "successful"
iu[0].save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@@ -382,11 +379,11 @@ def test_inventory_update_launches_project_update(controlplane_instance_group, s
iu = ii.create_inventory_update()
iu.status = "pending"
iu.save()
assert project.project_updates.count() == 0
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
dm = DependencyManager()
with mock.patch.object(DependencyManager, "create_project_update", wraps=dm.create_project_update) as mock_pu:
dm.schedule()
mock_pu.assert_called_with(iu, project_id=project.id)
dm.schedule()
assert project.project_updates.count() == 1
@pytest.mark.django_db
@@ -407,9 +404,8 @@ def test_job_dependency_with_already_updated(controlplane_instance_group, job_te
j.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
dm = DependencyManager()
with mock.patch.object(DependencyManager, "create_inventory_update", wraps=dm.create_inventory_update) as mock_iu:
dm.schedule()
mock_iu.assert_not_called()
dm.schedule()
assert ii.inventory_updates.count() == 0
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@@ -442,7 +438,9 @@ def test_shared_dependencies_launch(controlplane_instance_group, job_template_fa
TaskManager().schedule()
pu = p.project_updates.first()
iu = ii.inventory_updates.first()
TaskManager.start_task.assert_has_calls([mock.call(iu, controlplane_instance_group, instance), mock.call(pu, controlplane_instance_group, instance)])
TaskManager.start_task.assert_has_calls(
[mock.call(iu, controlplane_instance_group, instance), mock.call(pu, controlplane_instance_group, instance)], any_order=True
)
pu.status = "successful"
pu.finished = pu.created + timedelta(seconds=1)
pu.save()
@@ -451,7 +449,9 @@ def test_shared_dependencies_launch(controlplane_instance_group, job_template_fa
iu.save()
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, controlplane_instance_group, instance), mock.call(j2, controlplane_instance_group, instance)])
TaskManager.start_task.assert_has_calls(
[mock.call(j1, controlplane_instance_group, instance), mock.call(j2, controlplane_instance_group, instance)], any_order=True
)
pu = [x for x in p.project_updates.all()]
iu = [x for x in ii.inventory_updates.all()]
assert len(pu) == 1


@@ -3,6 +3,7 @@ import multiprocessing
import random
import signal
import time
import yaml
from unittest import mock
from django.utils.timezone import now as tz_now
@@ -13,6 +14,7 @@ from awx.main.dispatch import reaper
from awx.main.dispatch.pool import StatefulPoolWorker, WorkerPool, AutoscalePool
from awx.main.dispatch.publish import task
from awx.main.dispatch.worker import BaseWorker, TaskWorker
from awx.main.dispatch.periodic import Scheduler
'''
@@ -439,3 +441,76 @@ class TestJobReaper(object):
assert job.started > ref_time
assert job.status == 'running'
assert job.job_explanation == ''
@pytest.mark.django_db
class TestScheduler:
def test_too_many_schedules_freak_out(self):
with pytest.raises(RuntimeError):
Scheduler({'job1': {'schedule': datetime.timedelta(seconds=1)}, 'job2': {'schedule': datetime.timedelta(seconds=1)}})
def test_spread_out(self):
scheduler = Scheduler(
{
'job1': {'schedule': datetime.timedelta(seconds=16)},
'job2': {'schedule': datetime.timedelta(seconds=16)},
'job3': {'schedule': datetime.timedelta(seconds=16)},
'job4': {'schedule': datetime.timedelta(seconds=16)},
}
)
assert [job.offset for job in scheduler.jobs] == [0, 4, 8, 12]
def test_missed_schedule(self, mocker):
scheduler = Scheduler({'job1': {'schedule': datetime.timedelta(seconds=10)}})
assert scheduler.jobs[0].missed_runs(time.time() - scheduler.global_start) == 0
mocker.patch('awx.main.dispatch.periodic.time.time', return_value=scheduler.global_start + 50)
scheduler.get_and_mark_pending()
assert scheduler.jobs[0].missed_runs(50) > 1
def test_advance_schedule(self, mocker):
scheduler = Scheduler(
{
'job1': {'schedule': datetime.timedelta(seconds=30)},
'joba': {'schedule': datetime.timedelta(seconds=20)},
'jobb': {'schedule': datetime.timedelta(seconds=20)},
}
)
for job in scheduler.jobs:
# HACK: the automatically added offsets make this a hard test to write, so remove them
job.offset = 0.0
mocker.patch('awx.main.dispatch.periodic.time.time', return_value=scheduler.global_start + 29)
to_run = scheduler.get_and_mark_pending()
assert set(job.name for job in to_run) == set(['joba', 'jobb'])
mocker.patch('awx.main.dispatch.periodic.time.time', return_value=scheduler.global_start + 39)
to_run = scheduler.get_and_mark_pending()
assert len(to_run) == 1
assert to_run[0].name == 'job1'
@staticmethod
def get_job(scheduler, name):
for job in scheduler.jobs:
if job.name == name:
return job
def test_scheduler_debug(self, mocker):
scheduler = Scheduler(
{
'joba': {'schedule': datetime.timedelta(seconds=20)},
'jobb': {'schedule': datetime.timedelta(seconds=50)},
'jobc': {'schedule': datetime.timedelta(seconds=500)},
'jobd': {'schedule': datetime.timedelta(seconds=20)},
}
)
rel_time = 119.9 # slightly under the 6th 20-second bin, to avoid offset problems
current_time = scheduler.global_start + rel_time
mocker.patch('awx.main.dispatch.periodic.time.time', return_value=current_time - 1.0e-8)
self.get_job(scheduler, 'jobb').mark_run(rel_time)
self.get_job(scheduler, 'jobd').mark_run(rel_time - 20.0)
output = scheduler.debug()
data = yaml.safe_load(output)
assert data['schedule_list']['jobc']['last_run_seconds_ago'] is None
assert data['schedule_list']['joba']['missed_runs'] == 4
assert data['schedule_list']['jobd']['missed_runs'] == 3
assert data['schedule_list']['jobd']['completed_runs'] == 1
assert data['schedule_list']['jobb']['next_run_in_seconds'] > 25.0
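For orientation, the assertions above imply two behaviours: jobs that share an interval are staggered evenly across it (four 16-second jobs end up with offsets 0, 4, 8 and 12), and missed_runs counts how many whole periods elapsed without a completed run. A minimal sketch of that bookkeeping, using hypothetical names rather than the real awx.main.dispatch.periodic API:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PeriodicJob:
    name: str
    interval: float                    # seconds between runs
    offset: float = 0.0                # stagger within the interval
    last_run: Optional[float] = None   # relative time of the last completed run

    def missed_runs(self, rel_time: float) -> int:
        # whole periods that elapsed beyond the single run we expected
        reference = self.last_run if self.last_run is not None else self.offset
        return max(int((rel_time - reference) // self.interval) - 1, 0)

def spread_offsets(jobs: List[PeriodicJob]) -> None:
    # jobs sharing an interval are spread evenly across it,
    # e.g. four 16-second jobs get offsets 0, 4, 8 and 12
    for i, job in enumerate(jobs):
        job.offset = i * job.interval / len(jobs)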


@@ -6,6 +6,7 @@ import json
from awx.main.models import (
Job,
Instance,
Host,
JobHostSummary,
InventoryUpdate,
InventorySource,
@@ -18,6 +19,9 @@ from awx.main.models import (
ExecutionEnvironment,
)
from awx.main.tasks.system import cluster_node_heartbeat
from awx.main.tasks.facts import update_hosts
from django.db import OperationalError
from django.test.utils import override_settings
@@ -112,6 +116,51 @@ def test_job_notification_host_data(inventory, machine_credential, project, job_
}
@pytest.mark.django_db
class TestAnsibleFactsSave:
current_call = 0
def test_update_hosts_deleted_host(self, inventory):
hosts = [Host.objects.create(inventory=inventory, name=f'foo{i}') for i in range(3)]
for host in hosts:
host.ansible_facts = {'foo': 'bar'}
last_pk = hosts[-1].pk
assert inventory.hosts.count() == 3
Host.objects.get(pk=last_pk).delete()
assert inventory.hosts.count() == 2
update_hosts(hosts)
assert inventory.hosts.count() == 2
for host in inventory.hosts.all():
host.refresh_from_db()
assert host.ansible_facts == {'foo': 'bar'}
def test_update_hosts_forever_deadlock(self, inventory, mocker):
hosts = [Host.objects.create(inventory=inventory, name=f'foo{i}') for i in range(3)]
for host in hosts:
host.ansible_facts = {'foo': 'bar'}
db_mock = mocker.patch('awx.main.tasks.facts.Host.objects.bulk_update')
db_mock.side_effect = OperationalError('deadlock detected')
with pytest.raises(OperationalError):
update_hosts(hosts)
def fake_bulk_update(self, host_list):
if self.current_call > 2:
return Host.objects.bulk_update(host_list, ['ansible_facts', 'ansible_facts_modified'])
self.current_call += 1
raise OperationalError('deadlock detected')
def test_update_hosts_resolved_deadlock(self, inventory, mocker):
hosts = [Host.objects.create(inventory=inventory, name=f'foo{i}') for i in range(3)]
for host in hosts:
host.ansible_facts = {'foo': 'bar'}
self.current_call = 0
mocker.patch('awx.main.tasks.facts.raw_update_hosts', new=self.fake_bulk_update)
update_hosts(hosts)
for host in inventory.hosts.all():
host.refresh_from_db()
assert host.ansible_facts == {'foo': 'bar'}
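These tests pin down the retry behaviour: a deadlock that never clears eventually re-raises OperationalError, while one that clears after a few attempts lets update_hosts succeed and persist the facts. A minimal sketch of that retry pattern, with a hypothetical helper name rather than the actual awx.main.tasks.facts code:

import time
from django.db import OperationalError

def update_with_retries(raw_update, host_list, attempts=4, delay=0.1):
    # Retry the bulk update when the database reports a deadlock; re-raise
    # once the attempts are exhausted so callers still see the failure.
    for attempt in range(attempts):
        try:
            return raw_update(host_list)
        except OperationalError as exc:
            if 'deadlock' not in str(exc) or attempt == attempts - 1:
                raise
            time.sleep(delay * (attempt + 1))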
@pytest.mark.django_db
class TestLaunchConfig:
def test_null_creation_from_prompts(self):


@@ -283,6 +283,7 @@ class LogstashFormatter(LogstashFormatterBase):
message.update(self.get_debug_fields(record))
if settings.LOG_AGGREGATOR_TYPE == 'splunk':
# splunk messages must have a top level "event" key
message = {'event': message}
# splunk messages must have a top level "event" key when using the /services/collector/event receiver.
# The event receiver won't scan an event for a timestamp field, so a time field containing the epoch timestamp must also be supplied
message = {'time': record.created, 'event': message}
return self.serialize(message)
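With this change, a record bound for the Splunk HEC /services/collector/event endpoint serializes to roughly the following shape, where "time" carries record.created (epoch seconds) and "event" wraps the original message; the field values below are illustrative only:

splunk_payload = {
    "time": 1692633396.123,
    "event": {
        "message": "Job 42 finished",
        "level": "INFO",
        "logger_name": "awx.main.tasks",
    },
}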


@@ -97,8 +97,6 @@ class SpecialInventoryHandler(logging.Handler):
self.event_handler(dispatch_data)
ColorHandler = logging.StreamHandler
if settings.COLOR_LOGS is True:
try:
from logutils.colorize import ColorizingStreamHandler
@@ -133,3 +131,5 @@ if settings.COLOR_LOGS is True:
except ImportError:
# logutils is only used for colored logs in the dev environment
pass
else:
ColorHandler = logging.StreamHandler


@@ -175,7 +175,12 @@ class Licenser(object):
license.setdefault('pool_id', sub['pool']['id'])
license.setdefault('product_name', sub['pool']['productName'])
license.setdefault('valid_key', True)
license.setdefault('license_type', 'enterprise')
if sub['pool']['productId'].startswith('S'):
license.setdefault('trial', True)
license.setdefault('license_type', 'trial')
else:
license.setdefault('trial', False)
license.setdefault('license_type', 'enterprise')
license.setdefault('satellite', False)
# Use the nearest end date
endDate = parse_date(sub['endDate'])
@@ -287,7 +292,7 @@ class Licenser(object):
license['productId'] = sub['product_id']
license['quantity'] = int(sub['quantity'])
license['support_level'] = sub['support_level']
license['usage'] = sub['usage']
license['usage'] = sub.get('usage')
license['subscription_name'] = sub['name']
license['subscriptionId'] = sub['subscription_id']
license['accountNumber'] = sub['account_number']


@@ -189,11 +189,12 @@
connection: local
name: Install content with ansible-galaxy command if necessary
vars:
galaxy_task_env: # configure in settings
additional_collections_env:
# These environment variables are used for installing collections, in addition to galaxy_task_env
# setting the collections paths silences warnings
galaxy_task_env: # configured in settings
# additional_galaxy_env contains environment variables that are used for installing roles and collections and will take precedence over items in galaxy_task_env
additional_galaxy_env:
# These paths control where ansible-galaxy installs collections and roles on the filesystem
ANSIBLE_COLLECTIONS_PATHS: "{{ projects_root }}/.__awx_cache/{{ local_path }}/stage/requirements_collections"
ANSIBLE_ROLES_PATH: "{{ projects_root }}/.__awx_cache/{{ local_path }}/stage/requirements_roles"
# Put the local tmp directory in the same volume as the collection destination;
# otherwise, files cannot be moved across volumes and errors will result
ANSIBLE_LOCAL_TEMP: "{{ projects_root }}/.__awx_cache/{{ local_path }}/stage/tmp"
@@ -212,40 +213,53 @@
- name: End play due to disabled content sync
ansible.builtin.meta: end_play
- name: Fetch galaxy roles from requirements.(yml/yaml)
ansible.builtin.command: >
ansible-galaxy role install -r {{ item }}
--roles-path {{ projects_root }}/.__awx_cache/{{ local_path }}/stage/requirements_roles
{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{ project_path | quote }}"
register: galaxy_result
with_fileglob:
- "{{ project_path | quote }}/roles/requirements.yaml"
- "{{ project_path | quote }}/roles/requirements.yml"
changed_when: "'was installed successfully' in galaxy_result.stdout"
environment: "{{ galaxy_task_env }}"
when: roles_enabled | bool
tags:
- install_roles
- block:
- name: Fetch galaxy roles from roles/requirements.(yml/yaml)
ansible.builtin.command:
cmd: "ansible-galaxy role install -r {{ item }} {{ verbosity }}"
register: galaxy_result
with_fileglob:
- "{{ project_path | quote }}/roles/requirements.yaml"
- "{{ project_path | quote }}/roles/requirements.yml"
changed_when: "'was installed successfully' in galaxy_result.stdout"
when: roles_enabled | bool
tags:
- install_roles
- name: Fetch galaxy collections from collections/requirements.(yml/yaml)
ansible.builtin.command: >
ansible-galaxy collection install -r {{ item }}
--collections-path {{ projects_root }}/.__awx_cache/{{ local_path }}/stage/requirements_collections
{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{ project_path | quote }}"
register: galaxy_collection_result
with_fileglob:
- "{{ project_path | quote }}/collections/requirements.yaml"
- "{{ project_path | quote }}/collections/requirements.yml"
- "{{ project_path | quote }}/requirements.yaml"
- "{{ project_path | quote }}/requirements.yml"
changed_when: "'Installing ' in galaxy_collection_result.stdout"
environment: "{{ additional_collections_env | combine(galaxy_task_env) }}"
when:
- "ansible_version.full is version_compare('2.9', '>=')"
- collections_enabled | bool
tags:
- install_collections
- name: Fetch galaxy collections from collections/requirements.(yml/yaml)
ansible.builtin.command:
cmd: "ansible-galaxy collection install -r {{ item }} {{ verbosity }}"
register: galaxy_collection_result
with_fileglob:
- "{{ project_path | quote }}/collections/requirements.yaml"
- "{{ project_path | quote }}/collections/requirements.yml"
changed_when: "'Nothing to do.' not in galaxy_collection_result.stdout"
when:
- "ansible_version.full is version_compare('2.9', '>=')"
- collections_enabled | bool
tags:
- install_collections
- name: Fetch galaxy roles and collections from requirements.(yml/yaml)
ansible.builtin.command:
cmd: "ansible-galaxy install -r {{ item }} {{ verbosity }}"
register: galaxy_combined_result
with_fileglob:
- "{{ project_path | quote }}/requirements.yaml"
- "{{ project_path | quote }}/requirements.yml"
changed_when: "'Nothing to do.' not in galaxy_combined_result.stdout"
when:
- "ansible_version.full is version_compare('2.10', '>=')"
- collections_enabled | bool
- roles_enabled | bool
tags:
- install_collections
- install_roles
module_defaults:
ansible.builtin.command:
chdir: "{{ project_path | quote }}"
# We combine our additional_galaxy_env into galaxy_task_env so that our values are preferred over anything a user would set
environment: "{{ galaxy_task_env | combine(additional_galaxy_env) }}"
vars:
verbosity: "{{ (ansible_verbosity) | ternary('-'+'v'*ansible_verbosity, '') }}"


@@ -158,6 +158,11 @@ REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST']
# REMOTE_HOST_HEADERS will be trusted unconditionally
PROXY_IP_ALLOWED_LIST = []
# If we are behind a reverse proxy/load balancer, use this setting to
# allow the scheme://addresses from which Tower should trust CSRF requests
# If this setting is an empty list (the default), we will only trust ourselves
CSRF_TRUSTED_ORIGINS = []
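When AWX is reached through a reverse proxy or served from a different origin, CSRF_TRUSTED_ORIGINS would be overridden with full scheme://host entries, for example (hypothetical hosts):

# example override in a local settings snippet; entries must include the scheme
CSRF_TRUSTED_ORIGINS = ['https://awx.example.com', 'https://tower.internal:8043']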
CUSTOM_VENV_PATHS = []
# Warning: this is a placeholder for a database setting
@@ -205,7 +210,7 @@ JOB_EVENT_WORKERS = 4
# The number of seconds to buffer callback receiver bulk
# writes in memory before flushing via JobEvent.objects.bulk_create()
JOB_EVENT_BUFFER_SECONDS = 0.1
JOB_EVENT_BUFFER_SECONDS = 1
# The interval at which callback receiver statistics should be
# recorded
@@ -322,7 +327,6 @@ INSTALLED_APPS = [
'rest_framework',
'django_extensions',
'polymorphic',
'taggit',
'social_django',
'django_guid',
'corsheaders',
@@ -449,7 +453,7 @@ RECEPTOR_SERVICE_ADVERTISEMENT_PERIOD = 60 # https://github.com/ansible/recepto
EXECUTION_NODE_REMEDIATION_CHECKS = 60 * 30  # check every 30 minutes whether an execution node's errors have been resolved
# Amount of time dispatcher will try to reconnect to database for jobs and consuming new work
DISPATCHER_DB_DOWNTOWN_TOLLERANCE = 40
DISPATCHER_DB_DOWNTIME_TOLERANCE = 40
BROKER_URL = 'unix:///var/run/redis/redis.sock'
CELERYBEAT_SCHEDULE = {
@@ -466,12 +470,13 @@ CELERYBEAT_SCHEDULE = {
'receptor_reaper': {'task': 'awx.main.tasks.system.awx_receptor_workunit_reaper', 'schedule': timedelta(seconds=60)},
'send_subsystem_metrics': {'task': 'awx.main.analytics.analytics_tasks.send_subsystem_metrics', 'schedule': timedelta(seconds=20)},
'cleanup_images': {'task': 'awx.main.tasks.system.cleanup_images_and_files', 'schedule': timedelta(hours=3)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.system.cleanup_host_metrics', 'schedule': timedelta(days=1)},
'cleanup_host_metrics': {'task': 'awx.main.tasks.system.cleanup_host_metrics', 'schedule': timedelta(hours=3, minutes=30)},
'host_metric_summary_monthly': {'task': 'awx.main.tasks.host_metrics.host_metric_summary_monthly', 'schedule': timedelta(hours=4)},
}
# Django Caching Configuration
DJANGO_REDIS_IGNORE_EXCEPTIONS = True
CACHES = {'default': {'BACKEND': 'django_redis.cache.RedisCache', 'LOCATION': 'unix:/var/run/redis/redis.sock?db=1'}}
CACHES = {'default': {'BACKEND': 'awx.main.cache.AWXRedisCache', 'LOCATION': 'unix:/var/run/redis/redis.sock?db=1'}}
# Social Auth configuration.
SOCIAL_AUTH_STRATEGY = 'social_django.strategy.DjangoStrategy'
@@ -959,6 +964,9 @@ AWX_RUNNER_KEEPALIVE_SECONDS = 0
# Delete completed work units in receptor
RECEPTOR_RELEASE_WORK = True
# K8S only. Use receptor_log_level on AWX spec to set this properly
RECEPTOR_LOG_LEVEL = 'info'
MIDDLEWARE = [
'django_guid.middleware.guid_middleware',
'awx.main.middleware.SettingsCacheMiddleware',
@@ -1046,4 +1054,12 @@ CLEANUP_HOST_METRICS_LAST_TS = None
# Host metrics cleanup - minimal interval between two cleanups in days
CLEANUP_HOST_METRICS_INTERVAL = 30 # days
# Host metrics cleanup - soft-delete HostMetric records with last_automation < [threshold] (in months)
CLEANUP_HOST_METRICS_THRESHOLD = 12 # months
CLEANUP_HOST_METRICS_SOFT_THRESHOLD = 12 # months
# Host metrics cleanup
# - delete HostMetric record with deleted=True and last_deleted < [threshold]
# - also threshold for computing HostMetricSummaryMonthly (command/scheduled task)
CLEANUP_HOST_METRICS_HARD_THRESHOLD = 36 # months
# Host metric summary monthly task - last time of run
HOST_METRIC_SUMMARY_TASK_LAST_TS = None
HOST_METRIC_SUMMARY_TASK_INTERVAL = 7 # days
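Taken together, the soft and hard thresholds above describe a two-stage cleanup: records idle past the soft threshold are flagged deleted, and records that have stayed deleted past the hard threshold are removed for good. A rough sketch of that relationship (hypothetical query code, not the actual cleanup task):

from dateutil.relativedelta import relativedelta
from django.conf import settings
from django.utils import timezone
from awx.main.models import HostMetric

now = timezone.now()
soft_cutoff = now - relativedelta(months=settings.CLEANUP_HOST_METRICS_SOFT_THRESHOLD)
hard_cutoff = now - relativedelta(months=settings.CLEANUP_HOST_METRICS_HARD_THRESHOLD)

# soft delete: hosts not automated for longer than the soft threshold
HostMetric.objects.filter(last_automation__lt=soft_cutoff, deleted=False).update(deleted=True, last_deleted=now)
# hard delete: hosts already soft-deleted for longer than the hard threshold
HostMetric.objects.filter(deleted=True, last_deleted__lt=hard_cutoff).delete()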


@@ -28,8 +28,8 @@ SHELL_PLUS_PRINT_SQL = False
# show colored logs in the dev environment
# to disable this, set `COLOR_LOGS = False` in awx/settings/local_settings.py
LOGGING['handlers']['console']['()'] = 'awx.main.utils.handlers.ColorHandler' # noqa
COLOR_LOGS = True
LOGGING['handlers']['console']['()'] = 'awx.main.utils.handlers.ColorHandler' # noqa
ALLOWED_HOSTS = ['*']


@@ -19,7 +19,7 @@
<input type="text" name="username" maxlength="100"
autocapitalize="off"
autocorrect="off" class="form-control textinput textInput"
id="id_username" required autofocus
id="id_username" autocomplete="off" required autofocus
{% if form.username.value %}value="{{ form.username.value }}"{% endif %}>
{% if form.username.errors %}
<p class="text-error">{{ form.username.errors|striptags }}</p>
@@ -31,7 +31,8 @@
<div class="form-group">
<label for="id_password">Password:</label>
<input type="password" name="password" maxlength="100" autocapitalize="off"
autocorrect="off" class="form-control textinput textInput" id="id_password" required>
autocorrect="off" class="form-control textinput textInput" id="id_password"
autocomplete="off" required>
{% if form.password.errors %}
<p class="text-error">{{ form.password.errors|striptags }}</p>
{% endif %}


@@ -6,5 +6,20 @@ class ConstructedInventories extends InstanceGroupsMixin(Base) {
super(http);
this.baseUrl = 'api/v2/constructed_inventories/';
}
async readConstructedInventoryOptions(id, method) {
const {
data: { actions },
} = await this.http.options(`${this.baseUrl}${id}/`);
if (actions[method]) {
return actions[method];
}
throw new Error(
`You have insufficient access to this Constructed Inventory.
Please contact your system administrator if there is an issue with your access.`
);
}
}
export default ConstructedInventories;


@@ -0,0 +1,51 @@
import ConstructedInventories from './ConstructedInventories';
describe('ConstructedInventoriesAPI', () => {
const constructedInventoryId = 1;
const constructedInventoryMethod = 'PUT';
let ConstructedInventoriesAPI;
let mockHttp;
beforeEach(() => {
const optionsPromise = () =>
Promise.resolve({
data: {
actions: {
PUT: {},
},
},
});
mockHttp = {
options: jest.fn(optionsPromise),
};
ConstructedInventoriesAPI = new ConstructedInventories(mockHttp);
});
afterEach(() => {
jest.resetAllMocks();
});
test('readConstructedInventoryOptions calls options with the expected params', async () => {
await ConstructedInventoriesAPI.readConstructedInventoryOptions(
constructedInventoryId,
constructedInventoryMethod
);
expect(mockHttp.options).toHaveBeenCalledTimes(1);
expect(mockHttp.options).toHaveBeenCalledWith(
`api/v2/constructed_inventories/${constructedInventoryId}/`
);
});
test('readConstructedInventory should throw an error if action method is missing', async () => {
try {
await ConstructedInventoriesAPI.readConstructedInventoryOptions(
constructedInventoryId,
'POST'
);
} catch (error) {
expect(error.message).toContain(
'You have insufficient access to this Constructed Inventory.'
);
}
});
});


@@ -91,7 +91,7 @@ function AdHocCredentialStep({ credentialTypeId }) {
{meta.touched && meta.error && (
<CredentialErrorAlert variant="danger" isInline title={meta.error} />
)}
<Form>
<Form autoComplete="off">
<FormGroup
fieldId="credential"
label={t`Machine Credential`}


@@ -50,7 +50,7 @@ function AdHocDetailsStep({ moduleOptions }) {
: true;
return (
<Form>
<Form autoComplete="off">
<FormColumnLayout>
<FormFullWidthLayout>
<FormGroup


@@ -84,7 +84,7 @@ function AdHocExecutionEnvironmentStep({ organizationId }) {
}
return (
<Form>
<Form autoComplete="off">
<FormGroup
fieldId="execution_enviroment"
label={t`Execution Environment`}


@@ -50,7 +50,7 @@ const userSortColumns = [
const teamSearchColumns = [
{
name: t`Name`,
key: 'name',
key: 'name__icontains',
isDefault: true,
},
{


@@ -77,7 +77,7 @@ function PromptModalForm({
}
if (launchConfig.ask_labels_on_launch) {
const { labelIds } = createNewLabels(
const { labelIds } = await createNewLabels(
values.labels,
resource.organization
);


@@ -1,10 +1,10 @@
import React, { useCallback, useEffect, useState } from 'react';
import { useLocation } from 'react-router-dom';
import { t } from '@lingui/macro';
import { RolesAPI, TeamsAPI, UsersAPI, OrganizationsAPI } from 'api';
import { RolesAPI, TeamsAPI, UsersAPI } from 'api';
import { getQSConfig, parseQueryString } from 'util/qs';
import useRequest, { useDeleteItems } from 'hooks/useRequest';
import { useUserProfile, useConfig } from 'contexts/Config';
import { useUserProfile } from 'contexts/Config';
import AddResourceRole from '../AddRole/AddResourceRole';
import AlertModal from '../AlertModal';
import DataListToolbar from '../DataListToolbar';
@@ -25,8 +25,7 @@ const QS_CONFIG = getQSConfig('access', {
});
function ResourceAccessList({ apiModel, resource }) {
const { isSuperUser, isOrgAdmin } = useUserProfile();
const { me } = useConfig();
const { isSuperUser } = useUserProfile();
const [submitError, setSubmitError] = useState(null);
const [deletionRecord, setDeletionRecord] = useState(null);
const [deletionRole, setDeletionRole] = useState(null);
@@ -34,42 +33,15 @@ function ResourceAccessList({ apiModel, resource }) {
const [showDeleteModal, setShowDeleteModal] = useState(false);
const location = useLocation();
const {
isLoading: isFetchingOrgAdmins,
error: errorFetchingOrgAdmins,
request: fetchOrgAdmins,
result: { isCredentialOrgAdmin },
} = useRequest(
useCallback(async () => {
if (
isSuperUser ||
resource.type !== 'credential' ||
!isOrgAdmin ||
!resource?.organization
) {
return false;
}
const {
data: { count },
} = await OrganizationsAPI.readAdmins(resource.organization, {
id: me.id,
});
return { isCredentialOrgAdmin: !!count };
}, [me.id, isOrgAdmin, isSuperUser, resource.type, resource.organization]),
{
isCredentialOrgAdmin: false,
}
);
useEffect(() => {
fetchOrgAdmins();
}, [fetchOrgAdmins]);
let canAddAdditionalControls = false;
if (isSuperUser) {
canAddAdditionalControls = true;
}
if (resource.type === 'credential' && isOrgAdmin && isCredentialOrgAdmin) {
if (
resource.type === 'credential' &&
resource?.summary_fields?.user_capabilities?.edit &&
resource?.organization
) {
canAddAdditionalControls = true;
}
if (resource.type !== 'credential') {
@@ -195,8 +167,8 @@ function ResourceAccessList({ apiModel, resource }) {
return (
<>
<PaginatedTable
error={contentError || errorFetchingOrgAdmins}
hasContentLoading={isLoading || isDeleteLoading || isFetchingOrgAdmins}
error={contentError}
hasContentLoading={isLoading || isDeleteLoading}
items={accessRecords}
itemCount={itemCount}
pluralizedItemName={t`Roles`}


@@ -463,7 +463,7 @@ describe('<ResourceAccessList />', () => {
expect(wrapper.find('ToolbarAddButton').length).toEqual(1);
});
test('should not show add button for non system admin & non org admin', async () => {
test('should not show add button for a user without edit permissions on the credential', async () => {
useUserProfile.mockImplementation(() => {
return {
isSuperUser: false,
@@ -476,7 +476,21 @@ describe('<ResourceAccessList />', () => {
let wrapper;
await act(async () => {
wrapper = mountWithContexts(
<ResourceAccessList resource={credential} apiModel={CredentialsAPI} />,
<ResourceAccessList
resource={{
...credential,
summary_fields: {
...credential.summary_fields,
user_capabilities: {
edit: false,
delete: false,
copy: false,
use: false,
},
},
}}
apiModel={CredentialsAPI}
/>,
{ context: { router: { credentialHistory } } }
);
});


@@ -94,7 +94,7 @@ export default function FrequencyDetails({
value={getRunEveryLabel()}
dataCy={`${prefix}-run-every`}
/>
{type === 'week' ? (
{type === 'week' && options.daysOfWeek ? (
<Detail
label={t`On days`}
value={options.daysOfWeek


@@ -24,10 +24,10 @@ function DateTimePicker({ dateFieldName, timeFieldName, label }) {
validate: combine([required(null), validateTime()]),
});
const onDateChange = (inputDate, newDate) => {
const onDateChange = (_, dateString, date) => {
dateHelpers.setTouched();
if (isValidDate(newDate) && inputDate === yyyyMMddFormat(newDate)) {
dateHelpers.setValue(inputDate);
if (isValidDate(date) && dateString === yyyyMMddFormat(date)) {
dateHelpers.setValue(dateString);
}
};
@@ -62,7 +62,7 @@ function DateTimePicker({ dateFieldName, timeFieldName, label }) {
}
time={timeField.value}
{...timeField}
onChange={(time) => timeHelpers.setValue(time)}
onChange={(_, time) => timeHelpers.setValue(time)}
/>
</DateTimeGroup>
</FormGroup>


@@ -43,10 +43,11 @@ describe('<DateTimePicker/>', () => {
await act(async () => {
wrapper.find('DatePicker').prop('onChange')(
null,
'2021-05-29',
new Date('Sat May 29 2021 00:00:00 GMT-0400 (Eastern Daylight Time)')
);
wrapper.find('TimePicker').prop('onChange')('7:15 PM');
wrapper.find('TimePicker').prop('onChange')(null, '7:15 PM');
});
wrapper.update();
expect(wrapper.find('DatePicker').prop('value')).toBe('2021-05-29');


@@ -885,6 +885,7 @@ describe('<ScheduleForm />', () => {
).toBe(true);
await act(async () => {
wrapper.find('DatePicker[aria-label="End date"]').prop('onChange')(
null,
'2020-03-14',
new Date('2020-03-14')
);
@@ -905,6 +906,7 @@ describe('<ScheduleForm />', () => {
const laterTime = DateTime.now().plus({ hours: 1 }).toFormat('h:mm a');
await act(async () => {
wrapper.find('DatePicker[aria-label="End date"]').prop('onChange')(
null,
today,
new Date(today)
);
@@ -919,6 +921,7 @@ describe('<ScheduleForm />', () => {
);
await act(async () => {
wrapper.find('TimePicker[aria-label="End time"]').prop('onChange')(
null,
laterTime
);
});


@@ -47,7 +47,7 @@ export default function StatusLabel({ status, tooltipContent = '', children }) {
unreachable: t`Unreachable`,
running: t`Running`,
pending: t`Pending`,
skipped: t`Skipped'`,
skipped: t`Skipped`,
timedOut: t`Timed out`,
waiting: t`Waiting`,
disabled: t`Disabled`,


@@ -8722,8 +8722,8 @@ msgid "Skipped"
msgstr "Skipped"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgstr "Skipped'"
msgid "Skipped"
msgstr "Skipped"
#: components/NotificationList/NotificationList.js:200
#: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141


@@ -8190,8 +8190,8 @@ msgid "Skipped"
msgstr "Omitido"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgstr "Omitido'"
msgid "Skipped"
msgstr "Omitido"
#: components/NotificationList/NotificationList.js:200
#: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141


@@ -8078,7 +8078,7 @@ msgid "Skipped"
msgstr "Ignoré"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgid "Skipped"
msgstr "Ignoré"
#: components/NotificationList/NotificationList.js:200


@@ -8118,8 +8118,8 @@ msgid "Skipped"
msgstr "スキップ済"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgstr "スキップ済'"
msgid "Skipped"
msgstr "スキップ済"
#: components/NotificationList/NotificationList.js:200
#: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141


@@ -8072,8 +8072,8 @@ msgid "Skipped"
msgstr "건너뜀"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgstr "건너뜀'"
msgid "Skipped"
msgstr "건너뜀"
#: components/NotificationList/NotificationList.js:200
#: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141


@@ -8096,8 +8096,8 @@ msgid "Skipped"
msgstr "Overgeslagen"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgstr "Overgeslagen'"
msgid "Skipped"
msgstr "Overgeslagen"
#: components/NotificationList/NotificationList.js:200
#: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141


@@ -8072,8 +8072,8 @@ msgid "Skipped"
msgstr "跳过"
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgstr "跳过'"
msgid "Skipped"
msgstr "跳过"
#: components/NotificationList/NotificationList.js:200
#: screens/NotificationTemplate/NotificationTemplateList/NotificationTemplateList.js:141


@@ -8503,7 +8503,7 @@ msgid "Skipped"
msgstr ""
#: components/StatusLabel/StatusLabel.js:50
msgid "Skipped'"
msgid "Skipped"
msgstr ""
#: components/NotificationList/NotificationList.js:200


@@ -67,7 +67,7 @@ function MetadataStep() {
return (
<>
{fields.length > 0 && (
<Form>
<Form autoComplete="off">
<FormFullWidthLayout>
{fields.map((field) => {
if (field.type === 'string') {


@@ -99,7 +99,7 @@ function ExternalTestModal({
</Button>,
]}
>
<Form>
<Form autoComplete="off">
<FormFullWidthLayout>
{credentialType.inputs.metadata.map((field) => {
const isRequired = credentialType.inputs?.required.includes(


@@ -1,14 +1,43 @@
import React, { useState } from 'react';
import React, { useCallback, useEffect, useState } from 'react';
import { useHistory } from 'react-router-dom';
import { Card, PageSection } from '@patternfly/react-core';
import { ConstructedInventoriesAPI, InventoriesAPI } from 'api';
import useRequest from 'hooks/useRequest';
import { CardBody } from 'components/Card';
import ContentError from 'components/ContentError';
import ContentLoading from 'components/ContentLoading';
import ConstructedInventoryForm from '../shared/ConstructedInventoryForm';
function ConstructedInventoryAdd() {
const history = useHistory();
const [submitError, setSubmitError] = useState(null);
const {
isLoading: isLoadingOptions,
error: optionsError,
request: fetchOptions,
result: options,
} = useRequest(
useCallback(async () => {
const res = await ConstructedInventoriesAPI.readOptions();
const { data } = res;
return data.actions.POST;
}, []),
null
);
useEffect(() => {
fetchOptions();
}, [fetchOptions]);
if (isLoadingOptions || (!options && !optionsError)) {
return <ContentLoading />;
}
if (optionsError) {
return <ContentError error={optionsError} />;
}
const handleCancel = () => {
history.push('/inventories');
};
@@ -48,6 +77,7 @@ function ConstructedInventoryAdd() {
onCancel={handleCancel}
onSubmit={handleSubmit}
submitError={submitError}
options={options}
/>
</CardBody>
</Card>


@@ -55,6 +55,7 @@ describe('<ConstructedInventoryAdd />', () => {
context: { router: { history } },
});
});
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
});
afterEach(() => {


@@ -20,6 +20,27 @@ function ConstructedInventoryEdit({ inventory }) {
const detailsUrl = `/inventories/constructed_inventory/${inventory.id}/details`;
const constructedInventoryId = inventory.id;
const {
isLoading: isLoadingOptions,
error: optionsError,
request: fetchOptions,
result: options,
} = useRequest(
useCallback(
() =>
ConstructedInventoriesAPI.readConstructedInventoryOptions(
constructedInventoryId,
'PUT'
),
[constructedInventoryId]
),
null
);
useEffect(() => {
fetchOptions();
}, [fetchOptions]);
const {
result: { initialInstanceGroups, initialInputInventories },
request: fetchedRelatedData,
@@ -44,6 +65,7 @@ function ConstructedInventoryEdit({ inventory }) {
isLoading: true,
}
);
useEffect(() => {
fetchedRelatedData();
}, [fetchedRelatedData]);
@@ -99,12 +121,12 @@ function ConstructedInventoryEdit({ inventory }) {
const handleCancel = () => history.push(detailsUrl);
if (isLoading) {
return <ContentLoading />;
if (contentError || optionsError) {
return <ContentError error={contentError || optionsError} />;
}
if (contentError) {
return <ContentError error={contentError} />;
if (isLoading || isLoadingOptions || (!options && !optionsError)) {
return <ContentLoading />;
}
return (
@@ -116,6 +138,7 @@ function ConstructedInventoryEdit({ inventory }) {
constructedInventory={inventory}
instanceGroups={initialInstanceGroups}
inputInventories={initialInputInventories}
options={options}
/>
</CardBody>
);


@@ -51,27 +51,22 @@ describe('<ConstructedInventoryEdit />', () => {
};
beforeEach(async () => {
ConstructedInventoriesAPI.readOptions.mockResolvedValue({
data: {
related: {},
actions: {
POST: {
limit: {
label: 'Limit',
help_text: '',
},
update_cache_timeout: {
label: 'Update cache timeout',
help_text: 'help',
},
verbosity: {
label: 'Verbosity',
help_text: '',
},
},
ConstructedInventoriesAPI.readConstructedInventoryOptions.mockResolvedValue(
{
limit: {
label: 'Limit',
help_text: '',
},
},
});
update_cache_timeout: {
label: 'Update cache timeout',
help_text: 'help',
},
verbosity: {
label: 'Verbosity',
help_text: '',
},
}
);
InventoriesAPI.readInstanceGroups.mockResolvedValue({
data: {
results: associatedInstanceGroups,
@@ -169,6 +164,21 @@ describe('<ConstructedInventoryEdit />', () => {
expect(wrapper.find('ContentError').length).toBe(1);
});
test('should throw content error if user has insufficient options permissions', async () => {
expect(wrapper.find('ContentError').length).toBe(0);
ConstructedInventoriesAPI.readConstructedInventoryOptions.mockImplementationOnce(
() => Promise.reject(new Error())
);
await act(async () => {
wrapper = mountWithContexts(
<ConstructedInventoryEdit inventory={mockInv} />
);
});
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
expect(wrapper.find('ContentError').length).toBe(1);
});
test('unsuccessful form submission should show an error message', async () => {
const error = {
response: {


@@ -1,14 +1,10 @@
import React, { useCallback, useEffect } from 'react';
import React, { useCallback } from 'react';
import { Formik, useField, useFormikContext } from 'formik';
import { func, shape } from 'prop-types';
import { t } from '@lingui/macro';
import { ConstructedInventoriesAPI } from 'api';
import { minMaxValue, required } from 'util/validators';
import useRequest from 'hooks/useRequest';
import { Form, FormGroup } from '@patternfly/react-core';
import { VariablesField } from 'components/CodeEditor';
import ContentError from 'components/ContentError';
import ContentLoading from 'components/ContentLoading';
import FormActionGroup from 'components/FormActionGroup/FormActionGroup';
import FormField, { FormSubmitError } from 'components/FormField';
import { FormFullWidthLayout, FormColumnLayout } from 'components/FormLayout';
@@ -165,6 +161,7 @@ function ConstructedInventoryForm({
onCancel,
onSubmit,
submitError,
options,
}) {
const initialValues = {
kind: 'constructed',
@@ -179,32 +176,6 @@ function ConstructedInventoryForm({
source_vars: constructedInventory?.source_vars || '---',
};
const {
isLoading,
error,
request: fetchOptions,
result: options,
} = useRequest(
useCallback(async () => {
const res = await ConstructedInventoriesAPI.readOptions();
const { data } = res;
return data.actions.POST;
}, []),
null
);
useEffect(() => {
fetchOptions();
}, [fetchOptions]);
if (isLoading || (!options && !error)) {
return <ContentLoading />;
}
if (error) {
return <ContentError error={error} />;
}
return (
<Formik initialValues={initialValues} onSubmit={onSubmit}>
{(formik) => (


@@ -1,6 +1,5 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { ConstructedInventoriesAPI } from 'api';
import {
mountWithContexts,
waitForElement,
@@ -19,38 +18,35 @@ const mockFormValues = {
inputInventories: [{ id: 100, name: 'East' }],
};
const options = {
limit: {
label: 'Limit',
help_text: '',
},
update_cache_timeout: {
label: 'Update cache timeout',
help_text: 'help',
},
verbosity: {
label: 'Verbosity',
help_text: '',
},
};
describe('<ConstructedInventoryForm />', () => {
let wrapper;
const onSubmit = jest.fn();
beforeEach(async () => {
ConstructedInventoriesAPI.readOptions.mockResolvedValue({
data: {
related: {},
actions: {
POST: {
limit: {
label: 'Limit',
help_text: '',
},
update_cache_timeout: {
label: 'Update cache timeout',
help_text: 'help',
},
verbosity: {
label: 'Verbosity',
help_text: '',
},
},
},
},
});
await act(async () => {
wrapper = mountWithContexts(
<ConstructedInventoryForm onCancel={() => {}} onSubmit={onSubmit} />
<ConstructedInventoryForm
onCancel={() => {}}
onSubmit={onSubmit}
options={options}
/>
);
});
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
});
afterEach(() => {
@@ -104,20 +100,4 @@ describe('<ConstructedInventoryForm />', () => {
'The plugin parameter is required.'
);
});
test('should throw content error when option request fails', async () => {
let newWrapper;
ConstructedInventoriesAPI.readOptions.mockImplementationOnce(() =>
Promise.reject(new Error())
);
await act(async () => {
newWrapper = mountWithContexts(
<ConstructedInventoryForm onCancel={() => {}} onSubmit={() => {}} />
);
});
expect(newWrapper.find('ContentError').length).toBe(0);
newWrapper.update();
expect(newWrapper.find('ContentError').length).toBe(1);
jest.clearAllMocks();
});
});


@@ -122,7 +122,7 @@ function ConstructedInventoryHint() {
<br />
<Panel>
<CardBody>
<Form>
<Form autoComplete="off">
<b>{t`Constructed inventory examples`}</b>
<LimitToIntersectionExample />
<FilterOnNestedGroupExample />


@@ -178,6 +178,7 @@ function NotificationTemplatesList() {
<HeaderCell sortKey="name">{t`Name`}</HeaderCell>
<HeaderCell>{t`Status`}</HeaderCell>
<HeaderCell sortKey="notification_type">{t`Type`}</HeaderCell>
<HeaderCell sortKey="organization">{t`Organization`}</HeaderCell>
<HeaderCell>{t`Actions`}</HeaderCell>
</HeaderRow>
}


@@ -20,6 +20,10 @@ const mockTemplates = {
url: '/notification_templates/1',
type: 'slack',
summary_fields: {
organization: {
id: 1,
name: 'Foo',
},
recent_notifications: [
{
status: 'success',
@@ -36,6 +40,10 @@ const mockTemplates = {
id: 2,
url: '/notification_templates/2',
summary_fields: {
organization: {
id: 2,
name: 'Bar',
},
recent_notifications: [],
user_capabilities: {
delete: true,
@@ -48,6 +56,10 @@ const mockTemplates = {
id: 3,
url: '/notification_templates/3',
summary_fields: {
organization: {
id: 3,
name: 'Test',
},
recent_notifications: [
{
status: 'failed',


@@ -121,6 +121,13 @@ function NotificationTemplateListItem({
{NOTIFICATION_TYPES[template.notification_type] ||
template.notification_type}
</Td>
<Td dataLabel={t`Organization`}>
<Link
to={`/organizations/${template.summary_fields.organization.id}/details`}
>
<b>{template.summary_fields.organization.name}</b>
</Link>
</Td>
<ActionsTd dataLabel={t`Actions`}>
<ActionItem visible tooltip={t`Test notification`}>
<Button


@@ -12,6 +12,10 @@ const template = {
notification_type: 'slack',
name: 'Test Notification',
summary_fields: {
organization: {
id: 1,
name: 'Foo',
},
user_capabilities: {
edit: true,
copy: true,
@@ -39,7 +43,7 @@ describe('<NotificationTemplateListItem />', () => {
);
const cells = wrapper.find('Td');
expect(cells).toHaveLength(5);
expect(cells).toHaveLength(6);
expect(cells.at(1).text()).toEqual('Test Notification');
expect(cells.at(2).text()).toEqual('Success');
expect(cells.at(3).text()).toEqual('Slack');
@@ -133,6 +137,10 @@ describe('<NotificationTemplateListItem />', () => {
template={{
...template,
summary_fields: {
organization: {
id: 3,
name: 'Test',
},
user_capabilities: {
copy: false,
edit: false,


@@ -59,6 +59,7 @@ function MiscSystemDetail() {
'TOWER_URL_BASE',
'DEFAULT_EXECUTION_ENVIRONMENT',
'PROXY_IP_ALLOWED_LIST',
'CSRF_TRUSTED_ORIGINS',
'AUTOMATION_ANALYTICS_LAST_GATHER',
'AUTOMATION_ANALYTICS_LAST_ENTRIES',
'UI_NEXT'


@@ -29,6 +29,7 @@ describe('<MiscSystemDetail />', () => {
TOWER_URL_BASE: 'https://towerhost',
REMOTE_HOST_HEADERS: [],
PROXY_IP_ALLOWED_LIST: [],
CSRF_TRUSTED_ORIGINS: [],
LICENSE: null,
REDHAT_USERNAME: 'name1',
REDHAT_PASSWORD: '$encrypted$',


@@ -53,6 +53,7 @@ function MiscSystemEdit() {
'TOWER_URL_BASE',
'DEFAULT_EXECUTION_ENVIRONMENT',
'PROXY_IP_ALLOWED_LIST',
'CSRF_TRUSTED_ORIGINS',
'UI_NEXT'
);
@@ -95,6 +96,7 @@ function MiscSystemEdit() {
await submitForm({
...form,
PROXY_IP_ALLOWED_LIST: formatJson(form.PROXY_IP_ALLOWED_LIST),
CSRF_TRUSTED_ORIGINS: formatJson(form.CSRF_TRUSTED_ORIGINS),
REMOTE_HOST_HEADERS: formatJson(form.REMOTE_HOST_HEADERS),
DEFAULT_EXECUTION_ENVIRONMENT:
form.DEFAULT_EXECUTION_ENVIRONMENT?.id || null,
@@ -239,6 +241,11 @@ function MiscSystemEdit() {
config={system.PROXY_IP_ALLOWED_LIST}
isRequired
/>
<ObjectField
name="CSRF_TRUSTED_ORIGINS"
config={system.CSRF_TRUSTED_ORIGINS}
isRequired
/>
{submitError && <FormSubmitError error={submitError} />}
{revertError && <FormSubmitError error={revertError} />}
</FormColumnLayout>

View File

@@ -39,6 +39,7 @@ const systemData = {
REMOTE_HOST_HEADERS: ['REMOTE_ADDR', 'REMOTE_HOST'],
TOWER_URL_BASE: 'https://localhost:3000',
PROXY_IP_ALLOWED_LIST: [],
CSRF_TRUSTED_ORIGINS: [],
UI_NEXT: false,
};

Some files were not shown because too many files have changed in this diff.