Compare commits

...

96 Commits

Author SHA1 Message Date
Christian Adams
677187a43e Merge pull request #12096 from rooftopcellist/localization-devel-4-24
Localization Update & Add KO to supported languages
2022-04-25 10:24:49 -04:00
Christian M. Adams
972cb82d16 Fix Localization syntax errors 2022-04-24 01:18:37 -04:00
Christian M. Adams
3102df0bf6 Update Localization Strings & Add KO 2022-04-24 00:52:12 -04:00
Alan Rominger
cb63d92bbf Remove committed_capacity field, delete supporting code (#12086)
* Remove committed_capacity field, delete supporting code

* Track consumed capacity to solve the negatives problem

* Use more verbose name for IG queryset
2022-04-22 13:41:32 -04:00
John Westcott IV
c43424ed09 Refactoring release_process docs and updating images (#11981) 2022-04-22 12:42:12 -04:00
John Westcott IV
a0ccc8c925 Merge pull request #5784 from ansible/runner_changes_42 (#12083) 2022-04-22 10:46:35 -04:00
Sarah Akus
47160f0118 Merge pull request #12067 from ansible/dependabot/npm_and_yarn/awx/ui/minimist-1.2.6
Bump minimist from 1.2.5 to 1.2.6 in /awx/ui
2022-04-22 09:54:38 -04:00
Alan Rominger
44f0609314 Merge pull request #11996 from AlanCoding/blockhead
Remove unnecessary blocks from project update playbook
2022-04-21 13:58:48 -04:00
Elijah DeLee
689a216726 move static methods used by task manager (#12050)
* move static methods used by task manager

These static methods were being used to act on Instance-like objects
that were SimpleNamespace objects with the necessary attributes.

This change introduces dedicated classes to replace the SimpleNamespace
objects and moves the formerly static methods to a place where they are
more relevant, instead of tacking them onto models to which they were only
loosely related.

Accept in-memory data structure in init methods for tests

* initialize remaining capacity AFTER we built map of instances
2022-04-21 13:05:06 -04:00
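A minimal sketch of the pattern this commit describes, assuming illustrative attribute names (only the class names and the accept-in-memory-data behavior come from the commit message):

```python
# Illustrative sketch only; attributes beyond the commit message are assumptions.
class TaskManagerInstance:
    """Dedicated class replacing the SimpleNamespace stand-in for an Instance."""

    def __init__(self, hostname, capacity):
        self.hostname = hostname
        self.capacity = capacity
        self.consumed_capacity = 0

    @property
    def remaining_capacity(self):
        return max(self.capacity - self.consumed_capacity, 0)


class TaskManagerInstances:
    """Holds the hostname -> instance map; accepts in-memory data for tests."""

    def __init__(self, instances):
        self.instances_by_hostname = {
            inst.hostname: TaskManagerInstance(inst.hostname, inst.capacity)
            for inst in instances
        }

    def __getitem__(self, hostname):
        return self.instances_by_hostname[hostname]
```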
Alan Rominger
4b45148614 Merge pull request #12016 from Ladas/analytics_collector_should_collect_full_license_data
Analytics collector should collect full license data
2022-04-21 11:12:33 -04:00
Alan Rominger
c84e603ac5 Remove unnecessary blocks from project update playbook 2022-04-21 10:04:14 -04:00
Kersom
c7049e1a0e Merge pull request #12077 from nixocio/ui_fix_typo
Update strings
2022-04-21 08:48:33 -04:00
nixocio
0b4c3e3046 Update strings
Update strings
2022-04-20 14:51:08 -04:00
Sarah Akus
8a5fd11506 Merge pull request #12062 from nixocio/ui_issue_11770
Fix notification template details for system auditors
2022-04-20 14:14:43 -04:00
Alan Rominger
b565038fdf Merge pull request #12066 from AlanCoding/resolved_role
Ship the resolved_role event data to analytics
2022-04-20 11:00:21 -04:00
Keith Grant
526b1e692a remove output/stderr tabs from host detail modals when not present (#12064) 2022-04-19 17:17:37 -04:00
Seth Foster
c93155132a Merge pull request #12031 from fosterseth/awxkit_import_more_verbose_error
awxkit log which resource failed to import
2022-04-19 15:44:37 -04:00
Alex Corey
ae7960e9d7 Adds popover help text to project details, and unifies those strings (used in the form and the details view) into 1 file (#12039) 2022-04-19 14:35:51 -04:00
Jeff Bradberry
3a1268de1e Merge pull request #12068 from jbradberry/fix-event-partition-alignment-devel
Fix the job event partition alignment
2022-04-19 10:36:48 -04:00
Alex Corey
10042df309 Merge pull request #12069 from nixocio/ui_fix_code_details
Fix rows type for CodeDetails
2022-04-19 10:01:27 -04:00
Alan Rominger
2530ada9d7 Bump analytics event_table version 2022-04-18 16:49:53 -04:00
Jeff Bradberry
11890f0eee Fix the job event partition alignment
it really should always be aligned to the hour, so that real job
events don't slip through the cracks.
2022-04-18 14:54:06 -04:00
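For illustration, aligning to the hour amounts to truncating minutes and seconds on the boundary timestamp (a generic sketch, not the actual partition code):

```python
from datetime import datetime

# Generic sketch of hour alignment for a partition boundary.
ts = datetime(2022, 4, 18, 14, 54, 6)
aligned = ts.replace(minute=0, second=0, microsecond=0)
print(aligned)  # 2022-04-18 14:00:00
```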
nixocio
5cb3f31df0 Fix rows type for CodeDetails
Fix rows type for CodeDetails
2022-04-18 14:42:51 -04:00
dependabot[bot]
ac0624236e Bump minimist from 1.2.5 to 1.2.6 in /awx/ui
Bumps [minimist](https://github.com/substack/minimist) from 1.2.5 to 1.2.6.
- [Release notes](https://github.com/substack/minimist/releases)
- [Commits](https://github.com/substack/minimist/compare/1.2.5...1.2.6)

---
updated-dependencies:
- dependency-name: minimist
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-04-18 18:41:38 +00:00
nixocio
13eb174c9f Fix notification template details for system auditors
Fix notification template details for system auditors

See: https://github.com/ansible/awx/issues/11770
2022-04-18 14:02:44 -04:00
Rebeccah Hunter
a3e29317c5 default saved replies for triages (#12047)
* create a singular page with listed replies that can be copied and pasted for mailing list and bug scrub purposes

Co-authored-by: Alicia Cozine <879121+acozine@users.noreply.github.com>
2022-04-18 16:28:22 +00:00
Alan Rominger
75d7cb5bca Merge pull request #11989 from AlanCoding/deprecate_uopu
Mark inventory source field for deprecation
2022-04-18 11:59:05 -04:00
Alan Rominger
9059dce8af Merge pull request #12041 from AlanCoding/commitment_problems
Mark committed_capacity field for removal
2022-04-18 11:58:47 -04:00
Alan Rominger
1676c02611 Ship the resolved_role event data to analytics 2022-04-18 11:42:19 -04:00
Kersom
86a888f0d0 Merge pull request #12063 from nixocio/ui_remove_dupe_css
Remove duplicate CSS rules
2022-04-14 16:11:18 -04:00
nixocio
816652a8e2 Remove duplicate CSS rules
Remove duplicate CSS rules
2022-04-14 15:19:29 -04:00
Sarah Akus
c1817ab19e Merge pull request #12048 from nixocio/ui_issue_12046
Disable isCreatable on Advanced Search
2022-04-14 15:01:46 -04:00
Sarah Akus
b2dcc0d7e9 Merge pull request #12029 from nixocio/ui_issue_12008
Update when deleted is shown on job details
2022-04-14 14:44:13 -04:00
Shane McDonald
ba5361b25e Merge pull request #12056 from anxstj/doc_ansible_runner
Update file path in docs/ansible_runner_integration.md
2022-04-14 12:24:07 -04:00
Amol Gautam
ae826ed19d Merge pull request #12021 from amolgautam25/ctit_db
removed 'check_migrations' condition in _ctit_db_wrapper
2022-04-14 08:59:48 -07:00
Elijah DeLee
e24fc43a45 Revert "Only fetch fields we need in task manager"
This reverts commit 868e811b3f.

Turns out this does not play well with polymorphic models.

Will try again with .defer()
2022-04-14 11:55:33 -04:00
Stefan Jakobs
b719e5771c Update file path 2022-04-14 17:31:10 +02:00
Shane McDonald
778862fe51 Merge pull request #12054 from shanemcd/new-autoreloader
Alternative code reloader for dev env
2022-04-14 11:18:13 -04:00
Shane McDonald
30d185a67f Make dev env reload faster 2022-04-14 10:40:07 -04:00
Shane McDonald
89c2a4c6ed Alternative code reloader for dev env
I verified what Seth found in https://github.com/ansible/awx/pull/12052, but would really hate to lose this functionality. Curious if folks on the API team can try this and see if it works for them.
2022-04-14 09:42:17 -04:00
Elijah DeLee
868e811b3f Only fetch fields we need in task manager
By using .only we select fewer columns, avoiding potentially large
fields that we never reference.

Also, small tweak to eliminate what was a duplicate dictionary of
hostname:instance, because we don't need to build and carry two copies of
the same data.
2022-04-13 17:24:33 -04:00
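For context, the stock Django QuerySet methods these two commits are working through, with placeholder field names rather than the actual AWX columns:

```python
# Illustrative Django ORM snippet; field names are placeholders.
# .only() whitelists columns and defers everything else, which is what
# clashed with polymorphic models per the revert above.
jobs = UnifiedJob.objects.only('id', 'status')

# .defer() instead blacklists just the known-large columns and keeps
# the rest, the approach the revert message says will be tried next.
jobs = UnifiedJob.objects.defer('result_traceback')
```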
nixocio
f6496c28fe Disable isCreatable on Advanced Search
Disable isCreatable on Advanced Search

See: https://github.com/ansible/awx/issues/12046
2022-04-13 15:34:13 -04:00
Sarah Akus
81cda0ba74 Merge pull request #12038 from keithjgrant/survey-array
Add array support to survey multiple choice questions
2022-04-13 15:00:49 -04:00
Elijah DeLee
2e9974133a calculate remaining capacity in static method
this is to avoid additional queries when we already have all
the active jobs fetched in the task manager
2022-04-13 11:56:07 -04:00
Sarah Akus
49051c4aaf Merge pull request #12026 from AlexSCorey/11396-ofWordTranslation
Fixes pagination translation failure
2022-04-13 11:43:05 -04:00
Kersom
e2a89ad8a2 Add saved replies dir and default reply (#12028)
Add saved replies dir and default reply
2022-04-13 10:59:18 -04:00
Keith J. Grant
f4b0bd68bd add tests for array/string survey multi-select 2022-04-12 15:14:09 -07:00
Alan Rominger
5a304db840 Mark inventory source field for deprecation 2022-04-12 16:24:35 -04:00
Alan Rominger
e3044298bf Mark committed_capacity field for removal 2022-04-12 16:18:05 -04:00
John Mitchell
bbb9770a97 change back to Automation Analytics name (#12022) 2022-04-12 14:23:13 -04:00
Elijah DeLee
4328b4cb67 drop call that queries all running and waiting jobs
this is to fix one more place in the task manager where we end up
querying all running and waiting jobs.

Partial fix for https://github.com/ansible/awx/issues/11671
2022-04-12 10:31:47 -04:00
Keith J. Grant
a324753180 support survey choices in array format 2022-04-11 14:28:01 -07:00
Seth Foster
1462af61b0 awxkit log which resource failed to import 2022-04-11 17:03:13 -04:00
Alex Corey
8478a0f70b Fixes pagination translation failure 2022-04-11 14:45:11 -04:00
nixocio
8288655b30 Update when deleted is shown on job details
Update when deleted is shown on job details.

Some job types should not display inventory or projects; update when
those fields are shown.

Also, update how information is displayed when those fields were deleted.

See: https://github.com/ansible/awx/issues/12008
2022-04-11 14:20:42 -04:00
Rebeccah Hunter
ac8204427e Merge pull request #11914 from ansible/instances_list_filtering
add ID as the default filter if no other filtering criteria are provided, as well as some tests that should cover order integrity for future scenarios
2022-04-08 18:24:41 -04:00
Rebeccah
f6b8ce18d0 I don't think these tests actually add anything, so I am removing them even though I wrote them in the first place. 2022-04-08 18:04:34 -04:00
Amol Gautam
dc42946ff3 Removed migration check conditions in citi_db_wrapper 2022-04-08 17:53:02 -04:00
Rebeccah
44cc934c2b add projects to test that ordering functions correctly and when it gets a value it cannot order by it falls back to ID
add tests that check ordering for projects, organizations, inventories, groups, and hosts
2022-04-08 17:18:57 -04:00
Rebeccah
933956eccb have instances be filtered by ID in case of no filtering criteria passed in
and then switch from using order by ID as a fallback for all ordering and instead
just set instances ordering to ID as default to prevent
OrderedManyToMany fields ordering from being interrupted.
2022-04-08 17:01:58 -04:00
Kersom
27dc8caabd Do not truncate strings on activity stream dropdown (#12020)
Do not truncate strings on activity stream dropdown

See: https://github.com/ansible/awx/issues/11399
2022-04-08 16:45:32 -04:00
Sarah Akus
4b98df237e Merge pull request #12009 from nixocio/ui_issue_12006
Do not show inventory for project update on job details
2022-04-08 15:12:19 -04:00
Sarah Akus
0fa3ca8dc0 Merge pull request #12007 from marshmalien/11778-search-labels-placeholder
Add placeholder text when user selects a fuzzy search on labels
2022-04-08 14:48:05 -04:00
Kersom
0712affa9b Escape name__regex and name__iregex (#11964)
Escape name__regex and name__iregex. Escaping the value for those
keys when creating a smart inventory is a workaround for the
pyparsing code on the API side for special characters. This will just
display an extra escape when showing the host_filter on the details page.
2022-04-08 13:08:32 -04:00
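The workaround amounts to escaping regex metacharacters in the user-supplied value before embedding it in the host_filter; a minimal Python illustration (the UI itself does this in JavaScript, and the value here is made up):

```python
import re

# Hypothetical illustration: escape regex metacharacters in the value
# before it lands in a smart inventory host_filter expression.
value = "web-[prod].example"
host_filter = f"name__regex={re.escape(value)}"
print(host_filter)  # name__regex=web\-\[prod\]\.example
```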
Sarah Akus
b646aa03f8 Merge pull request #11920 from AlexSCorey/5210-t-WorkflowApprovalListRefactor
Improves UX of workflow approval list
2022-04-08 12:07:35 -04:00
Alex Corey
4beea35d9e Refactors workflow approval list toolbar and details acttions to add clarity. 2022-04-08 10:34:44 -04:00
Kersom
e8948a9d6e Merge pull request #12004 from nixocio/ui_downgrade_node
Downgrade min required node LTS
2022-04-07 15:33:44 -04:00
nixocio
28f25d5aba Downgrade min required node LTS
Downgrade min required node LTS
2022-04-07 14:56:52 -04:00
Keith Grant
7cbb783b2c Use new children-summary endpoint data to traverse job event tree (#11944)
* use new children-summary endpoint data to traverse job event tree

* update job output tests for new children summary data

* force flat mode if event child summary fails to load

* update childrenSummary data for endpoint changes

* don't add jobs to job tree until children summary loaded

* force job output into flat mode if job processing not complete
2022-04-06 13:10:04 -04:00
Ladislav Smola
1793f94f27 Analytics collector should collect full license data
Analytics collector should collect full license data
2022-04-06 14:09:19 +02:00
nixocio
0b7c9cd8ad Do not show inventory for project update on job details
Do not show inventory for project update on job details

See: https://github.com/ansible/awx/issues/12006
2022-04-05 13:26:19 -04:00
Marliana Lara
51b5b78084 Add placeholder text when user selects a fuzzy search on labels 2022-04-05 12:56:40 -04:00
Satoe Imaishi
bea924ddc6 Merge pull request #11983 from simaishi/update_cryptography
Update cryptography to >=35 for openssl 3 support
2022-04-05 17:09:46 +09:00
Sarah Akus
b5fcc6e541 Merge pull request #11963 from AlexSCorey/11467-RevertDraagandDrop
Revert "updated patternfly"
2022-04-04 21:44:08 -04:00
Alex Corey
ffb46fec52 Fixes test failure 2022-04-04 21:28:18 -04:00
Alex Corey
4190cf126c Reverts the code from 8b47106c63d7081b0cd9450694427ca9e92b2815 while keeping the depenedency upgrade 2022-04-04 21:16:43 -04:00
Seth Foster
58721098d5 Merge pull request #11928 from fosterseth/job_event_children_summary
Add JobJobEventsChildrenSummary endpoint
2022-04-04 17:23:28 -04:00
Seth Foster
cfd6df7a3b Add JobJobEventsChildrenSummary endpoint
- returns a special view that outputs the total number of children (and
grandchildren) events for all parent events of a job; the value is the
total number of children of that event
- intended to be consumed by the UI, as an efficient way to get the
number of children for a particular event
- see api/templates/api/job_job_events_children_summary.md for more info
2022-04-04 14:25:18 -04:00
Björn Pedersen
9f6fa4cf97 Grafana notifications: Fix panel/dashboardId type (#11083)
* Grafana notifications: Fix panel/dashboardId type

Latest grafana fails with
  Error sending notification grafana: 400
  [{"classification":"DeserializationError",
    "message":"json: cannot unmarshal string into Go struct
        field PostAnnotationsCmd.dashboardId of type int64"}]

So ensure the IDs are really int and not strings.

* Fix the dashboard/panelId=0 case

0 is valid for the IDs, so ensure they are allowed.

* Update tests to new behavior

Panel/Dashboard Id fields are not sent if they were not requested.
Also add tests for the ID=0 case.
2022-04-01 16:08:01 -04:00
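The two pitfalls the commit message describes (Grafana rejecting string IDs for its int64 fields, and 0 being a valid ID that a plain truthiness check would drop) can be sketched as follows; the helper name is hypothetical:

```python
# Hypothetical helper illustrating the fix: coerce IDs to int for
# Grafana's int64 fields, and treat 0 as a valid ID instead of falsy.
def build_annotation_payload(dashboard_id=None, panel_id=None):
    payload = {}
    if dashboard_id is not None:  # 'if dashboard_id:' would drop a valid 0
        payload['dashboardId'] = int(dashboard_id)
    if panel_id is not None:
        payload['panelId'] = int(panel_id)
    return payload

assert build_annotation_payload('7', 0) == {'dashboardId': 7, 'panelId': 0}
assert build_annotation_payload() == {}  # fields omitted when not requested
```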
Alan Rominger
7822da03fb Merge pull request #11865 from AlanCoding/galaxy_task_env
Add user-defined environment variables to ansible-galaxy commands
2022-04-01 15:24:54 -04:00
Alan Rominger
58cb3d5bdc Change indent to standard pattern 2022-04-01 13:46:00 -04:00
Sarah Akus
a3c97a51be Merge pull request #11988 from nixocio/ui_issue_11982
Fix notification template details
2022-04-01 13:40:02 -04:00
Elijah DeLee
202dc00f4c cast bool to str for runner env
It appears this was causing a fatal error for runner
2022-04-01 13:37:36 -04:00
Satoe Imaishi
309e58b6d7 Update cryptography to >=35 for openssl 3 support 2022-04-01 00:29:57 -04:00
Sarah Akus
34b20e26fa Merge pull request #11939 from marshmalien/8474-output-search-clear-all
Fix search toolbar clear all filters
2022-03-31 15:41:56 -04:00
Marliana Lara
1de2487e8f Fix search toolbar clear all filters 2022-03-31 13:52:56 -04:00
Sarah Akus
8d95b72527 Merge pull request #11846 from AlexSCorey/11203-WFToolbarIssues
Fixes Workflow visualizer toolbar disappearing.
2022-03-31 12:35:38 -04:00
nixocio
a920c9cc20 Fix notification template details
Fix notification template details

See: https://github.com/ansible/awx/issues/11982
2022-03-31 11:12:00 -04:00
Alex Corey
427f6d1687 Merge pull request #11791 from AlexSCorey/11713-PreventDisassociateHybridNodeFromControlplan
Prevents disassociate hybrid node on controlplane instance group
2022-03-31 10:34:21 -04:00
Alex Corey
dc64168ed4 Disallows disassociate of hubrid type instances from controlplane instance group
Introduce new pattern for is_valid_removal

Makes disassociate error message a bit more dynamic
2022-03-30 17:24:24 -04:00
Alan Rominger
4b913a0ae8 Merge pull request #11980 from AlanCoding/provision_cleanup
Delete dead code from get_or_register, move, and test
2022-03-30 15:44:44 -04:00
Alan Rominger
6c56f2b35b Delete dead code from get_or_register, move, and test 2022-03-30 13:35:42 -04:00
nixocio
be6657239d Add UI changes to JobsEdit
Add UI changes to JobsEdit
2022-03-29 10:25:29 -04:00
Alan Rominger
0caf263508 yaml cleanup 2022-03-29 09:57:40 -04:00
Alan Rominger
c77667788a Add user-defined environment variables to ansible-galaxy commands 2022-03-29 09:57:40 -04:00
Alex Corey
efb01f3c36 Fixes Workflow visualizer toolbar disappearing. 2022-03-28 10:55:23 -04:00
161 changed files with 25869 additions and 8997 deletions

.github/triage_replies.md (new file)

@@ -0,0 +1,31 @@
## General
- For the roundup of all the different mailing lists available from AWX, Ansible, and beyond, visit: https://docs.ansible.com/ansible/latest/community/communication.html
- Hello, we think your question is answered in our FAQ. Does this: https://www.ansible.com/products/awx-project/faq cover your question?
- You can find the latest documentation here: https://docs.ansible.com/automation-controller/latest/html/userguide/index.html
## Visit our mailing list
- Hello, your question seems like a good one to ask on our mailing list at https://groups.google.com/g/awx-project. You can also join #ansible-awx on https://libera.chat/ and ask your question there.
## Create an issue
- Hello, thanks for reaching out on the list. We think this merits an issue on our GitHub, https://github.com/ansible/awx/issues. If you could open an issue on GitHub, it will get tagged and integrated into our planning and workflow. All future work will be tracked there.
## Create a Pull Request
- Hello, we think your idea is good, please consider contributing a PR for this, following our contributing guidelines: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
## Receptor
- You can find the receptor docs here: https://receptor.readthedocs.io/en/latest/
- Hello, your issue seems related to receptor, could you please open an issue in the receptor repository? https://github.com/ansible/receptor. Thanks!
## Ansible Engine not AWX
- Hello, your question seems to be about Ansible development, not about AWX. Try asking on the Ansible-devel specific mailing list: https://groups.google.com/g/ansible-devel
- Hello, your question seems to be about using Ansible, not about AWX. https://groups.google.com/g/ansible-project is the best place to visit for user questions about Ansible. Thanks!
## Ansible Galaxy not AWX
- Hey there, that sounds like an FAQ question, did this: https://www.ansible.com/products/awx-project/faq cover your question?
## Contributing Guidelines
- AWX: https://github.com/ansible/awx/blob/devel/CONTRIBUTING.md
- AWX-Operator: https://github.com/ansible/awx-operator/blob/devel/CONTRIBUTING.md
## Code of Conduct
- Hello. Please keep in mind that Ansible adheres to a Code of Conduct in its community spaces. The spirit of the code of conduct is to be kind, and this is your friendly reminder to be so. Please see the full code of conduct here if you have questions: https://docs.ansible.com/ansible/latest/community/code_of_conduct.html


@@ -177,7 +177,7 @@ collectstatic:
fi; \
mkdir -p awx/public/static && $(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1
UWSGI_DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:awx-dispatcher tower-processes:awx-receiver
DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:*
uwsgi: collectstatic
@if [ "$(VENV_BASE)" ]; then \
@@ -192,12 +192,13 @@ uwsgi: collectstatic
--processes=5 \
--harakiri=120 --master \
--no-orphans \
--py-autoreload 1 \
--max-requests=1000 \
--stats /tmp/stats.socket \
--lazy-apps \
--logformat "%(addr) %(method) %(uri) - %(proto) %(status)" \
--hook-accepting1="exec: $(UWSGI_DEV_RELOAD_COMMAND)"
--logformat "%(addr) %(method) %(uri) - %(proto) %(status)"
awx-autoreload:
@/awx_devel/tools/docker-compose/awx-autoreload /awx_devel "$(DEV_RELOAD_COMMAND)"
daphne:
@if [ "$(VENV_BASE)" ]; then \


@@ -398,11 +398,11 @@ class OrderByBackend(BaseFilterBackend):
order_by = value.split(',')
else:
order_by = (value,)
if order_by is None:
order_by = self.get_default_ordering(view)
default_order_by = self.get_default_ordering(view)
# glue the order by and default order by together so that the default is the backup option
order_by = list(order_by or []) + list(default_order_by or [])
if order_by:
order_by = self._validate_ordering_fields(queryset.model, order_by)
# Special handling of the type field for ordering. In this
# case, we're not sorting exactly on the type field, but
# given the limited number of views with multiple types,


@@ -638,6 +638,11 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
# attaching/detaching them from the parent.
def is_valid_relation(self, parent, sub, created=False):
"Override in subclasses to do efficient validation of attaching"
return None
def is_valid_removal(self, parent, sub):
"Same as is_valid_relation but called on disassociation"
return None
def get_description_context(self):
@@ -722,6 +727,11 @@ class SubListCreateAttachDetachAPIView(SubListCreateAPIView):
if not request.user.can_access(self.parent_model, 'unattach', parent, sub, self.relationship, request.data):
raise PermissionDenied()
# Verify that removing the relationship is valid.
unattach_errors = self.is_valid_removal(parent, sub)
if unattach_errors is not None:
return Response(unattach_errors, status=status.HTTP_400_BAD_REQUEST)
if parent_key:
sub.delete()
else:


@@ -113,6 +113,7 @@ from awx.main.utils import (
)
from awx.main.utils.filters import SmartFilter
from awx.main.utils.named_url_graph import reset_counters
from awx.main.scheduler.task_manager_models import TaskManagerInstanceGroups, TaskManagerInstances
from awx.main.redact import UriCleaner, REPLACE_STR
from awx.main.validators import vars_validate_or_raise
@@ -4873,7 +4874,6 @@ class InstanceGroupSerializer(BaseSerializer):
show_capabilities = ['edit', 'delete']
committed_capacity = serializers.SerializerMethodField()
consumed_capacity = serializers.SerializerMethodField()
percent_capacity_remaining = serializers.SerializerMethodField()
jobs_running = serializers.IntegerField(
@@ -4922,7 +4922,6 @@ class InstanceGroupSerializer(BaseSerializer):
"created",
"modified",
"capacity",
"committed_capacity",
"consumed_capacity",
"percent_capacity_remaining",
"jobs_running",
@@ -5003,30 +5002,29 @@ class InstanceGroupSerializer(BaseSerializer):
return attrs
def get_capacity_dict(self):
def get_ig_mgr(self):
# Store capacity values (globally computed) in the context
if 'capacity_map' not in self.context:
ig_qs = None
if 'task_manager_igs' not in self.context:
instance_groups_queryset = None
jobs_qs = UnifiedJob.objects.filter(status__in=('running', 'waiting'))
if self.parent: # Is ListView:
ig_qs = self.parent.instance
self.context['capacity_map'] = InstanceGroup.objects.capacity_values(qs=ig_qs, tasks=jobs_qs, breakdown=True)
return self.context['capacity_map']
instance_groups_queryset = self.parent.instance
instances = TaskManagerInstances(jobs_qs)
instance_groups = TaskManagerInstanceGroups(instances_by_hostname=instances, instance_groups_queryset=instance_groups_queryset)
self.context['task_manager_igs'] = instance_groups
return self.context['task_manager_igs']
def get_consumed_capacity(self, obj):
return self.get_capacity_dict()[obj.name]['running_capacity']
def get_committed_capacity(self, obj):
return self.get_capacity_dict()[obj.name]['committed_capacity']
ig_mgr = self.get_ig_mgr()
return ig_mgr.get_consumed_capacity(obj.name)
def get_percent_capacity_remaining(self, obj):
if not obj.capacity:
return 0.0
consumed = self.get_consumed_capacity(obj)
if consumed >= obj.capacity:
return 0.0
else:
return float("{0:.2f}".format(((float(obj.capacity) - float(consumed)) / (float(obj.capacity))) * 100))
ig_mgr = self.get_ig_mgr()
return float("{0:.2f}".format((float(ig_mgr.get_remaining_capacity(obj.name)) / (float(obj.capacity))) * 100))
def get_instances(self, obj):
return obj.instances.count()


@@ -0,0 +1,102 @@
# View a summary of children events
Special view to facilitate processing job output in the UI.
In order to collapse events and their children, the UI needs to know how
many children exist for a given event.
The UI also needs to know the order of the event (0-based index), which
usually matches the counter, but not always.
This view returns a JSON object where the key is the event counter, and the value
includes the number of children (and grandchildren) events.
Only events with children are included in the output.
## Example
e.g. a Demo Job Template job; each line below is a tuple of (event counter, uuid, parent_uuid)
```
(1, '4598d19e-93b4-4e33-a0ae-b387a7348964', '')
(2, 'aae0d189-e3cb-102a-9f00-000000000006', '4598d19e-93b4-4e33-a0ae-b387a7348964')
(3, 'aae0d189-e3cb-102a-9f00-00000000000c', 'aae0d189-e3cb-102a-9f00-000000000006')
(4, 'f4194f14-e406-4124-8519-0fdb08b18f4b', 'aae0d189-e3cb-102a-9f00-00000000000c')
(5, '39f7ad99-dbf3-41e0-93f8-9999db4004f2', 'aae0d189-e3cb-102a-9f00-00000000000c')
(6, 'aae0d189-e3cb-102a-9f00-000000000008', 'aae0d189-e3cb-102a-9f00-000000000006')
(7, '39a49992-5ca4-4b6c-b178-e56d0b0333da', 'aae0d189-e3cb-102a-9f00-000000000008')
(8, '504f3b28-3ea8-4f6f-bd82-60cf8e807cc0', 'aae0d189-e3cb-102a-9f00-000000000008')
(9, 'a242be54-ebe6-4021-afab-f2878bff2e9f', '4598d19e-93b4-4e33-a0ae-b387a7348964')
```
output
```
{
    "1": {
        "rowNumber": 0,
        "numChildren": 8
    },
    "2": {
        "rowNumber": 1,
        "numChildren": 6
    },
    "3": {
        "rowNumber": 2,
        "numChildren": 2
    },
    "6": {
        "rowNumber": 5,
        "numChildren": 2
    },
    "meta_event_nested_parent_uuid": {}
}
```
counter 1 is event 0, and has 8 children
counter 2 is event 1, and has 6 children
etc.
The UI also needs to be able to collapse over "meta" events -- events that
show up due to verbosity or warnings from the system while the play is running.
These events have a 0 level event, with no parent uuid.
```
playbook_on_start
verbose
playbook_on_play_start
playbook_on_task_start
runner_on_start <- level 3
verbose <- jump to level 0
verbose
runner_on_ok <- jump back to level 3
playbook_on_task_start
runner_on_start
runner_on_ok
verbose
verbose
playbook_on_stats
```
These verbose statements that fall in the middle of a series of children events
are problematic for the UI.
To help, this view will attempt to place the events into the hierarchy, without
the event level jumps.
```
playbook_on_start
verbose
playbook_on_play_start
playbook_on_task_start
runner_on_start <- A
verbose <- this maps to the uuid of A
verbose
runner_on_ok
playbook_on_task_start <- B
runner_on_start
runner_on_ok
verbose <- this maps to the uuid of B
verbose
playbook_on_stats
```
The output will include a JSON object where the key is the event counter,
and the value is the assigned nested uuid.
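A sketch of how a client might consume the endpoint (host, job id, and credentials are placeholders; the path matches the route added alongside this doc):

```python
import requests

url = "https://awx.example.com/api/v2/jobs/42/job_events/children_summary/"
resp = requests.get(url, auth=("admin", "password")).json()

if resp["event_processing_finished"]:
    # keys are event counters; values carry rowNumber and numChildren
    for counter, info in resp["children_summary"].items():
        print(f"event {counter}: row {info['rowNumber']}, "
              f"{info['numChildren']} children")
```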


@@ -10,6 +10,7 @@ from awx.api.views import (
JobRelaunch,
JobCreateSchedule,
JobJobHostSummariesList,
JobJobEventsChildrenSummary,
JobJobEventsList,
JobActivityStreamList,
JobStdout,
@@ -27,6 +28,7 @@ urls = [
re_path(r'^(?P<pk>[0-9]+)/create_schedule/$', JobCreateSchedule.as_view(), name='job_create_schedule'),
re_path(r'^(?P<pk>[0-9]+)/job_host_summaries/$', JobJobHostSummariesList.as_view(), name='job_job_host_summaries_list'),
re_path(r'^(?P<pk>[0-9]+)/job_events/$', JobJobEventsList.as_view(), name='job_job_events_list'),
re_path(r'^(?P<pk>[0-9]+)/job_events/children_summary/$', JobJobEventsChildrenSummary.as_view(), name='job_job_events_children_summary'),
re_path(r'^(?P<pk>[0-9]+)/activity_stream/$', JobActivityStreamList.as_view(), name='job_activity_stream_list'),
re_path(r'^(?P<pk>[0-9]+)/stdout/$', JobStdout.as_view(), name='job_stdout'),
re_path(r'^(?P<pk>[0-9]+)/notifications/$', JobNotificationsList.as_view(), name='job_notifications_list'),


@@ -365,6 +365,7 @@ class InstanceList(ListAPIView):
model = models.Instance
serializer_class = serializers.InstanceSerializer
search_fields = ('hostname',)
ordering = ('id',)
class InstanceDetail(RetrieveUpdateAPIView):
@@ -409,7 +410,15 @@ class InstanceInstanceGroupsList(InstanceGroupMembershipMixin, SubListCreateAtta
if parent.node_type == 'control':
return {'msg': _(f"Cannot change instance group membership of control-only node: {parent.hostname}.")}
if parent.node_type == 'hop':
return {'msg': _(f"Cannot change instance group membership of hop node: {parent.hostname}.")}
return {'msg': _(f"Cannot change instance group membership of hop node : {parent.hostname}.")}
return None
def is_valid_removal(self, parent, sub):
res = self.is_valid_relation(parent, sub)
if res:
return res
if sub.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME and parent.node_type == 'hybrid':
return {'msg': _(f"Cannot disassociate hybrid instance {parent.hostname} from {sub.name}.")}
return None
@@ -511,7 +520,15 @@ class InstanceGroupInstanceList(InstanceGroupMembershipMixin, SubListAttachDetac
if sub.node_type == 'control':
return {'msg': _(f"Cannot change instance group membership of control-only node: {sub.hostname}.")}
if sub.node_type == 'hop':
return {'msg': _(f"Cannot change instance group membership of hop node: {sub.hostname}.")}
return {'msg': _(f"Cannot change instance group membership of hop node : {sub.hostname}.")}
return None
def is_valid_removal(self, parent, sub):
res = self.is_valid_relation(parent, sub)
if res:
return res
if sub.node_type == 'hybrid' and parent.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME:
return {'msg': _(f"Cannot disassociate hybrid node {sub.hostname} from {parent.name}.")}
return None
@@ -3826,6 +3843,84 @@ class JobJobEventsList(BaseJobEventsList):
return job.get_event_queryset().select_related('host').order_by('start_line')
class JobJobEventsChildrenSummary(APIView):
renderer_classes = [JSONRenderer]
meta_events = ('debug', 'verbose', 'warning', 'error', 'system_warning', 'deprecated')
def get(self, request, **kwargs):
resp = dict(children_summary={}, meta_event_nested_uuid={}, event_processing_finished=False)
job = get_object_or_404(models.Job, pk=kwargs['pk'])
if not job.event_processing_finished:
return Response(resp)
else:
resp["event_processing_finished"] = True
events = list(job.get_event_queryset().values('counter', 'uuid', 'parent_uuid', 'event').order_by('counter'))
if len(events) == 0:
return Response(resp)
# key is counter, value is number of total children (including children of children, etc.)
map_counter_children_tally = {i['counter']: {"rowNumber": 0, "numChildren": 0} for i in events}
# key is uuid, value is counter
map_uuid_counter = {i['uuid']: i['counter'] for i in events}
# key is uuid, value is parent uuid. Used as a quick lookup
map_uuid_puuid = {i['uuid']: i['parent_uuid'] for i in events}
# key is counter of meta events (i.e. verbose), value is uuid of the assigned parent
map_meta_counter_nested_uuid = {}
prev_non_meta_event = events[0]
for i, e in enumerate(events):
if not e['event'] in JobJobEventsChildrenSummary.meta_events:
prev_non_meta_event = e
if not e['uuid']:
continue
puuid = e['parent_uuid']
# if event is verbose (or debug, etc), we need to "assign" it a
# parent. This code looks at the event level of the previous
# non-verbose event, and the level of the next (by looking ahead)
# non-verbose event. The verbose event is assigned the same parent
# uuid of the higher level event.
# e.g.
# E1
# E2
# verbose
# verbose <- we are on this event currently
# E4
# We'll compare E2 and E4, and the verbose event
# will be assigned the parent uuid of E4 (higher event level)
if e['event'] in JobJobEventsChildrenSummary.meta_events:
event_level_before = models.JobEvent.LEVEL_FOR_EVENT[prev_non_meta_event['event']]
# find next non meta event
z = i
next_non_meta_event = events[-1]
while z < len(events):
if events[z]['event'] not in JobJobEventsChildrenSummary.meta_events:
next_non_meta_event = events[z]
break
z += 1
event_level_after = models.JobEvent.LEVEL_FOR_EVENT[next_non_meta_event['event']]
if event_level_after and event_level_after > event_level_before:
puuid = next_non_meta_event['parent_uuid']
else:
puuid = prev_non_meta_event['parent_uuid']
if puuid:
map_meta_counter_nested_uuid[e['counter']] = puuid
map_counter_children_tally[e['counter']]['rowNumber'] = i
if not puuid:
continue
# now traverse up the parent, grandparent, etc. events and tally those
while puuid:
map_counter_children_tally[map_uuid_counter[puuid]]['numChildren'] += 1
puuid = map_uuid_puuid.get(puuid, None)
# create new dictionary, dropping events with 0 children
resp["children_summary"] = {k: v for k, v in map_counter_children_tally.items() if v['numChildren'] != 0}
resp["meta_event_nested_uuid"] = map_meta_counter_nested_uuid
return Response(resp)
class AdHocCommandList(ListCreateAPIView):
model = models.AdHocCommand


@@ -68,49 +68,27 @@ class InstanceGroupMembershipMixin(object):
membership.
"""
def attach_validate(self, request):
parent = self.get_parent_object()
sub_id, res = super().attach_validate(request)
if res: # handle an error
return sub_id, res
sub = get_object_or_400(self.model, pk=sub_id)
attach_errors = self.is_valid_relation(parent, sub)
if attach_errors:
return sub_id, Response(attach_errors, status=status.HTTP_400_BAD_REQUEST)
return sub_id, res
def attach(self, request, *args, **kwargs):
response = super(InstanceGroupMembershipMixin, self).attach(request, *args, **kwargs)
sub_id, res = self.attach_validate(request)
if status.is_success(response.status_code):
sub_id = request.data.get('id', None)
if self.parent_model is Instance:
inst_name = self.get_parent_object().hostname
else:
inst_name = get_object_or_400(self.model, pk=sub_id).hostname
with transaction.atomic():
ig_qs = InstanceGroup.objects.select_for_update()
instance_groups_queryset = InstanceGroup.objects.select_for_update()
if self.parent_model is Instance:
ig_obj = get_object_or_400(ig_qs, pk=sub_id)
ig_obj = get_object_or_400(instance_groups_queryset, pk=sub_id)
else:
# similar to get_parent_object, but selected for update
parent_filter = {self.lookup_field: self.kwargs.get(self.lookup_field, None)}
ig_obj = get_object_or_404(ig_qs, **parent_filter)
ig_obj = get_object_or_404(instance_groups_queryset, **parent_filter)
if inst_name not in ig_obj.policy_instance_list:
ig_obj.policy_instance_list.append(inst_name)
ig_obj.save(update_fields=['policy_instance_list'])
return response
def unattach_validate(self, request):
parent = self.get_parent_object()
(sub_id, res) = super(InstanceGroupMembershipMixin, self).unattach_validate(request)
if res:
return (sub_id, res)
sub = get_object_or_400(self.model, pk=sub_id)
attach_errors = self.is_valid_relation(parent, sub)
if attach_errors:
return (sub_id, Response(attach_errors, status=status.HTTP_400_BAD_REQUEST))
return (sub_id, res)
def unattach(self, request, *args, **kwargs):
response = super(InstanceGroupMembershipMixin, self).unattach(request, *args, **kwargs)
if status.is_success(response.status_code):
@@ -120,13 +98,13 @@ class InstanceGroupMembershipMixin(object):
else:
inst_name = get_object_or_400(self.model, pk=sub_id).hostname
with transaction.atomic():
ig_qs = InstanceGroup.objects.select_for_update()
instance_groups_queryset = InstanceGroup.objects.select_for_update()
if self.parent_model is Instance:
ig_obj = get_object_or_400(ig_qs, pk=sub_id)
ig_obj = get_object_or_400(instance_groups_queryset, pk=sub_id)
else:
# similar to get_parent_object, but selected for update
parent_filter = {self.lookup_field: self.kwargs.get(self.lookup_field, None)}
ig_obj = get_object_or_404(ig_qs, **parent_filter)
ig_obj = get_object_or_404(instance_groups_queryset, **parent_filter)
if inst_name in ig_obj.policy_instance_list:
ig_obj.policy_instance_list.pop(ig_obj.policy_instance_list.index(inst_name))
ig_obj.save(update_fields=['policy_instance_list'])


@@ -81,17 +81,16 @@ def _ctit_db_wrapper(trans_safe=False):
yield
except DBError as exc:
if trans_safe:
if 'migrate' not in sys.argv and 'check_migrations' not in sys.argv:
level = logger.exception
if isinstance(exc, ProgrammingError):
if 'relation' in str(exc) and 'does not exist' in str(exc):
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
level = logger.debug
level('Database settings are not available, using defaults.')
level = logger.exception
if isinstance(exc, ProgrammingError):
if 'relation' in str(exc) and 'does not exist' in str(exc):
# this generally means we can't fetch Tower configuration
# because the database hasn't actually finished migrating yet;
# this is usually a sign that a service in a container (such as ws_broadcast)
# has come up *before* the database has finished migrating, and
# especially that the conf.settings table doesn't exist yet
level = logger.debug
level('Database settings are not available, using defaults.')
else:
logger.exception('Error modifying something related to database settings.')
finally:


@@ -1,3 +1,6 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
msgid ""
@@ -8,8 +11,7 @@ msgstr ""
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: es \n"
"MIME-Version: 1.0\n"
"Language: \n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
@@ -21,9 +23,7 @@ msgstr "Tiempo de inactividad fuerza desconexión"
msgid ""
"Number of seconds that a user is inactive before they will need to login "
"again."
msgstr ""
"Número de segundos que un usuario es inactivo antes de que ellos vuelvan a "
"conectarse de nuevo."
msgstr "Número de segundos que un usuario es inactivo antes de que ellos vuelvan a conectarse de nuevo."
#: awx/api/conf.py:21 awx/api/conf.py:31 awx/api/conf.py:42 awx/api/conf.py:50
#: awx/api/conf.py:70 awx/api/conf.py:85 awx/api/conf.py:96 awx/sso/conf.py:105
@@ -45,9 +45,7 @@ msgstr "Número máximo de sesiones activas en simultáneo"
msgid ""
"Maximum number of simultaneous logged in sessions a user may have. To "
"disable enter -1."
msgstr ""
"Número máximo de sesiones activas en simultáneo que un usuario puede tener. "
"Para deshabilitar, introduzca -1."
msgstr "Número máximo de sesiones activas en simultáneo que un usuario puede tener. Para deshabilitar, introduzca -1."
#: awx/api/conf.py:37
msgid "Disable the built-in authentication system"
@@ -58,10 +56,7 @@ msgid ""
"Controls whether users are prevented from using the built-in authentication "
"system. You probably want to do this if you are using an LDAP or SAML "
"integration."
msgstr ""
"Controla si se impide que los usuarios utilicen el sistema de autenticación "
"integrado. Probablemente desee hacer esto si está usando una integración de "
"LDAP o SAML."
msgstr "Controla si se impide que los usuarios utilicen el sistema de autenticación integrado. Probablemente desee hacer esto si está usando una integración de LDAP o SAML."
#: awx/api/conf.py:48
msgid "Enable HTTP Basic Auth"
@@ -83,13 +78,7 @@ msgid ""
"authorization codes in the number of seconds, and "
"`REFRESH_TOKEN_EXPIRE_SECONDS`, the duration of refresh tokens, after "
"expired access tokens, in the number of seconds."
msgstr ""
"Diccionario para personalizar los tiempos de espera de OAuth2; los elementos "
"disponibles son `ACCESS_TOKEN_EXPIRE_SECONDS`, la duración de los tokens de "
"acceso en cantidad de segundos; `AUTHORIZATION_CODE_EXPIRE_SECONDS`, la "
"duración de los códigos de autorización en cantidad de segundos; y "
"`REFRESH_TOKEN_EXPIRE_SECONDS`, la duración de los tokens de actualización, "
"después de los tokens de acceso expirados, en cantidad de segundos."
msgstr "Diccionario para personalizar los tiempos de espera de OAuth2; los elementos disponibles son `ACCESS_TOKEN_EXPIRE_SECONDS`, la duración de los tokens de acceso en cantidad de segundos; `AUTHORIZATION_CODE_EXPIRE_SECONDS`, la duración de los códigos de autorización en cantidad de segundos; y `REFRESH_TOKEN_EXPIRE_SECONDS`, la duración de los tokens de actualización, después de los tokens de acceso expirados, en cantidad de segundos."
#: awx/api/conf.py:78
msgid "Allow External Users to Create OAuth2 Tokens"
@@ -101,11 +90,7 @@ msgid ""
"Radius, and others) are not allowed to create OAuth2 tokens. To change this "
"behavior, enable this setting. Existing tokens will not be deleted when this "
"setting is toggled off."
msgstr ""
"Por motivos de seguridad, los usuarios de proveedores de autenticación "
"externos (LDAP, SAML, SSO, Radius y otros) no tienen permitido crear tokens "
"de OAuth2. Habilite este ajuste para cambiar este comportamiento. Los tokens "
"existentes no se eliminarán cuando se desactive este ajuste."
msgstr "Por motivos de seguridad, los usuarios de proveedores de autenticación externos (LDAP, SAML, SSO, Radius y otros) no tienen permitido crear tokens de OAuth2. Habilite este ajuste para cambiar este comportamiento. Los tokens existentes no se eliminarán cuando se desactive este ajuste."
#: awx/api/conf.py:94
msgid "Login redirect override URL"
@@ -115,10 +100,7 @@ msgstr "URL de invalidación de redireccionamiento de inicio de sesión"
msgid ""
"URL to which unauthorized users will be redirected to log in. If blank, "
"users will be sent to the login page."
msgstr ""
"URL a la que los usuarios no autorizados serán redirigidos para iniciar "
"sesión. Si está en blanco, los usuarios serán enviados a la página de inicio "
"de sesión."
msgstr "URL a la que los usuarios no autorizados serán redirigidos para iniciar sesión. Si está en blanco, los usuarios serán enviados a la página de inicio de sesión."
#: awx/api/conf.py:114
msgid "There are no remote authentication systems configured."
@@ -167,17 +149,13 @@ msgstr "ID{field_name} no válido: {field_id}"
msgid ""
"Cannot apply role_level filter to this list because its model does not use "
"roles for access control."
msgstr ""
"No se puede aplicar el filtro role_level a esta lista debido a que su modelo "
"no usa roles para el control de acceso."
msgstr "No se puede aplicar el filtro role_level a esta lista debido a que su modelo no usa roles para el control de acceso."
#: awx/api/generics.py:179
msgid ""
"You did not use correct Content-Type in your HTTP request. If you are using "
"our REST API, the Content-Type must be application/json"
msgstr ""
"No utilizó el Tipo de contenido correcto en su solicitud HTTP. Si está "
"usando nuestra API REST, el Tipo de contenido debe ser aplicación/json."
msgstr "No utilizó el Tipo de contenido correcto en su solicitud HTTP. Si está usando nuestra API REST, el Tipo de contenido debe ser aplicación/json."
#: awx/api/generics.py:220
msgid " To establish a login session, visit"
@@ -223,9 +201,7 @@ msgstr "Estructura de datos con URLs de recursos relacionados."
msgid ""
"Data structure with name/description for related resources. The output for "
"some objects may be limited for performance reasons."
msgstr ""
"Estructura de datos con nombre/descripción de los recursos relacionados. La "
"salida de algunos objetos puede estar limitada por motivos de rendimiento."
msgstr "Estructura de datos con nombre/descripción de los recursos relacionados. La salida de algunos objetos puede estar limitada por motivos de rendimiento."
#: awx/api/metadata.py:75
msgid "Timestamp when this {} was created."
@@ -248,17 +224,14 @@ msgstr "Error de análisis JSON; no es un objeto JSON"
msgid ""
"JSON parse error - %s\n"
"Possible cause: trailing comma."
msgstr ""
"Error de análisis JSON - %s\n"
msgstr "Error de análisis JSON - %s\n"
"Posible causa: coma final."
#: awx/api/serializers.py:205
msgid ""
"The original object is already named {}, a copy from it cannot have the same "
"name."
msgstr ""
"El objeto original ya tiene el nombre {}, por lo que una copia de este no "
"puede tener el mismo nombre."
msgstr "El objeto original ya tiene el nombre {}, por lo que una copia de este no puede tener el mismo nombre."
#: awx/api/serializers.py:334
#, python-format
@@ -301,9 +274,7 @@ msgstr "Plantilla de trabajo"
msgid ""
"Indicates whether all of the events generated by this unified job have been "
"saved to the database."
msgstr ""
"Indica si todos los eventos generados por esta tarea unificada se guardaron "
"en la base de datos."
msgstr "Indica si todos los eventos generados por esta tarea unificada se guardaron en la base de datos."
#: awx/api/serializers.py:939
msgid "Write-only field used to change the password."
@@ -324,8 +295,7 @@ msgstr "No se puede cambiar %s en el usuario gestionado por LDAP."
#: awx/api/serializers.py:1153
msgid "Must be a simple space-separated string with allowed scopes {}."
msgstr ""
"Debe ser una cadena simple separada por espacios con alcances permitidos {}."
msgstr "Debe ser una cadena simple separada por espacios con alcances permitidos {}."
#: awx/api/serializers.py:1238
msgid "Authorization Grant Type"
@@ -378,9 +348,7 @@ msgstr "SCM track_submodules solo puede usarse con proyectos git."
msgid ""
"Only Container Registry credentials can be associated with an Execution "
"Environment"
msgstr ""
"Solo las credenciales del registro de contenedores pueden asociarse a un "
"entorno de ejecución"
msgstr "Solo las credenciales del registro de contenedores pueden asociarse a un entorno de ejecución"
#: awx/api/serializers.py:1440
msgid "Cannot change the organization of an execution environment"
@@ -390,9 +358,7 @@ msgstr "No se puede modificar la organización de un entorno de ejecución"
msgid ""
"One or more job templates depend on branch override behavior for this "
"project (ids: {})."
msgstr ""
"Una o más plantillas de trabajo dependen del comportamiento de invalidación "
"de ramas para este proyecto (ids: {})."
msgstr "Una o más plantillas de trabajo dependen del comportamiento de invalidación de ramas para este proyecto (ids: {})."
#: awx/api/serializers.py:1530
msgid "Update options must be set to false for manual projects."
@@ -406,9 +372,7 @@ msgstr "Colección de playbooks disponibles dentro de este proyecto."
msgid ""
"Array of inventory files and directories available within this project, not "
"comprehensive."
msgstr ""
"Colección de archivos de inventario y directorios disponibles dentro de este "
"proyecto, no global."
msgstr "Colección de archivos de inventario y directorios disponibles dentro de este proyecto, no global."
#: awx/api/serializers.py:1599 awx/api/serializers.py:3098
#: awx/api/serializers.py:3311
@@ -560,7 +524,7 @@ msgstr "Playbook no encontrado para el proyecto."
#: awx/api/serializers.py:2842
msgid "Must select playbook for project."
msgstr "Debe seleccionar un playbook para el proyecto"
msgstr "Debe seleccionar un playbook para el proyecto."
#: awx/api/serializers.py:2844 awx/api/serializers.py:2846
msgid "Project does not allow overriding branch."
@@ -893,8 +857,7 @@ msgid "Containerized instances may not be managed via the API"
msgstr "Las instancias contenedorizadas no pueden ser gestionadas a través de la API."
#: awx/api/serializers.py:4919 awx/api/serializers.py:4922
#, fuzzy, python-format
#| msgid "tower instance group name may not be changed."
#, python-format
msgid "%s instance group name may not be changed."
msgstr "El nombre del grupo de instancia %s no puede modificarse."
@@ -1049,8 +1012,6 @@ msgid ""
msgstr "No puede asignar credenciales de acceso a un equipo cuando el campo de organización no está establecido o pertenezca a una organización diferente."
#: awx/api/views/__init__.py:720
#, fuzzy
#| msgid "The instance that managed the execution environment."
msgid "Only the 'pull' field can be edited for managed execution environments."
msgstr "Sólo se puede editar el campo \"pull\" para los entornos de ejecución gestionados."
@@ -2087,8 +2048,6 @@ msgid "Unique identifier for an installation"
msgstr "Identificador único para una instalación"
#: awx/main/conf.py:183
#, fuzzy
#| msgid "The Instance group the job was run under"
msgid "The instance group where control plane tasks run"
msgstr "Grupo de instancias donde se ejecutan las tareas del plano de control"
@@ -2796,8 +2755,6 @@ msgid "The identifier for the secret e.g., /some/identifier"
msgstr "El identificador para el secreto; por ejemplo, /some/identifier"
#: awx/main/credential_plugins/dsv.py:12
#, fuzzy
#| msgid "Tenant ID"
msgid "Tenant"
msgstr "Inquilino"
@@ -2816,8 +2773,6 @@ msgid ""
msgstr "El TLD del inquilino, por ejemplo, \"com\" cuando la URL es https://ex.secretservercloud.com"
#: awx/main/credential_plugins/dsv.py:34
#, fuzzy
#| msgid "Secret Name"
msgid "Secret Path"
msgstr "Ruta secreta"
@@ -2826,8 +2781,6 @@ msgid "The secret path e.g. /test/secret1"
msgstr "La ruta secreta, por ejemplo, /test/secret1"
#: awx/main/credential_plugins/dsv.py:46
#, fuzzy
#| msgid "Job Template"
msgid "URL template"
msgstr "Plantilla URL"
@@ -2960,8 +2913,6 @@ msgid ""
msgstr "Principales válidos (ya sea nombres de usuario o nombres de host) para los que se debería firmar el certificado:"
#: awx/main/credential_plugins/tss.py:10
#, fuzzy
#| msgid "Auth Server URL"
msgid "Secret Server URL"
msgstr "URL del servidor secreto"
@@ -2972,8 +2923,6 @@ msgid ""
msgstr "La URL base del servidor secreto, por ejemplo, https://myserver/SecretServer o https://mytenant.secretservercloud.com"
#: awx/main/credential_plugins/tss.py:17
#, fuzzy
#| msgid "Red Hat customer username"
msgid "The (Application) user username"
msgstr "El nombre de usuario (de la aplicación)"
@@ -2995,20 +2944,14 @@ msgid "The corresponding password"
msgstr "La contraseña correspondiente"
#: awx/main/credential_plugins/tss.py:31
#, fuzzy
#| msgid "Secret Key"
msgid "Secret ID"
msgstr "Identificación secreta"
#: awx/main/credential_plugins/tss.py:32
#, fuzzy
#| msgid "The name of the secret to look up."
msgid "The integer ID of the secret"
msgstr "El ID entero del secreto"
#: awx/main/credential_plugins/tss.py:37
#, fuzzy
#| msgid "Secret Key"
msgid "Secret Field"
msgstr "Campo secreto"
@@ -3568,22 +3511,14 @@ msgstr "Ruta de archivo absoluta al archivo CA por usar (opcional)"
#: awx/main/models/credential/__init__.py:1023
#: awx/main/models/credential/__init__.py:1029 awx/main/models/inventory.py:813
#, fuzzy
#| msgid "Gather data for Insights for Ansible Automation Platform"
msgid "Red Hat Ansible Automation Platform"
msgstr "Plataforma Red Hat Ansible Automation"
#: awx/main/models/credential/__init__.py:1031
#, fuzzy
#| msgid "The Ansible Tower base URL to authenticate with."
msgid "Red Hat Ansible Automation Platform base URL to authenticate with."
msgstr "URL base de Red Hat Ansible Automation Platform para autenticar."
#: awx/main/models/credential/__init__.py:1038
#, fuzzy
#| msgid ""
#| "The Ansible Tower user to authenticate as.This should not be set if an "
#| "OAuth token is being used."
msgid ""
"Red Hat Ansible Automation Platform username id to authenticate as.This "
"should not be set if an OAuth token is being used."
@@ -4488,7 +4423,7 @@ msgstr "El número de segundos desde de que la última actualización del proyec
msgid ""
"Allow changing the SCM branch or revision in a job template that uses this "
"project."
msgstr "Permitir el cambio de la rama o revisión del SCM en una plantilla de trabajo que utilice este proyecto."
msgstr "Permitir el cambio de la rama o revisión de SCM en una plantilla de trabajo que utilice este proyecto."
#: awx/main/models/projects.py:294
msgid "The last revision fetched by a project update"
@@ -4925,7 +4860,7 @@ msgid "Exception connecting to PagerDuty: {}"
msgstr "Excepción conectando a PagerDuty: {}"
#: awx/main/notifications/pagerduty_backend.py:87
#: awx/main/notifications/slack_backend.py:48
#: awx/main/notifications/slack_backend.py:49
#: awx/main/notifications/twilio_backend.py:47
msgid "Exception sending messages: {}"
msgstr "Excepción enviando mensajes: {}"
@@ -5181,7 +5116,7 @@ msgstr "Al menos un certificado es necesario."
msgid ""
"At least %(min_certs)d certificates are required, only %(cert_count)d "
"provided."
msgstr "Se requieren al menos %(min_certs)d certificados, sólo %(cert_count)d proporcionados."
msgstr "Al menos %(min_certs)d certificados son necesarios, solo se proporcionó %(cert_count)d."
#: awx/main/validators.py:152
#, python-format
@@ -5195,8 +5130,7 @@ msgid ""
msgstr "No se permiten más de %(max_certs)d certificados, %(cert_count)d proporcionado."
#: awx/main/validators.py:289
#, fuzzy, python-brace-format
#| msgid "The container image to be used for execution."
#, python-brace-format
msgid "The container image name {value} is not valid"
msgstr "El nombre de la imagen del contenedor {value} no es válido"
@@ -5666,17 +5600,17 @@ msgstr "La clave secreta OAuth2 (Clave secreta del cliente) from your GitHub org
#: awx/sso/conf.py:751
msgid "GitHub Organization Name"
msgstr "Nombre de la organización de GitHub"
msgstr "Nombre para la organización GitHub"
#: awx/sso/conf.py:752
msgid ""
"The name of your GitHub organization, as used in your organization's URL: "
"https://github.com/<yourorg>/."
msgstr "El nombre de su organización de GitHub, como se utiliza en la URL de su organización: https://github.com/<votreorg>/."
msgstr "El nombre de su organización de GitHub, como se utiliza en la URL de su organización: https://github.com/<yourorg>/."
#: awx/sso/conf.py:762
msgid "GitHub Organization OAuth2 Organization Map"
msgstr "Mapa de organización de OAuth2 de la organización de GitHub"
msgstr "Mapa de organización OAuth2 para organizaciones GitHub"
#: awx/sso/conf.py:774
msgid "GitHub Organization OAuth2 Team Map"
@@ -5684,7 +5618,7 @@ msgstr "Mapa de equipos OAuth2 para equipos GitHub"
#: awx/sso/conf.py:790
msgid "GitHub Team OAuth2 Callback URL"
msgstr "URL de devolución de OAuth2 del equipo de GitHub"
msgstr "URL callback OAuth2 para los equipos GitHub"
#: awx/sso/conf.py:792 awx/sso/conf.py:1060
msgid ""
@@ -5692,12 +5626,12 @@ msgid ""
"<yourorg>/settings/applications and obtain an OAuth2 key (Client ID) and "
"secret (Client Secret). Provide this URL as the callback URL for your "
"application."
msgstr "Cree una aplicación propiedad de la organización en https://github.com/organizations/<votreorg>/settings/applications y obtenga una clave de OAuth2 (ID del cliente) y secreta (clave secreta de cliente). Proporcione esta URL como URL de devolución para su aplicación."
msgstr "Cree una aplicación propiedad de la organización en https://github.com/organizations/<yourorg>/settings/applications y obtenga una clave de OAuth2 (ID del cliente) y secreta (clave secreta de cliente). Proporcione esta URL como URL de devolución para su aplicación."
#: awx/sso/conf.py:797 awx/sso/conf.py:809 awx/sso/conf.py:820
#: awx/sso/conf.py:832 awx/sso/conf.py:843 awx/sso/conf.py:855
msgid "GitHub Team OAuth2"
msgstr "OAuth2 de equipo de GitHub"
msgstr "OAuth2 para equipos GitHub"
#: awx/sso/conf.py:807
msgid "GitHub Team OAuth2 Key"
@@ -5828,7 +5762,7 @@ msgstr "Nombre de la organización de GitHub Enterprise"
msgid ""
"The name of your GitHub Enterprise organization, as used in your "
"organization's URL: https://github.com/<yourorg>/."
msgstr "El nombre de su organización de GitHub Enterprise, como se utiliza en la URL de su organización: https://github.com/<votreorg>/."
msgstr "El nombre de su organización de GitHub Enterprise, como se utiliza en la URL de su organización: https://github.com/<yourorg>/."
#: awx/sso/conf.py:1030
msgid "GitHub Enterprise Organization OAuth2 Organization Map"
@@ -6303,68 +6237,4 @@ msgstr "%s se está actualizando."
#: awx/ui/urls.py:24
msgid "This page will refresh when complete."
msgstr "Esta página se actualizará cuando se complete."
#~ msgid "SSLError while trying to connect to {}"
#~ msgstr "SSLError al intentar conectarse a {}"
#~ msgid "Request to {} timed out."
#~ msgstr "El tiempo de solicitud {} caducó."
#~ msgid "Unknown exception {} while trying to GET {}"
#~ msgstr "Excepción desconocida {} al intentar usar GET {}"
#~ msgid ""
#~ "Unauthorized access. Please check your Insights Credential username and "
#~ "password."
#~ msgstr ""
#~ "Acceso no autorizado. Verifique su nombre de usuario y contraseña de "
#~ "Insights."
#~ msgid ""
#~ "Failed to access the Insights API at URL {}. Server responded with {} "
#~ "status code and message {}"
#~ msgstr ""
#~ "No se pudo acceder a la API de Insights en la URL {}. El servidor "
#~ "respondió con el código de estado {} y el mensaje {}"
#~ msgid "Expected JSON response from Insights at URL {} but instead got {}"
#~ msgstr ""
#~ "Respuesta JSON esperada de Insights en la URL {}; en cambio, se recibió {}"
#~ msgid ""
#~ "Could not translate Insights system ID {} into an Insights platform ID."
#~ msgstr ""
#~ "No se pudo traducir el ID del sistema Insights {} en un ID de plataforma "
#~ "de Insights."
#~ msgid "This host is not recognized as an Insights host."
#~ msgstr "Este host no se reconoce como un host de Insights."
#~ msgid "The Insights Credential for \"{}\" was not found."
#~ msgstr "No se encontró la credencial de Insights para \"{}\"."
#~ msgid ""
#~ "The path to the secret stored in the secret backend e.g, /some/secret/"
#~ msgstr ""
#~ "La ruta al secreto almacenado en el backend de secretos; por ejemplo, /"
#~ "some/secret/"
#~ msgid "Ansible Tower"
#~ msgstr "Ansible Tower"
#~ msgid "Ansible Tower Hostname"
#~ msgstr "Nombre de host de Ansible Tower"
#~ msgid ""
#~ "Credentials to be used by hosts belonging to this inventory when "
#~ "accessing Red Hat Insights API."
#~ msgstr ""
#~ "Credenciales que utilizarán los hosts que pertenecen a este inventario "
#~ "cuando accedan a la API de Red Hat Insights."
#~ msgid "Assignment not allowed for Smart Inventory"
#~ msgstr "Tarea no permitida para el inventario inteligente"
#~ msgid "Red Hat Insights host unique identifier."
#~ msgstr "Identificador único de host de Red Hat Insights."
msgstr "Esta página se actualizará cuando se complete."

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,3 +1,6 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
msgid ""
@@ -8,8 +11,7 @@ msgstr ""
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
"Language: nl \n"
"MIME-Version: 1.0\n"
"Language: \n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
@@ -21,9 +23,7 @@ msgstr "Niet-actieve tijd voor forceren van afmelding"
msgid ""
"Number of seconds that a user is inactive before they will need to login "
"again."
msgstr ""
"Maximumaantal seconden dat een gebruiker niet-actief is voordat deze zich "
"opnieuw moet aanmelden."
msgstr "Maximumaantal seconden dat een gebruiker niet-actief is voordat deze zich opnieuw moet aanmelden."
#: awx/api/conf.py:21 awx/api/conf.py:31 awx/api/conf.py:42 awx/api/conf.py:50
#: awx/api/conf.py:70 awx/api/conf.py:85 awx/api/conf.py:96 awx/sso/conf.py:105
@@ -45,9 +45,7 @@ msgstr "Maximumaantal gelijktijdige aangemelde sessies"
msgid ""
"Maximum number of simultaneous logged in sessions a user may have. To "
"disable enter -1."
msgstr ""
"Maximumaantal gelijktijdige aangemelde sessies dat een gebruiker kan hebben. "
"Voer -1 in om dit uit te schakelen."
msgstr "Maximumaantal gelijktijdige aangemelde sessies dat een gebruiker kan hebben. Voer -1 in om dit uit te schakelen."
#: awx/api/conf.py:37
msgid "Disable the built-in authentication system"
@@ -58,10 +56,7 @@ msgid ""
"Controls whether users are prevented from using the built-in authentication "
"system. You probably want to do this if you are using an LDAP or SAML "
"integration."
msgstr ""
"Bepaalt of gebruikers het ingebouwde authenticatiesysteem niet mogen "
"gebruiken. U wilt dit waarschijnlijk doen als u een LDAP- of SAML-integratie "
"gebruikt."
msgstr "Bepaalt of gebruikers het ingebouwde authenticatiesysteem niet mogen gebruiken. U wilt dit waarschijnlijk doen als u een LDAP- of SAML-integratie gebruikt."
#: awx/api/conf.py:48
msgid "Enable HTTP Basic Auth"
@@ -83,13 +78,7 @@ msgid ""
"authorization codes in the number of seconds, and "
"`REFRESH_TOKEN_EXPIRE_SECONDS`, the duration of refresh tokens, after "
"expired access tokens, in the number of seconds."
msgstr ""
"Termenlijst voor het aanpassen van OAuth 2-time-outs. Beschikbare items zijn "
"`ACCESS_TOKEN_EXPIRE_SECONDS`, de duur van de toegangstokens in het aantal "
"seconden, `AUTHORIZATION_CODE_EXPIRE_SECONDS`, de duur van de "
"autorisatiecodes in het aantal seconden. en `REFRESH_TOKEN_EXPIRE_SECONDS`, "
"de duur van de verversingstokens na verlopen toegangstokens, in het aantal "
"seconden."
msgstr "Termenlijst voor het aanpassen van OAuth 2-time-outs. Beschikbare items zijn `ACCESS_TOKEN_EXPIRE_SECONDS`, de duur van de toegangstokens in het aantal seconden, `AUTHORIZATION_CODE_EXPIRE_SECONDS`, de duur van de autorisatiecodes in het aantal seconden. en `REFRESH_TOKEN_EXPIRE_SECONDS`, de duur van de verversingstokens na verlopen toegangstokens, in het aantal seconden."
#: awx/api/conf.py:78
msgid "Allow External Users to Create OAuth2 Tokens"
@@ -476,7 +465,7 @@ msgstr "'ask_at_runtime' wordt niet ondersteund voor aangepaste referenties."
#: awx/api/serializers.py:2547
msgid "Credential Type"
msgstr "Type toegangsgegevens"
msgstr "Soort toegangsgegevens"
#: awx/api/serializers.py:2614
msgid "Modifications not allowed for managed credentials"
@@ -868,8 +857,7 @@ msgid "Containerized instances may not be managed via the API"
msgstr "Geclusterde instanties worden mogelijk niet beheerd via de API"
#: awx/api/serializers.py:4919 awx/api/serializers.py:4922
#, fuzzy, python-format
#| msgid "tower instance group name may not be changed."
#, python-format
msgid "%s instance group name may not be changed."
msgstr "Naam van de %s-instantiegroep mag niet worden gewijzigd."
@@ -1024,8 +1012,6 @@ msgid ""
msgstr "U kunt een team geen referentietoegang verlenen wanneer het veld Organisatie niet is ingesteld of behoort tot een andere organisatie"
#: awx/api/views/__init__.py:720
#, fuzzy
#| msgid "The instance that managed the execution environment."
msgid "Only the 'pull' field can be edited for managed execution environments."
msgstr "Alleen het veld \"pull\" kan worden bewerkt voor beheerde uitvoeringsomgevingen."
@@ -2062,8 +2048,6 @@ msgid "Unique identifier for an installation"
msgstr "Unieke identificatiecode voor installatie"
#: awx/main/conf.py:183
#, fuzzy
#| msgid "The Instance group the job was run under"
msgid "The instance group where control plane tasks run"
msgstr "De instantiegroep waar control plane-taken worden uitgevoerd"
@@ -2637,7 +2621,7 @@ msgstr "Vault-URL (DNS-naam)"
#: awx/main/credential_plugins/dsv.py:23
#: awx/main/models/credential/__init__.py:895
msgid "Client ID"
msgstr "Client-id"
msgstr "Klant-ID"
#: awx/main/credential_plugins/azure_kv.py:29
#: awx/main/models/credential/__init__.py:902
@@ -2771,8 +2755,6 @@ msgid "The identifier for the secret e.g., /some/identifier"
msgstr "De identificatiecode voor het geheim, bijv. /some/identifier"
#: awx/main/credential_plugins/dsv.py:12
#, fuzzy
#| msgid "Tenant ID"
msgid "Tenant"
msgstr "Tenant"
@@ -2791,8 +2773,6 @@ msgid ""
msgstr "Het TLD van de tenant, bv. \"com\" wanneer de URL https://ex.secretservercloud.com is"
#: awx/main/credential_plugins/dsv.py:34
#, fuzzy
#| msgid "Secret Name"
msgid "Secret Path"
msgstr "Geheim pad"
@@ -2801,8 +2781,6 @@ msgid "The secret path e.g. /test/secret1"
msgstr "Het geheime pad, bv. /test/secret1"
#: awx/main/credential_plugins/dsv.py:46
#, fuzzy
#| msgid "Job Template"
msgid "URL template"
msgstr "URL-sjabloon"
@@ -2935,8 +2913,6 @@ msgid ""
msgstr "Geldige principes (gebruikersnamen of hostnamen) waarvoor het certificaat moet worden ondertekend."
#: awx/main/credential_plugins/tss.py:10
#, fuzzy
#| msgid "Auth Server URL"
msgid "Secret Server URL"
msgstr "Geheime-server-URL"
@@ -2947,8 +2923,6 @@ msgid ""
msgstr "De basis-URL van de geheime server, bv. https://myserver/SecretServer of https://mytenant.secretservercloud.com"
#: awx/main/credential_plugins/tss.py:17
#, fuzzy
#| msgid "Red Hat customer username"
msgid "The (Application) user username"
msgstr "De gebruikersnaam van de (applicatie) gebruiker"
@@ -2970,20 +2944,14 @@ msgid "The corresponding password"
msgstr "Het bijbehorende wachtwoord"
#: awx/main/credential_plugins/tss.py:31
#, fuzzy
#| msgid "Secret Key"
msgid "Secret ID"
msgstr "Geheime id"
#: awx/main/credential_plugins/tss.py:32
#, fuzzy
#| msgid "The name of the secret to look up."
msgid "The integer ID of the secret"
msgstr "De id van het geheim als geheel getal"
#: awx/main/credential_plugins/tss.py:37
#, fuzzy
#| msgid "Secret Key"
msgid "Secret Field"
msgstr "Geheim veld"
@@ -3543,22 +3511,14 @@ msgstr "Absoluut bestandspad naar het CA-bestand om te gebruiken (optioneel)"
#: awx/main/models/credential/__init__.py:1023
#: awx/main/models/credential/__init__.py:1029 awx/main/models/inventory.py:813
#, fuzzy
#| msgid "Gather data for Insights for Ansible Automation Platform"
msgid "Red Hat Ansible Automation Platform"
msgstr "Automatiseringsplatform voor Red Hat Ansible"
#: awx/main/models/credential/__init__.py:1031
#, fuzzy
#| msgid "The Ansible Tower base URL to authenticate with."
msgid "Red Hat Ansible Automation Platform base URL to authenticate with."
msgstr "De basis-URL van het automatiseringsplatform voor Red Hat Ansible voor authenticatie."
#: awx/main/models/credential/__init__.py:1038
#, fuzzy
#| msgid ""
#| "The Ansible Tower user to authenticate as.This should not be set if an "
#| "OAuth token is being used."
msgid ""
"Red Hat Ansible Automation Platform username id to authenticate as.This "
"should not be set if an OAuth token is being used."
@@ -4463,7 +4423,7 @@ msgstr "Het aantal seconden na uitvoering van de laatste projectupdate waarna ee
msgid ""
"Allow changing the SCM branch or revision in a job template that uses this "
"project."
msgstr "Wijzigen van de SCM-vertakking of de revisie toelaten in een taaksjabloon die gebruik maakt van dit project."
msgstr "Maak het mogelijk om de SCM-tak of de revisie te wijzigen in een taaksjabloon die gebruik maakt van dit project."
#: awx/main/models/projects.py:294
msgid "The last revision fetched by a project update"
@@ -4900,7 +4860,7 @@ msgid "Exception connecting to PagerDuty: {}"
msgstr "Uitzondering bij het maken van de verbinding met PagerDuty: {}"
#: awx/main/notifications/pagerduty_backend.py:87
#: awx/main/notifications/slack_backend.py:48
#: awx/main/notifications/slack_backend.py:49
#: awx/main/notifications/twilio_backend.py:47
msgid "Exception sending messages: {}"
msgstr "Uitzondering bij het verzenden van berichten: {}"
@@ -5170,8 +5130,7 @@ msgid ""
msgstr "Niet meer dan %(max_certs)d certificaten zijn toegestaan, %(cert_count)d geleverd."
#: awx/main/validators.py:289
#, fuzzy, python-brace-format
#| msgid "The container image to be used for execution."
#, python-brace-format
msgid "The container image name {value} is not valid"
msgstr "De naam van de containerafbeelding {value} is ongeldig"
@@ -6278,68 +6237,4 @@ msgstr "Er wordt momenteel een upgrade van%s geïnstalleerd."
#: awx/ui/urls.py:24
msgid "This page will refresh when complete."
msgstr "Deze pagina wordt vernieuwd als hij klaar is."
#~ msgid "SSLError while trying to connect to {}"
#~ msgstr "SSLError tijdens poging om verbinding te maken met {}"
#~ msgid "Request to {} timed out."
#~ msgstr "Er is een time-out opgetreden voor de aanvraag naar {}"
#~ msgid "Unknown exception {} while trying to GET {}"
#~ msgstr "Onbekende uitzondering {} tijdens poging tot OPHALEN {}"
#~ msgid ""
#~ "Unauthorized access. Please check your Insights Credential username and "
#~ "password."
#~ msgstr ""
#~ "Geen toegang. Controleer uw Insights Credential gebruikersnaam en "
#~ "wachtwoord."
#~ msgid ""
#~ "Failed to access the Insights API at URL {}. Server responded with {} "
#~ "status code and message {}"
#~ msgstr ""
#~ "Openen van Insights API via URL {} mislukt. Server reageerde met {} "
#~ "statuscode en de melding {}"
#~ msgid "Expected JSON response from Insights at URL {} but instead got {}"
#~ msgstr ""
#~ "Verwachte JSON-reactie van Insights via URL {}, maar in plaats daarvan {} "
#~ "verkregen."
#~ msgid ""
#~ "Could not translate Insights system ID {} into an Insights platform ID."
#~ msgstr ""
#~ "Omzetten van Insights systeem-ID {} naar een Insights platform-ID mislukt."
#~ msgid "This host is not recognized as an Insights host."
#~ msgstr "Deze host wordt niet herkend als een Insights-host."
#~ msgid "The Insights Credential for \"{}\" was not found."
#~ msgstr "De Insights-referentie voor {} is niet gevonden."
#~ msgid ""
#~ "The path to the secret stored in the secret backend e.g, /some/secret/"
#~ msgstr ""
#~ "De pad naar het geheim dat in de geheime back-end is opgeslagen, bijv. /"
#~ "some/secret/"
#~ msgid "Ansible Tower"
#~ msgstr "Ansible Tower"
#~ msgid "Ansible Tower Hostname"
#~ msgstr "Hostnaam Ansible Tower"
#~ msgid ""
#~ "Credentials to be used by hosts belonging to this inventory when "
#~ "accessing Red Hat Insights API."
#~ msgstr ""
#~ "Referenties die worden gebruikt door hosts die behoren tot deze "
#~ "inventaris bij toegang tot de Red Hat Insights API."
#~ msgid "Assignment not allowed for Smart Inventory"
#~ msgstr "Toewijzing niet toegestaan voor Smart-inventaris"
#~ msgid "Red Hat Insights host unique identifier."
#~ msgstr "Unieke id van Red Hat Insights-host."
msgstr "Deze pagina wordt vernieuwd als hij klaar is."

File diff suppressed because it is too large

View File

@@ -84,7 +84,7 @@ def _identify_lower(key, since, until, last_gather):
return lower, last_entries
@register('config', '1.3', description=_('General platform configuration.'))
@register('config', '1.4', description=_('General platform configuration.'))
def config(since, **kwargs):
license_info = get_license()
install_type = 'traditional'
@@ -104,6 +104,22 @@ def config(since, **kwargs):
'tower_url_base': settings.TOWER_URL_BASE,
'tower_version': get_awx_version(),
'license_type': license_info.get('license_type', 'UNLICENSED'),
'license_date': license_info.get('license_date'),
'subscription_name': license_info.get('subscription_name'),
'sku': license_info.get('sku'),
'support_level': license_info.get('support_level'),
'product_name': license_info.get('product_name'),
'valid_key': license_info.get('valid_key'),
'satellite': license_info.get('satellite'),
'pool_id': license_info.get('pool_id'),
'current_instances': license_info.get('current_instances'),
'automated_instances': license_info.get('automated_instances'),
'automated_since': license_info.get('automated_since'),
'trial': license_info.get('trial'),
'grace_period_remaining': license_info.get('grace_period_remaining'),
'compliant': license_info.get('compliant'),
'date_warning': license_info.get('date_warning'),
'date_expired': license_info.get('date_expired'),
'free_instances': license_info.get('free_instances', 0),
'total_licensed_instances': license_info.get('instance_count', 0),
'license_expiry': license_info.get('time_remaining', 0),
@@ -338,6 +354,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
{tbl}.event,
task_action,
resolved_action,
resolved_role,
-- '-' operator listed here:
-- https://www.postgresql.org/docs/12/functions-json.html
-- note that operator is only supported by jsonb objects
@@ -357,7 +374,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
x.duration AS duration,
x.res->'warnings' AS warnings,
x.res->'deprecations' AS deprecations
FROM {tbl}, jsonb_to_record({event_data}) AS x("res" json, "duration" text, "task_action" text, "resolved_action" text, "start" text, "end" text)
FROM {tbl}, jsonb_to_record({event_data}) AS x("res" json, "duration" text, "task_action" text, "resolved_action" text, "resolved_role" text, "start" text, "end" text)
WHERE ({tbl}.{where_column} > '{since.isoformat()}' AND {tbl}.{where_column} <= '{until.isoformat()}')) TO STDOUT WITH CSV HEADER'''
return query
@@ -367,12 +384,12 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
return _copy_table(table='events', query=query(f"replace({tbl}.event_data::text, '\\u0000', '')::jsonb"), path=full_path)
@register('events_table', '1.4', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
def events_table_unpartitioned(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, '_unpartitioned_main_jobevent', 'created', **kwargs)
@register('events_table', '1.4', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
@register('events_table', '1.5', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
def events_table_partitioned_modified(since, full_path, until, **kwargs):
return _events_table(since, full_path, until, 'main_jobevent', 'modified', project_job_created=True, **kwargs)
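
The queries produced by _events_table are server-side COPY ... TO STDOUT WITH CSV HEADER statements, so the collector can stream event rows straight into a CSV file. A minimal sketch of executing such a query with psycopg2 (_copy_table itself is not shown in this diff, so the helper below is an assumed illustration, not the AWX implementation):

import psycopg2  # assumed driver for this sketch

def copy_query_to_csv(dsn, query, path):
    # copy_expert streams the COPY output of the query directly into the file.
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur, open(path, 'w') as f:
            cur.copy_expert(query, f)
    return path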

View File

@@ -177,7 +177,7 @@ def gather(dest=None, module=None, subset=None, since=None, until=None, collecti
if collection_type != 'dry-run':
if not settings.INSIGHTS_TRACKING_STATE:
logger.log(log_level, "Insights for Ansible Automation Platform not enabled. Use --dry-run to gather locally without sending.")
logger.log(log_level, "Automation Analytics not enabled. Use --dry-run to gather locally without sending.")
return None
if not (settings.AUTOMATION_ANALYTICS_URL and settings.REDHAT_USERNAME and settings.REDHAT_PASSWORD):
@@ -332,10 +332,10 @@ def ship(path):
Ship gathered metrics to the Insights API
"""
if not path:
logger.error('Insights for Ansible Automation Platform TAR not found')
logger.error('Automation Analytics TAR not found')
return False
if not os.path.exists(path):
logger.error('Insights for Ansible Automation Platform TAR {} not found'.format(path))
logger.error('Automation Analytics TAR {} not found'.format(path))
return False
if "Error:" in str(path):
return False

View File

@@ -112,7 +112,7 @@ register(
encrypted=False,
read_only=False,
label=_('Red Hat customer username'),
help_text=_('This username is used to send data to Insights for Ansible Automation Platform'),
help_text=_('This username is used to send data to Automation Analytics'),
category=_('System'),
category_slug='system',
)
@@ -125,7 +125,7 @@ register(
encrypted=True,
read_only=False,
label=_('Red Hat customer password'),
help_text=_('This password is used to send data to Insights for Ansible Automation Platform'),
help_text=_('This password is used to send data to Automation Analytics'),
category=_('System'),
category_slug='system',
)
@@ -162,8 +162,8 @@ register(
default='https://example.com',
schemes=('http', 'https'),
allow_plain_hostname=True, # Allow hostname only without TLD.
label=_('Insights for Ansible Automation Platform upload URL'),
help_text=_('This setting is used to configure the upload URL for data collection for Red Hat Insights.'),
label=_('Automation Analytics upload URL'),
help_text=_('This setting is used to configure the upload URL for data collection for Automation Analytics.'),
category=_('System'),
category_slug='system',
)
@@ -282,12 +282,25 @@ register(
placeholder={'HTTP_PROXY': 'myproxy.local:8080'},
)
register(
'GALAXY_TASK_ENV',
field_class=fields.KeyValueField,
label=_('Environment Variables for Galaxy Commands'),
help_text=_(
'Additional environment variables set for invocations of ansible-galaxy within project updates. '
'Useful if you must use a proxy server for ansible-galaxy but not git.'
),
category=_('Jobs'),
category_slug='jobs',
placeholder={'HTTP_PROXY': 'myproxy.local:8080'},
)
register(
'INSIGHTS_TRACKING_STATE',
field_class=fields.BooleanField,
default=False,
label=_('Gather data for Insights for Ansible Automation Platform'),
help_text=_('Enables the service to gather data on automation and send it to Red Hat Insights.'),
label=_('Gather data for Automation Analytics'),
help_text=_('Enables the service to gather data on automation and send it to Automation Analytics.'),
category=_('System'),
category_slug='system',
)
@@ -714,7 +727,7 @@ register(
register(
'AUTOMATION_ANALYTICS_LAST_GATHER',
field_class=fields.DateTimeField,
label=_('Last gather date for Insights for Ansible Automation Platform.'),
label=_('Last gather date for Automation Analytics.'),
allow_null=True,
category=_('System'),
category_slug='system',
@@ -722,7 +735,7 @@ register(
register(
'AUTOMATION_ANALYTICS_LAST_ENTRIES',
field_class=fields.CharField,
label=_('Last gathered entries from the data collection service of Insights for Ansible Automation Platform'),
label=_('Last gathered entries from the data collection service of Automation Analytics'),
default='',
allow_blank=True,
category=_('System'),
@@ -733,7 +746,7 @@ register(
register(
'AUTOMATION_ANALYTICS_GATHER_INTERVAL',
field_class=fields.IntegerField,
label=_('Insights for Ansible Automation Platform Gather Interval'),
label=_('Automation Analytics Gather Interval'),
help_text=_('Interval (in seconds) between data gathering.'),
default=14400, # every 4 hours
min_value=1800, # every 30 minutes

View File

@@ -100,3 +100,9 @@ JOB_VARIABLE_PREFIXES = [
'awx',
'tower',
]
# Note, the \u001b[... are ANSI color codes. We don't currently import any of the python modules which define the codes.
# Importing a library just for this message seemed like overkill
ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE = (
'\u001b[31m \u001b[1m This can be caused if the version of ansible-runner in your execution environment is out of date.\u001b[0m'
)
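
For reference, the hard-coded sequences are standard ANSI SGR codes: \u001b[31m selects a red foreground, \u001b[1m bold, and \u001b[0m resets attributes. A one-line sanity check in any ANSI-capable terminal:

# Prints the message in red and bold, then resets back to normal text.
print('\u001b[31m\u001b[1m out of date \u001b[0m back to normal')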

View File

@@ -1,6 +1,8 @@
# Copyright (c) 2015 Ansible, Inc.
# All Rights Reserved
import os
from django.core.management.base import BaseCommand, CommandError
from django.db import transaction
from django.conf import settings
@@ -14,7 +16,12 @@ class Command(BaseCommand):
Register this instance with the database for HA tracking.
"""
help = "Add instance to the database. When no options are provided, the hostname of the current system will be used. Override with `--hostname`."
help = (
"Add instance to the database. "
"When no options are provided, values from Django settings will be used to register the current system, "
"as well as the default queues if needed (only used or enabled for Kubernetes installs). "
"Override with `--hostname`."
)
def add_arguments(self, parser):
parser.add_argument('--hostname', dest='hostname', type=str, help="Hostname used during provisioning")
@@ -25,7 +32,14 @@ class Command(BaseCommand):
if not hostname:
if not settings.AWX_AUTO_DEPROVISION_INSTANCES:
raise CommandError('Registering with values from settings only intended for use in K8s installs')
(changed, instance) = Instance.objects.get_or_register()
from awx.main.management.commands.register_queue import RegisterQueue
(changed, instance) = Instance.objects.register(ip_address=os.environ.get('MY_POD_IP'), node_type='control', uuid=settings.SYSTEM_UUID)
RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
RegisterQueue(
settings.DEFAULT_EXECUTION_QUEUE_NAME, 100, 0, [], is_container_group=True, pod_spec_override=settings.DEFAULT_EXECUTION_QUEUE_POD_SPEC_OVERRIDE
).register()
else:
(changed, instance) = Instance.objects.register(hostname=hostname, node_type=node_type, uuid=uuid)
if changed:

View File

@@ -2,16 +2,14 @@
# All Rights Reserved.
import logging
import os
from django.db import models
from django.conf import settings
from django.db.models.functions import Lower
from awx.main.utils.filters import SmartFilter
from awx.main.utils.pglock import advisory_lock
from awx.main.utils.common import get_capacity_type
from awx.main.constants import RECEPTOR_PENDING
__all__ = ['HostManager', 'InstanceManager', 'InstanceGroupManager', 'DeferJobCreatedManager', 'UUID_DEFAULT']
__all__ = ['HostManager', 'InstanceManager', 'DeferJobCreatedManager', 'UUID_DEFAULT']
logger = logging.getLogger('awx.main.managers')
UUID_DEFAULT = '00000000-0000-0000-0000-000000000000'
@@ -163,136 +161,3 @@ class InstanceManager(models.Manager):
create_defaults['version'] = RECEPTOR_PENDING
instance = self.create(hostname=hostname, ip_address=ip_address, node_type=node_type, **create_defaults, **uuid_option)
return (True, instance)
def get_or_register(self):
if settings.AWX_AUTO_DEPROVISION_INSTANCES:
from awx.main.management.commands.register_queue import RegisterQueue
pod_ip = os.environ.get('MY_POD_IP')
if settings.IS_K8S:
registered = self.register(ip_address=pod_ip, node_type='control', uuid=settings.SYSTEM_UUID)
else:
registered = self.register(ip_address=pod_ip, uuid=settings.SYSTEM_UUID)
RegisterQueue(settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, 100, 0, [], is_container_group=False).register()
RegisterQueue(
settings.DEFAULT_EXECUTION_QUEUE_NAME, 100, 0, [], is_container_group=True, pod_spec_override=settings.DEFAULT_EXECUTION_QUEUE_POD_SPEC_OVERRIDE
).register()
return registered
else:
return (False, self.me())
class InstanceGroupManager(models.Manager):
"""A custom manager class for the Instance model.
Used for global capacity calculations
"""
def capacity_mapping(self, qs=None):
"""
Another entry-point to Instance manager method by same name
"""
if qs is None:
qs = self.all().prefetch_related('instances')
instance_ig_mapping = {}
ig_instance_mapping = {}
# Create dictionaries that represent basic m2m memberships
for group in qs:
ig_instance_mapping[group.name] = set(instance.hostname for instance in group.instances.all() if instance.capacity != 0)
for inst in group.instances.all():
if inst.capacity == 0:
continue
instance_ig_mapping.setdefault(inst.hostname, set())
instance_ig_mapping[inst.hostname].add(group.name)
# Get IG capacity overlap mapping
ig_ig_mapping = get_ig_ig_mapping(ig_instance_mapping, instance_ig_mapping)
return instance_ig_mapping, ig_ig_mapping
@staticmethod
def zero_out_group(graph, name, breakdown):
if name not in graph:
graph[name] = {}
graph[name]['consumed_capacity'] = 0
for capacity_type in ('execution', 'control'):
graph[name][f'consumed_{capacity_type}_capacity'] = 0
if breakdown:
graph[name]['committed_capacity'] = 0
graph[name]['running_capacity'] = 0
def capacity_values(self, qs=None, tasks=None, breakdown=False, graph=None):
"""
Returns a dictionary of capacity values for all IGs
"""
if qs is None: # Optionally BYOQS - bring your own queryset
qs = self.all().prefetch_related('instances')
instance_ig_mapping, ig_ig_mapping = self.capacity_mapping(qs=qs)
if tasks is None:
tasks = self.model.unifiedjob_set.related.related_model.objects.filter(status__in=('running', 'waiting'))
if graph is None:
graph = {group.name: {} for group in qs}
for group_name in graph:
self.zero_out_group(graph, group_name, breakdown)
for t in tasks:
# TODO: dock capacity for isolated job management tasks running in queue
impact = t.task_impact
control_groups = []
if t.controller_node:
control_groups = instance_ig_mapping.get(t.controller_node, [])
if not control_groups:
logger.warning(f"No instance group found for {t.controller_node}, capacity consumed may be innaccurate.")
if t.status == 'waiting' or (not t.execution_node and not t.is_container_group_task):
# Subtract capacity from any peer groups that share instances
if not t.instance_group:
impacted_groups = []
elif t.instance_group.name not in ig_ig_mapping:
# Waiting job in group with 0 capacity has no collateral impact
impacted_groups = [t.instance_group.name]
else:
impacted_groups = ig_ig_mapping[t.instance_group.name]
for group_name in impacted_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
graph[group_name]['consumed_capacity'] += impact
capacity_type = get_capacity_type(t)
graph[group_name][f'consumed_{capacity_type}_capacity'] += impact
if breakdown:
graph[group_name]['committed_capacity'] += impact
for group_name in control_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
graph[group_name][f'consumed_control_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
if breakdown:
graph[group_name]['committed_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
elif t.status == 'running':
# Subtract capacity from all groups that contain the instance
if t.execution_node not in instance_ig_mapping:
if not t.is_container_group_task:
logger.warning('Detected %s running inside lost instance, ' 'may still be waiting for reaper.', t.log_format)
if t.instance_group:
impacted_groups = [t.instance_group.name]
else:
impacted_groups = []
else:
impacted_groups = instance_ig_mapping[t.execution_node]
for group_name in impacted_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
graph[group_name]['consumed_capacity'] += impact
capacity_type = get_capacity_type(t)
graph[group_name][f'consumed_{capacity_type}_capacity'] += impact
if breakdown:
graph[group_name]['running_capacity'] += impact
for group_name in control_groups:
if group_name not in graph:
self.zero_out_group(graph, group_name, breakdown)
graph[group_name][f'consumed_control_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
if breakdown:
graph[group_name]['running_capacity'] += settings.AWX_CONTROL_NODE_TASK_IMPACT
else:
logger.error('Programming error, %s not in ["running", "waiting"]', t.log_format)
return graph

View File

@@ -0,0 +1,21 @@
# Generated by Django 3.2.12 on 2022-03-31 17:28
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0158_make_instance_cpu_decimal'),
]
operations = [
migrations.AlterField(
model_name='inventorysource',
name='update_on_project_update',
field=models.BooleanField(
default=False,
help_text='This field is deprecated and will be removed in a future release. In a future release, functionality will be migrated to source project update_on_launch.',
),
),
]

View File

@@ -2,7 +2,6 @@
# All Rights Reserved.
from decimal import Decimal
import random
import logging
import os
@@ -20,7 +19,7 @@ from solo.models import SingletonModel
from awx import __version__ as awx_application_version
from awx.api.versioning import reverse
from awx.main.fields import JSONBlob
from awx.main.managers import InstanceManager, InstanceGroupManager, UUID_DEFAULT
from awx.main.managers import InstanceManager, UUID_DEFAULT
from awx.main.constants import JOB_FOLDER_PREFIX
from awx.main.models.base import BaseModel, HasEditsMixin, prevent_search
from awx.main.models.unified_jobs import UnifiedJob
@@ -175,12 +174,6 @@ class Instance(HasPolicyEditsMixin, BaseModel):
def jobs_total(self):
return UnifiedJob.objects.filter(execution_node=self.hostname).count()
@staticmethod
def choose_online_control_plane_node():
return random.choice(
Instance.objects.filter(enabled=True, capacity__gt=0).filter(node_type__in=['control', 'hybrid']).values_list('hostname', flat=True)
)
def get_cleanup_task_kwargs(self, **kwargs):
"""
Produce options to use for the command: ansible-runner worker cleanup
@@ -307,8 +300,6 @@ class Instance(HasPolicyEditsMixin, BaseModel):
class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
"""A model representing a Queue/Group of AWX Instances."""
objects = InstanceGroupManager()
name = models.CharField(max_length=250, unique=True)
created = models.DateTimeField(auto_now_add=True)
modified = models.DateTimeField(auto_now=True)
@@ -366,37 +357,6 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
class Meta:
app_label = 'main'
@staticmethod
def fit_task_to_most_remaining_capacity_instance(task, instances, impact=None, capacity_type=None, add_hybrid_control_cost=False):
impact = impact if impact else task.task_impact
capacity_type = capacity_type if capacity_type else task.capacity_type
instance_most_capacity = None
most_remaining_capacity = -1
for i in instances:
if i.node_type not in (capacity_type, 'hybrid'):
continue
would_be_remaining = i.remaining_capacity - impact
# hybrid nodes _always_ control their own tasks
if add_hybrid_control_cost and i.node_type == 'hybrid':
would_be_remaining -= settings.AWX_CONTROL_NODE_TASK_IMPACT
if would_be_remaining >= 0 and (instance_most_capacity is None or would_be_remaining > most_remaining_capacity):
instance_most_capacity = i
most_remaining_capacity = would_be_remaining
return instance_most_capacity
@staticmethod
def find_largest_idle_instance(instances, capacity_type='execution'):
largest_instance = None
for i in instances:
if i.node_type not in (capacity_type, 'hybrid'):
continue
if i.jobs_running == 0:
if largest_instance is None:
largest_instance = i
elif i.capacity > largest_instance.capacity:
largest_instance = i
return largest_instance
def set_default_policy_fields(self):
self.policy_instance_list = []
self.policy_instance_minimum = 0

View File

@@ -993,6 +993,10 @@ class InventorySource(UnifiedJobTemplate, InventorySourceOptions, CustomVirtualE
)
update_on_project_update = models.BooleanField(
default=False,
help_text=_(
'This field is deprecated and will be removed in a future release. '
'In a future release, functionality will be migrated to source project update_on_launch.'
),
)
update_on_launch = models.BooleanField(
default=False,

View File

@@ -54,8 +54,8 @@ class GrafanaBackend(AWXBaseEmailBackend, CustomNotificationBase):
):
super(GrafanaBackend, self).__init__(fail_silently=fail_silently)
self.grafana_key = grafana_key
self.dashboardId = dashboardId
self.panelId = panelId
self.dashboardId = int(dashboardId) if dashboardId is not None else None
self.panelId = int(panelId) if panelId is not None else None
self.annotation_tags = annotation_tags if annotation_tags is not None else []
self.grafana_no_verify_ssl = grafana_no_verify_ssl
self.isRegion = isRegion
@@ -86,8 +86,10 @@ class GrafanaBackend(AWXBaseEmailBackend, CustomNotificationBase):
if not self.fail_silently:
raise Exception(smart_str(_("Error converting time {} and/or timeEnd {} to int.").format(m.body['started'], m.body['finished'])))
grafana_data['isRegion'] = self.isRegion
grafana_data['dashboardId'] = self.dashboardId
grafana_data['panelId'] = self.panelId
if self.dashboardId is not None:
grafana_data['dashboardId'] = self.dashboardId
if self.panelId is not None:
grafana_data['panelId'] = self.panelId
if self.annotation_tags:
grafana_data['tags'] = self.annotation_tags
grafana_data['text'] = m.subject
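
The net effect of the change above: dashboardId and panelId arrive from the notification configuration as strings (or None), are coerced to integers once in __init__, and are omitted from the annotation payload when unset rather than being sent as null. A standalone sketch of that logic (a hypothetical free function, not the GrafanaBackend class itself):

def build_annotation_payload(subject, dashboardId=None, panelId=None):
    # Coerce once, as __init__ now does; keep None to mean "not configured".
    dashboardId = int(dashboardId) if dashboardId is not None else None
    panelId = int(panelId) if panelId is not None else None
    payload = {'text': subject}
    if dashboardId is not None:
        payload['dashboardId'] = dashboardId
    if panelId is not None:
        payload['panelId'] = panelId
    return payload

assert build_annotation_payload('Job ok', dashboardId='42') == {'text': 'Job ok', 'dashboardId': 42}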

View File

@@ -6,7 +6,6 @@ from datetime import timedelta
import logging
import uuid
import json
from types import SimpleNamespace
# Django
from django.db import transaction, connection
@@ -19,7 +18,6 @@ from awx.main.dispatch.reaper import reap_job
from awx.main.models import (
AdHocCommand,
Instance,
InstanceGroup,
InventorySource,
InventoryUpdate,
Job,
@@ -37,6 +35,8 @@ from awx.main.utils import get_type_for_model, task_manager_bulk_reschedule, sch
from awx.main.utils.common import create_partition
from awx.main.signals import disable_activity_stream
from awx.main.scheduler.dependency_graph import DependencyGraph
from awx.main.scheduler.task_manager_models import TaskManagerInstances
from awx.main.scheduler.task_manager_models import TaskManagerInstanceGroups
from awx.main.utils import decrypt_field
@@ -54,47 +54,22 @@ class TaskManager:
The NOOP case is short-circuit logic. If the task manager realizes that another instance
of the task manager is already running, then it short-circuits and decides not to run.
"""
self.graph = dict()
# start task limit indicates how many pending jobs can be started on this
# .schedule() run. Starting jobs is expensive, and there is code in place to reap
# the task manager after 5 minutes. At scale, the task manager can easily take more than
# 5 minutes to start pending jobs. If this limit is reached, the remaining pending
# jobs are not started during this cycle; they will be started on the next task manager cycle.
self.start_task_limit = settings.START_TASK_LIMIT
self.time_delta_job_explanation = timedelta(seconds=30)
def after_lock_init(self):
def after_lock_init(self, all_sorted_tasks):
"""
Init AFTER we know this instance of the task manager will run because the lock is acquired.
"""
instances = Instance.objects.filter(hostname__isnull=False, enabled=True).exclude(node_type='hop')
self.real_instances = {i.hostname: i for i in instances}
self.controlplane_ig = None
self.dependency_graph = DependencyGraph()
instances_partial = [
SimpleNamespace(
obj=instance,
node_type=instance.node_type,
remaining_capacity=instance.remaining_capacity,
capacity=instance.capacity,
jobs_running=instance.jobs_running,
hostname=instance.hostname,
)
for instance in instances
]
instances_by_hostname = {i.hostname: i for i in instances_partial}
for rampart_group in InstanceGroup.objects.prefetch_related('instances'):
if rampart_group.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME:
self.controlplane_ig = rampart_group
self.graph[rampart_group.name] = dict(
instances=[
instances_by_hostname[instance.hostname] for instance in rampart_group.instances.all() if instance.hostname in instances_by_hostname
],
)
self.instances = TaskManagerInstances(all_sorted_tasks)
self.instance_groups = TaskManagerInstanceGroups(instances_by_hostname=self.instances)
self.controlplane_ig = self.instance_groups.controlplane_ig
def job_blocked_by(self, task):
# TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
@@ -242,7 +217,7 @@ class TaskManager:
schedule_task_manager()
return result
def start_task(self, task, rampart_group, dependent_tasks=None, instance=None):
def start_task(self, task, instance_group, dependent_tasks=None, instance=None):
self.start_task_limit -= 1
if self.start_task_limit == 0:
# schedule another run immediately after this task manager
@@ -275,10 +250,10 @@ class TaskManager:
schedule_task_manager()
# at this point we already have control/execution nodes selected for the following cases
else:
task.instance_group = rampart_group
task.instance_group = instance_group
execution_node_msg = f' and execution node {task.execution_node}' if task.execution_node else ''
logger.debug(
f'Submitting job {task.log_format} controlled by {task.controller_node} to instance group {rampart_group.name}{execution_node_msg}.'
f'Submitting job {task.log_format} controlled by {task.controller_node} to instance group {instance_group.name}{execution_node_msg}.'
)
with disable_activity_stream():
task.celery_task_id = str(uuid.uuid4())
@@ -476,8 +451,8 @@ class TaskManager:
control_impact = task.task_impact + settings.AWX_CONTROL_NODE_TASK_IMPACT
else:
control_impact = settings.AWX_CONTROL_NODE_TASK_IMPACT
control_instance = InstanceGroup.fit_task_to_most_remaining_capacity_instance(
task, self.graph[settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]['instances'], impact=control_impact, capacity_type='control'
control_instance = self.instance_groups.fit_task_to_most_remaining_capacity_instance(
task, instance_group_name=settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME, impact=control_impact, capacity_type='control'
)
if not control_instance:
self.task_needs_capacity(task, tasks_to_update_job_explanation)
@@ -489,33 +464,31 @@ class TaskManager:
# All task.capacity_type == 'control' jobs should run on control plane, no need to loop over instance groups
if task.capacity_type == 'control':
task.execution_node = control_instance.hostname
control_instance.remaining_capacity = max(0, control_instance.remaining_capacity - control_impact)
control_instance.jobs_running += 1
control_instance.consume_capacity(control_impact)
self.dependency_graph.add_job(task)
execution_instance = self.real_instances[control_instance.hostname]
execution_instance = self.instances[control_instance.hostname].obj
task.log_lifecycle("controller_node_chosen")
task.log_lifecycle("execution_node_chosen")
self.start_task(task, self.controlplane_ig, task.get_jobs_fail_chain(), execution_instance)
found_acceptable_queue = True
continue
for rampart_group in preferred_instance_groups:
if rampart_group.is_container_group:
control_instance.jobs_running += 1
for instance_group in preferred_instance_groups:
if instance_group.is_container_group:
self.dependency_graph.add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), None)
self.start_task(task, instance_group, task.get_jobs_fail_chain(), None)
found_acceptable_queue = True
break
# TODO: remove this after we have confidence that OCP control nodes are reporting node_type=control
if settings.IS_K8S and task.capacity_type == 'execution':
logger.debug("Skipping group {}, task cannot run on control plane".format(rampart_group.name))
logger.debug("Skipping group {}, task cannot run on control plane".format(instance_group.name))
continue
# at this point we know the instance group is NOT a container group
# because if it was, it would have started the task and broke out of the loop.
execution_instance = InstanceGroup.fit_task_to_most_remaining_capacity_instance(
task, self.graph[rampart_group.name]['instances'], add_hybrid_control_cost=True
) or InstanceGroup.find_largest_idle_instance(self.graph[rampart_group.name]['instances'], capacity_type=task.capacity_type)
execution_instance = self.instance_groups.fit_task_to_most_remaining_capacity_instance(
task, instance_group_name=instance_group.name, add_hybrid_control_cost=True
) or self.instance_groups.find_largest_idle_instance(instance_group_name=instance_group.name, capacity_type=task.capacity_type)
if execution_instance:
task.execution_node = execution_instance.hostname
@@ -524,27 +497,24 @@ class TaskManager:
control_instance = execution_instance
task.controller_node = execution_instance.hostname
control_instance.remaining_capacity = max(0, control_instance.remaining_capacity - settings.AWX_CONTROL_NODE_TASK_IMPACT)
control_instance.consume_capacity(settings.AWX_CONTROL_NODE_TASK_IMPACT)
task.log_lifecycle("controller_node_chosen")
if control_instance != execution_instance:
control_instance.jobs_running += 1
execution_instance.remaining_capacity = max(0, execution_instance.remaining_capacity - task.task_impact)
execution_instance.jobs_running += 1
execution_instance.consume_capacity(task.task_impact)
task.log_lifecycle("execution_node_chosen")
logger.debug(
"Starting {} in group {} instance {} (remaining_capacity={})".format(
task.log_format, rampart_group.name, execution_instance.hostname, execution_instance.remaining_capacity
task.log_format, instance_group.name, execution_instance.hostname, execution_instance.remaining_capacity
)
)
execution_instance = self.real_instances[execution_instance.hostname]
execution_instance = self.instances[execution_instance.hostname].obj
self.dependency_graph.add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), execution_instance)
self.start_task(task, instance_group, task.get_jobs_fail_chain(), execution_instance)
found_acceptable_queue = True
break
else:
logger.debug(
"No instance available in group {} to run job {} w/ capacity requirement {}".format(
rampart_group.name, task.log_format, task.task_impact
instance_group.name, task.log_format, task.task_impact
)
)
if not found_acceptable_queue:
@@ -609,7 +579,7 @@ class TaskManager:
finished_wfjs = []
all_sorted_tasks = self.get_tasks()
self.after_lock_init()
self.after_lock_init(all_sorted_tasks)
if len(all_sorted_tasks) > 0:
# TODO: Deal with

View File

@@ -0,0 +1,123 @@
# Copyright (c) 2022 Ansible by Red Hat
# All Rights Reserved.
import logging
from django.conf import settings
from awx.main.models import (
Instance,
InstanceGroup,
)
logger = logging.getLogger('awx.main.scheduler')
class TaskManagerInstance:
"""A class representing minimal data the task manager needs to represent an Instance."""
def __init__(self, obj):
self.obj = obj
self.node_type = obj.node_type
self.consumed_capacity = 0
self.capacity = obj.capacity
self.hostname = obj.hostname
def consume_capacity(self, impact):
self.consumed_capacity += impact
@property
def remaining_capacity(self):
remaining = self.capacity - self.consumed_capacity
if remaining < 0:
return 0
return remaining
class TaskManagerInstances:
def __init__(self, active_tasks, instances=None):
self.instances_by_hostname = dict()
if instances is None:
instances = (
Instance.objects.filter(hostname__isnull=False, enabled=True).exclude(node_type='hop').only('node_type', 'capacity', 'hostname', 'enabled')
)
for instance in instances:
self.instances_by_hostname[instance.hostname] = TaskManagerInstance(instance)
# initialize remaining capacity based on currently waiting and running tasks
for task in active_tasks:
if task.status not in ['waiting', 'running']:
continue
control_instance = self.instances_by_hostname.get(task.controller_node, '')
execution_instance = self.instances_by_hostname.get(task.execution_node, '')
if execution_instance and execution_instance.node_type in ('hybrid', 'execution'):
self.instances_by_hostname[task.execution_node].consume_capacity(task.task_impact)
if control_instance and control_instance.node_type in ('hybrid', 'control'):
self.instances_by_hostname[task.controller_node].consume_capacity(settings.AWX_CONTROL_NODE_TASK_IMPACT)
def __getitem__(self, hostname):
return self.instances_by_hostname.get(hostname)
def __contains__(self, hostname):
return hostname in self.instances_by_hostname
class TaskManagerInstanceGroups:
"""A class representing minimal data the task manager needs to represent an InstanceGroup."""
def __init__(self, instances_by_hostname=None, instance_groups=None, instance_groups_queryset=None):
self.instance_groups = dict()
self.controlplane_ig = None
if instance_groups is not None: # for testing
self.instance_groups = instance_groups
else:
if instance_groups_queryset is None:
instance_groups_queryset = InstanceGroup.objects.prefetch_related('instances').only('name', 'instances')
for instance_group in instance_groups_queryset:
if instance_group.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME:
self.controlplane_ig = instance_group
self.instance_groups[instance_group.name] = dict(
instances=[
instances_by_hostname[instance.hostname] for instance in instance_group.instances.all() if instance.hostname in instances_by_hostname
],
)
def get_remaining_capacity(self, group_name):
instances = self.instance_groups[group_name]['instances']
return sum(inst.remaining_capacity for inst in instances)
def get_consumed_capacity(self, group_name):
instances = self.instance_groups[group_name]['instances']
return sum(inst.consumed_capacity for inst in instances)
def fit_task_to_most_remaining_capacity_instance(self, task, instance_group_name, impact=None, capacity_type=None, add_hybrid_control_cost=False):
impact = impact if impact else task.task_impact
capacity_type = capacity_type if capacity_type else task.capacity_type
instance_most_capacity = None
most_remaining_capacity = -1
instances = self.instance_groups[instance_group_name]['instances']
for i in instances:
if i.node_type not in (capacity_type, 'hybrid'):
continue
would_be_remaining = i.remaining_capacity - impact
# hybrid nodes _always_ control their own tasks
if add_hybrid_control_cost and i.node_type == 'hybrid':
would_be_remaining -= settings.AWX_CONTROL_NODE_TASK_IMPACT
if would_be_remaining >= 0 and (instance_most_capacity is None or would_be_remaining > most_remaining_capacity):
instance_most_capacity = i
most_remaining_capacity = would_be_remaining
return instance_most_capacity
def find_largest_idle_instance(self, instance_group_name, capacity_type='execution'):
largest_instance = None
instances = self.instance_groups[instance_group_name]['instances']
for i in instances:
if i.node_type not in (capacity_type, 'hybrid'):
continue
if (hasattr(i, 'jobs_running') and i.jobs_running == 0) or i.remaining_capacity == i.capacity:
if largest_instance is None:
largest_instance = i
elif i.capacity > largest_instance.capacity:
largest_instance = i
return largest_instance
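
Because the init methods accept in-memory objects (the instances=None and instance_groups=None parameters above, the latter flagged "for testing"), the capacity bookkeeping can be exercised without touching the database. A rough sketch under that assumption, importable only inside an AWX/Django environment, using SimpleNamespace stand-ins rather than real Instance/UnifiedJob rows:

from types import SimpleNamespace

# Hypothetical stand-ins with just the attributes TaskManagerInstance reads.
exec_node = SimpleNamespace(hostname='exec-1', node_type='execution', capacity=50)
running = SimpleNamespace(status='running', controller_node=None,
                          execution_node='exec-1', task_impact=10)

tmi = TaskManagerInstances(active_tasks=[running], instances=[exec_node])
assert 'exec-1' in tmi
assert tmi['exec-1'].remaining_capacity == 40  # 50 capacity - 10 consumed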

View File

@@ -17,7 +17,6 @@ import time
import urllib.parse as urlparse
from uuid import uuid4
# Django
from django.conf import settings
from django.db import transaction
@@ -32,15 +31,16 @@ from gitdb.exc import BadName as BadGitName
# AWX
from awx.main.constants import ACTIVE_STATES
from awx.main.dispatch.publish import task
from awx.main.dispatch import get_local_queuename
from awx.main.constants import (
ACTIVE_STATES,
PRIVILEGE_ESCALATION_METHODS,
STANDARD_INVENTORY_UPDATE_ENV,
JOB_FOLDER_PREFIX,
MAX_ISOLATED_PATH_COLON_DELIMITER,
CONTAINER_VOLUMES_MOUNT_TYPES,
ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE,
)
from awx.main.models import (
Instance,
@@ -119,6 +119,26 @@ class BaseTask(object):
def update_model(self, pk, _attempt=0, **updates):
return update_model(self.model, pk, _attempt=0, _max_attempts=self.update_attempts, **updates)
def write_private_data_file(self, private_data_dir, file_name, data, sub_dir=None, permissions=0o600):
base_path = private_data_dir
if sub_dir:
base_path = os.path.join(private_data_dir, sub_dir)
if not os.path.exists(base_path):
os.mkdir(base_path, 0o700)
# If we got a file name create it, otherwise we want a temp file
if file_name:
file_path = os.path.join(base_path, file_name)
else:
handle, file_path = tempfile.mkstemp(dir=base_path)
os.close(handle)
file = Path(file_path)
file.touch(mode=permissions, exist_ok=True)
with open(file_path, 'w') as f:
f.write(data)
return file_path
def get_path_to(self, *args):
"""
Return absolute path relative to this file.
@@ -222,6 +242,7 @@ class BaseTask(object):
"""
private_data = self.build_private_data(instance, private_data_dir)
private_data_files = {'credentials': {}}
ssh_key_data = None
if private_data is not None:
for credential, data in private_data.get('credentials', {}).items():
# OpenSSH formatted keys must have a trailing newline to be
@@ -231,34 +252,15 @@ class BaseTask(object):
# For credentials used with ssh-add, write to a named pipe which
# will be read then closed, instead of leaving the SSH key on disk.
if credential and credential.credential_type.namespace in ('ssh', 'scm'):
try:
os.mkdir(os.path.join(private_data_dir, 'env'))
except OSError as e:
if e.errno != errno.EEXIST:
raise
path = os.path.join(private_data_dir, 'env', 'ssh_key')
ansible_runner.utils.open_fifo_write(path, data.encode())
private_data_files['credentials']['ssh'] = path
ssh_key_data = data
# Ansible network modules do not yet support ssh-agent.
# Instead, ssh private key file is explicitly passed via an
# env variable.
else:
handle, path = tempfile.mkstemp(dir=os.path.join(private_data_dir, 'env'))
f = os.fdopen(handle, 'w')
f.write(data)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
private_data_files['credentials'][credential] = path
private_data_files['credentials'][credential] = self.write_private_data_file(private_data_dir, None, data, 'env')
for credential, data in private_data.get('certificates', {}).items():
artifact_dir = os.path.join(private_data_dir, 'artifacts', str(self.instance.id))
if not os.path.exists(artifact_dir):
os.makedirs(artifact_dir, mode=0o700)
path = os.path.join(artifact_dir, 'ssh_key_data-cert.pub')
with open(path, 'w') as f:
f.write(data)
f.close()
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
return private_data_files
self.write_private_data_file(private_data_dir, 'ssh_key_data-cert.pub', data, 'artifacts')
return private_data_files, ssh_key_data
def build_passwords(self, instance, runtime_passwords):
"""
@@ -276,23 +278,11 @@ class BaseTask(object):
"""
def _write_extra_vars_file(self, private_data_dir, vars, safe_dict={}):
env_path = os.path.join(private_data_dir, 'env')
try:
os.mkdir(env_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
except OSError as e:
if e.errno != errno.EEXIST:
raise
path = os.path.join(env_path, 'extravars')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
if settings.ALLOW_JINJA_IN_EXTRA_VARS == 'always':
f.write(yaml.safe_dump(vars))
content = yaml.safe_dump(vars)
else:
f.write(safe_dump(vars, safe_dict))
f.close()
os.chmod(path, stat.S_IRUSR)
return path
content = safe_dump(vars, safe_dict)
return self.write_private_data_file(private_data_dir, 'extravars', content, 'env')
def add_awx_venv(self, env):
env['VIRTUAL_ENV'] = settings.AWX_VENV_PATH
@@ -330,32 +320,14 @@ class BaseTask(object):
# maintain a list of host_name --> host_id
# so we can associate emitted events to Host objects
self.runner_callback.host_map = {hostname: hv.pop('remote_tower_id', '') for hostname, hv in script_data.get('_meta', {}).get('hostvars', {}).items()}
json_data = json.dumps(script_data)
path = os.path.join(private_data_dir, 'inventory')
fn = os.path.join(path, 'hosts')
with open(fn, 'w') as f:
os.chmod(fn, stat.S_IRUSR | stat.S_IXUSR | stat.S_IWUSR)
f.write('#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\nprint(%r)\n' % json_data)
return fn
file_content = '#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\nprint(%r)\n' % json.dumps(script_data)
return self.write_private_data_file(private_data_dir, 'hosts', file_content, 'inventory', 0o700)
def build_args(self, instance, private_data_dir, passwords):
raise NotImplementedError
def write_args_file(self, private_data_dir, args):
env_path = os.path.join(private_data_dir, 'env')
try:
os.mkdir(env_path, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
except OSError as e:
if e.errno != errno.EEXIST:
raise
path = os.path.join(env_path, 'cmdline')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
f.write(ansible_runner.utils.args2cmdline(*args))
f.close()
os.chmod(path, stat.S_IRUSR)
return path
return self.write_private_data_file(private_data_dir, 'cmdline', ansible_runner.utils.args2cmdline(*args), 'env')
def build_credentials_list(self, instance):
return []
@@ -477,7 +449,7 @@ class BaseTask(object):
)
# May have to serialize the value
private_data_files = self.build_private_data_files(self.instance, private_data_dir)
private_data_files, ssh_key_data = self.build_private_data_files(self.instance, private_data_dir)
passwords = self.build_passwords(self.instance, kwargs)
self.build_extra_vars_file(self.instance, private_data_dir)
args = self.build_args(self.instance, private_data_dir, passwords)
@@ -512,17 +484,12 @@ class BaseTask(object):
'playbook': self.build_playbook_path_relative_to_cwd(self.instance, private_data_dir),
'inventory': self.build_inventory(self.instance, private_data_dir),
'passwords': expect_passwords,
'suppress_env_files': getattr(settings, 'AWX_RUNNER_OMIT_ENV_FILES', True),
'envvars': env,
'settings': {
'job_timeout': self.get_instance_timeout(self.instance),
'suppress_ansible_output': True,
'suppress_output_file': True,
},
}
idle_timeout = getattr(settings, 'DEFAULT_JOB_IDLE_TIMEOUT', 0)
if idle_timeout > 0:
params['settings']['idle_timeout'] = idle_timeout
if ssh_key_data is not None:
params['ssh_key'] = ssh_key_data
if isinstance(self.instance, AdHocCommand):
params['module'] = self.build_module_name(self.instance)
@@ -545,6 +512,19 @@ class BaseTask(object):
if not params[v]:
del params[v]
runner_settings = {
'job_timeout': self.get_instance_timeout(self.instance),
'suppress_ansible_output': True,
'suppress_output_file': getattr(settings, 'AWX_RUNNER_SUPPRESS_OUTPUT_FILE', True),
}
idle_timeout = getattr(settings, 'DEFAULT_JOB_IDLE_TIMEOUT', 0)
if idle_timeout > 0:
runner_settings['idle_timeout'] = idle_timeout
# Write out our own settings file
self.write_private_data_file(private_data_dir, 'settings', json.dumps(runner_settings), 'env')
self.instance.log_lifecycle("running_playbook")
if isinstance(self.instance, SystemJob):
res = ansible_runner.interface.run(
@@ -596,6 +576,10 @@ class BaseTask(object):
except Exception:
logger.exception('{} Post run hook errored.'.format(self.instance.log_format))
# We really shouldn't get into this one but just in case....
if 'got an unexpected keyword argument' in extra_update_fields.get('result_traceback', ''):
extra_update_fields['result_traceback'] = "{}\n\n{}".format(extra_update_fields['result_traceback'], ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE)
self.instance = self.update_model(pk)
self.instance = self.update_model(pk, status=status, emitted_events=self.runner_callback.event_ct, **extra_update_fields)
@@ -1054,7 +1038,7 @@ class RunProjectUpdate(BaseTask):
env['TMP'] = settings.AWX_ISOLATION_BASE_PATH
env['PROJECT_UPDATE_ID'] = str(project_update.pk)
if settings.GALAXY_IGNORE_CERTS:
env['ANSIBLE_GALAXY_IGNORE'] = True
env['ANSIBLE_GALAXY_IGNORE'] = str(True)
# build out env vars for Galaxy credentials (in order)
galaxy_server_list = []
@@ -1161,6 +1145,7 @@ class RunProjectUpdate(BaseTask):
'scm_track_submodules': project_update.scm_track_submodules,
'roles_enabled': galaxy_creds_are_defined and settings.AWX_ROLES_ENABLED,
'collections_enabled': galaxy_creds_are_defined and settings.AWX_COLLECTIONS_ENABLED,
'galaxy_task_env': settings.GALAXY_TASK_ENV,
}
)
# apply custom refspec from user for PR refs and the like
@@ -1568,13 +1553,7 @@ class RunInventoryUpdate(BaseTask):
return env
def write_args_file(self, private_data_dir, args):
path = os.path.join(private_data_dir, 'args')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
f.write(' '.join(args))
f.close()
os.chmod(path, stat.S_IRUSR)
return path
return self.write_private_data_file(private_data_dir, 'args', ' '.join(args))
def build_args(self, inventory_update, private_data_dir, passwords):
"""Build the command line argument list for running an inventory
@@ -1630,11 +1609,7 @@ class RunInventoryUpdate(BaseTask):
if injector is not None:
content = injector.inventory_contents(inventory_update, private_data_dir)
# must be a statically named file
inventory_path = os.path.join(private_data_dir, 'inventory', injector.filename)
with open(inventory_path, 'w') as f:
f.write(content)
os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
self.write_private_data_file(private_data_dir, injector.filename, content, 'inventory', 0o700)
rel_path = os.path.join('inventory', injector.filename)
elif src == 'scm':
rel_path = os.path.join('project', inventory_update.source_path)
@@ -1961,13 +1936,7 @@ class RunSystemJob(BaseTask):
return args
def write_args_file(self, private_data_dir, args):
path = os.path.join(private_data_dir, 'args')
handle = os.open(path, os.O_RDWR | os.O_CREAT, stat.S_IREAD | stat.S_IWRITE)
f = os.fdopen(handle, 'w')
f.write(' '.join(args))
f.close()
os.chmod(path, stat.S_IRUSR)
return path
return self.write_private_data_file(private_data_dir, 'args', ' '.join(args))
def build_env(self, instance, private_data_dir, private_data_files=None):
base_env = super(RunSystemJob, self).build_env(instance, private_data_dir, private_data_files=private_data_files)
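The hunks above repeatedly swap open/fdopen/chmod boilerplate for calls to a shared `write_private_data_file` helper whose definition is not part of the shown hunks. A minimal sketch of such a helper, with the signature inferred purely from the call sites (file name, data, optional sub-directory, optional permissions), might look like this:

```python
import os


def write_private_data_file(private_data_dir, file_name, data, sub_dir=None, file_permissions=0o600):
    """Sketch only: the signature is inferred from the call sites in this diff."""
    payload_dir = os.path.join(private_data_dir, sub_dir) if sub_dir else private_data_dir
    os.makedirs(payload_dir, mode=0o700, exist_ok=True)
    path = os.path.join(payload_dir, file_name)
    # Creating the file with a restrictive mode up front avoids a window
    # where secrets would be readable by other users
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, file_permissions)
    with os.fdopen(fd, 'w') as f:
        f.write(data)
    return path
```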

View File

@@ -24,8 +24,10 @@ from awx.main.utils.common import (
parse_yaml_or_json,
cleanup_new_process,
)
from awx.main.constants import MAX_ISOLATED_PATH_COLON_DELIMITER
from awx.main.constants import (
MAX_ISOLATED_PATH_COLON_DELIMITER,
ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE,
)
# Receptorctl
from receptorctl.socket_interface import ReceptorControl
@@ -375,6 +377,8 @@ class AWXReceptorJob:
receptor_output = b"".join(lines).decode()
if receptor_output:
self.task.instance.result_traceback = receptor_output
if 'got an unexpected keyword argument' in receptor_output:
self.task.instance.result_traceback = "{}\n\n{}".format(receptor_output, ANSIBLE_RUNNER_NEEDS_UPDATE_MESSAGE)
self.task.instance.save(update_fields=['result_traceback'])
elif detail:
self.task.instance.result_traceback = detail

View File

@@ -46,3 +46,41 @@ def test_ad_hoc_events_sublist_truncation(get, organization_factory, job_templat
response = get(url, user=objs.superusers.admin, expect=200)
assert (len(response.data['results'][0]['stdout']) == 1025) == expected
@pytest.mark.django_db
def test_job_job_events_children_summary(get, organization_factory, job_template_factory):
objs = organization_factory("org", superusers=['admin'])
jt = job_template_factory("jt", organization=objs.organization, inventory='test_inv', project='test_proj').job_template
job = jt.create_unified_job()
url = reverse('api:job_job_events_children_summary', kwargs={'pk': job.pk})
response = get(url, user=objs.superusers.admin, expect=200)
assert response.data["event_processing_finished"] == False
'''
E1
E2
E3
E4 (verbose)
E5
'''
JobEvent.create_from_data(
job_id=job.pk, uuid='uuid1', parent_uuid='', event="playbook_on_start", counter=1, stdout='a' * 1024, job_created=job.created
).save()
JobEvent.create_from_data(
job_id=job.pk, uuid='uuid2', parent_uuid='uuid1', event="playbook_on_play_start", counter=2, stdout='a' * 1024, job_created=job.created
).save()
JobEvent.create_from_data(
job_id=job.pk, uuid='uuid3', parent_uuid='uuid2', event="runner_on_start", counter=3, stdout='a' * 1024, job_created=job.created
).save()
JobEvent.create_from_data(job_id=job.pk, uuid='uuid4', parent_uuid='', event='verbose', counter=4, stdout='a' * 1024, job_created=job.created).save()
JobEvent.create_from_data(
job_id=job.pk, uuid='uuid5', parent_uuid='uuid1', event="playbook_on_task_start", counter=5, stdout='a' * 1024, job_created=job.created
).save()
job.emitted_events = job.get_event_queryset().count()
job.status = "successful"
job.save()
url = reverse('api:job_job_events_children_summary', kwargs={'pk': job.pk})
response = get(url, user=objs.superusers.admin, expect=200)
assert response.data["children_summary"] == {1: {"rowNumber": 0, "numChildren": 4}, 2: {"rowNumber": 1, "numChildren": 2}}
assert response.data["meta_event_nested_uuid"] == {4: "uuid2"}
assert response.data["event_processing_finished"] == True

View File

@@ -301,3 +301,17 @@ def test_instance_group_unattach_from_instance(post, instance_group, node_type_i
assert new_activity.instance_group.first() == instance_group
else:
assert not new_activity
@pytest.mark.django_db
def test_cannot_remove_controlplane_hybrid_instances(post, controlplane_instance_group, node_type_instance, admin_user):
instance = node_type_instance(hostname='hybrid_node', node_type='hybrid')
controlplane_instance_group.instances.add(instance)
url = reverse('api:instance_group_instance_list', kwargs={'pk': controlplane_instance_group.pk})
r = post(url, {'disassociate': True, 'id': instance.id}, admin_user, expect=400)
assert 'Cannot disassociate hybrid node' in str(r.data)
url = reverse('api:instance_instance_groups_list', kwargs={'pk': instance.pk})
r = post(url, {'disassociate': True, 'id': controlplane_instance_group.id}, admin_user, expect=400)
assert 'Cannot disassociate hybrid instance' in str(r.data)

View File

@@ -76,6 +76,22 @@ def test_inventory_host_name_unique(scm_inventory, post, admin_user):
assert "A Group with that name already exists." in json.dumps(resp.data)
@pytest.mark.django_db
def test_inventory_host_list_ordering(scm_inventory, get, admin_user):
# create 3 hosts and verify that the inventory host list view returns them in id order
inv_src = scm_inventory.inventory_sources.first()
host1 = inv_src.hosts.create(name='1', inventory=scm_inventory)
host2 = inv_src.hosts.create(name='2', inventory=scm_inventory)
host3 = inv_src.hosts.create(name='3', inventory=scm_inventory)
expected_ids = [host1.id, host2.id, host3.id]
resp = get(
reverse('api:inventory_hosts_list', kwargs={'pk': scm_inventory.id}),
admin_user,
).data['results']
host_list = [host['id'] for host in resp]
assert host_list == expected_ids
@pytest.mark.django_db
def test_inventory_group_name_unique(scm_inventory, post, admin_user):
inv_src = scm_inventory.inventory_sources.first()
@@ -94,6 +110,24 @@ def test_inventory_group_name_unique(scm_inventory, post, admin_user):
assert "A Host with that name already exists." in json.dumps(resp.data)
@pytest.mark.django_db
def test_inventory_group_list_ordering(scm_inventory, get, put, admin_user):
# create 3 groups, hit the inventory groups list view 3 times, and compare the order returned each time
inv_src = scm_inventory.inventory_sources.first()
group1 = inv_src.groups.create(name='1', inventory=scm_inventory)
group2 = inv_src.groups.create(name='2', inventory=scm_inventory)
group3 = inv_src.groups.create(name='3', inventory=scm_inventory)
expected_ids = [group1.id, group2.id, group3.id]
group_ids = {}
for x in range(3):
resp = get(
reverse('api:inventory_groups_list', kwargs={'pk': scm_inventory.id}),
admin_user,
).data['results']
group_ids[x] = [group['id'] for group in resp]
assert group_ids[0] == group_ids[1] == group_ids[2] == expected_ids
@pytest.mark.parametrize("role_field,expected_status_code", [(None, 403), ('admin_role', 200), ('update_role', 403), ('adhoc_role', 403), ('use_role', 403)])
@pytest.mark.django_db
def test_edit_inventory(put, inventory, alice, role_field, expected_status_code):

View File

@@ -71,6 +71,17 @@ def test_organization_list_integrity(organization, get, admin, alice):
assert field in res.data['results'][0]
@pytest.mark.django_db
def test_organization_list_order_integrity(organizations, get, admin):
# check that the organization list returns the same (creation) order on repeated requests
orgs = organizations(4)
org_ids = {}
for x in range(3):
res = get(reverse('api:organization_list'), user=admin).data['results']
org_ids[x] = [org['id'] for org in res]
assert org_ids[0] == org_ids[1] == org_ids[2] == [orgs[0].id, orgs[1].id, orgs[2].id, orgs[3].id]
@pytest.mark.django_db
def test_organization_list_visibility(organizations, get, admin, alice):
orgs = organizations(2)
@@ -127,6 +138,18 @@ def test_organization_inventory_list(organization, inventory_factory, get, alice
get(reverse('api:organization_inventories_list', kwargs={'pk': organization.id}), user=rando, expect=403)
@pytest.mark.django_db
def test_organization_inventory_list_order_integrity(organization, admin, inventory_factory, get):
inv1 = inventory_factory('inventory')
inv2 = inventory_factory('inventory2')
inv3 = inventory_factory('inventory3')
inv_ids = {}
for x in range(3):
res = get(reverse('api:organization_inventories_list', kwargs={'pk': organization.id}), user=admin).data['results']
inv_ids[x] = [inv['id'] for inv in res]
assert inv_ids[0] == inv_ids[1] == inv_ids[2] == [inv1.id, inv2.id, inv3.id]
@pytest.mark.django_db
def test_create_organization(post, admin, alice):
new_org = {'name': 'new org', 'description': 'my description'}

View File

@@ -0,0 +1,40 @@
import pytest
from awx.main.management.commands.provision_instance import Command
from awx.main.models.ha import InstanceGroup, Instance
from awx.main.tasks.system import apply_cluster_membership_policies
from django.test.utils import override_settings
@pytest.mark.django_db
def test_traditional_registration():
assert not Instance.objects.exists()
assert not InstanceGroup.objects.exists()
Command().handle(hostname='bar_node', node_type='execution', uuid='4321')
inst = Instance.objects.first()
assert inst.hostname == 'bar_node'
assert inst.node_type == 'execution'
assert inst.uuid == '4321'
assert not InstanceGroup.objects.exists()
@pytest.mark.django_db
def test_register_self_openshift():
assert not Instance.objects.exists()
assert not InstanceGroup.objects.exists()
with override_settings(AWX_AUTO_DEPROVISION_INSTANCES=True, CLUSTER_HOST_ID='foo_node', SYSTEM_UUID='12345'):
Command().handle()
inst = Instance.objects.first()
assert inst.hostname == 'foo_node'
assert inst.uuid == '12345'
assert inst.node_type == 'control'
apply_cluster_membership_policies() # populate instance list using policy rules
assert list(InstanceGroup.objects.get(name='default').instances.all()) == [] # container group
assert list(InstanceGroup.objects.get(name='controlplane').instances.all()) == [inst]

View File

@@ -4,9 +4,10 @@ from awx.main.models import (
Instance,
InstanceGroup,
)
from awx.main.scheduler.task_manager_models import TaskManagerInstanceGroups, TaskManagerInstances
class TestCapacityMapping(TransactionTestCase):
class TestInstanceGroupInstanceMapping(TransactionTestCase):
def sample_cluster(self):
ig_small = InstanceGroup.objects.create(name='ig_small')
ig_large = InstanceGroup.objects.create(name='ig_large')
@@ -21,10 +22,12 @@ class TestCapacityMapping(TransactionTestCase):
def test_mapping(self):
self.sample_cluster()
with self.assertNumQueries(2):
inst_map, ig_map = InstanceGroup.objects.capacity_mapping()
assert inst_map['i1'] == set(['ig_small'])
assert inst_map['i2'] == set(['ig_large', 'default'])
assert ig_map['ig_small'] == set(['ig_small'])
assert ig_map['ig_large'] == set(['ig_large', 'default'])
assert ig_map['default'] == set(['ig_large', 'default'])
with self.assertNumQueries(3):
instances = TaskManagerInstances([]) # empty task list
instance_groups = TaskManagerInstanceGroups(instances_by_hostname=instances)
ig_instance_map = instance_groups.instance_groups
assert set(i.hostname for i in ig_instance_map['ig_small']['instances']) == set(['i1'])
assert set(i.hostname for i in ig_instance_map['ig_large']['instances']) == set(['i2', 'i3'])
assert set(i.hostname for i in ig_instance_map['default']['instances']) == set(['i2'])
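The rewritten assertions read membership straight off `TaskManagerInstanceGroups.instance_groups`, a plain name-to-instances map built up front instead of the old per-call `capacity_mapping()`. A toy sketch of that shape, with hostnames standing in for the real instance objects:

```python
def build_ig_instance_map(memberships):
    # memberships: iterable of (group_name, hostname) pairs, the kind of data
    # the queries counted above would return for sample_cluster()
    ig_map = {}
    for group, hostname in memberships:
        ig_map.setdefault(group, {'instances': []})['instances'].append(hostname)
    return ig_map


ig_map = build_ig_instance_map(
    [('ig_small', 'i1'), ('ig_large', 'i2'), ('ig_large', 'i3'), ('default', 'i2')]
)
assert set(ig_map['ig_large']['instances']) == {'i2', 'i3'}
```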

View File

@@ -150,11 +150,14 @@ def read_content(private_data_dir, raw_env, inventory_update):
referenced_paths.add(target_path)
dir_contents[abs_file_path] = file_content.replace(target_path, '{{ ' + other_alias + ' }}')
# The env/settings file should be ignored; nothing needs to reference it, as it's picked up directly by runner
ignore_files = [os.path.join(private_data_dir, 'env', 'settings')]
# build dict content which is the directory contents keyed off the file aliases
content = {}
for abs_file_path, file_content in dir_contents.items():
# assert that all files laid down are used
if abs_file_path not in referenced_paths:
if abs_file_path not in referenced_paths and abs_file_path not in ignore_files:
raise AssertionError(
"File {} is not referenced. References and files:\n{}\n{}".format(abs_file_path, json.dumps(env, indent=4), json.dumps(dir_contents, indent=4))
)

View File

@@ -408,3 +408,46 @@ def test_project_delete(delete, organization, admin_user):
),
admin_user,
)
@pytest.mark.parametrize(
'order_by, expected_names, expected_ids',
[
('name', ['alice project', 'bob project', 'shared project'], [1, 2, 3]),
('-name', ['shared project', 'bob project', 'alice project'], [3, 2, 1]),
],
)
@pytest.mark.django_db
def test_project_list_ordering_by_name(get, order_by, expected_names, expected_ids, organization_factory):
'ensure sorted order of project list is maintained correctly when ordering by name in either direction'
objects = organization_factory(
'org1',
projects=['alice project', 'bob project', 'shared project'],
superusers=['admin'],
)
project_names = []
project_ids = []
# TODO: ask for an order by here that doesn't apply
results = get(reverse('api:project_list'), objects.superusers.admin, QUERY_STRING='order_by=%s' % order_by).data['results']
for x in range(len(results)):
project_names.append(results[x]['name'])
project_ids.append(results[x]['id'])
assert project_names == expected_names and project_ids == expected_ids
@pytest.mark.parametrize('order_by', ('name', '-name'))
@pytest.mark.django_db
def test_project_list_ordering_with_duplicate_names(get, order_by, organization_factory):
# why? because every project name is '1', sorting by name cannot distinguish the rows,
# so the list must fall back on the default sort order, which in this case is ID
'ensure sorted order of project list is maintained correctly when the project names are the same'
objects = organization_factory(
'org1',
projects=['1', '1', '1', '1', '1'],
superusers=['admin'],
)
project_ids = {}
for x in range(3):
results = get(reverse('api:project_list'), objects.superusers.admin, QUERY_STRING='order_by=%s' % order_by).data['results']
project_ids[x] = [proj['id'] for proj in results]
assert project_ids[0] == project_ids[1] == project_ids[2] == [1, 2, 3, 4, 5]

View File

@@ -4,6 +4,7 @@ from unittest.mock import Mock
from decimal import Decimal
from awx.main.models import InstanceGroup, Instance
from awx.main.scheduler.task_manager_models import TaskManagerInstanceGroups
@pytest.mark.parametrize('capacity_adjustment', [0.0, 0.25, 0.5, 0.75, 1, 1.5, 3])
@@ -59,9 +60,10 @@ class TestInstanceGroup(object):
],
)
def test_fit_task_to_most_remaining_capacity_instance(self, task, instances, instance_fit_index, reason):
ig = InstanceGroup(id=10)
InstanceGroup(id=10)
tm_igs = TaskManagerInstanceGroups(instance_groups={'controlplane': {'instances': instances}})
instance_picked = ig.fit_task_to_most_remaining_capacity_instance(task, instances)
instance_picked = tm_igs.fit_task_to_most_remaining_capacity_instance(task, 'controlplane')
if instance_fit_index is None:
assert instance_picked is None, reason
@@ -82,13 +84,14 @@ class TestInstanceGroup(object):
def filter_offline_instances(*args):
return filter(lambda i: i.capacity > 0, instances)
ig = InstanceGroup(id=10)
InstanceGroup(id=10)
instances_online_only = filter_offline_instances(instances)
tm_igs = TaskManagerInstanceGroups(instance_groups={'controlplane': {'instances': instances_online_only}})
if instance_fit_index is None:
assert ig.find_largest_idle_instance(instances_online_only) is None, reason
assert tm_igs.find_largest_idle_instance('controlplane') is None, reason
else:
assert ig.find_largest_idle_instance(instances_online_only) == instances[instance_fit_index], reason
assert tm_igs.find_largest_idle_instance('controlplane') == instances[instance_fit_index], reason
def test_cleanup_params_defaults():

View File

@@ -1,6 +1,7 @@
from unittest import mock
import datetime as dt
from django.core.mail.message import EmailMessage
import pytest
import awx.main.notifications.grafana_backend as grafana_backend
@@ -29,7 +30,7 @@ def test_send_messages():
requests_mock.post.assert_called_once_with(
'https://example.com/api/annotations',
headers={'Content-Type': 'application/json', 'Authorization': 'Bearer testapikey'},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'panelId': None, 'time': 60000, 'dashboardId': None},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'time': 60000},
verify=True,
)
assert sent_messages == 1
@@ -59,20 +60,21 @@ def test_send_messages_with_no_verify_ssl():
requests_mock.post.assert_called_once_with(
'https://example.com/api/annotations',
headers={'Content-Type': 'application/json', 'Authorization': 'Bearer testapikey'},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'panelId': None, 'time': 60000, 'dashboardId': None},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'time': 60000},
verify=False,
)
assert sent_messages == 1
def test_send_messages_with_dashboardid():
@pytest.mark.parametrize("dashboardId", [42, 0])
def test_send_messages_with_dashboardid(dashboardId):
with mock.patch('awx.main.notifications.grafana_backend.requests') as requests_mock:
requests_mock.post.return_value.status_code = 200
m = {}
m['started'] = dt.datetime.utcfromtimestamp(60).isoformat()
m['finished'] = dt.datetime.utcfromtimestamp(120).isoformat()
m['subject'] = "test subject"
backend = grafana_backend.GrafanaBackend("testapikey", dashboardId=42)
backend = grafana_backend.GrafanaBackend("testapikey", dashboardId=dashboardId)
message = EmailMessage(
m['subject'],
{"started": m['started'], "finished": m['finished']},
@@ -89,20 +91,21 @@ def test_send_messages_with_dashboardid():
requests_mock.post.assert_called_once_with(
'https://example.com/api/annotations',
headers={'Content-Type': 'application/json', 'Authorization': 'Bearer testapikey'},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'panelId': None, 'time': 60000, 'dashboardId': 42},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'time': 60000, 'dashboardId': dashboardId},
verify=True,
)
assert sent_messages == 1
def test_send_messages_with_panelid():
@pytest.mark.parametrize("panelId", [42, 0])
def test_send_messages_with_panelid(panelId):
with mock.patch('awx.main.notifications.grafana_backend.requests') as requests_mock:
requests_mock.post.return_value.status_code = 200
m = {}
m['started'] = dt.datetime.utcfromtimestamp(60).isoformat()
m['finished'] = dt.datetime.utcfromtimestamp(120).isoformat()
m['subject'] = "test subject"
backend = grafana_backend.GrafanaBackend("testapikey", dashboardId=None, panelId=42)
backend = grafana_backend.GrafanaBackend("testapikey", dashboardId=None, panelId=panelId)
message = EmailMessage(
m['subject'],
{"started": m['started'], "finished": m['finished']},
@@ -119,7 +122,7 @@ def test_send_messages_with_panelid():
requests_mock.post.assert_called_once_with(
'https://example.com/api/annotations',
headers={'Content-Type': 'application/json', 'Authorization': 'Bearer testapikey'},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'panelId': 42, 'time': 60000, 'dashboardId': None},
json={'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'panelId': panelId, 'time': 60000},
verify=True,
)
assert sent_messages == 1
@@ -179,7 +182,7 @@ def test_send_messages_with_tags():
requests_mock.post.assert_called_once_with(
'https://example.com/api/annotations',
headers={'Content-Type': 'application/json', 'Authorization': 'Bearer testapikey'},
json={'tags': ['ansible'], 'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'panelId': None, 'time': 60000, 'dashboardId': None},
json={'tags': ['ansible'], 'text': 'test subject', 'isRegion': True, 'timeEnd': 120000, 'time': 60000},
verify=True,
)
assert sent_messages == 1
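The tests now parametrize `dashboardId` and `panelId` with `0`, a value a truthiness check would drop from the payload. A minimal sketch of the payload construction these assertions imply (an assumption about the backend, keying on `is not None` rather than truthiness; the actual `GrafanaBackend` code is not part of this diff):

```python
def build_annotation(subject, time_ms, time_end_ms, dashboard_id=None, panel_id=None, tags=None):
    body = {'text': subject, 'isRegion': True, 'timeEnd': time_end_ms, 'time': time_ms}
    # 'is not None' keeps a legitimate id of 0 in the payload; a bare
    # truthiness check would silently drop it, which is what the
    # parametrized 0 cases above guard against
    if dashboard_id is not None:
        body['dashboardId'] = dashboard_id
    if panel_id is not None:
        body['panelId'] = panel_id
    if tags:
        body['tags'] = tags
    return body


assert build_annotation('test subject', 60000, 120000, panel_id=0)['panelId'] == 0
assert 'dashboardId' not in build_annotation('test subject', 60000, 120000)
```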

View File

@@ -1,6 +1,6 @@
import pytest
from awx.main.models import InstanceGroup
from awx.main.scheduler.task_manager_models import TaskManagerInstanceGroups, TaskManagerInstances
class FakeMeta(object):
@@ -52,9 +52,9 @@ def sample_cluster():
ig_small = InstanceGroup(name='ig_small')
ig_large = InstanceGroup(name='ig_large')
default = InstanceGroup(name='default')
i1 = Instance(hostname='i1', capacity=200)
i2 = Instance(hostname='i2', capacity=200)
i3 = Instance(hostname='i3', capacity=200)
i1 = Instance(hostname='i1', capacity=200, node_type='hybrid')
i2 = Instance(hostname='i2', capacity=200, node_type='hybrid')
i3 = Instance(hostname='i3', capacity=200, node_type='hybrid')
ig_small.instances.add(i1)
ig_large.instances.add(i2, i3)
default.instances.add(i2)
@@ -63,59 +63,66 @@ def sample_cluster():
return stand_up_cluster
def test_committed_capacity(sample_cluster):
default, ig_large, ig_small = sample_cluster()
tasks = [Job(status='waiting', instance_group=default), Job(status='waiting', instance_group=ig_large), Job(status='waiting', instance_group=ig_small)]
capacities = InstanceGroup.objects.capacity_values(qs=[default, ig_large, ig_small], tasks=tasks, breakdown=True)
# Jobs submitted to either default or ig_large must count toward both
assert capacities['default']['committed_capacity'] == 43 * 2
assert capacities['ig_large']['committed_capacity'] == 43 * 2
assert capacities['ig_small']['committed_capacity'] == 43
@pytest.fixture
def create_ig_manager():
def _rf(ig_list, tasks):
instances = TaskManagerInstances(tasks, instances=set(inst for ig in ig_list for inst in ig.instance_list))
seed_igs = {}
for ig in ig_list:
seed_igs[ig.name] = {'instances': [instances[inst.hostname] for inst in ig.instance_list]}
instance_groups = TaskManagerInstanceGroups(instance_groups=seed_igs)
return instance_groups
return _rf
def test_running_capacity(sample_cluster):
@pytest.mark.parametrize('ig_name,consumed_capacity', [('default', 43), ('ig_large', 43 * 2), ('ig_small', 43)])
def test_running_capacity(sample_cluster, ig_name, consumed_capacity, create_ig_manager):
default, ig_large, ig_small = sample_cluster()
ig_list = [default, ig_large, ig_small]
tasks = [Job(status='running', execution_node='i1'), Job(status='running', execution_node='i2'), Job(status='running', execution_node='i3')]
capacities = InstanceGroup.objects.capacity_values(qs=[default, ig_large, ig_small], tasks=tasks, breakdown=True)
# The default group is only given 1 instance
assert capacities['default']['running_capacity'] == 43
# Large IG has 2 instances
assert capacities['ig_large']['running_capacity'] == 43 * 2
assert capacities['ig_small']['running_capacity'] == 43
instance_groups_mgr = create_ig_manager(ig_list, tasks)
assert instance_groups_mgr.get_consumed_capacity(ig_name) == consumed_capacity
def test_offline_node_running(sample_cluster):
def test_offline_node_running(sample_cluster, create_ig_manager):
"""
Assure that the algorithm doesn't explode if a job is marked running
on an offline node
"""
default, ig_large, ig_small = sample_cluster()
ig_small.instance_list[0].capacity = 0
tasks = [Job(status='running', execution_node='i1', instance_group=ig_small)]
capacities = InstanceGroup.objects.capacity_values(qs=[default, ig_large, ig_small], tasks=tasks)
assert capacities['ig_small']['consumed_execution_capacity'] == 43
tasks = [Job(status='running', execution_node='i1')]
instance_groups_mgr = create_ig_manager([default, ig_large, ig_small], tasks)
assert instance_groups_mgr.get_consumed_capacity('ig_small') == 43
assert instance_groups_mgr.get_remaining_capacity('ig_small') == 0
def test_offline_node_waiting(sample_cluster):
def test_offline_node_waiting(sample_cluster, create_ig_manager):
"""
Same but for a waiting job
"""
default, ig_large, ig_small = sample_cluster()
ig_small.instance_list[0].capacity = 0
tasks = [Job(status='waiting', instance_group=ig_small)]
capacities = InstanceGroup.objects.capacity_values(qs=[default, ig_large, ig_small], tasks=tasks)
assert capacities['ig_small']['consumed_execution_capacity'] == 43
tasks = [Job(status='waiting', execution_node='i1')]
instance_groups_mgr = create_ig_manager([default, ig_large, ig_small], tasks)
assert instance_groups_mgr.get_consumed_capacity('ig_small') == 43
assert instance_groups_mgr.get_remaining_capacity('ig_small') == 0
def test_RBAC_reduced_filter(sample_cluster):
def test_RBAC_reduced_filter(sample_cluster, create_ig_manager):
"""
User can see jobs that are running in `ig_small` and `ig_large` IGs,
but user does not have permission to see those actual instance groups.
Verify that this does not blow everything up.
"""
default, ig_large, ig_small = sample_cluster()
tasks = [Job(status='waiting', instance_group=default), Job(status='waiting', instance_group=ig_large), Job(status='waiting', instance_group=ig_small)]
capacities = InstanceGroup.objects.capacity_values(qs=[default], tasks=tasks, breakdown=True)
tasks = [Job(status='waiting', execution_node='i1'), Job(status='waiting', execution_node='i2'), Job(status='waiting', execution_node='i3')]
instance_groups_mgr = create_ig_manager([default], tasks)
# Cross-links between groups are not visible to the current user,
# so a naive accounting of capacities is returned instead
assert capacities['default']['committed_capacity'] == 43
assert instance_groups_mgr.get_consumed_capacity('default') == 43
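Across these tests every job contributes the same task impact, 43, on its execution node, and a group's consumed capacity is the total over its member instances. A toy version of the accounting the assertions imply (not the real `TaskManagerInstanceGroups` code):

```python
TASK_IMPACT = 43  # the per-job impact implied by every assertion above

def consumed_capacity(group_instances, tasks):
    # sum the impact of active tasks whose execution_node belongs to the group
    hostnames = {i['hostname'] for i in group_instances}
    return sum(
        TASK_IMPACT
        for t in tasks
        if t['status'] in ('running', 'waiting') and t['execution_node'] in hostnames
    )

ig_large = [{'hostname': 'i2'}, {'hostname': 'i3'}]
tasks = [
    {'status': 'running', 'execution_node': 'i2'},
    {'status': 'running', 'execution_node': 'i3'},
]
assert consumed_capacity(ig_large, tasks) == 86  # 43 * 2, as in test_running_capacity
```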

View File

@@ -988,7 +988,7 @@ class TestJobCredentials(TestJobExecution):
credential.inputs['password'] = encrypt_field(credential, 'password')
job.credentials.add(credential)
private_data_files = task.build_private_data_files(job, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(job, private_data_dir)
env = task.build_env(job, private_data_dir, private_data_files=private_data_files)
credential.credential_type.inject_credential(credential, env, {}, [], private_data_dir)
@@ -1058,7 +1058,7 @@ class TestJobCredentials(TestJobExecution):
credential.inputs[field] = encrypt_field(credential, field)
job.credentials.add(credential)
private_data_files = task.build_private_data_files(job, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(job, private_data_dir)
env = task.build_env(job, private_data_dir, private_data_files=private_data_files)
safe_env = build_safe_env(env)
credential.credential_type.inject_credential(credential, env, safe_env, [], private_data_dir)
@@ -1346,7 +1346,7 @@ class TestProjectUpdateGalaxyCredentials(TestJobExecution):
task.instance = project_update
env = task.build_env(project_update, private_data_dir)
if ignore:
assert env['ANSIBLE_GALAXY_IGNORE'] is True
assert env['ANSIBLE_GALAXY_IGNORE'] == 'True'
else:
assert 'ANSIBLE_GALAXY_IGNORE' not in env
@@ -1510,7 +1510,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = mocker.Mock(return_value=None)
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
assert 'AWS_ACCESS_KEY_ID' not in env
@@ -1530,7 +1530,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
safe_env = build_safe_env(env)
@@ -1554,7 +1554,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
safe_env = {}
@@ -1591,7 +1591,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
safe_env = build_safe_env(env)
@@ -1621,7 +1621,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
safe_env = build_safe_env(env)
@@ -1648,7 +1648,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
def run(expected_gce_zone):
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
safe_env = {}
credentials = task.build_credentials_list(inventory_update)
@@ -1682,7 +1682,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
path = to_host_path(env['OS_CLIENT_CONFIG_FILE'], private_data_dir)
@@ -1717,7 +1717,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_cloud_credential = get_cred
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
safe_env = build_safe_env(env)
@@ -1832,7 +1832,7 @@ class TestInventoryUpdateCredentials(TestJobExecution):
inventory_update.get_extra_credentials = mocker.Mock(return_value=[])
settings.AWX_TASK_ENV = {'FOO': 'BAR'}
private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
private_data_files, ssh_key_data = task.build_private_data_files(inventory_update, private_data_dir)
env = task.build_env(inventory_update, private_data_dir, private_data_files)
assert env['FOO'] == 'BAR'

View File

@@ -1113,33 +1113,18 @@ def deepmerge(a, b):
return b
def create_partition(tblname, start=None, end=None, partition_label=None, minutely=False):
"""Creates new partition table for events.
- start defaults to beginning of current hour
- end defaults to end of current hour
- partition_label defaults to YYYYMMDD_HH
def create_partition(tblname, start=None):
"""Creates new partition table for events. By default it covers the current hour."""
if start is None:
start = now()
start = start.replace(microsecond=0, second=0, minute=0)
end = start + timedelta(hours=1)
- minutely will create partitions that span _a single minute_ for testing purposes
"""
current_time = now()
if not start:
if minutely:
start = current_time.replace(microsecond=0, second=0)
else:
start = current_time.replace(microsecond=0, second=0, minute=0)
if not end:
if minutely:
end = start.replace(microsecond=0, second=0) + timedelta(minutes=1)
else:
end = start.replace(microsecond=0, second=0, minute=0) + timedelta(hours=1)
start_timestamp = str(start)
end_timestamp = str(end)
if not partition_label:
if minutely:
partition_label = start.strftime('%Y%m%d_%H%M')
else:
partition_label = start.strftime('%Y%m%d_%H')
partition_label = start.strftime('%Y%m%d_%H')
try:
with transaction.atomic():
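Under the simplification above, partitions always span a single hour and are labeled `YYYYMMDD_HH`. A sketch of the DDL a function like this would plausibly issue; the exact SQL in the real implementation may differ:

```python
from datetime import timedelta

from django.utils.timezone import now


def hourly_partition_sql(tblname, start=None):
    # sketch of the statement executed inside the transaction above
    if start is None:
        start = now()
    start = start.replace(microsecond=0, second=0, minute=0)
    end = start + timedelta(hours=1)
    label = start.strftime('%Y%m%d_%H')
    return (
        f"CREATE TABLE IF NOT EXISTS {tblname}_{label} "
        f"PARTITION OF {tblname} "
        f"FOR VALUES FROM ('{start}') TO ('{end}');"
    )
```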

View File

@@ -15,6 +15,7 @@
# scm_track_submodules: true/false
# roles_enabled: Value of the global setting to enable roles downloading
# collections_enabled: Value of the global setting to enable collections downloading
# galaxy_task_env: environment variables to use specifically for ansible-galaxy commands
# awx_version: Current running version of the awx or tower as a string
# awx_license_type: "open" for AWX; else presume Tower
@@ -154,67 +155,63 @@
gather_facts: false
connection: local
name: Install content with ansible-galaxy command if necessary
vars:
galaxy_task_env: # configure in settings
additional_collections_env:
# These environment variables are used for installing collections, in addition to galaxy_task_env
# setting the collections paths silences warnings
ANSIBLE_COLLECTIONS_PATHS: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections"
# Put the local tmp directory in same volume as collection destination
# otherwise, files cannot be moved across volumes, which would cause an error
ANSIBLE_LOCAL_TEMP: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/tmp"
tasks:
- name: Check content sync settings
debug:
msg: "Collection and role syncing disabled. Check the AWX_ROLES_ENABLED and AWX_COLLECTIONS_ENABLED settings and Galaxy credentials on the project's organization."
block:
- debug:
msg: >
Collection and role syncing disabled. Check the AWX_ROLES_ENABLED and
AWX_COLLECTIONS_ENABLED settings and Galaxy credentials on the project's organization.
- meta: end_play
when: not roles_enabled|bool and not collections_enabled|bool
tags:
- install_roles
- install_collections
- name:
meta: end_play
when: not roles_enabled|bool and not collections_enabled|bool
tags:
- install_roles
- install_collections
- block:
- name: fetch galaxy roles from requirements.(yml/yaml)
command: >
ansible-galaxy role install -r {{ item }}
--roles-path {{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_roles
{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}"
register: galaxy_result
with_fileglob:
- "{{project_path|quote}}/roles/requirements.yaml"
- "{{project_path|quote}}/roles/requirements.yml"
changed_when: "'was installed successfully' in galaxy_result.stdout"
environment:
ANSIBLE_FORCE_COLOR: false
GIT_SSH_COMMAND: "ssh -o StrictHostKeyChecking=no"
- name: fetch galaxy roles from requirements.(yml/yaml)
command: >
ansible-galaxy role install -r {{ item }}
--roles-path {{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_roles
{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}"
register: galaxy_result
with_fileglob:
- "{{project_path|quote}}/roles/requirements.yaml"
- "{{project_path|quote}}/roles/requirements.yml"
changed_when: "'was installed successfully' in galaxy_result.stdout"
environment: "{{ galaxy_task_env }}"
when: roles_enabled|bool
tags:
- install_roles
- block:
- name: fetch galaxy collections from collections/requirements.(yml/yaml)
command: >
ansible-galaxy collection install -r {{ item }}
--collections-path {{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections
{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}"
register: galaxy_collection_result
with_fileglob:
- "{{project_path|quote}}/collections/requirements.yaml"
- "{{project_path|quote}}/collections/requirements.yml"
- "{{project_path|quote}}/requirements.yaml"
- "{{project_path|quote}}/requirements.yml"
changed_when: "'Installing ' in galaxy_collection_result.stdout"
environment:
ANSIBLE_FORCE_COLOR: false
ANSIBLE_COLLECTIONS_PATHS: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections"
GIT_SSH_COMMAND: "ssh -o StrictHostKeyChecking=no"
# Put the local tmp directory in same volume as collection destination
# otherwise, files cannot be moved across volumes, which would cause an error
ANSIBLE_LOCAL_TEMP: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/tmp"
- name: fetch galaxy collections from collections/requirements.(yml/yaml)
command: >
ansible-galaxy collection install -r {{ item }}
--collections-path {{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections
{{ ' -' + 'v' * ansible_verbosity if ansible_verbosity else '' }}
args:
chdir: "{{project_path|quote}}"
register: galaxy_collection_result
with_fileglob:
- "{{project_path|quote}}/collections/requirements.yaml"
- "{{project_path|quote}}/collections/requirements.yml"
- "{{project_path|quote}}/requirements.yaml"
- "{{project_path|quote}}/requirements.yml"
changed_when: "'Installing ' in galaxy_collection_result.stdout"
environment: "{{ additional_collections_env | combine(galaxy_task_env) }}"
when:
- "ansible_version.full is version_compare('2.9', '>=')"
- collections_enabled|bool
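On the `combine` at the end of the collections task: Jinja's `combine` filter is a right-biased merge, so on conflicting keys the settings-provided `galaxy_task_env` wins over the collection-specific defaults. In Python dict terms (paths illustrative):

```python
additional_collections_env = {
    'ANSIBLE_COLLECTIONS_PATHS': '/projects/.__awx_cache/demo/stage/requirements_collections',
    'ANSIBLE_LOCAL_TEMP': '/projects/.__awx_cache/demo/stage/tmp',
}
galaxy_task_env = {
    'ANSIBLE_FORCE_COLOR': 'false',
    'GIT_SSH_COMMAND': 'ssh -o StrictHostKeyChecking=no',
}

# additional_collections_env | combine(galaxy_task_env): the right side wins on conflicts
merged = {**additional_collections_env, **galaxy_task_env}
assert merged['ANSIBLE_FORCE_COLOR'] == 'false'
```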

View File

@@ -561,6 +561,10 @@ ANSIBLE_INVENTORY_UNPARSED_FAILED = True
# Additional environment variables to be passed to the ansible subprocesses
AWX_TASK_ENV = {}
# Additional environment variables to apply when running ansible-galaxy commands
# to fetch Ansible content - roles and collections
GALAXY_TASK_ENV = {'ANSIBLE_FORCE_COLOR': 'false', 'GIT_SSH_COMMAND': "ssh -o StrictHostKeyChecking=no"}
# Rebuild Host Smart Inventory memberships.
AWX_REBUILD_SMART_MEMBERSHIP = False
@@ -940,6 +944,12 @@ AWX_CALLBACK_PROFILE = False
# Delete temporary directories created to store playbook run-time
AWX_CLEANUP_PATHS = True
# Allow ansible-runner to store env folder (may contain sensitive information)
AWX_RUNNER_OMIT_ENV_FILES = True
# Allow ansible-runner to save ansible output (may cause performance issues)
AWX_RUNNER_SUPPRESS_OUTPUT_FILE = True
# Delete completed work units in receptor
RECEPTOR_RELEASE_WORK = True
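All three settings introduced here are plain Python settings, so a deployment can override them in a custom settings file. An illustrative snippet; the file path and the extra proxy variable are examples, not defaults:

```python
# e.g. in a custom settings file such as /etc/tower/conf.d/custom.py (path illustrative)
GALAXY_TASK_ENV = {
    'ANSIBLE_FORCE_COLOR': 'false',
    'GIT_SSH_COMMAND': 'ssh -o StrictHostKeyChecking=no',
    'HTTPS_PROXY': 'http://proxy.example.com:3128',  # applies to ansible-galaxy runs only
}
AWX_RUNNER_OMIT_ENV_FILES = True        # keep the env folder out of saved private data
AWX_RUNNER_SUPPRESS_OUTPUT_FILE = True  # skip writing full ansible output to disk
```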

View File

@@ -95,7 +95,8 @@
"href",
"modifier",
"data-cy",
"fieldName"
"fieldName",
"splitButtonVariant"
],
"ignore": ["Ansible", "Tower", "JSON", "YAML", "lg", "hh:mm AM/PM", "Twilio"],
"ignoreComponent": [

View File

@@ -8,7 +8,7 @@
"compilerBabelOptions": {},
"fallbackLocales": { "default": "en"},
"format": "po",
"locales": ["en","es","fr","nl","zh","ja","zu"],
"locales": ["en","es","fr","ko","nl","zh","ja","zu"],
"orderBy": "messageId",
"pseudoLocale": "zu",
"rootDir": "./src",

View File

@@ -56,7 +56,7 @@ The UI is built using [ReactJS](https://reactjs.org/docs/getting-started.html) a
The AWX UI requires the following:
- Node >= 16.14.0 LTS
- Node >= 16.13.1 LTS
- NPM 8.x
Run the following to install all the dependencies:

View File

@@ -1,4 +1,4 @@
FROM node:16.14.0
FROM node:16.13.1
ARG NPMRC_FILE=.npmrc
ENV NPMRC_FILE=${NPMRC_FILE}
ARG TARGET='https://awx:8043'

View File

@@ -1,7 +1,7 @@
# AWX-UI
## Requirements
- node >= 16.14.0, npm >= 8.x make, git
- node >= 16.13.1, npm >= 8.x make, git
## Development
The API development server will need to be running. See [CONTRIBUTING.md](../../CONTRIBUTING.md).

View File

@@ -68,7 +68,7 @@
"react-scripts": "5.0.0"
},
"engines": {
"node": ">=16.14.0"
"node": ">=16.13.1"
}
},
"node_modules/@babel/code-frame": {
@@ -15538,9 +15538,9 @@
}
},
"node_modules/minimist": {
"version": "1.2.5",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
"integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw==",
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz",
"integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==",
"dev": true
},
"node_modules/mkdirp": {
@@ -34035,9 +34035,9 @@
}
},
"minimist": {
"version": "1.2.5",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.5.tgz",
"integrity": "sha512-FM9nNUYrRBAELZQT3xeZQ7fmMOBg6nWNmJKTcgsJeaLstP/UODVpGsr5OhXhhXg6f+qtJ8uiZ+PUxkDWcgIXLw==",
"version": "1.2.6",
"resolved": "https://registry.npmjs.org/minimist/-/minimist-1.2.6.tgz",
"integrity": "sha512-Jsjnk4bw3YJqYzbdyBiNsPWHPfO++UGG749Cxs6peCu5Xg4nrena6OVxOYxrQTqww0Jmwt+Ref8rggumkTLz9Q==",
"dev": true
},
"mkdirp": {

View File

@@ -3,7 +3,7 @@
"homepage": ".",
"private": true,
"engines": {
"node": ">=16.14.0"
"node": ">=16.13.1"
},
"dependencies": {
"@lingui/react": "3.9.0",

View File

@@ -19,6 +19,10 @@ class Jobs extends RunnableMixin(Base) {
readDetail(id) {
return this.http.get(`${this.baseUrl}${id}/`);
}
readChildrenSummary(id) {
return this.http.get(`${this.baseUrl}${id}/job_events/children_summary/`);
}
}
export default Jobs;

View File

@@ -52,13 +52,13 @@ export default function useAdHocDetailsStep(
validate: () => {
if (touched.module_name || touched.module_args) {
if (!values.module_name) {
setFieldError('module_name', t`This field is must not be blank.`);
setFieldError('module_name', t`This field must not be blank.`);
}
if (
['command', 'shell'].includes(values.module_name) &&
!values.module_args
) {
setFieldError('module_args', t`This field is must not be blank`);
setFieldError('module_args', t`This field must not be blank`);
}
}
},

View File

@@ -1,7 +1,6 @@
import React, { useEffect, useMemo, useState } from 'react';
import PropTypes from 'prop-types';
import styled from 'styled-components';
import { useLocation } from 'react-router-dom';
import { t } from '@lingui/macro';
import {
Button,
@@ -58,7 +57,6 @@ function DataListToolbar({
handleIsAnsibleFactsSelected,
isFilterCleared,
}) {
const { search } = useLocation();
const showExpandCollapse = onCompact && onExpand;
const [isKebabOpen, setIsKebabOpen] = useState(false);
const [isKebabModalOpen, setIsKebabModalOpen] = useState(false);
@@ -93,7 +91,7 @@ function DataListToolbar({
ouiaId={`${qsConfig.namespace}-list-toolbar`}
clearAllFilters={clearAllFilters}
collapseListedFiltersBreakpoint="lg"
clearFiltersButtonText={Boolean(search) && t`Clear all filters`}
clearFiltersButtonText={t`Clear all filters`}
>
<ToolbarContent>
{onExpandAll && (

View File

@@ -5,14 +5,6 @@ import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import DataListToolbar from './DataListToolbar';
import AddDropDownButton from '../AddDropDownButton/AddDropDownButton';
jest.mock('react-router-dom', () => ({
...jest.requireActual('react-router-dom'),
useLocation: () => ({
pathname: '/organizations',
search: 'template.name__icontains=name',
}),
}));
describe('<DataListToolbar />', () => {
let toolbar;

View File

@@ -1,5 +1,6 @@
import React from 'react';
import { node, bool, string } from 'prop-types';
import { oneOfType, node, bool, string } from 'prop-types';
import { TextListItem, TextListItemVariants } from '@patternfly/react-core';
import styled from 'styled-components';
import Popover from '../Popover';
@@ -81,7 +82,7 @@ Detail.propTypes = {
value: node,
fullWidth: bool,
alwaysVisible: bool,
helpText: string,
helpText: oneOfType([string, node]),
};
Detail.defaultProps = {
value: null,

View File

@@ -18,6 +18,7 @@ function DisassociateButton({
modalTitle = t`Disassociate?`,
onDisassociate,
verifyCannotDisassociate = true,
isProtectedInstanceGroup = false,
}) {
const [isOpen, setIsOpen] = useState(false);
const { isKebabified, onKebabModalChange } = useContext(KebabifiedContext);
@@ -37,7 +38,10 @@ function DisassociateButton({
return !item.summary_fields?.user_capabilities?.delete;
}
function cannotDisassociateInstances(item) {
return item.node_type === 'control';
return (
item.node_type === 'control' ||
(isProtectedInstanceGroup && item.node_type === 'hybrid')
);
}
const cannotDisassociate = itemsToDisassociate.some(
@@ -73,11 +77,7 @@ function DisassociateButton({
let isDisabled = false;
if (verifyCannotDisassociate) {
isDisabled =
itemsToDisassociate.length === 0 ||
itemsToDisassociate.some(cannotDisassociate);
} else {
isDisabled = itemsToDisassociate.length === 0;
isDisabled = itemsToDisassociate.some(cannotDisassociate);
}
// NOTE: Once PF supports tooltips on disabled elements,
@@ -89,7 +89,7 @@ function DisassociateButton({
<DropdownItem
key="add"
aria-label={t`disassociate`}
isDisabled={isDisabled}
isDisabled={isDisabled || !itemsToDisassociate.length}
component="button"
ouiaId="disassociate-tooltip"
onClick={() => setIsOpen(true)}
@@ -108,7 +108,7 @@ function DisassociateButton({
variant="secondary"
aria-label={t`Disassociate`}
onClick={() => setIsOpen(true)}
isDisabled={isDisabled}
isDisabled={isDisabled || !itemsToDisassociate.length}
>
{t`Disassociate`}
</Button>

View File

@@ -124,5 +124,21 @@ describe('<DisassociateButton />', () => {
);
expect(wrapper.find('button[disabled]')).toHaveLength(1);
});
test('should disable button when selected items contain instances thaat are hybrid and are inside a protected instances', () => {
const wrapper = mountWithContexts(
<DisassociateButton
onDisassociate={() => {}}
isProectedInstanceGroup
itemsToDelete={[
{
id: 1,
hostname: 'awx',
node_type: 'control',
},
]}
/>
);
expect(wrapper.find('button[disabled]')).toHaveLength(1);
});
});
});

View File

@@ -68,11 +68,12 @@ export function toQueryString(config, searchParams = {}) {
/**
* Escape a string with double quote in case there was a white space
* @param {string} key The key of the value to be parsed
* @param {string} value A string to be parsed
* @return {string} string
*/
const escapeString = (value) => {
if (verifySpace(value)) {
const escapeString = (key, value) => {
if (verifySpace(value) || key.includes('regex')) {
return `"${value}"`;
}
return value;
@@ -95,9 +96,11 @@ export function toHostFilter(searchParams = {}) {
.sort()
.flatMap((key) => {
if (Array.isArray(searchParams[key])) {
return searchParams[key].map((val) => `${key}=${escapeString(val)}`);
return searchParams[key].map(
(val) => `${key}=${escapeString(key, val)}`
);
}
return `${key}=${escapeString(searchParams[key])}`;
return `${key}=${escapeString(key, searchParams[key])}`;
});
const filteredSearchParams = flattenSearchParams.filter(

View File

@@ -136,6 +136,17 @@ describe('toHostFilter', () => {
);
});
test('should escape name__regex and name__iregex', () => {
const object = {
or__name__regex: '(t|e)st',
or__name__iregex: '(f|o)',
or__name: 'foo',
};
expect(toHostFilter(object)).toEqual(
'name=foo or name__iregex="(f|o)" or name__regex="(t|e)st"'
);
});
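The quoting rule the new test pins down: a value is wrapped in double quotes when it contains whitespace or when the key is a regex lookup, since characters like `(` and `|` would otherwise break `host_filter` parsing. A rough Python equivalent of `escapeString` for illustration:

```python
def escape_string(key, value):
    # approximation of the UI helper above: quote values with whitespace,
    # and always quote regex lookups
    if ' ' in value or 'regex' in key:
        return f'"{value}"'
    return value


params = {'or__name__regex': '(t|e)st', 'or__name': 'foo'}
print(' or '.join(f"{k[len('or__'):]}={escape_string(k, v)}" for k, v in sorted(params.items())))
# -> name=foo or name__regex="(t|e)st"
```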
test('should return a host filter with or conditional when value is array', () => {
const object = {
or__groups__id: ['1', '2'],

View File

@@ -129,9 +129,6 @@ function PaginatedTable({
onSetPage={handleSetPage}
onPerPageSelect={handleSetPageSize}
ouiaId="top-pagination"
titles={{
paginationTitle: t`Top Pagination`,
}}
/>
);

View File

@@ -10,6 +10,8 @@ const PopoverButton = styled.button`
padding: var(--pf-global--spacer--xs);
margin: -(var(--pf-global--spacer--xs));
font-size: var(--pf-global--FontSize--sm);
--pf-c-form__group-label-help--Color: var(--pf-global--Color--200);
--pf-c-form__group-label-help--hover--Color: var(--pf-global--Color--100);
`;
function Popover({ ariaLabel, content, header, id, maxWidth, ...rest }) {

View File

@@ -195,6 +195,11 @@ function AdvancedSearch({
};
const renderTextInput = () => {
let placeholderText;
if (keySelection === 'labels' && lookupSelection === 'search') {
placeholderText = 'e.g. label_1,label_2';
}
if (isTextInputDisabled) {
return (
<Tooltip
@@ -222,6 +227,7 @@ function AdvancedSearch({
value={(!keySelection && t`First, select a key`) || searchValue}
onChange={setSearchValue}
onKeyDown={handleAdvancedTextKeyDown}
placeholder={placeholderText}
/>
);
};
@@ -254,7 +260,6 @@ function AdvancedSearch({
selections={keySelection}
isOpen={isKeyDropdownOpen}
placeholderText={t`Key`}
isCreatable
isGrouped
onCreateOption={setKeySelection}
maxHeight={maxSelectHeight}

View File

@@ -1,5 +1,5 @@
import 'styled-components/macro';
import React, { useState } from 'react';
import React, { useState, useEffect } from 'react';
import PropTypes from 'prop-types';
import { t } from '@lingui/macro';
@@ -65,6 +65,26 @@ function Search({
const [searchValue, setSearchValue] = useState('');
const [isFilterDropdownOpen, setIsFilterDropdownOpen] = useState(false);
const params = parseQueryString(qsConfig, location.search);
if (params?.host_filter) {
params.ansible_facts = params.host_filter.substring(
'ansible_facts__'.length
);
delete params.host_filter;
}
const searchChips = getChipsByKey(params, columns, qsConfig);
const [chipsByKey, setChipsByKey] = useState(
JSON.parse(JSON.stringify(searchChips))
);
useEffect(() => {
Object.keys(chipsByKey).forEach((el) => {
chipsByKey[el].chips = [];
});
setChipsByKey({ ...chipsByKey, ...searchChips });
}, [location.search]); // eslint-disable-line react-hooks/exhaustive-deps
const handleDropdownSelect = ({ target }) => {
const { key: actualSearchKey } = columns.find(
({ name }) => name === target.innerText
@@ -98,15 +118,6 @@ function Search({
}
};
const params = parseQueryString(qsConfig, location.search);
if (params?.host_filter) {
params.ansible_facts = params.host_filter.substring(
'ansible_facts__'.length
);
delete params.host_filter;
}
const chipsByKey = getChipsByKey(params, columns, qsConfig);
const { name: searchColumnName } = columns.find(
({ key }) => key === searchKey
);
@@ -179,7 +190,7 @@ function Search({
onSelect={(event, selection) =>
handleFilterDropdownSelect(key, event, selection)
}
selections={chipsByKey[key].chips.map((chip) => {
selections={chipsByKey[key]?.chips.map((chip) => {
const [, ...value] = chip.key.split(':');
return value.join(':');
})}
@@ -258,7 +269,6 @@ function Search({
{/* Add a ToolbarFilter for any key that doesn't have its own
search column so the chips show up */}
{Object.keys(chipsByKey)
.filter((val) => chipsByKey[val].chips.length > 0)
.filter((val) => columns.map((val2) => val2.key).indexOf(val) === -1)
.map((leftoverKey) => (
<ToolbarFilter

View File

@@ -1,18 +1,15 @@
import React from 'react';
import React, { useState } from 'react';
import PropTypes from 'prop-types';
import {
Button,
DataListAction,
DragDrop,
Droppable,
Draggable,
DataListItemRow,
DataListItemCells,
DataList,
DataListAction,
DataListItem,
DataListCell,
DataListItemRow,
DataListControl,
DataListDragButton,
DataListItemCells,
} from '@patternfly/react-core';
import { TimesIcon } from '@patternfly/react-icons';
import styled from 'styled-components';
@@ -26,85 +23,112 @@ const RemoveActionSection = styled(DataListAction)`
`;
function DraggableSelectedList({ selected, onRemove, onRowDrag }) {
const removeItem = (item) => {
onRemove(selected.find((i) => i.name === item));
const [liveText, setLiveText] = useState('');
const [id, setId] = useState('');
const [isDragging, setIsDragging] = useState(false);
const onDragStart = (newId) => {
setId(newId);
setLiveText(t`Dragging started for item id: ${newId}.`);
setIsDragging(true);
};
function reorder(list, startIndex, endIndex) {
const result = Array.from(list);
const [removed] = result.splice(startIndex, 1);
result.splice(endIndex, 0, removed);
return result;
}
const onDragMove = (oldIndex, newIndex) => {
setLiveText(
t`Dragging item ${id}. Item with index ${oldIndex} is now at index ${newIndex}.`
);
};
const dragItem = (item, dest) => {
if (!dest || item.index === dest.index) {
return false;
}
const onDragCancel = () => {
setLiveText(t`Dragging cancelled. List is unchanged.`);
setIsDragging(false);
};
const newItems = reorder(selected, item.index, dest.index);
onRowDrag(newItems);
return true;
const onDragFinish = (newItemOrder) => {
const selectedItems = newItemOrder.map((item) =>
selected.find((i) => i.name === item)
);
onRowDrag(selectedItems);
setIsDragging(false);
};
const removeItem = (item) => {
onRemove(selected.find((i) => i.name === item));
};
if (selected.length <= 0) {
return null;
}
const orderedList = selected.map((item) => item?.name);
return (
<DragDrop onDrop={dragItem}>
<Droppable>
<DataList data-cy="draggable-list">
{selected.map(({ name: label, id }, index) => {
const rowPosition = index + 1;
return (
<Draggable value={id} key={rowPosition}>
<DataListItem>
<DataListItemRow>
<DataListControl>
<DataListDragButton
isDisabled={selected.length < 2}
data-cy={`reorder-${label}`}
/>
</DataListControl>
<DataListItemCells
dataListCells={[
<DataListCell key={label}>
<span
id={rowPosition}
>{`${rowPosition}. ${label}`}</span>
</DataListCell>,
]}
/>
<RemoveActionSection>
<Button
onClick={() => removeItem(label)}
variant="plain"
aria-label={t`Remove`}
ouiaId={`draggable-list-remove-${label}`}
>
<TimesIcon />
</Button>
</RemoveActionSection>
</DataListItemRow>
</DataListItem>
</Draggable>
);
})}
</DataList>
</Droppable>
</DragDrop>
<>
<DataList
aria-label={t`Draggable list to reorder and remove selected items.`}
data-cy="draggable-list"
itemOrder={orderedList}
onDragCancel={onDragCancel}
onDragFinish={onDragFinish}
onDragMove={onDragMove}
onDragStart={onDragStart}
>
{orderedList.map((label, index) => {
const rowPosition = index + 1;
return (
<DataListItem id={label} key={rowPosition}>
<DataListItemRow>
<DataListControl>
<DataListDragButton
aria-label={t`Reorder`}
aria-labelledby={rowPosition}
aria-describedby={t`Press space or enter to begin dragging,
and use the arrow keys to navigate up or down.
Press enter to confirm the drag, or any other key to
cancel the drag operation.`}
aria-pressed="false"
data-cy={`reorder-${label}`}
isDisabled={selected.length === 1}
/>
</DataListControl>
<DataListItemCells
dataListCells={[
<DataListCell key={label}>
<span id={rowPosition}>{`${rowPosition}. ${label}`}</span>
</DataListCell>,
]}
/>
<RemoveActionSection aria-label={t`Actions`} id={rowPosition}>
<Button
onClick={() => removeItem(label)}
variant="plain"
aria-label={t`Remove`}
ouiaId={`draggable-list-remove-${label}`}
isDisabled={isDragging}
>
<TimesIcon />
</Button>
</RemoveActionSection>
</DataListItemRow>
</DataListItem>
);
})}
</DataList>
<div className="pf-screen-reader" aria-live="assertive">
{liveText}
</div>
</>
);
}
const SelectedListItem = PropTypes.shape({
const ListItem = PropTypes.shape({
id: PropTypes.number.isRequired,
name: PropTypes.string.isRequired,
});
DraggableSelectedList.propTypes = {
onRemove: PropTypes.func,
onRowDrag: PropTypes.func,
selected: PropTypes.arrayOf(SelectedListItem),
selected: PropTypes.arrayOf(ListItem),
};
DraggableSelectedList.defaultProps = {
onRemove: () => null,

View File

@@ -1,8 +1,14 @@
// These tests have been turned off because they fail due to a console warning coming from PatternFly.
// The warning is that the onDrag api has been deprecated. Its replacement is the DragDrop component;
// however, that component is not keyboard accessible. Therefore we have elected to turn off these tests.
// https://github.com/patternfly/patternfly-react/issues/6317
import React from 'react';
import { act } from 'react-dom/test-utils';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import DraggableSelectedList from './DraggableSelectedList';
describe('<DraggableSelectedList />', () => {
describe.skip('<DraggableSelectedList />', () => {
let wrapper;
afterEach(() => {
jest.clearAllMocks();
@@ -27,16 +33,16 @@ describe('<DraggableSelectedList />', () => {
/>
);
expect(wrapper.find('DraggableSelectedList').length).toBe(1);
expect(wrapper.find('Draggable').length).toBe(2);
expect(wrapper.find('DataListItem').length).toBe(2);
expect(
wrapper
.find('Draggable')
.find('DataListItem DataListCell')
.first()
.containsMatchingElement(<span>1. foo</span>)
).toEqual(true);
expect(
wrapper
.find('Draggable')
.find('DataListItem DataListCell')
.last()
.containsMatchingElement(<span>2. bar</span>)
).toEqual(true);
@@ -64,10 +70,64 @@ describe('<DraggableSelectedList />', () => {
wrapper = mountWithContexts(
<DraggableSelectedList selected={mockSelected} onRemove={onRemove} />
);
wrapper.find('Button[aria-label="Remove"]').simulate('click');
expect(
wrapper
.find('DataListDragButton[aria-label="Reorder"]')
.prop('isDisabled')
).toBe(true);
wrapper
.find('DataListItem[id="foo"] Button[aria-label="Remove"]')
.simulate('click');
expect(onRemove).toBeCalledWith({
id: 1,
name: 'foo',
});
});
test('should disable remove button when dragging item', () => {
const mockSelected = [
{
id: 1,
name: 'foo',
},
{
id: 2,
name: 'bar',
},
];
wrapper = mountWithContexts(
<DraggableSelectedList
selected={mockSelected}
onRemove={() => {}}
onRowDrag={() => {}}
/>
);
expect(
wrapper.find('Button[aria-label="Remove"]').at(0).prop('isDisabled')
).toBe(false);
expect(
wrapper.find('Button[aria-label="Remove"]').at(1).prop('isDisabled')
).toBe(false);
act(() => {
wrapper.find('DataList').prop('onDragStart')();
});
wrapper.update();
expect(
wrapper.find('Button[aria-label="Remove"]').at(0).prop('isDisabled')
).toBe(true);
expect(
wrapper.find('Button[aria-label="Remove"]').at(1).prop('isDisabled')
).toBe(true);
act(() => {
wrapper.find('DataList').prop('onDragCancel')();
});
wrapper.update();
expect(
wrapper.find('Button[aria-label="Remove"]').at(0).prop('isDisabled')
).toBe(false);
expect(
wrapper.find('Button[aria-label="Remove"]').at(1).prop('isDisabled')
).toBe(false);
});
});

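For reference, describe.skip disables the whole suite while keeping it visible in Jest's report as skipped, so the coverage gap stays documented. A minimal illustration:

// Jest lists the suite but executes none of its tests.
describe.skip('<DraggableSelectedList />', () => {
  test('never runs', () => {
    expect(true).toBe(false); // would fail if the suite were active
  });
});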
View File

@@ -6,6 +6,7 @@ import {
ExclamationTriangleIcon,
ClockIcon,
MinusCircleIcon,
InfoCircleIcon,
} from '@patternfly/react-icons';
const Spin = keyframes`
@@ -23,6 +24,8 @@ const RunningIcon = styled(SyncAltIcon)`
RunningIcon.displayName = 'RunningIcon';
const icons = {
approved: CheckCircleIcon,
denied: InfoCircleIcon,
success: CheckCircleIcon,
healthy: CheckCircleIcon,
successful: CheckCircleIcon,

View File

@@ -7,6 +7,8 @@ import { Label, Tooltip } from '@patternfly/react-core';
import icons from '../StatusIcon/icons';
const colors = {
approved: 'green',
denied: 'red',
success: 'green',
successful: 'green',
ok: 'green',
@@ -17,14 +19,17 @@ const colors = {
running: 'blue',
pending: 'blue',
skipped: 'blue',
timedOut: 'red',
waiting: 'grey',
disabled: 'grey',
canceled: 'orange',
changed: 'orange',
};
export default function StatusLabel({ status, tooltipContent = '' }) {
export default function StatusLabel({ status, tooltipContent = '', children }) {
const upperCaseStatus = {
approved: t`Approved`,
denied: t`Denied`,
success: t`Success`,
healthy: t`Healthy`,
successful: t`Successful`,
@@ -35,6 +40,7 @@ export default function StatusLabel({ status, tooltipContent = '' }) {
running: t`Running`,
pending: t`Pending`,
skipped: t`Skipped`,
timedOut: t`Timed out`,
waiting: t`Waiting`,
disabled: t`Disabled`,
canceled: t`Canceled`,
@@ -46,7 +52,7 @@ export default function StatusLabel({ status, tooltipContent = '' }) {
const renderLabel = () => (
<Label variant="outline" color={color} icon={Icon ? <Icon /> : null}>
{label}
{children || label}
</Label>
);
@@ -65,6 +71,8 @@ export default function StatusLabel({ status, tooltipContent = '' }) {
StatusLabel.propTypes = {
status: oneOf([
'approved',
'denied',
'success',
'successful',
'ok',
@@ -75,6 +83,7 @@ StatusLabel.propTypes = {
'running',
'pending',
'skipped',
'timedOut',
'waiting',
'disabled',
'canceled',

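With the {children || label} change above, StatusLabel can show arbitrary content while keeping the status-driven icon and color. A short sketch of both modes, matching the test added below:

// Default: renders the localized label for the status ("Denied").
<StatusLabel status="denied" />

// With children: renders the custom text, still styled red with the denied icon.
<StatusLabel status="denied">3 denied</StatusLabel>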
View File

@@ -79,4 +79,11 @@ describe('StatusLabel', () => {
expect(wrapper.find('Tooltip')).toHaveLength(1);
expect(wrapper.find('Tooltip').prop('content')).toEqual('Foo');
});
test('should render children', () => {
const wrapper = mount(
<StatusLabel tooltipContent="Foo" status="success" children="children" />
);
expect(wrapper.text()).toEqual('children');
});
});

View File

@@ -13,7 +13,6 @@ import getResourceAccessConfig from './getResourceAccessConfig';
const Grid = styled.div`
display: grid;
grid-gap: 20px;
grid-template-columns: 33% 33% 33%;
grid-template-columns: repeat(auto-fill, minmax(250px, 1fr));
`;

View File

@@ -1,5 +1,5 @@
import { i18n } from '@lingui/core';
import { en, fr, es, nl, ja, zh, zu } from 'make-plural/plurals';
import { en, fr, es, ko, nl, ja, zh, zu } from 'make-plural/plurals';
export const locales = {
en: 'English',
@@ -7,6 +7,7 @@ export const locales = {
zu: 'Zulu',
fr: 'French',
es: 'Spanish',
ko: 'Korean',
zh: 'Chinese',
nl: 'Dutch',
};
@@ -15,6 +16,7 @@ i18n.loadLocaleData({
en: { plurals: en },
fr: { plurals: fr },
es: { plurals: es },
ko: { plurals: ko },
nl: { plurals: nl },
ja: { plurals: ja },
zh: { plurals: zh },

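Adding a language touches two registrations: a display name in locales and plural rules passed to i18n.loadLocaleData. A condensed sketch of the pattern this diff applies for Korean:

import { i18n } from '@lingui/core';
import { ko } from 'make-plural/plurals';

export const locales = { ko: 'Korean' /* ...other supported languages */ };

// Plural rules let the t`` macros pick the right plural form for ko.
i18n.loadLocaleData({
  ko: { plurals: ko },
});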
File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,13 +1,13 @@
import React, { useState, useEffect, useCallback } from 'react';
import { useLocation, useHistory } from 'react-router-dom';
import styled from 'styled-components';
import { t } from '@lingui/macro';
import {
Card,
PageSection,
PageSectionVariants,
SelectGroup,
Select,
Select as PFSelect,
SelectVariant,
SelectOption,
Title,
@@ -26,6 +26,14 @@ import { ActivityStreamAPI } from 'api';
import ActivityStreamListItem from './ActivityStreamListItem';
const Select = styled(PFSelect)`
&& {
width: auto;
white-space: nowrap;
max-height: 480px;
}
`;
function ActivityStream() {
const { light } = PageSectionVariants;
@@ -116,8 +124,6 @@ function ActivityStream() {
{t`Activity Stream type selector`}
</span>
<Select
width="250px"
maxHeight="480px"
variant={SelectVariant.single}
aria-labelledby="grouped-type-select-id"
typeAheadAriaLabel={t`Select an activity type`}

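The width and max-height are moved off the PatternFly Select props and into a styled-components wrapper; the doubled ampersand repeats the generated class name, raising specificity so these rules win over PatternFly's defaults. The pattern in isolation:

import styled from 'styled-components';
import { Select as PFSelect } from '@patternfly/react-core';

// `&&` bumps specificity above PatternFly's own styles.
const Select = styled(PFSelect)`
  && {
    width: auto;
    white-space: nowrap;
    max-height: 480px;
  }
`;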
View File

@@ -275,10 +275,11 @@ function InstanceDetails({ setBreadcrumb, instanceGroup }) {
</Tooltip>
{me.is_superuser && instance.node_type !== 'control' && (
<DisassociateButton
verifyCannotDisassociate={!me.is_superuser}
verifyCannotDisassociate={instanceGroup.name === 'controlplane'}
key="disassociate"
onDisassociate={disassociateInstance}
itemsToDisassociate={[instance]}
isProtectedInstanceGroup={instanceGroup.name === 'controlplane'}
modalTitle={t`Disassociate instance from instance group?`}
/>
)}

View File

@@ -31,7 +31,7 @@ const QS_CONFIG = getQSConfig('instance', {
order_by: 'hostname',
});
function InstanceList() {
function InstanceList({ instanceGroup }) {
const [isModalOpen, setIsModalOpen] = useState(false);
const location = useLocation();
const { id: instanceGroupId } = useParams();
@@ -224,13 +224,15 @@ function InstanceList() {
]
: []),
<DisassociateButton
verifyCannotDisassociate={selected.some(
(s) => s.node_type === 'control'
)}
verifyCannotDisassociate={
selected.some((s) => s.node_type === 'control') ||
instanceGroup.name === 'controlplane'
}
key="disassociate"
onDisassociate={handleDisassociate}
itemsToDisassociate={selected}
modalTitle={t`Disassociate instance from instance group?`}
isProtectedInstanceGroup={instanceGroup.name === 'controlplane'}
/>,
<HealthCheckButton
isDisabled={!canAdd}

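Both call sites now treat the controlplane instance group as protected: disassociation is verified-blocked when any selected instance is a control node or when the group itself is controlplane. The combined guard, condensed:

// True when disassociation must be blocked for this selection.
const cannotDisassociate =
  selected.some((s) => s.node_type === 'control') ||
  instanceGroup.name === 'controlplane';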
View File

@@ -123,7 +123,7 @@ describe('<InstanceList/>', () => {
await act(async () => {
wrapper = mountWithContexts(
<Route path="/instance_groups/:id/instances">
<InstanceList />
<InstanceList instanceGroup={{ name: 'Alex' }} />
</Route>,
{
context: {

View File

@@ -21,7 +21,7 @@ function Instances({ setBreadcrumb, instanceGroup }) {
/>
</Route>
<Route key="instanceList" path="/instance_groups/:id/instances">
<InstanceList />
<InstanceList instanceGroup={instanceGroup} />
</Route>
</Switch>
);

View File

@@ -92,6 +92,89 @@ function JobDetail({ job, inventorySourceLabels }) {
<Link to={`/instance_groups/container_group/${item.id}`}>{item.name}</Link>
);
const renderInventoryDetail = () => {
if (
job.type !== 'project_update' &&
job.type !== 'system_job' &&
job.type !== 'workflow_job'
) {
return inventory ? (
<Detail
dataCy="job-inventory"
label={t`Inventory`}
value={
<Link
to={
inventory.kind === 'smart'
? `/inventories/smart_inventory/${inventory.id}`
: `/inventories/inventory/${inventory.id}`
}
>
{inventory.name}
</Link>
}
/>
) : (
<DeletedDetail label={t`Inventory`} />
);
}
if (job.type === 'workflow_job') {
return inventory ? (
<Detail
dataCy="job-inventory"
label={t`Inventory`}
value={
<Link
to={
inventory.kind === 'smart'
? `/inventories/smart_inventory/${inventory.id}`
: `/inventories/inventory/${inventory.id}`
}
>
{inventory.name}
</Link>
}
/>
) : null;
}
return null;
};
const renderProjectDetail = () => {
if (
job.type !== 'ad_hoc_command' &&
job.type !== 'inventory_update' &&
job.type !== 'system_job' &&
job.type !== 'workflow_job'
) {
return project ? (
<>
<Detail
dataCy="job-project"
label={t`Project`}
value={<Link to={`/projects/${project.id}`}>{project.name}</Link>}
/>
<Detail
dataCy="job-project-status"
label={t`Project Status`}
value={
projectUpdate ? (
<Link to={`/jobs/project/${projectUpdate.id}`}>
<StatusLabel status={project.status} />
</Link>
) : (
<StatusLabel status={project.status} />
)
}
/>
</>
) : (
<DeletedDetail label={t`Project`} />
);
}
return null;
};
return (
<CardBody>
<DetailList>
@@ -159,25 +242,7 @@ function JobDetail({ job, inventorySourceLabels }) {
value={jobTypes[job.type]}
/>
<LaunchedByDetail dataCy="job-launched-by" job={job} />
{inventory ? (
<Detail
dataCy="job-inventory"
label={t`Inventory`}
value={
<Link
to={
inventory.kind === 'smart'
? `/inventories/smart_inventory/${inventory.id}`
: `/inventories/inventory/${inventory.id}`
}
>
{inventory.name}
</Link>
}
/>
) : (
<DeletedDetail label={t`Inventory`} />
)}
{renderInventoryDetail()}
{inventory_source && (
<>
<Detail
@@ -218,30 +283,7 @@ function JobDetail({ job, inventorySourceLabels }) {
}
/>
)}
{project ? (
<>
<Detail
dataCy="job-project"
label={t`Project`}
value={<Link to={`/projects/${project.id}`}>{project.name}</Link>}
/>
<Detail
dataCy="job-project-status"
label={t`Project Status`}
value={
projectUpdate ? (
<Link to={`/jobs/project/${projectUpdate.id}`}>
<StatusLabel status={project.status} />
</Link>
) : (
<StatusLabel status={project.status} />
)
}
/>
</>
) : (
<DeletedDetail label={t`Project`} />
)}
{renderProjectDetail()}
{scmBranch && (
<Detail
dataCy="source-control-branch"

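The inline ternaries are extracted into renderInventoryDetail and renderProjectDetail, which gate each detail by job type. Restated as hypothetical helpers (the names are ours, not the component's):

// Inventory: hidden for project_update and system_job; for workflow_job it renders
// only when an inventory exists (no "Deleted" fallback); other types fall back to
// a DeletedDetail when the inventory is gone.
const shouldShowInventory = (jobType) =>
  jobType !== 'project_update' && jobType !== 'system_job';

// Project: hidden for these four types; otherwise shown with a DeletedDetail fallback.
const shouldShowProject = (jobType) =>
  !['ad_hoc_command', 'inventory_update', 'system_job', 'workflow_job'].includes(
    jobType
  );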
View File

@@ -63,7 +63,6 @@ describe('<JobDetail />', () => {
'Instance Group',
mockJobData.summary_fields.instance_group.name
);
assertDetail('Job Slice', '0/1');
assertDetail('Credentials', 'SSH: Demo Credential');
assertDetail('Machine Credential', 'SSH: Machine cred');
assertDetail('Source Control Branch', 'main');
@@ -104,6 +103,23 @@ describe('<JobDetail />', () => {
expect(projectStatusLabel.prop('status')).toEqual('successful');
});
test('should display Deleted for Inventory and Project for job type run', () => {
const job = {
...mockJobData,
summary_fields: {
...mockJobData.summary_fields,
project: null,
inventory: null,
},
project: null,
inventory: null,
};
wrapper = mountWithContexts(<JobDetail job={job} />);
expect(wrapper.find(`DeletedDetail[label="Project"]`).length).toBe(1);
expect(wrapper.find(`DeletedDetail[label="Inventory"]`).length).toBe(1);
});
test('should not display finished date', () => {
wrapper = mountWithContexts(
<JobDetail
@@ -146,6 +162,7 @@ describe('<JobDetail />', () => {
assertDetail('Module Name', 'command');
assertDetail('Module Arguments', 'echo hello_world');
assertDetail('Job Type', 'Run Command');
expect(wrapper.find(`Detail[label="Project"]`).length).toBe(0);
});
test('should display source data', () => {
@@ -182,6 +199,7 @@ describe('<JobDetail />', () => {
/>
);
assertDetail('Source', 'Sourced from Project');
expect(wrapper.find(`Detail[label="Project"]`).length).toBe(0);
});
test('should show schedule that launched workflow job', async () => {
@@ -215,7 +233,7 @@ describe('<JobDetail />', () => {
).toHaveLength(1);
});
test('should hide "Launched By" detail for JT launched from a workflow launched by a schedule', async () => {
test('should hide "Launched By" detail for JT launched from a workflow launched by a schedule', () => {
wrapper = mountWithContexts(
<JobDetail
job={{
@@ -317,6 +335,7 @@ describe('<JobDetail />', () => {
expect(
wrapper.find('Button[aria-label="Cancel Demo Job Template"]')
).toHaveLength(0);
expect(wrapper.find(`Detail[label="Project"]`).length).toBe(0);
});
test('should not show cancel job button, job completed', async () => {

View File

@@ -1,7 +1,6 @@
import React, { useEffect, useState } from 'react';
import { Modal, Tab, Tabs, TabTitleText } from '@patternfly/react-core';
import PropTypes from 'prop-types';
import { t } from '@lingui/macro';
import { encode } from 'html-entities';
import StatusLabel from '../../../components/StatusLabel';
@@ -37,10 +36,8 @@ const processEventStatus = (event) => {
const processCodeEditorValue = (value) => {
let codeEditorValue;
if (value === undefined) {
codeEditorValue = false;
} else if (value === '') {
codeEditorValue = ' ';
if (!value) {
codeEditorValue = '';
} else if (typeof value === 'string') {
codeEditorValue = encode(value);
} else {
@@ -49,8 +46,8 @@ const processCodeEditorValue = (value) => {
return codeEditorValue;
};
const processStdOutValue = (hostEvent) => {
const taskAction = hostEvent?.event_data?.taskAction;
const getStdOutValue = (hostEvent) => {
const taskAction = hostEvent?.event_data?.task_action;
const res = hostEvent?.event_data?.res;
let stdOut;
@@ -61,8 +58,8 @@ const processStdOutValue = (hostEvent) => {
res.results &&
Array.isArray(res.results)
) {
[stdOut] = res.results;
} else if (res) {
stdOut = res.results.join('\n');
} else if (res?.stdout) {
stdOut = res.stdout;
}
return stdOut;
@@ -81,8 +78,8 @@ function HostEventModal({ onClose, hostEvent = {}, isOpen = false }) {
};
const jsonObj = processCodeEditorValue(hostEvent?.event_data?.res);
const stdErr = processCodeEditorValue(hostEvent?.event_data?.res?.stderr);
const stdOut = processCodeEditorValue(processStdOutValue(hostEvent));
const stdErr = hostEvent?.event_data?.res?.stderr;
const stdOut = processCodeEditorValue(getStdOutValue(hostEvent));
return (
<Modal
@@ -147,13 +144,13 @@ function HostEventModal({ onClose, hostEvent = {}, isOpen = false }) {
<ContentEmpty title={t`No JSON Available`} />
)}
</Tab>
<Tab
eventKey={2}
title={<TabTitleText>{t`Standard Out`}</TabTitleText>}
aria-label={t`Standard out tab`}
ouiaId="standard-out-tab"
>
{activeTabKey === 2 && stdOut ? (
{stdOut?.length ? (
<Tab
eventKey={2}
title={<TabTitleText>{t`Output`}</TabTitleText>}
aria-label={t`Output tab`}
ouiaId="standard-out-tab"
>
<CodeEditor
mode="javascript"
readOnly
@@ -162,17 +159,15 @@ function HostEventModal({ onClose, hostEvent = {}, isOpen = false }) {
rows={20}
hasErrors={false}
/>
) : (
<ContentEmpty title={t`No Standard Out Available`} />
)}
</Tab>
<Tab
eventKey={3}
title={<TabTitleText>{t`Standard Error`}</TabTitleText>}
aria-label={t`Standard error tab`}
ouiaId="standard-error-tab"
>
{activeTabKey === 3 && stdErr ? (
</Tab>
) : null}
{stdErr?.length ? (
<Tab
eventKey={3}
title={<TabTitleText>{t`Standard Error`}</TabTitleText>}
aria-label={t`Standard error tab`}
ouiaId="standard-error-tab"
>
<CodeEditor
mode="javascript"
readOnly
@@ -181,10 +176,8 @@ function HostEventModal({ onClose, hostEvent = {}, isOpen = false }) {
hasErrors={false}
rows={20}
/>
) : (
<ContentEmpty title={t`No Standard Error Available`} />
)}
</Tab>
</Tab>
) : null}
</Tabs>
</Modal>
);

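Two behaviors change here: stdout is now derived from event_data.res (joining res.results with newlines when a module returns a list, falling back to res.stdout), and the Output and Standard Error tabs render only when content exists, replacing the old always-present tabs with empty states. A simplified sketch of the extraction; the full function has additional branches hidden by the truncated hunk (e.g. debug results nested under res.result):

// Returns printable stdout for a host event, or undefined when there is none.
const getStdOutValue = (hostEvent) => {
  const res = hostEvent?.event_data?.res;
  if (Array.isArray(res?.results)) {
    return res.results.join('\n'); // e.g. ['baz', 'bar'] -> 'baz\nbar'
  }
  return res?.stdout;
};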
View File

@@ -17,6 +17,7 @@ const hostEvent = {
msg: 'This is a debug message: 1',
stdout:
' total used free shared buff/cache available\nMem: 7973 3005 960 30 4007 4582\nSwap: 1023 0 1023',
stderr: 'problems',
cmd: ['free', '-m'],
stderr_lines: [],
stdout_lines: [
@@ -51,6 +52,7 @@ const jsonValue = `{
\"item\": \"1\",
\"msg\": \"This is a debug message: 1\",
\"stdout\": \" total used free shared buff/cache available\\nMem: 7973 3005 960 30 4007 4582\\nSwap: 1023 0 1023\",
\"stderr\": \"problems\",
\"cmd\": [
\"free\",
\"-m\"
@@ -169,7 +171,7 @@ describe('HostEventModal', () => {
handleTabClick(null, 1);
wrapper.update();
const codeEditor = wrapper.find('CodeEditor');
const codeEditor = wrapper.find('Tab[eventKey=1] CodeEditor');
expect(codeEditor.prop('mode')).toBe('javascript');
expect(codeEditor.prop('readOnly')).toBe(true);
expect(codeEditor.prop('value')).toEqual(jsonValue);
@@ -184,7 +186,7 @@ describe('HostEventModal', () => {
handleTabClick(null, 2);
wrapper.update();
const codeEditor = wrapper.find('CodeEditor');
const codeEditor = wrapper.find('Tab[eventKey=2] CodeEditor');
expect(codeEditor.prop('mode')).toBe('javascript');
expect(codeEditor.prop('readOnly')).toBe(true);
expect(codeEditor.prop('value')).toEqual(hostEvent.event_data.res.stdout);
@@ -195,7 +197,7 @@ describe('HostEventModal', () => {
...hostEvent,
event_data: {
res: {
stderr: '',
stderr: 'error content',
},
},
};
@@ -207,10 +209,10 @@ describe('HostEventModal', () => {
handleTabClick(null, 3);
wrapper.update();
const codeEditor = wrapper.find('CodeEditor');
const codeEditor = wrapper.find('Tab[eventKey=3] CodeEditor');
expect(codeEditor.prop('mode')).toBe('javascript');
expect(codeEditor.prop('readOnly')).toBe(true);
expect(codeEditor.prop('value')).toEqual(' ');
expect(codeEditor.prop('value')).toEqual('error content');
});
test('should pass onClose to Modal', () => {
@@ -226,7 +228,7 @@ describe('HostEventModal', () => {
const debugTaskAction = {
...hostEvent,
event_data: {
taskAction: 'debug',
task_action: 'debug',
res: {
result: {
stdout: 'foo bar',
@@ -242,7 +244,7 @@ describe('HostEventModal', () => {
handleTabClick(null, 2);
wrapper.update();
const codeEditor = wrapper.find('CodeEditor');
const codeEditor = wrapper.find('Tab[eventKey=2] CodeEditor');
expect(codeEditor.prop('mode')).toBe('javascript');
expect(codeEditor.prop('readOnly')).toBe(true);
expect(codeEditor.prop('value')).toEqual('foo bar');
@@ -252,7 +254,7 @@ describe('HostEventModal', () => {
const yumTaskAction = {
...hostEvent,
event_data: {
taskAction: 'yum',
task_action: 'yum',
res: {
results: ['baz', 'bar'],
},
@@ -266,9 +268,9 @@ describe('HostEventModal', () => {
handleTabClick(null, 2);
wrapper.update();
const codeEditor = wrapper.find('CodeEditor');
const codeEditor = wrapper.find('Tab[eventKey=2] CodeEditor');
expect(codeEditor.prop('mode')).toBe('javascript');
expect(codeEditor.prop('readOnly')).toBe(true);
expect(codeEditor.prop('value')).toEqual('baz');
expect(codeEditor.prop('value')).toEqual('baz\nbar');
});
});

View File

@@ -18,7 +18,7 @@ import ContentError from 'components/ContentError';
import ContentLoading from 'components/ContentLoading';
import ErrorDetail from 'components/ErrorDetail';
import StatusLabel from 'components/StatusLabel';
import { JobEventsAPI } from 'api';
import { JobsAPI } from 'api';
import { getJobModel, isJobRunning } from 'util/jobs';
import useRequest, { useDismissableError } from 'hooks/useRequest';
@@ -99,8 +99,6 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
const scrollHeight = useRef(0);
const history = useHistory();
const eventByUuidRequests = useRef([]);
const siblingRequests = useRef([]);
const numEventsRequests = useRef([]);
const fetchEventByUuid = async (uuid) => {
let promise = eventByUuidRequests.current[uuid];
@@ -113,60 +111,15 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
return data.results[0] || null;
};
const fetchNextSibling = async (parentEventId, counter) => {
const key = `${parentEventId}-${counter}`;
let promise = siblingRequests.current[key];
if (!promise) {
promise = JobEventsAPI.readChildren(parentEventId, {
page_size: 1,
order_by: 'counter',
counter__gt: counter,
});
siblingRequests.current[key] = promise;
}
const { data } = await promise;
siblingRequests.current[key] = null;
return data.results[0] || null;
};
const fetchNextRootNode = async (counter) => {
const { data } = await getJobModel(job.type).readEvents(job.id, {
page_size: 1,
order_by: 'counter',
counter__gt: counter,
parent_uuid: '',
});
return data.results[0] || null;
};
const fetchNumEvents = async (startCounter, endCounter) => {
if (endCounter <= startCounter + 1) {
return 0;
}
const key = `${startCounter}-${endCounter}`;
let promise = numEventsRequests.current[key];
if (!promise) {
const params = {
page_size: 1,
order_by: 'counter',
counter__gt: startCounter,
};
if (endCounter) {
params.counter__lt = endCounter;
}
promise = getJobModel(job.type).readEvents(job.id, params);
numEventsRequests.current[key] = promise;
}
const { data } = await promise;
numEventsRequests.current[key] = null;
return data.count || 0;
};
const fetchChildrenSummary = () => JobsAPI.readChildrenSummary(job.id);
const [jobStatus, setJobStatus] = useState(job.status ?? 'waiting');
const [forceFlatMode, setForceFlatMode] = useState(false);
const isFlatMode = isJobRunning(jobStatus) || location.search.length > 1;
const [isTreeReady, setIsTreeReady] = useState(false);
const [onReadyEvents, setOnReadyEvents] = useState([]);
const {
addEvents,
toggleNodeIsCollapsed,
@@ -181,11 +134,12 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
} = useJobEvents(
{
fetchEventByUuid,
fetchNextSibling,
fetchNextRootNode,
fetchNumEvents,
fetchChildrenSummary,
setForceFlatMode,
setJobTreeReady: () => setIsTreeReady(true),
},
isFlatMode
job.id,
isFlatMode || forceFlatMode
);
const [wsEvents, setWsEvents] = useState([]);
const [cssMap, setCssMap] = useState({});
@@ -203,6 +157,14 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
const [isMonitoringWebsocket, setIsMonitoringWebsocket] = useState(false);
const [lastScrollPosition, setLastScrollPosition] = useState(0);
useEffect(() => {
if (!isTreeReady || !onReadyEvents.length) {
return;
}
addEvents(onReadyEvents);
setOnReadyEvents([]);
}, [isTreeReady, onReadyEvents]); // eslint-disable-line react-hooks/exhaustive-deps
const totalNonCollapsedRows = Math.max(
remoteRowCount - getNumCollapsedEvents(),
0
@@ -216,13 +178,9 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
);
useEffect(() => {
const pendingRequests = [
...Object.values(eventByUuidRequests.current || {}),
...Object.values(siblingRequests.current || {}),
...Object.values(numEventsRequests.current || {}),
];
const pendingRequests = Object.values(eventByUuidRequests.current || {});
setHasContentLoading(true); // prevents "no content found" screen from flashing
Promise.all(pendingRequests).then(() => {
Promise.allSettled(pendingRequests).then(() => {
setRemoteRowCount(0);
clearLoadedEvents();
loadJobEvents();
@@ -412,7 +370,11 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
...newCssMap,
}));
const lastCounter = events[events.length - 1]?.counter || 50;
addEvents(events);
if (isTreeReady) {
addEvents(events);
} else {
setOnReadyEvents((prev) => prev.concat(events));
}
setHighestLoadedCounter(lastCounter);
setRemoteRowCount(count + countOffset);
} catch (err) {
@@ -707,7 +669,7 @@ function JobOutput({ job, eventRelatedSearchableKeys, eventSearchableKeys }) {
onScrollNext={handleScrollNext}
onScrollPrevious={handleScrollPrevious}
toggleExpandCollapseAll={handleExpandCollapseAll}
isFlatMode={isFlatMode}
isFlatMode={isFlatMode || forceFlatMode}
isTemplateJob={job.type === 'job'}
isAllCollapsed={isAllCollapsed}
/>

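The hook bootstrap is asynchronous, so events can arrive before the children summary does. The component parks them in onReadyEvents, and a useEffect flushes the backlog once isTreeReady flips. The buffering pattern, stripped down (this lives inside the component body):

const [isTreeReady, setIsTreeReady] = useState(false);
const [onReadyEvents, setOnReadyEvents] = useState([]);

useEffect(() => {
  if (!isTreeReady || !onReadyEvents.length) {
    return;
  }
  addEvents(onReadyEvents); // flush everything buffered during loading, in order
  setOnReadyEvents([]);
}, [isTreeReady, onReadyEvents]);

// At the fetch site, route incoming events to the tree or the buffer:
if (isTreeReady) {
  addEvents(events);
} else {
  setOnReadyEvents((prev) => prev.concat(events));
}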
View File

@@ -1,4 +1,3 @@
/* eslint-disable max-len */
import React from 'react';
import { act } from 'react-dom/test-utils';
import { JobsAPI, JobEventsAPI } from 'api';
@@ -26,14 +25,9 @@ const applyJobEventMock = (mockJobEvents) => {
};
};
JobsAPI.readEvents = jest.fn().mockImplementation(mockReadEvents);
JobEventsAPI.readChildren = jest.fn().mockResolvedValue({
JobsAPI.readChildrenSummary = jest.fn().mockResolvedValue({
data: {
results: [
{
counter: 20,
uuid: 'abc-020',
},
],
1: [0, 100],
},
});
};

View File

@@ -44,9 +44,9 @@ describe('JobOutputSearch', () => {
wrapper.find(searchBtn).simulate('click');
});
expect(wrapper.find('Search').prop('columns')).toHaveLength(3);
expect(wrapper.find('Search').prop('columns').at(0).name).toBe('Stdout');
expect(wrapper.find('Search').prop('columns').at(1).name).toBe('Event');
expect(wrapper.find('Search').prop('columns').at(2).name).toBe('Advanced');
expect(wrapper.find('Search').prop('columns')[0].name).toBe('Stdout');
expect(wrapper.find('Search').prop('columns')[1].name).toBe('Event');
expect(wrapper.find('Search').prop('columns')[2].name).toBe('Advanced');
expect(history.location.search).toEqual('?stdout__icontains=99');
});
test('Should not have Event key in search drop down for system job', () => {
@@ -70,8 +70,8 @@ describe('JobOutputSearch', () => {
}
);
expect(wrapper.find('Search').prop('columns')).toHaveLength(2);
expect(wrapper.find('Search').prop('columns').at(0).name).toBe('Stdout');
expect(wrapper.find('Search').prop('columns').at(1).name).toBe('Advanced');
expect(wrapper.find('Search').prop('columns')[0].name).toBe('Stdout');
expect(wrapper.find('Search').prop('columns')[1].name).toBe('Advanced');
});
test('Should not have Event key in search drop down for inventory update job', () => {
@@ -94,8 +94,9 @@ describe('JobOutputSearch', () => {
context: { router: { history } },
}
);
expect(wrapper.find('Search').prop('columns')).toHaveLength(2);
expect(wrapper.find('Search').prop('columns').at(0).name).toBe('Stdout');
expect(wrapper.find('Search').prop('columns').at(1).name).toBe('Advanced');
expect(wrapper.find('Search').prop('columns')[0].name).toBe('Stdout');
expect(wrapper.find('Search').prop('columns')[1].name).toBe('Advanced');
});
});

View File

@@ -4,7 +4,6 @@ export default styled.div`
display: flex;
&:hover {
background-color: white;
cursor: ${(props) => (props.isClickable ? 'pointer' : 'default')};
}

View File

@@ -11,16 +11,21 @@ const initialState = {
// events with parent events that aren't yet loaded.
// arrays indexed by parent uuid
eventsWithoutParents: {},
// object in the form { counter: {rowNumber: n, numChildren: m}} for parent nodes
childrenSummary: {},
// parent_uuid's for "meta" events that need to be injected into the tree to
// maintain tree integrity
metaEventParentUuid: {},
isAllCollapsed: false,
};
export const ADD_EVENTS = 'ADD_EVENTS';
export const TOGGLE_NODE_COLLAPSED = 'TOGGLE_NODE_COLLAPSED';
export const SET_EVENT_NUM_CHILDREN = 'SET_EVENT_NUM_CHILDREN';
export const CLEAR_EVENTS = 'CLEAR_EVENTS';
export const REBUILD_TREE = 'REBUILD_TREE';
export const TOGGLE_COLLAPSE_ALL = 'TOGGLE_COLLAPSE_ALL';
export const SET_CHILDREN_SUMMARY = 'SET_CHILDREN_SUMMARY';
export default function useJobEvents(callbacks, isFlatMode) {
export default function useJobEvents(callbacks, jobId, isFlatMode) {
const [actionQueue, setActionQueue] = useState([]);
const enqueueAction = (action) => {
setActionQueue((queue) => queue.concat(action));
@@ -42,6 +47,31 @@ export default function useJobEvents(callbacks, isFlatMode) {
});
}, [actionQueue]);
useEffect(() => {
if (isFlatMode) {
return;
}
callbacks
.fetchChildrenSummary()
.then((result) => {
if (result.data.event_processing_finished === false) {
callbacks.setForceFlatMode(true);
callbacks.setJobTreeReady();
return;
}
enqueueAction({
type: SET_CHILDREN_SUMMARY,
childrenSummary: result.data.children_summary,
metaEventParentUuid: result.data.meta_event_nested_uuid,
});
})
.catch(() => {
callbacks.setForceFlatMode(true);
callbacks.setJobTreeReady();
});
}, [jobId, isFlatMode]); // eslint-disable-line react-hooks/exhaustive-deps
return {
addEvents: (events) => dispatch({ type: ADD_EVENTS, events }),
getNodeByUuid: (uuid) => getNodeByUuid(state, uuid),
@@ -53,10 +83,14 @@ export default function useJobEvents(callbacks, isFlatMode) {
getNodeForRow: (rowIndex) => getNodeForRow(state, rowIndex),
getTotalNumChildren: (uuid) => {
const node = getNodeByUuid(state, uuid);
return getTotalNumChildren(node);
return getTotalNumChildren(node, state.childrenSummary);
},
getNumCollapsedEvents: () =>
state.tree.reduce((sum, node) => sum + getNumCollapsedChildren(node), 0),
state.tree.reduce(
(sum, node) =>
sum + getNumCollapsedChildren(node, state.childrenSummary),
0
),
getCounterForRow: (rowIndex) => getCounterForRow(state, rowIndex),
getEvent: (eventIndex) => getEvent(state, eventIndex),
clearLoadedEvents: () => dispatch({ type: CLEAR_EVENTS }),
@@ -74,12 +108,17 @@ export function jobEventsReducer(callbacks, isFlatMode, enqueueAction) {
return toggleCollapseAll(state, action.isCollapsed);
case TOGGLE_NODE_COLLAPSED:
return toggleNodeIsCollapsed(state, action.uuid);
case SET_EVENT_NUM_CHILDREN:
return setEventNumChildren(state, action.uuid, action.numChildren);
case CLEAR_EVENTS:
return initialState;
case REBUILD_TREE:
return rebuildTree(state);
case SET_CHILDREN_SUMMARY:
callbacks.setJobTreeReady();
return {
...state,
childrenSummary: action.childrenSummary || {},
metaEventParentUuid: action.metaEventParentUuid || {},
};
default:
throw new Error(`Unrecognized action: ${action.type}`);
}
@@ -100,6 +139,9 @@ export function jobEventsReducer(callbacks, isFlatMode, enqueueAction) {
throw new Error('Cannot add event; missing rowNumber');
}
const eventIndex = event.counter;
if (!event.parent_uuid && state.metaEventParentUuid[eventIndex]) {
event.parent_uuid = state.metaEventParentUuid[eventIndex];
}
if (state.events[eventIndex]) {
state.events[eventIndex] = event;
state = _gatherEventsForNewParent(state, event.uuid);
@@ -113,22 +155,21 @@ export function jobEventsReducer(callbacks, isFlatMode, enqueueAction) {
let isParentFound;
[state, isParentFound] = _addNestedLevelEvent(state, event);
if (!isParentFound) {
parentsToFetch[event.parent_uuid] = {
childCounter: event.counter,
childRowNumber: event.rowNumber,
};
parentsToFetch[event.parent_uuid] = true;
state = _addEventWithoutParent(state, event);
}
});
Object.keys(parentsToFetch).forEach(async (uuid) => {
const { childCounter, childRowNumber } = parentsToFetch[uuid];
const parent = await callbacks.fetchEventByUuid(uuid);
const numPrevSiblings = await callbacks.fetchNumEvents(
parent.counter,
childCounter
);
parent.rowNumber = childRowNumber - numPrevSiblings - 1;
if (!state.childrenSummary || !state.childrenSummary[parent.counter]) {
// eslint-disable-next-line no-console
console.error('No row number found for ', parent.counter);
return;
}
parent.rowNumber = state.childrenSummary[parent.counter].rowNumber;
enqueueAction({
type: ADD_EVENTS,
events: [parent],
@@ -180,7 +221,6 @@ export function jobEventsReducer(callbacks, isFlatMode, enqueueAction) {
const index = parent.children.findIndex(
(node) => node.eventIndex >= eventIndex
);
const length = parent.children.length + 1;
if (index === -1) {
state = updateNodeByUuid(state, event.parent_uuid, (node) => {
node.children.push(newNode);
@@ -206,9 +246,6 @@ export function jobEventsReducer(callbacks, isFlatMode, enqueueAction) {
},
event.uuid
);
if (length === 1) {
_fetchNumChildren(state, parent);
}
return [state, true];
}
@@ -231,45 +268,6 @@ export function jobEventsReducer(callbacks, isFlatMode, enqueueAction) {
};
}
async function _fetchNumChildren(state, node) {
const event = state.events[node.eventIndex];
if (!event) {
throw new Error(
`Cannot fetch numChildren; event ${node.eventIndex} not found`
);
}
const sibling = await _getNextSibling(state, event);
const numChildren = await callbacks.fetchNumEvents(
event.counter,
sibling?.counter
);
enqueueAction({
type: SET_EVENT_NUM_CHILDREN,
uuid: event.uuid,
numChildren,
});
if (sibling) {
sibling.rowNumber = event.rowNumber + numChildren + 1;
enqueueAction({
type: ADD_EVENTS,
events: [sibling],
});
}
}
async function _getNextSibling(state, event) {
if (!event.parent_uuid) {
return callbacks.fetchNextRootNode(event.counter);
}
const parentNode = getNodeByUuid(state, event.parent_uuid);
const parent = state.events[parentNode.eventIndex];
const sibling = await callbacks.fetchNextSibling(parent.id, event.counter);
if (!sibling) {
return _getNextSibling(state, parent);
}
return sibling;
}
function _gatherEventsForNewParent(state, parentUuid) {
if (!state.eventsWithoutParents[parentUuid]) {
return state;
@@ -303,8 +301,13 @@ function getEventForRow(state, rowIndex) {
return null;
}
function getNodeForRow(state, rowToFind) {
const { node } = _getNodeForRow(state, rowToFind, state.tree);
function getNodeForRow(state, rowToFind, childrenSummary) {
const { node } = _getNodeForRow(
state,
rowToFind,
state.tree,
childrenSummary
);
return node;
}
@@ -329,8 +332,14 @@ function _getNodeForRow(state, rowToFind, nodes) {
if (event.rowNumber === rowToFind) {
return { node };
}
const totalNodeDescendants = getTotalNumChildren(node);
const numCollapsedChildren = getNumCollapsedChildren(node);
const totalNodeDescendants = getTotalNumChildren(
node,
state.childrenSummary
);
const numCollapsedChildren = getNumCollapsedChildren(
node,
state.childrenSummary
);
const nodeChildren = totalNodeDescendants - numCollapsedChildren;
if (event.rowNumber + nodeChildren >= rowToFind) {
// requested row is in children/descendants
@@ -370,8 +379,8 @@ function _getNodeForRow(state, rowToFind, nodes) {
function _getNodeInChildren(state, node, rowToFind) {
const event = state.events[node.eventIndex];
const firstChild = state.events[node.children[0].eventIndex];
if (rowToFind < firstChild.rowNumber) {
const firstChild = state.events[node.children[0]?.eventIndex];
if (!firstChild || rowToFind < firstChild.rowNumber) {
const rowDiff = rowToFind - event.rowNumber;
return {
node: null,
@@ -391,25 +400,25 @@ function _getLastDescendantNode(nodes) {
return lastDescendant;
}
function getTotalNumChildren(node) {
if (typeof node.numChildren !== 'undefined') {
return node.numChildren;
function getTotalNumChildren(node, childrenSummary) {
if (childrenSummary[node.eventIndex]) {
return childrenSummary[node.eventIndex].numChildren;
}
let estimatedNumChildren = node.children.length;
node.children.forEach((child) => {
estimatedNumChildren += getTotalNumChildren(child);
estimatedNumChildren += getTotalNumChildren(child, childrenSummary);
});
return estimatedNumChildren;
}
function getNumCollapsedChildren(node) {
function getNumCollapsedChildren(node, childrenSummary) {
if (node.isCollapsed) {
return getTotalNumChildren(node);
return getTotalNumChildren(node, childrenSummary);
}
let sum = 0;
node.children.forEach((child) => {
sum += getNumCollapsedChildren(child);
sum += getNumCollapsedChildren(child, childrenSummary);
});
return sum;
}
@@ -514,16 +523,6 @@ function _getNodeByIndex(arr, index) {
return _getNodeByIndex(arr[i - 1].children, index);
}
function setEventNumChildren(state, uuid, numChildren) {
if (!state.uuidMap[uuid]) {
return state;
}
return updateNodeByUuid(state, uuid, (node) => ({
...node,
numChildren,
}));
}
function getEvent(state, eventIndex) {
const event = state.events[eventIndex];
if (event) {

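The reducer now leans on a single upfront children_summary instead of per-node sibling and count queries. Inferred from the mocks in the tests below (a reading of this diff, not documented API): the response maps parent event counters to their precomputed row numbers and descendant counts, plus parent uuids for "meta" events that arrive without one:

// Assumed response shape for JobsAPI.readChildrenSummary, per the test mocks.
const exampleResponse = {
  data: {
    event_processing_finished: true, // false forces flat mode
    children_summary: {
      // counter: { rowNumber, numChildren } for each parent event
      1: { rowNumber: 0, numChildren: 52 },
      2: { rowNumber: 1, numChildren: 3 },
    },
    meta_event_nested_uuid: {
      // counter -> parent_uuid to re-parent "meta" events in the tree
      4: 'abc-002',
    },
  },
};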
View File

@@ -8,24 +8,22 @@ import useJobEvents, {
SET_EVENT_NUM_CHILDREN,
} from './useJobEvents';
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
function Child() {
return <div />;
}
function HookTest({
fetchEventByUuid = () => {},
fetchNextSibling = () => {},
fetchNextRootNode = () => {},
fetchNumEvents = () => {},
fetchChildrenSummary = () => {},
setForceFlatMode = () => {},
setJobTreeReady = () => {},
isFlatMode = false,
}) {
const hookFuncs = useJobEvents(
{
fetchEventByUuid,
fetchNextSibling,
fetchNextRootNode,
fetchNumEvents,
fetchChildrenSummary,
setForceFlatMode,
setJobTreeReady,
},
isFlatMode
);
@@ -153,19 +151,19 @@ describe('useJobEvents', () => {
beforeEach(() => {
callbacks = {
fetchEventByUuid: jest.fn(),
fetchNextSibling: jest.fn(),
fetchNextRootNode: jest.fn(),
fetchNumEvents: jest.fn(),
fetchChildrenSummary: jest.fn(),
setForceFlatMode: jest.fn(),
setJobTreeReady: jest.fn(),
};
enqueueAction = jest.fn();
callbacks.fetchNextSibling.mockResolvedValue(eventsList[9]);
callbacks.fetchNextRootNode.mockResolvedValue(eventsList[9]);
reducer = jobEventsReducer(callbacks, false, enqueueAction);
emptyState = {
tree: [],
events: {},
uuidMap: {},
eventsWithoutParents: {},
childrenSummary: {},
metaEventParentUuid: {},
eventGaps: [],
isAllCollapsed: false,
};
@@ -380,10 +378,18 @@ describe('useJobEvents', () => {
callbacks.fetchEventByUuid.mockResolvedValue({
counter: 10,
});
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: eventsList,
});
const state = reducer(
{
...emptyState,
childrenSummary: {
10: [9, 2],
},
},
{
type: ADD_EVENTS,
events: eventsList,
}
);
const newEvents = [
{
@@ -404,10 +410,18 @@ describe('useJobEvents', () => {
callbacks.fetchEventByUuid.mockResolvedValue({
counter: 10,
});
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: eventsList,
});
const state = reducer(
{
...emptyState,
childrenSummary: {
10: [9, 2],
},
},
{
type: ADD_EVENTS,
events: eventsList,
}
);
const newEvents = [
{
@@ -437,10 +451,18 @@ describe('useJobEvents', () => {
callbacks.fetchEventByUuid.mockResolvedValue({
counter: 10,
});
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: eventsList,
});
const state = reducer(
{
...emptyState,
childrenSummary: {
10: [9, 1],
},
},
{
type: ADD_EVENTS,
events: eventsList,
}
);
const newEvents = [
{
@@ -471,10 +493,18 @@ describe('useJobEvents', () => {
callbacks.fetchEventByUuid.mockResolvedValue({
counter: 10,
});
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: eventsList,
});
const state = reducer(
{
...emptyState,
childrenSummary: {
10: [9, 2],
},
},
{
type: ADD_EVENTS,
events: eventsList,
}
);
const newEvents = [
{
@@ -561,10 +591,19 @@ describe('useJobEvents', () => {
event_level: 2,
parent_uuid: 'abc-002',
};
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: [event3],
});
const state = reducer(
{
...emptyState,
childrenSummary: {
1: [0, 3],
2: [1, 2],
},
},
{
type: ADD_EVENTS,
events: [event3],
}
);
expect(callbacks.fetchEventByUuid).toHaveBeenCalledWith('abc-002');
const event2 = {
@@ -741,152 +780,49 @@ describe('useJobEvents', () => {
});
});
describe('fetchNumChildren', () => {
test('should find child count for root node', async () => {
callbacks.fetchNextRootNode.mockResolvedValue({
id: 121,
counter: 21,
rowNumber: 20,
uuid: 'abc-021',
event_level: 0,
parent_uuid: '',
});
callbacks.fetchNumEvents.mockResolvedValue(19);
reducer(emptyState, {
type: ADD_EVENTS,
events: [eventsList[0], eventsList[1]],
});
expect(callbacks.fetchNextSibling).toHaveBeenCalledTimes(0);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledWith(1);
await sleep(0);
expect(callbacks.fetchNumEvents).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNumEvents).toHaveBeenCalledWith(1, 21);
expect(enqueueAction).toHaveBeenCalledWith({
type: SET_EVENT_NUM_CHILDREN,
uuid: 'abc-001',
numChildren: 19,
});
});
test('should find child count for last root node', async () => {
callbacks.fetchNextRootNode.mockResolvedValue(null);
callbacks.fetchNumEvents.mockResolvedValue(19);
reducer(emptyState, {
type: ADD_EVENTS,
events: [eventsList[0], eventsList[1]],
});
expect(callbacks.fetchNextSibling).toHaveBeenCalledTimes(0);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledWith(1);
await sleep(0);
expect(callbacks.fetchNumEvents).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNumEvents).toHaveBeenCalledWith(1, undefined);
expect(enqueueAction).toHaveBeenCalledWith({
type: SET_EVENT_NUM_CHILDREN,
uuid: 'abc-001',
numChildren: 19,
});
});
test('should find child count for nested node', async () => {
const state = {
events: {
1: eventsList[0],
2: eventsList[1],
test('should nest "meta" event based on given parent uuid', () => {
const state = reducer(
{
...emptyState,
childrenSummary: {
2: { rowNumber: 1, numChildren: 3 },
},
tree: [
metaEventParentUuid: {
4: 'abc-002',
},
},
{
type: ADD_EVENTS,
events: [...eventsList.slice(0, 3)],
}
);
const state2 = reducer(state, {
type: ADD_EVENTS,
events: [
{
counter: 4,
rowNumber: 3,
parent_uuid: '',
},
],
});
expect(state2.tree).toEqual([
{
eventIndex: 1,
isCollapsed: false,
children: [
{
children: [{ children: [], eventIndex: 2, isCollapsed: false }],
eventIndex: 1,
eventIndex: 2,
isCollapsed: false,
children: [
{ eventIndex: 3, isCollapsed: false, children: [] },
{ eventIndex: 4, isCollapsed: false, children: [] },
],
},
],
uuidMap: {
'abc-001': 1,
'abc-002': 2,
},
eventsWithoutParents: {},
};
callbacks.fetchNextSibling.mockResolvedValue({
id: 20,
counter: 20,
rowNumber: 19,
uuid: 'abc-020',
event_level: 1,
parent_uuid: 'abc-001',
});
callbacks.fetchNumEvents.mockResolvedValue(18);
reducer(state, {
type: ADD_EVENTS,
events: [eventsList[2]],
});
expect(callbacks.fetchNextSibling).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNextSibling).toHaveBeenCalledWith(101, 2);
await sleep(0);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledTimes(0);
expect(callbacks.fetchNumEvents).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNumEvents).toHaveBeenCalledWith(2, 20);
expect(enqueueAction).toHaveBeenCalledWith({
type: SET_EVENT_NUM_CHILDREN,
uuid: 'abc-002',
numChildren: 18,
});
});
test('should find child count for nested node, last sibling', async () => {
const state = {
events: {
1: eventsList[0],
2: eventsList[1],
},
tree: [
{
children: [{ children: [], eventIndex: 2, isCollapsed: false }],
eventIndex: 1,
isCollapsed: false,
},
],
uuidMap: {
'abc-001': 1,
'abc-002': 2,
},
eventsWithoutParents: {},
};
callbacks.fetchNextSibling.mockResolvedValue(null);
callbacks.fetchNextRootNode.mockResolvedValue({
id: 121,
counter: 21,
rowNumber: 20,
uuid: 'abc-021',
event_level: 0,
parent_uuid: '',
});
callbacks.fetchNumEvents.mockResolvedValue(19);
reducer(state, {
type: ADD_EVENTS,
events: [eventsList[2]],
});
expect(callbacks.fetchNextSibling).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNextSibling).toHaveBeenCalledWith(101, 2);
await sleep(0);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNextRootNode).toHaveBeenCalledWith(1);
await sleep(0);
expect(callbacks.fetchNumEvents).toHaveBeenCalledTimes(1);
expect(callbacks.fetchNumEvents).toHaveBeenCalledWith(2, 21);
expect(enqueueAction).toHaveBeenCalledWith({
type: SET_EVENT_NUM_CHILDREN,
uuid: 'abc-002',
numChildren: 19,
});
});
},
]);
});
});
@@ -968,40 +904,6 @@ describe('useJobEvents', () => {
});
});
describe('setEventNumChildren', () => {
test('should set number of children on root node', () => {
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: eventsList,
});
expect(state.tree[0].numChildren).toEqual(undefined);
const { tree } = reducer(state, {
type: SET_EVENT_NUM_CHILDREN,
uuid: 'abc-001',
numChildren: 8,
});
expect(tree[0].numChildren).toEqual(8);
});
test('should set number of children on nested node', () => {
const state = reducer(emptyState, {
type: ADD_EVENTS,
events: eventsList,
});
expect(state.tree[0].numChildren).toEqual(undefined);
const { tree } = reducer(state, {
type: SET_EVENT_NUM_CHILDREN,
uuid: 'abc-006',
numChildren: 3,
});
expect(tree[0].children[1].numChildren).toEqual(3);
});
});
describe('getNodeForRow', () => {
let wrapper;
beforeEach(() => {
@@ -1266,16 +1168,19 @@ describe('useJobEvents', () => {
});
test('should get node after gap in loaded children', async () => {
const fetchNumEvents = jest.fn();
fetchNumEvents.mockImplementation((index) => {
const counts = {
1: 52,
2: 3,
6: 47,
};
return Promise.resolve(counts[index]);
const fetchChildrenSummary = jest.fn();
fetchChildrenSummary.mockResolvedValue({
data: {
children_summary: {
1: { rowNumber: 0, numChildren: 52 },
2: { rowNumber: 1, numChildren: 3 },
6: { rowNumber: 5, numChildren: 47 },
},
meta_event_nested_uuid: {},
},
});
wrapper = mount(<HookTest fetchNumEvents={fetchNumEvents} />);
wrapper = mount(<HookTest fetchChildrenSummary={fetchChildrenSummary} />);
const laterEvents = [
{
id: 151,
@@ -1424,13 +1329,12 @@ describe('useJobEvents', () => {
});
test('should return estimated counter when node is non-loaded child', async () => {
callbacks.fetchNumEvents.mockImplementation((counter) => {
const children = {
1: 28,
2: 3,
6: 23,
};
return children[counter];
callbacks.fetchChildrenSummary.mockResolvedValue({
data: {
1: { rowNumber: 0, numChildren: 28 },
2: { rowNumber: 1, numChildren: 3 },
6: { rowNumber: 5, numChildren: 23 },
},
});
const wrapper = mount(<HookTest {...callbacks} />);
wrapper.update();
@@ -1463,13 +1367,15 @@ describe('useJobEvents', () => {
});
test('should estimate counter after skipping collapsed subtree', async () => {
callbacks.fetchNumEvents.mockImplementation((counter) => {
const children = {
1: 85,
2: 66,
69: 17,
};
return children[counter];
callbacks.fetchChildrenSummary.mockResolvedValue({
data: {
children_summary: {
1: { rowNumber: 0, numChildren: 85 },
2: { rowNumber: 1, numChildren: 66 },
69: { rowNumber: 68, numChildren: 17 },
},
meta_event_nested_uuid: {},
},
});
const wrapper = mount(<HookTest {...callbacks} />);
await act(async () => {
@@ -1497,12 +1403,14 @@ describe('useJobEvents', () => {
});
test('should estimate counter in gap between loaded events', async () => {
callbacks.fetchNumEvents.mockImplementation(
(counter) =>
({
1: 30,
}[counter])
);
callbacks.fetchChildrenSummary.mockResolvedValue({
data: {
children_summary: {
1: { rowNumber: 0, numChildren: 30 },
},
meta_event_nested_uuid: {},
},
});
const wrapper = mount(<HookTest {...callbacks} />);
await act(async () => {
wrapper.find('#test').prop('addEvents')([
@@ -1556,12 +1464,14 @@ describe('useJobEvents', () => {
});
test('should estimate counter in gap before loaded sibling events', async () => {
callbacks.fetchNumEvents.mockImplementation(
(counter) =>
({
1: 30,
}[counter])
);
callbacks.fetchChildrenSummary.mockResolvedValue({
data: {
children_summary: {
1: { rowNumber: 0, numChildren: 30 },
},
meta_event_nested_uuid: {},
},
});
const wrapper = mount(<HookTest {...callbacks} />);
await act(async () => {
wrapper.find('#test').prop('addEvents')([
@@ -1599,12 +1509,14 @@ describe('useJobEvents', () => {
});
test('should get counter for node between unloaded siblings', async () => {
callbacks.fetchNumEvents.mockImplementation(
(counter) =>
({
1: 30,
}[counter])
);
callbacks.fetchChildrenSummary.mockResolvedValue({
data: {
children_summary: {
1: { rowNumber: 0, numChildren: 30 },
},
meta_event_nested_uuid: {},
},
});
const wrapper = mount(<HookTest {...callbacks} />);
await act(async () => {
wrapper.find('#test').prop('addEvents')([

View File

@@ -38,7 +38,7 @@ function NotificationTemplate({ setBreadcrumb }) {
setBreadcrumb(detail.data);
return {
template: detail.data,
defaultMessages: options.data.actions.POST.messages,
defaultMessages: options.data.actions?.POST?.messages,
};
}, [templateId, setBreadcrumb]),
{ template: null, defaultMessages: null }
@@ -53,7 +53,7 @@ function NotificationTemplate({ setBreadcrumb }) {
<PageSection>
<Card>
<ContentError error={error}>
{error.response.status === 404 && (
{error.response?.status === 404 && (
<span>
{t`Notification Template not found.`}{' '}
<Link to="/notification_templates">

View File

@@ -99,7 +99,7 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
);
const { error, dismissError } = useDismissableError(deleteError || testError);
const typeMessageDefaults = defaultMessages[template.notification_type];
const typeMessageDefaults = defaultMessages?.[template?.notification_type];
return (
<CardBody>
<DetailList gutter="sm">
@@ -384,13 +384,14 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
date={modified}
user={summary_fields?.modified_by}
/>
{hasCustomMessages(messages, typeMessageDefaults) && (
{typeMessageDefaults &&
hasCustomMessages(messages, typeMessageDefaults) ? (
<CustomMessageDetails
messages={messages}
defaults={typeMessageDefaults}
type={template.notification_type}
/>
)}
) : null}
</DetailList>
<CardActionsRow>
{summary_fields.user_capabilities?.edit && (
@@ -447,54 +448,54 @@ function CustomMessageDetails({ messages, defaults, type }) {
{showMessages && (
<CodeDetail
label={t`Start message`}
value={messages.started.message || defaults.started.message}
value={messages.started?.message || defaults.started?.message}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
{showBodies && (
<CodeDetail
label={t`Start message body`}
value={messages.started.body || defaults.started.body}
value={messages.started?.body || defaults.started?.body}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}
{showMessages && (
<CodeDetail
label={t`Success message`}
value={messages.success.message || defaults.success.message}
value={messages.success?.message || defaults.success?.message}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
{showBodies && (
<CodeDetail
label={t`Success message body`}
value={messages.success.body || defaults.success.body}
value={messages.success?.body || defaults.success?.body}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}
{showMessages && (
<CodeDetail
label={t`Error message`}
value={messages.error.message || defaults.error.message}
value={messages.error?.message || defaults.error?.message}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
{showBodies && (
<CodeDetail
label={t`Error message body`}
value={messages.error.body || defaults.error.body}
value={messages.error?.body || defaults.error?.body}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}
@@ -506,7 +507,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.approved.message
}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
@@ -518,7 +519,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.approved.body
}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}
@@ -530,7 +531,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.denied.message
}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
@@ -542,7 +543,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.denied.body
}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}
@@ -554,7 +555,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.running.message
}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
@@ -566,7 +567,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.running.body
}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}
@@ -578,7 +579,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.timed_out.message
}
mode="jinja2"
rows="2"
rows={2}
fullWidth
/>
)}
@@ -590,7 +591,7 @@ function CustomMessageDetails({ messages, defaults, type }) {
defaults.workflow_approval.timed_out.body
}
mode="jinja2"
rows="6"
rows={6}
fullWidth
/>
)}

View File

@@ -70,7 +70,11 @@ const mockTemplate = {
describe('<NotificationTemplateDetail />', () => {
let wrapper;
beforeEach(async () => {
afterEach(() => {
jest.clearAllMocks();
});
test('should render Details', async () => {
await act(async () => {
wrapper = mountWithContexts(
<NotificationTemplateDetail
@@ -80,13 +84,29 @@ describe('<NotificationTemplateDetail />', () => {
);
});
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
function assertDetail(label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);
}
assertDetail('Name', mockTemplate.name);
assertDetail('Description', mockTemplate.description);
expect(
wrapper
.find('Detail[label="Email Options"]')
.containsAllMatchingElements([<li>Use SSL</li>, <li>Use TLS</li>])
).toEqual(true);
});
afterEach(() => {
jest.clearAllMocks();
});
test('should render Details', () => {
test('should render Details when defaultMessages is missing', async () => {
await act(async () => {
wrapper = mountWithContexts(
<NotificationTemplateDetail
template={mockTemplate}
defaultMessages={null}
/>
);
});
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
function assertDetail(label, value) {
expect(wrapper.find(`Detail[label="${label}"] dt`).text()).toBe(label);
expect(wrapper.find(`Detail[label="${label}"] dd`).text()).toBe(value);

View File

@@ -30,10 +30,10 @@ function isCustomized(message, defaultMessage) {
if (!message) {
return false;
}
if (message.message && message.message !== defaultMessage.message) {
if (message?.message !== defaultMessage?.message) {
return true;
}
if (message.body && message.body !== defaultMessage.body) {
if (message?.body !== defaultMessage?.body) {
return true;
}
return false;

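Note the semantic shift: the old guards (message.message && ..., message.body && ...) never flagged an unset custom field, while the new optional-chained comparison reports a mismatch in either direction and stays safe when defaultMessage is undefined. A worked example of the difference:

const message = { message: 'hi' }; // user set no custom body
const defaultMessage = { message: 'hi', body: 'default body' };

// Old check: message.body is falsy, so the body branch never fired -> false.
// New check: undefined !== 'default body' -> treated as customized -> true.
isCustomized(message, defaultMessage);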
View File

@@ -27,7 +27,9 @@ import useRequest, { useDismissableError } from 'hooks/useRequest';
import { relatedResourceDeleteRequests } from 'util/getRelatedResourceDeleteDetails';
import StatusLabel from 'components/StatusLabel';
import { formatDateString } from 'util/dates';
import Popover from 'components/Popover';
import ProjectSyncButton from '../shared/ProjectSyncButton';
import ProjectHelpTextStrings from '../shared/Project.helptext';
import useWsProject from './useWsProject';
const Label = styled.span`
@@ -57,7 +59,7 @@ function ProjectDetail({ project }) {
summary_fields,
} = useWsProject(project);
const history = useHistory();
const projectHelpText = ProjectHelpTextStrings();
const {
request: deleteProject,
isLoading,
@@ -82,29 +84,37 @@ function ProjectDetail({ project }) {
optionsList = (
<TextList component={TextListVariants.ul}>
{scm_clean && (
<TextListItem
component={TextListItemVariants.li}
>{t`Discard local changes before syncing`}</TextListItem>
<TextListItem component={TextListItemVariants.li}>
{t`Discard local changes before syncing`}
<Popover content={projectHelpText.options.clean} />
</TextListItem>
)}
{scm_delete_on_update && (
<TextListItem
component={TextListItemVariants.li}
>{t`Delete the project before syncing`}</TextListItem>
<TextListItem component={TextListItemVariants.li}>
{t`Delete the project before syncing`}{' '}
<Popover
content={projectHelpText.options.delete}
id="scm-delete-on-update"
/>
</TextListItem>
)}
{scm_track_submodules && (
<TextListItem
component={TextListItemVariants.li}
>{t`Track submodules latest commit on branch`}</TextListItem>
<TextListItem component={TextListItemVariants.li}>
{t`Track submodules latest commit on branch`}{' '}
<Popover content={projectHelpText.options.trackSubModules} />
</TextListItem>
)}
{scm_update_on_launch && (
<TextListItem
component={TextListItemVariants.li}
>{t`Update revision on job launch`}</TextListItem>
<TextListItem component={TextListItemVariants.li}>
{t`Update revision on job launch`}{' '}
<Popover content={projectHelpText.options.updateOnLaunch} />
</TextListItem>
)}
{allow_override && (
<TextListItem
component={TextListItemVariants.li}
>{t`Allow branch override`}</TextListItem>
<TextListItem component={TextListItemVariants.li}>
{t`Allow branch override`}{' '}
<Popover content={projectHelpText.options.allowBranchOverride} />
</TextListItem>
)}
</TextList>
);
@@ -134,7 +144,10 @@ function ProjectDetail({ project }) {
} else if (summary_fields?.last_job) {
job = summary_fields.last_job;
}
const getSourceControlUrlHelpText = () =>
scm_type === 'git'
? projectHelpText.githubSourceControlUrl
: projectHelpText.svnSourceControlUrl;
return (
<CardBody>
<DetailList gutter="sm">
@@ -197,9 +210,25 @@ function ProjectDetail({ project }) {
}
alwaysVisible
/>
<Detail label={t`Source Control URL`} value={scm_url} />
<Detail label={t`Source Control Branch`} value={scm_branch} />
<Detail label={t`Source Control Refspec`} value={scm_refspec} />
<Detail
helpText={
scm_type === 'git' || scm_type === 'svn'
? getSourceControlUrlHelpText()
: ''
}
label={t`Source Control URL`}
value={scm_url}
/>
<Detail
helpText={projectHelpText.branchFormField}
label={t`Source Control Branch`}
value={scm_branch}
/>
<Detail
helpText={projectHelpText.sourceControlRefspec}
label={t`Source Control Refspec`}
value={scm_refspec}
/>
{summary_fields.credential && (
<Detail
label={t`Source Control Credential`}
@@ -217,16 +246,25 @@ function ProjectDetail({ project }) {
value={`${scm_update_cache_timeout} ${t`Seconds`}`}
/>
<ExecutionEnvironmentDetail
helpText={projectHelpText.executionEnvironment}
virtualEnvironment={custom_virtualenv}
executionEnvironment={summary_fields?.default_environment}
isDefaultEnvironment
/>
<Config>
{({ project_base_dir }) => (
<Detail label={t`Project Base Path`} value={project_base_dir} />
<Detail
helpText={projectHelpText.projectBasePath}
label={t`Project Base Path`}
value={project_base_dir}
/>
)}
</Config>
<Detail label={t`Playbook Directory`} value={local_path} />
<Detail
helpText={projectHelpText.projectLocalPath}
label={t`Playbook Directory`}
value={local_path}
/>
<UserDateDetail
label={t`Created`}
date={created}

View File

@@ -20,6 +20,12 @@ jest.mock('react-router-dom', () => ({
url: '/projects/1/details',
}),
}));
jest.mock('hooks/useBrandName', () => ({
__esModule: true,
default: () => ({
current: 'AWX',
}),
}));
describe('<ProjectDetail />', () => {
const mockProject = {
id: 1,
@@ -126,16 +132,18 @@ describe('<ProjectDetail />', () => {
'2019-10-10T01:15:06.780490Z'
);
expect(
wrapper
.find('Detail[label="Enabled Options"]')
.containsAllMatchingElements([
<li>Discard local changes before syncing</li>,
<li>Delete the project before syncing</li>,
<li>Track submodules latest commit on branch</li>,
<li>Update revision on job launch</li>,
<li>Allow branch override</li>,
])
).toEqual(true);
wrapper.find('Detail[label="Enabled Options"]').find('li')
).toHaveLength(5);
const options = [
'Discard local changes before syncing',
'Delete the project before syncing',
'Track submodules latest commit on branch',
'Update revision on job launch',
'Allow branch override',
];
wrapper.find('li').forEach((item, index) => {
  expect(item.text()).toContain(options[index]);
});
});
test('should hide options label when all project options return false', () => {
@@ -237,7 +245,7 @@ describe('<ProjectDetail />', () => {
expect(history.location.pathname).toEqual('/projects/1/edit');
});
test('sync button should call api to syn project', async () => {
test('sync button should call api to sync project', async () => {
ProjectsAPI.readSync.mockResolvedValue({ data: { can_update: true } });
const wrapper = mountWithContexts(<ProjectDetail project={mockProject} />);
await act(() =>

View File

@@ -6,7 +6,12 @@ import { mountWithContexts } from '../../../../testUtils/enzymeHelpers';
import ProjectsListItem from './ProjectListItem';
jest.mock('../../../api/models/Projects');
jest.mock('hooks/useBrandName', () => ({
__esModule: true,
default: () => ({
current: 'AWX',
}),
}));
describe('<ProjectsListItem />', () => {
test('launch button shown to users with start capabilities', () => {
const wrapper = mountWithContexts(

Some files were not shown because too many files have changed in this diff.