Compare commits

80 Commits

Author SHA1 Message Date
Marliana Lara
4c9d028a35 Disable checkbox while job is running in project and inventory source lists (#11841) 2022-03-08 13:04:35 -05:00
Shane McDonald
123a3a22c9 Merge pull request #11859 from shanemcd/dev-env-test
Add a CI check for the development environment
2022-03-08 11:12:45 -05:00
Tiago Góes
82d91f8dbd Merge pull request #11830 from marshmalien/fix-duplicate-keys-subscription-modal
Add unique row id to subscription modal list items
2022-03-08 11:48:58 -03:00
Shane McDonald
f04d7733bb Add a CI check for the development environment 2022-03-08 09:00:30 -05:00
Shane McDonald
b2fe1c46ee Fix playbook error when files do not exist.
I was seeing "Failed to template loop_control.label: 'dict object' has no attribute 'path'"
2022-03-08 08:18:05 -05:00
Shane McDonald
4450b11e61 Merge pull request #11844 from AlanCoding/shane_forward
Adopt changes to AWX_ISOLATION_SHOW_PATHS for trust store
2022-03-07 16:28:42 -05:00
Shane McDonald
9f021b780c Move default show paths to production.py
This breaks the dev env
2022-03-07 16:08:58 -05:00
Shane McDonald
7df66eff5e Merge pull request #11855 from Spredzy/addpackaging
requirements: Add packaging deps following runner upgrade
2022-03-07 15:23:19 -05:00
Yanis Guenane
6e5cde0b05 requirements: Add packaging deps following runner upgrade 2022-03-07 20:51:11 +01:00
Marliana Lara
a65948de69 Add unique row id to subscription modal list items 2022-03-07 13:31:03 -05:00
Marliana Lara
0d0a8fdc9a Merge pull request #11850 from marshmalien/11626-hide-user-only-access-roles
Remove user_only roles from User and Team permission modal
2022-03-07 12:12:31 -05:00
Shane McDonald
a5b888c193 Add default container mounts to AWX_ISOLATION_SHOW_PATHS 2022-03-07 11:45:23 -05:00
Jeff Bradberry
32cc8e1a63 Merge pull request #11845 from jbradberry/awxkit-import-role-precedence
Expand out the early membership role assignment
2022-03-07 11:21:48 -05:00
Jeff Bradberry
69ea456cf6 Expand out the early membership role assignment
The Member role can derive from e.g. the Org Admin role, so basically
all organization and team roles should be assigned first, so that RBAC
conditions are met when assigning later roles.
2022-03-07 09:30:10 -05:00
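The ordering idea in 69ea456cf6 can be sketched as a sort key: assign organization and team roles first so RBAC conditions are already satisfied when later roles are assigned. The data shapes below are illustrative, not awxkit's actual structures.

```python
# Hypothetical sketch of the assignment-ordering idea in 69ea456cf6:
# organization and team roles sort ahead of everything else.
def assignment_order(assignment):
    # 0 sorts before 1, so org/team roles are applied first
    return 0 if assignment["resource"] in ("organization", "team") else 1

assignments = [
    {"resource": "job_template", "role": "admin"},
    {"resource": "organization", "role": "member"},
]
ordered = sorted(assignments, key=assignment_order)
```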
Alan Rominger
e02e91adaa Merge pull request #11837 from AlanCoding/thread_key_error
Move model and settings operations out of threaded code
2022-03-05 14:55:13 -05:00
Alan Rominger
264c508c80 Move model and settings operations out of threaded code
This avoids referencing settings inside threads,
  which is known to create problems when caches expire
  and leads to KeyError in environments under heavy load
2022-03-04 15:31:12 -05:00
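The pattern behind 264c508c80 is to resolve settings-derived values once, in the main thread, and hand plain values to worker threads instead of letting each thread touch the settings object. A minimal sketch with hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: the worker receives plain values resolved up front,
# so it never reads settings inside the thread where a cache expiry
# could raise KeyError.
def fetch_with_timeout(url, timeout):
    return (url, timeout)

def run_all(urls, settings):
    timeout = settings["DEFAULT_TIMEOUT"]  # resolved once, in the main thread
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(fetch_with_timeout, u, timeout) for u in urls]
        return [f.result() for f in futures]
```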
Kersom
c6209df1e0 Api issue float (#11757)
* Fix integer/float errors in survey

* Add SURVEY_TYPE_MAPPING to constants

Add SURVEY_TYPE_MAPPING to constants, and replace usage in a couple of
files.

Co-authored-by: Alexander Komarov <akomarov.me@gmail.com>
2022-03-04 14:03:17 -05:00
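The fix in #11757 hinges on the type mapping accepting an int wherever a float is expected, since `isinstance` takes a tuple of types. A sketch of the mapping, matching the constant added to awx.main.constants:

```python
# Sketch of SURVEY_TYPE_MAPPING from #11757: a 'float' question also
# accepts an integer default, because isinstance(3, (float, int)) is True.
SURVEY_TYPE_MAPPING = {
    'text': str, 'textarea': str, 'password': str,
    'multiplechoice': str, 'multiselect': str,
    'integer': int, 'float': (float, int),
}

def default_is_valid(qtype, default):
    return isinstance(default, SURVEY_TYPE_MAPPING[qtype])
```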
Marliana Lara
a155f5561f Remove user_only roles from User and Team permission modal 2022-03-04 13:56:03 -05:00
Shane McDonald
0eac63b844 Merge pull request #11836 from nixocio/ui_ci_matrix
Split UI tests run
2022-03-04 11:50:28 -05:00
Sarah Akus
d07c2973e0 Merge pull request #11792 from marshmalien/8321-job-list-schedule-name
Add schedule detail to job list expanded view
2022-03-04 11:46:45 -05:00
nixocio
f1efc578cb Split UI test run

See: https://github.com/ansible/awx/issues/10678
2022-03-03 16:22:32 -05:00
Seth Foster
0b486762fa Merge pull request #11840 from fosterseth/meta_vars_priority
load job meta vars after JT extra vars
2022-03-03 13:13:34 -05:00
Alan Rominger
17756f0e72 Add job execution environment image to analytics data (#11835)
* Add job execution environment image to analytics data

* Add EE image to UJT analytics data

* Bump the unified job templates table
2022-03-03 11:13:11 -05:00
Alan Rominger
128400bfb5 Add resolved_action to analytics event data (#11816)
* Add resolved_action to analytics event data

* Bump collector version
2022-03-03 10:11:54 -05:00
Seth Foster
de1df8bf28 load job meta vars after JT extra vars 2022-03-02 14:42:47 -05:00
Alex Corey
fe01f13edb Merge pull request #11790 from AlexSCorey/11712-SelectRelatedQuery
Use select_related on db queries to reduce db calls
2022-03-02 11:33:45 -05:00
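The benefit of `select_related` in #11790 is the classic N+1 fix: without it, each `link.target` access is a separate query; with the join, one query fetches everything. A toy model (not Django) of the query counts:

```python
# Toy model of the N+1 problem fixed in #11790. FakeDB is illustrative:
# "join=False" mimics lazy per-row lookups, "join=True" mimics select_related.
class FakeDB:
    def __init__(self):
        self.queries = 0

    def fetch_links(self, join=False):
        self.queries += 1          # one query for the links themselves
        links = [{"target_id": 1, "source_id": 2}] * 3
        if not join:
            for link in links:
                self.queries += 2  # one extra query each for target and source
        return links

db = FakeDB()
db.fetch_links(join=False)
n_plus_one = db.queries   # 1 + 3 * 2
db.queries = 0
db.fetch_links(join=True)
joined = db.queries       # a single joined query
```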
Shane McDonald
3b6cd18283 Merge pull request #11834 from shanemcd/automate-galaxy-and-pypi
Automate publishing to galaxy and pypi
2022-03-01 16:22:39 -05:00
Keith Grant
4f505486e3 Add Toast messages when resources are copied (#11758)
* create useToast hook

* add copy success toast message to credentials/inventories

* add Toast tests

* add copy success toast to template/ee/project lists

* move Toast type to types.js
2022-03-01 15:59:24 -05:00
Shane McDonald
f6e18bbf06 Publish to galaxy and pypi in promote workflow 2022-03-01 15:42:13 -05:00
Marcelo Moreira de Mello
a988ad0c4e Merge pull request #11659 from ansible/expose_isolate_path_k8s
Allow isolated paths as hostPath volume @ k8s/ocp/container groups
2022-03-01 10:52:36 -05:00
Shane McDonald
a815e94209 Merge pull request #11737 from ansible/update-minikube-docs
update minikube docs with steps for using custom operator
2022-03-01 07:49:21 -05:00
Shane McDonald
650bee1dea Merge pull request #11749 from rh-dluong/fix-ocp-cred-desc
Fixed doc string for Container Groups credential type
2022-03-01 07:48:37 -05:00
Shane McDonald
80c188586c Merge pull request #11798 from john-westcott-iv/saml_attr_lists
SAML superuser/auditor working with lists
2022-03-01 07:42:35 -05:00
Shane McDonald
b5cf8f9326 Merge pull request #11819 from shanemcd/transmitter-future
Reimplement transmitter thread as future
2022-03-01 07:33:26 -05:00
Marliana Lara
1aefd39782 Show deleted detail for deleted schedules 2022-02-28 15:51:36 -05:00
Marliana Lara
8c21a2aa9e Add schedule detail to job list expanded view 2022-02-28 14:59:03 -05:00
Shane McDonald
2df3ca547b Reimplement transmitter thread as future
This avoids the need for an explicit `.join()`, and removes the need for the TransmitterThread wrapper class.
2022-02-28 11:21:53 -05:00
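The pattern in 2df3ca547b: submitting the transmit step to an executor yields a Future, so there is no thread subclass and no explicit `.join()`; any exception re-raises when `.result()` is called. A minimal sketch with a hypothetical transmit function:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-in for the transmit step; not AWX's actual code.
def transmit(buf):
    buf.append("transmitted")
    return len(buf)

buf = []
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(transmit, buf)
    result = future.result()  # blocks until done; re-raises any exception
```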
Marcelo Moreira de Mello
8645147292 Renamed scontext variable to mount_options 2022-02-28 10:22:24 -05:00
Marliana Lara
169da866f3 Add UI unit tests to job settings 2022-02-28 10:22:24 -05:00
Marcelo Moreira de Mello
5e8107621e Allow isolated paths as hostPath volume @ k8s/ocp/container groups 2022-02-28 10:22:20 -05:00
Alan Rominger
eb52095670 Fix bug where translated strings will cause log error to error (#11813)
* Fix bug where translated strings will cause log error to error

* Use force_str for ensuring string
2022-02-28 08:38:01 -05:00
John Westcott IV
cb57752903 Changing session cookie name and added a way for clients to know what the name is #11413 (#11679)
* Changing session cookie name and added a way for clients to know what the key name is
* Adding session information to docs
* Fixing how awxkit gets the session id header
2022-02-27 07:27:25 -05:00
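With #11679, a client can discover the session cookie's name from the `X-API-Session-Cookie-Name` response header (defaulting to `awx_sessionid`, per the view change below) instead of hard-coding it. A hypothetical client-side helper:

```python
# Hypothetical client sketch for #11679: read the advertised cookie name,
# falling back to the documented default 'awx_sessionid'.
def session_cookie_name(response_headers):
    return response_headers.get('X-API-Session-Cookie-Name', 'awx_sessionid')
```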
Shane McDonald
895c05a84a Merge pull request #11808 from john-westcott-iv/fix_minicube
Changing API version from v1beta1 to v1
2022-02-24 16:32:21 -05:00
John Westcott IV
4d47f24dd4 Changing API version from v1beta1 to v1 2022-02-24 11:17:36 -05:00
Elijah DeLee
4bd6c2a804 set max dispatch workers to same as max forks
Right now, without this, we end up with a different number for max_workers than max_forks. For example, on a control node with 16 Gi of RAM,
  max_mem_capacity  w/ 100 MB/fork = (16*1024)/100 --> 164
  max_workers = 5 * 16 --> 80

This means we would allow that control node to control up to 164 jobs, but every job after the 80th would be stuck in `waiting` until a dispatch worker frees up to run it.
2022-02-24 10:53:54 -05:00
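The arithmetic from commit 4bd6c2a804 for a 16 GiB control node, assuming the ~100 MB-per-fork accounting the commit message describes:

```python
# Numbers taken from the commit message of 4bd6c2a804.
total_memory_gb = 16
mem_per_fork_mb = 100

max_forks = round(total_memory_gb * 1024 / mem_per_fork_mb)  # memory-based fork limit
old_max_workers = 5 * total_memory_gb                        # old heuristic: 5 workers/GB
new_max_workers = max_forks                                  # now kept in sync with forks
```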
Shane McDonald
48fa947692 Merge pull request #11756 from shanemcd/ipv6-podman
Enable Podman ipv6 support by default
2022-02-24 09:58:20 -05:00
Shane McDonald
88f66d5c51 Enable Podman ipv6 support by default 2022-02-24 08:51:51 -05:00
Marcelo Moreira de Mello
e9a8175fd7 Merge pull request #11702 from ansible/fact_insights_mount_issues
Do not mount /etc/redhat-access-insights into EEs
2022-02-23 14:44:10 -05:00
Marcelo Moreira de Mello
0d75a25bf0 Do not mount /etc/redhat-access-insights into EEs
Sharing /etc/redhat-access-insights is no longer
required for EEs. Furthermore, this fixes an SELinux issue
when launching multiple jobs with concurrency and fact_caching enabled,
e.g.:

lsetxattr /etc/redhat-access-insights: operation not permitted
2022-02-23 14:12:33 -05:00
Tiago Góes
6af294e9a4 Merge pull request #11794 from jainnikhil30/fix_credential_types_drop_down
Allow more than 400 credential types in drop down while adding new credential
2022-02-23 16:08:28 -03:00
Elijah DeLee
38f50f014b fix missing job lifecycle messages (#11801)
We were missing these messages for control-type jobs, which call start_task earlier than other job types.
2022-02-23 13:56:25 -05:00
Alex Corey
a394f11d07 Resolves occasions where missing table data moves items to the left (#11772) 2022-02-23 11:36:20 -05:00
Kersom
3ab73ddf84 Fix TypeError when running a command on a host in a smart inventory (#11768)

See: https://github.com/ansible/awx/issues/11611
2022-02-23 10:32:27 -05:00
John Westcott IV
c7a1fb67d0 SAML superuser/auditor now searching all fields in a list instead of just the first 2022-02-23 09:35:11 -05:00
nixocio
afb8be4f0b Refactor fetch of credential types
2022-02-23 09:29:23 -05:00
Nikhil Jain
dc2a392f4c forgot to run prettier earlier 2022-02-23 12:09:51 +05:30
Nikhil Jain
61323c7f85 allow more than 400 credential types in drop down while adding new credential 2022-02-23 11:30:55 +05:30
Alex Corey
fa47e48a15 Fixes broken link from User to UserOrg (#11759) 2022-02-22 16:34:30 -05:00
Kersom
eb859b9812 Fix TypeError when running a command on a host in a smart inventory (#11768)

See: https://github.com/ansible/awx/issues/11611
2022-02-21 16:34:31 -05:00
Kersom
7cf0523561 Display roles for organization listed when using non-English web browser (#11762)
2022-02-21 15:53:32 -05:00
Alex Corey
aae2e3f835 Merge pull request #11785 from ansible/dependabot/npm_and_yarn/awx/ui/url-parse-1.5.9
Bump url-parse from 1.5.3 to 1.5.9 in /awx/ui
2022-02-21 14:02:17 -05:00
dependabot[bot]
a60a65cd2a Bump url-parse from 1.5.3 to 1.5.9 in /awx/ui
Bumps [url-parse](https://github.com/unshiftio/url-parse) from 1.5.3 to 1.5.9.
- [Release notes](https://github.com/unshiftio/url-parse/releases)
- [Commits](https://github.com/unshiftio/url-parse/compare/1.5.3...1.5.9)

---
updated-dependencies:
- dependency-name: url-parse
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2022-02-21 15:06:19 +00:00
Kersom
b7d0ec53e8 Merge pull request #11776 from nixocio/ui_ternary
Use ternary rather than &&
2022-02-17 18:24:09 -05:00
nixocio
f20cd8c203 Use ternary rather than &&
Use ternary rather than && to avoid displaying 0.
2022-02-17 15:34:03 -05:00
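The JSX pitfall behind #11776 has a direct Python analogue: `and` returns its falsy left operand, so a count of 0 leaks into the output, whereas a conditional expression does not. Illustrative only:

```python
count = 0
# '&&'-style short-circuit: the falsy left operand (0) is the result,
# which a template would render as the character "0".
shortcircuit = count and f"{count} selected"
# ternary equivalent: renders nothing when count is 0
ternary = f"{count} selected" if count else ""
```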
Tiago Góes
1ed0b70601 Merge pull request #11764 from ansible/filter_hopcontrol_from_associatemodal
filter out both hop and control nodes instead of just one or the other
2022-02-17 14:48:59 -03:00
Shane McDonald
c3621f1e89 Merge pull request #11742 from kdelee/drop_unused_capacity_tracking
drop unused logic in task manager
2022-02-17 09:46:00 -05:00
Shane McDonald
7de86fc4b4 Merge pull request #11747 from AlanCoding/loop_label
Add loop label with docker-compose playbook
2022-02-17 09:45:03 -05:00
Shane McDonald
963948b5c8 Merge pull request #11767 from simaishi/rekey_existing
Allow rekey with an existing key
2022-02-17 09:39:05 -05:00
Shane McDonald
d9749e8975 Merge pull request #11734 from shanemcd/fix-image-push
Fix image push when overriding awx_image_tag
2022-02-17 07:21:29 -05:00
Julen Landa Alustiza
f6e4e53728 Merge pull request #11766 from Zokormazo/collection-pep8
pep8 E231 fix for awx_collection
2022-02-17 13:21:23 +01:00
Julen Landa Alustiza
98adb196ea pep8 E231 fix for awx_collection
Signed-off-by: Julen Landa Alustiza <jlanda@redhat.com>
2022-02-17 09:34:48 +01:00
Rebeccah
6b60edbe5d filter out both hop and control nodes instead of just one or the other 2022-02-16 18:32:41 -05:00
Satoe Imaishi
9d6de42f48 Allow rekey with an existing key
(cherry picked from commit 0c6440b46756f02a669d87e461faa4abc5bab8e6)
2022-02-16 17:58:22 -05:00
Tiago Góes
a94a602ccd Merge pull request #11746 from AlexSCorey/11744-fixValidatorBug
Fixes validator console error, and routing issue in Instance Groups Branch
2022-02-16 12:28:43 -03:00
dluong
301818003d Fixed doc string for Container Groups credential type 2022-02-15 16:10:28 -05:00
Alex Corey
170d95aa3c Fixes validator console error, and routing issue in Instance Groups branch 2022-02-15 13:07:36 -05:00
Alan Rominger
fe7a2fe229 Add loop label with docker-compose playbook 2022-02-15 13:05:59 -05:00
Elijah DeLee
921b2bfb28 drop unused logic in task manager
There is no current need to keep a separate dependency graph for
each instance group. In the interest of making it clearer what the
current code does, eliminate this superfluous complication.

We no longer reference any accounting of instance group
capacity; instead we only look at capacity on instances.
2022-02-14 16:15:03 -05:00
Elijah DeLee
dd6cf19c39 update steps for using custom operator
Updating this to use the new make commands in the operator repo
2022-02-14 11:01:30 -05:00
Shane McDonald
e70059ed6b Fix image push when overriding awx_image_tag 2022-02-12 13:34:46 -05:00
85 changed files with 1505 additions and 633 deletions

View File

@@ -5,7 +5,7 @@ env:
 on:
   pull_request:
 jobs:
-  common_tests:
+  common-tests:
     name: ${{ matrix.tests.name }}
     runs-on: ubuntu-latest
     permissions:
@@ -33,9 +33,12 @@ jobs:
       - name: ui-lint
         label: Run UI Linters
         command: make ui-lint
-      - name: ui-test
-        label: Run UI Tests
-        command: make ui-test
+      - name: ui-test-screens
+        label: Run UI Screens Tests
+        command: make ui-test-screens
+      - name: ui-test-general
+        label: Run UI General Tests
+        command: make ui-test-general

     steps:
       - uses: actions/checkout@v2
@@ -63,6 +66,36 @@ jobs:
         run: |
           docker run -u $(id -u) --rm -v ${{ github.workspace}}:/awx_devel/:Z \
             --workdir=/awx_devel ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} ${{ matrix.tests.command }}
+
+  dev-env:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v2
+      - name: Get python version from Makefile
+        run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
+      - name: Install python ${{ env.py_version }}
+        uses: actions/setup-python@v2
+        with:
+          python-version: ${{ env.py_version }}
+      - name: Log in to registry
+        run: |
+          echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
+      - name: Pre-pull image to warm build cache
+        run: |
+          docker pull ghcr.io/${{ github.repository_owner }}/awx_devel:${{ env.BRANCH }} || :
+      - name: Build image
+        run: |
+          DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }} COMPOSE_TAG=${{ env.BRANCH }} make docker-compose-build
+      - name: Run smoke test
+        run: |
+          export DEV_DOCKER_TAG_BASE=ghcr.io/${{ github.repository_owner }}
+          export COMPOSE_TAG=${{ env.BRANCH }}
+          ansible-playbook tools/docker-compose/ansible/smoke-test.yml -e repo_dir=$(pwd) -v

   awx-operator:
     runs-on: ubuntu-latest

View File

@@ -8,6 +8,53 @@ jobs:
promote:
runs-on: ubuntu-latest
steps:
- name: Checkout awx
uses: actions/checkout@v2
- name: Get python version from Makefile
run: echo py_version=`make PYTHON_VERSION` >> $GITHUB_ENV
- name: Install python ${{ env.py_version }}
uses: actions/setup-python@v2
with:
python-version: ${{ env.py_version }}
- name: Install dependencies
run: |
python${{ env.py_version }} -m pip install wheel twine
- name: Set official collection namespace
run: echo collection_namespace=awx >> $GITHUB_ENV
if: ${{ github.repository_owner == 'ansible' }}
- name: Set unofficial collection namespace
run: echo collection_namespace=${{ github.repository_owner }} >> $GITHUB_ENV
if: ${{ github.repository_owner != 'ansible' }}
- name: Build collection and publish to galaxy
run: |
COLLECTION_NAMESPACE=${{ env.collection_namespace }} make build_collection
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ github.event.release.tag_name }}.tar.gz
- name: Set official pypi info
run: echo pypi_repo=pypi >> $GITHUB_ENV
if: ${{ github.repository_owner == 'ansible' }}
- name: Set unofficial pypi info
run: echo pypi_repo=testpypi >> $GITHUB_ENV
if: ${{ github.repository_owner != 'ansible' }}
- name: Build awxkit and upload to pypi
run: |
cd awxkit && python3 setup.py bdist_wheel
twine upload \
-r ${{ env.pypi_repo }} \
-u ${{ secrets.PYPI_USERNAME }} \
-p ${{ secrets.PYPI_PASSWORD }} \
dist/*
- name: Log in to GHCR
run: |
echo ${{ secrets.GITHUB_TOKEN }} | docker login ghcr.io -u ${{ github.actor }} --password-stdin

View File

@@ -305,7 +305,7 @@ symlink_collection:
 	mkdir -p ~/.ansible/collections/ansible_collections/$(COLLECTION_NAMESPACE) # in case it does not exist
 	ln -s $(shell pwd)/awx_collection $(COLLECTION_INSTALL)

-build_collection:
+awx_collection_build: $(shell find awx_collection -type f)
 	ansible-playbook -i localhost, awx_collection/tools/template_galaxy.yml \
 		-e collection_package=$(COLLECTION_PACKAGE) \
 		-e collection_namespace=$(COLLECTION_NAMESPACE) \
@@ -313,6 +313,8 @@ build_collection:
 		-e '{"awx_template_version":false}'
 	ansible-galaxy collection build awx_collection_build --force --output-path=awx_collection_build

+build_collection: awx_collection_build
+
 install_collection: build_collection
 	rm -rf $(COLLECTION_INSTALL)
 	ansible-galaxy collection install awx_collection_build/$(COLLECTION_NAMESPACE)-$(COLLECTION_PACKAGE)-$(COLLECTION_VERSION).tar.gz
@@ -400,9 +402,18 @@ ui-lint:
ui-test:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui test
$(NPM_BIN) run --prefix awx/ui test
ui-test-screens:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui test-screens --runInBand
ui-test-general:
$(NPM_BIN) --prefix awx/ui install
$(NPM_BIN) run --prefix awx/ui pretest
$(NPM_BIN) run --prefix awx/ui/ test-general --runInBand
# Build a pip-installable package into dist/ with a timestamped version number.
dev_build:
$(PYTHON) setup.py dev_build
@@ -567,3 +578,6 @@ messages:
 		. $(VENV_BASE)/awx/bin/activate; \
 	fi; \
 	$(PYTHON) manage.py makemessages -l $(LANG) --keep-pot
+
+print-%:
+	@echo $($*)

View File

@@ -99,6 +99,7 @@ class LoggedLoginView(auth_views.LoginView):
             current_user = smart_text(JSONRenderer().render(current_user.data))
             current_user = urllib.parse.quote('%s' % current_user, '')
             ret.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
+            ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))

             return ret
         else:

View File

@@ -113,7 +113,7 @@ from awx.api.permissions import (
 from awx.api import renderers
 from awx.api import serializers
 from awx.api.metadata import RoleMetadata
-from awx.main.constants import ACTIVE_STATES
+from awx.main.constants import ACTIVE_STATES, SURVEY_TYPE_MAPPING
 from awx.main.scheduler.dag_workflow import WorkflowDAG
 from awx.api.views.mixin import (
     ControlledByScmMixin,
@@ -2468,8 +2468,6 @@ class JobTemplateSurveySpec(GenericAPIView):
     obj_permission_type = 'admin'
     serializer_class = serializers.EmptySerializer

-    ALLOWED_TYPES = {'text': str, 'textarea': str, 'password': str, 'multiplechoice': str, 'multiselect': str, 'integer': int, 'float': float}
-
     def get(self, request, *args, **kwargs):
         obj = self.get_object()
         return Response(obj.display_survey_spec())
@@ -2540,17 +2538,17 @@ class JobTemplateSurveySpec(GenericAPIView):
             # Type-specific validation
             # validate question type <-> default type
             qtype = survey_item["type"]
-            if qtype not in JobTemplateSurveySpec.ALLOWED_TYPES:
+            if qtype not in SURVEY_TYPE_MAPPING:
                 return Response(
                     dict(
                         error=_("'{survey_item[type]}' in survey question {idx} is not one of '{allowed_types}' allowed question types.").format(
-                            allowed_types=', '.join(JobTemplateSurveySpec.ALLOWED_TYPES.keys()), **context
+                            allowed_types=', '.join(SURVEY_TYPE_MAPPING.keys()), **context
                         )
                     ),
                     status=status.HTTP_400_BAD_REQUEST,
                 )
             if 'default' in survey_item and survey_item['default'] != '':
-                if not isinstance(survey_item['default'], JobTemplateSurveySpec.ALLOWED_TYPES[qtype]):
+                if not isinstance(survey_item['default'], SURVEY_TYPE_MAPPING[qtype]):
                     type_label = 'string'
                     if qtype in ['integer', 'float']:
                         type_label = qtype

View File

@@ -19,7 +19,7 @@ class MeshVisualizer(APIView):
         data = {
             'nodes': InstanceNodeSerializer(Instance.objects.all(), many=True).data,
-            'links': InstanceLinkSerializer(InstanceLink.objects.all(), many=True).data,
+            'links': InstanceLinkSerializer(InstanceLink.objects.select_related('target', 'source'), many=True).data,
         }

         return Response(data)

View File

@@ -337,6 +337,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
{tbl}.parent_uuid,
{tbl}.event,
task_action,
resolved_action,
-- '-' operator listed here:
-- https://www.postgresql.org/docs/12/functions-json.html
-- note that operator is only supported by jsonb objects
@@ -356,7 +357,7 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
x.duration AS duration,
x.res->'warnings' AS warnings,
x.res->'deprecations' AS deprecations
FROM {tbl}, jsonb_to_record({event_data}) AS x("res" json, "duration" text, "task_action" text, "start" text, "end" text)
FROM {tbl}, jsonb_to_record({event_data}) AS x("res" json, "duration" text, "task_action" text, "resolved_action" text, "start" text, "end" text)
WHERE ({tbl}.{where_column} > '{since.isoformat()}' AND {tbl}.{where_column} <= '{until.isoformat()}')) TO STDOUT WITH CSV HEADER'''
return query
@@ -366,23 +367,24 @@ def _events_table(since, full_path, until, tbl, where_column, project_job_create
     return _copy_table(table='events', query=query(f"replace({tbl}.event_data::text, '\\u0000', '')::jsonb"), path=full_path)

-@register('events_table', '1.3', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
+@register('events_table', '1.4', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
 def events_table_unpartitioned(since, full_path, until, **kwargs):
     return _events_table(since, full_path, until, '_unpartitioned_main_jobevent', 'created', **kwargs)

-@register('events_table', '1.3', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
+@register('events_table', '1.4', format='csv', description=_('Automation task records'), expensive=four_hour_slicing)
 def events_table_partitioned_modified(since, full_path, until, **kwargs):
     return _events_table(since, full_path, until, 'main_jobevent', 'modified', project_job_created=True, **kwargs)

-@register('unified_jobs_table', '1.2', format='csv', description=_('Data on jobs run'), expensive=four_hour_slicing)
+@register('unified_jobs_table', '1.3', format='csv', description=_('Data on jobs run'), expensive=four_hour_slicing)
 def unified_jobs_table(since, full_path, until, **kwargs):
def unified_jobs_table(since, full_path, until, **kwargs):
unified_job_query = '''COPY (SELECT main_unifiedjob.id,
main_unifiedjob.polymorphic_ctype_id,
django_content_type.model,
main_unifiedjob.organization_id,
main_organization.name as organization_name,
main_executionenvironment.image as execution_environment_image,
main_job.inventory_id,
main_inventory.name as inventory_name,
main_unifiedjob.created,
@@ -407,6 +409,7 @@ def unified_jobs_table(since, full_path, until, **kwargs):
LEFT JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id
LEFT JOIN main_inventory ON main_job.inventory_id = main_inventory.id
LEFT JOIN main_organization ON main_organization.id = main_unifiedjob.organization_id
LEFT JOIN main_executionenvironment ON main_executionenvironment.id = main_unifiedjob.execution_environment_id
WHERE ((main_unifiedjob.created > '{0}' AND main_unifiedjob.created <= '{1}')
OR (main_unifiedjob.finished > '{0}' AND main_unifiedjob.finished <= '{1}'))
AND main_unifiedjob.launch_type != 'sync'
@@ -417,11 +420,12 @@ def unified_jobs_table(since, full_path, until, **kwargs):
return _copy_table(table='unified_jobs', query=unified_job_query, path=full_path)
-@register('unified_job_template_table', '1.0', format='csv', description=_('Data on job templates'))
+@register('unified_job_template_table', '1.1', format='csv', description=_('Data on job templates'))
 def unified_job_template_table(since, full_path, **kwargs):
unified_job_template_query = '''COPY (SELECT main_unifiedjobtemplate.id,
main_unifiedjobtemplate.polymorphic_ctype_id,
django_content_type.model,
main_executionenvironment.image as execution_environment_image,
main_unifiedjobtemplate.created,
main_unifiedjobtemplate.modified,
main_unifiedjobtemplate.created_by_id,
@@ -434,7 +438,8 @@ def unified_job_template_table(since, full_path, **kwargs):
main_unifiedjobtemplate.next_job_run,
main_unifiedjobtemplate.next_schedule_id,
main_unifiedjobtemplate.status
-    FROM main_unifiedjobtemplate, django_content_type
+    FROM main_unifiedjobtemplate
+    LEFT JOIN main_executionenvironment ON main_executionenvironment.id = main_unifiedjobtemplate.execution_environment_id, django_content_type
WHERE main_unifiedjobtemplate.polymorphic_ctype_id = django_content_type.id
ORDER BY main_unifiedjobtemplate.id ASC) TO STDOUT WITH CSV HEADER'''
return _copy_table(table='unified_job_template', query=unified_job_template_query, path=full_path)

View File

@@ -334,6 +334,19 @@ register(
     category_slug='jobs',
 )

+register(
+    'AWX_MOUNT_ISOLATED_PATHS_ON_K8S',
+    field_class=fields.BooleanField,
+    default=False,
+    label=_('Expose host paths for Container Groups'),
+    help_text=_(
+        'Expose paths via hostPath for the Pods created by a Container Group. '
+        'HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. '
+    ),
+    category=_('Jobs'),
+    category_slug='jobs',
+)
+
 register(
     'GALAXY_IGNORE_CERTS',
     field_class=fields.BooleanField,

View File

@@ -88,7 +88,10 @@ JOB_FOLDER_PREFIX = 'awx_%s_'
 # :z option tells Podman that two containers share the volume content with r/w
 # :O option tells Podman to mount the directory from the host as a temporary storage using the overlay file system.
+# :ro or :rw option to mount a volume in read-only or read-write mode, respectively. By default, the volumes are mounted read-write.
 # see podman-run manpage for further details
 # /HOST-DIR:/CONTAINER-DIR:OPTIONS
-CONTAINER_VOLUMES_MOUNT_TYPES = ['z', 'O']
+CONTAINER_VOLUMES_MOUNT_TYPES = ['z', 'O', 'ro', 'rw']
+MAX_ISOLATED_PATH_COLON_DELIMITER = 2
+SURVEY_TYPE_MAPPING = {'text': str, 'textarea': str, 'password': str, 'multiplechoice': str, 'multiselect': str, 'integer': int, 'float': (float, int)}
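The `HOST-DIR:CONTAINER-DIR:OPTIONS` format and the two constants above suggest a parse along these lines. This is a hypothetical illustration, not AWX's actual parser:

```python
# Illustrative parser for a /HOST-DIR:/CONTAINER-DIR:OPTIONS mount spec,
# using the constants from the change above.
CONTAINER_VOLUMES_MOUNT_TYPES = ['z', 'O', 'ro', 'rw']
MAX_ISOLATED_PATH_COLON_DELIMITER = 2

def parse_mount(spec):
    # at most two colons are significant: host, container, options
    parts = spec.split(':', MAX_ISOLATED_PATH_COLON_DELIMITER)
    host = parts[0]
    container = parts[1] if len(parts) > 1 else host
    options = parts[2] if len(parts) > 2 else None
    if options is not None and options not in CONTAINER_VOLUMES_MOUNT_TYPES:
        raise ValueError(f'unsupported mount option: {options}')
    return host, container, options
```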

View File

@@ -22,7 +22,7 @@ import psutil

 from awx.main.models import UnifiedJob
 from awx.main.dispatch import reaper
-from awx.main.utils.common import convert_mem_str_to_bytes
+from awx.main.utils.common import convert_mem_str_to_bytes, get_mem_effective_capacity

 if 'run_callback_receiver' in sys.argv:
     logger = logging.getLogger('awx.main.commands.run_callback_receiver')
@@ -324,8 +324,9 @@ class AutoscalePool(WorkerPool):
             total_memory_gb = convert_mem_str_to_bytes(settings_absmem) // 2**30
         else:
             total_memory_gb = (psutil.virtual_memory().total >> 30) + 1  # noqa: round up
-        # 5 workers per GB of total memory
-        self.max_workers = total_memory_gb * 5
+        # Get same number as max forks based on memory, this function takes memory as bytes
+        self.max_workers = get_mem_effective_capacity(total_memory_gb * 2**30)

         # max workers can't be less than min_workers
         self.max_workers = max(self.min_workers, self.max_workers)

View File

@@ -16,13 +16,26 @@ from awx.main.utils.encryption import encrypt_field, decrypt_field, encrypt_valu
 class Command(BaseCommand):
     """
-    Regenerate a new SECRET_KEY value and re-encrypt every secret in the database.
+    Re-encrypt every secret in the database, using regenerated new SECRET_KEY or user provided key.
     """

+    def add_arguments(self, parser):
+        parser.add_argument(
+            '--use-custom-key',
+            dest='use_custom_key',
+            action='store_true',
+            default=False,
+            help='Use existing key provided as TOWER_SECRET_KEY environment variable',
+        )
+
     @transaction.atomic
     def handle(self, **options):
         self.old_key = settings.SECRET_KEY
-        self.new_key = base64.encodebytes(os.urandom(33)).decode().rstrip()
+        custom_key = os.environ.get("TOWER_SECRET_KEY")
+        if options.get("use_custom_key") and custom_key:
+            self.new_key = custom_key
+        else:
+            self.new_key = base64.encodebytes(os.urandom(33)).decode().rstrip()
         self._notification_templates()
         self._credentials()
         self._unified_jobs()
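The key-selection logic above, extracted as a standalone sketch: reuse `TOWER_SECRET_KEY` when `--use-custom-key` is passed, otherwise generate a fresh random key (33 random bytes base64-encode to a 44-character string):

```python
import base64
import os

# Sketch of the key selection from #11767; the function name is illustrative.
def choose_new_key(use_custom_key=False):
    custom_key = os.environ.get("TOWER_SECRET_KEY")
    if use_custom_key and custom_key:
        return custom_key
    # 33 bytes -> 44 base64 characters, newline stripped
    return base64.encodebytes(os.urandom(33)).decode().rstrip()
```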

View File

@@ -71,6 +71,7 @@ class TaskManager:
         instances = Instance.objects.filter(hostname__isnull=False, enabled=True).exclude(node_type='hop')
         self.real_instances = {i.hostname: i for i in instances}
         self.controlplane_ig = None
+        self.dependency_graph = DependencyGraph()

         instances_partial = [
             SimpleNamespace(
@@ -90,32 +91,18 @@ class TaskManager:
             if rampart_group.name == settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME:
                 self.controlplane_ig = rampart_group
             self.graph[rampart_group.name] = dict(
-                graph=DependencyGraph(),
-                execution_capacity=0,
-                control_capacity=0,
-                consumed_capacity=0,
-                consumed_control_capacity=0,
-                consumed_execution_capacity=0,
-                instances=[],
+                instances=[
+                    instances_by_hostname[instance.hostname] for instance in rampart_group.instances.all() if instance.hostname in instances_by_hostname
+                ],
             )
-            for instance in rampart_group.instances.all():
-                if not instance.enabled:
-                    continue
-                for capacity_type in ('control', 'execution'):
-                    if instance.node_type in (capacity_type, 'hybrid'):
-                        self.graph[rampart_group.name][f'{capacity_type}_capacity'] += instance.capacity
-            for instance in rampart_group.instances.filter(enabled=True).order_by('hostname'):
-                if instance.hostname in instances_by_hostname:
-                    self.graph[rampart_group.name]['instances'].append(instances_by_hostname[instance.hostname])

     def job_blocked_by(self, task):
         # TODO: I'm not happy with this, I think blocking behavior should be decided outside of the dependency graph
         # in the old task manager this was handled as a method on each task object outside of the graph and
         # probably has the side effect of cutting down *a lot* of the logic from this task manager class
-        for g in self.graph:
-            blocked_by = self.graph[g]['graph'].task_blocked_by(task)
-            if blocked_by:
-                return blocked_by
+        blocked_by = self.dependency_graph.task_blocked_by(task)
+        if blocked_by:
+            return blocked_by

         if not task.dependent_jobs_finished():
             blocked_by = task.dependent_jobs.first()
@@ -298,16 +285,6 @@ class TaskManager:
             task.save()
             task.log_lifecycle("waiting")

-            if rampart_group is not None:
-                self.consume_capacity(task, rampart_group.name, instance=instance)
-            if task.controller_node:
-                self.consume_capacity(
-                    task,
-                    settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME,
-                    instance=self.real_instances[task.controller_node],
-                    impact=settings.AWX_CONTROL_NODE_TASK_IMPACT,
-                )

         def post_commit():
             if task.status != 'failed' and type(task) is not WorkflowJob:
                 # Before task is dispatched, ensure that job_event partitions exist
@@ -327,8 +304,7 @@
     def process_running_tasks(self, running_tasks):
         for task in running_tasks:
-            if task.instance_group:
-                self.graph[task.instance_group.name]['graph'].add_job(task)
+            self.dependency_graph.add_job(task)

     def create_project_update(self, task):
         project_task = Project.objects.get(id=task.project_id).create_project_update(_eager_fields=dict(launch_type='dependency'))
@@ -515,8 +491,10 @@
                 task.execution_node = control_instance.hostname
                 control_instance.remaining_capacity = max(0, control_instance.remaining_capacity - control_impact)
                 control_instance.jobs_running += 1
-                self.graph[settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]['graph'].add_job(task)
+                self.dependency_graph.add_job(task)
                 execution_instance = self.real_instances[control_instance.hostname]
+                task.log_lifecycle("controller_node_chosen")
+                task.log_lifecycle("execution_node_chosen")
                 self.start_task(task, self.controlplane_ig, task.get_jobs_fail_chain(), execution_instance)
                 found_acceptable_queue = True
                 continue
@@ -524,7 +502,7 @@ class TaskManager:
for rampart_group in preferred_instance_groups:
if rampart_group.is_container_group:
control_instance.jobs_running += 1
self.graph[settings.DEFAULT_CONTROL_PLANE_QUEUE_NAME]['graph'].add_job(task)
self.dependency_graph.add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), None)
found_acceptable_queue = True
break
@@ -559,7 +537,7 @@ class TaskManager:
)
)
execution_instance = self.real_instances[execution_instance.hostname]
self.graph[rampart_group.name]['graph'].add_job(task)
self.dependency_graph.add_job(task)
self.start_task(task, rampart_group, task.get_jobs_fail_chain(), execution_instance)
found_acceptable_queue = True
break
@@ -616,29 +594,9 @@ class TaskManager:
logger.error(f'{j.execution_node} is not a registered instance; reaping {j.log_format}')
reap_job(j, 'failed')
def calculate_capacity_consumed(self, tasks):
self.graph = InstanceGroup.objects.capacity_values(tasks=tasks, graph=self.graph)
def consume_capacity(self, task, instance_group, instance=None, impact=None):
impact = impact if impact else task.task_impact
logger.debug(
'{} consumed {} capacity units from {} with prior total of {}'.format(
task.log_format, impact, instance_group, self.graph[instance_group]['consumed_capacity']
)
)
self.graph[instance_group]['consumed_capacity'] += impact
for capacity_type in ('control', 'execution'):
if instance is None or instance.node_type in ('hybrid', capacity_type):
self.graph[instance_group][f'consumed_{capacity_type}_capacity'] += impact
def get_remaining_capacity(self, instance_group, capacity_type='execution'):
return self.graph[instance_group][f'{capacity_type}_capacity'] - self.graph[instance_group][f'consumed_{capacity_type}_capacity']
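The capacity bookkeeping in `consume_capacity` / `get_remaining_capacity` can be sketched with a plain per-group dict (illustrative names, not the actual AWX graph structure; hybrid nodes are charged against both capacity types):

```python
def consume(graph, group, impact, node_type='hybrid'):
    """Charge `impact` units against a group; hybrid nodes count for both types."""
    graph[group]['consumed_capacity'] += impact
    for capacity_type in ('control', 'execution'):
        if node_type in ('hybrid', capacity_type):
            graph[group][f'consumed_{capacity_type}_capacity'] += impact

def remaining(graph, group, capacity_type='execution'):
    """Remaining capacity for one capacity type in one instance group."""
    return graph[group][f'{capacity_type}_capacity'] - graph[group][f'consumed_{capacity_type}_capacity']
```

This mirrors why `AWX_CONTROL_NODE_TASK_IMPACT` is charged separately against the control plane queue: control and execution capacity are tracked as independent counters.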
def process_tasks(self, all_sorted_tasks):
running_tasks = [t for t in all_sorted_tasks if t.status in ['waiting', 'running']]
self.calculate_capacity_consumed(running_tasks)
self.process_running_tasks(running_tasks)
pending_tasks = [t for t in all_sorted_tasks if t.status == 'pending']


@@ -40,6 +40,7 @@ from awx.main.constants import (
STANDARD_INVENTORY_UPDATE_ENV,
JOB_FOLDER_PREFIX,
MAX_ISOLATED_PATH_COLON_DELIMITER,
CONTAINER_VOLUMES_MOUNT_TYPES,
)
from awx.main.models import (
Instance,
@@ -163,8 +164,14 @@ class BaseTask(object):
# Using z allows the dir to be mounted by multiple containers
# Uppercase Z restricts access (in weird ways) to 1 container at a time
if this_path.count(':') == MAX_ISOLATED_PATH_COLON_DELIMITER:
src, dest, scontext = this_path.split(':')
params['container_volume_mounts'].append(f'{src}:{dest}:{scontext}')
src, dest, mount_option = this_path.split(':')
# mount_option validation is performed via the API, but this can still be overridden via settings.py
if mount_option not in CONTAINER_VOLUMES_MOUNT_TYPES:
mount_option = 'z'
logger.warn(f'The path {this_path} has volume mount type {mount_option} which is not supported. Using "z" instead.')
params['container_volume_mounts'].append(f'{src}:{dest}:{mount_option}')
elif this_path.count(':') == MAX_ISOLATED_PATH_COLON_DELIMITER - 1:
src, dest = this_path.split(':')
params['container_volume_mounts'].append(f'{src}:{dest}:z')
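The path-splitting logic in this hunk can be sketched in isolation. The option set below stands in for `CONTAINER_VOLUMES_MOUNT_TYPES` and the names are illustrative, not the actual AWX helpers:

```python
# Illustrative stand-in for CONTAINER_VOLUMES_MOUNT_TYPES
SUPPORTED_OPTIONS = {'z', 'Z', 'O', 'ro', 'rw'}

def parse_show_path(entry, default_option='z'):
    """Normalize an AWX_ISOLATION_SHOW_PATHS entry ("src[:dest[:option]]")
    into a "src:dest:option" podman volume mount string.

    Unsupported options fall back to 'z' rather than failing, matching
    the behavior added in this hunk.
    """
    parts = entry.split(':')
    if len(parts) == 3:
        src, dest, option = parts
        if option not in SUPPORTED_OPTIONS:
            option = default_option  # unsupported option: fall back to 'z'
    elif len(parts) == 2:
        src, dest = parts
        option = default_option
    else:
        src = dest = entry
        option = default_option
    return f'{src}:{dest}:{option}'
```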
@@ -816,11 +823,12 @@ class RunJob(BaseTask):
return job.playbook
def build_extra_vars_file(self, job, private_data_dir):
# Define special extra_vars for AWX, combine with job.extra_vars.
extra_vars = job.awx_meta_vars()
extra_vars = dict()
# load in JT extra vars
if job.extra_vars_dict:
extra_vars.update(json.loads(job.decrypted_extra_vars()))
# load in meta vars, overriding any variable set in JT extra vars
extra_vars.update(job.awx_meta_vars())
# By default, all extra vars disallow Jinja2 template usage for
# security reasons; top level key-values defined in JT.extra_vars, however,
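The net effect of this reordering is that AWX meta vars now win over job template extra vars on key conflicts, instead of the other way around. A minimal sketch of the merge order (plain dicts, illustrative names):

```python
def build_extra_vars(jt_extra_vars, awx_meta_vars):
    """Later dict.update() calls win: meta vars override JT-defined keys."""
    extra_vars = {}
    extra_vars.update(jt_extra_vars)   # job template extra vars loaded first
    extra_vars.update(awx_meta_vars)   # AWX meta vars override on conflict
    return extra_vars
```

This prevents a job template from shadowing reserved `awx_*` variables by defining them in its own extra vars.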
@@ -854,24 +862,6 @@ class RunJob(BaseTask):
d[r'Vault password \({}\):\s*?$'.format(vault_id)] = k
return d
def build_execution_environment_params(self, instance, private_data_dir):
if settings.IS_K8S:
return {}
params = super(RunJob, self).build_execution_environment_params(instance, private_data_dir)
# If this has an insights agent and it is not already mounted then show it
insights_dir = os.path.dirname(settings.INSIGHTS_SYSTEM_ID_FILE)
if instance.use_fact_cache and os.path.exists(insights_dir):
logger.info('not parent of others')
params.setdefault('container_volume_mounts', [])
params['container_volume_mounts'].extend(
[
f"{insights_dir}:{insights_dir}:Z",
]
)
return params
def pre_run_hook(self, job, private_data_dir):
super(RunJob, self).pre_run_hook(job, private_data_dir)
if job.inventory is None:
@@ -1896,14 +1886,6 @@ class RunAdHocCommand(BaseTask):
if ad_hoc_command.verbosity:
args.append('-%s' % ('v' * min(5, ad_hoc_command.verbosity)))
extra_vars = ad_hoc_command.awx_meta_vars()
if ad_hoc_command.extra_vars_dict:
redacted_extra_vars, removed_vars = extract_ansible_vars(ad_hoc_command.extra_vars_dict)
if removed_vars:
raise ValueError(_("{} are prohibited from use in ad hoc commands.").format(", ".join(removed_vars)))
extra_vars.update(ad_hoc_command.extra_vars_dict)
if ad_hoc_command.limit:
args.append(ad_hoc_command.limit)
else:
@@ -1912,13 +1894,13 @@ class RunAdHocCommand(BaseTask):
return args
def build_extra_vars_file(self, ad_hoc_command, private_data_dir):
extra_vars = ad_hoc_command.awx_meta_vars()
extra_vars = dict()
if ad_hoc_command.extra_vars_dict:
redacted_extra_vars, removed_vars = extract_ansible_vars(ad_hoc_command.extra_vars_dict)
if removed_vars:
raise ValueError(_("{} are prohibited from use in ad hoc commands.").format(", ".join(removed_vars)))
extra_vars.update(ad_hoc_command.extra_vars_dict)
extra_vars.update(ad_hoc_command.awx_meta_vars())
self._write_extra_vars_file(private_data_dir, extra_vars)
def build_module_name(self, ad_hoc_command):


@@ -7,8 +7,6 @@ import logging
import os
import shutil
import socket
import sys
import threading
import time
import yaml
@@ -26,6 +24,8 @@ from awx.main.utils.common import (
parse_yaml_or_json,
cleanup_new_process,
)
from awx.main.constants import MAX_ISOLATED_PATH_COLON_DELIMITER
# Receptorctl
from receptorctl.socket_interface import ReceptorControl
@@ -247,16 +247,6 @@ def worker_cleanup(node_name, vargs, timeout=300.0):
return stdout
class TransmitterThread(threading.Thread):
def run(self):
self.exc = None
try:
super().run()
except Exception:
self.exc = sys.exc_info()
class AWXReceptorJob:
def __init__(self, task, runner_params=None):
self.task = task
@@ -296,41 +286,42 @@ class AWXReceptorJob:
# reading.
sockin, sockout = socket.socketpair()
transmitter_thread = TransmitterThread(target=self.transmit, args=[sockin])
transmitter_thread.start()
# submit our work, passing
# in the right side of our socketpair for reading.
_kw = {}
# Prepare the submit_work kwargs before creating threads, because references to settings are not thread-safe
work_submit_kw = dict(worktype=self.work_type, params=self.receptor_params, signwork=self.sign_work)
if self.work_type == 'ansible-runner':
_kw['node'] = self.task.instance.execution_node
use_stream_tls = get_conn_type(_kw['node'], receptor_ctl).name == "STREAMTLS"
_kw['tlsclient'] = get_tls_client(use_stream_tls)
result = receptor_ctl.submit_work(worktype=self.work_type, payload=sockout.makefile('rb'), params=self.receptor_params, signwork=self.sign_work, **_kw)
self.unit_id = result['unitid']
# Update the job with the work unit in-memory so that the log_lifecycle
# will print out the work unit that is to be associated with the job in the database
# via the update_model() call.
# We want to log the work_unit_id as early as possible. A failure can happen in between
# when we start the job in receptor and when we associate the job <-> work_unit_id.
# In that case, there will be work running in receptor and Controller will not know
# which Job it is associated with.
# We do not programmatically handle this case. Ideally, we would handle this with a reaper case.
# The two distinct job lifecycle log events below allow us to at least detect when this
# edge case occurs. If the lifecycle event work_unit_id_received occurs without the
# work_unit_id_assigned event then this case may have occurred.
self.task.instance.work_unit_id = result['unitid'] # Set work_unit_id in-memory only
self.task.instance.log_lifecycle("work_unit_id_received")
self.task.update_model(self.task.instance.pk, work_unit_id=result['unitid'])
self.task.instance.log_lifecycle("work_unit_id_assigned")
work_submit_kw['node'] = self.task.instance.execution_node
use_stream_tls = get_conn_type(work_submit_kw['node'], receptor_ctl).name == "STREAMTLS"
work_submit_kw['tlsclient'] = get_tls_client(use_stream_tls)
sockin.close()
sockout.close()
with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
transmitter_future = executor.submit(self.transmit, sockin)
if transmitter_thread.exc:
raise transmitter_thread.exc[1].with_traceback(transmitter_thread.exc[2])
# submit our work, passing in the right side of our socketpair for reading.
result = receptor_ctl.submit_work(payload=sockout.makefile('rb'), **work_submit_kw)
transmitter_thread.join()
sockin.close()
sockout.close()
self.unit_id = result['unitid']
# Update the job with the work unit in-memory so that the log_lifecycle
# will print out the work unit that is to be associated with the job in the database
# via the update_model() call.
# We want to log the work_unit_id as early as possible. A failure can happen in between
# when we start the job in receptor and when we associate the job <-> work_unit_id.
# In that case, there will be work running in receptor and Controller will not know
# which Job it is associated with.
# We do not programmatically handle this case. Ideally, we would handle this with a reaper case.
# The two distinct job lifecycle log events below allow us to at least detect when this
# edge case occurs. If the lifecycle event work_unit_id_received occurs without the
# work_unit_id_assigned event then this case may have occurred.
self.task.instance.work_unit_id = result['unitid'] # Set work_unit_id in-memory only
self.task.instance.log_lifecycle("work_unit_id_received")
self.task.update_model(self.task.instance.pk, work_unit_id=result['unitid'])
self.task.instance.log_lifecycle("work_unit_id_assigned")
# Throws an exception if the transmit failed.
# Will be caught by the try/except in BaseTask#run.
transmitter_future.result()
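The socketpair hand-off above can be sketched generically: the writer runs in a worker thread while the submitter reads the other end, and calling `.result()` on the future afterward surfaces any exception raised during transmit (illustrative sketch, not the AWX receptor code):

```python
import concurrent.futures
import socket

def submit_with_pipe(transmit, consume):
    """Stream data produced by `transmit` into `consume` via a socketpair.

    Mirrors the pattern in this hunk: start the writer in a thread, read the
    other side, then call future.result() so a writer-side exception
    propagates to the caller instead of being lost.
    """
    sockin, sockout = socket.socketpair()
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        future = executor.submit(transmit, sockin)
        result = consume(sockout.makefile('rb'))  # reads until the writer closes
        sockin.close()
        sockout.close()
        future.result()  # re-raise any exception from the transmit side
    return result
```

The `transmit` callable is expected to close its socket when done, which is what signals EOF to the reader.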
# Artifacts are an output, but sometimes they are an input as well
# this is the case with fact cache, where clearing facts deletes a file, and this must be captured
@@ -488,6 +479,48 @@ class AWXReceptorJob:
if self.task.instance.execution_environment.pull:
pod_spec['spec']['containers'][0]['imagePullPolicy'] = pull_options[self.task.instance.execution_environment.pull]
# This allows the user to also expose the isolated path list
# to EEs running in k8s/ocp environments, i.e. container groups.
# This assumes the node and SA supports hostPath volumes
# type is not passed due to backward compatibility,
# which means that no checks will be performed before mounting the hostPath volume.
if settings.AWX_MOUNT_ISOLATED_PATHS_ON_K8S and settings.AWX_ISOLATION_SHOW_PATHS:
spec_volume_mounts = []
spec_volumes = []
for idx, this_path in enumerate(settings.AWX_ISOLATION_SHOW_PATHS):
mount_option = None
if this_path.count(':') == MAX_ISOLATED_PATH_COLON_DELIMITER:
src, dest, mount_option = this_path.split(':')
elif this_path.count(':') == MAX_ISOLATED_PATH_COLON_DELIMITER - 1:
src, dest = this_path.split(':')
else:
src = dest = this_path
# Enforce read-only volume if 'ro' has been explicitly passed
# We do this so we can use the same configuration for regular scenarios and k8s
# Since flags like ':O', ':z' or ':Z' are not valid in the k8s realm
# Example: /data:/data:ro
read_only = bool('ro' == mount_option)
# Since type is not being passed, k8s by default will not perform any checks if the
# hostPath volume exists on the k8s node itself.
spec_volumes.append({'name': f'volume-{idx}', 'hostPath': {'path': src}})
spec_volume_mounts.append({'name': f'volume-{idx}', 'mountPath': f'{dest}', 'readOnly': read_only})
# merge any volumes definition already present in the pod_spec
if 'volumes' in pod_spec['spec']:
pod_spec['spec']['volumes'] += spec_volumes
else:
pod_spec['spec']['volumes'] = spec_volumes
# merge any volumeMounts definition already present in the pod_spec
if 'volumeMounts' in pod_spec['spec']['containers'][0]:
pod_spec['spec']['containers'][0]['volumeMounts'] += spec_volume_mounts
else:
pod_spec['spec']['containers'][0]['volumeMounts'] = spec_volume_mounts
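The hostPath generation above can be sketched standalone. Only `ro` is honored (mapped to `readOnly`); podman flags like `z`, `Z`, or `O` have no k8s equivalent and are effectively ignored, and `type` is deliberately omitted so k8s performs no pre-mount checks (illustrative sketch, not the AWX code):

```python
def build_hostpath_mounts(show_paths):
    """Turn "src[:dest[:option]]" entries into k8s volume / volumeMount dicts."""
    volumes, mounts = [], []
    for idx, entry in enumerate(show_paths):
        option = None
        parts = entry.split(':')
        if len(parts) == 3:
            src, dest, option = parts
        elif len(parts) == 2:
            src, dest = parts
        else:
            src = dest = entry
        # 'type' omitted for backward compatibility: no hostPath existence check
        volumes.append({'name': f'volume-{idx}', 'hostPath': {'path': src}})
        mounts.append({'name': f'volume-{idx}', 'mountPath': dest,
                       'readOnly': option == 'ro'})
    return volumes, mounts
```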
if self.task and self.task.instance.is_container_group_task:
# If EE credential is passed, create an imagePullSecret
if self.task.instance.execution_environment and self.task.instance.execution_environment.credential:


@@ -3,6 +3,8 @@ import json
from cryptography.fernet import InvalidToken
from django.test.utils import override_settings
from django.conf import settings
from django.core.management import call_command
import os
import pytest
from awx.main import models
@@ -158,3 +160,25 @@ class TestKeyRegeneration:
# verify that the new SECRET_KEY *does* work
with override_settings(SECRET_KEY=new_key):
assert models.OAuth2Application.objects.get(pk=oauth_application.pk).client_secret == secret
def test_use_custom_key_with_tower_secret_key_env_var(self):
custom_key = 'MXSq9uqcwezBOChl/UfmbW1k4op+bC+FQtwPqgJ1u9XV'
os.environ['TOWER_SECRET_KEY'] = custom_key
new_key = call_command('regenerate_secret_key', '--use-custom-key')
assert custom_key == new_key
def test_use_custom_key_with_empty_tower_secret_key_env_var(self):
os.environ['TOWER_SECRET_KEY'] = ''
new_key = call_command('regenerate_secret_key', '--use-custom-key')
assert settings.SECRET_KEY != new_key
def test_use_custom_key_with_no_tower_secret_key_env_var(self):
os.environ.pop('TOWER_SECRET_KEY', None)
new_key = call_command('regenerate_secret_key', '--use-custom-key')
assert settings.SECRET_KEY != new_key
def test_with_tower_secret_key_env_var(self):
custom_key = 'MXSq9uqcwezBOChl/UfmbW1k4op+bC+FQtwPqgJ1u9XV'
os.environ['TOWER_SECRET_KEY'] = custom_key
new_key = call_command('regenerate_secret_key')
assert custom_key != new_key


@@ -59,6 +59,38 @@ class SurveyVariableValidation:
assert accepted == {}
assert str(errors[0]) == "Value 5 for 'a' expected to be a string."
def test_job_template_survey_default_variable_validation(self, job_template_factory):
objects = job_template_factory(
"survey_variable_validation",
organization="org1",
inventory="inventory1",
credential="cred1",
persisted=False,
)
obj = objects.job_template
obj.survey_spec = {
"description": "",
"spec": [
{
"required": True,
"min": 0,
"default": "2",
"max": 1024,
"question_description": "",
"choices": "",
"variable": "a",
"question_name": "float_number",
"type": "float",
}
],
"name": "",
}
obj.survey_enabled = True
accepted, _, errors = obj.accept_or_ignore_variables({"a": 2})
assert accepted == {"a": 2.0}
assert not errors
@pytest.fixture
def job(mocker):


@@ -10,6 +10,7 @@ from datetime import datetime
# Django
from django.conf import settings
from django.utils.timezone import now
from django.utils.encoding import force_str
# AWX
from awx.main.exceptions import PostRunError
@@ -42,7 +43,7 @@ class RSysLogHandler(logging.handlers.SysLogHandler):
msg += exc.splitlines()[-1]
except Exception:
msg += exc
msg = '\n'.join([msg, record.msg, ''])
msg = '\n'.join([msg, force_str(record.msg), '']) # force_str used in case of translated strings
sys.stderr.write(msg)
def emit(self, msg):


@@ -252,6 +252,10 @@ SESSION_COOKIE_SECURE = True
# Note: This setting may be overridden by database settings.
SESSION_COOKIE_AGE = 1800
# Name of the cookie that contains the session information.
# Note: Changing this value may require changes to any clients.
SESSION_COOKIE_NAME = 'awx_sessionid'
# Maximum number of per-user valid, concurrent sessions.
# -1 is unlimited
# Note: This setting may be overridden by database settings.
@@ -996,4 +1000,7 @@ DEFAULT_CONTROL_PLANE_QUEUE_NAME = 'controlplane'
# Extend container runtime attributes.
# For example, to disable SELinux in containers for podman
# DEFAULT_CONTAINER_RUN_OPTIONS = ['--security-opt', 'label=disable']
DEFAULT_CONTAINER_RUN_OPTIONS = []
DEFAULT_CONTAINER_RUN_OPTIONS = ['--network', 'slirp4netns:enable_ipv6=true']
# Mount exposed paths as hostPath resource in k8s/ocp
AWX_MOUNT_ISOLATED_PATHS_ON_K8S = False


@@ -91,3 +91,8 @@ except IOError:
DATABASES.setdefault('default', dict()).setdefault('OPTIONS', dict()).setdefault(
'application_name', f'{CLUSTER_HOST_ID}-{os.getpid()}-{" ".join(sys.argv)}'[:63]
) # noqa
AWX_ISOLATION_SHOW_PATHS = [
'/etc/pki/ca-trust:/etc/pki/ca-trust:O',
'/usr/share/pki:/usr/share/pki:O',
]


@@ -263,9 +263,14 @@ def _check_flag(user, flag, attributes, user_flags_settings):
if user_flags_settings.get(is_value_key, None):
# If so, check and see if the value of the attr matches the required value
attribute_value = attributes.get(attr_setting, None)
attribute_matches = False
if isinstance(attribute_value, (list, tuple)):
attribute_value = attribute_value[0]
if attribute_value == user_flags_settings.get(is_value_key):
if user_flags_settings.get(is_value_key) in attribute_value:
attribute_matches = True
elif attribute_value == user_flags_settings.get(is_value_key):
attribute_matches = True
if attribute_matches:
logger.debug("Giving %s %s from attribute %s with matching value" % (user.username, flag, attr_setting))
new_flag = True
# if they don't match make sure that new_flag is false
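The list-versus-scalar matching added in this hunk boils down to one small predicate (illustrative sketch of the comparison, not the full `_check_flag` logic):

```python
def attribute_matches(attribute_value, required_value):
    """SAML attributes may arrive as a list/tuple or a bare scalar.

    For a list, a match anywhere in the list counts; for a scalar,
    require exact equality.
    """
    if isinstance(attribute_value, (list, tuple)):
        return required_value in attribute_value
    return attribute_value == required_value
```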


@@ -447,6 +447,16 @@ class TestSAMLUserFlags:
{'is_superuser_role': 'test-role-1', 'is_superuser_attr': 'is_superuser', 'is_superuser_value': 'true'},
(True, True),
),
# In this test case we will validate that a single attribute (instead of a list) still works
(
{'is_superuser_attr': 'name_id', 'is_superuser_value': 'test_id'},
(True, True),
),
# This will be a negative test for a single attribute
(
{'is_superuser_attr': 'name_id', 'is_superuser_value': 'junk'},
(False, False),
),
],
)
def test__check_flag(self, user_flags_settings, expected):
@@ -457,10 +467,10 @@ class TestSAMLUserFlags:
attributes = {
'email': ['noone@nowhere.com'],
'last_name': ['Westcott'],
'is_superuser': ['true'],
'is_superuser': ['something', 'else', 'true'],
'username': ['test_id'],
'first_name': ['John'],
'Role': ['test-role-1'],
'Role': ['test-role-1', 'something', 'different'],
'name_id': 'test_id',
}


@@ -46,6 +46,7 @@ class CompleteView(BaseRedirectView):
current_user = smart_text(JSONRenderer().render(current_user.data))
current_user = urllib.parse.quote('%s' % current_user, '')
response.set_cookie('current_user', current_user, secure=settings.SESSION_COOKIE_SECURE or None)
response.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return response


@@ -66,7 +66,7 @@
"react-scripts": "5.0.0"
},
"engines": {
"node": "14.x"
"node": ">=16.14.0"
}
},
"node_modules/@babel/code-frame": {
@@ -20507,9 +20507,9 @@
}
},
"node_modules/url-parse": {
"version": "1.5.3",
"resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.3.tgz",
"integrity": "sha512-IIORyIQD9rvj0A4CLWsHkBBJuNqWpFQe224b6j9t/ABmquIS0qDU2pY6kl6AuOrL5OkCXHMCFNe1jBcuAggjvQ==",
"version": "1.5.9",
"resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.9.tgz",
"integrity": "sha512-HpOvhKBvre8wYez+QhHcYiVvVmeF6DVnuSOOPhe3cTum3BnqHhvKaZm8FU5yTiOu/Jut2ZpB2rA/SbBA1JIGlQ==",
"dev": true,
"dependencies": {
"querystringify": "^2.1.1",
@@ -37195,9 +37195,9 @@
}
},
"url-parse": {
"version": "1.5.3",
"resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.3.tgz",
"integrity": "sha512-IIORyIQD9rvj0A4CLWsHkBBJuNqWpFQe224b6j9t/ABmquIS0qDU2pY6kl6AuOrL5OkCXHMCFNe1jBcuAggjvQ==",
"version": "1.5.9",
"resolved": "https://registry.npmjs.org/url-parse/-/url-parse-1.5.9.tgz",
"integrity": "sha512-HpOvhKBvre8wYez+QhHcYiVvVmeF6DVnuSOOPhe3cTum3BnqHhvKaZm8FU5yTiOu/Jut2ZpB2rA/SbBA1JIGlQ==",
"dev": true,
"requires": {
"querystringify": "^2.1.1",


@@ -75,6 +75,8 @@
"start-instrumented": "ESLINT_NO_DEV_ERRORS=true DEBUG=instrument-cra PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts -r @cypress/instrument-cra start",
"build": "INLINE_RUNTIME_CHUNK=false react-scripts build",
"test": "TZ='UTC' react-scripts test --watchAll=false",
"test-screens": "TZ='UTC' react-scripts test screens --watchAll=false",
"test-general": "TZ='UTC' react-scripts test --testPathIgnorePatterns='<rootDir>/src/screens/' --watchAll=false",
"test-watch": "TZ='UTC' react-scripts test",
"eject": "react-scripts eject",
"lint": "eslint --ext .js --ext .jsx .",


@@ -59,6 +59,7 @@ function AdHocCommands({
useEffect(() => {
fetchData();
}, [fetchData]);
const {
isLoading: isLaunchLoading,
error: launchError,
@@ -172,6 +173,8 @@ function AdHocCommands({
AdHocCommands.propTypes = {
adHocItems: PropTypes.arrayOf(PropTypes.object).isRequired,
hasListItems: PropTypes.bool.isRequired,
onLaunchLoading: PropTypes.func.isRequired,
moduleOptions: PropTypes.arrayOf(PropTypes.array).isRequired,
};
export default AdHocCommands;


@@ -73,6 +73,10 @@ describe('<AdHocCommands />', () => {
adHocItems={adHocItems}
hasListItems
onLaunchLoading={() => jest.fn()}
moduleOptions={[
['command', 'command'],
['shell', 'shell'],
]}
/>
);
});


@@ -1,45 +1,16 @@
import React from 'react';
import { Link } from 'react-router-dom';
import { t } from '@lingui/macro';
import getScheduleUrl from 'util/getScheduleUrl';
import Detail from './Detail';
function getScheduleURL(template, scheduleId, inventoryId = null) {
let scheduleUrl;
switch (template.unified_job_type) {
case 'inventory_update':
scheduleUrl =
inventoryId &&
`/inventories/inventory/${inventoryId}/sources/${template.id}/schedules/${scheduleId}/details`;
break;
case 'job':
scheduleUrl = `/templates/job_template/${template.id}/schedules/${scheduleId}/details`;
break;
case 'project_update':
scheduleUrl = `/projects/${template.id}/schedules/${scheduleId}/details`;
break;
case 'system_job':
scheduleUrl = `/management_jobs/${template.id}/schedules/${scheduleId}/details`;
break;
case 'workflow_job':
scheduleUrl = `/templates/workflow_job_template/${template.id}/schedules/${scheduleId}/details`;
break;
default:
break;
}
return scheduleUrl;
}
const getLaunchedByDetails = ({ summary_fields = {}, launch_type }) => {
const getLaunchedByDetails = (job) => {
const {
created_by: createdBy,
job_template: jobTemplate,
unified_job_template: unifiedJT,
workflow_job_template: workflowJT,
inventory,
schedule,
} = summary_fields;
} = job.summary_fields;
if (!createdBy && !schedule) {
return {};
@@ -48,7 +19,7 @@ const getLaunchedByDetails = ({ summary_fields = {}, launch_type }) => {
let link;
let value;
switch (launch_type) {
switch (job.launch_type) {
case 'webhook':
value = t`Webhook`;
link =
@@ -58,7 +29,7 @@ const getLaunchedByDetails = ({ summary_fields = {}, launch_type }) => {
break;
case 'scheduled':
value = schedule.name;
link = getScheduleURL(unifiedJT, schedule.id, inventory?.id);
link = getScheduleUrl(job);
break;
case 'manual':
link = `/users/${createdBy.id}/details`;


@@ -8,10 +8,16 @@ import { RocketIcon } from '@patternfly/react-icons';
import styled from 'styled-components';
import { formatDateString } from 'util/dates';
import { isJobRunning } from 'util/jobs';
import getScheduleUrl from 'util/getScheduleUrl';
import { ActionsTd, ActionItem, TdBreakWord } from '../PaginatedTable';
import { LaunchButton, ReLaunchDropDown } from '../LaunchButton';
import StatusLabel from '../StatusLabel';
import { DetailList, Detail, LaunchedByDetail } from '../DetailList';
import {
DetailList,
Detail,
DeletedDetail,
LaunchedByDetail,
} from '../DetailList';
import ChipGroup from '../ChipGroup';
import CredentialChip from '../CredentialChip';
import ExecutionEnvironmentDetail from '../ExecutionEnvironmentDetail';
@@ -48,6 +54,7 @@ function JobListItem({
job_template,
labels,
project,
schedule,
source_workflow_job,
workflow_job_template,
} = job.summary_fields;
@@ -167,6 +174,18 @@ function JobListItem({
/>
)}
<LaunchedByDetail job={job} />
{job.launch_type === 'scheduled' &&
(schedule ? (
<Detail
dataCy="job-schedule"
label={t`Schedule`}
value={
<Link to={getScheduleUrl(job)}>{schedule.name}</Link>
}
/>
) : (
<DeletedDetail label={t`Schedule`} />
))}
{job_template && (
<Detail
label={t`Job Template`}


@@ -14,11 +14,20 @@ const mockJob = {
delete: true,
start: true,
},
schedule: {
name: 'mock schedule',
id: 999,
},
unified_job_template: {
unified_job_type: 'job',
id: 1,
},
},
created: '2019-08-08T19:24:05.344276Z',
modified: '2019-08-08T19:24:18.162949Z',
name: 'Demo Job Template',
job_type: 'run',
launch_type: 'scheduled',
started: '2019-08-08T19:24:18.329589Z',
finished: '2019-08-08T19:24:50.119995Z',
status: 'successful',
@@ -51,7 +60,11 @@ describe('<JobListItem />', () => {
test('initially renders successfully', () => {
expect(wrapper.find('JobListItem').length).toBe(1);
});
test('should display expected details', () => {
assertDetail('Job Slice', '1/3');
assertDetail('Schedule', 'mock schedule');
});
test('launch button shown to users with launch capabilities', () => {
@@ -129,6 +142,25 @@ describe('<JobListItem />', () => {
expect(wrapper.find('Td[dataLabel="Type"]').length).toBe(1);
});
test('should show deleted schedule detail when schedule is missing', () => {
wrapper = mountWithContexts(
<table>
<tbody>
<JobListItem
job={{
...mockJob,
summary_fields: {},
}}
showTypeColumn
isSelected
onSelect={() => {}}
/>
</tbody>
</table>
);
expect(wrapper.find('Detail[label="Schedule"] dt').length).toBe(1);
});
test('should not display EE for canceled jobs', () => {
wrapper = mountWithContexts(
<table>


@@ -56,31 +56,29 @@ function ResourceAccessList({ apiModel, resource }) {
let orgRoles;
if (location.pathname.includes('/organizations')) {
const {
data: { results: roles },
} = await RolesAPI.read({ content_type__isnull: true });
const sysAdmin = roles.filter(
(role) => role.name === 'System Administrator'
);
const sysAud = roles.filter((role) => {
let auditor;
if (role.name === 'System Auditor') {
auditor = role.id;
}
return auditor;
});
const [
{
data: { results: systemAdmin },
},
{
data: { results: systemAuditor },
},
] = await Promise.all([
RolesAPI.read({ singleton_name: 'system_administrator' }),
RolesAPI.read({ singleton_name: 'system_auditor' }),
]);
orgRoles = Object.values(resource.summary_fields.object_roles).map(
(opt) => {
let item;
if (opt.name === 'Admin') {
item = [`${opt.id}, ${sysAdmin[0].id}`, opt.name];
} else if (sysAud[0].id && opt.name === 'Auditor') {
item = [`${sysAud[0].id}, ${opt.id}`, opt.name];
} else {
item = [`${opt.id}`, opt.name];
orgRoles = Object.entries(resource.summary_fields.object_roles).map(
([key, value]) => {
if (key === 'admin_role') {
return [`${value.id}, ${systemAdmin[0].id}`, value.name];
}
return item;
if (key === 'auditor_role') {
return [`${value.id}, ${systemAuditor[0].id}`, value.name];
}
return [`${value.id}`, value.name];
}
);
}


@@ -12,6 +12,7 @@ import useSelected from 'hooks/useSelected';
import useExpanded from 'hooks/useExpanded';
import { getQSConfig, parseQueryString } from 'util/qs';
import useWsTemplates from 'hooks/useWsTemplates';
import useToast, { AlertVariant } from 'hooks/useToast';
import { relatedResourceDeleteRequests } from 'util/getRelatedResourceDeleteDetails';
import AlertModal from '../AlertModal';
import DatalistToolbar from '../DataListToolbar';
@@ -41,6 +42,8 @@ function TemplateList({ defaultParams }) {
);
const location = useLocation();
const { addToast, Toast, toastProps } = useToast();
const {
result: {
results,
@@ -123,6 +126,18 @@ function TemplateList({ defaultParams }) {
}
);
const handleCopy = useCallback(
(newTemplateId) => {
addToast({
id: newTemplateId,
title: t`Template copied successfully`,
variant: AlertVariant.success,
hasTimeout: true,
});
},
[addToast]
);
const handleTemplateDelete = async () => {
await deleteTemplates();
clearSelected();
@@ -266,6 +281,7 @@ function TemplateList({ defaultParams }) {
onSelect={() => handleSelect(template)}
isExpanded={expanded.some((row) => row.id === template.id)}
onExpand={() => handleExpand(template)}
onCopy={handleCopy}
isSelected={selected.some((row) => row.id === template.id)}
fetchTemplates={fetchTemplates}
rowIndex={index}
@@ -274,6 +290,7 @@ function TemplateList({ defaultParams }) {
emptyStateControls={(canAddJT || canAddWFJT) && addButton}
/>
</Card>
<Toast {...toastProps} />
<AlertModal
aria-label={t`Deletion Error`}
isOpen={deletionError}


@@ -39,6 +39,7 @@ function TemplateListItem({
template,
isSelected,
onSelect,
onCopy,
detailUrl,
fetchTemplates,
rowIndex,
@@ -52,17 +53,21 @@ function TemplateListItem({
)}/html/upgrade-migration-guide/upgrade_to_ees.html`;
const copyTemplate = useCallback(async () => {
let response;
if (template.type === 'job_template') {
await JobTemplatesAPI.copy(template.id, {
response = await JobTemplatesAPI.copy(template.id, {
name: `${template.name} @ ${timeOfDay()}`,
});
} else {
await WorkflowJobTemplatesAPI.copy(template.id, {
response = await WorkflowJobTemplatesAPI.copy(template.id, {
name: `${template.name} @ ${timeOfDay()}`,
});
}
if (response.status === 201) {
onCopy(response.data.id);
}
await fetchTemplates();
}, [fetchTemplates, template.id, template.name, template.type]);
}, [fetchTemplates, template.id, template.name, template.type, onCopy]);
const handleCopyStart = useCallback(() => {
setIsDisabled(true);


@@ -1,6 +1,6 @@
import React, { useState, useCallback } from 'react';
import { t } from '@lingui/macro';
import { useParams } from 'react-router-dom';
import { useParams, useRouteMatch } from 'react-router-dom';
import styled from 'styled-components';
import useRequest from 'hooks/useRequest';
import useSelected from 'hooks/useSelected';
@@ -27,6 +27,11 @@ function UserAndTeamAccessAdd({
const [selectedResourceType, setSelectedResourceType] = useState(null);
const [stepIdReached, setStepIdReached] = useState(1);
const { id: userId } = useParams();
const teamsRouteMatch = useRouteMatch({
path: '/teams/:id/roles',
exact: true,
});
const { selected: resourcesSelected, handleSelect: handleResourceSelect } =
useSelected([]);
@@ -54,6 +59,19 @@ function UserAndTeamAccessAdd({
{}
);
// Object roles can be user only, so we remove them when
// showing role choices for team access
const selectableRoles = {
...resourcesSelected[0]?.summary_fields?.object_roles,
};
if (teamsRouteMatch && resourcesSelected[0]?.type === 'organization') {
Object.keys(selectableRoles).forEach((key) => {
if (selectableRoles[key].user_only) {
delete selectableRoles[key];
}
});
}
const steps = [
{
id: 1,
@@ -101,7 +119,7 @@ function UserAndTeamAccessAdd({
component: resourcesSelected?.length > 0 && (
<SelectRoleStep
onRolesClick={handleRoleSelect}
roles={resourcesSelected[0].summary_fields.object_roles}
roles={selectableRoles}
selectedListKey={
selectedResourceType === 'users' ? 'username' : 'name'
}


@@ -0,0 +1,64 @@
import React, { useState, useCallback } from 'react';
import {
AlertGroup,
Alert,
AlertActionCloseButton,
AlertVariant,
} from '@patternfly/react-core';
import { arrayOf, func } from 'prop-types';
import { Toast as ToastType } from 'types';
export default function useToast() {
const [toasts, setToasts] = useState([]);
const addToast = useCallback((newToast) => {
setToasts((oldToasts) => [...oldToasts, newToast]);
}, []);
const removeToast = useCallback((toastId) => {
setToasts((oldToasts) => oldToasts.filter((t) => t.id !== toastId));
}, []);
return {
addToast,
removeToast,
Toast,
toastProps: {
toasts,
removeToast,
},
};
}
export function Toast({ toasts, removeToast }) {
if (!toasts.length) {
return null;
}
return (
<AlertGroup data-cy="toast-container" isToast>
{toasts.map((toast) => (
<Alert
actionClose={
<AlertActionCloseButton onClose={() => removeToast(toast.id)} />
}
onTimeout={() => removeToast(toast.id)}
timeout={toast.hasTimeout}
title={toast.title}
variant={toast.variant}
key={`toast-message-${toast.id}`}
ouiaId={`toast-message-${toast.id}`}
>
{toast.message}
</Alert>
))}
</AlertGroup>
);
}
Toast.propTypes = {
toasts: arrayOf(ToastType).isRequired,
removeToast: func.isRequired,
};
export { AlertVariant };
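The state updates `useToast` performs via `setToasts` reduce to two pure list operations, sketched here outside React:

```javascript
// addToast appends a toast; removeToast drops one by id. These mirror the
// updater functions passed to setToasts in the hook above.
const addToast = (toasts, newToast) => [...toasts, newToast];
const removeToast = (toasts, toastId) => toasts.filter((t) => t.id !== toastId);

let toasts = [];
toasts = addToast(toasts, { id: 1, title: 'Inventory saved', variant: 'success' });
toasts = addToast(toasts, { id: 2, title: 'Error saving', variant: 'danger' });
toasts = removeToast(toasts, 1);
```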

View File

@@ -0,0 +1,124 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { shallow, mount } from 'enzyme';
import useToast, { Toast, AlertVariant } from './useToast';
describe('useToast', () => {
const Child = () => <div />;
const Test = () => {
const toastVals = useToast();
return <Child {...toastVals} />;
};
test('should provide Toast component', () => {
const wrapper = mount(<Test />);
expect(wrapper.find('Child').prop('Toast')).toEqual(Toast);
});
test('should add toast', () => {
const wrapper = mount(<Test />);
expect(wrapper.find('Child').prop('toastProps').toasts).toEqual([]);
act(() => {
wrapper.find('Child').prop('addToast')({
message: 'one',
id: 1,
variant: 'success',
});
});
wrapper.update();
expect(wrapper.find('Child').prop('toastProps').toasts).toEqual([
{
message: 'one',
id: 1,
variant: 'success',
},
]);
});
test('should remove toast', () => {
const wrapper = mount(<Test />);
act(() => {
wrapper.find('Child').prop('addToast')({
message: 'one',
id: 1,
variant: 'success',
});
});
wrapper.update();
expect(wrapper.find('Child').prop('toastProps').toasts).toHaveLength(1);
act(() => {
wrapper.find('Child').prop('removeToast')(1);
});
wrapper.update();
expect(wrapper.find('Child').prop('toastProps').toasts).toHaveLength(0);
});
});
describe('Toast', () => {
test('should render nothing with no toasts', () => {
const wrapper = shallow(<Toast toasts={[]} removeToast={() => {}} />);
expect(wrapper).toEqual({});
});
test('should render toast alert', () => {
const toast = {
title: 'Inventory saved',
variant: AlertVariant.success,
id: 1,
message: 'the message',
};
const wrapper = shallow(<Toast toasts={[toast]} removeToast={() => {}} />);
const alert = wrapper.find('Alert');
expect(alert.prop('title')).toEqual('Inventory saved');
expect(alert.prop('variant')).toEqual('success');
expect(alert.prop('ouiaId')).toEqual('toast-message-1');
expect(alert.prop('children')).toEqual('the message');
});
test('should call removeToast', () => {
const removeToast = jest.fn();
const toast = {
title: 'Inventory saved',
variant: AlertVariant.success,
id: 1,
};
const wrapper = shallow(
<Toast toasts={[toast]} removeToast={removeToast} />
);
const alert = wrapper.find('Alert');
alert.prop('actionClose').props.onClose(1);
expect(removeToast).toHaveBeenCalledTimes(1);
});
test('should render multiple alerts', () => {
const toasts = [
{
title: 'Inventory saved',
variant: AlertVariant.success,
id: 1,
message: 'the message',
},
{
title: 'error saving',
variant: AlertVariant.danger,
id: 2,
},
];
const wrapper = shallow(<Toast toasts={toasts} removeToast={() => {}} />);
const alert = wrapper.find('Alert');
expect(alert).toHaveLength(2);
expect(alert.at(0).prop('title')).toEqual('Inventory saved');
expect(alert.at(0).prop('variant')).toEqual('success');
expect(alert.at(1).prop('title')).toEqual('error saving');
expect(alert.at(1).prop('variant')).toEqual('danger');
});
});

View File

@@ -12,6 +12,20 @@ import {
import useRequest from 'hooks/useRequest';
import CredentialForm from '../shared/CredentialForm';
const fetchCredentialTypes = async (pageNo = 1, credentialTypes = []) => {
const { data } = await CredentialTypesAPI.read({
page_size: 200,
page: pageNo,
});
if (data.next) {
return fetchCredentialTypes(
pageNo + 1,
credentialTypes.concat(data.results)
);
}
return credentialTypes.concat(data.results);
};
function CredentialAdd({ me }) {
const history = useHistory();
@@ -76,6 +90,7 @@ function CredentialAdd({ me }) {
history.push(`/credentials/${credentialId}/details`);
}
}, [credentialId, history]);
const {
isLoading,
error,
@@ -83,18 +98,7 @@ function CredentialAdd({ me }) {
result,
} = useRequest(
useCallback(async () => {
const { data } = await CredentialTypesAPI.read({ page_size: 200 });
const credTypes = data.results;
if (data.next && data.next.includes('page=2')) {
const {
data: { results },
} = await CredentialTypesAPI.read({
page_size: 200,
page: 2,
});
credTypes.concat(results);
}
const credTypes = await fetchCredentialTypes();
const creds = credTypes.reduce((credentialTypesMap, credentialType) => {
credentialTypesMap[credentialType.id] = credentialType;
return credentialTypesMap;
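The recursive page-walk in `fetchCredentialTypes` above can be checked against a stub. The `read` stub and its five-item dataset are illustrative stand-ins for `CredentialTypesAPI.read`:

```javascript
// Fake paginated API: returns a slice per page plus a `next` marker while
// more items remain, matching the shape the recursion above relies on.
const all = [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 5 }];
const read = async ({ page, page_size }) => {
  const start = (page - 1) * page_size;
  return {
    data: {
      results: all.slice(start, start + page_size),
      next: start + page_size < all.length ? page + 1 : null,
    },
  };
};

// Same recursion shape as fetchCredentialTypes: accumulate each page's
// results, recurse while data.next is set.
const fetchAllPages = async (pageNo = 1, acc = []) => {
  const { data } = await read({ page: pageNo, page_size: 2 });
  if (data.next) {
    return fetchAllPages(pageNo + 1, acc.concat(data.results));
  }
  return acc.concat(data.results);
};
```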

View File

@@ -4,6 +4,7 @@ import { t, Plural } from '@lingui/macro';
import { Card, PageSection } from '@patternfly/react-core';
import { CredentialsAPI } from 'api';
import useSelected from 'hooks/useSelected';
import useToast, { AlertVariant } from 'hooks/useToast';
import AlertModal from 'components/AlertModal';
import ErrorDetail from 'components/ErrorDetail';
import DataListToolbar from 'components/DataListToolbar';
@@ -27,6 +28,8 @@ const QS_CONFIG = getQSConfig('credential', {
function CredentialList() {
const location = useLocation();
const { addToast, Toast, toastProps } = useToast();
const {
result: {
credentials,
@@ -104,100 +107,116 @@ function CredentialList() {
setSelected([]);
};
const handleCopy = useCallback(
(newCredentialId) => {
addToast({
id: newCredentialId,
title: t`Credential copied successfully`,
variant: AlertVariant.success,
hasTimeout: true,
});
},
[addToast]
);
const canAdd =
actions && Object.prototype.hasOwnProperty.call(actions, 'POST');
const deleteDetailsRequests = relatedResourceDeleteRequests.credential(
selected[0]
);
return (
<PageSection>
<Card>
<PaginatedTable
contentError={contentError}
hasContentLoading={isLoading || isDeleteLoading}
items={credentials}
itemCount={credentialCount}
qsConfig={QS_CONFIG}
clearSelected={clearSelected}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
toolbarSearchColumns={[
{
name: t`Name`,
key: 'name__icontains',
isDefault: true,
},
{
name: t`Description`,
key: 'description__icontains',
},
{
name: t`Created By (Username)`,
key: 'created_by__username__icontains',
},
{
name: t`Modified By (Username)`,
key: 'modified_by__username__icontains',
},
]}
headerRow={
<HeaderRow qsConfig={QS_CONFIG}>
<HeaderCell sortKey="name">{t`Name`}</HeaderCell>
<HeaderCell>{t`Type`}</HeaderCell>
<HeaderCell>{t`Actions`}</HeaderCell>
</HeaderRow>
}
renderRow={(item, index) => (
<CredentialListItem
key={item.id}
credential={item}
fetchCredentials={fetchCredentials}
detailUrl={`/credentials/${item.id}/details`}
isSelected={selected.some((row) => row.id === item.id)}
onSelect={() => handleSelect(item)}
rowIndex={index}
/>
)}
renderToolbar={(props) => (
<DataListToolbar
{...props}
isAllSelected={isAllSelected}
onSelectAll={selectAll}
qsConfig={QS_CONFIG}
additionalControls={[
...(canAdd
? [<ToolbarAddButton key="add" linkTo="/credentials/add" />]
: []),
<ToolbarDeleteButton
key="delete"
onDelete={handleDelete}
itemsToDelete={selected}
pluralizedItemName={t`Credentials`}
deleteDetailsRequests={deleteDetailsRequests}
deleteMessage={
<Plural
value={selected.length}
one="This credential is currently being used by other resources. Are you sure you want to delete it?"
other="Deleting these credentials could impact other resources that rely on them. Are you sure you want to delete anyway?"
/>
}
/>,
]}
/>
)}
/>
</Card>
<AlertModal
aria-label={t`Deletion Error`}
isOpen={deletionError}
variant="error"
title={t`Error!`}
onClose={clearDeletionError}
>
{t`Failed to delete one or more credentials.`}
<ErrorDetail error={deletionError} />
</AlertModal>
</PageSection>
<>
<PageSection>
<Card>
<PaginatedTable
contentError={contentError}
hasContentLoading={isLoading || isDeleteLoading}
items={credentials}
itemCount={credentialCount}
qsConfig={QS_CONFIG}
clearSelected={clearSelected}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
toolbarSearchColumns={[
{
name: t`Name`,
key: 'name__icontains',
isDefault: true,
},
{
name: t`Description`,
key: 'description__icontains',
},
{
name: t`Created By (Username)`,
key: 'created_by__username__icontains',
},
{
name: t`Modified By (Username)`,
key: 'modified_by__username__icontains',
},
]}
headerRow={
<HeaderRow qsConfig={QS_CONFIG}>
<HeaderCell sortKey="name">{t`Name`}</HeaderCell>
<HeaderCell>{t`Type`}</HeaderCell>
<HeaderCell>{t`Actions`}</HeaderCell>
</HeaderRow>
}
renderRow={(item, index) => (
<CredentialListItem
key={item.id}
credential={item}
fetchCredentials={fetchCredentials}
detailUrl={`/credentials/${item.id}/details`}
isSelected={selected.some((row) => row.id === item.id)}
onSelect={() => handleSelect(item)}
onCopy={handleCopy}
rowIndex={index}
/>
)}
renderToolbar={(props) => (
<DataListToolbar
{...props}
isAllSelected={isAllSelected}
onSelectAll={selectAll}
qsConfig={QS_CONFIG}
additionalControls={[
...(canAdd
? [<ToolbarAddButton key="add" linkTo="/credentials/add" />]
: []),
<ToolbarDeleteButton
key="delete"
onDelete={handleDelete}
itemsToDelete={selected}
pluralizedItemName={t`Credentials`}
deleteDetailsRequests={deleteDetailsRequests}
deleteMessage={
<Plural
value={selected.length}
one="This credential is currently being used by other resources. Are you sure you want to delete it?"
other="Deleting these credentials could impact other resources that rely on them. Are you sure you want to delete anyway?"
/>
}
/>,
]}
/>
)}
/>
</Card>
<AlertModal
aria-label={t`Deletion Error`}
isOpen={deletionError}
variant="error"
title={t`Error!`}
onClose={clearDeletionError}
>
{t`Failed to delete one or more credentials.`}
<ErrorDetail error={deletionError} />
</AlertModal>
</PageSection>
<Toast {...toastProps} />
</>
);
}

View File

@@ -18,7 +18,7 @@ function CredentialListItem({
detailUrl,
isSelected,
onSelect,
onCopy,
fetchCredentials,
rowIndex,
}) {
@@ -28,11 +28,14 @@ function CredentialListItem({
const canEdit = credential.summary_fields.user_capabilities.edit;
const copyCredential = useCallback(async () => {
await CredentialsAPI.copy(credential.id, {
const response = await CredentialsAPI.copy(credential.id, {
name: `${credential.name} @ ${timeOfDay()}`,
});
if (response.status === 201) {
onCopy(response.data.id);
}
await fetchCredentials();
}, [credential.id, credential.name, fetchCredentials]);
}, [credential.id, credential.name, fetchCredentials, onCopy]);
const handleCopyStart = useCallback(() => {
setIsDisabled(true);
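The copy handlers added across these list items share one shape: call the copy endpoint, fire `onCopy` only on a 201 Created, then refetch. A standalone sketch — `copyAPI` is a stub standing in for `CredentialsAPI.copy` and its siblings:

```javascript
// Stub copy endpoint that resolves with { status, data } like the real API
// clients; the id arithmetic is illustrative only.
const copyAPI = async (id, payload) => ({ status: 201, data: { id: id + 100, ...payload } });

const copied = [];
const onCopy = (newId) => copied.push(newId);

// Notify only when the copy actually succeeded (201 Created).
const copyItem = async (id, name) => {
  const response = await copyAPI(id, { name: `${name} @ copy` });
  if (response.status === 201) {
    onCopy(response.data.id);
  }
};
```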

View File

@@ -7,6 +7,7 @@ import { ExecutionEnvironmentsAPI } from 'api';
import { getQSConfig, parseQueryString } from 'util/qs';
import useRequest, { useDeleteItems } from 'hooks/useRequest';
import useSelected from 'hooks/useSelected';
import useToast, { AlertVariant } from 'hooks/useToast';
import PaginatedTable, {
HeaderRow,
HeaderCell,
@@ -29,6 +30,7 @@ const QS_CONFIG = getQSConfig('execution_environments', {
function ExecutionEnvironmentList() {
const location = useLocation();
const match = useRouteMatch();
const { addToast, Toast, toastProps } = useToast();
const {
error: contentError,
@@ -94,6 +96,18 @@ function ExecutionEnvironmentList() {
}
);
const handleCopy = useCallback(
(newId) => {
addToast({
id: newId,
title: t`Execution environment copied successfully`,
variant: AlertVariant.success,
hasTimeout: true,
});
},
[addToast]
);
const handleDelete = async () => {
await deleteExecutionEnvironments();
clearSelected();
@@ -194,6 +208,7 @@ function ExecutionEnvironmentList() {
executionEnvironment={executionEnvironment}
detailUrl={`${match.url}/${executionEnvironment.id}/details`}
onSelect={() => handleSelect(executionEnvironment)}
onCopy={handleCopy}
isSelected={selected.some(
(row) => row.id === executionEnvironment.id
)}
@@ -218,6 +233,7 @@ function ExecutionEnvironmentList() {
{t`Failed to delete one or more execution environments`}
<ErrorDetail error={deletionError} />
</AlertModal>
<Toast {...toastProps} />
</>
);
}

View File

@@ -18,20 +18,28 @@ function ExecutionEnvironmentListItem({
detailUrl,
isSelected,
onSelect,
onCopy,
rowIndex,
fetchExecutionEnvironments,
}) {
const [isDisabled, setIsDisabled] = useState(false);
const copyExecutionEnvironment = useCallback(async () => {
await ExecutionEnvironmentsAPI.copy(executionEnvironment.id, {
name: `${executionEnvironment.name} @ ${timeOfDay()}`,
});
const response = await ExecutionEnvironmentsAPI.copy(
executionEnvironment.id,
{
name: `${executionEnvironment.name} @ ${timeOfDay()}`,
}
);
if (response.status === 201) {
onCopy(response.data.id);
}
await fetchExecutionEnvironments();
}, [
executionEnvironment.id,
executionEnvironment.name,
fetchExecutionEnvironments,
onCopy,
]);
const handleCopyStart = useCallback(() => {
@@ -114,6 +122,7 @@ ExecutionEnvironmentListItem.propTypes = {
detailUrl: string.isRequired,
isSelected: bool.isRequired,
onSelect: func.isRequired,
onCopy: func.isRequired,
};
export default ExecutionEnvironmentListItem;

View File

@@ -68,13 +68,9 @@ function ContainerGroupEdit({
if (isLoading) {
return (
<PageSection>
<Card>
<CardBody>
<ContentLoading />
</CardBody>
</Card>
</PageSection>
<CardBody>
<ContentLoading />
</CardBody>
);
}

View File

@@ -2,11 +2,12 @@ import React, { useCallback, useEffect, useState } from 'react';
import { t } from '@lingui/macro';
import { Route, Switch, useLocation } from 'react-router-dom';
import { Card, PageSection } from '@patternfly/react-core';
import useRequest from 'hooks/useRequest';
import { SettingsAPI } from 'api';
import ScreenHeader from 'components/ScreenHeader';
import ContentLoading from 'components/ContentLoading';
import InstanceGroupAdd from './InstanceGroupAdd';
import InstanceGroupList from './InstanceGroupList';
import InstanceGroup from './InstanceGroup';
@@ -81,35 +82,43 @@ function InstanceGroups() {
streamType={streamType}
breadcrumbConfig={breadcrumbConfig}
/>
<Switch>
<Route path="/instance_groups/container_group/add">
<ContainerGroupAdd
defaultControlPlane={defaultControlPlane}
defaultExecution={defaultExecution}
/>
</Route>
<Route path="/instance_groups/container_group/:id">
<ContainerGroup setBreadcrumb={buildBreadcrumbConfig} />
</Route>
{!isSettingsRequestLoading && !isKubernetes && (
<Route path="/instance_groups/add">
<InstanceGroupAdd
{isSettingsRequestLoading ? (
<PageSection>
<Card>
<ContentLoading />
</Card>
</PageSection>
) : (
<Switch>
<Route path="/instance_groups/container_group/add">
<ContainerGroupAdd
defaultControlPlane={defaultControlPlane}
defaultExecution={defaultExecution}
/>
</Route>
)}
<Route path="/instance_groups/:id">
<InstanceGroup setBreadcrumb={buildBreadcrumbConfig} />
</Route>
<Route path="/instance_groups">
<InstanceGroupList
isKubernetes={isKubernetes}
isSettingsRequestLoading={isSettingsRequestLoading}
settingsRequestError={settingsRequestError}
/>
</Route>
</Switch>
<Route path="/instance_groups/container_group/:id">
<ContainerGroup setBreadcrumb={buildBreadcrumbConfig} />
</Route>
{!isKubernetes && (
<Route path="/instance_groups/add">
<InstanceGroupAdd
defaultControlPlane={defaultControlPlane}
defaultExecution={defaultExecution}
/>
</Route>
)}
<Route path="/instance_groups/:id">
<InstanceGroup setBreadcrumb={buildBreadcrumbConfig} />
</Route>
<Route path="/instance_groups">
<InstanceGroupList
isKubernetes={isKubernetes}
isSettingsRequestLoading={isSettingsRequestLoading}
settingsRequestError={settingsRequestError}
/>
</Route>
</Switch>
)}
</>
);
}

View File

@@ -127,7 +127,7 @@ function InstanceList() {
async (instancesToAssociate) => {
await Promise.all(
instancesToAssociate
.filter((i) => i.node_type !== 'control')
.filter((i) => i.node_type !== 'control' && i.node_type !== 'hop')
.map((instance) =>
InstanceGroupsAPI.associateInstance(instanceGroupId, instance.id)
)
@@ -155,8 +155,7 @@ function InstanceList() {
InstancesAPI.read(
mergeParams(params, {
...{ not__rampart_groups__id: instanceGroupId },
...{ not__node_type: 'control' },
...{ not__node_type: 'hop' },
...{ not__node_type: ['hop', 'control'] },
})
),
[instanceGroupId]
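Note the associate filter above: excluding two node types requires `&&`, since a disjunction of two inequalities (`!== 'control' || !== 'hop'`) is always true and filters out nothing. A standalone check, with illustrative instance objects:

```javascript
// Keep only instances that are neither control nor hop nodes — the same
// exclusion the not__node_type query parameter expresses server-side.
const instances = [
  { id: 1, node_type: 'execution' },
  { id: 2, node_type: 'control' },
  { id: 3, node_type: 'hop' },
];
const associable = instances.filter(
  (i) => i.node_type !== 'control' && i.node_type !== 'hop'
);
```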

View File

@@ -24,8 +24,8 @@ function InstanceGroupFormFields({ defaultControlPlane, defaultExecution }) {
const validators = combine([
required(null),
protectedResourceName(
[defaultControlPlane, defaultExecution],
t`This is a protected name for Instance Groups. Please use a different name.`
t`This is a protected name for Instance Groups. Please use a different name.`,
[defaultControlPlane, defaultExecution]
),
]);

View File

@@ -5,6 +5,7 @@ import { Card, PageSection, DropdownItem } from '@patternfly/react-core';
import { InventoriesAPI } from 'api';
import useRequest, { useDeleteItems } from 'hooks/useRequest';
import useSelected from 'hooks/useSelected';
import useToast, { AlertVariant } from 'hooks/useToast';
import AlertModal from 'components/AlertModal';
import DatalistToolbar from 'components/DataListToolbar';
import ErrorDetail from 'components/ErrorDetail';
@@ -29,6 +30,7 @@ const QS_CONFIG = getQSConfig('inventory', {
function InventoryList() {
const location = useLocation();
const match = useRouteMatch();
const { addToast, Toast, toastProps } = useToast();
const {
result: {
@@ -112,6 +114,18 @@ function InventoryList() {
clearSelected();
};
const handleCopy = useCallback(
(newInventoryId) => {
addToast({
id: newInventoryId,
title: t`Inventory copied successfully`,
variant: AlertVariant.success,
hasTimeout: true,
});
},
[addToast]
);
const hasContentLoading = isDeleteLoading || isLoading;
const canAdd = actions && actions.POST;
@@ -149,130 +163,134 @@ function InventoryList() {
);
return (
<PageSection>
<Card>
<PaginatedTable
contentError={contentError}
hasContentLoading={hasContentLoading}
items={inventories}
itemCount={itemCount}
pluralizedItemName={t`Inventories`}
qsConfig={QS_CONFIG}
toolbarSearchColumns={[
{
name: t`Name`,
key: 'name__icontains',
isDefault: true,
},
{
name: t`Inventory Type`,
key: 'or__kind',
options: [
['', t`Inventory`],
['smart', t`Smart Inventory`],
],
},
{
name: t`Organization`,
key: 'organization__name',
},
{
name: t`Description`,
key: 'description__icontains',
},
{
name: t`Created By (Username)`,
key: 'created_by__username__icontains',
},
{
name: t`Modified By (Username)`,
key: 'modified_by__username__icontains',
},
]}
toolbarSortColumns={[
{
name: t`Name`,
key: 'name',
},
]}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
clearSelected={clearSelected}
headerRow={
<HeaderRow qsConfig={QS_CONFIG}>
<HeaderCell sortKey="name">{t`Name`}</HeaderCell>
<HeaderCell>{t`Status`}</HeaderCell>
<HeaderCell>{t`Type`}</HeaderCell>
<HeaderCell>{t`Organization`}</HeaderCell>
<HeaderCell>{t`Actions`}</HeaderCell>
</HeaderRow>
}
renderToolbar={(props) => (
<DatalistToolbar
{...props}
isAllSelected={isAllSelected}
onSelectAll={selectAll}
qsConfig={QS_CONFIG}
additionalControls={[
...(canAdd ? [addButton] : []),
<ToolbarDeleteButton
key="delete"
onDelete={handleInventoryDelete}
itemsToDelete={selected}
pluralizedItemName={t`Inventories`}
deleteDetailsRequests={deleteDetailsRequests}
deleteMessage={
<Plural
value={selected.length}
one="This inventory is currently being used by some templates. Are you sure you want to delete it?"
other="Deleting these inventories could impact some templates that rely on them. Are you sure you want to delete anyway?"
/>
}
warningMessage={
<Plural
value={selected.length}
one="The inventory will be in a pending status until the final delete is processed."
other="The inventories will be in a pending status until the final delete is processed."
/>
}
/>,
]}
/>
)}
renderRow={(inventory, index) => (
<InventoryListItem
key={inventory.id}
value={inventory.name}
inventory={inventory}
rowIndex={index}
fetchInventories={fetchInventories}
detailUrl={
inventory.kind === 'smart'
? `${match.url}/smart_inventory/${inventory.id}/details`
: `${match.url}/inventory/${inventory.id}/details`
}
onSelect={() => {
if (!inventory.pending_deletion) {
handleSelect(inventory);
<>
<PageSection>
<Card>
<PaginatedTable
contentError={contentError}
hasContentLoading={hasContentLoading}
items={inventories}
itemCount={itemCount}
pluralizedItemName={t`Inventories`}
qsConfig={QS_CONFIG}
toolbarSearchColumns={[
{
name: t`Name`,
key: 'name__icontains',
isDefault: true,
},
{
name: t`Inventory Type`,
key: 'or__kind',
options: [
['', t`Inventory`],
['smart', t`Smart Inventory`],
],
},
{
name: t`Organization`,
key: 'organization__name',
},
{
name: t`Description`,
key: 'description__icontains',
},
{
name: t`Created By (Username)`,
key: 'created_by__username__icontains',
},
{
name: t`Modified By (Username)`,
key: 'modified_by__username__icontains',
},
]}
toolbarSortColumns={[
{
name: t`Name`,
key: 'name',
},
]}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
clearSelected={clearSelected}
headerRow={
<HeaderRow qsConfig={QS_CONFIG}>
<HeaderCell sortKey="name">{t`Name`}</HeaderCell>
<HeaderCell>{t`Status`}</HeaderCell>
<HeaderCell>{t`Type`}</HeaderCell>
<HeaderCell>{t`Organization`}</HeaderCell>
<HeaderCell>{t`Actions`}</HeaderCell>
</HeaderRow>
}
renderToolbar={(props) => (
<DatalistToolbar
{...props}
isAllSelected={isAllSelected}
onSelectAll={selectAll}
qsConfig={QS_CONFIG}
additionalControls={[
...(canAdd ? [addButton] : []),
<ToolbarDeleteButton
key="delete"
onDelete={handleInventoryDelete}
itemsToDelete={selected}
pluralizedItemName={t`Inventories`}
deleteDetailsRequests={deleteDetailsRequests}
deleteMessage={
<Plural
value={selected.length}
one="This inventory is currently being used by some templates. Are you sure you want to delete it?"
other="Deleting these inventories could impact some templates that rely on them. Are you sure you want to delete anyway?"
/>
}
warningMessage={
<Plural
value={selected.length}
one="The inventory will be in a pending status until the final delete is processed."
other="The inventories will be in a pending status until the final delete is processed."
/>
}
/>,
]}
/>
)}
renderRow={(inventory, index) => (
<InventoryListItem
key={inventory.id}
value={inventory.name}
inventory={inventory}
rowIndex={index}
fetchInventories={fetchInventories}
detailUrl={
inventory.kind === 'smart'
? `${match.url}/smart_inventory/${inventory.id}/details`
: `${match.url}/inventory/${inventory.id}/details`
}
}}
isSelected={selected.some((row) => row.id === inventory.id)}
/>
)}
emptyStateControls={canAdd && addButton}
/>
</Card>
<AlertModal
isOpen={deletionError}
variant="error"
aria-label={t`Deletion Error`}
title={t`Error!`}
onClose={clearDeletionError}
>
{t`Failed to delete one or more inventories.`}
<ErrorDetail error={deletionError} />
</AlertModal>
</PageSection>
onSelect={() => {
if (!inventory.pending_deletion) {
handleSelect(inventory);
}
}}
onCopy={handleCopy}
isSelected={selected.some((row) => row.id === inventory.id)}
/>
)}
emptyStateControls={canAdd && addButton}
/>
</Card>
<AlertModal
isOpen={deletionError}
variant="error"
aria-label={t`Deletion Error`}
title={t`Error!`}
onClose={clearDeletionError}
>
{t`Failed to delete one or more inventories.`}
<ErrorDetail error={deletionError} />
</AlertModal>
</PageSection>
<Toast {...toastProps} />
</>
);
}

View File

@@ -18,6 +18,7 @@ function InventoryListItem({
rowIndex,
isSelected,
onSelect,
onCopy,
detailUrl,
fetchInventories,
}) {
@@ -30,11 +31,14 @@ function InventoryListItem({
const [isCopying, setIsCopying] = useState(false);
const copyInventory = useCallback(async () => {
await InventoriesAPI.copy(inventory.id, {
const response = await InventoriesAPI.copy(inventory.id, {
name: `${inventory.name} @ ${timeOfDay()}`,
});
if (response.status === 201) {
onCopy(response.data.id);
}
await fetchInventories();
}, [inventory.id, inventory.name, fetchInventories]);
}, [inventory.id, inventory.name, fetchInventories, onCopy]);
const handleCopyStart = useCallback(() => {
setIsCopying(true);

View File

@@ -13,6 +13,7 @@ import { ActionsTd, ActionItem, TdBreakWord } from 'components/PaginatedTable';
import StatusLabel from 'components/StatusLabel';
import JobCancelButton from 'components/JobCancelButton';
import { formatDateString } from 'util/dates';
import { isJobRunning } from 'util/jobs';
import InventorySourceSyncButton from '../shared/InventorySourceSyncButton';
const ExclamationTriangleIcon = styled(PFExclamationTriangleIcon)`
@@ -64,6 +65,7 @@ function InventorySourceListItem({
rowIndex,
isSelected,
onSelect,
disable: isJobRunning(source.status),
}}
/>
<TdBreakWord dataLabel={t`Name`}>

View File

@@ -25,25 +25,33 @@ function SmartInventoryHostList({ inventory }) {
const location = useLocation();
const [isAdHocLaunchLoading, setIsAdHocLaunchLoading] = useState(false);
const {
result: { hosts, count },
result: { hosts, count, moduleOptions },
error: contentError,
isLoading,
request: fetchHosts,
} = useRequest(
useCallback(async () => {
const params = parseQueryString(QS_CONFIG, location.search);
const {
data: { results, count: hostCount },
} = await InventoriesAPI.readHosts(inventory.id, params);
const [
{
data: { results, count: hostCount },
},
adHocOptions,
] = await Promise.all([
InventoriesAPI.readHosts(inventory.id, params),
InventoriesAPI.readAdHocOptions(inventory.id),
]);
return {
hosts: results,
count: hostCount,
moduleOptions: adHocOptions.data.actions.GET.module_name.choices,
};
}, [location.search, inventory.id]),
{
hosts: [],
count: 0,
moduleOptions: [],
}
);
@@ -91,6 +99,7 @@ function SmartInventoryHostList({ inventory }) {
adHocItems={selected}
hasListItems={count > 0}
onLaunchLoading={setIsAdHocLaunchLoading}
moduleOptions={moduleOptions}
/>,
]
: []
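The hunk above replaces a sequential fetch with `Promise.all` so both requests start together. A sketch of that pattern — the two read stubs are illustrative:

```javascript
// Stubs standing in for InventoriesAPI.readHosts / readAdHocOptions.
const readHosts = async () => ({ data: { results: ['host-a', 'host-b'], count: 2 } });
const readAdHocOptions = async () => ({
  data: {
    actions: {
      GET: { module_name: { choices: [['command', 'command'], ['shell', 'shell']] } },
    },
  },
});

// Destructure both responses from one Promise.all, as in the hunk above.
const fetchHostData = async () => {
  const [
    { data: { results, count } },
    adHocOptions,
  ] = await Promise.all([readHosts(), readAdHocOptions()]);
  return {
    hosts: results,
    count,
    moduleOptions: adHocOptions.data.actions.GET.module_name.choices,
  };
};
```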

View File

@@ -27,6 +27,21 @@ describe('<SmartInventoryHostList />', () => {
InventoriesAPI.readHosts.mockResolvedValue({
data: mockHosts,
});
InventoriesAPI.readAdHocOptions.mockResolvedValue({
data: {
actions: {
GET: {
module_name: {
choices: [
['command', 'command'],
['shell', 'shell'],
],
},
},
POST: {},
},
},
});
await act(async () => {
wrapper = mountWithContexts(
<SmartInventoryHostList inventory={clonedInventory} />

View File

@@ -189,6 +189,7 @@ describe('<JobDetail />', () => {
<JobDetail
job={{
...mockJobData,
type: 'workflow_job',
launch_type: 'scheduled',
summary_fields: {
user_capabilities: {},

View File

@@ -109,12 +109,12 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
value={template.description}
dataCy="nt-detail-description"
/>
{summary_fields.recent_notifications.length && (
{summary_fields.recent_notifications.length ? (
<Detail
label={t`Status`}
value={<StatusLabel status={testStatus} />}
/>
)}
) : null}
{summary_fields.organization ? (
<Detail
label={t`Organization`}

View File

@@ -1,14 +1,8 @@
import React, { useCallback, useEffect, useState } from 'react';
import React, { useCallback, useEffect } from 'react';
import { useLocation, useRouteMatch } from 'react-router-dom';
import { t } from '@lingui/macro';
import {
Alert,
AlertActionCloseButton,
AlertGroup,
Card,
PageSection,
} from '@patternfly/react-core';
import { Card, PageSection } from '@patternfly/react-core';
import { NotificationTemplatesAPI } from 'api';
import PaginatedTable, {
HeaderRow,
@@ -22,6 +16,7 @@ import ErrorDetail from 'components/ErrorDetail';
import DataListToolbar from 'components/DataListToolbar';
import useRequest, { useDeleteItems } from 'hooks/useRequest';
import useSelected from 'hooks/useSelected';
import useToast, { AlertVariant } from 'hooks/useToast';
import { getQSConfig, parseQueryString } from 'util/qs';
import NotificationTemplateListItem from './NotificationTemplateListItem';
@@ -34,7 +29,8 @@ const QS_CONFIG = getQSConfig('notification-templates', {
function NotificationTemplatesList() {
const location = useLocation();
const match = useRouteMatch();
const [testToasts, setTestToasts] = useState([]);
const { addToast, Toast, toastProps } = useToast();
const addUrl = `${match.url}/add`;
@@ -107,18 +103,7 @@ function NotificationTemplatesList() {
clearSelected();
};
const addTestToast = useCallback((notification) => {
setTestToasts((oldToasts) => [...oldToasts, notification]);
}, []);
const removeTestToast = (notificationId) => {
setTestToasts((oldToasts) =>
oldToasts.filter((toast) => toast.id !== notificationId)
);
};
const canAdd = actions && actions.POST;
const alertGroupDataCy = 'notification-template-alerts';
return (
<>
@@ -198,7 +183,35 @@ function NotificationTemplatesList() {
}
renderRow={(template, index) => (
<NotificationTemplateListItem
onAddToast={addTestToast}
onAddToast={(notification) => {
if (notification.status === 'pending') {
return;
}
let message;
if (notification.status === 'successful') {
message = t`Notification sent successfully`;
}
if (notification.status === 'failed') {
if (notification?.error === 'timed out') {
message = t`Notification timed out`;
} else {
message = notification.error;
}
}
addToast({
id: notification.id,
title:
notification.summary_fields.notification_template.name,
variant:
notification.status === 'failed'
? AlertVariant.danger
: AlertVariant.success,
hasTimeout: notification.status !== 'failed',
message,
});
}}
key={template.id}
fetchTemplates={fetchTemplates}
template={template}
@@ -223,39 +236,7 @@ function NotificationTemplatesList() {
{t`Failed to delete one or more notification templates.`}
<ErrorDetail error={deletionError} />
</AlertModal>
<AlertGroup data-cy={alertGroupDataCy} isToast>
{testToasts
.filter((notification) => notification.status !== 'pending')
.map((notification) => (
<Alert
actionClose={
<AlertActionCloseButton
onClose={() => removeTestToast(notification.id)}
/>
}
onTimeout={() => removeTestToast(notification.id)}
timeout={notification.status !== 'failed'}
title={notification.summary_fields.notification_template.name}
variant={notification.status === 'failed' ? 'danger' : 'success'}
key={`notification-template-alert-${notification.id}`}
ouiaId={`notification-template-alert-${notification.id}`}
>
<>
{notification.status === 'successful' && (
<p>{t`Notification sent successfully`}</p>
)}
{notification.status === 'failed' &&
notification?.error === 'timed out' && (
<p>{t`Notification timed out`}</p>
)}
{notification.status === 'failed' &&
notification?.error !== 'timed out' && (
<p>{notification.error}</p>
)}
</>
</Alert>
))}
</AlertGroup>
<Toast {...toastProps} />
</>
);
}
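The `onAddToast` handler above maps a test notification onto toast fields. That mapping extracted as a standalone function — the notification objects are illustrative:

```javascript
// Pending notifications produce no toast; failed ones get a danger variant
// with no timeout; everything else succeeds with a timed toast.
const toastFor = (n) => {
  if (n.status === 'pending') return null;
  let message;
  if (n.status === 'successful') message = 'Notification sent successfully';
  if (n.status === 'failed') {
    message = n.error === 'timed out' ? 'Notification timed out' : n.error;
  }
  return {
    id: n.id,
    variant: n.status === 'failed' ? 'danger' : 'success',
    hasTimeout: n.status !== 'failed',
    message,
  };
};
```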

View File

@@ -19,6 +19,7 @@ import PaginatedTable, {
} from 'components/PaginatedTable';
import useSelected from 'hooks/useSelected';
import useExpanded from 'hooks/useExpanded';
import useToast, { AlertVariant } from 'hooks/useToast';
import { relatedResourceDeleteRequests } from 'util/getRelatedResourceDeleteDetails';
import { getQSConfig, parseQueryString } from 'util/qs';
import useWsProjects from './useWsProjects';
@@ -34,6 +35,7 @@ const QS_CONFIG = getQSConfig('project', {
function ProjectList() {
const location = useLocation();
const match = useRouteMatch();
const { addToast, Toast, toastProps } = useToast();
const {
request: fetchUpdatedProject,
@@ -123,6 +125,18 @@ function ProjectList() {
}
);
const handleCopy = useCallback(
(newId) => {
addToast({
id: newId,
title: t`Project copied successfully`,
variant: AlertVariant.success,
hasTimeout: true,
});
},
[addToast]
);
const handleProjectDelete = async () => {
await deleteProjects();
setSelected([]);
@@ -255,6 +269,7 @@ function ProjectList() {
detailUrl={`${match.url}/${project.id}`}
isSelected={selected.some((row) => row.id === project.id)}
onSelect={() => handleSelect(project)}
onCopy={handleCopy}
rowIndex={index}
onRefreshRow={(projectId) => fetchUpdatedProject(projectId)}
/>
@@ -267,6 +282,7 @@ function ProjectList() {
/>
</Card>
</PageSection>
<Toast {...toastProps} />
{deletionError && (
<AlertModal
isOpen={deletionError}

View File

@@ -39,6 +39,7 @@ function ProjectListItem({
project,
isSelected,
onSelect,
onCopy,
detailUrl,
fetchProjects,
rowIndex,
@@ -53,11 +54,14 @@ function ProjectListItem({
};
const copyProject = useCallback(async () => {
await ProjectsAPI.copy(project.id, {
const response = await ProjectsAPI.copy(project.id, {
name: `${project.name} @ ${timeOfDay()}`,
});
if (response.status === 201) {
onCopy(response.data.id);
}
await fetchProjects();
}, [project.id, project.name, fetchProjects]);
}, [project.id, project.name, fetchProjects, onCopy]);
const generateLastJobTooltip = (job) => (
<>
@@ -168,6 +172,7 @@ function ProjectListItem({
rowIndex,
isSelected,
onSelect,
disable: isJobRunning(job?.status),
}}
dataLabel={t`Selected`}
/>

View File

@@ -69,6 +69,7 @@ describe('<JobsDetail />', () => {
assertDetail(wrapper, 'Default Project Update Timeout', '0 seconds');
assertDetail(wrapper, 'Per-Host Ansible Fact Cache Timeout', '0 seconds');
assertDetail(wrapper, 'Maximum number of forks per job', '200');
assertDetail(wrapper, 'Expose host paths for Container Groups', 'Off');
assertVariableDetail(
wrapper,
'Ansible Modules Allowed for Ad Hoc Jobs',

View File

@@ -212,6 +212,10 @@ function JobsEdit() {
name="AWX_ISOLATION_SHOW_PATHS"
config={jobs.AWX_ISOLATION_SHOW_PATHS}
/>
<BooleanField
name="AWX_MOUNT_ISOLATED_PATHS_ON_K8S"
config={jobs.AWX_MOUNT_ISOLATED_PATHS_ON_K8S}
/>
<ObjectField name="AWX_TASK_ENV" config={jobs.AWX_TASK_ENV} />
{submitError && <FormSubmitError error={submitError} />}
{revertError && <FormSubmitError error={revertError} />}

View File

@@ -27,6 +27,7 @@
"AWX_ISOLATION_SHOW_PATHS": [],
"AWX_ROLES_ENABLED": true,
"AWX_SHOW_PLAYBOOK_LINKS": false,
"AWX_MOUNT_ISOLATED_PATHS_ON_K8S": false,
"AWX_TASK_ENV": {},
"DEFAULT_INVENTORY_UPDATE_TIMEOUT": 0,
"DEFAULT_JOB_TIMEOUT": 0,

View File

@@ -47,6 +47,15 @@ function SubscriptionModal({
subscriptionCreds.username,
subscriptionCreds.password
);
// Ensure unique ids for each subscription
// because it is possible to have multiple
// subscriptions with the same pool_id
let repeatId = 1;
data.forEach((i) => {
i.id = repeatId++;
});
return data;
}, []), // eslint-disable-line react-hooks/exhaustive-deps
[]
@@ -64,17 +73,9 @@ function SubscriptionModal({
fetchSubscriptions();
}, [fetchSubscriptions]);
const handleSelect = (item) => {
if (selected.some((s) => s.pool_id === item.pool_id)) {
setSelected(selected.filter((s) => s.pool_id !== item.pool_id));
} else {
setSelected(selected.concat(item));
}
};
useEffect(() => {
if (selectedSubscription?.pool_id) {
handleSelect({ pool_id: selectedSubscription.pool_id });
if (selectedSubscription?.id) {
setSelected([selectedSubscription]);
}
}, []); // eslint-disable-line react-hooks/exhaustive-deps
@@ -150,19 +151,18 @@ function SubscriptionModal({
<Tbody>
{subscriptions.map((subscription) => (
<Tr
key={`row-${subscription.pool_id}`}
id={subscription.pool_id}
key={`row-${subscription.id}`}
id={`row-${subscription.id}`}
ouiaId={`subscription-row-${subscription.pool_id}`}
>
<Td
key={`row-${subscription.pool_id}`}
select={{
onSelect: () => handleSelect(subscription),
onSelect: () => setSelected([subscription]),
isSelected: selected.some(
(row) => row.pool_id === subscription.pool_id
(row) => row.id === subscription.id
),
variant: 'radio',
rowIndex: `row-${subscription.pool_id}`,
rowIndex: `row-${subscription.id}`,
}}
/>
<Td dataLabel={t`Trial`}>{subscription.subscription_name}</Td>

View File

@@ -125,14 +125,14 @@ describe('<SubscriptionModal />', () => {
password: '$encrypted',
}}
selectedSubscription={{
pool_id: 8,
id: 2,
}}
/>
);
await waitForElement(wrapper, 'table');
expect(wrapper.find('tr[id=7] input').prop('checked')).toBe(false);
expect(wrapper.find('tr[id=8] input').prop('checked')).toBe(true);
expect(wrapper.find('tr[id=9] input').prop('checked')).toBe(false);
expect(wrapper.find('tr[id="row-1"] input').prop('checked')).toBe(false);
expect(wrapper.find('tr[id="row-2"] input').prop('checked')).toBe(true);
expect(wrapper.find('tr[id="row-3"] input').prop('checked')).toBe(false);
});
});

View File

@@ -227,7 +227,7 @@ function SubscriptionStep() {
username: username.value,
password: password.value,
}}
selectedSubscripion={subscription?.value}
selectedSubscription={subscription?.value}
onClose={closeModal}
onConfirm={(value) => subscriptionHelpers.setValue(value)}
/>

View File

@@ -276,6 +276,15 @@
"category_slug": "jobs",
"default": false
},
"AWX_MOUNT_ISOLATED_PATHS_ON_K8S": {
"type": "boolean",
"required": false,
"label": "Expose host paths for Container Groups",
"help_text": "Expose paths via hostPath for the Pods created by a Container Group. HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. ",
"category": "Jobs",
"category_slug": "jobs",
"default": false
},
"GALAXY_IGNORE_CERTS": {
"type": "boolean",
"required": false,
@@ -3973,6 +3982,14 @@
"category_slug": "jobs",
"defined_in_file": false
},
"AWX_MOUNT_ISOLATED_PATHS_ON_K8S": {
"type": "boolean",
"label": "Expose host paths for Container Groups",
"help_text": "Expose paths via hostPath for the Pods created by a Container Group. HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. ",
"category": "Jobs",
"category_slug": "jobs",
"defined_in_file": false
},
"GALAXY_IGNORE_CERTS": {
"type": "boolean",
"label": "Ignore Ansible Galaxy SSL Certificate Verification",

View File

@@ -297,5 +297,6 @@
"users":{"fields":["username"],"adj_list":[]},
"instances":{"fields":["hostname"],"adj_list":[]}
},
"DEFAULT_EXECUTION_ENVIRONMENT": 1
"DEFAULT_EXECUTION_ENVIRONMENT": 1,
"AWX_MOUNT_ISOLATED_PATHS_ON_K8S": false
}

View File

@@ -21,5 +21,6 @@
"DEFAULT_INVENTORY_UPDATE_TIMEOUT": 0,
"DEFAULT_PROJECT_UPDATE_TIMEOUT": 0,
"ANSIBLE_FACT_CACHE_TIMEOUT": 0,
"MAX_FORKS": 200
"MAX_FORKS": 200,
"AWX_MOUNT_ISOLATED_PATHS_ON_K8S": false
}

View File

@@ -82,6 +82,41 @@ const mockUsers = [
external_account: null,
auth: [],
},
{
id: 10,
type: 'user',
url: '/api/v2/users/10/',
related: {
teams: '/api/v2/users/10/teams/',
organizations: '/api/v2/users/10/organizations/',
admin_of_organizations: '/api/v2/users/10/admin_of_organizations/',
projects: '/api/v2/users/10/projects/',
credentials: '/api/v2/users/10/credentials/',
roles: '/api/v2/users/10/roles/',
activity_stream: '/api/v2/users/10/activity_stream/',
access_list: '/api/v2/users/10/access_list/',
tokens: '/api/v2/users/10/tokens/',
authorized_tokens: '/api/v2/users/10/authorized_tokens/',
personal_tokens: '/api/v2/users/10/personal_tokens/',
},
summary_fields: {
user_capabilities: {
edit: true,
delete: false,
},
},
created: '2019-11-04T18:52:13.565525Z',
username: 'nobody',
first_name: '',
last_name: '',
email: 'systemauditor@ansible.com',
is_superuser: false,
is_system_auditor: true,
ldap_dn: '',
last_login: null,
external_account: null,
auth: [],
},
];
afterEach(() => {
@@ -124,6 +159,15 @@ describe('UsersList with full permissions', () => {
expect(wrapper.find('ToolbarAddButton').length).toBe(1);
});
test('Last user should have no first name or last name and the row items should render properly', async () => {
await waitForElement(wrapper, 'ContentLoading', (el) => el.length === 0);
expect(UsersAPI.read).toHaveBeenCalled();
expect(wrapper.find('Td[dataLabel="First Name"]').at(2)).toHaveLength(1);
expect(wrapper.find('Td[dataLabel="First Name"]').at(2).text()).toBe('');
expect(wrapper.find('Td[dataLabel="Last Name"]').at(2)).toHaveLength(1);
expect(wrapper.find('Td[dataLabel="Last Name"]').at(2).text()).toBe('');
});
test('should check and uncheck the row item', async () => {
expect(
wrapper.find('.pf-c-table__check input').first().props().checked
@@ -147,7 +191,7 @@ describe('UsersList with full permissions', () => {
});
test('should check all row items when select all is checked', async () => {
expect(wrapper.find('.pf-c-table__check input')).toHaveLength(2);
expect(wrapper.find('.pf-c-table__check input')).toHaveLength(3);
wrapper.find('.pf-c-table__check input').forEach((el) => {
expect(el.props().checked).toBe(false);
});

View File

@@ -50,8 +50,8 @@ function UserListItem({ user, isSelected, onSelect, detailUrl, rowIndex }) {
</span>
)}
</TdBreakWord>
{user.first_name && <Td dataLabel={t`First Name`}>{user.first_name}</Td>}
{user.last_name && <Td dataLabel={t`Last Name`}>{user.last_name}</Td>}
<Td dataLabel={t`First Name`}>{user.first_name}</Td>
<Td dataLabel={t`Last Name`}>{user.last_name}</Td>
<Td dataLabel={t`Role`}>{user_type}</Td>
<ActionsTd dataLabel={t`Actions`}>
<ActionItem

View File

@@ -20,7 +20,7 @@ const QS_CONFIG = getQSConfig('organizations', {
function UserOrganizationList() {
const location = useLocation();
const { id: userId } = useParams();
const { id } = useParams();
const {
result: { organizations, count, searchableKeys, relatedSearchableKeys },
@@ -36,8 +36,8 @@ function UserOrganizationList() {
},
actions,
] = await Promise.all([
UsersAPI.readOrganizations(userId, params),
UsersAPI.readOrganizationOptions(),
UsersAPI.readOrganizations(id, params),
UsersAPI.readOrganizationOptions(id),
]);
return {
searchableKeys: Object.keys(actions.data.actions?.GET || {}).filter(
@@ -49,7 +49,7 @@ function UserOrganizationList() {
organizations: results,
count: orgCount,
};
}, [userId, location.search]),
}, [id, location.search]),
{
organizations: [],
count: 0,

View File

@@ -72,6 +72,6 @@ describe('<UserOrganizationlist />', () => {
page_size: 20,
type: 'organization',
});
expect(UsersAPI.readOrganizationOptions).toBeCalled();
expect(UsersAPI.readOrganizationOptions).toBeCalledWith('1');
});
});

View File

@@ -12,7 +12,7 @@ export default function UserOrganizationListItem({ organization }) {
>
<Td id={labelId} dataLabel={t`Name`}>
<Link to={`/organizations/${organization.id}/details`} id={labelId}>
{organization.name}
<b>{organization.name}</b>
</Link>
</Td>
<Td dataLabel={t`Description`}>{organization.description}</Td>

View File

@@ -9,6 +9,7 @@ import {
oneOf,
oneOfType,
} from 'prop-types';
import { AlertVariant } from '@patternfly/react-core';
export const Role = shape({
descendent_roles: arrayOf(string),
@@ -428,3 +429,11 @@ export const SearchableKeys = arrayOf(
type: string.isRequired,
})
);
export const Toast = shape({
title: string.isRequired,
variant: oneOf(Object.values(AlertVariant)).isRequired,
id: oneOfType([string, number]).isRequired,
hasTimeout: bool,
message: string,
});

View File

@@ -0,0 +1,32 @@
export default function getScheduleUrl(job) {
const templateId = job.summary_fields.unified_job_template.id;
const scheduleId = job.summary_fields.schedule.id;
const inventoryId = job.summary_fields.inventory
? job.summary_fields.inventory.id
: null;
let scheduleUrl;
switch (job.type) {
case 'inventory_update':
scheduleUrl =
inventoryId &&
`/inventories/inventory/${inventoryId}/sources/${templateId}/schedules/${scheduleId}/details`;
break;
case 'job':
scheduleUrl = `/templates/job_template/${templateId}/schedules/${scheduleId}/details`;
break;
case 'project_update':
scheduleUrl = `/projects/${templateId}/schedules/${scheduleId}/details`;
break;
case 'system_job':
scheduleUrl = `/management_jobs/${templateId}/schedules/${scheduleId}/details`;
break;
case 'workflow_job':
scheduleUrl = `/templates/workflow_job_template/${templateId}/schedules/${scheduleId}/details`;
break;
default:
break;
}
return scheduleUrl;
}

View File

@@ -0,0 +1,103 @@
import getScheduleUrl from './getScheduleUrl';
describe('getScheduleUrl', () => {
test('should return expected schedule URL for inventory update job', () => {
const invSrcJob = {
type: 'inventory_update',
summary_fields: {
inventory: {
id: 1,
name: 'mock inv',
},
schedule: {
name: 'mock schedule',
id: 3,
},
unified_job_template: {
unified_job_type: 'inventory_update',
name: 'mock inv src',
id: 2,
},
},
};
expect(getScheduleUrl(invSrcJob)).toEqual(
'/inventories/inventory/1/sources/2/schedules/3/details'
);
});
test('should return expected schedule URL for job', () => {
const templateJob = {
type: 'job',
summary_fields: {
schedule: {
name: 'mock schedule',
id: 5,
},
unified_job_template: {
unified_job_type: 'job',
name: 'mock job',
id: 4,
},
},
};
expect(getScheduleUrl(templateJob)).toEqual(
'/templates/job_template/4/schedules/5/details'
);
});
test('should return expected schedule URL for project update job', () => {
const projectUpdateJob = {
type: 'project_update',
summary_fields: {
schedule: {
name: 'mock schedule',
id: 7,
},
unified_job_template: {
unified_job_type: 'project_update',
name: 'mock job',
id: 6,
},
},
};
expect(getScheduleUrl(projectUpdateJob)).toEqual(
'/projects/6/schedules/7/details'
);
});
test('should return expected schedule URL for system job', () => {
const systemJob = {
type: 'system_job',
summary_fields: {
schedule: {
name: 'mock schedule',
id: 10,
},
unified_job_template: {
unified_job_type: 'system_job',
name: 'mock job',
id: 9,
},
},
};
expect(getScheduleUrl(systemJob)).toEqual(
'/management_jobs/9/schedules/10/details'
);
});
test('should return expected schedule URL for workflow job', () => {
const workflowJob = {
type: 'workflow_job',
summary_fields: {
schedule: {
name: 'mock schedule',
id: 12,
},
unified_job_template: {
unified_job_type: 'job',
name: 'mock job',
id: 11,
},
},
};
expect(getScheduleUrl(workflowJob)).toEqual(
'/templates/workflow_job_template/11/schedules/12/details'
);
});
});

View File

@@ -33,7 +33,7 @@ options:
type: str
credential:
description:
- Credential to authenticate with Kubernetes or OpenShift. Must be of type "Kubernetes/OpenShift API Bearer Token".
- Credential to authenticate with Kubernetes or OpenShift. Must be of type "OpenShift or Kubernetes API Bearer Token".
required: False
type: str
is_container_group:

View File

@@ -192,7 +192,9 @@ def main():
association_fields['galaxy_credentials'].append(module.resolve_name_to_id('credentials', item))
# Create the data that gets sent for create and update
org_fields = {'name': new_name if new_name else (module.get_item_name(organization) if organization else name),}
org_fields = {
'name': new_name if new_name else (module.get_item_name(organization) if organization else name),
}
if description is not None:
org_fields['description'] = description
if default_ee is not None:

View File

@@ -33,6 +33,10 @@ class Connection(object):
def __init__(self, server, verify=False):
self.server = server
self.verify = verify
# Note: We use the old 'sessionid' name here in case someone is trying to connect to an older AWX version.
# There is a check below so that if AWX returns an X-API-Session-Cookie-Name header we will grab it and
# connect with the new session cookie name instead.
self.session_cookie_name = 'sessionid'
if not self.verify:
requests.packages.urllib3.disable_warnings()
@@ -49,8 +53,13 @@ class Connection(object):
_next = kwargs.get('next')
if _next:
headers = self.session.headers.copy()
self.post('/api/login/', headers=headers, data=dict(username=username, password=password, next=_next))
self.session_id = self.session.cookies.get('sessionid')
response = self.post('/api/login/', headers=headers, data=dict(username=username, password=password, next=_next))
# The login causes a redirect so we need to search the history of the request to find the header
for historical_response in response.history:
if 'X-API-Session-Cookie-Name' in historical_response.headers:
self.session_cookie_name = historical_response.headers.get('X-API-Session-Cookie-Name')
self.session_id = self.session.cookies.get(self.session_cookie_name, None)
self.uses_session_cookie = True
else:
self.session.auth = (username, password)
@@ -61,7 +70,7 @@ class Connection(object):
def logout(self):
if self.uses_session_cookie:
self.session.cookies.pop('sessionid', None)
self.session.cookies.pop(self.session_cookie_name, None)
else:
self.session.auth = None

View File

@@ -1,3 +1,4 @@
from collections import defaultdict
import itertools
import logging
@@ -204,7 +205,7 @@ class ApiV2(base.Base):
# Import methods
def _dependent_resources(self, data):
def _dependent_resources(self):
page_resource = {getattr(self, resource)._create().__item_class__: resource for resource in self.json}
data_pages = [getattr(self, resource)._create().__item_class__ for resource in EXPORTABLE_RESOURCES]
@@ -256,7 +257,12 @@ class ApiV2(base.Base):
if not S:
continue
if name == 'roles':
self._roles.append((_page, S))
indexed_roles = defaultdict(list)
for role in S:
if 'content_object' not in role:
continue
indexed_roles[role['content_object']['type']].append(role)
self._roles.append((_page, indexed_roles))
else:
self._related.append((_page, name, S))
@@ -278,17 +284,17 @@ class ApiV2(base.Base):
log.debug("post_data: %r", {'id': role_page['id']})
def _assign_membership(self):
for _page, roles in self._roles:
for _page, indexed_roles in self._roles:
role_endpoint = _page.json['related']['roles']
for role in roles:
if role['name'] == 'Member':
for content_type in ('organization', 'team'):
for role in indexed_roles.get(content_type, []):
self._assign_role(role_endpoint, role)
def _assign_roles(self):
for _page, roles in self._roles:
for _page, indexed_roles in self._roles:
role_endpoint = _page.json['related']['roles']
for role in roles:
if role['name'] != 'Member':
for content_type in set(indexed_roles) - {'organization', 'team'}:
for role in indexed_roles.get(content_type, []):
self._assign_role(role_endpoint, role)
def _assign_related(self):
@@ -330,7 +336,7 @@ class ApiV2(base.Base):
changed = False
for resource in self._dependent_resources(data):
for resource in self._dependent_resources():
endpoint = getattr(self, resource)
# Load up existing objects, so that we can try to update or link to them
self._cache.get_page(endpoint)

View File

@@ -95,12 +95,12 @@ def as_user(v, username, password=None):
# requests doesn't provide interface for retrieving
# domain segregated cookies other than iterating.
for cookie in connection.session.cookies:
if cookie.name == 'sessionid':
if cookie.name == connection.session_cookie_name:
session_id = cookie.value
domain = cookie.domain
break
if session_id:
del connection.session.cookies['sessionid']
del connection.session.cookies[connection.session_cookie_name]
if access_token:
kwargs = dict(token=access_token)
else:
@@ -114,9 +114,9 @@ def as_user(v, username, password=None):
if config.use_sessions:
if access_token:
connection.session.auth = None
del connection.session.cookies['sessionid']
del connection.session.cookies[connection.session_cookie_name]
if session_id:
connection.session.cookies.set('sessionid', session_id, domain=domain)
connection.session.cookies.set(connection.session_cookie_name, session_id, domain=domain)
else:
connection.session.auth = previous_auth

View File

@@ -51,7 +51,9 @@ class WSClient(object):
# Subscription group types
def __init__(self, token=None, hostname='', port=443, secure=True, session_id=None, csrftoken=None, add_received_time=False):
def __init__(
self, token=None, hostname='', port=443, secure=True, session_id=None, csrftoken=None, add_received_time=False, session_cookie_name='awx_sessionid'
):
# delay this import, because this is an optional dependency
import websocket
@@ -78,7 +80,7 @@ class WSClient(object):
if self.token is not None:
auth_cookie = 'token="{0.token}";'.format(self)
elif self.session_id is not None:
auth_cookie = 'sessionid="{0.session_id}"'.format(self)
auth_cookie = '{1}="{0.session_id}"'.format(self, session_cookie_name)
if self.csrftoken:
auth_cookie += ';csrftoken={0.csrftoken}'.format(self)
else:

View File

@@ -6,9 +6,9 @@ Session authentication is a safer way of utilizing HTTP(S) cookies. Theoreticall
`Cookie` header, but this method is vulnerable to cookie hijacks, where crackers can see and steal user
information from the cookie payload.
Session authentication, on the other hand, sets a single `session_id` cookie. The `session_id`
is *a random string which will be mapped to user authentication informations by server*. Crackers who
hijack cookies will only get the `session_id` itself, which does not imply any critical user info, is valid only for
Session authentication, on the other hand, sets a single `awx_sessionid` cookie. The `awx_sessionid`
is *a random string which will be mapped to user authentication information by the server*. Crackers who
hijack cookies will only get the `awx_sessionid` itself, which does not imply any critical user info, is valid only for
a limited time, and can be revoked at any time.
> Note: The CSRF token will by default allow HTTP. To increase security, the `CSRF_COOKIE_SECURE` setting should
@@ -34,22 +34,27 @@ be provided in the form:
* `next`: The path of the redirect destination, in API browser `"/api/"` is used.
* `csrfmiddlewaretoken`: The CSRF token, usually populated by using Django template `{% csrf_token %}`.
The `session_id` is provided as a return `Set-Cookie` header. Here is a typical one:
The `awx_sessionid` is provided as a return `Set-Cookie` header. Here is a typical one:
```
Set-Cookie: sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; expires=Tue, 21-Nov-2017 16:33:13 GMT; httponly; Max-Age=1209600; Path=/
Set-Cookie: awx_sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; expires=Tue, 21-Nov-2017 16:33:13 GMT; httponly; Max-Age=1209600; Path=/
```
In addition, when the `awx_sessionid` cookie is set, a header called `X-API-Session-Cookie-Name` is returned. This header is only displayed once, on a successful login, and denotes the name of the session cookie. By default this is `awx_sessionid`, but it can be changed (see below).
Any client should follow the standard rules of [cookie protocol](https://tools.ietf.org/html/rfc6265) to
parse that header to obtain information about the session, such as session cookie name (`session_id`),
parse that header to obtain information about the session, such as session cookie name (`awx_sessionid`),
session cookie value, expiration date, duration, etc.
The name of the cookie is configurable by Tower Configuration setting `SESSION_COOKIE_NAME` under the category `authentication`. It is a string. The default session cookie name is `awx_sessionid`.
The duration of the cookie is configurable by Tower Configuration setting `SESSION_COOKIE_AGE` under
category `authentication`. It is an integer denoting the number of seconds the session cookie should
live. The default session cookie age is two weeks.
After a valid session is acquired, a client should provide the `session_id` as a cookie for subsequent requests
After a valid session is acquired, a client should provide the `awx_sessionid` as a cookie for subsequent requests
in order to be authenticated. For example:
```
Cookie: sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; ...
Cookie: awx_sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; ...
```
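A client that cannot rely on a cookie-aware HTTP library can parse the `Set-Cookie` header itself. A minimal sketch (hypothetical helper name) that extracts the cookie name and value following the cookie protocol of RFC 6265:

```python
# Parse the cookie-pair out of a Set-Cookie header value (RFC 6265):
# the first ';'-separated segment is 'name=value'; the remaining segments
# are attributes such as expires, Max-Age, Path, and httponly.
def parse_session_cookie(set_cookie_header):
    cookie_pair = set_cookie_header.split(';', 1)[0]
    name, _, value = cookie_pair.partition('=')
    return name.strip(), value.strip()


if __name__ == '__main__':
    header = ('awx_sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; '
              'expires=Tue, 21-Nov-2017 16:33:13 GMT; httponly; Max-Age=1209600; Path=/')
    print(parse_session_cookie(header))
```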
User should use the `/api/logout/` endpoint to log out. In the API browser, a logged-in user can do that by

View File

@@ -52,12 +52,12 @@ of the awx-operator repo. If not, continue to the next section.
### Building and Deploying a Custom AWX Operator Image
```
$ operator-sdk build quay.io/<username>/awx-operator
$ docker push quay.io/<username>/awx-operator
$ ansible-playbook ansible/deploy-operator.yml \
-e pull_policy=Always \
-e operator_image=quay.io/<username>/awx-operator \
-e operator_version=latest
# in awx-operator repo on the branch you want to use
$ export IMAGE_TAG_BASE=quay.io/<username>/awx-operator
$ export VERSION=<custom-tag>
$ make docker-build
$ docker push ${IMAGE_TAG_BASE}:${VERSION}
$ make deploy
```
## Deploy AWX into Minikube using the AWX Operator

View File

@@ -0,0 +1,3 @@
This software is made available under the terms of *either* of the licenses
found in LICENSE.APACHE or LICENSE.BSD. Contributions to this software are made
under the terms of *both* of these licenses.

View File

@@ -224,6 +224,8 @@ oauthlib==3.1.0
# social-auth-core
openshift==0.11.0
# via -r /awx_devel/requirements/requirements.in
packaging==21.3
# via ansible-runner
pbr==5.6.0
# via -r /awx_devel/requirements/requirements.in
pexpect==4.7.0
@@ -265,7 +267,9 @@ pyjwt==1.7.1
pyopenssl==19.1.0
# via twisted
pyparsing==2.4.6
# via -r /awx_devel/requirements/requirements.in
# via
# -r /awx_devel/requirements/requirements.in
# packaging
pyrad==2.3
# via django-radius
pyrsistent==0.15.7

View File

@@ -12,11 +12,11 @@
- name: Tag and Push Container Images
docker_image:
name: "{{ awx_image }}:{{ awx_version }}"
name: "{{ awx_image }}:{{ awx_image_tag }}"
repository: "{{ registry }}/{{ awx_image }}:{{ item }}"
force_tag: yes
push: true
source: local
with_items:
- "latest"
- "{{ awx_version }}"
- "{{ awx_image_tag }}"

View File

@@ -24,7 +24,7 @@ rules:
resources: ["secrets"]
verbs: ["get", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ minikube_service_account_name }}

View File

@@ -26,6 +26,8 @@
mode: '0600'
when: not lookup('vars', item.item, default='') and not item.stat.exists
loop: "{{ secrets.results }}"
loop_control:
label: '{{ item.item }}'
- name: Include generated secrets unless they are explicitly passed in
include_vars: "{{ sources_dest }}/secrets/{{ item.item }}.yml"

View File

@@ -0,0 +1,77 @@
---
#
# This is used by a CI check in GitHub Actions and isn't really
# meant to be run locally.
#
# The development environment does some unfortunate things to
# make rootless podman work inside of a docker container.
# The goal here is essentially to test that the awx user is
# able to run `podman run`.
#
- name: Test that the development environment is able to launch a job
hosts: localhost
tasks:
- name: Boot the development environment
command: |
make docker-compose
environment:
COMPOSE_UP_OPTS: -d
args:
chdir: "{{ repo_dir }}"
# Takes a while for migrations to finish
- name: Wait for the dev environment to be ready
uri:
url: "http://localhost:8013/api/v2/ping/"
register: _result
until: _result.status == 200
retries: 120
delay: 5
- name: Reset admin password
shell: |
docker exec -i tools_awx_1 bash <<EOSH
awx-manage update_password --username=admin --password=password
awx-manage create_preload_data
EOSH
- block:
- name: Launch Demo Job Template
awx.awx.job_launch:
name: Demo Job Template
wait: yes
validate_certs: no
controller_host: "http://localhost:8013"
controller_username: "admin"
controller_password: "password"
rescue:
- name: Get list of project updates and jobs
uri:
url: "http://localhost:8013/api/v2/{{ resource }}/"
user: admin
password: "password"
force_basic_auth: yes
register: job_lists
loop:
- project_updates
- jobs
loop_control:
loop_var: resource
- name: Get all job and project details
uri:
url: "http://localhost:8013{{ endpoint }}"
user: admin
password: "password"
force_basic_auth: yes
loop: |
{{ job_lists.results | map(attribute='json') | map(attribute='results') | flatten | map(attribute='url') }}
loop_control:
loop_var: endpoint
- name: Re-emit failure
vars:
failed_task:
result: '{{ ansible_failed_result }}'
fail:
msg: '{{ failed_task }}'