Mirror of https://github.com/ansible/awx.git (synced 2026-01-11 18:09:57 -03:30)

Commit 226a689f7b
@@ -1,2 +1 @@
.git
awx/ui/node_modules

.gitignore (vendored, 6 changes)
@@ -33,6 +33,7 @@ awx/ui_next/src/locales/
awx/ui_next/coverage/
awx/ui_next/build
awx/ui_next/.env.local
awx/ui_next/instrumented
rsyslog.pid
tools/prometheus/data
tools/docker-compose/Dockerfile
@@ -146,3 +147,8 @@ use_dev_supervisor.txt
*.unison.tmp
*.#
/tools/docker-compose/overrides/
/awx/ui_next/.ui-built
/Dockerfile
/_build/
/_build_kube_dev/
/Dockerfile.kube-dev

CHANGELOG.md (46 changes)
@@ -2,6 +2,50 @@

This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.

# 17.0.1 (January 26, 2021)
- Fixed pgdocker directory permissions issue with Local Docker installer: https://github.com/ansible/awx/pull/9152
- Fixed a bug in the UI which caused toggle settings to not be changed when clicked: https://github.com/ansible/awx/pull/9093

# 17.0.0 (January 22, 2021)
- AWX now requires PostgreSQL 12 by default: https://github.com/ansible/awx/pull/8943
  **Note:** users who encounter permissions errors at upgrade time should run `chown -R` on `~/.awx/pgdocker` to ensure it is owned by the user running the install playbook
- Added support for region name for OpenStack inventory: https://github.com/ansible/awx/issues/5080
- Added the ability to chain undefined attributes in custom notification templates: https://github.com/ansible/awx/issues/8677
- Dramatically simplified the `image_build` role: https://github.com/ansible/awx/pull/8980
- Fixed a bug which can cause schema migrations to fail at install time: https://github.com/ansible/awx/issues/9077
- Fixed a bug which caused the `is_superuser` user property to be out of date in certain circumstances: https://github.com/ansible/awx/pull/8833
- Fixed a bug which sometimes results in race conditions on settings access: https://github.com/ansible/awx/pull/8580
- Fixed a bug which sometimes causes an unexpected delay in stdout for some playbooks: https://github.com/ansible/awx/issues/9085
- (UI) Added support for credential password prompting on job launch: https://github.com/ansible/awx/pull/9028
- (UI) Added the ability to configure LDAP settings in the UI: https://github.com/ansible/awx/issues/8291
- (UI) Added a sync button to the Project detail view: https://github.com/ansible/awx/issues/8847
- (UI) Added a form for configuring Google OAuth 2.0 settings: https://github.com/ansible/awx/pull/8762
- (UI) Added searchable keys and related keys to the Credentials list: https://github.com/ansible/awx/issues/8603
- (UI) Added support for advanced search and copying to Notification Templates: https://github.com/ansible/awx/issues/7879
- (UI) Added support for prompting on workflow nodes: https://github.com/ansible/awx/issues/5913
- (UI) Added support for session timeouts: https://github.com/ansible/awx/pull/8250
- (UI) Fixed a bug that broke websocket streaming for the insecure ws:// protocol: https://github.com/ansible/awx/pull/8877
- (UI) Fixed a bug in the user interface when a translation for the browser's preferred locale isn't available: https://github.com/ansible/awx/issues/8884
- (UI) Fixed a bug where navigating from one survey question form directly to another wasn't reloading the form: https://github.com/ansible/awx/issues/7522
- (UI) Fixed a bug which can cause an uncaught error while launching a Job Template: https://github.com/ansible/awx/issues/8936
- Updated autobahn to address CVE-2020-35678

## 16.0.0 (December 10, 2020)
- AWX now ships with a reimagined user interface. **Please read this before upgrading:** https://groups.google.com/g/awx-project/c/KuT5Ao92HWo
- Removed support for syncing inventory from Red Hat CloudForms - https://github.com/ansible/awx/commit/0b701b3b2
- Removed support for Mercurial-based project updates - https://github.com/ansible/awx/issues/7932
- Upgraded NodeJS to the actively maintained LTS 14.15.1 - https://github.com/ansible/awx/pull/8766
- Added Git-LFS to the default image build - https://github.com/ansible/awx/pull/8700
- Added the ability to specify `metadata.labels` in the podspec for container groups - https://github.com/ansible/awx/issues/8486
- Added support for Kubernetes pod annotations - https://github.com/ansible/awx/pull/8434
- Added the ability to label the web container in local Docker installs - https://github.com/ansible/awx/pull/8449
- Added additional metadata (as an extra var) to playbook runs to report the SCM branch name - https://github.com/ansible/awx/pull/8433
- Fixed a bug that caused k8s installations to fail due to an incorrect Helm repo - https://github.com/ansible/awx/issues/8715
- Fixed a bug that prevented certain Workflow Approval resources from being deleted - https://github.com/ansible/awx/pull/8612
- Fixed a bug that prevented the deletion of inventories stuck in the "pending deletion" state - https://github.com/ansible/awx/issues/8525
- Fixed a display bug in webhook notifications with certain unicode characters - https://github.com/ansible/awx/issues/7400
- Improved support for exporting dependent objects (Inventory Hosts and Groups) in the `awx export` CLI tool - https://github.com/ansible/awx/commit/607bc0788

## 15.0.1 (October 20, 2020)
- Added several optimizations to improve performance for a variety of high-load simultaneous job launch use cases - https://github.com/ansible/awx/pull/8403
- Added the ability to source roles and collections from requirements.yaml files (not just requirements.yml) - https://github.com/ansible/awx/issues/4540
@@ -88,7 +132,7 @@ This is a list of high-level changes for each release of AWX. A full list of com
- Fixed a bug that caused rsyslogd's configuration file to have world-readable file permissions, potentially leaking secrets (CVE-2020-10782)

## 12.0.0 (Jun 9, 2020)
- Removed memcached as a dependency of AWX (https://github.com/ansible/awx/pull/7240)
- Moved to a single container image build instead of separate awx_web and awx_task images. The container image is just `awx` (https://github.com/ansible/awx/pull/7228)
- Official AWX container image builds now use a two-stage container build process that notably reduces the size of our published images (https://github.com/ansible/awx/pull/7017)
- Removed support for HipChat notifications ([EoL announcement](https://www.atlassian.com/partnerships/slack/faq#faq-98b17ca3-247f-423b-9a78-70a91681eff0)); all previously-created HipChat notification templates will be deleted due to this removal.
@@ -85,7 +85,7 @@ If you're not using Docker for Mac, or Docker for Windows, you may need, or choo

#### Frontend Development

-See [the ui development documentation](awx/ui/README.md).
+See [the ui development documentation](awx/ui_next/CONTRIBUTING.md).

### Build the environment

@@ -158,7 +158,7 @@ $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
44251b476f98 gcr.io/ansible-tower-engineering/awx_devel:devel "/entrypoint.sh /bin…" 27 seconds ago Up 23 seconds 0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp tools_awx_run_9e820694d57e
40de380e3c2e redis:latest "docker-entrypoint.s…" 28 seconds ago Up 26 seconds
-b66a506d3007 postgres:10 "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:5432->5432/tcp tools_postgres_1
+b66a506d3007 postgres:12 "docker-entrypoint.s…" 28 seconds ago Up 26 seconds 0.0.0.0:5432->5432/tcp tools_postgres_1
```

**NOTE**
@@ -60,7 +60,7 @@ Please note that deploying from `HEAD` (or the latest commit) is **not** stable,

For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).

-Once you have a local copy, run commands within the root of the project tree.
+Once you have a local copy, run the commands in the following sections from the root of the project tree.

### AWX branding
@@ -83,7 +83,7 @@ Before you can run a deployment, you'll need the following installed in your loc
- [GNU Make](https://www.gnu.org/software/make/)
- [Git](https://git-scm.com/), version 1.8.4+
- Python 3.6+
-- [Node 10.x LTS version](https://nodejs.org/en/download/)
+- [Node 14.x LTS version](https://nodejs.org/en/download/)
  + This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
- [NPM 6.x LTS](https://docs.npmjs.com/)
  + This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
@@ -497,7 +497,7 @@ Before starting the install process, review the [inventory](./installer/inventor

*docker_compose_dir*

-> When using docker-compose, the `docker-compose.yml` file will be created there (default `/tmp/awxcompose`).
+> When using docker-compose, the `docker-compose.yml` file will be created there (default `~/.awx/awxcompose`).

*custom_venv_dir*

Makefile (75 changes)
@@ -19,7 +19,8 @@ PYCURL_SSL_LIBRARY ?= openssl
COMPOSE_TAG ?= $(GIT_BRANCH)
COMPOSE_HOST ?= $(shell hostname)

-VENV_BASE ?= /venv
+VENV_BASE ?= /var/lib/awx/venv/
+COLLECTION_BASE ?= /var/lib/awx/vendor/awx_ansible_collections
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db
@@ -266,11 +267,27 @@ collectstatic:
    fi; \
    mkdir -p awx/public/static && $(PYTHON) manage.py collectstatic --clear --noinput > /dev/null 2>&1

+UWSGI_DEV_RELOAD_COMMAND ?= supervisorctl restart tower-processes:awx-dispatcher tower-processes:awx-receiver

uwsgi: collectstatic
    @if [ "$(VENV_BASE)" ]; then \
        . $(VENV_BASE)/awx/bin/activate; \
    fi; \
-    uwsgi -b 32768 --socket 127.0.0.1:8050 --module=awx.wsgi:application --home=/venv/awx --chdir=/awx_devel/ --vacuum --processes=5 --harakiri=120 --master --no-orphans --py-autoreload 1 --max-requests=1000 --stats /tmp/stats.socket --lazy-apps --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" --hook-accepting1="exec:supervisorctl restart tower-processes:awx-dispatcher tower-processes:awx-receiver"
+    uwsgi -b 32768 \
+        --socket 127.0.0.1:8050 \
+        --module=awx.wsgi:application \
+        --home=/var/lib/awx/venv/awx \
+        --chdir=/awx_devel/ \
+        --vacuum \
+        --processes=5 \
+        --harakiri=120 --master \
+        --no-orphans \
+        --py-autoreload 1 \
+        --max-requests=1000 \
+        --stats /tmp/stats.socket \
+        --lazy-apps \
+        --logformat "%(addr) %(method) %(uri) - %(proto) %(status)" \
+        --hook-accepting1="exec: $(UWSGI_DEV_RELOAD_COMMAND)"

daphne:
    @if [ "$(VENV_BASE)" ]; then \
@@ -340,7 +357,7 @@ check: flake8 pep8 # pyflakes pylint

awx-link:
    [ -d "/awx_devel/awx.egg-info" ] || python3 /awx_devel/setup.py egg_info_dev
-    cp -f /tmp/awx.egg-link /venv/awx/lib/python$(PYTHON_VERSION)/site-packages/awx.egg-link
+    cp -f /tmp/awx.egg-link /var/lib/awx/venv/awx/lib/python$(PYTHON_VERSION)/site-packages/awx.egg-link

TEST_DIRS ?= awx/main/tests/unit awx/main/tests/functional awx/conf/tests awx/sso/tests
@@ -462,19 +479,24 @@ endif

# UI TASKS
# --------------------------------------
-awx/ui_next/node_modules:
-    $(NPM_BIN) --prefix awx/ui_next install
+UI_BUILD_FLAG_FILE = awx/ui_next/.ui-built

clean-ui:
-    rm -rf node_modules
+    rm -rf awx/ui_next/node_modules
    rm -rf awx/ui_next/build
    rm -rf awx/ui_next/src/locales/_build
+    rm -rf $(UI_BUILD_FLAG_FILE)
    git checkout awx/ui_next/src/locales

-ui-release: ui-devel
-ui-devel: awx/ui_next/node_modules
-    $(NPM_BIN) --prefix awx/ui_next run extract-strings
-    $(NPM_BIN) --prefix awx/ui_next run compile-strings
-    $(NPM_BIN) --prefix awx/ui_next run build
+awx/ui_next/node_modules:
+    $(NPM_BIN) --prefix awx/ui_next --loglevel warn --ignore-scripts install
+
+$(UI_BUILD_FLAG_FILE):
+    $(NPM_BIN) --prefix awx/ui_next --loglevel warn run extract-strings
+    $(NPM_BIN) --prefix awx/ui_next --loglevel warn run compile-strings
+    $(NPM_BIN) --prefix awx/ui_next --loglevel warn run build
+    git checkout awx/ui_next/src/locales
    mkdir -p awx/public/static/css
    mkdir -p awx/public/static/js
@@ -482,6 +504,12 @@ ui-devel: awx/ui_next/node_modules
    cp -r awx/ui_next/build/static/css/* awx/public/static/css
    cp -r awx/ui_next/build/static/js/* awx/public/static/js
    cp -r awx/ui_next/build/static/media/* awx/public/static/media
+    touch $@
+
+ui-release: awx/ui_next/node_modules $(UI_BUILD_FLAG_FILE)
+
+ui-devel: awx/ui_next/node_modules
+    @$(MAKE) -B $(UI_BUILD_FLAG_FILE)

ui-zuul-lint-and-test:
    $(NPM_BIN) --prefix awx/ui_next install
@@ -567,15 +595,18 @@ docker-compose-clean: awx/projects

# Base development image build
docker-compose-build:
-    ansible localhost -m template -a "src=installer/roles/image_build/templates/Dockerfile.j2 dest=tools/docker-compose/Dockerfile" -e build_dev=True
-    docker build -t ansible/awx_devel -f tools/docker-compose/Dockerfile \
+    ansible-playbook installer/dockerfile.yml -e build_dev=True
+    docker build -t ansible/awx_devel \
+        --build-arg BUILDKIT_INLINE_CACHE=1 \
        --cache-from=$(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG) .
    docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
    #docker push $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)

# For use when developing on "isolated" AWX deployments
docker-compose-isolated-build: docker-compose-build
-    docker build -t ansible/awx_isolated -f tools/docker-isolated/Dockerfile .
+    docker build -t ansible/awx_isolated \
+        --build-arg BUILDKIT_INLINE_CACHE=1 \
+        -f tools/docker-isolated/Dockerfile .
    docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
    #docker push $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
|
||||
@ -607,7 +638,21 @@ clean-elk:
|
||||
docker rm tools_kibana_1
|
||||
|
||||
psql-container:
|
||||
docker run -it --net tools_default --rm postgres:10 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
|
||||
docker run -it --net tools_default --rm postgres:12 sh -c 'exec psql -h "postgres" -p "5432" -U postgres'
|
||||
|
||||
VERSION:
|
||||
@echo "awx: $(VERSION)"
|
||||
|
||||
Dockerfile: installer/roles/dockerfile/templates/Dockerfile.j2
|
||||
ansible-playbook installer/dockerfile.yml
|
||||
|
||||
Dockerfile.kube-dev: installer/roles/dockerfile/templates/Dockerfile.j2
|
||||
ansible-playbook installer/dockerfile.yml \
|
||||
-e dockerfile_name=Dockerfile.kube-dev \
|
||||
-e kube_dev=True \
|
||||
-e template_dest=_build_kube_dev
|
||||
|
||||
awx-kube-dev-build: Dockerfile.kube-dev
|
||||
docker build -f Dockerfile.kube-dev \
|
||||
--build-arg BUILDKIT_INLINE_CACHE=1 \
|
||||
-t $(DEV_DOCKER_TAG_BASE)/awx_kube_devel:$(COMPOSE_TAG) .
|
||||
|
||||
13
README.md
13
README.md
@@ -16,20 +16,20 @@ Contributing
------------

- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
-- All code submissions are done through pull requests against the `devel` branch.
-- All contributors must use git commit --signoff for any commit to be merged, and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
-- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
-- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
+- All code submissions are made through pull requests against the `devel` branch.
+- All contributors must use git commit --signoff for any commit to be merged and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
+- Take care to make sure no merge commits are in the submission, and use `git rebase` vs. `git merge` for this reason.
+- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net and talk about what you would like to do or add first. This not only helps everyone know what's going on, but it also helps save time and effort if the community decides some changes are needed.

Reporting Issues
----------------

-If you're experiencing a problem that you feel is a bug in AWX, or have ideas for how to improve AWX, we encourage you to open an issue, and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).
+If you're experiencing a problem that you feel is a bug in AWX or have ideas for improving AWX, we encourage you to open an issue and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).

Code of Conduct
---------------

-We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
+We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)

Get Involved
------------
@@ -43,4 +43,3 @@ License
-------

[Apache v2](./LICENSE.md)
@@ -640,7 +640,7 @@ class EmptySerializer(serializers.Serializer):


class UnifiedJobTemplateSerializer(BaseSerializer):
    # As a base serializer, the capabilities prefetch is not used directly,
    # instead they are derived from the Workflow Job Template Serializer and the Job Template Serializer, respectively.
    capabilities_prefetch = []
@@ -1748,7 +1748,7 @@ class HostSerializer(BaseSerializerWithVariables):
            attrs['variables'] = json.dumps(vars_dict)
        if Group.objects.filter(name=name, inventory=inventory).exists():
            raise serializers.ValidationError(_('A Group with that name already exists.'))

        return super(HostSerializer, self).validate(attrs)

    def to_representation(self, obj):
@@ -3945,12 +3945,12 @@ class ProjectUpdateEventSerializer(JobEventSerializer):
        return UriCleaner.remove_sensitive(obj.stdout)

    def get_event_data(self, obj):
-        # the project update playbook uses the git, hg, or svn modules
+        # the project update playbook uses the git or svn modules
        # to clone repositories, and those modules are prone to printing
        # raw SCM URLs in their stdout (which *could* contain passwords)
        # attempt to detect and filter HTTP basic auth passwords in the stdout
        # of these types of events
-        if obj.event_data.get('task_action') in ('git', 'hg', 'svn'):
+        if obj.event_data.get('task_action') in ('git', 'svn'):
            try:
                return json.loads(
                    UriCleaner.remove_sensitive(
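
The excerpt above is truncated, but the idea — scrubbing HTTP basic-auth credentials out of SCM URLs before exposing stdout — can be illustrated in isolation. A hedged sketch (the regex and function name are assumptions for illustration; this is not AWX's actual `UriCleaner`):

```python
# Hedged sketch of basic-auth redaction in SCM URLs; this is not
# AWX's UriCleaner, just an illustration of the same idea.
import re

# matches scheme://user:password@host and keeps everything but the password
BASIC_AUTH_RE = re.compile(r'(\w+://[^:/@\s]+:)[^@/\s]+@')

def redact_scm_url(text):
    return BASIC_AUTH_RE.sub(r'\1$encrypted$@', text)

# redact_scm_url('https://bob:hunter2@github.com/ansible/awx.git')
# -> 'https://bob:$encrypted$@github.com/ansible/awx.git'
```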
@@ -4,7 +4,6 @@ The following lists the expected format and details of our rrules:
* DTSTART is expected to be in UTC
* INTERVAL is required
* SECONDLY is not supported
* TZID is not supported
* RRULE must precede the rule statements
* BYDAY is supported but not BYDAY with a numerical prefix
* BYYEARDAY and BYWEEKNO are not supported
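
Since this is a list of constraints, a concrete rule that satisfies all of them may help. A hedged sketch using `python-dateutil` (an assumed dependency for the illustration; AWX's own validator is not shown):

```python
# A minimal rule satisfying the constraints above: DTSTART in UTC,
# INTERVAL present, RRULE precedes the rule body, no SECONDLY/TZID.
import datetime

from dateutil import rrule

rule = rrule.rrulestr("DTSTART:20210101T000000Z\nRRULE:FREQ=DAILY;INTERVAL=1")
print(rule.after(datetime.datetime(2021, 1, 1, tzinfo=datetime.timezone.utc)))
```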
@@ -242,8 +242,6 @@ class DashboardView(APIView):
        git_failed_projects = git_projects.filter(last_job_failed=True)
        svn_projects = user_projects.filter(scm_type='svn')
        svn_failed_projects = svn_projects.filter(last_job_failed=True)
-        hg_projects = user_projects.filter(scm_type='hg')
-        hg_failed_projects = hg_projects.filter(last_job_failed=True)
        archive_projects = user_projects.filter(scm_type='archive')
        archive_failed_projects = archive_projects.filter(last_job_failed=True)
        data['scm_types'] = {}
@@ -257,11 +255,6 @@ class DashboardView(APIView):
                                   'failures_url': reverse('api:project_list', request=request) + "?scm_type=svn&last_job_failed=True",
                                   'total': svn_projects.count(),
                                   'failed': svn_failed_projects.count()}
-        data['scm_types']['hg'] = {'url': reverse('api:project_list', request=request) + "?scm_type=hg",
-                                   'label': 'Mercurial',
-                                   'failures_url': reverse('api:project_list', request=request) + "?scm_type=hg&last_job_failed=True",
-                                   'total': hg_projects.count(),
-                                   'failed': hg_failed_projects.count()}
        data['scm_types']['archive'] = {'url': reverse('api:project_list', request=request) + "?scm_type=archive",
                                        'label': 'Remote Archive',
                                        'failures_url': reverse('api:project_list', request=request) + "?scm_type=archive&last_job_failed=True",
@@ -13,6 +13,7 @@ from django.utils.translation import ugettext_lazy as _
from awx.main.models import (
    ActivityStream,
    Inventory,
+    Host,
    Project,
    JobTemplate,
    WorkflowJobTemplate,
@@ -98,6 +99,7 @@ class OrganizationDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAPI
            organization__id=org_id).count()
        org_counts['job_templates'] = JobTemplate.accessible_objects(**access_kwargs).filter(
            organization__id=org_id).count()
+        org_counts['hosts'] = Host.objects.org_active_count(org_id)

        full_context['related_field_counts'] = {}
        full_context['related_field_counts'][org_id] = org_counts
@@ -4,6 +4,7 @@ import logging
import sys
import threading
import time
+import os

# Django
from django.conf import LazySettings
@@ -247,6 +248,7 @@ class SettingsWrapper(UserSettingsHolder):
        # These values have to be stored via self.__dict__ in this way to get
        # around the magic __setattr__ method on this class (which is used to
        # store API-assigned settings in the database).
+        self.__dict__['__forks__'] = {}
        self.__dict__['default_settings'] = default_settings
        self.__dict__['_awx_conf_settings'] = self
        self.__dict__['_awx_conf_preload_expires'] = None
@@ -255,6 +257,26 @@ class SettingsWrapper(UserSettingsHolder):
        self.__dict__['cache'] = EncryptedCacheProxy(cache, registry)
        self.__dict__['registry'] = registry

+        # record the current pid so we compare it post-fork for
+        # processes like the dispatcher and callback receiver
+        self.__dict__['pid'] = os.getpid()
+
+    def __clean_on_fork__(self):
+        pid = os.getpid()
+        # if the current pid does *not* match the value on self, it means
+        # that value was copied on fork, and we're now in a *forked* process;
+        # the *first* time we enter this code path (on setting access),
+        # forcibly close DB/cache sockets and set a marker so we don't run
+        # this code again _in this process_
+        #
+        if pid != self.__dict__['pid'] and pid not in self.__dict__['__forks__']:
+            self.__dict__['__forks__'][pid] = True
+            # It's important to close these post-fork, because we
+            # don't want the forked processes to inherit the open sockets
+            # for the DB and cache connections (that way lies race conditions)
+            connection.close()
+            django_cache.close()

    @cached_property
    def all_supported_settings(self):
        return self.registry.get_registered_settings()
@@ -330,6 +352,7 @@ class SettingsWrapper(UserSettingsHolder):
        self.cache.set_many(settings_to_cache, timeout=SETTING_CACHE_TIMEOUT)

    def _get_local(self, name, validate=True):
+        self.__clean_on_fork__()
        self._preload_cache()
        cache_key = Setting.get_cache_key(name)
        try:
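
The pattern this hunk introduces — remembering the parent pid and tearing down inherited connections the first time a forked child touches settings — can be sketched independently of AWX's `SettingsWrapper`. A minimal, hedged illustration (the class and attribute names are invented; `connection.close()` and the cache's `close()` are real Django APIs):

```python
# Minimal sketch of post-fork connection cleanup, assuming Django;
# the registry/cache details of AWX's SettingsWrapper are omitted.
import os

from django.core.cache import cache as django_cache
from django.db import connection


class ForkAwareResource:
    def __init__(self):
        self._parent_pid = os.getpid()  # recorded before any fork happens
        self._cleaned_pids = set()

    def clean_on_fork(self):
        pid = os.getpid()
        # A pid mismatch means we are in a forked child; close sockets
        # inherited from the parent exactly once per child process.
        if pid != self._parent_pid and pid not in self._cleaned_pids:
            self._cleaned_pids.add(pid)
            connection.close()      # drop the inherited DB socket
            django_cache.close()    # drop the inherited cache socket
```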
@@ -3354,6 +3354,15 @@ msgid ""
"common scenarios."
msgstr ""

+#: awx/main/models/credential/__init__.py:824
+msgid "Region Name"
+msgstr ""
+
+#: awx/main/models/credential/__init__.py:826
+msgid ""
+"For some cloud providers, like OVH, region must be specified."
+msgstr ""

#: awx/main/models/credential/__init__.py:824
#: awx/main/models/credential/__init__.py:1131
#: awx/main/models/credential/__init__.py:1166
@@ -3294,6 +3294,16 @@ msgid ""
"common scenarios."
msgstr "Les domaines OpenStack définissent les limites administratives. Ils sont nécessaires uniquement pour les URL d’authentification Keystone v3. Voir la documentation Ansible Tower pour les scénarios courants."

+#: awx/main/models/credential/__init__.py:824
+msgid "Region Name"
+msgstr "Nom de la region"
+
+#: awx/main/models/credential/__init__.py:826
+msgid ""
+"For some cloud providers, like OVH, region must be specified."
+msgstr ""
+"Chez certains fournisseurs, comme OVH, vous devez spécifier le nom de la région"

#: awx/main/models/credential/__init__.py:812
#: awx/main/models/credential/__init__.py:1110
#: awx/main/models/credential/__init__.py:1144
@@ -333,14 +333,14 @@ class BaseAccess(object):
            report_violation(_("License has expired."))

        free_instances = validation_info.get('free_instances', 0)
-        available_instances = validation_info.get('available_instances', 0)
+        instance_count = validation_info.get('instance_count', 0)

        if add_host_name:
            host_exists = Host.objects.filter(name=add_host_name).exists()
            if not host_exists and free_instances == 0:
-                report_violation(_("License count of %s instances has been reached.") % available_instances)
+                report_violation(_("License count of %s instances has been reached.") % instance_count)
            elif not host_exists and free_instances < 0:
-                report_violation(_("License count of %s instances has been exceeded.") % available_instances)
+                report_violation(_("License count of %s instances has been exceeded.") % instance_count)
        elif not add_host_name and free_instances < 0:
            report_violation(_("Host count exceeds available instances."))
@@ -280,14 +280,16 @@ def _copy_table(table, query, path):
    return file.file_list()


-@register('events_table', '1.1', format='csv', description=_('Automation task records'), expensive=True)
+@register('events_table', '1.2', format='csv', description=_('Automation task records'), expensive=True)
def events_table(since, full_path, until, **kwargs):
    events_query = '''COPY (SELECT main_jobevent.id,
                              main_jobevent.created,
                              main_jobevent.modified,
                              main_jobevent.uuid,
                              main_jobevent.parent_uuid,
                              main_jobevent.event,
+                             main_jobevent.event_data::json->'task_action' AS task_action,
+                             (CASE WHEN event = 'playbook_on_stats' THEN event_data END) as playbook_on_stats,
                              main_jobevent.failed,
                              main_jobevent.changed,
                              main_jobevent.playbook,
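
`_copy_table` above presumably streams such a `COPY (...) TO STDOUT` statement into a CSV file. A hedged sketch of that idea with psycopg2 (the helper name and the query wrapping are assumptions; `copy_expert` is the real API):

```python
# Hedged sketch: stream a COPY query to a CSV file with psycopg2.
# The function name and DSN are placeholders, not AWX's actual code.
import psycopg2

def copy_query_to_csv(dsn, query, path):
    # query should be a complete "COPY (...) TO STDOUT WITH CSV HEADER" statement
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cursor, open(path, 'w') as f:
            cursor.copy_expert(query, f)

# copy_query_to_csv("dbname=awx", "COPY (SELECT 1) TO STDOUT WITH CSV HEADER", "/tmp/out.csv")
```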
@@ -68,7 +68,7 @@ def register(key, version, description=None, format='json', expensive=False):

    @register('projects_by_scm_type', 1)
    def projects_by_scm_type():
-        return {'git': 5, 'svn': 1, 'hg': 0}
+        return {'git': 5, 'svn': 1}
    """

    def decorate(f):
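
For context, the `@register` decorator pattern shown in the docstring can be sketched as follows. This is a hedged illustration of a collector registry, not AWX's actual implementation (the `_COLLECTORS` dict and attribute names are invented):

```python
# Hedged sketch of a parameterized registration decorator like @register.
_COLLECTORS = {}  # invented module-level registry for the example

def register(key, version, description=None, format='json', expensive=False):
    def decorate(f):
        f.__awx_analytics_key__ = key          # invented attribute names
        f.__awx_analytics_version__ = version
        _COLLECTORS[key] = f
        return f  # the collector itself is returned unchanged
    return decorate

@register('projects_by_scm_type', 1)
def projects_by_scm_type():
    return {'git': 5, 'svn': 1}
```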
@@ -102,7 +102,7 @@ def gather(dest=None, module=None, subset = None, since = None, until = now(), c

    last_run = since or settings.AUTOMATION_ANALYTICS_LAST_GATHER or (now() - timedelta(weeks=4))
    logger.debug("Last analytics run was: {}".format(settings.AUTOMATION_ANALYTICS_LAST_GATHER))

    if _valid_license() is False:
        logger.exception("Invalid License provided, or No License Provided")
        return None
@@ -68,7 +68,7 @@ def metrics():
        'external_logger_type': getattr(settings, 'LOG_AGGREGATOR_TYPE', 'None')
    })

-    LICENSE_INSTANCE_TOTAL.set(str(license_info.get('available_instances', 0)))
+    LICENSE_INSTANCE_TOTAL.set(str(license_info.get('instance_count', 0)))
    LICENSE_INSTANCE_FREE.set(str(license_info.get('free_instances', 0)))

    current_counts = counts(None)
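
Those `LICENSE_INSTANCE_*` objects are plausibly `prometheus_client` gauges. A hedged sketch of how such metrics are typically declared and updated (the metric names are assumptions and may not match AWX's registry):

```python
# Hedged sketch using prometheus_client; metric names are invented
# for illustration and may not match AWX's actual metric registry.
from prometheus_client import Gauge

LICENSE_INSTANCE_TOTAL = Gauge('awx_license_instance_total', 'Total licensed instances')
LICENSE_INSTANCE_FREE = Gauge('awx_license_instance_free', 'Free licensed instances')

def publish_license_metrics(license_info):
    # Gauge.set() coerces its argument to float, so pass numbers directly.
    LICENSE_INSTANCE_TOTAL.set(license_info.get('instance_count', 0))
    LICENSE_INSTANCE_FREE.set(license_info.get('free_instances', 0))
```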
@@ -75,7 +75,7 @@ class WebsocketSecretAuthHelper:
        nonce_diff = now - nonce_parsed
        if abs(nonce_diff) > nonce_tolerance:
            logger.warn(f"Potential replay attack or machine(s) time out of sync by {nonce_diff} seconds.")
-            raise ValueError("Potential replay attack or machine(s) time out of sync by {nonce_diff} seconds.")
+            raise ValueError(f"Potential replay attack or machine(s) time out of sync by {nonce_diff} seconds.")

        return True
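
The fix above adds the missing `f` prefix so the interpolation actually happens. The overall nonce-freshness check can be sketched in isolation like so (a hedged illustration; the tolerance value and function name are assumptions):

```python
# Hedged sketch of a nonce freshness check for replay protection.
import time

NONCE_TOLERANCE_SECONDS = 300  # assumed window, not AWX's actual setting

def verify_nonce(nonce_timestamp: float) -> bool:
    nonce_diff = time.time() - nonce_timestamp
    if abs(nonce_diff) > NONCE_TOLERANCE_SECONDS:
        # abs() also catches clients whose clocks run ahead of the server
        raise ValueError(
            f"Potential replay attack or machine(s) time out of sync by {nonce_diff} seconds."
        )
    return True
```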
@@ -38,6 +38,7 @@ class CallbackBrokerWorker(BaseWorker):

    MAX_RETRIES = 2
    last_stats = time.time()
+    last_flush = time.time()
    total = 0
    last_event = ''
    prof = None
@@ -52,7 +53,7 @@ class CallbackBrokerWorker(BaseWorker):

    def read(self, queue):
        try:
-            res = self.redis.blpop(settings.CALLBACK_QUEUE, timeout=settings.JOB_EVENT_BUFFER_SECONDS)
+            res = self.redis.blpop(settings.CALLBACK_QUEUE, timeout=1)
            if res is None:
                return {'event': 'FLUSH'}
            self.total += 1
@@ -102,6 +103,7 @@ class CallbackBrokerWorker(BaseWorker):
        now = tz_now()
        if (
            force or
            (time.time() - self.last_flush) > settings.JOB_EVENT_BUFFER_SECONDS or
+            any([len(events) >= 1000 for events in self.buff.values()])
        ):
            for cls, events in self.buff.items():
@@ -124,6 +126,7 @@ class CallbackBrokerWorker(BaseWorker):
                for e in events:
                    emit_event_detail(e)
            self.buff = {}
+            self.last_flush = time.time()

    def perform_work(self, body):
        try:
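
Putting these hunks together: the worker drains Redis with a short `BLPOP` timeout, treats a timeout as a synthetic FLUSH event, and flushes its buffer on force, age, or size. A hedged, self-contained sketch of that loop (queue name and thresholds are assumptions; `redis.blpop` is the real API):

```python
# Hedged sketch of a time/size-bounded event buffer fed by Redis BLPOP.
import json
import time

import redis

FLUSH_INTERVAL = 2.0   # assumed seconds, like JOB_EVENT_BUFFER_SECONDS
MAX_BUFFERED = 1000

def save_events(events):
    # placeholder for the bulk insert the real worker performs
    print(f"flushing {len(events)} events")

class BufferedEventWorker:
    def __init__(self):
        self.redis = redis.Redis()
        self.buff = []
        self.last_flush = time.time()

    def read(self):
        # blpop returns None on timeout; translate that into a FLUSH marker
        res = self.redis.blpop('callback_events', timeout=1)
        if res is None:
            return {'event': 'FLUSH'}
        return json.loads(res[1])

    def flush(self, force=False):
        if force or (time.time() - self.last_flush) > FLUSH_INTERVAL or len(self.buff) >= MAX_BUFFERED:
            if self.buff:
                save_events(self.buff)
            self.buff = []
            self.last_flush = time.time()

    def work_once(self):
        body = self.read()
        if body.get('event') != 'FLUSH':
            self.buff.append(body)
        # evaluate the flush conditions on every message, including FLUSH wakeups
        self.flush()
```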
@@ -30,3 +30,10 @@ class _AwxTaskError():

AwxTaskError = _AwxTaskError()


+class PostRunError(Exception):
+    def __init__(self, msg, status='failed', tb=''):
+        self.status = status
+        self.tb = tb
+        super(PostRunError, self).__init__(msg)
@@ -7,6 +7,7 @@ import tempfile
import time
import logging
import yaml
+import datetime

from django.conf import settings
import ansible_runner
@@ -123,6 +124,7 @@ class IsolatedManager(object):
            dir=private_data_dir
        )
        params = self.runner_params.copy()
+        params.get('envvars', dict())['ANSIBLE_CALLBACK_WHITELIST'] = 'profile_tasks'
        params['playbook'] = playbook
        params['private_data_dir'] = iso_dir
        if idle_timeout:
@@ -149,7 +151,6 @@ class IsolatedManager(object):
            # don't rsync source control metadata (it can be huge!)
            '- /project/.git',
            '- /project/.svn',
-            '- /project/.hg',
            # don't rsync job events that are in the process of being written
            '- /artifacts/job_events/*-partial.json.tmp',
            # don't rsync the ssh_key FIFO
@@ -169,7 +170,8 @@ class IsolatedManager(object):
        extravars = {
            'src': self.private_data_dir,
            'dest': settings.AWX_PROOT_BASE_PATH,
-            'ident': self.ident
+            'ident': self.ident,
+            'job_id': self.instance.id,
        }
        if playbook:
            extravars['playbook'] = playbook
@@ -205,7 +207,10 @@ class IsolatedManager(object):
        :param interval: an interval (in seconds) to wait between status polls
        """
        interval = interval if interval is not None else settings.AWX_ISOLATED_CHECK_INTERVAL
-        extravars = {'src': self.private_data_dir}
+        extravars = {
+            'src': self.private_data_dir,
+            'job_id': self.instance.id
+        }
        status = 'failed'
        rc = None
        last_check = time.time()
@@ -221,9 +226,13 @@ class IsolatedManager(object):
                logger.warning('Isolated job {} was manually canceled.'.format(self.instance.id))

            logger.debug('Checking on isolated job {} with `check_isolated.yml`.'.format(self.instance.id))
+            time_start = datetime.datetime.now()
            runner_obj = self.run_management_playbook('check_isolated.yml',
                                                      self.private_data_dir,
                                                      extravars=extravars)
+            time_end = datetime.datetime.now()
+            time_diff = time_end - time_start
+            logger.debug('Finished checking on isolated job {} with `check_isolated.yml`; took {} seconds.'.format(self.instance.id, time_diff.total_seconds()))
            status, rc = runner_obj.status, runner_obj.rc

            if self.check_callback is not None and not self.captured_command_artifact:
@@ -21,7 +21,7 @@ from awx.main.signals import (
    disable_computed_fields
)

-from awx.main.management.commands.deletion import AWXCollector, pre_delete
+from awx.main.utils.deletion import AWXCollector, pre_delete


class Command(BaseCommand):
@@ -19,6 +19,9 @@ from django.core.management.base import BaseCommand, CommandError
from django.db import connection, transaction
from django.utils.encoding import smart_text

+# DRF error class to distinguish license exceptions
+from rest_framework.exceptions import PermissionDenied
+
# AWX inventory imports
from awx.main.models.inventory import (
    Inventory,
@@ -31,11 +34,12 @@ from awx.main.utils.safe_yaml import sanitize_jinja

# other AWX imports
from awx.main.models.rbac import batch_role_ancestor_rebuilding
+# TODO: remove proot utils once we move to running inv. updates in containers
from awx.main.utils import (
-    ignore_inventory_computed_fields,
    check_proot_installed,
    wrap_args_with_proot,
    build_proot_temp_dir,
+    ignore_inventory_computed_fields,
    get_licenser
)
from awx.main.signals import disable_activity_stream
@@ -53,11 +57,11 @@ No license.
See http://www.ansible.com/renew for license information.'''

LICENSE_MESSAGE = '''\
-Number of licensed instances exceeded, would bring available instances to %(new_count)d, system is licensed for %(available_instances)d.
+Number of licensed instances exceeded, would bring available instances to %(new_count)d, system is licensed for %(instance_count)d.
See http://www.ansible.com/renew for license extension information.'''

DEMO_LICENSE_MESSAGE = '''\
-Demo mode free license count exceeded, would bring available instances to %(new_count)d, demo mode allows %(available_instances)d.
+Demo mode free license count exceeded, would bring available instances to %(new_count)d, demo mode allows %(instance_count)d.
See http://www.ansible.com/renew for licensing information.'''
@@ -75,13 +79,11 @@ class AnsibleInventoryLoader(object):
    /usr/bin/ansible/ansible-inventory -i hosts --list
    '''

-    def __init__(self, source, is_custom=False, venv_path=None, verbosity=0):
+    def __init__(self, source, venv_path=None, verbosity=0):
        self.source = source
-        self.source_dir = functioning_dir(self.source)
-        self.is_custom = is_custom
-        self.tmp_private_dir = None
        self.method = 'ansible-inventory'
        self.verbosity = verbosity
+        # TODO: remove once proot has been removed
+        self.tmp_private_dir = None
        if venv_path:
            self.venv_path = venv_path
        else:
@@ -131,38 +133,34 @@ class AnsibleInventoryLoader(object):
        # NOTE: why do we add "python" to the start of these args?
        # the script that runs ansible-inventory specifies a python interpreter
        # that makes no sense in light of the fact that we put all the dependencies
-        # inside of /venv/ansible, so we override the specified interpreter
+        # inside of /var/lib/awx/venv/ansible, so we override the specified interpreter
        # https://github.com/ansible/ansible/issues/50714
        bargs = ['python', ansible_inventory_path, '-i', self.source]
-        bargs.extend(['--playbook-dir', self.source_dir])
+        bargs.extend(['--playbook-dir', functioning_dir(self.source)])
        if self.verbosity:
            # INFO: -vvv, DEBUG: -vvvvv, for inventory, any more than 3 makes little difference
            bargs.append('-{}'.format('v' * min(5, self.verbosity * 2 + 1)))
        logger.debug('Using base command: {}'.format(' '.join(bargs)))
        return bargs
+    # TODO: Remove this once we move to running ansible-inventory in containers
+    # and don't need proot for process isolation anymore
    def get_proot_args(self, cmd, env):
        cwd = os.getcwd()
        if not check_proot_installed():
            raise RuntimeError("proot is not installed but is configured for use")

        kwargs = {}
-        if self.is_custom:
-            # use source's tmp dir for proot, task manager will delete folder
-            logger.debug("Using provided directory '{}' for isolation.".format(self.source_dir))
-            kwargs['proot_temp_dir'] = self.source_dir
-            cwd = self.source_dir
-        else:
-            # we cannot safely store tmp data in source dir or trust script contents
-            if env['AWX_PRIVATE_DATA_DIR']:
-                # If this is non-blank, file credentials are being used and we need access
-                private_data_dir = functioning_dir(env['AWX_PRIVATE_DATA_DIR'])
-                logger.debug("Using private credential data in '{}'.".format(private_data_dir))
-                kwargs['private_data_dir'] = private_data_dir
-            self.tmp_private_dir = build_proot_temp_dir()
-            logger.debug("Using fresh temporary directory '{}' for isolation.".format(self.tmp_private_dir))
-            kwargs['proot_temp_dir'] = self.tmp_private_dir
-            kwargs['proot_show_paths'] = [functioning_dir(self.source), settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
+        # we cannot safely store tmp data in source dir or trust script contents
+        if env['AWX_PRIVATE_DATA_DIR']:
+            # If this is non-blank, file credentials are being used and we need access
+            private_data_dir = functioning_dir(env['AWX_PRIVATE_DATA_DIR'])
+            logger.debug("Using private credential data in '{}'.".format(private_data_dir))
+            kwargs['private_data_dir'] = private_data_dir
+        self.tmp_private_dir = build_proot_temp_dir()
+        logger.debug("Using fresh temporary directory '{}' for isolation.".format(self.tmp_private_dir))
+        kwargs['proot_temp_dir'] = self.tmp_private_dir
+        kwargs['proot_show_paths'] = [functioning_dir(self.source), settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
        logger.debug("Running from `{}` working directory.".format(cwd))

        if self.venv_path != settings.ANSIBLE_VENV_PATH:
@@ -170,12 +168,14 @@ class AnsibleInventoryLoader(object):

        return wrap_args_with_proot(cmd, cwd, **kwargs)

    def command_to_json(self, cmd):
        data = {}
        stdout, stderr = '', ''
        env = self.build_env()

-        if ((self.is_custom or 'AWX_PRIVATE_DATA_DIR' in env) and
+        # TODO: remove proot args once inv. updates run in containers
+        if (('AWX_PRIVATE_DATA_DIR' in env) and
                getattr(settings, 'AWX_PROOT_ENABLED', False)):
            cmd = self.get_proot_args(cmd, env)

@@ -184,11 +184,13 @@ class AnsibleInventoryLoader(object):
        stdout = smart_text(stdout)
        stderr = smart_text(stderr)

+        # TODO: can be removed when proot is removed
        if self.tmp_private_dir:
            shutil.rmtree(self.tmp_private_dir, True)

        if proc.returncode != 0:
            raise RuntimeError('%s failed (rc=%d) with stdout:\n%s\nstderr:\n%s' % (
-                self.method, proc.returncode, stdout, stderr))
+                'ansible-inventory', proc.returncode, stdout, stderr))

        for line in stderr.splitlines():
            logger.error(line)
@@ -231,9 +233,9 @@ class Command(BaseCommand):
                            action='store_true', default=False,
                            help='overwrite (rather than merge) variables')
        parser.add_argument('--keep-vars', dest='keep_vars', action='store_true', default=False,
-                            help='use database variables if set')
+                            help='DEPRECATED legacy option, has no effect')
        parser.add_argument('--custom', dest='custom', action='store_true', default=False,
-                            help='this is a custom inventory script')
+                            help='DEPRECATED indicates a custom inventory script, no longer used')
        parser.add_argument('--source', dest='source', type=str, default=None,
                            metavar='s', help='inventory directory, file, or script to load')
        parser.add_argument('--enabled-var', dest='enabled_var', type=str,
@@ -259,10 +261,10 @@ class Command(BaseCommand):
                            'specifies the unique, immutable instance ID, may be '
                            'specified as "foo.bar" to traverse nested dicts.')
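
The `--instance-id-var` option's "foo.bar" form implies a nested-dict lookup. A hedged sketch of what that traversal plausibly looks like (the helper name is invented; AWX's `_get_instance_id` appears below):

```python
# Hedged sketch of dotted-path traversal for options like
# --instance-id-var foo.bar; the function name is invented.
def lookup_dotted(variables, path, default=''):
    value = variables
    for key in path.split('.'):
        if not isinstance(value, dict) or key not in value:
            return default
        value = value[key]
    return value

# lookup_dotted({'foo': {'bar': 'i-123'}}, 'foo.bar') -> 'i-123'
```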

-    def set_logging_level(self):
+    def set_logging_level(self, verbosity):
        log_levels = dict(enumerate([logging.WARNING, logging.INFO,
                                     logging.DEBUG, 0]))
-        logger.setLevel(log_levels.get(self.verbosity, 0))
+        logger.setLevel(log_levels.get(verbosity, 0))
    def _get_instance_id(self, variables, default=''):
        '''
@@ -322,7 +324,8 @@ class Command(BaseCommand):
        else:
            raise NotImplementedError('Value of enabled {} not understood.'.format(enabled))

-    def get_source_absolute_path(self, source):
+    @staticmethod
+    def get_source_absolute_path(source):
        if not os.path.exists(source):
            raise IOError('Source does not exist: %s' % source)
        source = os.path.join(os.getcwd(), os.path.dirname(source),
@@ -330,61 +333,6 @@ class Command(BaseCommand):
        source = os.path.normpath(os.path.abspath(source))
        return source
-    def load_inventory_from_database(self):
-        '''
-        Load inventory and related objects from the database.
-        '''
-        # Load inventory object based on name or ID.
-        if self.inventory_id:
-            q = dict(id=self.inventory_id)
-        else:
-            q = dict(name=self.inventory_name)
-        try:
-            self.inventory = Inventory.objects.get(**q)
-        except Inventory.DoesNotExist:
-            raise CommandError('Inventory with %s = %s cannot be found' % list(q.items())[0])
-        except Inventory.MultipleObjectsReturned:
-            raise CommandError('Inventory with %s = %s returned multiple results' % list(q.items())[0])
-        logger.info('Updating inventory %d: %s' % (self.inventory.pk,
-                                                   self.inventory.name))
-
-        # Load inventory source if specified via environment variable (when
-        # inventory_import is called from an InventoryUpdate task).
-        inventory_source_id = os.getenv('INVENTORY_SOURCE_ID', None)
-        inventory_update_id = os.getenv('INVENTORY_UPDATE_ID', None)
-        if inventory_source_id:
-            try:
-                self.inventory_source = InventorySource.objects.get(pk=inventory_source_id,
-                                                                    inventory=self.inventory)
-            except InventorySource.DoesNotExist:
-                raise CommandError('Inventory source with id=%s not found' %
-                                   inventory_source_id)
-            try:
-                self.inventory_update = InventoryUpdate.objects.get(pk=inventory_update_id)
-            except InventoryUpdate.DoesNotExist:
-                raise CommandError('Inventory update with id=%s not found' %
-                                   inventory_update_id)
-        # Otherwise, create a new inventory source to capture this invocation
-        # via command line.
-        else:
-            with ignore_inventory_computed_fields():
-                self.inventory_source, created = InventorySource.objects.get_or_create(
-                    inventory=self.inventory,
-                    source='file',
-                    source_path=os.path.abspath(self.source),
-                    overwrite=self.overwrite,
-                    overwrite_vars=self.overwrite_vars,
-                )
-                self.inventory_update = self.inventory_source.create_inventory_update(
-                    _eager_fields=dict(
-                        job_args=json.dumps(sys.argv),
-                        job_env=dict(os.environ.items()),
-                        job_cwd=os.getcwd())
-                )
-
-        # FIXME: Wait or raise error if inventory is being updated by another
-        # source.

    def _batch_add_m2m(self, related_manager, *objs, **kwargs):
        key = (related_manager.instance.pk, related_manager.through._meta.db_table)
        flush = bool(kwargs.get('flush', False))
@@ -894,9 +842,9 @@ class Command(BaseCommand):
        source_vars = self.all_group.variables
        remote_license_type = source_vars.get('tower_metadata', {}).get('license_type', None)
        if remote_license_type is None:
-            raise CommandError('Unexpected Error: Tower inventory plugin missing needed metadata!')
+            raise PermissionDenied('Unexpected Error: Tower inventory plugin missing needed metadata!')
        if local_license_type != remote_license_type:
-            raise CommandError('Tower server licenses must match: source: {} local: {}'.format(
+            raise PermissionDenied('Tower server licenses must match: source: {} local: {}'.format(
                remote_license_type, local_license_type
            ))
@@ -905,10 +853,10 @@ class Command(BaseCommand):
        local_license_type = license_info.get('license_type', 'UNLICENSED')
        if local_license_type == 'UNLICENSED':
            logger.error(LICENSE_NON_EXISTANT_MESSAGE)
-            raise CommandError('No license found!')
+            raise PermissionDenied('No license found!')
        elif local_license_type == 'open':
            return
-        available_instances = license_info.get('available_instances', 0)
+        instance_count = license_info.get('instance_count', 0)
        free_instances = license_info.get('free_instances', 0)
        time_remaining = license_info.get('time_remaining', 0)
        hard_error = license_info.get('trial', False) is True or license_info['instance_count'] == 10
@@ -916,24 +864,24 @@ class Command(BaseCommand):
        if time_remaining <= 0:
            if hard_error:
                logger.error(LICENSE_EXPIRED_MESSAGE)
-                raise CommandError("License has expired!")
+                raise PermissionDenied("License has expired!")
            else:
                logger.warning(LICENSE_EXPIRED_MESSAGE)
        # special check for tower-type inventory sources
        # but only if running the plugin
        TOWER_SOURCE_FILES = ['tower.yml', 'tower.yaml']
-        if self.inventory_source.source == 'tower' and any(f in self.source for f in TOWER_SOURCE_FILES):
+        if self.inventory_source.source == 'tower' and any(f in self.inventory_source.source_path for f in TOWER_SOURCE_FILES):
            # only if this is the 2nd call to license check, we cannot compare before running plugin
            if hasattr(self, 'all_group'):
                self.remote_tower_license_compare(local_license_type)
        if free_instances < 0:
            d = {
                'new_count': new_count,
-                'available_instances': available_instances,
+                'instance_count': instance_count,
            }
            if hard_error:
                logger.error(LICENSE_MESSAGE % d)
-                raise CommandError('License count exceeded!')
+                raise PermissionDenied('License count exceeded!')
            else:
                logger.warning(LICENSE_MESSAGE % d)

@@ -948,7 +896,7 @@ class Command(BaseCommand):

        active_count = Host.objects.org_active_count(org.id)
        if active_count > org.max_hosts:
-            raise CommandError('Host limit for organization exceeded!')
+            raise PermissionDenied('Host limit for organization exceeded!')

    def mark_license_failure(self, save=True):
        self.inventory_update.license_error = True
@@ -959,16 +907,103 @@ class Command(BaseCommand):
        self.inventory_update.save(update_fields=['org_host_limit_error'])

    def handle(self, *args, **options):
-        self.verbosity = int(options.get('verbosity', 1))
-        self.set_logging_level()
-        self.inventory_name = options.get('inventory_name', None)
-        self.inventory_id = options.get('inventory_id', None)
-        venv_path = options.get('venv', None)
+        # Load inventory and related objects from the database.
+        inventory_name = options.get('inventory_name', None)
+        inventory_id = options.get('inventory_id', None)
+        if inventory_name and inventory_id:
+            raise CommandError('--inventory-name and --inventory-id are mutually exclusive')
+        elif not inventory_name and not inventory_id:
+            raise CommandError('--inventory-name or --inventory-id is required')
+
+        with advisory_lock('inventory_{}_import'.format(inventory_id)):
+            # Obtain the rest of the options needed to run the update
+            raw_source = options.get('source', None)
+            if not raw_source:
+                raise CommandError('--source is required')
+            verbosity = int(options.get('verbosity', 1))
+            self.set_logging_level(verbosity)
+            venv_path = options.get('venv', None)
+
+            # Load inventory object based on name or ID.
+            if inventory_id:
+                q = dict(id=inventory_id)
+            else:
+                q = dict(name=inventory_name)
+            try:
+                inventory = Inventory.objects.get(**q)
+            except Inventory.DoesNotExist:
+                raise CommandError('Inventory with %s = %s cannot be found' % list(q.items())[0])
+            except Inventory.MultipleObjectsReturned:
+                raise CommandError('Inventory with %s = %s returned multiple results' % list(q.items())[0])
+            logger.info('Updating inventory %d: %s' % (inventory.pk, inventory.name))
+
+            # Create ad hoc inventory source and inventory update objects
+            with ignore_inventory_computed_fields():
+                source = Command.get_source_absolute_path(raw_source)
+
+                inventory_source, created = InventorySource.objects.get_or_create(
+                    inventory=inventory,
+                    source='file',
+                    source_path=os.path.abspath(source),
+                    overwrite=bool(options.get('overwrite', False)),
+                    overwrite_vars=bool(options.get('overwrite_vars', False)),
+                )
+                inventory_update = inventory_source.create_inventory_update(
+                    _eager_fields=dict(
+                        job_args=json.dumps(sys.argv),
+                        job_env=dict(os.environ.items()),
+                        job_cwd=os.getcwd())
+                )
+
+            data = AnsibleInventoryLoader(
+                source=source, venv_path=venv_path, verbosity=verbosity
+            ).load()
+
+            logger.debug('Finished loading from source: %s', source)
+
+            status, tb, exc = 'error', '', None
+            try:
+                self.perform_update(options, data, inventory_update)
+                status = 'successful'
+            except Exception as e:
+                exc = e
+                if isinstance(e, KeyboardInterrupt):
+                    status = 'canceled'
+                else:
+                    tb = traceback.format_exc()
+
+            with ignore_inventory_computed_fields():
+                inventory_update = InventoryUpdate.objects.get(pk=inventory_update.pk)
+                inventory_update.result_traceback = tb
+                inventory_update.status = status
+                inventory_update.save(update_fields=['status', 'result_traceback'])
+                inventory_source.status = status
+                inventory_source.save(update_fields=['status'])
+
+            if exc:
+                logger.error(str(exc))
+
+        if exc:
+            if isinstance(exc, CommandError):
+                sys.exit(1)
+            raise exc
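
`handle()` serializes concurrent imports of the same inventory with a PostgreSQL advisory lock. A hedged sketch of the pattern, using the `django-pglocks` package as an assumed stand-in for AWX's own helper:

```python
# Hedged sketch of serializing a named critical section with a
# PostgreSQL advisory lock; django-pglocks is assumed here, and
# AWX's own advisory_lock helper may differ.
from pglocks import advisory_lock

def run_import(inventory_id):
    print('importing inventory', inventory_id)  # placeholder for the real work

def import_inventory(inventory_id):
    # all workers importing the same inventory contend on one lock name
    with advisory_lock('inventory_{}_import'.format(inventory_id)):
        run_import(inventory_id)
```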
    def perform_update(self, options, data, inventory_update):
        """Shared method for both awx-manage CLI updates and inventory updates
        from the tasks system.

        This saves the inventory data to the database by calling load_into_database,
        but also wraps that method in a host of options processing.
        """
        # outside of normal options, these are needed as part of the programmatic interface
        self.inventory = inventory_update.inventory
        self.inventory_source = inventory_update.inventory_source
        self.inventory_update = inventory_update

        # the update options; could be a parser object or dict
        self.overwrite = bool(options.get('overwrite', False))
        self.overwrite_vars = bool(options.get('overwrite_vars', False))
        self.keep_vars = bool(options.get('keep_vars', False))
        self.is_custom = bool(options.get('custom', False))
        self.source = options.get('source', None)
        self.enabled_var = options.get('enabled_var', None)
        self.enabled_value = options.get('enabled_value', None)
        self.group_filter = options.get('group_filter', None) or r'^.+$'
@@ -976,17 +1011,6 @@ class Command(BaseCommand):
        self.exclude_empty_groups = bool(options.get('exclude_empty_groups', False))
        self.instance_id_var = options.get('instance_id_var', None)

-        self.invoked_from_dispatcher = False if os.getenv('INVENTORY_SOURCE_ID', None) is None else True
-
-        # Load inventory and related objects from database.
-        if self.inventory_name and self.inventory_id:
-            raise CommandError('--inventory-name and --inventory-id are mutually exclusive')
-        elif not self.inventory_name and not self.inventory_id:
-            raise CommandError('--inventory-name or --inventory-id is required')
        if (self.overwrite or self.overwrite_vars) and self.keep_vars:
            raise CommandError('--overwrite/--overwrite-vars and --keep-vars are mutually exclusive')
-        if not self.source:
-            raise CommandError('--source is required')
        try:
            self.group_filter_re = re.compile(self.group_filter)
        except re.error:
@@ -997,146 +1021,115 @@ class Command(BaseCommand):
            raise CommandError('invalid regular expression for --host-filter')

        begin = time.time()
        with advisory_lock('inventory_{}_update'.format(self.inventory_id)):
            self.load_inventory_from_database()

        # Since perform_update can be invoked either through the awx-manage CLI
        # or from the task system, we need to create a new lock at this level
        # (even though inventory_import.Command.handle -- which calls
        # perform_update -- has its own lock, inventory_ID_import)
        with advisory_lock('inventory_{}_perform_update'.format(self.inventory.id)):

            try:
                self.check_license()
            except CommandError as e:
            except PermissionDenied as e:
                self.mark_license_failure(save=True)
                raise e

            try:
                # Check the per-org host limits
                self.check_org_host_limit()
            except CommandError as e:
            except PermissionDenied as e:
                self.mark_org_limits_failure(save=True)
                raise e

            status, tb, exc = 'error', '', None
            try:
                if settings.SQL_DEBUG:
                    queries_before = len(connection.queries)
                if settings.SQL_DEBUG:
                    queries_before = len(connection.queries)

                # Update inventory update for this command line invocation.
                with ignore_inventory_computed_fields():
                    iu = self.inventory_update
                    if iu.status != 'running':
                        with transaction.atomic():
                            self.inventory_update.status = 'running'
                            self.inventory_update.save()
                # Update inventory update for this command line invocation.
                with ignore_inventory_computed_fields():
                    # TODO: move this to before perform_update
                    iu = self.inventory_update
                    if iu.status != 'running':
                        with transaction.atomic():
                            self.inventory_update.status = 'running'
                            self.inventory_update.save()

                source = self.get_source_absolute_path(self.source)
                logger.info('Processing JSON output...')
                inventory = MemInventory(
                    group_filter_re=self.group_filter_re, host_filter_re=self.host_filter_re)
                inventory = dict_to_mem_data(data, inventory=inventory)

                data = AnsibleInventoryLoader(source=source, is_custom=self.is_custom,
                                              venv_path=venv_path, verbosity=self.verbosity).load()
                logger.info('Loaded %d groups, %d hosts', len(inventory.all_group.all_groups),
                            len(inventory.all_group.all_hosts))

                logger.debug('Finished loading from source: %s', source)
                logger.info('Processing JSON output...')
                inventory = MemInventory(
                    group_filter_re=self.group_filter_re, host_filter_re=self.host_filter_re)
                inventory = dict_to_mem_data(data, inventory=inventory)
                if self.exclude_empty_groups:
                    inventory.delete_empty_groups()

                del data  # forget dict from import, could be large
                self.all_group = inventory.all_group

                logger.info('Loaded %d groups, %d hosts', len(inventory.all_group.all_groups),
                            len(inventory.all_group.all_hosts))
                if settings.DEBUG:
                    # depending on inventory source, this output can be
                    # *exceedingly* verbose - crawling a deeply nested
                    # inventory/group data structure and printing metadata about
                    # each host and its memberships
                    #
                    # it's easy for this scale of data to overwhelm pexpect,
                    # (and it's likely only useful for purposes of debugging the
                    # actual inventory import code), so only print it if we have to:
                    # https://github.com/ansible/ansible-tower/issues/7414#issuecomment-321615104
                    self.all_group.debug_tree()

                if self.exclude_empty_groups:
                    inventory.delete_empty_groups()

                self.all_group = inventory.all_group

                if settings.DEBUG:
                    # depending on inventory source, this output can be
                    # *exceedingly* verbose - crawling a deeply nested
                    # inventory/group data structure and printing metadata about
                    # each host and its memberships
                    #
                    # it's easy for this scale of data to overwhelm pexpect,
                    # (and it's likely only useful for purposes of debugging the
                    # actual inventory import code), so only print it if we have to:
                    # https://github.com/ansible/ansible-tower/issues/7414#issuecomment-321615104
                    self.all_group.debug_tree()

                with batch_role_ancestor_rebuilding():
                    # If using with transaction.atomic() with try ... catch,
                    # with transaction.atomic() must be inside the try section of the code as per Django docs
                    try:
                        # Ensure that this is managed as an atomic SQL transaction,
                        # and thus properly rolled back if there is an issue.
                        with transaction.atomic():
                            # Merge/overwrite inventory into database.
                            if settings.SQL_DEBUG:
                                logger.warning('loading into database...')
                            with ignore_inventory_computed_fields():
                                if getattr(settings, 'ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC', True):
                with batch_role_ancestor_rebuilding():
                    # If using with transaction.atomic() with try ... catch,
                    # with transaction.atomic() must be inside the try section of the code as per Django docs
                    try:
                        # Ensure that this is managed as an atomic SQL transaction,
                        # and thus properly rolled back if there is an issue.
                        with transaction.atomic():
                            # Merge/overwrite inventory into database.
                            if settings.SQL_DEBUG:
                                logger.warning('loading into database...')
                            with ignore_inventory_computed_fields():
                                if getattr(settings, 'ACTIVITY_STREAM_ENABLED_FOR_INVENTORY_SYNC', True):
                                    self.load_into_database()
                                else:
                                    with disable_activity_stream():
                                        self.load_into_database()
                                else:
                                    with disable_activity_stream():
                                        self.load_into_database()
                            if settings.SQL_DEBUG:
                                queries_before2 = len(connection.queries)
                            self.inventory.update_computed_fields()
                            if settings.SQL_DEBUG:
                                logger.warning('update computed fields took %d queries',
                                               len(connection.queries) - queries_before2)
                            # Check if the license is valid.
                            # If the license is not valid, a CommandError will be thrown,
                            # and inventory update will be marked as invalid.
                            # with transaction.atomic() will roll back the changes.
                            license_fail = True
                            self.check_license()
                            if settings.SQL_DEBUG:
                                queries_before2 = len(connection.queries)
                            self.inventory.update_computed_fields()
                            if settings.SQL_DEBUG:
                                logger.warning('update computed fields took %d queries',
                                               len(connection.queries) - queries_before2)

                            # Check the per-org host limits
                            license_fail = False
                            self.check_org_host_limit()
                    except CommandError as e:
                        if license_fail:
                            self.mark_license_failure()
                        else:
                            self.mark_org_limits_failure()
                        raise e
                            # Check if the license is valid.
                            # If the license is not valid, a CommandError will be thrown,
                            # and inventory update will be marked as invalid.
                            # with transaction.atomic() will roll back the changes.
                            license_fail = True
                            self.check_license()

                if settings.SQL_DEBUG:
                    logger.warning('Inventory import completed for %s in %0.1fs',
                                   self.inventory_source.name, time.time() - begin)
                            # Check the per-org host limits
                            license_fail = False
                            self.check_org_host_limit()
                    except PermissionDenied as e:
                        if license_fail:
                            self.mark_license_failure(save=True)
                        else:
                            logger.info('Inventory import completed for %s in %0.1fs',
                                        self.inventory_source.name, time.time() - begin)
                            status = 'successful'
                            self.mark_org_limits_failure(save=True)
                        raise e

                # If we're in debug mode, then log the queries and time
                # used to do the operation.
                if settings.SQL_DEBUG:
                    queries_this_import = connection.queries[queries_before:]
                    sqltime = sum(float(x['time']) for x in queries_this_import)
                    logger.warning('Inventory import required %d queries '
                                   'taking %0.3fs', len(queries_this_import),
                                   sqltime)
            except Exception as e:
                if isinstance(e, KeyboardInterrupt):
                    status = 'canceled'
                    exc = e
                elif isinstance(e, CommandError):
                    exc = e
                logger.warning('Inventory import completed for %s in %0.1fs',
                               self.inventory_source.name, time.time() - begin)
                else:
                    tb = traceback.format_exc()
                    exc = e
                logger.info('Inventory import completed for %s in %0.1fs',
                            self.inventory_source.name, time.time() - begin)

            if not self.invoked_from_dispatcher:
                with ignore_inventory_computed_fields():
                    self.inventory_update = InventoryUpdate.objects.get(pk=self.inventory_update.pk)
                    self.inventory_update.result_traceback = tb
                    self.inventory_update.status = status
                    self.inventory_update.save(update_fields=['status', 'result_traceback'])
                    self.inventory_source.status = status
                    self.inventory_source.save(update_fields=['status'])

            if exc:
                logger.error(str(exc))

            if exc:
                if isinstance(exc, CommandError):
                    sys.exit(1)
                raise exc
            # If we're in debug mode, then log the queries and time
            # used to do the operation.
            if settings.SQL_DEBUG:
                queries_this_import = connection.queries[queries_before:]
                sqltime = sum(float(x['time']) for x in queries_this_import)
                logger.warning('Inventory import required %d queries '
                               'taking %0.3fs', len(queries_this_import),
                               sqltime)
@@ -181,4 +181,4 @@ class MigrationRanCheckMiddleware(MiddlewareMixin):
        plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
        if bool(plan) and \
                getattr(resolve(request.path), 'url_name', '') != 'migrations_notran':
            return redirect(reverse("ui:migrations_notran"))
            return redirect(reverse("ui_next:migrations_notran"))
awx/main/migrations/0123_drop_hg_support.py (new file, 23 lines)
@@ -0,0 +1,23 @@
from django.db import migrations, models
from awx.main.migrations._hg_removal import delete_hg_scm


class Migration(migrations.Migration):

    dependencies = [
        ('main', '0122_really_remove_cloudforms_inventory'),
    ]

    operations = [
        migrations.RunPython(delete_hg_scm),
        migrations.AlterField(
            model_name='project',
            name='scm_type',
            field=models.CharField(blank=True, choices=[('', 'Manual'), ('git', 'Git'), ('svn', 'Subversion'), ('insights', 'Red Hat Insights'), ('archive', 'Remote Archive')], default='', help_text='Specifies the source control system used to store the project.', max_length=8, verbose_name='SCM Type'),
        ),
        migrations.AlterField(
            model_name='projectupdate',
            name='scm_type',
            field=models.CharField(blank=True, choices=[('', 'Manual'), ('git', 'Git'), ('svn', 'Subversion'), ('insights', 'Red Hat Insights'), ('archive', 'Remote Archive')], default='', help_text='Specifies the source control system used to store the project.', max_length=8, verbose_name='SCM Type'),
        ),
    ]
awx/main/migrations/_hg_removal.py (new file, 19 lines)
@@ -0,0 +1,19 @@
import logging

from awx.main.utils.common import set_current_apps


logger = logging.getLogger('awx.main.migrations')


def delete_hg_scm(apps, schema_editor):
    set_current_apps(apps)
    Project = apps.get_model('main', 'Project')
    ProjectUpdate = apps.get_model('main', 'ProjectUpdate')

    ProjectUpdate.objects.filter(project__scm_type='hg').update(scm_type='')
    update_ct = Project.objects.filter(scm_type='hg').update(scm_type='')

    if update_ct:
        logger.warn('Changed {} mercurial projects to manual, deprecation period ended'.format(
            update_ct
        ))
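Note that the migration registers delete_hg_scm without a reverse function, so it cannot be unapplied. A hypothetical variant, if reversibility were ever wanted (hg support itself cannot be restored, so the reverse is a no-op):

    # Sketch only: the same data migration paired with an explicit no-op reverse.
    from django.db import migrations

    from awx.main.migrations._hg_removal import delete_hg_scm

    operations = [
        migrations.RunPython(delete_hg_scm, migrations.RunPython.noop),
    ]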
@@ -1,9 +1,13 @@
import json
import re
import logging

from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import iri_to_uri


FrozenInjectors = dict()
logger = logging.getLogger('awx.main.migrations')


class PluginFileInjector(object):
@@ -129,6 +133,7 @@ class azure_rm(PluginFileInjector):
        ret['exclude_host_filters'].append("location not in {}".format(repr(python_regions)))
        return ret


class ec2(PluginFileInjector):
    plugin_name = 'aws_ec2'
    namespace = 'amazon'

@@ -586,6 +591,7 @@ class openstack(PluginFileInjector):
        ret['inventory_hostname'] = use_host_name_for_name(source_vars['use_hostnames'])
        return ret


class rhv(PluginFileInjector):
    """ovirt uses the custom credential templating, and that is all
    """
@@ -81,10 +81,17 @@ User.add_to_class('accessible_objects', user_accessible_objects)


def enforce_bigint_pk_migration():
    #
    # NOTE: this function is not actually in use anymore,
    # but has been intentionally kept for historical purposes,
    # and to serve as an illustration if we ever need to perform
    # bulk modification/migration of event data in the future.
    #
    # see: https://github.com/ansible/awx/issues/6010
    # look at all the event tables and verify that they have been fully migrated
    # from the *old* int primary key table to the replacement bigint table
    # if not, attempt to migrate them in the background
    #
    for tblname in (
        'main_jobevent', 'main_inventoryupdateevent',
        'main_projectupdateevent', 'main_adhoccommandevent',
@@ -819,6 +819,11 @@ ManagedCredentialType(
                        'It is only needed for Keystone v3 authentication '
                        'URLs. Refer to Ansible Tower documentation for '
                        'common scenarios.')
    }, {
        'id': 'region',
        'label': ugettext_noop('Region Name'),
        'type': 'string',
        'help_text': ugettext_noop('For some cloud providers, like OVH, region must be specified'),
    }, {
        'id': 'verify_ssl',
        'label': ugettext_noop('Verify SSL'),
@@ -82,6 +82,7 @@ def _openstack_data(cred):
    if cred.has_input('domain'):
        openstack_auth['domain_name'] = cred.get_input('domain', default='')
    verify_state = cred.get_input('verify_ssl', default=True)

    openstack_data = {
        'clouds': {
            'devstack': {

@@ -90,6 +91,10 @@ def _openstack_data(cred):
            },
        },
    }

    if cred.has_input('project_region_name'):
        openstack_data['clouds']['devstack']['region_name'] = cred.get_input('project_region_name', default='')

    return openstack_data
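Illustratively, with 'project_region_name' set on the credential, the generated clouds.yml data gains a region_name key; a sketch of the resulting structure (all values below are placeholders, not defaults):

    # Sketch of _openstack_data output when a region is supplied on the credential.
    openstack_data = {
        'clouds': {
            'devstack': {
                'auth': {
                    'auth_url': 'https://keystone.example.org',
                    'username': 'demo',
                    'password': 'secret',
                    'project_name': 'demo-project',
                },
                'verify': True,
                'region_name': 'region-name',  # added by the new has_input branch
            },
        },
    }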
@@ -12,7 +12,7 @@ from django.core.mail.message import EmailMessage
from django.db import connection
from django.utils.translation import ugettext_lazy as _
from django.utils.encoding import smart_str, force_text
from jinja2 import sandbox
from jinja2 import sandbox, ChainableUndefined
from jinja2.exceptions import TemplateSyntaxError, UndefinedError, SecurityError

# AWX
@@ -357,7 +357,7 @@ class JobNotificationMixin(object):
            'url': 'https://towerhost/#/jobs/playbook/1010',
            'approval_status': 'approved',
            'approval_node_name': 'Approve Me',
            'workflow_url': 'https://towerhost/#/workflows/1010',
            'workflow_url': 'https://towerhost/#/jobs/workflow/1010',
            'job_metadata': """{'url': 'https://towerhost/$/jobs/playbook/13',
 'traceback': '',
 'status': 'running',
@@ -429,7 +429,7 @@ class JobNotificationMixin(object):
        raise RuntimeError("Define me")

    def build_notification_message(self, nt, status):
        env = sandbox.ImmutableSandboxedEnvironment()
        env = sandbox.ImmutableSandboxedEnvironment(undefined=ChainableUndefined)

        from awx.api.serializers import UnifiedJobSerializer
        job_serialization = UnifiedJobSerializer(self).to_representation(self)
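A minimal sketch of what ChainableUndefined buys custom notification templates: chained attribute lookups on missing data render as empty output instead of raising UndefinedError (the `job` value below is a stand-in, not AWX's real notification context):

    from jinja2 import sandbox, ChainableUndefined

    env = sandbox.ImmutableSandboxedEnvironment(undefined=ChainableUndefined)
    template = env.from_string('Started by {{ job.summary_fields.created_by.username }}')
    # With the default Undefined, the second attribute lookup would raise;
    # with ChainableUndefined the chain collapses to an empty string.
    print(template.render(job={}))  # -> 'Started by '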
@@ -52,7 +52,6 @@ class ProjectOptions(models.Model):
    SCM_TYPE_CHOICES = [
        ('', _('Manual')),
        ('git', _('Git')),
        ('hg', _('Mercurial')),
        ('svn', _('Subversion')),
        ('insights', _('Red Hat Insights')),
        ('archive', _('Remote Archive')),
@@ -620,7 +620,7 @@ class WorkflowJob(UnifiedJob, WorkflowJobOptions, SurveyJobMixin, JobNotificatio
        return reverse('api:workflow_job_detail', kwargs={'pk': self.pk}, request=request)

    def get_ui_url(self):
        return urljoin(settings.TOWER_URL_BASE, '/#/workflows/{}'.format(self.pk))
        return urljoin(settings.TOWER_URL_BASE, '/#/jobs/workflow/{}'.format(self.pk))

    def notification_data(self):
        result = super(WorkflowJob, self).notification_data()

@@ -752,7 +752,7 @@ class WorkflowApproval(UnifiedJob, JobNotificationMixin):
        return None

    def get_ui_url(self):
        return urljoin(settings.TOWER_URL_BASE, '/#/workflows/{}'.format(self.workflow_job.id))
        return urljoin(settings.TOWER_URL_BASE, '/#/jobs/workflow/{}'.format(self.workflow_job.id))

    def _get_parent_field_name(self):
        return 'workflow_approval_template'

@@ -840,7 +840,7 @@ class WorkflowApproval(UnifiedJob, JobNotificationMixin):
        return (msg, body)

    def context(self, approval_status):
        workflow_url = urljoin(settings.TOWER_URL_BASE, '/#/workflows/{}'.format(self.workflow_job.id))
        workflow_url = urljoin(settings.TOWER_URL_BASE, '/#/jobs/workflow/{}'.format(self.workflow_job.id))
        return {'approval_status': approval_status,
                'approval_node_name': self.workflow_approval_template.name,
                'workflow_url': workflow_url,
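The net effect of these three changes is a single route move; for example (illustrative ID):

    from urllib.parse import urljoin

    print(urljoin('https://towerhost', '/#/workflows/1010'))      # old UI route
    print(urljoin('https://towerhost', '/#/jobs/workflow/1010'))  # new UI route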
@@ -121,6 +121,27 @@ def sync_superuser_status_to_rbac(instance, **kwargs):
        Role.singleton(ROLE_SINGLETON_SYSTEM_ADMINISTRATOR).members.remove(instance)


def sync_rbac_to_superuser_status(instance, sender, **kwargs):
    'When the is_superuser flag is false but a user has the System Admin role, update the database to reflect that'
    if kwargs['action'] in ['post_add', 'post_remove', 'post_clear']:
        new_status_value = bool(kwargs['action'] == 'post_add')
        if hasattr(instance, 'singleton_name'):  # duck typing, role.members.add() vs user.roles.add()
            role = instance
            if role.singleton_name == ROLE_SINGLETON_SYSTEM_ADMINISTRATOR:
                if kwargs['pk_set']:
                    kwargs['model'].objects.filter(pk__in=kwargs['pk_set']).update(is_superuser=new_status_value)
                elif kwargs['action'] == 'post_clear':
                    kwargs['model'].objects.all().update(is_superuser=False)
        else:
            user = instance
            if kwargs['action'] == 'post_clear':
                user.is_superuser = False
                user.save(update_fields=['is_superuser'])
            elif kwargs['model'].objects.filter(pk__in=kwargs['pk_set'], singleton_name=ROLE_SINGLETON_SYSTEM_ADMINISTRATOR).exists():
                user.is_superuser = new_status_value
                user.save(update_fields=['is_superuser'])


def rbac_activity_stream(instance, sender, **kwargs):
    # Only if we are associating/disassociating
    if kwargs['action'] in ['pre_add', 'pre_remove']:
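Mirroring the regression test near the end of this diff, role membership changes in either direction now keep User.is_superuser in sync; a sketch, assuming a shell with the AWX app loaded:

    from awx.main.models import Role, User

    u = User.objects.create(username='rando')
    Role.singleton('system_administrator').members.add(u)
    u.refresh_from_db()
    assert u.is_superuser is True

    u.roles.clear()  # reverse direction fires the same m2m_changed handler
    u.refresh_from_db()
    assert u.is_superuser is False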
@@ -197,6 +218,7 @@ m2m_changed.connect(rebuild_role_ancestor_list, Role.parents.through)
m2m_changed.connect(rbac_activity_stream, Role.members.through)
m2m_changed.connect(rbac_activity_stream, Role.parents.through)
post_save.connect(sync_superuser_status_to_rbac, sender=User)
m2m_changed.connect(sync_rbac_to_superuser_status, Role.members.through)
pre_delete.connect(cleanup_detached_labels_on_deleted_parent, sender=UnifiedJob)
pre_delete.connect(cleanup_detached_labels_on_deleted_parent, sender=UnifiedJobTemplate)
@@ -23,7 +23,6 @@ import fcntl
from pathlib import Path
from uuid import uuid4
import urllib.parse as urlparse
import shlex

# Django
from django.conf import settings

@@ -61,10 +60,10 @@ from awx.main.models import (
    Inventory, InventorySource, SmartInventoryMembership,
    Job, AdHocCommand, ProjectUpdate, InventoryUpdate, SystemJob,
    JobEvent, ProjectUpdateEvent, InventoryUpdateEvent, AdHocCommandEvent, SystemJobEvent,
    build_safe_env, enforce_bigint_pk_migration
    build_safe_env
)
from awx.main.constants import ACTIVE_STATES
from awx.main.exceptions import AwxTaskError
from awx.main.exceptions import AwxTaskError, PostRunError
from awx.main.queue import CallbackQueueDispatcher
from awx.main.isolated import manager as isolated_manager
from awx.main.dispatch.publish import task

@@ -79,6 +78,7 @@ from awx.main.utils.external_logging import reconfigure_rsyslog
from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
from awx.main.utils.reload import stop_local_services
from awx.main.utils.pglock import advisory_lock
from awx.main.utils.handlers import SpecialInventoryHandler
from awx.main.consumers import emit_channel_notification
from awx.main import analytics
from awx.conf import settings_registry
@@ -138,12 +138,6 @@ def dispatch_startup():
    if Instance.objects.me().is_controller():
        awx_isolated_heartbeat()

    # at process startup, detect the need to migrate old event records from int
    # to bigint; at *some point* in the future, once certain versions of AWX
    # and Tower fall out of use/support, we can probably just _assume_ that
    # everybody has moved to bigint, and remove this code entirely
    enforce_bigint_pk_migration()

    # Update Tower's rsyslog.conf file based on logging settings in the db
    reconfigure_rsyslog()
@@ -378,6 +372,7 @@ def gather_analytics():

    from awx.conf.models import Setting
    from rest_framework.fields import DateTimeField
    from awx.main.signals import disable_activity_stream
    if not settings.INSIGHTS_TRACKING_STATE:
        return
    if not (settings.AUTOMATION_ANALYTICS_URL and settings.REDHAT_USERNAME and settings.REDHAT_PASSWORD):

@@ -414,7 +409,8 @@ def gather_analytics():
            if not _gather_and_ship(incremental_collectors, since=start, until=until):
                break
            start = until
            settings.AUTOMATION_ANALYTICS_LAST_GATHER = until
            with disable_activity_stream():
                settings.AUTOMATION_ANALYTICS_LAST_GATHER = until
        if subset:
            _gather_and_ship(subset, since=since, until=gather_time)
@@ -736,6 +732,12 @@ def update_host_smart_inventory_memberships():


@task(queue=get_local_queuename)
def migrate_legacy_event_data(tblname):
    #
    # NOTE: this function is not actually in use anymore,
    # but has been intentionally kept for historical purposes,
    # and to serve as an illustration if we ever need to perform
    # bulk modification/migration of event data in the future.
    #
    if 'event' not in tblname:
        return
    with advisory_lock(f'bigint_migration_{tblname}', wait=False) as acquired:
@@ -1225,6 +1227,13 @@ class BaseTask(object):
        Ansible runner puts a parent_uuid on each event, no matter what the type.
        AWX only saves the parent_uuid if the event is for a Job.
        '''
        # cache end_line locally for RunInventoryUpdate tasks
        # which generate job events from two 'streams':
        # ansible-inventory and the awx.main.commands.inventory_import
        # logger
        if isinstance(self, RunInventoryUpdate):
            self.end_line = event_data['end_line']

        if event_data.get(self.event_data_key, None):
            if self.event_data_key != 'job_id':
                event_data.pop('parent_uuid', None)
@@ -1253,7 +1262,7 @@ class BaseTask(object):
        # so it *should* have a negligible performance impact
        task = event_data.get('event_data', {}).get('task_action')
        try:
            if task in ('git', 'hg', 'svn'):
            if task in ('git', 'svn'):
                event_data_json = json.dumps(event_data)
                event_data_json = UriCleaner.remove_sensitive(event_data_json)
                event_data = json.loads(event_data_json)
@@ -1521,6 +1530,12 @@ class BaseTask(object):

        try:
            self.post_run_hook(self.instance, status)
        except PostRunError as exc:
            if status == 'successful':
                status = exc.status
                extra_update_fields['job_explanation'] = exc.args[0]
                if exc.tb:
                    extra_update_fields['result_traceback'] = exc.tb
        except Exception:
            logger.exception('{} Post run hook errored.'.format(self.instance.log_format))
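With this handler in place, any task's post_run_hook can downgrade a nominally successful run; a hedged sketch (only PostRunError itself is AWX API here, the failure check is a placeholder):

    from awx.main.exceptions import PostRunError

    def post_run_hook(self, instance, status):
        # hypothetical check, not a real attribute on AWX job models
        if status == 'successful' and not instance.artifacts_ok:
            raise PostRunError('artifacts could not be saved', status='error')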
@@ -2141,7 +2156,7 @@ class RunProjectUpdate(BaseTask):
            elif not scm_branch:
                raise RuntimeError('Could not determine a revision to run from project.')
        elif not scm_branch:
            scm_branch = {'hg': 'tip'}.get(project_update.scm_type, 'HEAD')
            scm_branch = 'HEAD'

        galaxy_creds_are_defined = (
            project_update.project.organization and

@@ -2150,7 +2165,7 @@ class RunProjectUpdate(BaseTask):
        if not galaxy_creds_are_defined and (
            settings.AWX_ROLES_ENABLED or settings.AWX_COLLECTIONS_ENABLED
        ):
            logger.debug(
            logger.warning(
                'Galaxy role/collection syncing is enabled, but no '
                f'credentials are configured for {project_update.project.organization}.'
            )
@@ -2417,9 +2432,10 @@ class RunProjectUpdate(BaseTask):
                shutil.rmtree(stage_path)  # cannot trust content update produced

        if self.job_private_data_dir:
            # copy project folder before resetting to default branch
            # because some git-tree-specific resources (like submodules) might matter
            self.make_local_copy(instance, self.job_private_data_dir)
            if status == 'successful':
                # copy project folder before resetting to default branch
                # because some git-tree-specific resources (like submodules) might matter
                self.make_local_copy(instance, self.job_private_data_dir)
            if self.original_branch:
                # for git project syncs, non-default branches can be problems
                # restore to branch the repo was on before this run
@@ -2461,6 +2477,14 @@ class RunInventoryUpdate(BaseTask):
    event_model = InventoryUpdateEvent
    event_data_key = 'inventory_update_id'

    # TODO: remove once inv updates run in containers
    def should_use_proot(self, inventory_update):
        '''
        Return whether this task should use proot.
        '''
        return getattr(settings, 'AWX_PROOT_ENABLED', False)

    # TODO: remove once inv updates run in containers
    @property
    def proot_show_paths(self):
        return [settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
@@ -2485,15 +2509,11 @@ class RunInventoryUpdate(BaseTask):
        return injector.build_private_data(inventory_update, private_data_dir)

    def build_env(self, inventory_update, private_data_dir, isolated, private_data_files=None):
        """Build environment dictionary for inventory import.
        """Build environment dictionary for ansible-inventory.

        This used to be the mechanism by which any data that needs to be passed
        to the inventory update script is set up. In particular, this is how
        inventory update is aware of its proper credentials.

        Most environment injection is now accomplished by the credential
        injectors. The primary purpose this still serves is to
        still point to the inventory update INI or config file.
        Most environment variables related to credentials or configuration
        are accomplished by the inventory source injectors (in this method)
        or custom credential type injectors (in main run method).
        """
        env = super(RunInventoryUpdate, self).build_env(inventory_update,
                                                        private_data_dir,

@@ -2501,8 +2521,11 @@ class RunInventoryUpdate(BaseTask):
                                                        private_data_files=private_data_files)
        if private_data_files is None:
            private_data_files = {}
        self.add_awx_venv(env)
        # Pass inventory source ID to inventory script.
        # TODO: remove once containers replace custom venvs
        self.add_ansible_venv(inventory_update.ansible_virtualenv_path, env, isolated=isolated)

        # Legacy environment variables that were used as a signal to the awx-manage
        # command; now they are provided in case some scripts may be relying on them
        env['INVENTORY_SOURCE_ID'] = str(inventory_update.inventory_source_id)
        env['INVENTORY_UPDATE_ID'] = str(inventory_update.pk)
        env.update(STANDARD_INVENTORY_UPDATE_ENV)
@@ -2565,47 +2588,25 @@ class RunInventoryUpdate(BaseTask):
        if inventory is None:
            raise RuntimeError('Inventory Source is not associated with an Inventory.')

        # Piece together the initial command to run via. the shell.
        args = ['awx-manage', 'inventory_import']
        args.extend(['--inventory-id', str(inventory.pk)])
        args = ['ansible-inventory', '--list', '--export']

        # Add appropriate arguments for overwrite if the inventory_update
        # object calls for it.
        if inventory_update.overwrite:
            args.append('--overwrite')
        if inventory_update.overwrite_vars:
            args.append('--overwrite-vars')
        # Add arguments for the source inventory file/script/thing
        source_location = self.pseudo_build_inventory(inventory_update, private_data_dir)
        args.append('-i')
        args.append(source_location)

        # Declare the virtualenv the management command should activate
        # as it calls ansible-inventory
        args.extend(['--venv', inventory_update.ansible_virtualenv_path])
        args.append('--output')
        args.append(os.path.join(private_data_dir, 'artifacts', 'output.json'))

        src = inventory_update.source
        if inventory_update.enabled_var:
            args.extend(['--enabled-var', shlex.quote(inventory_update.enabled_var)])
            args.extend(['--enabled-value', shlex.quote(inventory_update.enabled_value)])
        if os.path.isdir(source_location):
            playbook_dir = source_location
        else:
            if getattr(settings, '%s_ENABLED_VAR' % src.upper(), False):
                args.extend(['--enabled-var',
                             getattr(settings, '%s_ENABLED_VAR' % src.upper())])
            if getattr(settings, '%s_ENABLED_VALUE' % src.upper(), False):
                args.extend(['--enabled-value',
                             getattr(settings, '%s_ENABLED_VALUE' % src.upper())])
            if inventory_update.host_filter:
                args.extend(['--host-filter', shlex.quote(inventory_update.host_filter)])
            if getattr(settings, '%s_EXCLUDE_EMPTY_GROUPS' % src.upper()):
                args.append('--exclude-empty-groups')
            if getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper(), False):
                args.extend(['--instance-id-var',
                             "'{}'".format(getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper())),])
            # Add arguments for the source inventory script
            args.append('--source')
            args.append(self.pseudo_build_inventory(inventory_update, private_data_dir))
            if src == 'custom':
                args.append("--custom")
            args.append('-v%d' % inventory_update.verbosity)
            if settings.DEBUG:
                args.append('--traceback')
            playbook_dir = os.path.dirname(source_location)
        args.extend(['--playbook-dir', playbook_dir])

        if inventory_update.verbosity:
            args.append('-' + 'v' * min(5, inventory_update.verbosity * 2 + 1))

        return args
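For a typical update at verbosity 1, the list assembled above comes out roughly like this (paths are placeholders for the private data directory contents):

    # Illustrative only: the assembled ansible-inventory invocation.
    args = [
        'ansible-inventory', '--list', '--export',
        '-i', '/tmp/awx_42_xyz/aws_ec2.yml',
        '--output', '/tmp/awx_42_xyz/artifacts/output.json',
        '--playbook-dir', '/tmp/awx_42_xyz',
        '-vvv',  # verbosity 1 maps to min(5, 1 * 2 + 1) = 3 v's
    ]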
    def build_inventory(self, inventory_update, private_data_dir):
@@ -2645,11 +2646,9 @@ class RunInventoryUpdate(BaseTask):

    def build_cwd(self, inventory_update, private_data_dir):
        '''
        There are two cases where the inventory "source" is in a different
        There is one case where the inventory "source" is in a different
        location from the private data:
        - deprecated vendored inventory scripts in awx/plugins/inventory
        - SCM, where source needs to live in the project folder
        in these cases, the inventory does not exist in the standard tempdir
        '''
        src = inventory_update.source
        if src == 'scm' and inventory_update.source_project_update:

@@ -2707,6 +2706,75 @@ class RunInventoryUpdate(BaseTask):
        # This follows update, not sync, so make copy here
        RunProjectUpdate.make_local_copy(source_project, private_data_dir)
    def post_run_hook(self, inventory_update, status):
        if status != 'successful':
            return  # nothing to save, step out of the way to allow error reporting

        private_data_dir = inventory_update.job_env['AWX_PRIVATE_DATA_DIR']
        expected_output = os.path.join(private_data_dir, 'artifacts', 'output.json')
        with open(expected_output) as f:
            data = json.load(f)

        # build inventory save options
        options = dict(
            overwrite=inventory_update.overwrite,
            overwrite_vars=inventory_update.overwrite_vars,
        )
        src = inventory_update.source

        if inventory_update.enabled_var:
            options['enabled_var'] = inventory_update.enabled_var
            options['enabled_value'] = inventory_update.enabled_value
        else:
            if getattr(settings, '%s_ENABLED_VAR' % src.upper(), False):
                options['enabled_var'] = getattr(settings, '%s_ENABLED_VAR' % src.upper())
            if getattr(settings, '%s_ENABLED_VALUE' % src.upper(), False):
                options['enabled_value'] = getattr(settings, '%s_ENABLED_VALUE' % src.upper())

        if inventory_update.host_filter:
            options['host_filter'] = inventory_update.host_filter

        if getattr(settings, '%s_EXCLUDE_EMPTY_GROUPS' % src.upper()):
            options['exclude_empty_groups'] = True
        if getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper(), False):
            options['instance_id_var'] = getattr(settings, '%s_INSTANCE_ID_VAR' % src.upper())

        # Verbosity is applied to saving process, as well as ansible-inventory CLI option
        if inventory_update.verbosity:
            options['verbosity'] = inventory_update.verbosity

        handler = SpecialInventoryHandler(
            self.event_handler, self.cancel_callback,
            verbosity=inventory_update.verbosity,
            job_timeout=self.get_instance_timeout(self.instance),
            start_time=inventory_update.started,
            counter=self.event_ct, initial_line=self.end_line
        )
        inv_logger = logging.getLogger('awx.main.commands.inventory_import')
        formatter = inv_logger.handlers[0].formatter
        formatter.job_start = inventory_update.started
        handler.formatter = formatter
        inv_logger.handlers[0] = handler

        from awx.main.management.commands.inventory_import import Command as InventoryImportCommand
        cmd = InventoryImportCommand()
        try:
            # save the inventory data to database.
            # canceling exceptions will be handled in the global post_run_hook
            cmd.perform_update(options, data, inventory_update)
        except PermissionDenied as exc:
            logger.exception('License error saving {} content'.format(inventory_update.log_format))
            raise PostRunError(str(exc), status='error')
        except PostRunError:
            logger.exception('Error saving {} content, rolling back changes'.format(inventory_update.log_format))
            raise
        except Exception:
            logger.exception('Exception saving {} content, rolling back changes.'.format(
                inventory_update.log_format))
            raise PostRunError(
                'Error occured while saving inventory data, see traceback or server logs',
                status='error', tb=traceback.format_exc())


@task(queue=get_local_queuename)
class RunAdHocCommand(BaseTask):
@@ -2,7 +2,7 @@ import pytest

from awx.api.versioning import reverse

from awx.main.models import Project
from awx.main.models import Project, Host


@pytest.fixture
@@ -81,6 +81,8 @@ def test_org_counts_detail_admin(resourced_organization, user, get):
    assert response.status_code == 200

    counts = response.data['summary_fields']['related_field_counts']
    assert counts['hosts'] == 0
    counts.pop('hosts')
    assert counts == COUNTS_PRIMES


@@ -93,6 +95,8 @@ def test_org_counts_detail_member(resourced_organization, user, get):
    assert response.status_code == 200

    counts = response.data['summary_fields']['related_field_counts']
    assert counts['hosts'] == 0
    counts.pop('hosts')
    assert counts == {
        'users': COUNTS_PRIMES['users'],  # Policy is that members can see other users and admins
        'admins': COUNTS_PRIMES['admins'],

@@ -111,6 +115,7 @@ def test_org_counts_list_admin(resourced_organization, user, get):
    assert response.status_code == 200

    counts = response.data['results'][0]['summary_fields']['related_field_counts']
    assert 'hosts' not in counts  # doesn't show in list view
    assert counts == COUNTS_PRIMES


@@ -123,6 +128,7 @@ def test_org_counts_list_member(resourced_organization, user, get):
    assert response.status_code == 200

    counts = response.data['results'][0]['summary_fields']['related_field_counts']
    assert 'hosts' not in counts  # doesn't show in list view

    assert counts == {
        'users': COUNTS_PRIMES['users'],  # Policy is that members can see other users and admins

@@ -145,6 +151,7 @@ def test_new_org_zero_counts(user, post):

    new_org_list = post_response.render().data
    counts_dict = new_org_list['summary_fields']['related_field_counts']
    assert 'hosts' not in counts_dict  # doesn't show in list view
    assert counts_dict == COUNTS_ZEROS


@@ -167,6 +174,19 @@ def test_two_organizations(resourced_organization, organizations, user, get):
    assert counts[org_id_zero] == COUNTS_ZEROS


@pytest.mark.django_db
def test_hosts_counted(resourced_organization, user, get):
    admin_user = user('admin', True)
    assert Host.objects.org_active_count(resourced_organization.id) == 0
    resourced_organization.inventories.first().hosts.create(name='Some Host')
    assert Host.objects.org_active_count(resourced_organization.id) == 1
    response = get(reverse('api:organization_detail', kwargs={'pk': resourced_organization.pk}), admin_user)
    assert response.status_code == 200

    counts = response.data['summary_fields']['related_field_counts']
    assert counts['hosts'] == Host.objects.org_active_count(resourced_organization.id) == 1


@pytest.mark.django_db
def test_scan_JT_counted(resourced_organization, user, get):
    admin_user = user('admin', True)

@@ -180,7 +200,10 @@ def test_scan_JT_counted(resourced_organization, user, get):
    # Test detail view
    detail_response = get(reverse('api:organization_detail', kwargs={'pk': resourced_organization.pk}), admin_user)
    assert detail_response.status_code == 200
    assert detail_response.data['summary_fields']['related_field_counts'] == counts_dict
    counts = detail_response.data['summary_fields']['related_field_counts']
    assert 'hosts' in counts
    counts.pop('hosts')
    assert counts == counts_dict


@pytest.mark.django_db

@@ -205,4 +228,7 @@ def test_JT_not_double_counted(resourced_organization, user, get):
    # Test detail view
    detail_response = get(reverse('api:organization_detail', kwargs={'pk': resourced_organization.pk}), admin_user)
    assert detail_response.status_code == 200
    assert detail_response.data['summary_fields']['related_field_counts'] == counts_dict
    counts = detail_response.data['summary_fields']['related_field_counts']
    assert 'hosts' in counts
    counts.pop('hosts')
    assert counts == counts_dict
@@ -393,3 +393,43 @@ def test_saml_x509cert_validation(patch, get, admin, headers):
        }
    })
    assert resp.status_code == 200


@pytest.mark.django_db
def test_github_settings(get, put, patch, delete, admin):
    url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'github'})
    get(url, user=admin, expect=200)
    delete(url, user=admin, expect=204)
    response = get(url, user=admin, expect=200)
    data = dict(response.data.items())
    put(url, user=admin, data=data, expect=200)
    patch(url, user=admin, data={'SOCIAL_AUTH_GITHUB_KEY': '???'}, expect=200)
    response = get(url, user=admin, expect=200)
    assert response.data['SOCIAL_AUTH_GITHUB_KEY'] == '???'
    data.pop('SOCIAL_AUTH_GITHUB_KEY')
    put(url, user=admin, data=data, expect=200)
    response = get(url, user=admin, expect=200)
    assert response.data['SOCIAL_AUTH_GITHUB_KEY'] == ''


@pytest.mark.django_db
def test_github_enterprise_settings(get, put, patch, delete, admin):
    url = reverse('api:setting_singleton_detail', kwargs={'category_slug': 'github-enterprise'})
    get(url, user=admin, expect=200)
    delete(url, user=admin, expect=204)
    response = get(url, user=admin, expect=200)
    data = dict(response.data.items())
    put(url, user=admin, data=data, expect=200)
    patch(url, user=admin, data={
        'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL': 'example.com',
        'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL': 'example.com',
    }, expect=200)
    response = get(url, user=admin, expect=200)
    assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_URL'] == 'example.com'
    assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL'] == 'example.com'
    data.pop('SOCIAL_AUTH_GITHUB_ENTERPRISE_URL')
    data.pop('SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL')
    put(url, user=admin, data=data, expect=200)
    response = get(url, user=admin, expect=200)
    assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_URL'] == ''
    assert response.data['SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL'] == ''
@@ -6,7 +6,7 @@ from collections import OrderedDict
from django.db.models.deletion import Collector, SET_NULL, CASCADE
from django.core.management import call_command

from awx.main.management.commands.deletion import AWXCollector
from awx.main.utils.deletion import AWXCollector
from awx.main.models import (
    JobTemplate, User, Job, JobEvent, Notification,
    WorkflowJobNode, JobHostSummary
@@ -9,6 +9,9 @@ import os
# Django
from django.core.management.base import CommandError

# for license errors
from rest_framework.exceptions import PermissionDenied

# AWX
from awx.main.management.commands import inventory_import
from awx.main.models import Inventory, Host, Group, InventorySource

@@ -83,7 +86,7 @@ class MockLoader:
        return self._data


def mock_logging(self):
def mock_logging(self, level):
    pass


@@ -322,6 +325,6 @@ def test_tower_version_compare():
        "version": "2.0.1-1068-g09684e2c41"
        }
    }
    with pytest.raises(CommandError):
    with pytest.raises(PermissionDenied):
        cmd.remote_tower_license_compare('very_supported')
        cmd.remote_tower_license_compare('open')
@@ -16,7 +16,7 @@ def test_awx_virtualenv_from_settings(inventory, project, machine_credential):
    )
    jt.credentials.add(machine_credential)
    job = jt.create_unified_job()
    assert job.ansible_virtualenv_path == '/venv/ansible'
    assert job.ansible_virtualenv_path == '/var/lib/awx/venv/ansible'


@pytest.mark.django_db

@@ -43,28 +43,28 @@ def test_awx_custom_virtualenv(inventory, project, machine_credential, organizat
    jt.credentials.add(machine_credential)
    job = jt.create_unified_job()

    job.organization.custom_virtualenv = '/venv/fancy-org'
    job.organization.custom_virtualenv = '/var/lib/awx/venv/fancy-org'
    job.organization.save()
    assert job.ansible_virtualenv_path == '/venv/fancy-org'
    assert job.ansible_virtualenv_path == '/var/lib/awx/venv/fancy-org'

    job.project.custom_virtualenv = '/venv/fancy-proj'
    job.project.custom_virtualenv = '/var/lib/awx/venv/fancy-proj'
    job.project.save()
    assert job.ansible_virtualenv_path == '/venv/fancy-proj'
    assert job.ansible_virtualenv_path == '/var/lib/awx/venv/fancy-proj'

    job.job_template.custom_virtualenv = '/venv/fancy-jt'
    job.job_template.custom_virtualenv = '/var/lib/awx/venv/fancy-jt'
    job.job_template.save()
    assert job.ansible_virtualenv_path == '/venv/fancy-jt'
    assert job.ansible_virtualenv_path == '/var/lib/awx/venv/fancy-jt'


@pytest.mark.django_db
def test_awx_custom_virtualenv_without_jt(project):
    project.custom_virtualenv = '/venv/fancy-proj'
    project.custom_virtualenv = '/var/lib/awx/venv/fancy-proj'
    project.save()
    job = Job(project=project)
    job.save()

    job = Job.objects.get(pk=job.id)
    assert job.ansible_virtualenv_path == '/venv/fancy-proj'
    assert job.ansible_virtualenv_path == '/var/lib/awx/venv/fancy-proj'


@pytest.mark.django_db
@@ -214,6 +214,9 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
        f"'{inventory_filename}' file not found in inventory update runtime files {content.keys()}"

    env.pop('ANSIBLE_COLLECTIONS_PATHS', None)  # collection paths not relevant to this test
    env.pop('PYTHONPATH')
    env.pop('VIRTUAL_ENV')
    env.pop('PROOT_TMP_DIR')
    base_dir = os.path.join(DATA, 'plugins')
    if not os.path.exists(base_dir):
        os.mkdir(base_dir)
@@ -4,7 +4,7 @@ from unittest import mock
from django.test import TransactionTestCase

from awx.main.access import UserAccess, RoleAccess, TeamAccess
from awx.main.models import User, Organization, Inventory
from awx.main.models import User, Organization, Inventory, Role


class TestSysAuditorTransactional(TransactionTestCase):

@@ -170,4 +170,34 @@ def test_org_admin_cannot_delete_member_attached_to_other_group(org_admin, org_m
    access = UserAccess(org_admin)
    other_org.member_role.members.add(org_member)
    assert not access.can_delete(org_member)


@pytest.mark.parametrize('reverse', (True, False))
@pytest.mark.django_db
def test_consistency_of_is_superuser_flag(reverse):
    users = [User.objects.create(username='rando_{}'.format(i)) for i in range(2)]
    for u in users:
        assert u.is_superuser is False

    system_admin = Role.singleton('system_administrator')
    if reverse:
        for u in users:
            u.roles.add(system_admin)
    else:
        system_admin.members.add(*[u.id for u in users])  # like .add(42, 54)

    for u in users:
        u.refresh_from_db()
        assert u.is_superuser is True

    users[0].roles.clear()
    for u in users:
        u.refresh_from_db()
    assert users[0].is_superuser is False
    assert users[1].is_superuser is True

    system_admin.members.clear()

    for u in users:
        u.refresh_from_db()
        assert u.is_superuser is False
@@ -35,16 +35,17 @@ data_loggly = {
# Test reconfigure logging settings function
# name this whatever you want
@pytest.mark.parametrize(
    'enabled, log_type, host, port, protocol, expected_config', [
    'enabled, log_type, host, port, protocol, errorfile, expected_config', [
        (
            True,
            'loggly',
            'http://logs-01.loggly.com/inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/',
            None,
            'https',
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")',  # noqa
                'action(type="omhttp" server="logs-01.loggly.com" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="inputs/1fd38090-2af1-4e1e-8d80-492899da0f71/tag/http/")',  # noqa
            ])
        ),
        (

@@ -53,6 +54,7 @@ data_loggly = {
            'localhost',
            9000,
            'udp',
            '',  # empty errorfile
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")',
                'action(type="omfwd" target="localhost" port="9000" protocol="udp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")',  # noqa

@@ -64,6 +66,7 @@ data_loggly = {
            'localhost',
            9000,
            'tcp',
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")',
                'action(type="omfwd" target="localhost" port="9000" protocol="tcp" action.resumeRetryCount="-1" action.resumeInterval="5" template="awx")',  # noqa

@@ -75,9 +78,10 @@ data_loggly = {
            'https://yoursplunk/services/collector/event',
            None,
            None,
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")',  # noqa
                'action(type="omhttp" server="yoursplunk" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")',  # noqa
            ])
        ),
        (

@@ -86,9 +90,10 @@ data_loggly = {
            'http://yoursplunk/services/collector/event',
            None,
            None,
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")',  # noqa
                'action(type="omhttp" server="yoursplunk" serverport="80" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")',  # noqa
            ])
        ),
        (

@@ -97,9 +102,10 @@ data_loggly = {
            'https://yoursplunk:8088/services/collector/event',
            None,
            None,
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")',  # noqa
                'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")',  # noqa
            ])
        ),
        (

@@ -108,9 +114,10 @@ data_loggly = {
            'https://yoursplunk/services/collector/event',
            8088,
            None,
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")',  # noqa
                'action(type="omhttp" server="yoursplunk" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")',  # noqa
            ])
        ),
        (

@@ -119,9 +126,10 @@ data_loggly = {
            'yoursplunk.org/services/collector/event',
            8088,
            'https',
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")',  # noqa
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")',  # noqa
            ])
        ),
        (

@@ -130,9 +138,10 @@ data_loggly = {
            'http://yoursplunk.org/services/collector/event',
            8088,
            None,
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="services/collector/event")',  # noqa
                'action(type="omhttp" server="yoursplunk.org" serverport="8088" usehttps="off" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="services/collector/event")',  # noqa
            ])
        ),
        (

@@ -141,14 +150,15 @@ data_loggly = {
            'https://endpoint5.collection.us2.sumologic.com/receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==',  # noqa
            None,
            'https',
            '/var/log/tower/rsyslog.err',
            '\n'.join([
                'template(name="awx" type="string" string="%rawmsg-after-pri%")\nmodule(load="omhttp")',
                'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" errorfile="/var/log/tower/rsyslog.err" action.resumeInterval="5" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")',  # noqa
                'action(type="omhttp" server="endpoint5.collection.us2.sumologic.com" serverport="443" usehttps="on" allowunsignedcerts="off" skipverifyhost="off" action.resumeRetryCount="-1" template="awx" action.resumeInterval="5" errorfile="/var/log/tower/rsyslog.err" restpath="receiver/v1/http/ZaVnC4dhaV0qoiETY0MrM3wwLoDgO1jFgjOxE6-39qokkj3LGtOroZ8wNaN2M6DtgYrJZsmSi4-36_Up5TbbN_8hosYonLKHSSOSKY845LuLZBCBwStrHQ==")',  # noqa
            ])
        ),
    ]
)
def test_rsyslog_conf_template(enabled, log_type, host, port, protocol, expected_config):
def test_rsyslog_conf_template(enabled, log_type, host, port, protocol, errorfile, expected_config):

    mock_settings, _ = _mock_logging_defaults()

@@ -159,6 +169,7 @@ def test_rsyslog_conf_template(enabled, log_type, host, port, protocol, expected
    setattr(mock_settings, 'LOG_AGGREGATOR_ENABLED', enabled)
    setattr(mock_settings, 'LOG_AGGREGATOR_TYPE', log_type)
    setattr(mock_settings, 'LOG_AGGREGATOR_HOST', host)
    setattr(mock_settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', errorfile)
    if port:
        setattr(mock_settings, 'LOG_AGGREGATOR_PORT', port)
    if protocol:
@@ -33,32 +33,6 @@ class TestInvalidOptions:
        assert 'inventory-id' in str(err.value)
        assert 'exclusive' in str(err.value)

    def test_invalid_options_id_and_keep_vars(self):
        # You can't overwrite and keep_vars at the same time, that wouldn't make sense
        cmd = Command()
        with pytest.raises(CommandError) as err:
            cmd.handle(
                inventory_id=42, overwrite=True, keep_vars=True
            )
        assert 'overwrite-vars' in str(err.value)
        assert 'exclusive' in str(err.value)

    def test_invalid_options_id_but_no_source(self):
        # Need a source to import
        cmd = Command()
        with pytest.raises(CommandError) as err:
            cmd.handle(
                inventory_id=42, overwrite=True, keep_vars=True
            )
        assert 'overwrite-vars' in str(err.value)
        assert 'exclusive' in str(err.value)
        with pytest.raises(CommandError) as err:
            cmd.handle(
                inventory_id=42, overwrite_vars=True, keep_vars=True
            )
        assert 'overwrite-vars' in str(err.value)
        assert 'exclusive' in str(err.value)

    def test_invalid_options_missing_source(self):
        cmd = Command()
        with pytest.raises(CommandError) as err:
@@ -180,7 +180,7 @@ def test_openstack_client_config_generation(mocker, source, expected, private_da
        'source_vars_dict': {},
        'get_cloud_credential': mocker.Mock(return_value=credential),
        'get_extra_credentials': lambda x: [],
        'ansible_virtualenv_path': '/venv/foo'
        'ansible_virtualenv_path': '/var/lib/awx/venv/foo'
    })
    cloud_config = update.build_private_data(inventory_update, private_data_dir)
    cloud_credential = yaml.safe_load(
@@ -224,6 +224,52 @@ def test_openstack_client_config_generation_with_project_domain_name(mocker, sou
        'source_vars_dict': {},
        'get_cloud_credential': mocker.Mock(return_value=credential),
        'get_extra_credentials': lambda x: [],
        'ansible_virtualenv_path': '/var/lib/awx/venv/foo'
    })
    cloud_config = update.build_private_data(inventory_update, private_data_dir)
    cloud_credential = yaml.safe_load(
        cloud_config.get('credentials')[credential]
    )
    assert cloud_credential['clouds'] == {
        'devstack': {
            'auth': {
                'auth_url': 'https://keystone.openstack.example.org',
                'password': 'secrete',
                'project_name': 'demo-project',
                'username': 'demo',
                'domain_name': 'my-demo-domain',
                'project_domain_name': 'project-domain',
            },
            'verify': expected,
            'private': True,
        }
    }


@pytest.mark.parametrize("source,expected", [
    (None, True), (False, False), (True, True)
])
def test_openstack_client_config_generation_with_project_region_name(mocker, source, expected, private_data_dir):
    update = tasks.RunInventoryUpdate()
    credential_type = CredentialType.defaults['openstack']()
    inputs = {
        'host': 'https://keystone.openstack.example.org',
        'username': 'demo',
        'password': 'secrete',
        'project': 'demo-project',
        'domain': 'my-demo-domain',
        'project_domain_name': 'project-domain',
        'project_region_name': 'region-name',
    }
    if source is not None:
        inputs['verify_ssl'] = source
    credential = Credential(pk=1, credential_type=credential_type, inputs=inputs)

    inventory_update = mocker.Mock(**{
        'source': 'openstack',
        'source_vars_dict': {},
        'get_cloud_credential': mocker.Mock(return_value=credential),
        'get_extra_credentials': lambda x: [],
        'ansible_virtualenv_path': '/venv/foo'
    })
    cloud_config = update.build_private_data(inventory_update, private_data_dir)
@@ -242,6 +288,7 @@ def test_openstack_client_config_generation_with_project_domain_name(mocker, sou
            },
            'verify': expected,
            'private': True,
            'region_name': 'region-name',
        }
    }

@@ -267,7 +314,7 @@ def test_openstack_client_config_generation_with_private_source_vars(mocker, sou
        'source_vars_dict': {'private': source},
        'get_cloud_credential': mocker.Mock(return_value=credential),
        'get_extra_credentials': lambda x: [],
        'ansible_virtualenv_path': '/venv/foo'
        'ansible_virtualenv_path': '/var/lib/awx/venv/foo'
    })
    cloud_config = update.build_private_data(inventory_update, private_data_dir)
    cloud_credential = yaml.load(
@@ -625,13 +672,13 @@ class TestGenericRun():

    def test_invalid_custom_virtualenv(self, patch_Job, private_data_dir):
        job = Job(project=Project(), inventory=Inventory())
        job.project.custom_virtualenv = '/venv/missing'
        job.project.custom_virtualenv = '/var/lib/awx/venv/missing'
        task = tasks.RunJob()

        with pytest.raises(tasks.InvalidVirtualenvError) as e:
            task.build_env(job, private_data_dir)

        assert 'Invalid virtual environment selected: /venv/missing' == str(e.value)
        assert 'Invalid virtual environment selected: /var/lib/awx/venv/missing' == str(e.value)


class TestAdhocRun(TestJobExecution):
@@ -1909,19 +1956,16 @@ class TestProjectUpdateCredentials(TestJobExecution):
    parametrize = {
        'test_username_and_password_auth': [
            dict(scm_type='git'),
            dict(scm_type='hg'),
            dict(scm_type='svn'),
            dict(scm_type='archive'),
        ],
        'test_ssh_key_auth': [
            dict(scm_type='git'),
            dict(scm_type='hg'),
            dict(scm_type='svn'),
            dict(scm_type='archive'),
        ],
        'test_awx_task_env': [
            dict(scm_type='git'),
            dict(scm_type='hg'),
            dict(scm_type='svn'),
            dict(scm_type='archive'),
        ]
@@ -2061,8 +2105,8 @@ class TestInventoryUpdateCredentials(TestJobExecution):
            credential, env, {}, [], private_data_dir
        )

        assert '--custom' in ' '.join(args)
        script = args[args.index('--source') + 1]
        assert '-i' in ' '.join(args)
        script = args[args.index('-i') + 1]
        with open(script, 'r') as f:
            assert f.read() == inventory_update.source_script.script
        assert env['FOO'] == 'BAR'

@@ -222,9 +222,8 @@ def update_scm_url(scm_type, url, username=True, password=True,
    '''
    # Handle all of the URL formats supported by the SCM systems:
    # git: https://www.kernel.org/pub/software/scm/git/docs/git-clone.html#URLS
    # hg: http://www.selenic.com/mercurial/hg.1.html#url-paths
    # svn: http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.advanced.reposurls
    if scm_type not in ('git', 'hg', 'svn', 'insights', 'archive'):
    if scm_type not in ('git', 'svn', 'insights', 'archive'):
        raise ValueError(_('Unsupported SCM type "%s"') % str(scm_type))
    if not url.strip():
        return ''
@@ -256,8 +255,8 @@ def update_scm_url(scm_type, url, username=True, password=True,
        # SCP style before passed to git module.
        parts = urllib.parse.urlsplit('git+ssh://%s' % modified_url)
    # Handle local paths specified without file scheme (e.g. /path/to/foo).
    # Only supported by git and hg.
    elif scm_type in ('git', 'hg'):
    # Only supported by git.
    elif scm_type == 'git':
        if not url.startswith('/'):
            parts = urllib.parse.urlsplit('file:///%s' % url)
        else:
@@ -268,7 +267,6 @@ def update_scm_url(scm_type, url, username=True, password=True,
    # Validate that scheme is valid for given scm_type.
    scm_type_schemes = {
        'git': ('ssh', 'git', 'git+ssh', 'http', 'https', 'ftp', 'ftps', 'file'),
        'hg': ('http', 'https', 'ssh', 'file'),
        'svn': ('http', 'https', 'svn', 'svn+ssh', 'file'),
        'insights': ('http', 'https'),
        'archive': ('http', 'https'),
@@ -300,12 +298,6 @@ def update_scm_url(scm_type, url, username=True, password=True,
    if scm_type == 'git' and parts.scheme.endswith('ssh') and parts.hostname in special_git_hosts and netloc_password:
        #raise ValueError('Password not allowed for SSH access to %s.' % parts.hostname)
        netloc_password = ''
    special_hg_hosts = ('bitbucket.org', 'altssh.bitbucket.org')
    if scm_type == 'hg' and parts.scheme == 'ssh' and parts.hostname in special_hg_hosts and netloc_username != 'hg':
        raise ValueError(_('Username must be "hg" for SSH access to %s.') % parts.hostname)
    if scm_type == 'hg' and parts.scheme == 'ssh' and netloc_password:
        #raise ValueError('Password not supported for SSH with Mercurial.')
        netloc_password = ''

    if netloc_username and parts.scheme != 'file' and scm_type not in ("insights", "archive"):
        netloc = u':'.join([urllib.parse.quote(x,safe='') for x in (netloc_username, netloc_password) if x])

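The SCP-style rewrite kept by the surviving git branch can be seen in isolation. A minimal sketch (standard library only; this is not the AWX function itself, and the sample URL is illustrative):

import urllib.parse

# "git@github.com:ansible/awx.git" has no scheme, so urlsplit alone cannot
# parse it; replacing the first ':' with '/' and prefixing git+ssh:// turns
# it into an ordinary URL, the same trick update_scm_url applies above.
def normalize_scp_style(url):
    modified_url = url.replace(':', '/', 1)
    return urllib.parse.urlsplit('git+ssh://%s' % modified_url)

parts = normalize_scp_style('git@github.com:ansible/awx.git')
print(parts.scheme, parts.hostname, parts.path)  # git+ssh github.com /ansible/awx.git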
@@ -18,6 +18,7 @@ def construct_rsyslog_conf_template(settings=settings):
    timeout = getattr(settings, 'LOG_AGGREGATOR_TCP_TIMEOUT', 5)
    max_disk_space = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_GB', 1)
    spool_directory = getattr(settings, 'LOG_AGGREGATOR_MAX_DISK_USAGE_PATH', '/var/lib/awx').rstrip('/')
    error_log_file = getattr(settings, 'LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE', '')

    if not os.access(spool_directory, os.W_OK):
        spool_directory = '/var/lib/awx'
@@ -74,9 +75,10 @@ def construct_rsyslog_conf_template(settings=settings):
            f'skipverifyhost="{skip_verify}"',
            'action.resumeRetryCount="-1"',
            'template="awx"',
            'errorfile="/var/log/tower/rsyslog.err"',
            f'action.resumeInterval="{timeout}"'
        ]
        if error_log_file:
            params.append(f'errorfile="{error_log_file}"')
        if parsed.path:
            path = urlparse.quote(parsed.path[1:], safe='/=')
            if parsed.query:

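Read on its own, the hunk makes errorfile opt-in rather than hardcoded. A small sketch with illustrative values (not the full template builder) of how the generated omhttp action line changes:

def omhttp_action(timeout=5, error_log_file=''):
    # errorfile is only emitted when the setting is non-empty, instead of
    # always hardcoding /var/log/tower/rsyslog.err as before.
    params = [
        'action.resumeRetryCount="-1"',
        'template="awx"',
        f'action.resumeInterval="{timeout}"',
    ]
    if error_log_file:
        params.append(f'errorfile="{error_log_file}"')
    return 'action(type="omhttp" %s)' % ' '.join(params)

print(omhttp_action())  # no errorfile parameter at all
print(omhttp_action(error_log_file='/var/log/tower/rsyslog.err'))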
@@ -9,6 +9,7 @@ import socket
from datetime import datetime

from dateutil.tz import tzutc
from django.utils.timezone import now
from django.core.serializers.json import DjangoJSONEncoder
from django.conf import settings

@@ -17,8 +18,15 @@ class TimeFormatter(logging.Formatter):
    '''
    Custom log formatter used for inventory imports
    '''
    def __init__(self, start_time=None, **kwargs):
        if start_time is None:
            self.job_start = now()
        else:
            self.job_start = start_time
        super(TimeFormatter, self).__init__(**kwargs)

    def format(self, record):
        record.relativeSeconds = record.relativeCreated / 1000.0
        record.relativeSeconds = (now() - self.job_start).total_seconds()
        return logging.Formatter.format(self, record)


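The effect of this change: relativeSeconds used to be record.relativeCreated / 1000.0, i.e. time since the logging module initialized, and is now measured from an explicit job start. A runnable analogue, using stdlib datetime in place of django.utils.timezone.now so it works outside Django:

import logging
from datetime import datetime, timezone

class ElapsedFormatter(logging.Formatter):
    # Same idea as TimeFormatter above: stamp each record with seconds
    # elapsed since an explicit start time, not since interpreter startup.
    def __init__(self, start_time=None, **kwargs):
        self.job_start = start_time or datetime.now(timezone.utc)
        super().__init__(**kwargs)

    def format(self, record):
        record.relativeSeconds = (datetime.now(timezone.utc) - self.job_start).total_seconds()
        return super().format(record)

handler = logging.StreamHandler()
handler.setFormatter(ElapsedFormatter(fmt='%(relativeSeconds).3f %(message)s'))
logging.getLogger('demo').addHandler(handler)
logging.getLogger('demo').warning('elapsed since job start, not since import')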
@@ -7,6 +7,10 @@ import os.path

# Django
from django.conf import settings
from django.utils.timezone import now

# AWX
from awx.main.exceptions import PostRunError


class RSysLogHandler(logging.handlers.SysLogHandler):
@@ -40,6 +44,58 @@ class RSysLogHandler(logging.handlers.SysLogHandler):
        pass


class SpecialInventoryHandler(logging.Handler):
    """Logging handler used for the saving-to-database part of inventory updates
    run by the task system.
    This dispatches events directly to be processed by the callback receiver,
    as opposed to ansible-runner.
    """

    def __init__(self, event_handler, cancel_callback, job_timeout, verbosity,
                 start_time=None, counter=0, initial_line=0, **kwargs):
        self.event_handler = event_handler
        self.cancel_callback = cancel_callback
        self.job_timeout = job_timeout
        if start_time is None:
            self.job_start = now()
        else:
            self.job_start = start_time
        self.last_check = self.job_start
        self.counter = counter
        self.skip_level = [logging.WARNING, logging.INFO, logging.DEBUG, 0][verbosity]
        self._current_line = initial_line
        super(SpecialInventoryHandler, self).__init__(**kwargs)

    def emit(self, record):
        # check cancel and timeout status regardless of log level
        this_time = now()
        if (this_time - self.last_check).total_seconds() > 0.5:  # cancel callback is expensive
            self.last_check = this_time
            if self.cancel_callback():
                raise PostRunError('Inventory update has been canceled', status='canceled')
        if self.job_timeout and ((this_time - self.job_start).total_seconds() > self.job_timeout):
            raise PostRunError('Inventory update has timed out', status='canceled')

        # skip logging for low severity logs
        if record.levelno < self.skip_level:
            return

        self.counter += 1
        msg = self.format(record)
        n_lines = len(msg.strip().split('\n'))  # don't count line breaks at boundary of text
        dispatch_data = dict(
            created=now().isoformat(),
            event='verbose',
            counter=self.counter,
            stdout=msg,
            start_line=self._current_line,
            end_line=self._current_line + n_lines
        )
        self._current_line += n_lines

        self.event_handler(dispatch_data)


ColorHandler = logging.StreamHandler

if settings.COLOR_LOGS is True:

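To make the dispatch path concrete, here is a stripped-down analogue (illustrative only: no Django, no cancel or timeout checks) showing how an event_handler callback receives one 'verbose' event dict per log record, with running counters and line offsets:

import logging

class EventDispatchHandler(logging.Handler):
    # Minimal analogue of SpecialInventoryHandler: turn each record into
    # an event dict and hand it to a callback instead of writing a stream.
    def __init__(self, event_handler, counter=0, initial_line=0, **kwargs):
        self.event_handler = event_handler
        self.counter = counter
        self._current_line = initial_line
        super().__init__(**kwargs)

    def emit(self, record):
        self.counter += 1
        msg = self.format(record)
        n_lines = len(msg.strip().split('\n'))
        self.event_handler(dict(event='verbose', counter=self.counter,
                                stdout=msg, start_line=self._current_line,
                                end_line=self._current_line + n_lines))
        self._current_line += n_lines

events = []
logger = logging.getLogger('inventory-import-demo')
logger.addHandler(EventDispatchHandler(events.append))
logger.warning('loaded 12 hosts')
print(events)  # one dict per record, with running line counters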
@@ -11,7 +11,7 @@ The Licenser class can do the following:

import base64
import configparser
from datetime import datetime
from datetime import datetime, timezone
import collections
import copy
import io
@@ -73,9 +73,12 @@ def validate_entitlement_manifest(data):

    buff.write(export)
    z = zipfile.ZipFile(buff)
    subs = []
    for f in z.filelist:
        if f.filename.startswith('export/entitlements') and f.filename.endswith('.json'):
            return json.loads(z.open(f).read())
            subs.append(json.loads(z.open(f).read()))
    if subs:
        return subs
    raise ValueError(_("Invalid manifest: manifest contains no subscriptions."))


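The behavioral change is easy to demonstrate with a toy archive (file names and quantities made up): the old code returned the first export/entitlements/*.json member it saw, while collecting into a list returns every subscription in the manifest:

import io, json, zipfile

buff = io.BytesIO()
with zipfile.ZipFile(buff, 'w') as z:
    # Two entitlement files in one manifest archive (contents invented).
    z.writestr('export/entitlements/a.json', json.dumps({'quantity': 100}))
    z.writestr('export/entitlements/b.json', json.dumps({'quantity': 50}))

subs = []
with zipfile.ZipFile(buff) as z:
    for f in z.filelist:
        if f.filename.startswith('export/entitlements') and f.filename.endswith('.json'):
            subs.append(json.loads(z.open(f).read()))

print(subs)  # both subscriptions survive, not just the first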
@@ -131,21 +134,61 @@ class Licenser(object):


    def license_from_manifest(self, manifest):
        def is_appropriate_manifest_sub(sub):
            if sub['pool']['activeSubscription'] is False:
                return False
            now = datetime.now(timezone.utc)
            if parse_date(sub['startDate']) > now:
                return False
            if parse_date(sub['endDate']) < now:
                return False
            products = sub['pool']['providedProducts']
            if any(product.get('productId') == '480' for product in products):
                return True
            return False

        def _can_aggregate(sub, license):
            # We aggregate multiple subs into a larger meta-sub, if they match
            #
            # No current sub in aggregate
            if not license:
                return True
            # Same SKU type (SER vs MCT vs others)?
            if license['sku'][0:3] != sub['pool']['productId'][0:3]:
                return False
            return True

        # Parse output for subscription metadata to build config
        license = dict()
        license['sku'] = manifest['pool']['productId']
        try:
            license['instance_count'] = manifest['pool']['exported']
        except KeyError:
            license['instance_count'] = manifest['pool']['quantity']
        license['subscription_name'] = manifest['pool']['productName']
        license['pool_id'] = manifest['pool']['id']
        license['license_date'] = parse_date(manifest['endDate']).strftime('%s')
        license['product_name'] = manifest['pool']['productName']
        license['valid_key'] = True
        license['license_type'] = 'enterprise'
        license['satellite'] = False
        for sub in manifest:
            if not is_appropriate_manifest_sub(sub):
                logger.warning("Subscription %s (%s) in manifest is not active or for another product" %
                               (sub['pool']['productName'], sub['pool']['productId']))
                continue
            if not _can_aggregate(sub, license):
                logger.warning("Subscription %s (%s) in manifest does not match other manifest subscriptions" %
                               (sub['pool']['productName'], sub['pool']['productId']))
                continue

            license.setdefault('sku', sub['pool']['productId'])
            license.setdefault('subscription_name', sub['pool']['productName'])
            license.setdefault('pool_id', sub['pool']['id'])
            license.setdefault('product_name', sub['pool']['productName'])
            license.setdefault('valid_key', True)
            license.setdefault('license_type', 'enterprise')
            license.setdefault('satellite', False)
            # Use the nearest end date
            endDate = parse_date(sub['endDate'])
            currentEndDateStr = license.get('license_date', '4102462800')  # 2100-01-01
            currentEndDate = datetime.fromtimestamp(int(currentEndDateStr), timezone.utc)
            if endDate < currentEndDate:
                license['license_date'] = endDate.strftime('%s')
            instances = sub['quantity']
            license['instance_count'] = license.get('instance_count', 0) + instances
            license['subscription_name'] = re.sub(r'[\d]* Managed Nodes', '%d Managed Nodes' % license['instance_count'], license['subscription_name'])

        if not license:
            logger.error("No valid subscriptions found in manifest")
        self._attrs.update(license)
        settings.LICENSE = self._attrs
        return self._attrs
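A worked example of the aggregation rules on assumed toy data: subs in the same three-letter SKU family merge, quantities add, and the nearest end date wins (here 100 + 50 nodes, ending 2021-06-01):

from datetime import datetime, timezone

subs = [  # invented manifest entries, reduced to the fields the loop reads
    {'sku': 'MCT1234', 'quantity': 100, 'end': datetime(2022, 1, 1, tzinfo=timezone.utc)},
    {'sku': 'MCT5678', 'quantity': 50,  'end': datetime(2021, 6, 1, tzinfo=timezone.utc)},
]

license = {}
for sub in subs:
    if license and license['sku'][:3] != sub['sku'][:3]:
        continue  # different SKU family: cannot aggregate
    license.setdefault('sku', sub['sku'])
    license['instance_count'] = license.get('instance_count', 0) + sub['quantity']
    if sub['end'] < license.get('end', datetime(2100, 1, 1, tzinfo=timezone.utc)):
        license['end'] = sub['end']  # nearest end date wins

print(license)  # sku MCT1234, instance_count 150, end 2021-06-01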
@@ -214,11 +257,15 @@ class Licenser(object):


    def get_satellite_subs(self, host, user, pw):
        port = None
        try:
            verify = str(self.config.get("rhsm", "repo_ca_cert"))
            port = str(self.config.get("server", "port"))
        except Exception as e:
            logger.exception('Unable to read rhsm config to get ca_cert location. {}'.format(str(e)))
            verify = getattr(settings, 'REDHAT_CANDLEPIN_VERIFY', True)
        if port:
            host = ':'.join([host, port])
        json = []
        try:
            orgs = requests.get(
@@ -272,7 +319,7 @@ class Licenser(object):
            return False
        # Products that contain Ansible Tower
        products = sub.get('providedProducts', [])
        if any(map(lambda product: product.get('productId', None) == "480", products)):
        if any(product.get('productId') == '480' for product in products):
            return True
        return False

@@ -373,10 +420,9 @@ class Licenser(object):
            current_instances = Host.objects.active_count()
        else:
            current_instances = 0
        available_instances = int(attrs.get('instance_count', None) or 0)
        instance_count = int(attrs.get('instance_count', 0))
        attrs['current_instances'] = current_instances
        attrs['available_instances'] = available_instances
        free_instances = (available_instances - current_instances)
        free_instances = (instance_count - current_instances)
        attrs['free_instances'] = max(0, free_instances)

        license_date = int(attrs.get('license_date', 0) or 0)

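The resulting arithmetic, with hypothetical numbers: a 10-node subscription and 7 active hosts leave free_instances = max(0, 10 - 7) = 3, and over-subscription can no longer push the value negative:

instance_count, current_instances = 10, 7   # hypothetical values
print(max(0, instance_count - current_instances))   # 3
print(max(0, 5 - 7))                                # over-subscribed: clamped to 0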
@@ -1,15 +0,0 @@
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type

from ansible.plugins.action import ActionBase


class ActionModule(ActionBase):

    def run(self, tmp=None, task_vars=None):
        self._supports_check_mode = False
        result = super(ActionModule, self).run(tmp, task_vars)
        result['changed'] = result['failed'] = False
        result['msg'] = ''
        self._display.deprecated("Mercurial support is deprecated")
        return result
@@ -9,6 +9,9 @@
    - ansible.posix

  tasks:
    - name: "Output job the playbook is running for"
      debug:
        msg: "Checking on job {{ job_id }}"

    - name: Determine if daemon process is alive.
      shell: "ansible-runner is-alive {{src}}"

@@ -48,12 +48,6 @@
      tags:
        - update_git

    - block:
        - name: include hg tasks
          include_tasks: project_update_hg_tasks.yml
      tags:
        - update_hg

    - block:
        - name: update project using svn
          subversion:
@@ -150,7 +144,6 @@
        msg: "Repository Version {{ scm_version }}"
      tags:
        - update_git
        - update_hg
        - update_svn
        - update_insights
        - update_archive

@@ -1,20 +0,0 @@
---
- name: Mercurial support is deprecated.
  hg_deprecation:

- name: update project using hg
  hg:
    dest: "{{project_path|quote}}"
    repo: "{{scm_url|quote}}"
    revision: "{{scm_branch|quote}}"
    force: "{{scm_clean}}"
  register: hg_result

- name: Set the hg repository version
  set_fact:
    scm_version: "{{ hg_result['after'] }}"
  when: "'after' in hg_result"

- name: parse hg version string properly
  set_fact:
    scm_version: "{{scm_version|regex_replace('^([A-Za-z0-9]+).*$', '\\1')}}"
@@ -13,6 +13,10 @@
    - ansible.posix

  tasks:
    - name: "Output job the playbook is running for"
      debug:
        msg: "Checking on job {{ job_id }}"

    - name: synchronize job environment with isolated host
      synchronize:
        copy_links: true

@@ -116,7 +116,7 @@ LOGIN_URL = '/api/login/'

# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should not be web-accessible.
PROJECTS_ROOT = os.path.join(BASE_DIR, 'projects')
PROJECTS_ROOT = '/var/lib/awx/projects/'

# Absolute filesystem path to the directory to host collections for
# running inventory imports, isolated playbooks
@@ -125,10 +125,10 @@ AWX_ANSIBLE_COLLECTIONS_PATHS = os.path.join(BASE_DIR, 'vendor', 'awx_ansible_co
# Absolute filesystem path to the directory for job status stdout (default for
# development and tests, default for production defined in production.py). This
# directory should not be web-accessible
JOBOUTPUT_ROOT = os.path.join(BASE_DIR, 'job_output')
JOBOUTPUT_ROOT = '/var/lib/awx/job_status/'

# Absolute filesystem path to the directory to store logs
LOG_ROOT = os.path.join(BASE_DIR)
LOG_ROOT = '/var/log/tower/'

# The heartbeat file for the tower scheduler
SCHEDULE_METADATA_LOCATION = os.path.join(BASE_DIR, '.tower_cycle')
@@ -196,9 +196,9 @@ LOCAL_STDOUT_EXPIRE_TIME = 2592000
# events into the database
JOB_EVENT_WORKERS = 4

# The number of seconds (must be an integer) to buffer callback receiver bulk
# The number of seconds to buffer callback receiver bulk
# writes in memory before flushing via JobEvent.objects.bulk_create()
JOB_EVENT_BUFFER_SECONDS = 1
JOB_EVENT_BUFFER_SECONDS = .1

# The interval at which callback receiver statistics should be
# recorded
@@ -248,6 +248,7 @@ TEMPLATES = [
                'django.template.context_processors.static',
                'django.template.context_processors.tz',
                'django.contrib.messages.context_processors.messages',
                'awx.ui.context_processors.csp',
                'social_django.context_processors.backends',
                'social_django.context_processors.login_redirect',
            ],
@@ -343,6 +344,9 @@ AUTHENTICATION_BACKENDS = (
    'social_core.backends.github.GithubOAuth2',
    'social_core.backends.github.GithubOrganizationOAuth2',
    'social_core.backends.github.GithubTeamOAuth2',
    'social_core.backends.github_enterprise.GithubEnterpriseOAuth2',
    'social_core.backends.github_enterprise.GithubEnterpriseOrganizationOAuth2',
    'social_core.backends.github_enterprise.GithubEnterpriseTeamOAuth2',
    'social_core.backends.azuread.AzureADOAuth2',
    'awx.sso.backends.SAMLAuth',
    'django.contrib.auth.backends.ModelBackend',
@@ -519,6 +523,20 @@ SOCIAL_AUTH_GITHUB_TEAM_SECRET = ''
SOCIAL_AUTH_GITHUB_TEAM_ID = ''
SOCIAL_AUTH_GITHUB_TEAM_SCOPE = ['user:email', 'read:org']

SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_SCOPE = ['user:email', 'read:org']

SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SCOPE = ['user:email', 'read:org']

SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID = ''
SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SCOPE = ['user:email', 'read:org']

SOCIAL_AUTH_AZUREAD_OAUTH2_KEY = ''
SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET = ''

@@ -661,7 +679,7 @@ INV_ENV_VARIABLE_BLOCKED = ("HOME", "USER", "_", "TERM")
# ----------------
EC2_ENABLED_VAR = 'ec2_state'
EC2_ENABLED_VALUE = 'running'
EC2_INSTANCE_ID_VAR = 'ec2_id'
EC2_INSTANCE_ID_VAR = 'instance_id'
EC2_EXCLUDE_EMPTY_GROUPS = True

# ------------
@@ -769,6 +787,7 @@ LOG_AGGREGATOR_LEVEL = 'INFO'
LOG_AGGREGATOR_MAX_DISK_USAGE_GB = 1
LOG_AGGREGATOR_MAX_DISK_USAGE_PATH = '/var/lib/awx'
LOG_AGGREGATOR_RSYSLOGD_DEBUG = False
LOG_AGGREGATOR_RSYSLOGD_ERROR_LOG_FILE = '/var/log/tower/rsyslog.err'

# The number of retry attempts for websocket session establishment
# If you're encountering issues establishing websockets in clustered Tower,
@@ -852,38 +871,30 @@ LOGGING = {
    },
    'tower_warnings': {
        # don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
        'class': 'logging.handlers.RotatingFileHandler',
        'class': 'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false', 'dynamic_level_filter'],
        'filename': os.path.join(LOG_ROOT, 'tower.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'callback_receiver': {
        # don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
        'class': 'logging.handlers.RotatingFileHandler',
        'class': 'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false', 'dynamic_level_filter'],
        'filename': os.path.join(LOG_ROOT, 'callback_receiver.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'dispatcher': {
        # don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
        'class': 'logging.handlers.RotatingFileHandler',
        'class': 'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false', 'dynamic_level_filter'],
        'filename': os.path.join(LOG_ROOT, 'dispatcher.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'dispatcher',
    },
    'wsbroadcast': {
        # don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
        'class': 'logging.handlers.RotatingFileHandler',
        'class': 'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false', 'dynamic_level_filter'],
        'filename': os.path.join(LOG_ROOT, 'wsbroadcast.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'celery.beat': {
@@ -897,38 +908,36 @@ LOGGING = {
    },
    'task_system': {
        # don't define a level here, it's set by settings.LOG_AGGREGATOR_LEVEL
        'class': 'logging.handlers.RotatingFileHandler',
        'class': 'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false', 'dynamic_level_filter'],
        'filename': os.path.join(LOG_ROOT, 'task_system.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'management_playbooks': {
        'level': 'DEBUG',
        'class':'logging.handlers.RotatingFileHandler',
        'class':'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false'],
        'filename': os.path.join(LOG_ROOT, 'management_playbooks.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'system_tracking_migrations': {
        'level': 'WARNING',
        'class':'logging.handlers.RotatingFileHandler',
        'class':'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false'],
        'filename': os.path.join(LOG_ROOT, 'tower_system_tracking_migrations.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'rbac_migrations': {
        'level': 'WARNING',
        'class':'logging.handlers.RotatingFileHandler',
        'class':'logging.handlers.WatchedFileHandler',
        'filters': ['require_debug_false'],
        'filename': os.path.join(LOG_ROOT, 'tower_rbac_migrations.log'),
        'maxBytes': 1024 * 1024 * 5, # 5 MB
        'backupCount': 5,
        'formatter':'simple',
    },
    'isolated_manager': {
        'level': 'WARNING',
        'class':'logging.handlers.WatchedFileHandler',
        'filename': os.path.join(LOG_ROOT, 'isolated_manager.log'),
        'formatter':'simple',
    },
},
@@ -980,6 +989,11 @@ LOGGING = {
    'awx.main.wsbroadcast': {
        'handlers': ['wsbroadcast'],
    },
    'awx.isolated.manager': {
        'level': 'WARNING',
        'handlers': ['console', 'file', 'isolated_manager'],
        'propagate': True
    },
    'awx.isolated.manager.playbooks': {
        'handlers': ['management_playbooks'],
        'propagate': False

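The handler swap is a design change: RotatingFileHandler rotated files in-process via maxBytes/backupCount, while WatchedFileHandler leaves rotation to an external tool such as logrotate and simply reopens the file when its inode changes, which is why those two keys disappear from every handler above. A minimal sketch, assuming /tmp is writable:

import logging
from logging.handlers import WatchedFileHandler

# WatchedFileHandler stats the file on every emit and reopens it if an
# external rotator has renamed the original away, so the in-process
# maxBytes/backupCount knobs are no longer needed.
handler = WatchedFileHandler('/tmp/demo_tower.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
log = logging.getLogger('watched-demo')
log.addHandler(handler)
log.warning('safe to rotate this file externally')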
@@ -148,9 +148,9 @@ include(optional('/etc/tower/settings.py'), scope=locals())
include(optional('/etc/tower/conf.d/*.py'), scope=locals())

# Installed differently in Dockerfile compared to production versions
AWX_ANSIBLE_COLLECTIONS_PATHS = '/vendor/awx_ansible_collections'
AWX_ANSIBLE_COLLECTIONS_PATHS = '/var/lib/awx/vendor/awx_ansible_collections'

BASE_VENV_PATH = "/venv/"
BASE_VENV_PATH = "/var/lib/awx/venv/"
ANSIBLE_VENV_PATH = os.path.join(BASE_VENV_PATH, "ansible")
AWX_VENV_PATH = os.path.join(BASE_VENV_PATH, "awx")

@@ -158,7 +158,10 @@ AWX_VENV_PATH = os.path.join(BASE_VENV_PATH, "awx")
# default settings for development. If not present, we can still run using
# only the defaults.
try:
    include(optional('local_*.py'), scope=locals())
    if os.getenv('AWX_KUBE_DEVEL', False):
        include(optional('minikube.py'), scope=locals())
    else:
        include(optional('local_*.py'), scope=locals())
except ImportError:
    traceback.print_exc()
    sys.exit(1)

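include and optional here are the django-split-settings helpers this file already uses; the new branch just gates which overlay loads on an environment flag. The selection logic in isolation (flag name as in the diff, paths illustrative):

import os

# Pick the settings overlay the same way the try block above does:
# AWX_KUBE_DEVEL selects minikube.py, otherwise any local_*.py files load.
overlay = 'minikube.py' if os.getenv('AWX_KUBE_DEVEL') else 'local_*.py'
print('would include optional(%r)' % overlay)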
@@ -48,56 +48,12 @@ if "pytest" in sys.modules:
        }
    }

# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should NOT be web-accessible.
PROJECTS_ROOT = '/var/lib/awx/projects/'

# Location for cross-development of inventory plugins
AWX_ANSIBLE_COLLECTIONS_PATHS = '/vendor/awx_ansible_collections'

# Absolute filesystem path to the directory for job status stdout
# This directory should not be web-accessible
JOBOUTPUT_ROOT = os.path.join(BASE_DIR, 'job_status')
AWX_ANSIBLE_COLLECTIONS_PATHS = '/var/lib/awx/vendor/awx_ansible_collections'

# The UUID of the system, for HA.
SYSTEM_UUID = '00000000-0000-0000-0000-000000000000'

# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# On Unix systems, a value of None will cause Django to use the same
# timezone as the operating system.
# If running in a Windows environment this must be set to the same as your
# system time zone.
USE_TZ = True
TIME_ZONE = 'UTC'

# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'

# SECURITY WARNING: keep the secret key used in production secret!
# Hardcoded values can leak through source control. Consider loading
# the secret key from an environment variable or a file instead.
SECRET_KEY = 'p7z7g1ql4%6+(6nlebb6hdk7sd^&fnjpal308%n%+p^_e6vo1y'

# HTTP headers and meta keys to search to determine remote host name or IP. Add
# additional items to this list, such as "HTTP_X_FORWARDED_FOR", if behind a
# reverse proxy.
REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST']

# If Tower is behind a reverse proxy/load balancer, use this setting to
# whitelist the proxy IP addresses from which Tower should trust custom
# REMOTE_HOST_HEADERS header values
# REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']
# PROXY_IP_WHITELIST = ['10.0.1.100', '10.0.1.101']
# If this setting is an empty list (the default), the headers specified by
# REMOTE_HOST_HEADERS will be trusted unconditionally.
PROXY_IP_WHITELIST = []

# Define additional environment variables to be passed to ansible subprocesses
#AWX_TASK_ENV['FOO'] = 'BAR'

# If set, use -vvv for project updates instead of -v for more output.
# PROJECT_UPDATE_VVV=True

@@ -108,40 +64,6 @@ PROXY_IP_WHITELIST = []
# Enable logging to syslog. Setting level to ERROR captures 500 errors,
# WARNING also logs 4xx responses.

LOGGING['handlers']['syslog'] = {
    'level': 'WARNING',
    'filters': ['require_debug_false'],
    'class': 'logging.NullHandler',
    'formatter': 'simple',
}

LOGGING['loggers']['django.request']['handlers'] = ['console']
LOGGING['loggers']['rest_framework.request']['handlers'] = ['console']
LOGGING['loggers']['awx']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.commands.run_callback_receiver']['handlers'] = []  # propagates to awx
LOGGING['loggers']['awx.main.tasks']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['awx.main.scheduler']['handlers'] = ['console', 'external_logger']
LOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']
LOGGING['loggers']['social']['handlers'] = ['console']
LOGGING['loggers']['system_tracking_migrations']['handlers'] = ['console']
LOGGING['loggers']['rbac_migrations']['handlers'] = ['console']
LOGGING['loggers']['awx.isolated.manager.playbooks']['handlers'] = ['console']
LOGGING['handlers']['callback_receiver'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['fact_receiver'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['task_system'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['tower_warnings'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['rbac_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['system_tracking_migrations'] = {'class': 'logging.NullHandler'}
LOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}


# Enable the following lines to also log to a file.
#LOGGING['handlers']['file'] = {
#    'class': 'logging.FileHandler',
#    'filename': os.path.join(BASE_DIR, 'awx.log'),
#    'formatter': 'simple',
#}

# Enable the following lines to turn on lots of permissions-related logging.
#LOGGING['loggers']['awx.main.access']['level'] = 'DEBUG'
#LOGGING['loggers']['awx.main.signals']['level'] = 'DEBUG'
@@ -154,81 +76,6 @@ LOGGING['handlers']['management_playbooks'] = {'class': 'logging.NullHandler'}
#LOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']
#LOGGING['loggers']['django_auth_ldap']['level'] = 'DEBUG'

###############################################################################
# SCM TEST SETTINGS
###############################################################################

# Define these variables to enable more complete testing of project support for
# SCM updates. The test repositories listed do not have to contain any valid
# playbooks.

try:
    path = os.path.expanduser(os.path.expandvars('~/.ssh/id_rsa'))
    TEST_SSH_KEY_DATA = open(path, 'rb').read()
except IOError:
    TEST_SSH_KEY_DATA = ''

TEST_GIT_USERNAME = ''
TEST_GIT_PASSWORD = ''
TEST_GIT_KEY_DATA = TEST_SSH_KEY_DATA
TEST_GIT_PUBLIC_HTTPS = 'https://github.com/ansible/ansible.github.com.git'
TEST_GIT_PRIVATE_HTTPS = 'https://github.com/ansible/product-docs.git'
TEST_GIT_PRIVATE_SSH = 'git@github.com:ansible/product-docs.git'

TEST_HG_USERNAME = ''
TEST_HG_PASSWORD = ''
TEST_HG_KEY_DATA = TEST_SSH_KEY_DATA
TEST_HG_PUBLIC_HTTPS = 'https://bitbucket.org/cchurch/django-hotrunner'
TEST_HG_PRIVATE_HTTPS = ''
TEST_HG_PRIVATE_SSH = ''

TEST_SVN_USERNAME = ''
TEST_SVN_PASSWORD = ''
TEST_SVN_PUBLIC_HTTPS = 'https://github.com/ansible/ansible.github.com'
TEST_SVN_PRIVATE_HTTPS = 'https://github.com/ansible/product-docs'

# To test repo access via SSH login to localhost.
import getpass
try:
    TEST_SSH_LOOPBACK_USERNAME = getpass.getuser()
except KeyError:
    TEST_SSH_LOOPBACK_USERNAME = 'root'
TEST_SSH_LOOPBACK_PASSWORD = ''

###############################################################################
# INVENTORY IMPORT TEST SETTINGS
###############################################################################

# Define these variables to enable more complete testing of inventory import
# from cloud providers.

# EC2 credentials
TEST_AWS_ACCESS_KEY_ID = ''
TEST_AWS_SECRET_ACCESS_KEY = ''
TEST_AWS_REGIONS = 'all'
# Check IAM STS credentials
TEST_AWS_SECURITY_TOKEN = ''

# Rackspace credentials
TEST_RACKSPACE_USERNAME = ''
TEST_RACKSPACE_API_KEY = ''
TEST_RACKSPACE_REGIONS = 'all'

# VMware credentials
TEST_VMWARE_HOST = ''
TEST_VMWARE_USER = ''
TEST_VMWARE_PASSWORD = ''

# OpenStack credentials
TEST_OPENSTACK_HOST = ''
TEST_OPENSTACK_USER = ''
TEST_OPENSTACK_PASSWORD = ''
TEST_OPENSTACK_PROJECT = ''

# Azure credentials.
TEST_AZURE_USERNAME = ''
TEST_AZURE_KEY_DATA = ''

BROADCAST_WEBSOCKET_SECRET = '🤖starscream🤖'
BROADCAST_WEBSOCKET_PORT = 8013
BROADCAST_WEBSOCKET_VERIFY_CERT = False

@@ -1,199 +0,0 @@
# Copyright (c) 2015 Ansible, Inc. (formerly AnsibleWorks, Inc.)
# All Rights Reserved.

# Local Django settings for AWX project. Rename to "local_settings.py" and
# edit as needed for your development environment.

# All variables defined in awx/settings/development.py will already be loaded
# into the global namespace before this file is loaded, to allow for reading
# and updating the default settings as needed.

###############################################################################
# MISC PROJECT SETTINGS
###############################################################################

# Database settings to use PostgreSQL for development.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'awx-dev',
        'USER': 'awx-dev',
        'PASSWORD': 'AWXsome1',
        'HOST': 'localhost',
        'PORT': '',
    }
}

# Use SQLite for unit tests instead of PostgreSQL. If the lines below are
# commented out, Django will create the test_awx-dev database in PostgreSQL to
# run unit tests.
if is_testing(sys.argv):
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'awx.sqlite3'),
            'TEST': {
                # Test database cannot be :memory: for tests.
                'NAME': os.path.join(BASE_DIR, 'awx_test.sqlite3'),
            },
        }
    }

# AMQP configuration.
BROKER_URL = 'amqp://guest:guest@localhost:5672'

# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should NOT be web-accessible.
PROJECTS_ROOT = os.path.join(BASE_DIR, 'projects')

# Absolute filesystem path to the directory for job status stdout
# This directory should not be web-accessible
JOBOUTPUT_ROOT = os.path.join(BASE_DIR, 'job_status')

# The UUID of the system, for HA.
SYSTEM_UUID = '00000000-0000-0000-0000-000000000000'

# Local time zone for this installation. Choices can be found here:
# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name
# although not all choices may be available on all operating systems.
# On Unix systems, a value of None will cause Django to use the same
# timezone as the operating system.
# If running in a Windows environment this must be set to the same as your
# system time zone.
TIME_ZONE = None

# Language code for this installation. All choices can be found here:
# http://www.i18nguy.com/unicode/language-identifiers.html
LANGUAGE_CODE = 'en-us'

# SECURITY WARNING: keep the secret key used in production secret!
# Hardcoded values can leak through source control. Consider loading
# the secret key from an environment variable or a file instead.
SECRET_KEY = 'p7z7g1ql4%6+(6nlebb6hdk7sd^&fnjpal308%n%+p^_e6vo1y'

# HTTP headers and meta keys to search to determine remote host name or IP. Add
# additional items to this list, such as "HTTP_X_FORWARDED_FOR", if behind a
# reverse proxy.
REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST']

# If Tower is behind a reverse proxy/load balancer, use this setting to
# whitelist the proxy IP addresses from which Tower should trust custom
# REMOTE_HOST_HEADERS header values
# REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST']
# PROXY_IP_WHITELIST = ['10.0.1.100', '10.0.1.101']
# If this setting is an empty list (the default), the headers specified by
# REMOTE_HOST_HEADERS will be trusted unconditionally.
PROXY_IP_WHITELIST = []

# Define additional environment variables to be passed to ansible subprocesses
#AWX_TASK_ENV['FOO'] = 'BAR'

# If set, use -vvv for project updates instead of -v for more output.
# PROJECT_UPDATE_VVV=True

###############################################################################
# LOGGING SETTINGS
###############################################################################

# Enable logging to syslog. Setting level to ERROR captures 500 errors,
# WARNING also logs 4xx responses.
LOGGING['handlers']['syslog'] = {
    'level': 'WARNING',
    'filters': [],
    'class': 'logging.handlers.SysLogHandler',
    'address': '/dev/log',
    'facility': 'local0',
    'formatter': 'simple',
}

# Enable the following lines to also log to a file.
#LOGGING['handlers']['file'] = {
#    'class': 'logging.FileHandler',
#    'filename': os.path.join(BASE_DIR, 'awx.log'),
#    'formatter': 'simple',
#}

# Enable the following lines to turn on lots of permissions-related logging.
#LOGGING['loggers']['awx.main.access']['level'] = 'DEBUG'
#LOGGING['loggers']['awx.main.signals']['level'] = 'DEBUG'
#LOGGING['loggers']['awx.main.permissions']['level'] = 'DEBUG'

# Enable the following line to turn on database settings logging.
#LOGGING['loggers']['awx.conf']['level'] = 'DEBUG'

# Enable the following lines to turn on LDAP auth logging.
#LOGGING['loggers']['django_auth_ldap']['handlers'] = ['console']
#LOGGING['loggers']['django_auth_ldap']['level'] = 'DEBUG'

###############################################################################
# SCM TEST SETTINGS
###############################################################################

# Define these variables to enable more complete testing of project support for
# SCM updates. The test repositories listed do not have to contain any valid
# playbooks.

try:
    path = os.path.expanduser(os.path.expandvars('~/.ssh/id_rsa'))
    TEST_SSH_KEY_DATA = file(path, 'rb').read()
except IOError:
    TEST_SSH_KEY_DATA = ''

TEST_GIT_USERNAME = ''
TEST_GIT_PASSWORD = ''
TEST_GIT_KEY_DATA = TEST_SSH_KEY_DATA
TEST_GIT_PUBLIC_HTTPS = 'https://github.com/ansible/ansible.github.com.git'
TEST_GIT_PRIVATE_HTTPS = 'https://github.com/ansible/product-docs.git'
TEST_GIT_PRIVATE_SSH = 'git@github.com:ansible/product-docs.git'

TEST_HG_USERNAME = ''
TEST_HG_PASSWORD = ''
TEST_HG_KEY_DATA = TEST_SSH_KEY_DATA
TEST_HG_PUBLIC_HTTPS = 'https://bitbucket.org/cchurch/django-hotrunner'
TEST_HG_PRIVATE_HTTPS = ''
TEST_HG_PRIVATE_SSH = ''

TEST_SVN_USERNAME = ''
TEST_SVN_PASSWORD = ''
TEST_SVN_PUBLIC_HTTPS = 'https://github.com/ansible/ansible.github.com'
TEST_SVN_PRIVATE_HTTPS = 'https://github.com/ansible/product-docs'

# To test repo access via SSH login to localhost.
import getpass
TEST_SSH_LOOPBACK_USERNAME = getpass.getuser()
TEST_SSH_LOOPBACK_PASSWORD = ''

###############################################################################
# INVENTORY IMPORT TEST SETTINGS
###############################################################################

# Define these variables to enable more complete testing of inventory import
# from cloud providers.

# EC2 credentials
TEST_AWS_ACCESS_KEY_ID = ''
TEST_AWS_SECRET_ACCESS_KEY = ''
TEST_AWS_REGIONS = 'all'
# Check IAM STS credentials
TEST_AWS_SECURITY_TOKEN = ''


# Rackspace credentials
TEST_RACKSPACE_USERNAME = ''
TEST_RACKSPACE_API_KEY = ''
TEST_RACKSPACE_REGIONS = 'all'

# VMware credentials
TEST_VMWARE_HOST = ''
TEST_VMWARE_USER = ''
TEST_VMWARE_PASSWORD = ''

# OpenStack credentials
TEST_OPENSTACK_HOST = ''
TEST_OPENSTACK_USER = ''
TEST_OPENSTACK_PASSWORD = ''
TEST_OPENSTACK_PROJECT = ''

# Azure credentials.
TEST_AZURE_USERNAME = ''
TEST_AZURE_KEY_DATA = ''
4 awx/settings/minikube.py (new file)
@@ -0,0 +1,4 @@
BROADCAST_WEBSOCKET_SECRET = '🤖starscream🤖'
BROADCAST_WEBSOCKET_PORT = 8013
BROADCAST_WEBSOCKET_VERIFY_CERT = False
BROADCAST_WEBSOCKET_PROTOCOL = 'http'
@@ -30,10 +30,6 @@ SECRET_KEY = None
# See https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts
ALLOWED_HOSTS = []

# Absolute filesystem path to the directory for job status stdout
# This directory should not be web-accessible
JOBOUTPUT_ROOT = '/var/lib/awx/job_status/'

# The heartbeat file for the tower scheduler
SCHEDULE_METADATA_LOCATION = '/var/lib/awx/.tower_cycle'

@@ -46,15 +42,6 @@ AWX_VENV_PATH = os.path.join(BASE_VENV_PATH, "awx")

AWX_ISOLATED_USERNAME = 'awx'

LOGGING['handlers']['tower_warnings']['filename'] = '/var/log/tower/tower.log'  # noqa
LOGGING['handlers']['callback_receiver']['filename'] = '/var/log/tower/callback_receiver.log'  # noqa
LOGGING['handlers']['dispatcher']['filename'] = '/var/log/tower/dispatcher.log'  # noqa
LOGGING['handlers']['wsbroadcast']['filename'] = '/var/log/tower/wsbroadcast.log'  # noqa
LOGGING['handlers']['task_system']['filename'] = '/var/log/tower/task_system.log'  # noqa
LOGGING['handlers']['management_playbooks']['filename'] = '/var/log/tower/management_playbooks.log'  # noqa
LOGGING['handlers']['system_tracking_migrations']['filename'] = '/var/log/tower/tower_system_tracking_migrations.log'  # noqa
LOGGING['handlers']['rbac_migrations']['filename'] = '/var/log/tower/tower_rbac_migrations.log'  # noqa

# Store a snapshot of default settings at this point before loading any
# customizable config files.
DEFAULTS_SNAPSHOT = {}

292 awx/sso/conf.py
@ -842,6 +842,298 @@ register(
|
||||
placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
|
||||
)
|
||||
|
||||
###############################################################################
|
||||
# GITHUB ENTERPRISE OAUTH2 AUTHENTICATION SETTINGS
|
||||
###############################################################################
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_CALLBACK_URL',
|
||||
field_class=fields.CharField,
|
||||
read_only=True,
|
||||
default=SocialAuthCallbackURL('github-enterprise'),
|
||||
label=_('GitHub Enterprise OAuth2 Callback URL'),
|
||||
help_text=_('Provide this URL as the callback URL for your application as part '
|
||||
'of your registration process. Refer to the Ansible Tower '
|
||||
'documentation for more detail.'),
|
||||
category=_('GitHub Enterprise OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
depends_on=['TOWER_URL_BASE'],
|
||||
)
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL',
|
||||
field_class=fields.CharField,
|
||||
allow_blank=True,
|
||||
default='',
|
||||
label=_('GitHub Enterprise URL'),
|
||||
help_text=_('The URL for your Github Enterprise instance, e.g.: http(s)://hostname/. Refer to Github Enterprise '
|
||||
'documentation for more details.'),
|
||||
category=_('GitHub Enterprise OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
)
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL',
|
||||
field_class=fields.CharField,
|
||||
allow_blank=True,
|
||||
default='',
|
||||
label=_('GitHub Enterprise API URL'),
|
||||
help_text=_('The API URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/api/v3/. Refer to Github '
|
||||
'Enterprise documentation for more details.'),
|
||||
category=_('GitHub Enterprise OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
)
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY',
|
||||
field_class=fields.CharField,
|
||||
allow_blank=True,
|
||||
default='',
|
||||
label=_('GitHub Enterprise OAuth2 Key'),
|
||||
help_text=_('The OAuth2 key (Client ID) from your GitHub Enterprise developer application.'),
|
||||
category=_('GitHub Enterprise OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
)
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET',
|
||||
field_class=fields.CharField,
|
||||
allow_blank=True,
|
||||
default='',
|
||||
label=_('GitHub Enterprise OAuth2 Secret'),
|
||||
help_text=_('The OAuth2 secret (Client Secret) from your GitHub Enterprise developer application.'),
|
||||
category=_('GitHub OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
encrypted=True,
|
||||
)
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORGANIZATION_MAP',
|
||||
field_class=SocialOrganizationMapField,
|
||||
allow_null=True,
|
||||
default=None,
|
||||
label=_('GitHub Enterprise OAuth2 Organization Map'),
|
||||
help_text=SOCIAL_AUTH_ORGANIZATION_MAP_HELP_TEXT,
|
||||
category=_('GitHub Enterprise OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
placeholder=SOCIAL_AUTH_ORGANIZATION_MAP_PLACEHOLDER,
|
||||
)
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_MAP',
|
||||
field_class=SocialTeamMapField,
|
||||
allow_null=True,
|
||||
default=None,
|
||||
label=_('GitHub Enterprise OAuth2 Team Map'),
|
||||
help_text=SOCIAL_AUTH_TEAM_MAP_HELP_TEXT,
|
||||
category=_('GitHub Enterprise OAuth2'),
|
||||
category_slug='github-enterprise',
|
||||
placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
|
||||
)
|
||||
|
||||
###############################################################################
|
||||
# GITHUB ENTERPRISE ORG OAUTH2 AUTHENTICATION SETTINGS
|
||||
###############################################################################
|
||||
|
||||
register(
|
||||
'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_CALLBACK_URL',
|
||||
field_class=fields.CharField,
|
||||
read_only=True,
|
||||
default=SocialAuthCallbackURL('github-enterprise-org'),
|
||||
label=_('GitHub Enterprise Organization OAuth2 Callback URL'),
|
||||
    help_text=_('Provide this URL as the callback URL for your application as part '
                'of your registration process. Refer to the Ansible Tower '
                'documentation for more detail.'),
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
    depends_on=['TOWER_URL_BASE'],
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_URL',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Organization URL'),
    help_text=_('The URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/. Refer to GitHub Enterprise '
                'documentation for more details.'),
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_API_URL',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Organization API URL'),
    help_text=_('The API URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/api/v3/. Refer to GitHub '
                'Enterprise documentation for more details.'),
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Organization OAuth2 Key'),
    help_text=_('The OAuth2 key (Client ID) from your GitHub Enterprise organization application.'),
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Organization OAuth2 Secret'),
    help_text=_('The OAuth2 secret (Client Secret) from your GitHub Enterprise organization application.'),
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
    encrypted=True,
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Organization Name'),
    help_text=_('The name of your GitHub Enterprise organization, as used in your '
                'organization\'s URL: https://github.com/<yourorg>/.'),
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_ORGANIZATION_MAP',
    field_class=SocialOrganizationMapField,
    allow_null=True,
    default=None,
    label=_('GitHub Enterprise Organization OAuth2 Organization Map'),
    help_text=SOCIAL_AUTH_ORGANIZATION_MAP_HELP_TEXT,
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
    placeholder=SOCIAL_AUTH_ORGANIZATION_MAP_PLACEHOLDER,
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_TEAM_MAP',
    field_class=SocialTeamMapField,
    allow_null=True,
    default=None,
    label=_('GitHub Enterprise Organization OAuth2 Team Map'),
    help_text=SOCIAL_AUTH_TEAM_MAP_HELP_TEXT,
    category=_('GitHub Enterprise Organization OAuth2'),
    category_slug='github-enterprise-org',
    placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
)

###############################################################################
# GITHUB ENTERPRISE TEAM OAUTH2 AUTHENTICATION SETTINGS
###############################################################################

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_CALLBACK_URL',
    field_class=fields.CharField,
    read_only=True,
    default=SocialAuthCallbackURL('github-enterprise-team'),
    label=_('GitHub Enterprise Team OAuth2 Callback URL'),
    help_text=_('Create an organization-owned application at '
                'https://github.com/organizations/<yourorg>/settings/applications '
                'and obtain an OAuth2 key (Client ID) and secret (Client Secret). '
                'Provide this URL as the callback URL for your application.'),
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
    depends_on=['TOWER_URL_BASE'],
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_URL',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Team URL'),
    help_text=_('The URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/. Refer to GitHub Enterprise '
                'documentation for more details.'),
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_API_URL',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Team API URL'),
    help_text=_('The API URL for your GitHub Enterprise instance, e.g.: http(s)://hostname/api/v3/. Refer to GitHub '
                'Enterprise documentation for more details.'),
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Team OAuth2 Key'),
    help_text=_('The OAuth2 key (Client ID) from your GitHub Enterprise organization application.'),
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Team OAuth2 Secret'),
    help_text=_('The OAuth2 secret (Client Secret) from your GitHub Enterprise organization application.'),
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
    encrypted=True,
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID',
    field_class=fields.CharField,
    allow_blank=True,
    default='',
    label=_('GitHub Enterprise Team ID'),
    help_text=_('Find the numeric team ID using the GitHub Enterprise API: '
                'http://fabian-kostadinov.github.io/2015/01/16/how-to-find-a-github-team-id/.'),
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ORGANIZATION_MAP',
    field_class=SocialOrganizationMapField,
    allow_null=True,
    default=None,
    label=_('GitHub Enterprise Team OAuth2 Organization Map'),
    help_text=SOCIAL_AUTH_ORGANIZATION_MAP_HELP_TEXT,
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
    placeholder=SOCIAL_AUTH_ORGANIZATION_MAP_PLACEHOLDER,
)

register(
    'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_TEAM_MAP',
    field_class=SocialTeamMapField,
    allow_null=True,
    default=None,
    label=_('GitHub Enterprise Team OAuth2 Team Map'),
    help_text=SOCIAL_AUTH_TEAM_MAP_HELP_TEXT,
    category=_('GitHub Enterprise Team OAuth2'),
    category_slug='github-enterprise-team',
    placeholder=SOCIAL_AUTH_TEAM_MAP_PLACEHOLDER,
)

###############################################################################
# MICROSOFT AZURE ACTIVE DIRECTORY SETTINGS
###############################################################################

@ -187,6 +187,26 @@ class AuthenticationBackendsField(fields.StringListField):
            'SOCIAL_AUTH_GITHUB_TEAM_SECRET',
            'SOCIAL_AUTH_GITHUB_TEAM_ID',
        ]),
        ('social_core.backends.github_enterprise.GithubEnterpriseOAuth2', [
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_URL',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_API_URL',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_KEY',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_SECRET',
        ]),
        ('social_core.backends.github_enterprise.GithubEnterpriseOrganizationOAuth2', [
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_URL',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_API_URL',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_KEY',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_SECRET',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_ORG_NAME',
        ]),
        ('social_core.backends.github_enterprise.GithubEnterpriseTeamOAuth2', [
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_URL',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_API_URL',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_KEY',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_SECRET',
            'SOCIAL_AUTH_GITHUB_ENTERPRISE_TEAM_ID',
        ]),
        ('social_core.backends.azuread.AzureADOAuth2', [
            'SOCIAL_AUTH_AZUREAD_OAUTH2_KEY',
            'SOCIAL_AUTH_AZUREAD_OAUTH2_SECRET',
@ -445,7 +465,8 @@ class LDAPGroupTypeField(fields.ChoiceField, DependsOnMixin):

    default_error_messages = {
        'type_error': _('Expected an instance of LDAPGroupType but got {input_type} instead.'),
        'missing_parameters': _('Missing required parameters in {dependency}.')
        'missing_parameters': _('Missing required parameters in {dependency}.'),
        'invalid_parameters': _('Invalid group_type parameters. Expected instance of dict but got {parameters_type} instead.')
    }

    def __init__(self, choices=None, **kwargs):
@ -465,7 +486,6 @@ class LDAPGroupTypeField(fields.ChoiceField, DependsOnMixin):
        if not data:
            return None

        params = self.get_depends_on() or {}
        cls = find_class_in_modules(data)
        if not cls:
            return None
@ -475,8 +495,16 @@ class LDAPGroupTypeField(fields.ChoiceField, DependsOnMixin):
        # Backwards compatibility. Before AUTH_LDAP_GROUP_TYPE_PARAMS existed
        # MemberDNGroupType was the only group type, of the underlying lib, that
        # took a parameter.
        params = self.get_depends_on() or {}
        params_sanitized = dict()
        for attr in inspect.getargspec(cls.__init__).args[1:]:

        cls_args = inspect.getargspec(cls.__init__).args[1:]

        if cls_args:
            if not isinstance(params, dict):
                self.fail('invalid_parameters', parameters_type=type(params))

            for attr in cls_args:
                if attr in params:
                    params_sanitized[attr] = params[attr]


@ -25,7 +25,7 @@ class BaseRedirectView(RedirectView):
    def get_redirect_url(self, *args, **kwargs):
        last_path = self.request.COOKIES.get('lastPath', '')
        last_path = urllib.parse.quote(urllib.parse.unquote(last_path).strip('"'))
        url = reverse('ui:index')
        url = reverse('ui_next:index')
        if last_path:
            return '%s#%s' % (url, last_path)
        else:

8 awx/ui/context_processors.py Normal file
@ -0,0 +1,8 @@
import base64
import os


def csp(request):
    return {
        'csp_nonce': base64.encodebytes(os.urandom(32)).decode().rstrip(),
    }
@ -6,4 +6,5 @@ coverage
build
node_modules
dist
images
images
instrumented

@ -8,8 +8,14 @@
      "modules": true
    }
  },
  "plugins": ["react-hooks"],
  "extends": ["airbnb", "prettier", "prettier/react"],
  "plugins": ["react-hooks", "jsx-a11y", "i18next"],
  "extends": [
    "airbnb",
    "prettier",
    "prettier/react",
    "plugin:jsx-a11y/strict",
    "plugin:i18next/recommended"
  ],
  "settings": {
    "react": {
      "version": "16.5.2"
@ -24,6 +30,70 @@
    "window": true
  },
  "rules": {
    "i18next/no-literal-string": [
      2,
      {
        "markupOnly": true,
        "ignoreAttribute": [
          "to",
          "streamType",
          "path",
          "component",
          "variant",
          "key",
          "position",
          "promptName",
          "color",
          "promptId",
          "headingLevel",
          "size",
          "target",
          "autoComplete",
          "trigger",
          "from",
          "name",
          "fieldId",
          "css",
          "gutter",
          "dataCy",
          "tooltipMaxWidth",
          "mode",
          "aria-labelledby",
          "aria-hidden",
          "sortKey",
          "ouiaId",
          "credentialTypeNamespace",
          "link",
          "value",
          "credentialTypeKind",
          "linkTo",
          "scrollToAlignment",
          "displayKey",
          "sortedColumnKey",
          "maxHeight",
          "role",
          "aria-haspopup",
          "dropDirection",
          "resizeOrientation",
          "src",
          "theme",
          "gridColumns"
        ],
        "ignore": ["Ansible", "Tower", "JSON", "YAML", "lg", "START"],
        "ignoreComponent": [
          "code",
          "Omit",
          "PotentialLink",
          "TypeRedirect",
          "Radio",
          "RunOnRadio",
          "NodeTypeLetter",
          "SelectableItem",
          "Dash"
        ],
        "ignoreCallee": ["describe"]
      }
    ],
    "camelcase": "off",
    "arrow-parens": "off",
    "comma-dangle": "off",

@ -57,12 +57,12 @@ The UI is built using [ReactJS](https://reactjs.org/docs/getting-started.html) a

The AWX UI requires the following:

- Node 10.x LTS
- Node 14.x LTS
- NPM 6.x LTS

Run the following to install all the dependencies:
```bash
(host) $ npm run install
(host) $ npm install
```

#### Build the User Interface

@ -1,4 +1,4 @@
FROM node:10
FROM node:14
ARG NPMRC_FILE=.npmrc
ENV NPMRC_FILE=${NPMRC_FILE}
ARG TARGET='https://awx:8043'

@ -1,7 +1,7 @@
# AWX-PF

## Requirements
- node 10.x LTS, npm 6.x LTS, make, git
- node 14.x LTS, npm 6.x LTS, make, git

## Development
The API development server will need to be running. See [CONTRIBUTING.md](../../CONTRIBUTING.md).
@ -15,6 +15,19 @@ npm --prefix=awx/ui_next install
npm --prefix=awx/ui_next start
```

### Build for the Development Containers
If you just want to build a ui for the container-based awx development
environment, use these make targets:

```shell
# The ui will be reachable at https://localhost:8043 or
# http://localhost:8013
make ui-devel

# clean up
make clean-ui
```

### Using an External Server
If you normally run awx on an external host/server (in this example, `awx.local`),
you'll need to use the `TARGET` environment variable when starting the ui development
server:

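A plausible invocation, assuming the development proxy reads `TARGET` the way the Dockerfile above does (host name and port are illustrative):

```shell
TARGET='https://awx.local:8043' npm --prefix=awx/ui_next start
```
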
27 awx/ui_next/docs/APP_ARCHITECTURE.md Normal file
@ -0,0 +1,27 @@
# Application Architecture

## Local Storage Integration
The `useStorage` hook integrates with the browser's localStorage API.
It accepts a localStorage key as its only argument and returns a state
variable and a setter function for that state variable. The hook enables
bidirectional data transfer between tabs via an event listener that
is registered with the Web Storage API.

![](images/useStorage.png)

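A minimal sketch of such a hook follows. This is illustrative only, not the exact implementation; it assumes the key argument and cross-tab `storage` event wiring described above:

```jsx
import { useState, useEffect } from 'react';

// Illustrative sketch of a localStorage-backed state hook.
function useStorage(key) {
  const [storageVal, setStorageVal] = useState(
    window.localStorage.getItem(key)
  );

  useEffect(() => {
    // Re-read the value whenever another tab (or this one) writes to
    // localStorage and a 'storage' event fires.
    const sync = () => setStorageVal(window.localStorage.getItem(key));
    window.addEventListener('storage', sync);
    return () => window.removeEventListener('storage', sync);
  }, [key]);

  const setValue = val => {
    if (val === null) {
      window.localStorage.removeItem(key);
    } else {
      window.localStorage.setItem(key, val);
    }
    setStorageVal(val);
  };

  return [storageVal, setValue];
}

export default useStorage;
```
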
The `useStorage` hook currently lives in the `AppContainer` component. It
can be relocated to a more general location if and when the need
ever arises.

## Session Expiration
Session timeout state is communicated to the client in the HTTP(S)
response headers. Every HTTP(S) response is intercepted to read the
session expiration time before being passed into the rest of the
application. A timeout date is computed from the intercepted HTTP(S)
headers and is pushed into local storage, where it can be read using
standard Web Storage APIs or other utilities, such as `useStorage`.

![](images/sessionExpiration.png)

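A hedged consumer sketch of that stored timeout date follows; the real application imports `SESSION_TIMEOUT_KEY` from `src/constants`, and the key's value here is assumed for illustration:

```jsx
import { useEffect } from 'react';

// Assumed key value; the app itself imports SESSION_TIMEOUT_KEY
// from '../constants'.
const SESSION_TIMEOUT_KEY = 'session-timeout';

// Calls onExpired once the stored timeout date has passed.
function useSessionExpiration(onExpired) {
  useEffect(() => {
    const check = () => {
      const expiresAt = Number(
        window.localStorage.getItem(SESSION_TIMEOUT_KEY)
      );
      if (expiresAt && Date.now() >= expiresAt) {
        onExpired();
      }
    };
    window.addEventListener('storage', check);
    return () => window.removeEventListener('storage', check);
  }, [onExpired]);
}

export default useSessionExpiration;
```
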
BIN awx/ui_next/docs/images/sessionExpiration.png Normal file (binary file not shown; 70 KiB)
BIN awx/ui_next/docs/images/useStorage.png Normal file (binary file not shown; 57 KiB)
8423 awx/ui_next/package-lock.json generated (file diff suppressed because it is too large)
@ -3,15 +3,16 @@
  "version": "0.1.0",
  "private": true,
  "engines": {
    "node": "10.x"
    "node": "14.x"
  },
  "dependencies": {
    "@lingui/react": "^2.9.1",
    "@patternfly/patternfly": "4.59.1",
    "@patternfly/react-core": "4.75.2",
    "@patternfly/react-icons": "4.7.16",
    "@patternfly/patternfly": "4.70.2",
    "@patternfly/react-core": "4.84.3",
    "@patternfly/react-icons": "4.7.22",
    "@patternfly/react-table": "^4.19.15",
    "ansi-to-html": "^0.6.11",
    "axios": "^0.18.1",
    "axios": "^0.21.1",
    "codemirror": "^5.47.0",
    "d3": "^5.12.0",
    "dagre": "^0.8.4",
@ -30,6 +31,7 @@
  },
  "devDependencies": {
    "@babel/polyfill": "^7.8.7",
    "@cypress/instrument-cra": "^1.4.0",
    "@lingui/cli": "^2.9.2",
    "@lingui/macro": "^2.9.1",
    "@nteract/mockument": "^1.0.4",
@ -41,8 +43,9 @@
    "eslint-config-airbnb": "^17.1.0",
    "eslint-config-prettier": "^5.0.0",
    "eslint-import-resolver-webpack": "0.11.1",
    "eslint-plugin-i18next": "^5.0.0",
    "eslint-plugin-import": "^2.14.0",
    "eslint-plugin-jsx-a11y": "^6.1.1",
    "eslint-plugin-jsx-a11y": "^6.4.1",
    "eslint-plugin-react": "^7.11.1",
    "eslint-plugin-react-hooks": "^2.2.0",
    "http-proxy-middleware": "^1.0.3",
@ -53,7 +56,8 @@
  },
  "scripts": {
    "start": "PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts start",
    "build": "react-scripts build",
    "start-instrumented": "DEBUG=instrument-cra PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts -r @cypress/instrument-cra start",
    "build": "INLINE_RUNTIME_CHUNK=false react-scripts build",
    "test": "TZ='UTC' react-scripts test --coverage --watchAll=false",
    "test-watch": "TZ='UTC' react-scripts test",
    "eject": "react-scripts eject",

@ -1,6 +1,15 @@
<!DOCTYPE html>
<html lang="en">
  <head>
    <% if (process.env.NODE_ENV === 'production') { %>
    <script nonce="{{ csp_nonce }}" type="text/javascript">
      window.NONCE_ID = '{{ csp_nonce }}';
    </script>
    <meta
      http-equiv="Content-Security-Policy"
      content="default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'unsafe-inline'; script-src 'self' 'nonce-{{ csp_nonce }}' *.pendo.io; img-src 'self' *.pendo.io data:;"
    />
    <% } %>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
@ -12,6 +21,10 @@
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="app" style="height: 100%"></div>
    <% if (process.env.NODE_ENV === 'production') { %>
    <style nonce="{{ csp_nonce }}">.app{height: 100%;}</style><div id="app" class="app"></div>
    <% } else { %>
    <div id="app" style="height: 100%"></div>
    <% } %>
  </body>
</html>

25 awx/ui_next/public/installing.html Normal file
@ -0,0 +1,25 @@
<!DOCTYPE html>
<html lang="en-US">
  <head>
    <meta
      http-equiv="Content-Security-Policy"
      content="default-src 'self'; connect-src 'self' ws: wss:; style-src 'self' 'nonce-{{ csp_nonce }}'; script-src 'self' 'nonce-{{ csp_nonce }}' *.pendo.io; img-src 'self' *.pendo.io data:;"
    />
    <meta charset="utf-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <script nonce="{{ csp_nonce }}">
      setInterval(function() {
        window.location = '/';
      }, 10000);
    </script>
  </head>
  <body>
    <div>
      <span>
        <p>AWX is installing.</p>
        <p>This page will refresh when complete.</p>
      </span>
    </div>
  </body>
</html>
@ -30,7 +30,12 @@ const ProtectedRoute = ({ children, ...rest }) =>

function App() {
  const catalogs = { en, ja };
  const language = getLanguageWithoutRegionCode(navigator);
  let language = getLanguageWithoutRegionCode(navigator);
  if (!Object.keys(catalogs).includes(language)) {
    // If there isn't a string catalog available for the browser's
    // preferred language, default to one that has strings.
    language = 'en';
  }
  const match = useRouteMatch();
  const { hash, search, pathname } = useLocation();

@ -1,5 +1,5 @@
import React from 'react';

import { act } from 'react-dom/test-utils';
import { mountWithContexts } from '../testUtils/enzymeHelpers';

import App from './App';
@ -7,8 +7,11 @@ import App from './App';
jest.mock('./api');

describe('<App />', () => {
  test('renders ok', () => {
    const wrapper = mountWithContexts(<App />);
  test('renders ok', async () => {
    let wrapper;
    await act(async () => {
      wrapper = mountWithContexts(<App />);
    });
    expect(wrapper.length).toBe(1);
  });
});

@ -1,6 +1,13 @@
import axios from 'axios';

import { SESSION_TIMEOUT_KEY } from '../constants';
import { encodeQueryString } from '../util/qs';
import debounce from '../util/debounce';

const updateStorage = debounce((key, val) => {
  window.localStorage.setItem(key, val);
  window.dispatchEvent(new Event('storage'));
}, 500);

const defaultHttp = axios.create({
  xsrfCookieName: 'csrftoken',
@ -10,6 +17,15 @@ const defaultHttp = axios.create({
  },
});

defaultHttp.interceptors.response.use(response => {
  const timeout = response?.headers['session-timeout'];
  if (timeout) {
    const timeoutDate = new Date().getTime() + timeout * 1000;
    updateStorage(SESSION_TIMEOUT_KEY, String(timeoutDate));
  }
  return response;
});

class Base {
  constructor(http = defaultHttp, baseURL) {
    this.http = http;

@ -1,5 +1,7 @@
import ActivityStream from './models/ActivityStream';
import AdHocCommands from './models/AdHocCommands';
import Applications from './models/Applications';
import Auth from './models/Auth';
import Config from './models/Config';
import CredentialInputSources from './models/CredentialInputSources';
import CredentialTypes from './models/CredentialTypes';
@ -32,13 +34,16 @@ import Tokens from './models/Tokens';
import UnifiedJobTemplates from './models/UnifiedJobTemplates';
import UnifiedJobs from './models/UnifiedJobs';
import Users from './models/Users';
import WorkflowApprovals from './models/WorkflowApprovals';
import WorkflowApprovalTemplates from './models/WorkflowApprovalTemplates';
import WorkflowJobTemplateNodes from './models/WorkflowJobTemplateNodes';
import WorkflowJobTemplates from './models/WorkflowJobTemplates';
import WorkflowJobs from './models/WorkflowJobs';

const ActivityStreamAPI = new ActivityStream();
const AdHocCommandsAPI = new AdHocCommands();
const ApplicationsAPI = new Applications();
const AuthAPI = new Auth();
const ConfigAPI = new Config();
const CredentialInputSourcesAPI = new CredentialInputSources();
const CredentialTypesAPI = new CredentialTypes();
@ -71,14 +76,17 @@ const TokensAPI = new Tokens();
const UnifiedJobTemplatesAPI = new UnifiedJobTemplates();
const UnifiedJobsAPI = new UnifiedJobs();
const UsersAPI = new Users();
const WorkflowApprovalsAPI = new WorkflowApprovals();
const WorkflowApprovalTemplatesAPI = new WorkflowApprovalTemplates();
const WorkflowJobTemplateNodesAPI = new WorkflowJobTemplateNodes();
const WorkflowJobTemplatesAPI = new WorkflowJobTemplates();
const WorkflowJobsAPI = new WorkflowJobs();

export {
  ActivityStreamAPI,
  AdHocCommandsAPI,
  ApplicationsAPI,
  AuthAPI,
  ConfigAPI,
  CredentialInputSourcesAPI,
  CredentialTypesAPI,
@ -111,6 +119,7 @@ export {
  UnifiedJobTemplatesAPI,
  UnifiedJobsAPI,
  UsersAPI,
  WorkflowApprovalsAPI,
  WorkflowApprovalTemplatesAPI,
  WorkflowJobTemplateNodesAPI,
  WorkflowJobTemplatesAPI,

10 awx/ui_next/src/api/models/ActivityStream.js Normal file
@ -0,0 +1,10 @@
import Base from '../Base';

class ActivityStream extends Base {
  constructor(http) {
    super(http);
    this.baseUrl = '/api/v2/activity_stream/';
  }
}

export default ActivityStream;

10 awx/ui_next/src/api/models/Auth.js Normal file
@ -0,0 +1,10 @@
import Base from '../Base';

class Auth extends Base {
  constructor(http) {
    super(http);
    this.baseUrl = '/api/v2/auth/';
  }
}

export default Auth;
@ -36,6 +36,10 @@ class Jobs extends RelaunchMixin(Base) {
    return this.http.post(`/api/v2${getBaseURL(type)}${id}/cancel/`);
  }

  readCredentials(id, type) {
    return this.http.get(`/api/v2${getBaseURL(type)}${id}/credentials/`);
  }

  readDetail(id, type) {
    return this.http.get(`/api/v2${getBaseURL(type)}${id}/`);
  }

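A hedged usage sketch of the new `readCredentials` method; the job id and `type` value are illustrative, with `type` feeding `getBaseURL` to pick the right endpoint:

```jsx
import { JobsAPI } from '../api';

async function logJobCredentials() {
  // Fetch the credentials attached to playbook job 42 (id/type illustrative).
  const { data } = await JobsAPI.readCredentials(42, 'playbook');
  data.results.forEach(cred => console.log(cred.name));
}
```
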
18 awx/ui_next/src/api/models/WorkflowApprovals.js Normal file
@ -0,0 +1,18 @@
import Base from '../Base';

class WorkflowApprovals extends Base {
  constructor(http) {
    super(http);
    this.baseUrl = '/api/v2/workflow_approvals/';
  }

  approve(id) {
    return this.http.post(`${this.baseUrl}${id}/approve/`);
  }

  deny(id) {
    return this.http.post(`${this.baseUrl}${id}/deny/`);
  }
}

export default WorkflowApprovals;
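Callers would go through the shared singleton exported from `src/api`; a hedged sketch (the approval id is illustrative):

```jsx
import { WorkflowApprovalsAPI } from '../api';

async function resolveApproval() {
  // Approve pending workflow approval 7; use deny() to reject it instead.
  await WorkflowApprovalsAPI.approve(7);
  // await WorkflowApprovalsAPI.deny(7);
}
```
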
@ -55,6 +55,19 @@ class WorkflowJobTemplateNodes extends Base {
  readCredentials(id) {
    return this.http.get(`${this.baseUrl}${id}/credentials/`);
  }

  associateCredentials(id, credentialId) {
    return this.http.post(`${this.baseUrl}${id}/credentials/`, {
      id: credentialId,
    });
  }

  disassociateCredentials(id, credentialId) {
    return this.http.post(`${this.baseUrl}${id}/credentials/`, {
      id: credentialId,
      disassociate: true,
    });
  }
}

export default WorkflowJobTemplateNodes;

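Both methods POST to the node's related `credentials/` endpoint, with `disassociate: true` flagging removal; a hedged usage sketch (ids illustrative):

```jsx
import { WorkflowJobTemplateNodesAPI } from '../api';

async function swapNodeCredential() {
  // Attach credential 5 to workflow node 3, then detach it again.
  await WorkflowJobTemplateNodesAPI.associateCredentials(3, 5);
  await WorkflowJobTemplateNodesAPI.disassociateCredentials(3, 5);
}
```
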
@ -12,8 +12,8 @@ import {
import { BrandName } from '../../variables';
import brandLogoImg from './brand-logo.svg';

class About extends React.Component {
  static createSpeechBubble(version) {
function About({ ansible_version, version, isOpen, onClose, i18n }) {
  const createSpeechBubble = () => {
    let text = `${BrandName} ${version}`;
    let top = '';
    let bottom = '';
@ -28,31 +28,22 @@ class About extends React.Component {
    bottom = ` --${bottom}-- `;

    return top + text + bottom;
  }
  };

  constructor(props) {
    super(props);
  const speechBubble = createSpeechBubble();

    this.createSpeechBubble = this.constructor.createSpeechBubble.bind(this);
  }

  render() {
    const { ansible_version, version, isOpen, onClose, i18n } = this.props;

    const speechBubble = this.createSpeechBubble(version);

    return (
      <AboutModal
        isOpen={isOpen}
        onClose={onClose}
        productName={`Ansible ${BrandName}`}
        trademark={i18n._(t`Copyright 2019 Red Hat, Inc.`)}
        brandImageSrc={brandLogoImg}
        brandImageAlt={i18n._(t`Brand Image`)}
      >
        <pre>
          {speechBubble}
          {`
  return (
    <AboutModal
      isOpen={isOpen}
      onClose={onClose}
      productName={`Ansible ${BrandName}`}
      trademark={i18n._(t`Copyright 2019 Red Hat, Inc.`)}
      brandImageSrc={brandLogoImg}
      brandImageAlt={i18n._(t`Brand Image`)}
    >
      <pre>
        {speechBubble}
        {`
    \\
    \\   ^__^
        (oo)\\_______
@ -60,18 +51,17 @@ class About extends React.Component {
        ||----w |
        ||     ||
`}
        </pre>
        <TextContent>
          <TextList component="dl">
            <TextListItem component="dt">
              {i18n._(t`Ansible Version`)}
            </TextListItem>
            <TextListItem component="dd">{ansible_version}</TextListItem>
          </TextList>
        </TextContent>
      </AboutModal>
    );
  }
      </pre>
      <TextContent>
        <TextList component="dl">
          <TextListItem component="dt">
            {i18n._(t`Ansible Version`)}
          </TextListItem>
          <TextListItem component="dd">{ansible_version}</TextListItem>
        </TextList>
      </TextContent>
    </AboutModal>
  );
}

About.propTypes = {

@ -57,7 +57,7 @@ function AdHocCommands({ adHocItems, i18n, hasListItems }) {
    fetchData();
  }, [fetchData]);
  const {
    isloading: isLaunchLoading,
    isLoading: isLaunchLoading,
    error: launchError,
    request: launchAdHocCommands,
  } = useRequest(

@ -58,7 +58,7 @@ function AdHocCredentialStep({ i18n, credentialTypeId, onEnableLaunch }) {
    return <ContentError error={error} />;
  }
  if (isLoading) {
    return <ContentLoading error={error} />;
    return <ContentLoading />;
  }
  return (
    <Form>

@ -1,4 +1,4 @@
import React, { Fragment } from 'react';
import React, { Fragment, useState } from 'react';
import PropTypes from 'prop-types';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
@ -17,95 +17,57 @@ const readTeams = async queryParams => TeamsAPI.read(queryParams);

const readTeamsOptions = async () => TeamsAPI.readOptions();

class AddResourceRole extends React.Component {
  constructor(props) {
    super(props);

    this.state = {
      selectedResource: null,
      selectedResourceRows: [],
      selectedRoleRows: [],
      currentStepId: 1,
      maxEnabledStep: 1,
    };

    this.handleResourceCheckboxClick = this.handleResourceCheckboxClick.bind(
      this
    );
    this.handleResourceSelect = this.handleResourceSelect.bind(this);
    this.handleRoleCheckboxClick = this.handleRoleCheckboxClick.bind(this);
    this.handleWizardNext = this.handleWizardNext.bind(this);
    this.handleWizardSave = this.handleWizardSave.bind(this);
    this.handleWizardGoToStep = this.handleWizardGoToStep.bind(this);
  }

  handleResourceCheckboxClick(user) {
    const { selectedResourceRows, currentStepId } = this.state;
function AddResourceRole({ onSave, onClose, roles, i18n, resource }) {
  const [selectedResource, setSelectedResource] = useState(null);
  const [selectedResourceRows, setSelectedResourceRows] = useState([]);
  const [selectedRoleRows, setSelectedRoleRows] = useState([]);
  const [currentStepId, setCurrentStepId] = useState(1);
  const [maxEnabledStep, setMaxEnabledStep] = useState(1);

  const handleResourceCheckboxClick = user => {
    const selectedIndex = selectedResourceRows.findIndex(
      selectedRow => selectedRow.id === user.id
    );

    if (selectedIndex > -1) {
      selectedResourceRows.splice(selectedIndex, 1);
      const stateToUpdate = { selectedResourceRows };
      if (selectedResourceRows.length === 0) {
        stateToUpdate.maxEnabledStep = currentStepId;
        setMaxEnabledStep(currentStepId);
      }
      this.setState(stateToUpdate);
      setSelectedRoleRows(selectedResourceRows);
    } else {
      this.setState(prevState => ({
        selectedResourceRows: [...prevState.selectedResourceRows, user],
      }));
      setSelectedResourceRows([...selectedResourceRows, user]);
    }
  }

  handleRoleCheckboxClick(role) {
    const { selectedRoleRows } = this.state;
  };

  const handleRoleCheckboxClick = role => {
    const selectedIndex = selectedRoleRows.findIndex(
      selectedRow => selectedRow.id === role.id
    );

    if (selectedIndex > -1) {
      selectedRoleRows.splice(selectedIndex, 1);
      this.setState({ selectedRoleRows });
      setSelectedRoleRows(selectedRoleRows);
    } else {
      this.setState(prevState => ({
        selectedRoleRows: [...prevState.selectedRoleRows, role],
      }));
      setSelectedRoleRows([...selectedRoleRows, role]);
    }
  }
  };

  handleResourceSelect(resourceType) {
    this.setState({
      selectedResource: resourceType,
      selectedResourceRows: [],
      selectedRoleRows: [],
    });
  }
  const handleResourceSelect = resourceType => {
    setSelectedResource(resourceType);
    setSelectedResourceRows([]);
    setSelectedRoleRows([]);
  };

  handleWizardNext(step) {
    this.setState({
      currentStepId: step.id,
      maxEnabledStep: step.id,
    });
  }
  const handleWizardNext = step => {
    setCurrentStepId(step.id);
    setMaxEnabledStep(step.id);
  };

  handleWizardGoToStep(step) {
    this.setState({
      currentStepId: step.id,
    });
  }

  async handleWizardSave() {
    const { onSave } = this.props;
    const {
      selectedResourceRows,
      selectedRoleRows,
      selectedResource,
    } = this.state;
  const handleWizardGoToStep = step => {
    setCurrentStepId(step.id);
  };

  const handleWizardSave = async () => {
    try {
      const roleRequests = [];

@ -134,205 +96,198 @@ class AddResourceRole extends React.Component {
    } catch (err) {
      // TODO: handle this error
    }
  };

  // Object roles can be user only, so we remove them when
  // showing role choices for team access
  const selectableRoles = { ...roles };
  if (selectedResource === 'teams') {
    Object.keys(roles).forEach(key => {
      if (selectableRoles[key].user_only) {
        delete selectableRoles[key];
      }
    });
  }

  render() {
    const {
      selectedResource,
      selectedResourceRows,
      selectedRoleRows,
      currentStepId,
      maxEnabledStep,
    } = this.state;
    const { onClose, roles, i18n } = this.props;
    const userSearchColumns = [
      {
        name: i18n._(t`Username`),
        key: 'username__icontains',
        isDefault: true,
      },
      {
        name: i18n._(t`First Name`),
        key: 'first_name__icontains',
      },
      {
        name: i18n._(t`Last Name`),
        key: 'last_name__icontains',
      },
    ];
    const userSortColumns = [
      {
        name: i18n._(t`Username`),
        key: 'username',
      },
      {
        name: i18n._(t`First Name`),
        key: 'first_name',
      },
      {
        name: i18n._(t`Last Name`),
        key: 'last_name',
      },
    ];
    const teamSearchColumns = [
      {
        name: i18n._(t`Name`),
        key: 'name',
        isDefault: true,
      },
      {
        name: i18n._(t`Created By (Username)`),
        key: 'created_by__username',
      },
      {
        name: i18n._(t`Modified By (Username)`),
        key: 'modified_by__username',
      },
    ];

    // Object roles can be user only, so we remove them when
    // showing role choices for team access
    const selectableRoles = { ...roles };
    if (selectedResource === 'teams') {
      Object.keys(roles).forEach(key => {
        if (selectableRoles[key].user_only) {
          delete selectableRoles[key];
        }
      });
    }
    const teamSortColumns = [
      {
        name: i18n._(t`Name`),
        key: 'name',
      },
    ];

  const userSearchColumns = [
    {
      name: i18n._(t`Username`),
      key: 'username__icontains',
      isDefault: true,
    },
    {
      name: i18n._(t`First Name`),
      key: 'first_name__icontains',
    },
    {
      name: i18n._(t`Last Name`),
      key: 'last_name__icontains',
    },
  ];
  let wizardTitle = '';

  const userSortColumns = [
    {
      name: i18n._(t`Username`),
      key: 'username',
    },
    {
      name: i18n._(t`First Name`),
      key: 'first_name',
    },
    {
      name: i18n._(t`Last Name`),
      key: 'last_name',
    },
  ];
  switch (selectedResource) {
    case 'users':
      wizardTitle = i18n._(t`Add User Roles`);
      break;
    case 'teams':
      wizardTitle = i18n._(t`Add Team Roles`);
      break;
    default:
      wizardTitle = i18n._(t`Add Roles`);
  }

  const teamSearchColumns = [
    {
      name: i18n._(t`Name`),
      key: 'name',
      isDefault: true,
    },
    {
      name: i18n._(t`Created By (Username)`),
      key: 'created_by__username',
    },
    {
      name: i18n._(t`Modified By (Username)`),
      key: 'modified_by__username',
    },
  ];

  const teamSortColumns = [
    {
      name: i18n._(t`Name`),
      key: 'name',
    },
  ];

  let wizardTitle = '';

  switch (selectedResource) {
    case 'users':
      wizardTitle = i18n._(t`Add User Roles`);
      break;
    case 'teams':
      wizardTitle = i18n._(t`Add Team Roles`);
      break;
    default:
      wizardTitle = i18n._(t`Add Roles`);
  }

    const steps = [
      {
        id: 1,
        name: i18n._(t`Select a Resource Type`),
        component: (
          <div style={{ display: 'flex', flexWrap: 'wrap' }}>
            <div style={{ width: '100%', marginBottom: '10px' }}>
              {i18n._(
                t`Choose the type of resource that will be receiving new roles. For example, if you'd like to add new roles to a set of users please choose Users and click Next. You'll be able to select the specific resources in the next step.`
              )}
            </div>
            <SelectableCard
              isSelected={selectedResource === 'users'}
              label={i18n._(t`Users`)}
              dataCy="add-role-users"
              onClick={() => this.handleResourceSelect('users')}
            />
  const steps = [
    {
      id: 1,
      name: i18n._(t`Select a Resource Type`),
      component: (
        <div style={{ display: 'flex', flexWrap: 'wrap' }}>
          <div style={{ width: '100%', marginBottom: '10px' }}>
            {i18n._(
              t`Choose the type of resource that will be receiving new roles. For example, if you'd like to add new roles to a set of users please choose Users and click Next. You'll be able to select the specific resources in the next step.`
            )}
          </div>
          <SelectableCard
            isSelected={selectedResource === 'users'}
            label={i18n._(t`Users`)}
            ariaLabel={i18n._(t`Users`)}
            dataCy="add-role-users"
            onClick={() => handleResourceSelect('users')}
          />
          {resource?.type === 'credential' && !resource?.organization ? null : (
            <SelectableCard
              isSelected={selectedResource === 'teams'}
              label={i18n._(t`Teams`)}
              ariaLabel={i18n._(t`Teams`)}
              dataCy="add-role-teams"
              onClick={() => this.handleResourceSelect('teams')}
              onClick={() => handleResourceSelect('teams')}
            />
          </div>
        ),
        enableNext: selectedResource !== null,
      },
      {
        id: 2,
        name: i18n._(t`Select Items from List`),
        component: (
          <Fragment>
            {selectedResource === 'users' && (
              <SelectResourceStep
                searchColumns={userSearchColumns}
                sortColumns={userSortColumns}
                displayKey="username"
                onRowClick={this.handleResourceCheckboxClick}
                fetchItems={readUsers}
                fetchOptions={readUsersOptions}
                selectedLabel={i18n._(t`Selected`)}
                selectedResourceRows={selectedResourceRows}
                sortedColumnKey="username"
              />
            )}
            {selectedResource === 'teams' && (
              <SelectResourceStep
                searchColumns={teamSearchColumns}
                sortColumns={teamSortColumns}
                onRowClick={this.handleResourceCheckboxClick}
                fetchItems={readTeams}
                fetchOptions={readTeamsOptions}
                selectedLabel={i18n._(t`Selected`)}
                selectedResourceRows={selectedResourceRows}
              />
            )}
          </Fragment>
        ),
        enableNext: selectedResourceRows.length > 0,
        canJumpTo: maxEnabledStep >= 2,
      },
      {
        id: 3,
        name: i18n._(t`Select Roles to Apply`),
        component: (
          <SelectRoleStep
            onRolesClick={this.handleRoleCheckboxClick}
            roles={selectableRoles}
            selectedListKey={selectedResource === 'users' ? 'username' : 'name'}
            selectedListLabel={i18n._(t`Selected`)}
            selectedResourceRows={selectedResourceRows}
            selectedRoleRows={selectedRoleRows}
          />
        ),
        nextButtonText: i18n._(t`Save`),
        enableNext: selectedRoleRows.length > 0,
        canJumpTo: maxEnabledStep >= 3,
      },
    ];
          )}
        </div>
      ),
      enableNext: selectedResource !== null,
    },
    {
      id: 2,
      name: i18n._(t`Select Items from List`),
      component: (
        <Fragment>
          {selectedResource === 'users' && (
            <SelectResourceStep
              searchColumns={userSearchColumns}
              sortColumns={userSortColumns}
              displayKey="username"
              onRowClick={handleResourceCheckboxClick}
              fetchItems={readUsers}
              fetchOptions={readUsersOptions}
              selectedLabel={i18n._(t`Selected`)}
              selectedResourceRows={selectedResourceRows}
              sortedColumnKey="username"
            />
          )}
          {selectedResource === 'teams' && (
            <SelectResourceStep
              searchColumns={teamSearchColumns}
              sortColumns={teamSortColumns}
              onRowClick={handleResourceCheckboxClick}
              fetchItems={readTeams}
              fetchOptions={readTeamsOptions}
              selectedLabel={i18n._(t`Selected`)}
              selectedResourceRows={selectedResourceRows}
            />
          )}
        </Fragment>
      ),
      enableNext: selectedResourceRows.length > 0,
      canJumpTo: maxEnabledStep >= 2,
    },
    {
      id: 3,
      name: i18n._(t`Select Roles to Apply`),
      component: (
        <SelectRoleStep
          onRolesClick={handleRoleCheckboxClick}
          roles={selectableRoles}
          selectedListKey={selectedResource === 'users' ? 'username' : 'name'}
          selectedListLabel={i18n._(t`Selected`)}
          selectedResourceRows={selectedResourceRows}
          selectedRoleRows={selectedRoleRows}
        />
      ),
      nextButtonText: i18n._(t`Save`),
      enableNext: selectedRoleRows.length > 0,
      canJumpTo: maxEnabledStep >= 3,
    },
  ];

    const currentStep = steps.find(step => step.id === currentStepId);
  const currentStep = steps.find(step => step.id === currentStepId);

    // TODO: somehow internationalize steps and currentStep.nextButtonText
    return (
      <Wizard
        style={{ overflow: 'scroll' }}
        isOpen
        onNext={this.handleWizardNext}
        onClose={onClose}
        onSave={this.handleWizardSave}
        onGoToStep={this.handleWizardGoToStep}
        steps={steps}
        title={wizardTitle}
        nextButtonText={currentStep.nextButtonText || undefined}
        backButtonText={i18n._(t`Back`)}
        cancelButtonText={i18n._(t`Cancel`)}
      />
    );
  }
  // TODO: somehow internationalize steps and currentStep.nextButtonText
  return (
    <Wizard
      style={{ overflow: 'scroll' }}
      isOpen
      onNext={handleWizardNext}
      onClose={onClose}
      onSave={handleWizardSave}
      onGoToStep={step => handleWizardGoToStep(step)}
      steps={steps}
      title={wizardTitle}
      nextButtonText={currentStep.nextButtonText || undefined}
      backButtonText={i18n._(t`Back`)}
      cancelButtonText={i18n._(t`Cancel`)}
    />
  );
}

AddResourceRole.propTypes = {
  onClose: PropTypes.func.isRequired,
  onSave: PropTypes.func.isRequired,
  roles: PropTypes.shape(),
  resource: PropTypes.shape(),
};

AddResourceRole.defaultProps = {
  roles: {},
  resource: {},
};

export { AddResourceRole as _AddResourceRole };

@ -1,22 +1,46 @@
/* eslint-disable react/jsx-pascal-case */
import React from 'react';
import { shallow } from 'enzyme';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import { act } from 'react-dom/test-utils';

import {
  mountWithContexts,
  waitForElement,
} from '../../../testUtils/enzymeHelpers';
import AddResourceRole, { _AddResourceRole } from './AddResourceRole';
import { TeamsAPI, UsersAPI } from '../../api';

jest.mock('../../api');
jest.mock('../../api/models/Teams');
jest.mock('../../api/models/Users');

// TODO: Once error handling is functional in
// this component write tests for it

describe('<_AddResourceRole />', () => {
  UsersAPI.read.mockResolvedValue({
    data: {
      count: 2,
      results: [
        { id: 1, username: 'foo' },
        { id: 2, username: 'bar' },
        { id: 1, username: 'foo', url: '' },
        { id: 2, username: 'bar', url: '' },
      ],
    },
  });
  UsersAPI.readOptions.mockResolvedValue({
    data: { related: {}, actions: { GET: {} } },
  });
  TeamsAPI.read.mockResolvedValue({
    data: {
      count: 2,
      results: [
        { id: 1, name: 'Team foo', url: '' },
        { id: 2, name: 'Team bar', url: '' },
      ],
    },
  });
  TeamsAPI.readOptions.mockResolvedValue({
    data: { related: {}, actions: { GET: {} } },
  });
  const roles = {
    admin_role: {
      description: 'Can manage all aspects of the organization',
@ -39,186 +63,180 @@ describe('<_AddResourceRole />', () => {
      />
    );
  });
  test('handleRoleCheckboxClick properly updates state', () => {
    const wrapper = shallow(
      <_AddResourceRole
        onClose={() => {}}
        onSave={() => {}}
        roles={roles}
        i18n={{ _: val => val.toString() }}
      />
    );
    wrapper.setState({
      selectedRoleRows: [
        {
          description: 'Can manage all aspects of the organization',
          name: 'Admin',
          id: 1,
        },
      ],
  test('should save properly', async () => {
    let wrapper;
    act(() => {
      wrapper = mountWithContexts(
        <AddResourceRole onClose={() => {}} onSave={() => {}} roles={roles} />,
        { context: { network: { handleHttpError: () => {} } } }
      );
    });
    wrapper.instance().handleRoleCheckboxClick({
      description: 'Can manage all aspects of the organization',
      name: 'Admin',
      id: 1,
    });
    expect(wrapper.state('selectedRoleRows')).toEqual([]);
    wrapper.instance().handleRoleCheckboxClick({
      description: 'Can manage all aspects of the organization',
      name: 'Admin',
      id: 1,
    });
    expect(wrapper.state('selectedRoleRows')).toEqual([
      {
        description: 'Can manage all aspects of the organization',
        name: 'Admin',
        id: 1,
      },
    ]);
  });
  test('handleResourceCheckboxClick properly updates state', () => {
    const wrapper = shallow(
      <_AddResourceRole
        onClose={() => {}}
        onSave={() => {}}
        roles={roles}
        i18n={{ _: val => val.toString() }}
      />
    );
    wrapper.setState({
      selectedResourceRows: [
        {
          id: 1,
          username: 'foobar',
        },
      ],
    });
    wrapper.instance().handleResourceCheckboxClick({
      id: 1,
      username: 'foobar',
    });
    expect(wrapper.state('selectedResourceRows')).toEqual([]);
    wrapper.instance().handleResourceCheckboxClick({
      id: 1,
      username: 'foobar',
    });
    expect(wrapper.state('selectedResourceRows')).toEqual([
      {
        id: 1,
        username: 'foobar',
      },
    ]);
  });
  test('clicking user/team cards updates state', () => {
    const spy = jest.spyOn(_AddResourceRole.prototype, 'handleResourceSelect');
    const wrapper = mountWithContexts(
      <AddResourceRole onClose={() => {}} onSave={() => {}} roles={roles} />,
      { context: { network: { handleHttpError: () => {} } } }
    ).find('AddResourceRole');
    wrapper.update();

    // Step 1
    const selectableCardWrapper = wrapper.find('SelectableCard');
    expect(selectableCardWrapper.length).toBe(2);
    selectableCardWrapper.first().simulate('click');
    expect(spy).toHaveBeenCalledWith('users');
    expect(wrapper.state('selectedResource')).toBe('users');
    selectableCardWrapper.at(1).simulate('click');
    expect(spy).toHaveBeenCalledWith('teams');
    expect(wrapper.state('selectedResource')).toBe('teams');
    act(() => wrapper.find('SelectableCard[label="Users"]').prop('onClick')());
    wrapper.update();
    await act(async () =>
      wrapper.find('Button[type="submit"]').prop('onClick')()
    );
    wrapper.update();

    // Step 2
    await waitForElement(wrapper, 'EmptyStateBody', el => el.length === 0);
    act(() =>
      wrapper.find('DataListCheck[name="foo"]').invoke('onChange')(true)
    );
    wrapper.update();
    expect(wrapper.find('DataListCheck[name="foo"]').prop('checked')).toBe(
      true
    );
    act(() => wrapper.find('Button[type="submit"]').prop('onClick')());
    wrapper.update();

    // Step 3
    act(() =>
      wrapper.find('Checkbox[aria-label="Admin"]').invoke('onChange')(true)
    );
    wrapper.update();
    expect(wrapper.find('Checkbox[aria-label="Admin"]').prop('isChecked')).toBe(
      true
    );

    // Save
    await act(async () =>
      wrapper.find('Button[type="submit"]').prop('onClick')()
    );
    expect(UsersAPI.associateRole).toBeCalledWith(1, 1);
  });
  test('handleResourceSelect clears out selected lists and sets selectedResource', () => {
    const wrapper = shallow(
      <_AddResourceRole

  test('should successfully click user/team cards', async () => {
    let wrapper;
    act(() => {
      wrapper = mountWithContexts(
        <AddResourceRole onClose={() => {}} onSave={() => {}} roles={roles} />,
        { context: { network: { handleHttpError: () => {} } } }
      );
    });
    wrapper.update();

    const selectableCardWrapper = wrapper.find('SelectableCard');
    expect(selectableCardWrapper.length).toBe(2);
    act(() => wrapper.find('SelectableCard[label="Users"]').prop('onClick')());
    wrapper.update();

    await waitForElement(
      wrapper,
      'SelectableCard[label="Users"]',
      el => el.prop('isSelected') === true
    );
    act(() => wrapper.find('SelectableCard[label="Teams"]').prop('onClick')());
    wrapper.update();

    await waitForElement(
      wrapper,
      'SelectableCard[label="Teams"]',
      el => el.prop('isSelected') === true
    );
  });

  test('should reset values with resource type changes', async () => {
    let wrapper;
    act(() => {
      wrapper = mountWithContexts(
        <AddResourceRole onClose={() => {}} onSave={() => {}} roles={roles} />,
        { context: { network: { handleHttpError: () => {} } } }
      );
    });
    wrapper.update();

    // Step 1
    const selectableCardWrapper = wrapper.find('SelectableCard');
    expect(selectableCardWrapper.length).toBe(2);
    act(() => wrapper.find('SelectableCard[label="Users"]').prop('onClick')());
    wrapper.update();
    await act(async () =>
      wrapper.find('Button[type="submit"]').prop('onClick')()
    );
    wrapper.update();

    // Step 2
    await waitForElement(wrapper, 'EmptyStateBody', el => el.length === 0);
    act(() =>
      wrapper.find('DataListCheck[name="foo"]').invoke('onChange')(true)
    );
    wrapper.update();
    expect(wrapper.find('DataListCheck[name="foo"]').prop('checked')).toBe(
      true
    );
    act(() => wrapper.find('Button[type="submit"]').prop('onClick')());
    wrapper.update();

    // Step 3
    act(() =>
      wrapper.find('Checkbox[aria-label="Admin"]').invoke('onChange')(true)
    );
    wrapper.update();
    expect(wrapper.find('Checkbox[aria-label="Admin"]').prop('isChecked')).toBe(
      true
    );

    // Go back to step 1
    act(() => {
      wrapper
        .find('WizardNavItem[content="Select a Resource Type"]')
        .find('button')
        .prop('onClick')({ id: 1 });
    });
    wrapper.update();
    expect(
      wrapper
        .find('WizardNavItem[content="Select a Resource Type"]')
        .prop('isCurrent')
    ).toBe(true);

    // Go back to step 1 and this time select teams. Doing so should clear following steps
    act(() => wrapper.find('SelectableCard[label="Teams"]').prop('onClick')());
    wrapper.update();
    await act(async () =>
      wrapper.find('Button[type="submit"]').prop('onClick')()
    );
    wrapper.update();

    // Make sure no teams have been selected
    await waitForElement(wrapper, 'EmptyStateBody', el => el.length === 0);
    wrapper
      .find('DataListCheck')
      .map(item => expect(item.prop('checked')).toBe(false));
    act(() => wrapper.find('Button[type="submit"]').prop('onClick')());
    wrapper.update();

    // Make sure that no roles have been selected
    wrapper
      .find('Checkbox')
      .map(card => expect(card.prop('isChecked')).toBe(false));

    // Make sure the save button is disabled
    expect(wrapper.find('Button[type="submit"]').prop('isDisabled')).toBe(true);
  });

  test('should not display team as a choice in case credential does not have organization', () => {
    const wrapper = mountWithContexts(
      <AddResourceRole
        onClose={() => {}}
        onSave={() => {}}
        roles={roles}
        i18n={{ _: val => val.toString() }}
      />
    );
    wrapper.setState({
      selectedResource: 'teams',
      selectedResourceRows: [
        {
          id: 1,
          username: 'foobar',
        },
      ],
      selectedRoleRows: [
        {
          description: 'Can manage all aspects of the organization',
          id: 1,
          name: 'Admin',
        },
      ],
    });
    wrapper.instance().handleResourceSelect('users');
    expect(wrapper.state()).toEqual({
      selectedResource: 'users',
      selectedResourceRows: [],
      selectedRoleRows: [],
      currentStepId: 1,
      maxEnabledStep: 1,
    });
    wrapper.instance().handleResourceSelect('teams');
    expect(wrapper.state()).toEqual({
      selectedResource: 'teams',
      selectedResourceRows: [],
      selectedRoleRows: [],
      currentStepId: 1,
      maxEnabledStep: 1,
    });
  });
  test('handleWizardSave makes correct api calls, calls onSave when done', async () => {
    const handleSave = jest.fn();
    const wrapper = mountWithContexts(
      <AddResourceRole onClose={() => {}} onSave={handleSave} roles={roles} />,
        resource={{ type: 'credential', organization: null }}
      />,
      { context: { network: { handleHttpError: () => {} } } }
    ).find('AddResourceRole');
    wrapper.setState({
      selectedResource: 'users',
      selectedResourceRows: [
        {
          id: 1,
          username: 'foobar',
        },
      ],
      selectedRoleRows: [
        {
          description: 'Can manage all aspects of the organization',
          id: 1,
          name: 'Admin',
        },
        {
          description: 'May run any executable resources in the organization',
          id: 2,
          name: 'Execute',
        },
      ],
    });
    await wrapper.instance().handleWizardSave();
    expect(UsersAPI.associateRole).toHaveBeenCalledTimes(2);
    expect(handleSave).toHaveBeenCalled();
    wrapper.setState({
      selectedResource: 'teams',
      selectedResourceRows: [
        {
          id: 1,
          name: 'foobar',
        },
      ],
      selectedRoleRows: [
        {
          description: 'Can manage all aspects of the organization',
          id: 1,
          name: 'Admin',
        },
        {
          description: 'May run any executable resources in the organization',
          id: 2,
          name: 'Execute',
        },
      ],
    });
    await wrapper.instance().handleWizardSave();
    expect(TeamsAPI.associateRole).toHaveBeenCalledTimes(2);
    expect(handleSave).toHaveBeenCalled();
    );

    expect(wrapper.find('SelectableCard').length).toBe(1);
    wrapper.find('SelectableCard[label="Users"]').simulate('click');
    wrapper.update();
    expect(
      wrapper.find('SelectableCard[label="Users"]').prop('isSelected')
    ).toBe(true);
  });
});

@ -7,59 +7,55 @@ import { t } from '@lingui/macro';
import CheckboxCard from './CheckboxCard';
import SelectedList from '../SelectedList';

class RolesStep extends React.Component {
  render() {
    const {
      onRolesClick,
      roles,
      selectedListKey,
      selectedListLabel,
      selectedResourceRows,
      selectedRoleRows,
      i18n,
    } = this.props;

    return (
      <Fragment>
        <div>
          {i18n._(
            t`Choose roles to apply to the selected resources. Note that all selected roles will be applied to all selected resources.`
          )}
        </div>
        <div>
          {selectedResourceRows.length > 0 && (
            <SelectedList
              displayKey={selectedListKey}
              isReadOnly
              label={selectedListLabel || i18n._(t`Selected`)}
              selected={selectedResourceRows}
            />
          )}
        </div>
        <div
          style={{
            display: 'grid',
            gridTemplateColumns: '1fr 1fr',
            gap: '20px 20px',
            marginTop: '20px',
          }}
        >
          {Object.keys(roles).map(role => (
            <CheckboxCard
              description={roles[role].description}
              itemId={roles[role].id}
              isSelected={selectedRoleRows.some(
                item => item.id === roles[role].id
              )}
              key={roles[role].id}
              name={roles[role].name}
              onSelect={() => onRolesClick(roles[role])}
            />
          ))}
        </div>
      </Fragment>
    );
  }
function RolesStep({
  onRolesClick,
  roles,
  selectedListKey,
  selectedListLabel,
  selectedResourceRows,
  selectedRoleRows,
  i18n,
}) {
  return (
    <Fragment>
      <div>
        {i18n._(
          t`Choose roles to apply to the selected resources. Note that all selected roles will be applied to all selected resources.`
        )}
      </div>
      <div>
        {selectedResourceRows.length > 0 && (
          <SelectedList
            displayKey={selectedListKey}
            isReadOnly
            label={selectedListLabel || i18n._(t`Selected`)}
            selected={selectedResourceRows}
          />
        )}
      </div>
      <div
        style={{
          display: 'grid',
          gridTemplateColumns: '1fr 1fr',
          gap: '20px 20px',
          marginTop: '20px',
        }}
      >
        {Object.keys(roles).map(role => (
          <CheckboxCard
            description={roles[role].description}
            itemId={roles[role].id}
            isSelected={selectedRoleRows.some(
              item => item.id === roles[role].id
            )}
            key={roles[role].id}
            name={roles[role].name}
            onSelect={() => onRolesClick(roles[role])}
          />
        ))}
      </div>
    </Fragment>
  );
}

RolesStep.propTypes = {

@ -12,52 +12,44 @@ import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { FormSelect, FormSelectOption } from '@patternfly/react-core';

class AnsibleSelect extends React.Component {
  constructor(props) {
    super(props);
    this.onSelectChange = this.onSelectChange.bind(this);
  }

  onSelectChange(val, event) {
    const { onChange, name } = this.props;
function AnsibleSelect({
  id,
  data,
  i18n,
  isValid,
  onBlur,
  value,
  className,
  isDisabled,
  onChange,
  name,
}) {
  const onSelectChange = (val, event) => {
    event.target.name = name;
    onChange(event, val);
  }
  };

  render() {
    const {
      id,
      data,
      i18n,
      isValid,
      onBlur,
      value,
      className,
      isDisabled,
    } = this.props;

    return (
      <FormSelect
        id={id}
        value={value}
        onChange={this.onSelectChange}
        onBlur={onBlur}
        aria-label={i18n._(t`Select Input`)}
        validated={isValid ? 'default' : 'error'}
        className={className}
        isDisabled={isDisabled}
      >
        {data.map(option => (
          <FormSelectOption
            key={option.key}
            value={option.value}
            label={option.label}
            isDisabled={option.isDisabled}
          />
        ))}
      </FormSelect>
    );
  }
  return (
    <FormSelect
      id={id}
      value={value}
      onChange={onSelectChange}
      onBlur={onBlur}
      aria-label={i18n._(t`Select Input`)}
      validated={isValid ? 'default' : 'error'}
      className={className}
      isDisabled={isDisabled}
    >
      {data.map(option => (
        <FormSelectOption
          key={option.key}
          value={option.value}
          label={option.label}
          isDisabled={option.isDisabled}
        />
      ))}
    </FormSelect>
  );
}

const Option = shape({

@ -1,21 +1,22 @@
import React from 'react';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import AnsibleSelect, { _AnsibleSelect } from './AnsibleSelect';
import AnsibleSelect from './AnsibleSelect';

const mockData = [
  {
    key: 'baz',
    label: 'Baz',
    value: '/venv/baz/',
    value: '/var/lib/awx/venv/baz/',
  },
  {
    key: 'default',
    label: 'Default',
    value: '/venv/ansible/',
    value: '/var/lib/awx/venv/ansible/',
  },
];

describe('<AnsibleSelect />', () => {
  const onChange = jest.fn();
  test('initially renders successfully', async () => {
    mountWithContexts(
      <AnsibleSelect
@ -29,19 +30,18 @@ describe('<AnsibleSelect />', () => {
  });

  test('calls "onSelectChange" on dropdown select change', () => {
    const spy = jest.spyOn(_AnsibleSelect.prototype, 'onSelectChange');
    const wrapper = mountWithContexts(
      <AnsibleSelect
        id="bar"
        value="foo"
        name="bar"
        onChange={() => {}}
        onChange={onChange}
        data={mockData}
      />
    );
    expect(spy).not.toHaveBeenCalled();
    expect(onChange).not.toHaveBeenCalled();
    wrapper.find('select').simulate('change');
    expect(spy).toHaveBeenCalled();
    expect(onChange).toHaveBeenCalled();
  });

  test('Returns correct select options', () => {

@ -1,6 +1,7 @@
import React, { useEffect, useState, useCallback } from 'react';
import React, { useEffect, useState, useCallback, useRef } from 'react';
import { useHistory, useLocation, withRouter } from 'react-router-dom';
import {
  Button,
  Nav,
  NavList,
  Page,
@ -13,6 +14,8 @@ import styled from 'styled-components';

import { ConfigAPI, MeAPI, RootAPI } from '../../api';
import { ConfigProvider } from '../../contexts/Config';
import { SESSION_TIMEOUT_KEY } from '../../constants';
import { isAuthenticated } from '../../util/auth';
import About from '../About';
import AlertModal from '../AlertModal';
import ErrorDetail from '../ErrorDetail';
@ -20,6 +23,17 @@ import BrandLogo from './BrandLogo';
import NavExpandableGroup from './NavExpandableGroup';
import PageHeaderToolbar from './PageHeaderToolbar';

// The maximum supported timeout for setTimeout(), in milliseconds,
// is the highest number you can represent as a signed 32bit
// integer (approximately 25 days)
const MAX_TIMEOUT = 2 ** (32 - 1) - 1;
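// Worked example: 2 ** 31 - 1 === 2147483647 ms, i.e. roughly 24.8 days,
// so a session deadline further out than that must be clamped (see the
// Math.min() call in the effect below) before being passed to setTimeout().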

// The number of seconds the session timeout warning is displayed
// before the user is logged out. Increasing this number (up to
// the total session time, which is 1800s by default) will cause
// the session timeout warning to display sooner.
const SESSION_WARNING_DURATION = 10;
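// For example, with the default 1800s session and the value above, the
// warning timer in the effect below fires at 1800 - 10 = 1790 seconds,
// leaving the warning modal on screen for the session's final 10 seconds.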

const PageHeader = styled(PFPageHeader)`
  & .pf-c-page__header-brand-link {
    color: inherit;
@ -30,6 +44,45 @@ const PageHeader = styled(PFPageHeader)`
  }
`;

/**
 * The useStorage hook integrates with the browser's localStorage api.
 * It accepts a storage key as its only argument and returns a state
 * variable and setter function for that state variable.
 *
 * This utility behaves much like the standard useState hook with some
 * key differences:
 *   1. You don't pass it an initial value. Instead, the provided key
 *      is used to retrieve the initial value from local storage. If
 *      the key doesn't exist in local storage, null is returned.
 *   2. Behind the scenes, this hook registers an event listener with
 *      the Web Storage api to establish a two-way binding between the
 *      state variable and its corresponding local storage value. This
 *      means that updates to the state variable with the setter
 *      function will produce a corresponding update to the local
 *      storage value and vice-versa.
 *   3. When local storage is shared across browser tabs, the data
 *      binding is also shared across browser tabs. This means that
 *      updates to the state variable using the setter function on
 *      one tab will also update the state variable on any other tab
 *      using this hook with the same key and vice-versa.
 */
function useStorage(key) {
  const [storageVal, setStorageVal] = useState(
    window.localStorage.getItem(key)
  );
  window.addEventListener('storage', () => {
    const newVal = window.localStorage.getItem(key);
    if (newVal !== storageVal) {
      setStorageVal(newVal);
    }
  });
  const setValue = val => {
    window.localStorage.setItem(key, val);
    setStorageVal(val);
  };
  return [storageVal, setValue];
}
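
// A minimal usage sketch of useStorage (illustrative only; the key name
// below is made up, not one the app defines):
//
//   const [deadline, setDeadline] = useStorage('example-deadline');
//   setDeadline(String(Date.now() + 60000)); // mirrored into localStorage
//
// Any other tab calling useStorage('example-deadline') picks the write up
// through its 'storage' event listener and re-renders with the new value.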

function AppContainer({ i18n, navRouteConfig = [], children }) {
  const history = useHistory();
  const { pathname } = useLocation();
@ -38,14 +91,51 @@ function AppContainer({ i18n, navRouteConfig = [], children }) {
  const [isAboutModalOpen, setIsAboutModalOpen] = useState(false);
  const [isReady, setIsReady] = useState(false);

  const sessionTimeoutId = useRef();
  const sessionIntervalId = useRef();
  const [sessionTimeout, setSessionTimeout] = useStorage(SESSION_TIMEOUT_KEY);
  const [timeoutWarning, setTimeoutWarning] = useState(false);
  const [timeRemaining, setTimeRemaining] = useState(null);

  const handleAboutModalOpen = () => setIsAboutModalOpen(true);
  const handleAboutModalClose = () => setIsAboutModalOpen(false);
  const handleConfigErrorClose = () => setConfigError(null);
  const handleSessionTimeout = () => setTimeoutWarning(true);

  const handleLogout = useCallback(async () => {
    await RootAPI.logout();
    history.replace('/login');
  }, [history]);
    setSessionTimeout(null);
  }, [setSessionTimeout]);

  const handleSessionContinue = () => {
    MeAPI.read();
    setTimeoutWarning(false);
  };

  useEffect(() => {
    if (!isAuthenticated(document.cookie)) history.replace('/login');
    const calcRemaining = () =>
      parseInt(sessionTimeout, 10) - new Date().getTime();
    const updateRemaining = () => setTimeRemaining(calcRemaining());
    setTimeoutWarning(false);
    clearTimeout(sessionTimeoutId.current);
    clearInterval(sessionIntervalId.current);
    sessionTimeoutId.current = setTimeout(
      handleSessionTimeout,
      Math.min(calcRemaining() - SESSION_WARNING_DURATION * 1000, MAX_TIMEOUT)
    );
    sessionIntervalId.current = setInterval(updateRemaining, 1000);
    return () => {
      clearTimeout(sessionTimeoutId.current);
      clearInterval(sessionIntervalId.current);
    };
  }, [history, sessionTimeout]);

  useEffect(() => {
    if (timeRemaining !== null && timeRemaining <= 1) {
      handleLogout();
    }
  }, [handleLogout, timeRemaining]);

  useEffect(() => {
    const loadConfig = async () => {
@ -128,6 +218,31 @@ function AppContainer({ i18n, navRouteConfig = [], children }) {
        {i18n._(t`Failed to retrieve configuration.`)}
        <ErrorDetail error={configError} />
      </AlertModal>
      <AlertModal
        title={i18n._(t`Your session is about to expire`)}
        isOpen={timeoutWarning && sessionTimeout > 0 && timeRemaining !== null}
        onClose={handleLogout}
        showClose={false}
        variant="warning"
        actions={[
          <Button
            key="confirm"
            variant="primary"
            onClick={handleSessionContinue}
          >
            {i18n._(t`Continue`)}
          </Button>,
          <Button key="logout" variant="secondary" onClick={handleLogout}>
            {i18n._(t`Logout`)}
          </Button>,
        ]}
      >
        {i18n._(
          t`You will be logged out in ${Number(
            Math.max(Math.floor(timeRemaining / 1000), 0)
          )} seconds due to inactivity.`
        )}
      </AlertModal>
    </>
  );
}

@ -1,4 +1,4 @@
import React, { Component } from 'react';
import React, { useState } from 'react';
import PropTypes from 'prop-types';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
@ -17,129 +17,102 @@ import { QuestionCircleIcon, UserIcon } from '@patternfly/react-icons';
const DOCLINK =
  'https://docs.ansible.com/ansible-tower/latest/html/userguide/index.html';

class PageHeaderToolbar extends Component {
  constructor(props) {
    super(props);
    this.state = {
      isHelpOpen: false,
      isUserOpen: false,
    };
function PageHeaderToolbar({
  isAboutDisabled,
  onAboutClick,
  onLogoutClick,
  loggedInUser,
  i18n,
}) {
  const [isHelpOpen, setIsHelpOpen] = useState(false);
  const [isUserOpen, setIsUserOpen] = useState(false);

    this.handleHelpSelect = this.handleHelpSelect.bind(this);
    this.handleHelpToggle = this.handleHelpToggle.bind(this);
    this.handleUserSelect = this.handleUserSelect.bind(this);
    this.handleUserToggle = this.handleUserToggle.bind(this);
  }
  const handleHelpSelect = () => {
    setIsHelpOpen(!isHelpOpen);
  };

  handleHelpSelect() {
    const { isHelpOpen } = this.state;

    this.setState({ isHelpOpen: !isHelpOpen });
  }

  handleUserSelect() {
    const { isUserOpen } = this.state;

    this.setState({ isUserOpen: !isUserOpen });
  }

  handleHelpToggle(isOpen) {
    this.setState({ isHelpOpen: isOpen });
  }

  handleUserToggle(isOpen) {
    this.setState({ isUserOpen: isOpen });
  }

  render() {
    const { isHelpOpen, isUserOpen } = this.state;
    const {
      isAboutDisabled,
      onAboutClick,
      onLogoutClick,
      loggedInUser,
      i18n,
    } = this.props;

    return (
      <PageHeaderTools>
        <PageHeaderToolsGroup>
          <Tooltip position="left" content={<div>{i18n._(t`Info`)}</div>}>
            <PageHeaderToolsItem>
              <Dropdown
                isPlain
                isOpen={isHelpOpen}
                position={DropdownPosition.right}
                onSelect={this.handleHelpSelect}
                toggle={
                  <DropdownToggle
                    onToggle={this.handleHelpToggle}
                    aria-label={i18n._(t`Info`)}
                  >
                    <QuestionCircleIcon />
                  </DropdownToggle>
                }
                dropdownItems={[
                  <DropdownItem key="help" target="_blank" href={DOCLINK}>
                    {i18n._(t`Help`)}
                  </DropdownItem>,
                  <DropdownItem
                    key="about"
                    component="button"
                    isDisabled={isAboutDisabled}
                    onClick={onAboutClick}
                  >
                    {i18n._(t`About`)}
                  </DropdownItem>,
                ]}
              />
            </PageHeaderToolsItem>
          </Tooltip>
          <Tooltip position="left" content={<div>{i18n._(t`User`)}</div>}>
            <PageHeaderToolsItem>
              <Dropdown
                id="toolbar-user-dropdown"
                isPlain
                isOpen={isUserOpen}
                position={DropdownPosition.right}
                onSelect={this.handleUserSelect}
                toggle={
                  <DropdownToggle onToggle={this.handleUserToggle}>
                    <UserIcon />
                    {loggedInUser && (
                      <span style={{ marginLeft: '10px' }}>
                        {loggedInUser.username}
                      </span>
                    )}
                  </DropdownToggle>
                }
                dropdownItems={[
                  <DropdownItem
                    key="user"
                    href={
                      loggedInUser
                        ? `/users/${loggedInUser.id}/details`
                        : '/home'
                    }
                  >
                    {i18n._(t`User Details`)}
                  </DropdownItem>,
                  <DropdownItem
                    key="logout"
                    component="button"
                    onClick={onLogoutClick}
                    id="logout-button"
                  >
                    {i18n._(t`Logout`)}
                  </DropdownItem>,
                ]}
              />
            </PageHeaderToolsItem>
          </Tooltip>
        </PageHeaderToolsGroup>
      </PageHeaderTools>
    );
  }
  const handleUserSelect = () => {
    setIsUserOpen(!isUserOpen);
  };
  return (
    <PageHeaderTools>
      <PageHeaderToolsGroup>
        <Tooltip position="left" content={<div>{i18n._(t`Info`)}</div>}>
          <PageHeaderToolsItem>
            <Dropdown
              isPlain
              isOpen={isHelpOpen}
              position={DropdownPosition.right}
              onSelect={handleHelpSelect}
              toggle={
                <DropdownToggle
                  onToggle={setIsHelpOpen}
                  aria-label={i18n._(t`Info`)}
                >
                  <QuestionCircleIcon />
                </DropdownToggle>
              }
              dropdownItems={[
                <DropdownItem key="help" target="_blank" href={DOCLINK}>
                  {i18n._(t`Help`)}
                </DropdownItem>,
                <DropdownItem
                  key="about"
                  component="button"
                  isDisabled={isAboutDisabled}
                  onClick={onAboutClick}
                >
                  {i18n._(t`About`)}
                </DropdownItem>,
              ]}
            />
          </PageHeaderToolsItem>
        </Tooltip>
        <Tooltip position="left" content={<div>{i18n._(t`User`)}</div>}>
          <PageHeaderToolsItem>
            <Dropdown
              id="toolbar-user-dropdown"
              isPlain
              isOpen={isUserOpen}
              position={DropdownPosition.right}
              onSelect={handleUserSelect}
              toggle={
                <DropdownToggle onToggle={setIsUserOpen}>
                  <UserIcon />
                  {loggedInUser && (
                    <span style={{ marginLeft: '10px' }}>
                      {loggedInUser.username}
                    </span>
                  )}
                </DropdownToggle>
              }
              dropdownItems={[
                <DropdownItem
                  key="user"
                  aria-label={i18n._(t`User details`)}
                  href={
                    loggedInUser
                      ? `/#/users/${loggedInUser.id}/details`
                      : '/#/home'
                  }
                >
                  {i18n._(t`User Details`)}
                </DropdownItem>,
                <DropdownItem
                  key="logout"
                  component="button"
                  onClick={onLogoutClick}
                  id="logout-button"
                >
                  {i18n._(t`Logout`)}
                </DropdownItem>,
              ]}
            />
          </PageHeaderToolsItem>
        </Tooltip>
      </PageHeaderToolsGroup>
    </PageHeaderTools>
  );
}

PageHeaderToolbar.propTypes = {