Compare commits

...

47 Commits

Author SHA1 Message Date
Jake Jackson
6dc4a4508d fix cve 2024-24680 (#15250) 2024-06-04 15:44:09 -04:00
Hao Liu
cf09a4220d Repin cython due to https://github.com/yaml/pyyaml/pull/702 (#15248)
* Revert "Unpin cypthon (#15246)"

This reverts commit 659c3b64de.

* Pin grpcio

Avoid cython 3 due to https://github.com/yaml/pyyaml/pull/702

* Delete asyncpg.txt
2024-06-03 19:42:20 +00:00
Hao Liu
659c3b64de Unpin cython (#15246)
* Unpin cython

* Remove unused asyncpg

* Remove asyncpg license file
2024-06-03 11:41:56 -04:00
Ethem Cem Özkan
37ad690d09 Add AWS SNS notification support for webhook (#15184)
Support for AWS SNS notifications. SNS is a widely used service for integrating with other AWS services (e.g. Lambdas). This support unlocks use cases like triggering Lambda functions, especially when AWX is deployed on EKS.

Decisions:

Data Structure
- I preferred using the same structure as Webhook for message body data because it contains all job details. For now, I directly linked to Webhook to avoid duplication, but I am open to suggestions.

AWS authentication
- To support non-AWS native environments, I added configuration options for the AWS secret key, access key ID, and session token. When entered, these values are supplied to the underlying boto3 SNS client. If not entered, it falls back to the default authentication chain to support native AWS environments. Properly configured EKS pods are created with temporary credentials that the default authentication chain can pick up automatically (a hedged sketch follows this entry).

---------

Signed-off-by: Ethem Cem Ozkan <ethemcem.ozkan@gmail.com>
2024-06-02 02:48:56 +00:00
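A minimal sketch of the credential fallback described above, assuming hypothetical region, key, and topic ARN values; it mirrors the AWSSNSBackend diff further down in this compare view.

```
import boto3

def make_sns_client(region=None, access_key_id=None, secret_access_key=None, session_token=None):
    # Pass only the credentials that were explicitly configured; anything left
    # out is resolved by boto3's default credential chain (environment
    # variables, shared config, or the pod/instance role on EKS).
    kwargs = {"service_name": "sns"}
    if region:
        kwargs["region_name"] = region
    if access_key_id:
        kwargs["aws_access_key_id"] = access_key_id
    if secret_access_key:
        kwargs["aws_secret_access_key"] = secret_access_key
    if session_token:
        kwargs["aws_session_token"] = session_token
    return boto3.session.Session().client(**kwargs)

# client = make_sns_client(region="us-east-1")
# client.publish(TopicArn="arn:aws:sns:us-east-1:123456789012:awx-notifications",
#                Message='{"status": "successful"}')
```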
Akira Yokochi
7845ec7e01 Modify the link to terraform_state inventory plugin (#15241)
fix link to terraform_state inventory plugin
2024-06-01 22:36:30 -04:00
Chris Meyers
a15bcf1d55 Add requirements comment 2024-05-31 13:55:17 -04:00
Chris Meyers
7b3fb2c2a8 Add example grafana dashboard
* Per-service log view
2024-05-31 13:55:17 -04:00
Chris Meyers
6df47c8449 Rework which loggers we send to OTEL
* Send all propagate=False loggers to OTEL AND the awx logger
2024-05-31 13:55:17 -04:00
Chris Meyers
cae42653bf Add recording
* Always output awx logs to a file via otel
* That log file can later be replayed into a product that
  supports OTLP.
* Useful when you find a problem that needs a time-series DB to help
  find and solve.
* Useful if a community member or customer has a problem where a
  time-series DB would be helpful. You can take a "remote" user's log and
  replay it locally for analysis.
2024-05-31 13:55:17 -04:00
Chris Meyers
da46a29f40 Move requirements out of dev and into mainline
* Add new package license files
2024-05-31 13:55:17 -04:00
Chris Meyers
0eb465531c Centralized logging via otel 2024-05-31 13:55:17 -04:00
Hao Liu
d0fe0ed796 Add check_instance_ready management command (#15238)
- throw exception and return 1 if instance not ready
- return 0 if ready
2024-05-31 09:29:40 -04:00
Chris Meyers
ceafa14c9d Use settings fixture in tests
* Otherwise, settings value changes bleed over into other tests.
* Remove django.conf settings import so that we do not accidentally
  forget to use the settings fixture.
2024-05-30 14:10:35 -05:00
Chris Meyers
08e1454098 Make named url work with optional url prefix
* Handle named url sub-resources
* e.g. /api/v2/inventories/my_inventory++Default/hosts/
2024-05-29 12:39:25 -05:00
Harshith u
776b661fb3 use optional api prefix in collection if set as environment variable (#15205)
* use optional api prefix if set as environment variable

* Different default depending on collection type
2024-05-29 11:54:05 -04:00
Hao Liu
af6ccdbde5 Fix galaxy publishing (#15233)
- switch to galaxy search API for determining if the version we want to publish already exists
- switch from github action variable to env var for easier copy and paste testing
2024-05-28 15:27:34 -04:00
Matthew Jones
559ab3564b Include Kube credentials in the inventory source picker (#15223) 2024-05-28 14:05:24 -04:00
Alan Rominger
208ef0ce25 Update test so that DAB change can merge (#15222) 2024-05-28 11:53:01 -04:00
Alexander Pykavy
c3d9aa54d8 Mention in the docs that you can skip make docker-compose-build (#15149)
Signed-off-by: Alexander Pykavy <aleksandrpykavyj@gmail.com>
2024-05-22 19:33:13 +00:00
irozet12
66efe7198a Wrap long line to fit help window (#14597) (#15169)
Wrap long line to fit description window (#14597)

Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-05-22 19:31:03 +00:00
Beni ~HB9HNT
adf930ee42 awxkit: replace deprecated locale.format() with locale.format_string() to fix human output on Python 3.12 (#15170)
Replace deprecated locale.format with locale.format_string

This will be removed in Python 3.12 and will break human output unless fixed.
2024-05-22 19:27:31 +00:00
Hao Liu
892410477a Fix promote from release event (#15215) 2024-05-22 18:58:11 +00:00
Seth Foster
0d4f653794 Fix up ansible-test sanity checks due to ansible 2.17 release (#15208)
* Fix up ansible sanity checks

* Fix awx-collection test failure

* Add ignore for ansible-test 2.17 

---------

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-21 15:05:59 -04:00
Alan Rominger
8de8f6dce2 Update a few dev requirements (#15203)
* Update a few dev requirements

* Fix test failures due to upgrade

* Update patterns for mocker usage
2024-05-20 23:37:02 +00:00
Hao Liu
fc9064e27f Allow wsrelay to fail without FATAL (#15191)
We have not identified the root cause of the wsrelay failure, but attempting to make wsrelay restart itself resulted in postgres and redis connection leaks. We were not able to fully identify where the redis connection leak comes from, so we are reverting to failing; removing startsecs 30 will prevent wsrelay from going FATAL.
2024-05-20 23:34:12 +00:00
TVo
7de350dc3e Added docs for new RBAC changes (#15150)
* Added docs for new RBAC changes

* Added UI changes with screens and API endpoints with sample commands.

* Update docs/docsite/rst/userguide/rbac.rst

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>

* Incorporated review feedback from @vidyanambiar.

---------

Co-authored-by: Vidya Nambiar <43621546+vidyanambiar@users.noreply.github.com>
2024-05-17 20:10:16 -04:00
Michael Anstis
d4bdaad4d8 Fix success_url_allowed_hosts set instantiation (#15196)
Co-authored-by: Michael Anstis <manstis@redhat.com>
2024-05-16 12:08:50 -04:00
Bikouo Aubin
a9b2ffa3e9 Fix terraform backend credential issue (#15141)
fix issue introduced by PR15055
2024-05-15 15:19:18 -04:00
Sean Sullivan
1b8d409043 Add skip authorization option to collection application module (#15190) 2024-05-15 09:29:00 -04:00
dependabot[bot]
da2bccf5a8 Bump jinja2 from 3.1.3 to 3.1.4 in /docs/docsite (#15168)
Bumps [jinja2](https://github.com/pallets/jinja) from 3.1.3 to 3.1.4.
- [Release notes](https://github.com/pallets/jinja/releases)
- [Changelog](https://github.com/pallets/jinja/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/jinja/compare/3.1.3...3.1.4)

---
updated-dependencies:
- dependency-name: jinja2
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-14 14:26:26 -04:00
Hao Liu
a2f083bd8e Fix podman failure in development environment (#15188)
```
ERRO[0000] path "/var/lib/awx/.config" exists and it is not owned by the current user
```
This error started to happen with podman 5.

It seems that the config files are no longer needed; removing them fixes the problem.
2024-05-14 14:18:48 -04:00
Michael Anstis
4d641b6cf5 Support Django logout redirects (#15148)
* Allowed hosts for logout redirects can now be set via the LOGOUT_ALLOWED_HOSTS setting

Authored-by: Michael Anstis <manstis@redhat.com>
Co-authored-by: Hao Liu <44379968+TheRealHaoLiu@users.noreply.github.com>
2024-05-13 13:03:27 -04:00
Elijah DeLee
439c3f0c23 Skip 3 expensive calls for jobs saving in 'waiting' status on UnifiedJob (#15174)
Skip the update-parent logic for 'waiting' on UnifiedJob.

By not looking up "status_before" from the previous instance,
we save 2 to 3 expensive calls (the self lookup of the old state, the lookup
of the parent, and the update to the parent if allow_simultaneous == False or status == 'waiting').
2024-05-13 10:26:03 -04:00
jessicamack
946bbe3560 Clean up settings file (#15135)
remove unneeded settings
2024-05-10 11:25:15 -04:00
James
20f054d600 Expose websockets on api prefix v2 2024-05-01 10:44:51 -04:00
Alan Rominger
918d5b3565 Do some aesthetic adjustments to role presentation fields (#15153)
* Do some aesthetic adjustments to role presentation fields

* Correctly test managed setup

* Minor migration adjustments
2024-04-29 17:11:10 -04:00
Hao Liu
158314af50 Delete deprecated Cypress UI e2e_test.yml (#15155)
Delete e2e_test.yml

Remove because it's no longer being maintained
2024-04-29 12:58:10 -04:00
Seth Foster
4754819a09 awx modules wait on event processing finished (#15152)
This change makes "wait: true" for jobs and syncs
look at event_processing_finished instead of the
finished field.

Right now there is a race condition where
a module might try to delete an inventory, but the events
for an inventory sync have not yet finished. We have a
RelatedJobsPreventDeleteMixin that checks for this condition.

Bulk jobs don't have event_processing_finished, so we just
use the finished field in that case (a hedged polling sketch follows this entry).
2024-04-26 17:33:34 -04:00
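A hedged sketch of the wait behavior described above; get_job is a hypothetical callable returning the job's API representation as a dict.

```
import time

def wait_for_job(get_job, timeout=300, interval=5):
    # Prefer event_processing_finished over finished, so related events are
    # fully processed before dependent actions (e.g. deleting an inventory).
    # Bulk jobs only expose finished, so fall back to it for them.
    deadline = time.time() + timeout
    while time.time() < deadline:
        job = get_job()
        if job.get('event_processing_finished'):
            return job
        if 'event_processing_finished' not in job and job.get('finished'):
            return job
        time.sleep(interval)
    raise TimeoutError('job did not finish in time')
```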
Seth Foster
78fc23138a Pin openssl 3.0.7 (#15147)
followup to PR #15142

This commit pins openssl in the awx image,
not just the builder image.
2024-04-26 12:29:22 -04:00
Alan Rominger
014534bfa5 Upgrade DRF (#15144)
* Upgrade DRF

* Fix failures caused by DRF upgrade
2024-04-25 15:37:08 -04:00
Seth Foster
2502e7c7d8 Temporarily downgrade openssl (#15142)
openssl 3.2.0 has incompatibility issues with
the libpq version we are using, and causes
some C runtime errors:
"double free or corruption (out)"

see awx issue #15136

also this issue

github.com/conan-io/conan-center-index/pull/22615

once the libpq libraries on centos stream9 are
updated with the patch, we can unpin openssl

Signed-off-by: Seth Foster <fosterbseth@gmail.com>
2024-04-25 14:01:03 -04:00
Jeff Bradberry
fb237e3834 Stop pre-caching every resource in the system upon import
If we don't have something in the cache when we call
get_by_natural_key, do an actual filtered query for it and cache the
results.  We'll get more overall API calls this way, but they'll be
smaller and will happen while we are importing, not up front (a rough sketch follows this entry).
2024-04-24 17:04:40 -04:00
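A rough sketch of the lazy lookup strategy described above, with a hypothetical fetch callable standing in for the filtered API query.

```
class NaturalKeyCache:
    def __init__(self, fetch):
        self._fetch = fetch  # callable performing the filtered query
        self._cache = {}

    def get_by_natural_key(self, natural_key):
        # Query (and cache) only on a miss instead of pre-caching everything
        # up front; queries stay small and happen while importing.
        key = tuple(sorted(natural_key.items()))
        if key not in self._cache:
            self._cache[key] = self._fetch(**natural_key)
        return self._cache[key]

# cache = NaturalKeyCache(fetch=lambda **kw: api_get('/api/v2/organizations/', params=kw))
# cache.get_by_natural_key({'name': 'Default'})
```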
irozet12
e4646ae611 Add help message for expiration tokens (#15076) (#15077)
Co-authored-by: Ирина Розет <irozet@astralinux.ru>
2024-04-24 19:58:09 +00:00
Bruno Sanchez
7dc77546f4 Adding CSRF Validation for schemas (#15027)
* Adding CSRF Validation for schemas

* Changing retrieve of scheme to avoid importing new library

* check if CSRF_TRUSTED_ORIGINS exists before accessing it

---------

Signed-off-by: Bruno Sanchez <brsanche@redhat.com>
2024-04-24 15:47:03 -04:00
Michael Tipton
f5f85666c8 Add ability to set SameSite policy for userLoggedIn cookie (#15100)
* Add ability to set SameSite policy for userLoggedIn cookie

* reformat line for linter
2024-04-24 15:44:31 -04:00
Alan Rominger
47a061eb39 Fix and test data migration error from DAB RBAC (#15138)
* Fix and test data migration error from DAB RBAC

* Fix up migration test

* Fix custom method bug

* Fix another fat fingered bug
2024-04-24 15:14:03 -04:00
Alan Rominger
c760577855 Adjust test for stricter DAB user view permission enforcement (#15130) 2024-04-23 15:21:06 -04:00
127 changed files with 4724 additions and 898 deletions

View File

@@ -1,75 +0,0 @@
---
name: E2E Tests
env:
LC_ALL: "C.UTF-8" # prevent ERROR: Ansible could not initialize the preferred locale: unsupported locale setting
on:
pull_request_target:
types: [labeled]
jobs:
e2e-test:
if: contains(github.event.pull_request.labels.*.name, 'qe:e2e')
runs-on: ubuntu-latest
timeout-minutes: 40
permissions:
packages: write
contents: read
strategy:
matrix:
job: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
steps:
- uses: actions/checkout@v3
- uses: ./.github/actions/run_awx_devel
id: awx
with:
build-ui: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Pull awx_cypress_base image
run: |
docker pull quay.io/awx/awx_cypress_base:latest
- name: Checkout test project
uses: actions/checkout@v3
with:
repository: ${{ github.repository_owner }}/tower-qa
ssh-key: ${{ secrets.QA_REPO_KEY }}
path: tower-qa
ref: devel
- name: Build cypress
run: |
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
docker build -t awx-pf-tests .
- name: Run E2E tests
env:
CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
run: |
export COMMIT_INFO_BRANCH=$GITHUB_HEAD_REF
export COMMIT_INFO_AUTHOR=$GITHUB_ACTOR
export COMMIT_INFO_SHA=$GITHUB_SHA
export COMMIT_INFO_REMOTE=$GITHUB_REPOSITORY_OWNER
cd ${{ secrets.E2E_PROJECT }}/ui-tests/awx-pf-tests
AWX_IP=${{ steps.awx.outputs.ip }}
printenv > .env
echo "Executing tests:"
docker run \
--network '_sources_default' \
--ipc=host \
--env-file=.env \
-e CYPRESS_baseUrl="https://$AWX_IP:8043" \
-e CYPRESS_AWX_E2E_USERNAME=admin \
-e CYPRESS_AWX_E2E_PASSWORD='password' \
-e COMMAND="npm run cypress-concurrently-gha" \
-v /dev/shm:/dev/shm \
-v $PWD:/e2e \
-w /e2e \
awx-pf-tests run --project .
- uses: ./.github/actions/upload_awx_devel_logs
if: always()
with:
log-filename: e2e-${{ matrix.job }}.log

View File

@@ -29,7 +29,7 @@ jobs:
- name: Set GitHub Env vars if release event
if: ${{ github.event_name == 'release' }}
run: |
echo "TAG_NAME=${{ env.TAG_NAME }}" >> $GITHUB_ENV
echo "TAG_NAME=${{ github.event.release.tag_name }}" >> $GITHUB_ENV
- name: Checkout awx
uses: actions/checkout@v3
@@ -60,15 +60,18 @@ jobs:
COLLECTION_VERSION: ${{ env.TAG_NAME }}
COLLECTION_TEMPLATE_VERSION: true
run: |
sudo apt-get install jq
make build_collection
curl_with_redirects=$(curl --head -sLw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
curl_without_redirects=$(curl --head -sw '%{http_code}' https://galaxy.ansible.com/download/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz | tail -1)
if [[ "$curl_with_redirects" == "302" ]] || [[ "$curl_without_redirects" == "302" ]]; then
count=$(curl -s https://galaxy.ansible.com/api/v3/plugin/ansible/search/collection-versions/\?namespace\=${COLLECTION_NAMESPACE}\&name\=awx\&version\=${COLLECTION_VERSION} | jq .meta.count)
if [[ "$count" == "1" ]]; then
echo "Galaxy release already done";
else
elif [[ "$count" == "0" ]]; then
ansible-galaxy collection publish \
--token=${{ secrets.GALAXY_TOKEN }} \
awx_collection_build/${{ env.collection_namespace }}-awx-${{ env.TAG_NAME }}.tar.gz;
awx_collection_build/${COLLECTION_NAMESPACE}-awx-${COLLECTION_VERSION}.tar.gz;
else
echo "Unexpected count from galaxy search: $count";
exit 1;
fi
- name: Set official pypi info

View File

@@ -11,6 +11,8 @@ ignore: |
# django template files
awx/api/templates/instance_install_bundle/**
.readthedocs.yaml
tools/loki
tools/otel
extends: default

View File

@@ -47,6 +47,10 @@ VAULT ?= false
VAULT_TLS ?= false
# If set to true docker-compose will also start a tacacs+ instance
TACACS ?= false
# If set to true docker-compose will also start an OpenTelemetry Collector instance
OTEL ?= false
# If set to true docker-compose will also start a Loki instance
LOKI ?= false
# If set to true docker-compose will install editable dependencies
EDITABLE_DEPENDENCIES ?= false
@@ -535,6 +539,8 @@ docker-compose-sources: .git/hooks/pre-commit
-e enable_vault=$(VAULT) \
-e vault_tls=$(VAULT_TLS) \
-e enable_tacacs=$(TACACS) \
-e enable_otel=$(OTEL) \
-e enable_loki=$(LOKI) \
-e install_editable_dependencies=$(EDITABLE_DEPENDENCIES) \
$(EXTRA_SOURCES_ANSIBLE_OPTS)

View File

@@ -95,7 +95,9 @@ class LoggedLoginView(auth_views.LoginView):
ret = super(LoggedLoginView, self).post(request, *args, **kwargs)
if request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in from {}".format(self.request.user.username, request.META.get('REMOTE_ADDR', None))))
ret.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
ret.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
ret.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return ret
@@ -107,6 +109,9 @@ class LoggedLoginView(auth_views.LoginView):
class LoggedLogoutView(auth_views.LogoutView):
success_url_allowed_hosts = set(settings.LOGOUT_ALLOWED_HOSTS.split(",")) if settings.LOGOUT_ALLOWED_HOSTS else set()
def dispatch(self, request, *args, **kwargs):
original_user = getattr(request, 'user', None)
ret = super(LoggedLogoutView, self).dispatch(request, *args, **kwargs)
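For illustration only, the comma-separated setting above parses into a host set like this (host names are made up):

```
LOGOUT_ALLOWED_HOSTS = "console.example.com,sso.example.com"

# Same parsing as success_url_allowed_hosts in LoggedLogoutView above.
allowed_hosts = set(LOGOUT_ALLOWED_HOSTS.split(",")) if LOGOUT_ALLOWED_HOSTS else set()
print(allowed_hosts)  # {'console.example.com', 'sso.example.com'}
```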

View File

@@ -5381,7 +5381,7 @@ class NotificationSerializer(BaseSerializer):
)
def get_body(self, obj):
if obj.notification_type in ('webhook', 'pagerduty'):
if obj.notification_type in ('webhook', 'pagerduty', 'awssns'):
if isinstance(obj.body, dict):
if 'body' in obj.body:
return obj.body['body']
@@ -5403,9 +5403,9 @@ class NotificationSerializer(BaseSerializer):
def to_representation(self, obj):
ret = super(NotificationSerializer, self).to_representation(obj)
if obj.notification_type == 'webhook':
if obj.notification_type in ('webhook', 'awssns'):
ret.pop('subject')
if obj.notification_type not in ('email', 'webhook', 'pagerduty'):
if obj.notification_type not in ('email', 'webhook', 'pagerduty', 'awssns'):
ret.pop('body')
return ret

View File

@@ -61,6 +61,10 @@ class StringListBooleanField(ListField):
def to_representation(self, value):
try:
if isinstance(value, str):
# https://github.com/encode/django-rest-framework/commit/a180bde0fd965915718b070932418cabc831cee1
# DRF changed truthy and falsy lists to be capitalized
value = value.lower()
if isinstance(value, (list, tuple)):
return super(StringListBooleanField, self).to_representation(value)
elif value in BooleanField.TRUE_VALUES:
@@ -78,6 +82,8 @@ class StringListBooleanField(ListField):
def to_internal_value(self, data):
try:
if isinstance(data, str):
data = data.lower()
if isinstance(data, (list, tuple)):
return super(StringListBooleanField, self).to_internal_value(data)
elif data in BooleanField.TRUE_VALUES:

View File

@@ -130,9 +130,9 @@ def test_default_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.cache.get('AWX_SOME_SETTING') == 'DEFAULT'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -146,9 +146,9 @@ def test_setting_is_not_from_setting_file(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system', default='DEFAULT')
settings_to_cache = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache):
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=settings_to_cache)
assert settings.AWX_SOME_SETTING == 'DEFAULT'
assert settings.registry.get_setting_field('AWX_SOME_SETTING').defined_in_file is False
def test_empty_setting(settings, mocker):
@@ -156,10 +156,10 @@ def test_empty_setting(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([]), 'first.return_value': None})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
with pytest.raises(AttributeError):
settings.AWX_SOME_SETTING
assert settings.cache.get('AWX_SOME_SETTING') == SETTING_CACHE_NOTSET
def test_setting_from_db(settings, mocker):
@@ -168,9 +168,9 @@ def test_setting_from_db(settings, mocker):
setting_from_db = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
assert settings.AWX_SOME_SETTING == 'FROM_DB'
assert settings.cache.get('AWX_SOME_SETTING') == 'FROM_DB'
@pytest.mark.defined_in_file(AWX_SOME_SETTING='DEFAULT')
@@ -205,8 +205,8 @@ def test_db_setting_update(settings, mocker):
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
setting_list = mocker.Mock(**{'order_by.return_value.first.return_value': existing_setting})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list):
settings.AWX_SOME_SETTING = 'NEW-VALUE'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=setting_list)
settings.AWX_SOME_SETTING = 'NEW-VALUE'
assert existing_setting.value == 'NEW-VALUE'
existing_setting.save.assert_called_with(update_fields=['value'])
@@ -217,8 +217,8 @@ def test_db_setting_deletion(settings, mocker):
settings.registry.register('AWX_SOME_SETTING', field_class=fields.CharField, category=_('System'), category_slug='system')
existing_setting = mocker.Mock(key='AWX_SOME_SETTING', value='FROM_DB')
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting]):
del settings.AWX_SOME_SETTING
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=[existing_setting])
del settings.AWX_SOME_SETTING
assert existing_setting.delete.call_count == 1
@@ -283,10 +283,10 @@ def test_sensitive_cache_data_is_encrypted(settings, mocker):
# use its primary key as part of the encryption key
setting_from_db = mocker.Mock(pk=123, key='AWX_ENCRYPTED', value='SECRET!')
mocks = mocker.Mock(**{'order_by.return_value': mocker.Mock(**{'__iter__': lambda self: iter([setting_from_db]), 'first.return_value': setting_from_db})})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks):
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=mocks)
cache.set('AWX_ENCRYPTED', 'SECRET!')
assert cache.get('AWX_ENCRYPTED') == 'SECRET!'
assert native_cache.get('AWX_ENCRYPTED') == 'FRPERG!'
def test_readonly_sensitive_cache_data_is_encrypted(settings):

View File

@@ -2,6 +2,7 @@
import logging
# Django
from django.core.checks import Error
from django.utils.translation import gettext_lazy as _
# Django REST Framework
@@ -954,3 +955,27 @@ def logging_validate(serializer, attrs):
register_validate('logging', logging_validate)
def csrf_trusted_origins_validate(serializer, attrs):
if not serializer.instance or not hasattr(serializer.instance, 'CSRF_TRUSTED_ORIGINS'):
return attrs
if 'CSRF_TRUSTED_ORIGINS' not in attrs:
return attrs
errors = []
for origin in attrs['CSRF_TRUSTED_ORIGINS']:
if "://" not in origin:
errors.append(
Error(
"As of Django 4.0, the values in the CSRF_TRUSTED_ORIGINS "
"setting must start with a scheme (usually http:// or "
"https://) but found %s. See the release notes for details." % origin,
)
)
if errors:
error_messages = [error.msg for error in errors]
raise serializers.ValidationError(_('\n'.join(error_messages)))
return attrs
register_validate('system', csrf_trusted_origins_validate)
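A quick illustration of the scheme check performed by the validator above, using made-up origin values:

```
origins = ["https://awx.example.com", "awx.example.com"]

# Mirrors csrf_trusted_origins_validate: every entry must include a scheme.
invalid = [origin for origin in origins if "://" not in origin]
print(invalid)  # ['awx.example.com'] -> would raise a ValidationError
```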

View File

@@ -0,0 +1,12 @@
from django.core.management.base import BaseCommand, CommandError
from awx.main.models.ha import Instance
class Command(BaseCommand):
help = 'Check if the task manager instance is ready; throw an error if not ready. Can be used as a readiness probe for k8s.'
def handle(self, *args, **options):
if Instance.objects.me().node_state != Instance.States.READY:
raise CommandError('Instance is not ready') # so that return code is not 0
return

View File

@@ -101,8 +101,9 @@ class Command(BaseCommand):
migrating = bool(executor.migration_plan(executor.loader.graph.leaf_nodes()))
connection.close() # Because of async nature, main loop will use new connection, so close this
except Exception as exc:
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), retry in 10s...')
time.sleep(10)
time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service from entering a FATAL state
# sleeping before logging because logging relies on settings which require a database connection...
logger.warning(f'Error on startup of run_wsrelay (error: {exc}), slept for 10s...')
return
# In containerized deployments, migrations happen in the task container,
@@ -121,13 +122,14 @@ class Command(BaseCommand):
return
try:
my_hostname = Instance.objects.my_hostname()
my_hostname = Instance.objects.my_hostname() # This relies on settings.CLUSTER_HOST_ID which requires database connection
logger.info('Active instance with hostname {} is registered.'.format(my_hostname))
except RuntimeError as e:
# the CLUSTER_HOST_ID in the task, and web instance must match and
# ensure network connectivity between the task and web instance
logger.info('Unable to return currently active instance: {}, retry in 5s...'.format(e))
time.sleep(5)
time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service from entering a FATAL state
# sleeping before logging because logging relies on settings which require a database connection...
logger.warning(f"Unable to return currently active instance: {e}, slept for 10s before return.")
return
if options.get('status'):
@@ -166,12 +168,14 @@ class Command(BaseCommand):
WebsocketsMetricsServer().start()
while True:
try:
asyncio.run(WebSocketRelayManager().run())
except KeyboardInterrupt:
logger.info('Shutting down Websocket Relayer')
break
except Exception as e:
logger.exception('Error in Websocket Relayer, exception: {}. Restarting in 10 seconds'.format(e))
time.sleep(10)
try:
logger.info('Starting Websocket Relayer...')
websocket_relay_manager = WebSocketRelayManager()
asyncio.run(websocket_relay_manager.run())
except KeyboardInterrupt:
logger.info('Terminating Websocket Relayer')
except BaseException as e: # BaseException is used to catch all exceptions including asyncio.CancelledError
time.sleep(10) # Prevent supervisor from restarting the service too quickly and the service from entering a FATAL state
# sleeping before logging because logging relies on settings which require a database connection...
logger.warning(f"Encountered error while running Websocket Relayer: {e}, slept for 10s...")
return

View File

@@ -6,7 +6,7 @@ import logging
import threading
import time
import urllib.parse
from pathlib import Path
from pathlib import Path, PurePosixPath
from django.conf import settings
from django.contrib.auth import logout
@@ -138,14 +138,36 @@ class URLModificationMiddleware(MiddlewareMixin):
@classmethod
def _convert_named_url(cls, url_path):
url_units = url_path.split('/')
# If the identifier is an empty string, it is always invalid.
if len(url_units) < 6 or url_units[1] != 'api' or url_units[2] not in ['v2'] or not url_units[4]:
return url_path
resource = url_units[3]
default_prefix = PurePosixPath('/api/v2/')
optional_prefix = PurePosixPath(f'/api/{settings.OPTIONAL_API_URLPATTERN_PREFIX}/v2/')
url_path_original = url_path
url_path = PurePosixPath(url_path)
if set(optional_prefix.parts).issubset(set(url_path.parts)):
url_prefix = optional_prefix
elif set(default_prefix.parts).issubset(set(url_path.parts)):
url_prefix = default_prefix
else:
return url_path_original
# Remove prefix
url_path = PurePosixPath(*url_path.parts[len(url_prefix.parts) :])
try:
resource_path = PurePosixPath(url_path.parts[0])
name = url_path.parts[1]
url_suffix = PurePosixPath(*url_path.parts[2:]) # remove name and resource
except IndexError:
return url_path_original
resource = resource_path.parts[0]
if resource in settings.NAMED_URL_MAPPINGS:
url_units[4] = cls._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]], resource, url_units[4])
return '/'.join(url_units)
pk = PurePosixPath(cls._named_url_to_pk(settings.NAMED_URL_GRAPH[settings.NAMED_URL_MAPPINGS[resource]], resource, name))
else:
return url_path_original
parts = url_prefix.parts + resource_path.parts + pk.parts + url_suffix.parts
return PurePosixPath(*parts).as_posix() + '/'
def process_request(self, request):
old_path = request.path_info
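A simplified, hypothetical stand-in for the prefix handling above; it only shows how the optional prefix and the default /api/v2/ prefix are detected and split, while the real middleware goes on to resolve the named identifier to a primary key via NAMED_URL_GRAPH.

```
from pathlib import PurePosixPath

def split_named_url(path, optional_prefix="awx"):
    # Check the optional-prefix form first, then the default /api/v2/ form.
    for prefix in (PurePosixPath(f"/api/{optional_prefix}/v2/"), PurePosixPath("/api/v2/")):
        parts = PurePosixPath(path).parts
        if parts[: len(prefix.parts)] == prefix.parts and len(parts) >= len(prefix.parts) + 2:
            resource, name, *suffix = parts[len(prefix.parts):]
            return prefix.as_posix(), resource, name, suffix
    return None

print(split_named_url("/api/v2/inventories/my_inventory++Default/hosts/"))
# ('/api/v2', 'inventories', 'my_inventory++Default', ['hosts'])
```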

View File

@@ -4,7 +4,6 @@ from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0189_inbound_hop_nodes'),
]

View File

@@ -0,0 +1,51 @@
# Generated by Django 4.2.6 on 2024-05-08 07:29
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0192_custom_roles'),
]
operations = [
migrations.AlterField(
model_name='notification',
name='notification_type',
field=models.CharField(
choices=[
('awssns', 'AWS SNS'),
('email', 'Email'),
('grafana', 'Grafana'),
('irc', 'IRC'),
('mattermost', 'Mattermost'),
('pagerduty', 'Pagerduty'),
('rocketchat', 'Rocket.Chat'),
('slack', 'Slack'),
('twilio', 'Twilio'),
('webhook', 'Webhook'),
],
max_length=32,
),
),
migrations.AlterField(
model_name='notificationtemplate',
name='notification_type',
field=models.CharField(
choices=[
('awssns', 'AWS SNS'),
('email', 'Email'),
('grafana', 'Grafana'),
('irc', 'IRC'),
('mattermost', 'Mattermost'),
('pagerduty', 'Pagerduty'),
('rocketchat', 'Rocket.Chat'),
('slack', 'Slack'),
('twilio', 'Twilio'),
('webhook', 'Webhook'),
],
max_length=32,
),
),
]

View File

@@ -140,6 +140,17 @@ def get_permissions_for_role(role_field, children_map, apps):
return perm_list
def model_class(ct, apps):
"""
You can not use model methods in migrations, so this duplicates
what ContentType.model_class does, using current apps
"""
try:
return apps.get_model(ct.app_label, ct.model)
except LookupError:
return None
def migrate_to_new_rbac(apps, schema_editor):
"""
This method moves the assigned permissions from the old rbac.py models
@@ -197,7 +208,7 @@ def migrate_to_new_rbac(apps, schema_editor):
role_definition = managed_definitions[permissions]
else:
action = role.role_field.rsplit('_', 1)[0] # remove the _field ending of the name
role_definition_name = f'{role.content_type.model_class().__name__} {action.title()}'
role_definition_name = f'{model_class(role.content_type, apps).__name__} {action.title()}'
description = role_descriptions[role.role_field]
if type(description) == dict:
@@ -264,7 +275,12 @@ def setup_managed_role_definitions(apps, schema_editor):
"""
Idempotent method to create or sync the managed role definitions
"""
to_create = settings.ANSIBLE_BASE_ROLE_PRECREATE
to_create = {
'object_admin': '{cls.__name__} Admin',
'org_admin': 'Organization Admin',
'org_children': 'Organization {cls.__name__} Admin',
'special': '{cls.__name__} {action}',
}
ContentType = apps.get_model('contenttypes', 'ContentType')
Permission = apps.get_model('dab_rbac', 'DABPermission')

View File

@@ -1660,7 +1660,7 @@ class terraform(PluginFileInjector):
credential = inventory_update.get_cloud_credential()
private_data = {'credentials': {}}
gce_cred = credential.get_input('gce_credentials')
gce_cred = credential.get_input('gce_credentials', default=None)
if gce_cred:
private_data['credentials'][credential] = gce_cred
return private_data
@@ -1669,7 +1669,7 @@ class terraform(PluginFileInjector):
env = super(terraform, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
credential = inventory_update.get_cloud_credential()
cred_data = private_data_files['credentials']
if cred_data[credential]:
if credential in cred_data:
env['GOOGLE_BACKEND_CREDENTIALS'] = to_container_path(cred_data[credential], private_data_dir)
return env

View File

@@ -31,6 +31,7 @@ from awx.main.notifications.mattermost_backend import MattermostBackend
from awx.main.notifications.grafana_backend import GrafanaBackend
from awx.main.notifications.rocketchat_backend import RocketChatBackend
from awx.main.notifications.irc_backend import IrcBackend
from awx.main.notifications.awssns_backend import AWSSNSBackend
logger = logging.getLogger('awx.main.models.notifications')
@@ -40,6 +41,7 @@ __all__ = ['NotificationTemplate', 'Notification']
class NotificationTemplate(CommonModelNameNotUnique):
NOTIFICATION_TYPES = [
('awssns', _('AWS SNS'), AWSSNSBackend),
('email', _('Email'), CustomEmailBackend),
('slack', _('Slack'), SlackBackend),
('twilio', _('Twilio'), TwilioBackend),

View File

@@ -10,6 +10,9 @@ import re
# django-rest-framework
from rest_framework.serializers import ValidationError
# crum to impersonate users
from crum import impersonate
# Django
from django.db import models, transaction, connection
from django.db.models.signals import m2m_changed
@@ -553,17 +556,22 @@ def get_role_definition(role):
return
f = obj._meta.get_field(role.role_field)
action_name = f.name.rsplit("_", 1)[0]
rd_name = f'{type(obj).__name__} {action_name.title()} Compat'
model_print = type(obj).__name__
rd_name = f'{model_print} {action_name.title()} Compat'
perm_list = get_role_codenames(role)
defaults = {'content_type_id': role.content_type_id}
try:
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
except ValidationError:
# This is a tricky case - practically speaking, users should not be allowed to create team roles
# or roles that include the team member permission.
# If we need to create this for compatibility purposes then we will create it as a managed non-editable role
defaults['managed'] = True
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
defaults = {
'content_type_id': role.content_type_id,
'description': f'Has {action_name.title()} permission to {model_print} for backwards API compatibility',
}
with impersonate(None):
try:
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
except ValidationError:
# This is a tricky case - practically speaking, users should not be allowed to create team roles
# or roles that include the team member permission.
# If we need to create this for compatibility purposes then we will create it as a managed non-editable role
defaults['managed'] = True
rd, created = RoleDefinition.objects.get_or_create(name=rd_name, permissions=perm_list, defaults=defaults)
return rd

View File

@@ -823,7 +823,7 @@ class UnifiedJob(
update_fields.append(key)
if parent_instance:
if self.status in ('pending', 'waiting', 'running'):
if self.status in ('pending', 'running'):
if parent_instance.current_job != self:
parent_instance_set('current_job', self)
# Update parent with all the 'good' states of it's child
@@ -860,7 +860,7 @@ class UnifiedJob(
# If this job already exists in the database, retrieve a copy of
# the job in its prior state.
# If update_fields are given without status, then that indicates no change
if self.pk and ((not update_fields) or ('status' in update_fields)):
if self.status != 'waiting' and self.pk and ((not update_fields) or ('status' in update_fields)):
self_before = self.__class__.objects.get(pk=self.pk)
if self_before.status != self.status:
status_before = self_before.status
@@ -902,7 +902,8 @@ class UnifiedJob(
update_fields.append('elapsed')
# Ensure that the job template information is current.
if self.unified_job_template != self._get_parent_instance():
# unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
if self.status != 'waiting' and self.unified_job_template != self._get_parent_instance():
self.unified_job_template = self._get_parent_instance()
if 'unified_job_template' not in update_fields:
update_fields.append('unified_job_template')
@@ -915,8 +916,9 @@ class UnifiedJob(
# Okay; we're done. Perform the actual save.
result = super(UnifiedJob, self).save(*args, **kwargs)
# If status changed, update the parent instance.
if self.status != status_before:
# If status changed, update the parent instance
# unless status is 'waiting', because this happens in large batches at end of task manager runs and is blocking
if self.status != status_before and self.status != 'waiting':
# Update parent outside of the transaction for Job w/ allow_simultaneous=True
# This dodges lock contention at the expense of the foreign key not being
# completely correct.

View File

@@ -0,0 +1,70 @@
# Copyright (c) 2016 Ansible, Inc.
# All Rights Reserved.
import json
import logging
import boto3
from botocore.exceptions import ClientError
from awx.main.notifications.base import AWXBaseEmailBackend
from awx.main.notifications.custom_notification_base import CustomNotificationBase
logger = logging.getLogger('awx.main.notifications.awssns_backend')
WEBSOCKET_TIMEOUT = 30
class AWSSNSBackend(AWXBaseEmailBackend, CustomNotificationBase):
init_parameters = {
"aws_region": {"label": "AWS Region", "type": "string", "default": ""},
"aws_access_key_id": {"label": "Access Key ID", "type": "string", "default": ""},
"aws_secret_access_key": {"label": "Secret Access Key", "type": "password", "default": ""},
"aws_session_token": {"label": "Session Token", "type": "password", "default": ""},
"sns_topic_arn": {"label": "SNS Topic ARN", "type": "string", "default": ""},
}
recipient_parameter = "sns_topic_arn"
sender_parameter = None
DEFAULT_BODY = "{{ job_metadata }}"
default_messages = CustomNotificationBase.job_metadata_messages
def __init__(self, aws_region, aws_access_key_id, aws_secret_access_key, aws_session_token, fail_silently=False, **kwargs):
session = boto3.session.Session()
client_config = {"service_name": 'sns'}
if aws_region:
client_config["region_name"] = aws_region
if aws_secret_access_key:
client_config["aws_secret_access_key"] = aws_secret_access_key
if aws_access_key_id:
client_config["aws_access_key_id"] = aws_access_key_id
if aws_session_token:
client_config["aws_session_token"] = aws_session_token
self.client = session.client(**client_config)
super(AWSSNSBackend, self).__init__(fail_silently=fail_silently)
def _sns_publish(self, topic_arn, message):
self.client.publish(TopicArn=topic_arn, Message=message, MessageAttributes={})
def format_body(self, body):
if isinstance(body, str):
try:
body = json.loads(body)
except json.JSONDecodeError:
pass
if isinstance(body, dict):
body = json.dumps(body)
# convert dict body to json string
return body
def send_messages(self, messages):
sent_messages = 0
for message in messages:
sns_topic_arn = str(message.recipients()[0])
try:
self._sns_publish(topic_arn=sns_topic_arn, message=message.body)
sent_messages += 1
except ClientError as error:
if not self.fail_silently:
raise error
return sent_messages

View File

@@ -32,3 +32,15 @@ class CustomNotificationBase(object):
"denied": {"message": DEFAULT_APPROVAL_DENIED_MSG, "body": None},
},
}
job_metadata_messages = {
"started": {"body": "{{ job_metadata }}"},
"success": {"body": "{{ job_metadata }}"},
"error": {"body": "{{ job_metadata }}"},
"workflow_approval": {
"running": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" needs review. This node can be viewed at: {{ workflow_url }}"}'},
"approved": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was approved. {{ workflow_url }}"}'},
"timed_out": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" has timed out. {{ workflow_url }}"}'},
"denied": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was denied. {{ workflow_url }}"}'},
},
}

View File

@@ -27,17 +27,7 @@ class WebhookBackend(AWXBaseEmailBackend, CustomNotificationBase):
sender_parameter = None
DEFAULT_BODY = "{{ job_metadata }}"
default_messages = {
"started": {"body": DEFAULT_BODY},
"success": {"body": DEFAULT_BODY},
"error": {"body": DEFAULT_BODY},
"workflow_approval": {
"running": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" needs review. This node can be viewed at: {{ workflow_url }}"}'},
"approved": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was approved. {{ workflow_url }}"}'},
"timed_out": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" has timed out. {{ workflow_url }}"}'},
"denied": {"body": '{"body": "The approval node \\"{{ approval_node_name }}\\" was denied. {{ workflow_url }}"}'},
},
}
default_messages = CustomNotificationBase.job_metadata_messages
def __init__(self, http_method, headers, disable_ssl_verification=False, fail_silently=False, username=None, password=None, **kwargs):
self.http_method = http_method

View File

@@ -63,6 +63,10 @@ websocket_urlpatterns = [
re_path(r'api/websocket/$', consumers.EventConsumer.as_asgi()),
re_path(r'websocket/$', consumers.EventConsumer.as_asgi()),
]
if settings.OPTIONAL_API_URLPATTERN_PREFIX:
websocket_urlpatterns.append(re_path(r'api/{}/v2/websocket/$'.format(settings.OPTIONAL_API_URLPATTERN_PREFIX), consumers.EventConsumer.as_asgi()))
websocket_relay_urlpatterns = [
re_path(r'websocket/relay/$', consumers.RelayConsumer.as_asgi()),
]

View File

@@ -9,8 +9,8 @@ def test_user_role_view_access(rando, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": role_pk}
mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
post(url=reverse('api:user_roles_list', kwargs={'pk': rando.pk}), data=data, user=rando, expect=403)
mock_access.can_attach.assert_called_once_with(inventory.admin_role, rando, 'members', data, skip_sub_obj_read_check=False)
@@ -21,8 +21,8 @@ def test_team_role_view_access(rando, team, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": role_pk}
mock_access = mocker.MagicMock(can_attach=mocker.MagicMock(return_value=False))
with mocker.patch('awx.main.access.RoleAccess', return_value=mock_access):
post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
mocker.patch('awx.main.access.RoleAccess', return_value=mock_access)
post(url=reverse('api:team_roles_list', kwargs={'pk': team.pk}), data=data, user=rando, expect=403)
mock_access.can_attach.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)
@@ -33,8 +33,8 @@ def test_role_team_view_access(rando, team, inventory, mocker, post):
role_pk = inventory.admin_role.pk
data = {"id": team.pk}
mock_access = mocker.MagicMock(return_value=False, __name__='mocked')
with mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access):
post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
mocker.patch('awx.main.access.RoleAccess.can_attach', mock_access)
post(url=reverse('api:role_teams_list', kwargs={'pk': role_pk}), data=data, user=rando, expect=403)
mock_access.assert_called_once_with(inventory.admin_role, team, 'member_role.parents', data, skip_sub_obj_read_check=False)

View File

@@ -30,7 +30,7 @@ def test_idempotent_credential_type_setup():
@pytest.mark.django_db
def test_create_user_credential_via_credentials_list(post, get, alice, credentialtype_ssh):
def test_create_user_credential_via_credentials_list(post, get, alice, credentialtype_ssh, setup_managed_roles):
params = {
'credential_type': 1,
'inputs': {'username': 'someusername'},
@@ -81,7 +81,7 @@ def test_credential_validation_error_with_multiple_owner_fields(post, admin, ali
@pytest.mark.django_db
def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh):
def test_create_user_credential_via_user_credentials_list(post, get, alice, credentialtype_ssh, setup_managed_roles):
params = {
'credential_type': 1,
'inputs': {'username': 'someusername'},

View File

@@ -131,11 +131,11 @@ def test_job_ignore_unprompted_vars(runtime_data, job_template_prompts, post, ad
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ()
mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation')
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ()
# Check that job is serialized correctly
job_id = response.data['job']
@@ -167,12 +167,12 @@ def test_job_accept_prompted_vars(runtime_data, job_template_prompts, post, admi
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
assert JobTemplate.create_unified_job.called
called_with = data_to_internal(runtime_data)
JobTemplate.create_unified_job.assert_called_with(**called_with)
mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation')
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, admin_user, expect=201)
assert JobTemplate.create_unified_job.called
called_with = data_to_internal(runtime_data)
JobTemplate.create_unified_job.assert_called_with(**called_with)
job_id = response.data['job']
assert job_id == 968
@@ -187,11 +187,11 @@ def test_job_accept_empty_tags(job_template_prompts, post, admin_user, mocker):
mock_job = mocker.MagicMock(spec=Job, id=968)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'job_tags': '', 'skip_tags': ''}, admin_user, expect=201)
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({'job_tags': '', 'skip_tags': ''},)
mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation')
post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'job_tags': '', 'skip_tags': ''}, admin_user, expect=201)
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({'job_tags': '', 'skip_tags': ''},)
mock_job.signal_start.assert_called_once()
@@ -203,14 +203,14 @@ def test_slice_timeout_forks_need_int(job_template_prompts, post, admin_user, mo
mock_job = mocker.MagicMock(spec=Job, id=968)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
response = post(
reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'timeout': '', 'job_slice_count': '', 'forks': ''}, admin_user, expect=400
)
assert 'forks' in response.data and response.data['forks'][0] == 'A valid integer is required.'
assert 'job_slice_count' in response.data and response.data['job_slice_count'][0] == 'A valid integer is required.'
assert 'timeout' in response.data and response.data['timeout'][0] == 'A valid integer is required.'
mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation')
response = post(
reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), {'timeout': '', 'job_slice_count': '', 'forks': ''}, admin_user, expect=400
)
assert 'forks' in response.data and response.data['forks'][0] == 'A valid integer is required.'
assert 'job_slice_count' in response.data and response.data['job_slice_count'][0] == 'A valid integer is required.'
assert 'timeout' in response.data and response.data['timeout'][0] == 'A valid integer is required.'
@pytest.mark.django_db
@@ -244,12 +244,12 @@ def test_job_accept_prompted_vars_null(runtime_data, job_template_prompts_null,
mock_job = mocker.MagicMock(spec=Job, id=968, **runtime_data)
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation'):
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, rando, expect=201)
assert JobTemplate.create_unified_job.called
expected_call = data_to_internal(runtime_data)
assert JobTemplate.create_unified_job.call_args == (expected_call,)
mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation')
response = post(reverse('api:job_template_launch', kwargs={'pk': job_template.pk}), runtime_data, rando, expect=201)
assert JobTemplate.create_unified_job.called
expected_call = data_to_internal(runtime_data)
assert JobTemplate.create_unified_job.call_args == (expected_call,)
job_id = response.data['job']
assert job_id == 968
@@ -641,18 +641,18 @@ def test_job_launch_unprompted_vars_with_survey(mocker, survey_spec_factory, job
job_template.survey_spec = survey_spec_factory('survey_var')
job_template.save()
with mocker.patch('awx.main.access.BaseAccess.check_license'):
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
with mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
response = post(
reverse('api:job_template_launch', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}),
admin_user,
expect=201,
)
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({'extra_vars': {'survey_var': 4}},)
mocker.patch('awx.main.access.BaseAccess.check_license')
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
mocker.patch.object(JobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
response = post(
reverse('api:job_template_launch', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}),
admin_user,
expect=201,
)
assert JobTemplate.create_unified_job.called
assert JobTemplate.create_unified_job.call_args == ({'extra_vars': {'survey_var': 4}},)
job_id = response.data['job']
assert job_id == 968
@@ -670,22 +670,22 @@ def test_callback_accept_prompted_extra_var(mocker, survey_spec_factory, job_tem
job_template.survey_spec = survey_spec_factory('survey_var')
job_template.save()
with mocker.patch('awx.main.access.BaseAccess.check_license'):
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
post(
reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user,
expect=201,
format='json',
)
assert UnifiedJobTemplate.create_unified_job.called
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {'extra_vars': {'survey_var': 4, 'job_launch_var': 3}, 'limit': 'single-host'}
mocker.patch('awx.main.access.BaseAccess.check_license')
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host])
post(
reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user,
expect=201,
format='json',
)
assert UnifiedJobTemplate.create_unified_job.called
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {'extra_vars': {'survey_var': 4, 'job_launch_var': 3}, 'limit': 'single-host'}
mock_job.signal_start.assert_called_once()
@@ -697,22 +697,22 @@ def test_callback_ignore_unprompted_extra_var(mocker, survey_spec_factory, job_t
job_template.host_config_key = "foo"
job_template.save()
with mocker.patch('awx.main.access.BaseAccess.check_license'):
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
with mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job):
with mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={}):
with mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host]):
post(
reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user,
expect=201,
format='json',
)
assert UnifiedJobTemplate.create_unified_job.called
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {'limit': 'single-host'}
mocker.patch('awx.main.access.BaseAccess.check_license')
mock_job = mocker.MagicMock(spec=Job, id=968, extra_vars={"job_launch_var": 3, "survey_var": 4})
mocker.patch.object(UnifiedJobTemplate, 'create_unified_job', return_value=mock_job)
mocker.patch('awx.api.serializers.JobSerializer.to_representation', return_value={})
mocker.patch('awx.api.views.JobTemplateCallback.find_matching_hosts', return_value=[host])
post(
reverse('api:job_template_callback', kwargs={'pk': job_template.pk}),
dict(extra_vars={"job_launch_var": 3, "survey_var": 4}, host_config_key="foo"),
admin_user,
expect=201,
format='json',
)
assert UnifiedJobTemplate.create_unified_job.called
call_args = UnifiedJobTemplate.create_unified_job.call_args[1]
call_args.pop('_eager_fields', None) # internal purposes
assert call_args == {'limit': 'single-host'}
mock_job.signal_start.assert_called_once()
@@ -725,9 +725,9 @@ def test_callback_find_matching_hosts(mocker, get, job_template_prompts, admin_u
job_template.save()
host_with_alias = Host(name='localhost', inventory=job_template.inventory)
host_with_alias.save()
with mocker.patch('awx.main.access.BaseAccess.check_license'):
r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
assert tuple(r.data['matching_hosts']) == ('localhost',)
mocker.patch('awx.main.access.BaseAccess.check_license')
r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
assert tuple(r.data['matching_hosts']) == ('localhost',)
@pytest.mark.django_db
@@ -738,6 +738,6 @@ def test_callback_extra_var_takes_priority_over_host_name(mocker, get, job_templ
job_template.save()
host_with_alias = Host(name='localhost', variables={'ansible_host': 'foobar'}, inventory=job_template.inventory)
host_with_alias.save()
with mocker.patch('awx.main.access.BaseAccess.check_license'):
r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
assert not r.data['matching_hosts']
mocker.patch('awx.main.access.BaseAccess.check_license')
r = get(reverse('api:job_template_callback', kwargs={'pk': job_template.pk}), user=admin_user, expect=200)
assert not r.data['matching_hosts']
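
These hunks, and many of the test changes below, drop the with-statement around mocker.patch calls: the mocker fixture from pytest-mock starts the patch immediately, returns the mock object, and reverts everything at test teardown, so the context manager adds nothing. A minimal sketch of the resulting style, assuming pytest-mock is installed; the patch target is purely illustrative:

import os

def test_patched_getcwd(mocker):
    # The patch stays active for the rest of the test; pytest-mock undoes it at teardown.
    fake_getcwd = mocker.patch('os.getcwd', return_value='/tmp/example')
    assert os.getcwd() == '/tmp/example'
    fake_getcwd.assert_called_once()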

View File

@@ -165,8 +165,8 @@ class TestAccessListCapabilities:
def test_access_list_direct_access_capability(self, inventory, rando, get, mocker, mock_access_method):
inventory.admin_role.members.add(rando)
with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), rando)
mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), rando)
mock_access_method.assert_called_once_with(inventory.admin_role, rando, 'members', **self.extra_kwargs)
self._assert_one_in_list(response.data)
@@ -174,8 +174,8 @@ class TestAccessListCapabilities:
assert direct_access_list[0]['role']['user_capabilities']['unattach'] == 'foobar'
def test_access_list_indirect_access_capability(self, inventory, organization, org_admin, get, mocker, mock_access_method):
with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), org_admin)
mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), org_admin)
mock_access_method.assert_called_once_with(organization.admin_role, org_admin, 'members', **self.extra_kwargs)
self._assert_one_in_list(response.data, sublist='indirect_access')
@@ -185,8 +185,8 @@ class TestAccessListCapabilities:
def test_access_list_team_direct_access_capability(self, inventory, team, team_member, get, mocker, mock_access_method):
team.member_role.children.add(inventory.admin_role)
with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), team_member)
mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
response = get(reverse('api:inventory_access_list', kwargs={'pk': inventory.id}), team_member)
mock_access_method.assert_called_once_with(inventory.admin_role, team.member_role, 'parents', **self.extra_kwargs)
self._assert_one_in_list(response.data)
@@ -198,8 +198,8 @@ class TestAccessListCapabilities:
def test_team_roles_unattach(mocker, team, team_member, inventory, mock_access_method, get):
team.member_role.children.add(inventory.admin_role)
with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
response = get(reverse('api:team_roles_list', kwargs={'pk': team.id}), team_member)
mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
response = get(reverse('api:team_roles_list', kwargs={'pk': team.id}), team_member)
# Did we assess whether team_member can remove team's permission to the inventory?
mock_access_method.assert_called_once_with(inventory.admin_role, team.member_role, 'parents', skip_sub_obj_read_check=True, data={})
@@ -212,8 +212,8 @@ def test_user_roles_unattach(mocker, organization, alice, bob, mock_access_metho
organization.member_role.members.add(alice)
organization.member_role.members.add(bob)
with mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method):
response = get(reverse('api:user_roles_list', kwargs={'pk': alice.id}), bob)
mocker.patch.object(access_registry[Role], 'can_unattach', mock_access_method)
response = get(reverse('api:user_roles_list', kwargs={'pk': alice.id}), bob)
# Did we assess whether bob can remove alice's permission to the inventory?
mock_access_method.assert_called_once_with(organization.member_role, alice, 'members', skip_sub_obj_read_check=True, data={})

View File

@@ -43,9 +43,9 @@ def run_command(name, *args, **options):
],
)
def test_update_password_command(mocker, username, password, expected, changed):
with mocker.patch.object(UpdatePassword, 'update_password', return_value=changed):
result, stdout, stderr = run_command('update_password', username=username, password=password)
if result is None:
assert stdout == expected
else:
assert str(result) == expected
mocker.patch.object(UpdatePassword, 'update_password', return_value=changed)
result, stdout, stderr = run_command('update_password', username=username, password=password)
if result is None:
assert stdout == expected
else:
assert str(result) == expected

View File

@@ -16,6 +16,8 @@ from django.db.backends.sqlite3.base import SQLiteCursorWrapper
from django.db.models.signals import post_migrate
from awx.main.migrations._dab_rbac import setup_managed_role_definitions
# AWX
from awx.main.models.projects import Project
from awx.main.models.ha import Instance
@@ -90,6 +92,12 @@ def deploy_jobtemplate(project, inventory, credential):
return jt
@pytest.fixture
def setup_managed_roles():
"Run the migration script to pre-create managed role definitions"
setup_managed_role_definitions(apps, None)
@pytest.fixture
def team(organization):
return organization.teams.create(name='test-team')
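
The new setup_managed_roles fixture replaces the per-file managed_roles fixtures removed below; a short sketch of a test opting in (the role name comes from the RBAC tests further down):

import pytest
from ansible_base.rbac.models import RoleDefinition

@pytest.mark.django_db
def test_managed_role_definitions_exist(setup_managed_roles):
    # The fixture runs setup_managed_role_definitions, so managed roles are queryable here.
    assert RoleDefinition.objects.filter(name='Inventory Admin').exists()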

View File

@@ -1,10 +0,0 @@
import pytest
from django.apps import apps
from awx.main.migrations._dab_rbac import setup_managed_role_definitions
@pytest.fixture
def managed_roles():
"Run the migration script to pre-create managed role definitions"
setup_managed_role_definitions(apps, None)

View File

@@ -1,45 +0,0 @@
import pytest
from django.apps import apps
from django.test.utils import override_settings
from awx.main.migrations._dab_rbac import setup_managed_role_definitions
from ansible_base.rbac.models import RoleDefinition
INVENTORY_OBJ_PERMISSIONS = ['view_inventory', 'adhoc_inventory', 'use_inventory', 'change_inventory', 'delete_inventory', 'update_inventory']
@pytest.mark.django_db
def test_managed_definitions_precreate():
with override_settings(
ANSIBLE_BASE_ROLE_PRECREATE={
'object_admin': '{cls._meta.model_name}-admin',
'org_admin': 'organization-admin',
'org_children': 'organization-{cls._meta.model_name}-admin',
'special': '{cls._meta.model_name}-{action}',
}
):
setup_managed_role_definitions(apps, None)
rd = RoleDefinition.objects.get(name='inventory-admin')
assert rd.managed is True
# add permissions do not go in the object-level admin
assert set(rd.permissions.values_list('codename', flat=True)) == set(INVENTORY_OBJ_PERMISSIONS)
# test org-level object admin permissions
rd = RoleDefinition.objects.get(name='organization-inventory-admin')
assert rd.managed is True
assert set(rd.permissions.values_list('codename', flat=True)) == set(['add_inventory', 'view_organization'] + INVENTORY_OBJ_PERMISSIONS)
@pytest.mark.django_db
def test_managed_definitions_custom_obj_admin_name():
with override_settings(
ANSIBLE_BASE_ROLE_PRECREATE={
'object_admin': 'foo-{cls._meta.model_name}-foo',
}
):
setup_managed_role_definitions(apps, None)
rd = RoleDefinition.objects.get(name='foo-inventory-foo')
assert rd.managed is True
# add permissions do not go in the object-level admin
assert set(rd.permissions.values_list('codename', flat=True)) == set(INVENTORY_OBJ_PERMISSIONS)

View File

@@ -10,7 +10,7 @@ from ansible_base.rbac.models import RoleDefinition
@pytest.mark.django_db
def test_managed_roles_created(managed_roles):
def test_managed_roles_created(setup_managed_roles):
"Managed RoleDefinitions are created in post_migration signal, we expect to see them here"
for cls in (JobTemplate, Inventory):
ct = ContentType.objects.get_for_model(cls)
@@ -22,7 +22,7 @@ def test_managed_roles_created(managed_roles):
@pytest.mark.django_db
def test_custom_read_role(admin_user, post, managed_roles):
def test_custom_read_role(admin_user, post, setup_managed_roles):
rd_url = django_reverse('roledefinition-list')
resp = post(
url=rd_url, data={"name": "read role made for test", "content_type": "awx.inventory", "permissions": ['view_inventory']}, user=admin_user, expect=201
@@ -40,10 +40,25 @@ def test_custom_system_roles_prohibited(admin_user, post):
@pytest.mark.django_db
def test_assign_managed_role(admin_user, alice, rando, inventory, post, managed_roles):
def test_assignment_to_invisible_user(admin_user, alice, rando, inventory, post, setup_managed_roles):
"Alice can not see rando, and so can not give them a role assignment"
rd = RoleDefinition.objects.get(name='Inventory Admin')
rd.give_permission(alice, inventory)
# Now that alice has full permissions to the inventory, she will give rando permission
url = django_reverse('roleuserassignment-list')
r = post(url=url, data={"user": rando.id, "role_definition": rd.id, "object_id": inventory.id}, user=alice, expect=400)
assert 'does not exist' in str(r.data)
assert not rando.has_obj_perm(inventory, 'change')
@pytest.mark.django_db
def test_assign_managed_role(admin_user, alice, rando, inventory, post, setup_managed_roles, organization):
rd = RoleDefinition.objects.get(name='Inventory Admin')
rd.give_permission(alice, inventory)
# When alice and rando are members of the same org, they can see each other
member_rd = RoleDefinition.objects.get(name='Organization Member')
for u in (alice, rando):
member_rd.give_permission(u, organization)
# Now that alice has full permissions to the inventory, and can see rando, she will give rando permission
url = django_reverse('roleuserassignment-list')
post(url=url, data={"user": rando.id, "role_definition": rd.id, "object_id": inventory.id}, user=alice, expect=201)
assert rando.has_obj_perm(inventory, 'change') is True
@@ -63,7 +78,7 @@ def test_assign_custom_delete_role(admin_user, rando, inventory, delete, patch):
@pytest.mark.django_db
def test_assign_custom_add_role(admin_user, rando, organization, post, managed_roles):
def test_assign_custom_add_role(admin_user, rando, organization, post, setup_managed_roles):
rd, _ = RoleDefinition.objects.get_or_create(
name='inventory-add', permissions=['add_inventory', 'view_organization'], content_type=ContentType.objects.get_for_model(Organization)
)

View File

@@ -2,11 +2,15 @@ from unittest import mock
import pytest
from django.contrib.contenttypes.models import ContentType
from crum import impersonate
from awx.main.models.rbac import get_role_from_object_role, give_creator_permissions
from awx.main.models import User, Organization, WorkflowJobTemplate, WorkflowJobTemplateNode, Team
from awx.api.versioning import reverse
from ansible_base.rbac.models import RoleUserAssignment
from ansible_base.rbac.models import RoleUserAssignment, RoleDefinition
@pytest.mark.django_db
@@ -14,7 +18,7 @@ from ansible_base.rbac.models import RoleUserAssignment
'role_name',
['execution_environment_admin_role', 'project_admin_role', 'admin_role', 'auditor_role', 'read_role', 'execute_role', 'notification_admin_role'],
)
def test_round_trip_roles(organization, rando, role_name, managed_roles):
def test_round_trip_roles(organization, rando, role_name, setup_managed_roles):
"""
Make an assignment with the old-style role,
get the equivalent new role
@@ -28,7 +32,39 @@ def test_round_trip_roles(organization, rando, role_name, managed_roles):
@pytest.mark.django_db
def test_organization_level_permissions(organization, inventory, managed_roles):
def test_role_naming(setup_managed_roles):
qs = RoleDefinition.objects.filter(content_type=ContentType.objects.get(model='jobtemplate'), name__endswith='dmin')
assert qs.count() == 1 # sanity
rd = qs.first()
assert rd.name == 'JobTemplate Admin'
assert rd.description
assert rd.created_by is None
@pytest.mark.django_db
def test_action_role_naming(setup_managed_roles):
qs = RoleDefinition.objects.filter(content_type=ContentType.objects.get(model='jobtemplate'), name__endswith='ecute')
assert qs.count() == 1 # sanity
rd = qs.first()
assert rd.name == 'JobTemplate Execute'
assert rd.description
assert rd.created_by is None
@pytest.mark.django_db
def test_compat_role_naming(setup_managed_roles, job_template, rando, alice):
with impersonate(alice):
job_template.read_role.members.add(rando)
qs = RoleDefinition.objects.filter(content_type=ContentType.objects.get(model='jobtemplate'), name__endswith='ompat')
assert qs.count() == 1 # sanity
rd = qs.first()
assert rd.name == 'JobTemplate Read Compat'
assert rd.description
assert rd.created_by is None
@pytest.mark.django_db
def test_organization_level_permissions(organization, inventory, setup_managed_roles):
u1 = User.objects.create(username='alice')
u2 = User.objects.create(username='bob')
@@ -58,14 +94,14 @@ def test_organization_level_permissions(organization, inventory, managed_roles):
@pytest.mark.django_db
def test_organization_execute_role(organization, rando, managed_roles):
def test_organization_execute_role(organization, rando, setup_managed_roles):
organization.execute_role.members.add(rando)
assert rando in organization.execute_role
assert set(Organization.accessible_objects(rando, 'execute_role')) == set([organization])
@pytest.mark.django_db
def test_workflow_approval_list(get, post, admin_user, managed_roles):
def test_workflow_approval_list(get, post, admin_user, setup_managed_roles):
workflow_job_template = WorkflowJobTemplate.objects.create()
approval_node = WorkflowJobTemplateNode.objects.create(workflow_job_template=workflow_job_template)
url = reverse('api:workflow_job_template_node_create_approval', kwargs={'pk': approval_node.pk, 'version': 'v2'})
@@ -79,14 +115,14 @@ def test_workflow_approval_list(get, post, admin_user, managed_roles):
@pytest.mark.django_db
def test_creator_permission(rando, admin_user, inventory, managed_roles):
def test_creator_permission(rando, admin_user, inventory, setup_managed_roles):
give_creator_permissions(rando, inventory)
assert rando in inventory.admin_role
assert rando in inventory.admin_role.members.all()
@pytest.mark.django_db
def test_team_team_read_role(rando, team, admin_user, post, managed_roles):
def test_team_team_read_role(rando, team, admin_user, post, setup_managed_roles):
orgs = [Organization.objects.create(name=f'foo-{i}') for i in range(2)]
teams = [Team.objects.create(name=f'foo-{i}', organization=orgs[i]) for i in range(2)]
teams[1].member_role.members.add(rando)

View File

@@ -21,13 +21,13 @@ class TestComputedFields:
def test_computed_fields_normal_use(self, mocker, inventory):
job = Job.objects.create(name='fake-job', inventory=inventory)
with immediate_on_commit():
with mocker.patch.object(update_inventory_computed_fields, 'delay'):
job.delete()
update_inventory_computed_fields.delay.assert_called_once_with(inventory.id)
mocker.patch.object(update_inventory_computed_fields, 'delay')
job.delete()
update_inventory_computed_fields.delay.assert_called_once_with(inventory.id)
def test_disable_computed_fields(self, mocker, inventory):
job = Job.objects.create(name='fake-job', inventory=inventory)
with disable_computed_fields():
with mocker.patch.object(update_inventory_computed_fields, 'delay'):
job.delete()
update_inventory_computed_fields.delay.assert_not_called()
mocker.patch.object(update_inventory_computed_fields, 'delay')
job.delete()
update_inventory_computed_fields.delay.assert_not_called()

View File

@@ -21,13 +21,13 @@ def test_multi_group_basic_job_launch(instance_factory, controlplane_instance_gr
j2 = create_job(objects2.job_template)
with mock.patch('awx.main.models.Job.task_impact', new_callable=mock.PropertyMock) as mock_task_impact:
mock_task_impact.return_value = 500
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j2, ig2, i2)])
mocker.patch("awx.main.scheduler.TaskManager.start_task")
TaskManager().schedule()
TaskManager.start_task.assert_has_calls([mock.call(j1, ig1, i1), mock.call(j2, ig2, i2)])
@pytest.mark.django_db
def test_multi_group_with_shared_dependency(instance_factory, controlplane_instance_group, mocker, instance_group_factory, job_template_factory):
def test_multi_group_with_shared_dependency(instance_factory, controlplane_instance_group, instance_group_factory, job_template_factory):
i1 = instance_factory("i1")
i2 = instance_factory("i2")
ig1 = instance_group_factory("ig1", instances=[i1])
@@ -50,7 +50,7 @@ def test_multi_group_with_shared_dependency(instance_factory, controlplane_insta
objects2 = job_template_factory('jt2', organization=objects1.organization, project=p, inventory='inv2', credential='cred2')
objects2.job_template.instance_groups.add(ig2)
j2 = create_job(objects2.job_template, dependencies_processed=False)
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
with mock.patch("awx.main.scheduler.TaskManager.start_task"):
DependencyManager().schedule()
TaskManager().schedule()
pu = p.project_updates.first()
@@ -73,10 +73,10 @@ def test_workflow_job_no_instancegroup(workflow_job_template_factory, controlpla
wfj = wfjt.create_unified_job()
wfj.status = "pending"
wfj.save()
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(wfj, None, None)
assert wfj.instance_group is None
mocker.patch("awx.main.scheduler.TaskManager.start_task")
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(wfj, None, None)
assert wfj.instance_group is None
@pytest.mark.django_db

View File

@@ -16,9 +16,9 @@ def test_single_job_scheduler_launch(hybrid_instance, controlplane_instance_grou
instance = controlplane_instance_group.instances.all()[0]
objects = job_template_factory('jt', organization='org1', project='proj', inventory='inv', credential='cred')
j = create_job(objects.job_template)
with mocker.patch("awx.main.scheduler.TaskManager.start_task"):
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
mocker.patch("awx.main.scheduler.TaskManager.start_task")
TaskManager().schedule()
TaskManager.start_task.assert_called_once_with(j, controlplane_instance_group, instance)
@pytest.mark.django_db

View File

@@ -1,6 +1,7 @@
import pytest
from django_test_migrations.plan import all_migrations, nodes_to_tuples
from django.utils.timezone import now
"""
Most tests that live in here can probably be deleted at some point. They are mainly
@@ -68,3 +69,19 @@ class TestMigrationSmoke:
bar_peers = bar.peers.all()
assert len(bar_peers) == 1
assert fooaddr in bar_peers
def test_migrate_DAB_RBAC(self, migrator):
old_state = migrator.apply_initial_migration(('main', '0190_alter_inventorysource_source_and_more'))
Organization = old_state.apps.get_model('main', 'Organization')
User = old_state.apps.get_model('auth', 'User')
org = Organization.objects.create(name='arbitrary-org', created=now(), modified=now())
user = User.objects.create(username='random-user')
org.read_role.members.add(user)
new_state = migrator.apply_tested_migration(
('main', '0192_custom_roles'),
)
RoleUserAssignment = new_state.apps.get_model('dab_rbac', 'RoleUserAssignment')
assert RoleUserAssignment.objects.filter(user=user.id, object_id=org.id).exists()

View File

@@ -1,8 +1,6 @@
# -*- coding: utf-8 -*-
import pytest
from django.conf import settings
from awx.api.versioning import reverse
from awx.main.middleware import URLModificationMiddleware
from awx.main.models import ( # noqa
@@ -121,7 +119,7 @@ def test_notification_template(get, admin_user):
@pytest.mark.django_db
def test_instance(get, admin_user):
def test_instance(get, admin_user, settings):
test_instance = Instance.objects.create(uuid=settings.SYSTEM_UUID, hostname="localhost", capacity=100)
url = reverse('api:instance_detail', kwargs={'pk': test_instance.pk})
response = get(url, user=admin_user, expect=200)
@@ -205,3 +203,65 @@ def test_403_vs_404(get):
get(f'/api/v2/users/{cindy.pk}/', expect=401)
get('/api/v2/users/cindy/', expect=404)
@pytest.mark.django_db
class TestConvertNamedUrl:
@pytest.mark.parametrize(
"url",
(
"/api/",
"/api/v2/",
"/api/v2/hosts/",
"/api/v2/hosts/1/",
"/api/v2/organizations/1/inventories/",
"/api/foo/",
"/api/foo/v2/",
"/api/foo/v2/organizations/",
"/api/foo/v2/organizations/1/",
"/api/foo/v2/organizations/1/inventories/",
"/api/foobar/",
"/api/foobar/v2/",
"/api/foobar/v2/organizations/",
"/api/foobar/v2/organizations/1/",
"/api/foobar/v2/organizations/1/inventories/",
"/api/foobar/v2/organizations/1/inventories/",
),
)
def test_noop(self, url, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = ''
assert URLModificationMiddleware._convert_named_url(url) == url
settings.OPTIONAL_API_URLPATTERN_PREFIX = 'foo'
assert URLModificationMiddleware._convert_named_url(url) == url
def test_named_org(self):
test_org = Organization.objects.create(name='test_org')
assert URLModificationMiddleware._convert_named_url('/api/v2/organizations/test_org/') == f'/api/v2/organizations/{test_org.pk}/'
def test_named_org_optional_api_urlpattern_prefix_interaction(self, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = 'bar'
test_org = Organization.objects.create(name='test_org')
assert URLModificationMiddleware._convert_named_url('/api/bar/v2/organizations/test_org/') == f'/api/bar/v2/organizations/{test_org.pk}/'
@pytest.mark.parametrize("prefix", ['', 'bar'])
def test_named_org_not_found(self, prefix, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = prefix
if prefix:
prefix += '/'
assert URLModificationMiddleware._convert_named_url(f'/api/{prefix}v2/organizations/does-not-exist/') == f'/api/{prefix}v2/organizations/0/'
@pytest.mark.parametrize("prefix", ['', 'bar'])
def test_named_sub_resource(self, prefix, settings):
settings.OPTIONAL_API_URLPATTERN_PREFIX = prefix
test_org = Organization.objects.create(name='test_org')
if prefix:
prefix += '/'
assert (
URLModificationMiddleware._convert_named_url(f'/api/{prefix}v2/organizations/test_org/inventories/')
== f'/api/{prefix}v2/organizations/{test_org.pk}/inventories/'
)
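
Restating what these cases assert, with the middleware helper called directly (organization name and pk are illustrative):

from awx.main.middleware import URLModificationMiddleware

# An Organization named 'test_org' whose pk is 1:
URLModificationMiddleware._convert_named_url('/api/v2/organizations/test_org/inventories/')
# -> '/api/v2/organizations/1/inventories/'

# Unknown names fall back to pk 0:
URLModificationMiddleware._convert_named_url('/api/v2/organizations/does-not-exist/')
# -> '/api/v2/organizations/0/'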

View File

@@ -187,7 +187,7 @@ def test_remove_role_from_user(role, post, admin):
@pytest.mark.django_db
@override_settings(ANSIBLE_BASE_ALLOW_TEAM_ORG_ADMIN=True)
@override_settings(ANSIBLE_BASE_ALLOW_TEAM_ORG_ADMIN=True, ANSIBLE_BASE_ALLOW_TEAM_ORG_MEMBER=True)
def test_get_teams_roles_list(get, team, organization, admin):
team.member_role.children.add(organization.admin_role)
url = reverse('api:team_roles_list', kwargs={'pk': team.id})

View File

@@ -165,7 +165,7 @@ class TestOrphanJobTemplate:
@pytest.mark.django_db
@pytest.mark.job_permissions
def test_job_template_creator_access(project, organization, rando, post):
def test_job_template_creator_access(project, organization, rando, post, setup_managed_roles):
project.use_role.members.add(rando)
response = post(
url=reverse('api:job_template_list'),

View File

@@ -76,15 +76,15 @@ class TestJobTemplateSerializerGetRelated:
class TestJobTemplateSerializerGetSummaryFields:
def test_survey_spec_exists(self, test_get_summary_fields, mocker, job_template):
job_template.survey_spec = {'name': 'blah', 'description': 'blah blah'}
with mocker.patch.object(JobTemplateSerializer, '_recent_jobs') as mock_rj:
mock_rj.return_value = []
test_get_summary_fields(JobTemplateSerializer, job_template, 'survey')
mock_rj = mocker.patch.object(JobTemplateSerializer, '_recent_jobs')
mock_rj.return_value = []
test_get_summary_fields(JobTemplateSerializer, job_template, 'survey')
def test_survey_spec_absent(self, get_summary_fields_mock_and_run, mocker, job_template):
job_template.survey_spec = None
with mocker.patch.object(JobTemplateSerializer, '_recent_jobs') as mock_rj:
mock_rj.return_value = []
summary = get_summary_fields_mock_and_run(JobTemplateSerializer, job_template)
mock_rj = mocker.patch.object(JobTemplateSerializer, '_recent_jobs')
mock_rj.return_value = []
summary = get_summary_fields_mock_and_run(JobTemplateSerializer, job_template)
assert 'survey' not in summary
def test_copy_edit_standard(self, mocker, job_template_factory):
@@ -107,10 +107,10 @@ class TestJobTemplateSerializerGetSummaryFields:
view.kwargs = {}
serializer.context['view'] = view
with mocker.patch("awx.api.serializers.role_summary_fields_generator", return_value='Can eat pie'):
with mocker.patch("awx.main.access.JobTemplateAccess.can_change", return_value='foobar'):
with mocker.patch("awx.main.access.JobTemplateAccess.can_copy", return_value='foo'):
response = serializer.get_summary_fields(jt_obj)
mocker.patch("awx.api.serializers.role_summary_fields_generator", return_value='Can eat pie')
mocker.patch("awx.main.access.JobTemplateAccess.can_change", return_value='foobar')
mocker.patch("awx.main.access.JobTemplateAccess.can_copy", return_value='foo')
response = serializer.get_summary_fields(jt_obj)
assert response['user_capabilities']['copy'] == 'foo'
assert response['user_capabilities']['edit'] == 'foobar'

View File

@@ -189,8 +189,8 @@ class TestWorkflowJobTemplateNodeSerializerSurveyPasswords:
serializer = WorkflowJobTemplateNodeSerializer()
wfjt = WorkflowJobTemplate.objects.create(name='fake-wfjt')
serializer.instance = WorkflowJobTemplateNode(workflow_job_template=wfjt, unified_job_template=jt, extra_data={'var1': '$encrypted$foooooo'})
with mocker.patch('awx.main.models.mixins.decrypt_value', return_value='foo'):
attrs = serializer.validate({'unified_job_template': jt, 'workflow_job_template': wfjt, 'extra_data': {'var1': '$encrypted$'}})
mocker.patch('awx.main.models.mixins.decrypt_value', return_value='foo')
attrs = serializer.validate({'unified_job_template': jt, 'workflow_job_template': wfjt, 'extra_data': {'var1': '$encrypted$'}})
assert 'survey_passwords' in attrs
assert 'var1' in attrs['survey_passwords']
assert attrs['extra_data']['var1'] == '$encrypted$foooooo'

View File

@@ -191,16 +191,16 @@ class TestResourceAccessList:
def test_parent_access_check_failed(self, mocker, mock_organization):
mock_access = mocker.MagicMock(__name__='for logger', return_value=False)
with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access):
with pytest.raises(PermissionDenied):
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
mocker.patch('awx.main.access.BaseAccess.can_read', mock_access)
with pytest.raises(PermissionDenied):
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
def test_parent_access_check_worked(self, mocker, mock_organization):
mock_access = mocker.MagicMock(__name__='for logger', return_value=True)
with mocker.patch('awx.main.access.BaseAccess.can_read', mock_access):
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
mocker.patch('awx.main.access.BaseAccess.can_read', mock_access)
self.mock_view(parent=mock_organization).check_permissions(self.mock_request())
mock_access.assert_called_once_with(mock_organization)
def test_related_search_reverse_FK_field():

View File

@@ -66,7 +66,7 @@ class TestJobTemplateLabelList:
mock_request = mock.MagicMock()
super(JobTemplateLabelList, view).unattach(mock_request, None, None)
assert mixin_unattach.called_with(mock_request, None, None)
mixin_unattach.assert_called_with(mock_request, None, None)
class TestInventoryInventorySourcesUpdate:
@@ -108,15 +108,16 @@ class TestInventoryInventorySourcesUpdate:
mock_request = mocker.MagicMock()
mock_request.user.can_access.return_value = can_access
with mocker.patch.object(InventoryInventorySourcesUpdate, 'get_object', return_value=obj):
with mocker.patch.object(InventoryInventorySourcesUpdate, 'get_serializer_context', return_value=None):
with mocker.patch('awx.api.serializers.InventoryUpdateDetailSerializer') as serializer_class:
serializer = serializer_class.return_value
serializer.to_representation.return_value = {}
mocker.patch.object(InventoryInventorySourcesUpdate, 'get_object', return_value=obj)
mocker.patch.object(InventoryInventorySourcesUpdate, 'get_serializer_context', return_value=None)
serializer_class = mocker.patch('awx.api.serializers.InventoryUpdateDetailSerializer')
view = InventoryInventorySourcesUpdate()
response = view.post(mock_request)
assert response.data == expected
serializer = serializer_class.return_value
serializer.to_representation.return_value = {}
view = InventoryInventorySourcesUpdate()
response = view.post(mock_request)
assert response.data == expected
class TestSurveySpecValidation:

View File

@@ -155,35 +155,35 @@ def test_node_getter_and_setters():
class TestWorkflowJobCreate:
def test_create_no_prompts(self, wfjt_node_no_prompts, workflow_job_unit, mocker):
mock_create = mocker.MagicMock()
with mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create):
wfjt_node_no_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_no_prompts.char_prompts,
inventory=None,
unified_job_template=wfjt_node_no_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create)
wfjt_node_no_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_no_prompts.char_prompts,
inventory=None,
unified_job_template=wfjt_node_no_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
def test_create_with_prompts(self, wfjt_node_with_prompts, workflow_job_unit, credential, mocker):
mock_create = mocker.MagicMock()
with mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create):
wfjt_node_with_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_with_prompts.char_prompts,
inventory=wfjt_node_with_prompts.inventory,
unified_job_template=wfjt_node_with_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
mocker.patch('awx.main.models.WorkflowJobNode.objects.create', mock_create)
wfjt_node_with_prompts.create_workflow_job_node(workflow_job=workflow_job_unit)
mock_create.assert_called_once_with(
all_parents_must_converge=False,
extra_data={},
survey_passwords={},
char_prompts=wfjt_node_with_prompts.char_prompts,
inventory=wfjt_node_with_prompts.inventory,
unified_job_template=wfjt_node_with_prompts.unified_job_template,
workflow_job=workflow_job_unit,
identifier=mocker.ANY,
execution_environment=None,
)
@pytest.mark.django_db

View File

@@ -0,0 +1,26 @@
from unittest import mock
from django.core.mail.message import EmailMessage
import awx.main.notifications.awssns_backend as awssns_backend
def test_send_messages():
with mock.patch('awx.main.notifications.awssns_backend.AWSSNSBackend._sns_publish') as sns_publish_mock:
aws_region = 'us-east-1'
sns_topic = f"arn:aws:sns:{aws_region}:111111111111:topic-mock"
backend = awssns_backend.AWSSNSBackend(aws_region=aws_region, aws_access_key_id=None, aws_secret_access_key=None, aws_session_token=None)
message = EmailMessage(
'test subject',
{'body': 'test body'},
[],
[
sns_topic,
],
)
sent_messages = backend.send_messages(
[
message,
]
)
sns_publish_mock.assert_called_once_with(topic_arn=sns_topic, message=message.body)
assert sent_messages == 1
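
For context, a sketch of driving the new backend outside the test, mirroring the constructor and message shape used above; the region and topic ARN are placeholders, and passing None for the credentials presumably defers to boto3's default credential chain:

from django.core.mail.message import EmailMessage
import awx.main.notifications.awssns_backend as awssns_backend

backend = awssns_backend.AWSSNSBackend(
    aws_region='us-east-1',
    aws_access_key_id=None,
    aws_secret_access_key=None,
    aws_session_token=None,
)
message = EmailMessage('job update', {'body': 'job 42 succeeded'}, [], ['arn:aws:sns:us-east-1:111111111111:topic-mock'])
backend.send_messages([message])  # publishes message.body to each topic ARN in the recipient list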

View File

@@ -137,10 +137,10 @@ def test_send_notifications_not_list():
def test_send_notifications_job_id(mocker):
with mocker.patch('awx.main.models.UnifiedJob.objects.get'):
system.send_notifications([], job_id=1)
assert UnifiedJob.objects.get.called
assert UnifiedJob.objects.get.called_with(id=1)
mocker.patch('awx.main.models.UnifiedJob.objects.get')
system.send_notifications([], job_id=1)
assert UnifiedJob.objects.get.called
assert UnifiedJob.objects.get.called_with(id=1)
@mock.patch('awx.main.models.UnifiedJob.objects.get')

View File

@@ -7,15 +7,15 @@ def test_produce_supervisor_command(mocker):
mock_process = mocker.MagicMock()
mock_process.communicate = communicate_mock
Popen_mock = mocker.MagicMock(return_value=mock_process)
with mocker.patch.object(reload.subprocess, 'Popen', Popen_mock):
reload.supervisor_service_command("restart")
reload.subprocess.Popen.assert_called_once_with(
[
'supervisorctl',
'restart',
'tower-processes:*',
],
stderr=-1,
stdin=-1,
stdout=-1,
)
mocker.patch.object(reload.subprocess, 'Popen', Popen_mock)
reload.supervisor_service_command("restart")
reload.subprocess.Popen.assert_called_once_with(
[
'supervisorctl',
'restart',
'tower-processes:*',
],
stderr=-1,
stdin=-1,
stdout=-1,
)

View File

@@ -2,9 +2,11 @@
# All Rights Reserved.
# Python
import base64
import logging
import sys
import traceback
import os
from datetime import datetime
# Django
@@ -15,6 +17,15 @@ from django.utils.encoding import force_str
# AWX
from awx.main.exceptions import PostRunError
# OTEL
from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter as OTLPGrpcLogExporter
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter as OTLPHttpLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.resources import Resource
class RSysLogHandler(logging.handlers.SysLogHandler):
append_nul = False
@@ -133,3 +144,39 @@ if settings.COLOR_LOGS is True:
pass
else:
ColorHandler = logging.StreamHandler
class OTLPHandler(LoggingHandler):
def __init__(self, endpoint=None, protocol='grpc', service_name=None, instance_id=None, auth=None, username=None, password=None):
if not endpoint:
raise ValueError("endpoint required")
if auth == 'basic' and (username is None or password is None):
raise ValueError("auth type basic requires username and passsword parameters")
self.endpoint = endpoint
self.service_name = service_name or (sys.argv[1] if len(sys.argv) > 1 else (sys.argv[0] or 'unknown_service'))
self.instance_id = instance_id or os.uname().nodename
logger_provider = LoggerProvider(
resource=Resource.create(
{
"service.name": self.service_name,
"service.instance.id": self.instance_id,
}
),
)
set_logger_provider(logger_provider)
headers = {}
if auth == 'basic':
secret = f'{username}:{password}'
headers['Authorization'] = "Basic " + base64.b64encode(secret.encode()).decode()
if protocol == 'grpc':
otlp_exporter = OTLPGrpcLogExporter(endpoint=self.endpoint, insecure=True, headers=headers)
elif protocol == 'http':
otlp_exporter = OTLPHttpLogExporter(endpoint=self.endpoint, headers=headers)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(otlp_exporter))
super().__init__(level=logging.NOTSET, logger_provider=logger_provider)
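
The settings hunk further down registers the otel handler as a logging.NullHandler by default. A hypothetical dictConfig override pointing it at a local collector instead; the module path, endpoint, and service name are assumptions, since the file being edited is not named here:

LOGGING['handlers']['otel'] = {
    '()': 'awx.main.utils.handlers.OTLPHandler',  # assumed location of the class above
    'endpoint': 'http://localhost:4317',          # placeholder OTLP collector endpoint
    'protocol': 'grpc',
    'service_name': 'awx',
}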

View File

@@ -285,8 +285,6 @@ class WebSocketRelayManager(object):
except asyncio.CancelledError:
# Handle the case where the task was already cancelled by the time we got here.
pass
except Exception as e:
logger.warning(f"Failed to cancel relay connection for {hostname}: {e}")
del self.relay_connections[hostname]
@@ -297,8 +295,6 @@ class WebSocketRelayManager(object):
self.stats_mgr.delete_remote_host_stats(hostname)
except KeyError:
pass
except Exception as e:
logger.warning(f"Failed to delete stats for {hostname}: {e}")
async def run(self):
event_loop = asyncio.get_running_loop()
@@ -306,7 +302,6 @@ class WebSocketRelayManager(object):
self.stats_mgr = RelayWebsocketStatsManager(event_loop, self.local_hostname)
self.stats_mgr.start()
# Set up a pg_notify consumer for allowing web nodes to "provision" and "deprovision" themselves gracefully.
database_conf = deepcopy(settings.DATABASES['default'])
database_conf['OPTIONS'] = deepcopy(database_conf.get('OPTIONS', {}))
@@ -318,79 +313,54 @@ class WebSocketRelayManager(object):
if 'PASSWORD' in database_conf:
database_conf['OPTIONS']['password'] = database_conf.pop('PASSWORD')
task = None
async_conn = await psycopg.AsyncConnection.connect(
dbname=database_conf['NAME'],
host=database_conf['HOST'],
user=database_conf['USER'],
port=database_conf['PORT'],
**database_conf.get("OPTIONS", {}),
)
# Managing the async_conn here so that we can close it if we need to restart the connection
async_conn = None
await async_conn.set_autocommit(True)
on_ws_heartbeat_task = event_loop.create_task(self.on_ws_heartbeat(async_conn))
# Establishes a websocket connection to /websocket/relay on all API servers
try:
while True:
if not task or task.done():
try:
# Try to close the connection if it's open
if async_conn:
try:
await async_conn.close()
except Exception as e:
logger.warning(f"Failed to close connection to database for pg_notify: {e}")
while True:
if on_ws_heartbeat_task.done():
raise Exception("on_ws_heartbeat_task has exited")
# and re-establish the connection
async_conn = await psycopg.AsyncConnection.connect(
dbname=database_conf['NAME'],
host=database_conf['HOST'],
user=database_conf['USER'],
port=database_conf['PORT'],
**database_conf.get("OPTIONS", {}),
)
await async_conn.set_autocommit(True)
future_remote_hosts = self.known_hosts.keys()
current_remote_hosts = self.relay_connections.keys()
deleted_remote_hosts = set(current_remote_hosts) - set(future_remote_hosts)
new_remote_hosts = set(future_remote_hosts) - set(current_remote_hosts)
# before creating the task that uses the connection
task = event_loop.create_task(self.on_ws_heartbeat(async_conn), name="on_ws_heartbeat")
logger.info("Creating `on_ws_heartbeat` task in event loop.")
# This loop handles if we get an advertisement from a host we already know about but
# the advertisement has a different IP than we are currently connected to.
for hostname, address in self.known_hosts.items():
if hostname not in self.relay_connections:
# We've picked up a new hostname that we don't know about yet.
continue
except Exception as e:
logger.warning(f"Failed to connect to database for pg_notify: {e}")
if address != self.relay_connections[hostname].remote_host:
deleted_remote_hosts.add(hostname)
new_remote_hosts.add(hostname)
future_remote_hosts = self.known_hosts.keys()
current_remote_hosts = self.relay_connections.keys()
deleted_remote_hosts = set(current_remote_hosts) - set(future_remote_hosts)
new_remote_hosts = set(future_remote_hosts) - set(current_remote_hosts)
# Delete any hosts with closed connections
for hostname, relay_conn in self.relay_connections.items():
if not relay_conn.connected:
deleted_remote_hosts.add(hostname)
# This loop handles if we get an advertisement from a host we already know about but
# the advertisement has a different IP than we are currently connected to.
for hostname, address in self.known_hosts.items():
if hostname not in self.relay_connections:
# We've picked up a new hostname that we don't know about yet.
continue
if deleted_remote_hosts:
logger.info(f"Removing {deleted_remote_hosts} from websocket broadcast list")
await asyncio.gather(*[self.cleanup_offline_host(h) for h in deleted_remote_hosts])
if address != self.relay_connections[hostname].remote_host:
deleted_remote_hosts.add(hostname)
new_remote_hosts.add(hostname)
if new_remote_hosts:
logger.info(f"Adding {new_remote_hosts} to websocket broadcast list")
# Delete any hosts with closed connections
for hostname, relay_conn in self.relay_connections.items():
if not relay_conn.connected:
deleted_remote_hosts.add(hostname)
for h in new_remote_hosts:
stats = self.stats_mgr.new_remote_host_stats(h)
relay_connection = WebsocketRelayConnection(name=self.local_hostname, stats=stats, remote_host=self.known_hosts[h])
relay_connection.start()
self.relay_connections[h] = relay_connection
if deleted_remote_hosts:
logger.info(f"Removing {deleted_remote_hosts} from websocket broadcast list")
await asyncio.gather(*[self.cleanup_offline_host(h) for h in deleted_remote_hosts])
if new_remote_hosts:
logger.info(f"Adding {new_remote_hosts} to websocket broadcast list")
for h in new_remote_hosts:
stats = self.stats_mgr.new_remote_host_stats(h)
relay_connection = WebsocketRelayConnection(name=self.local_hostname, stats=stats, remote_host=self.known_hosts[h])
relay_connection.start()
self.relay_connections[h] = relay_connection
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS)
finally:
if async_conn:
logger.info("Shutting down db connection for wsrelay.")
try:
await async_conn.close()
except Exception as e:
logger.info(f"Failed to close connection to database for pg_notify: {e}")
await asyncio.sleep(settings.BROADCAST_WEBSOCKET_NEW_INSTANCE_POLL_RATE_SECONDS)

View File

@@ -114,6 +114,7 @@ MEDIA_ROOT = os.path.join(BASE_DIR, 'public', 'media')
MEDIA_URL = '/media/'
LOGIN_URL = '/api/login/'
LOGOUT_ALLOWED_HOSTS = None
# Absolute filesystem path to the directory to host projects (with playbooks).
# This directory should not be web-accessible.
@@ -277,6 +278,9 @@ SESSION_COOKIE_SECURE = True
# Note: This setting may be overridden by database settings.
SESSION_COOKIE_AGE = 1800
# Option to change userLoggedIn cookie SameSite policy.
USER_COOKIE_SAMESITE = 'Lax'
# Name of the cookie that contains the session information.
# Note: Changing this value may require changes to any clients.
SESSION_COOKIE_NAME = 'awx_sessionid'
@@ -876,6 +880,7 @@ LOGGING = {
'address': '/var/run/awx-rsyslog/rsyslog.sock',
'filters': ['external_log_enabled', 'dynamic_level_filter', 'guid'],
},
'otel': {'class': 'logging.NullHandler'},
},
'loggers': {
'django': {'handlers': ['console']},
@@ -1145,13 +1150,8 @@ ANSIBLE_BASE_CUSTOM_VIEW_PARENT = 'awx.api.generics.APIView'
# Settings for the ansible_base RBAC system
# Only used internally, names of the managed RoleDefinitions to create
ANSIBLE_BASE_ROLE_PRECREATE = {
'object_admin': '{cls.__name__} Admin',
'org_admin': 'Organization Admin',
'org_children': 'Organization {cls.__name__} Admin',
'special': '{cls.__name__} {action}',
}
# This has been moved to data migration code
ANSIBLE_BASE_ROLE_PRECREATE = {}
# Name for auto-created roles that give users permissions to what they create
ANSIBLE_BASE_ROLE_CREATOR_NAME = '{cls.__name__} Creator'
@@ -1162,9 +1162,6 @@ ANSIBLE_BASE_ROLE_SYSTEM_ACTIVATED = True
# Permissions a user will get when creating a new item
ANSIBLE_BASE_CREATOR_DEFAULTS = ['change', 'delete', 'execute', 'use', 'adhoc', 'approve', 'update', 'view']
# This is a stopgap, will delete after resource registry integration
ANSIBLE_BASE_SERVICE_PREFIX = "awx"
# Temporary, for old roles API compatibility, save child permissions at organization level
ANSIBLE_BASE_CACHE_PARENT_PERMISSIONS = True
@@ -1178,6 +1175,3 @@ ANSIBLE_BASE_ALLOW_SINGLETON_ROLES_API = False # Do not allow creating user-def
# system username for django-ansible-base
SYSTEM_USERNAME = None
# Use AWX base view, to give 401 on unauthenticated requests
ANSIBLE_BASE_CUSTOM_VIEW_PARENT = 'awx.api.generics.APIView'

View File

@@ -7,18 +7,18 @@ from django.core.cache import cache
def test_ldap_default_settings(mocker):
from_db = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db):
settings = LDAPSettings()
assert settings.ORGANIZATION_MAP == {}
assert settings.TEAM_MAP == {}
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db)
settings = LDAPSettings()
assert settings.ORGANIZATION_MAP == {}
assert settings.TEAM_MAP == {}
def test_ldap_default_network_timeout(mocker):
cache.clear() # clearing cache avoids picking up stray default for OPT_REFERRALS
from_db = mocker.Mock(**{'order_by.return_value': []})
with mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db):
settings = LDAPSettings()
assert settings.CONNECTION_OPTIONS[ldap.OPT_NETWORK_TIMEOUT] == 30
mocker.patch('awx.conf.models.Setting.objects.filter', return_value=from_db)
settings = LDAPSettings()
assert settings.CONNECTION_OPTIONS[ldap.OPT_NETWORK_TIMEOUT] == 30
def test_ldap_filter_validator():

View File

@@ -38,7 +38,9 @@ class CompleteView(BaseRedirectView):
response = super(CompleteView, self).dispatch(request, *args, **kwargs)
if self.request.user and self.request.user.is_authenticated:
logger.info(smart_str(u"User {} logged in".format(self.request.user.username)))
response.set_cookie('userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False))
response.set_cookie(
'userLoggedIn', 'true', secure=getattr(settings, 'SESSION_COOKIE_SECURE', False), samesite=getattr(settings, 'USER_COOKIE_SAMESITE', 'Lax')
)
response.setdefault('X-API-Session-Cookie-Name', getattr(settings, 'SESSION_COOKIE_NAME', 'awx_sessionid'))
return response

View File

@@ -62,7 +62,7 @@ function CredentialLookup({
? { credential_type: credentialTypeId }
: {};
const typeKindParams = credentialTypeKind
? { credential_type__kind: credentialTypeKind }
? { credential_type__kind__in: credentialTypeKind }
: {};
const typeNamespaceParams = credentialTypeNamespace
? { credential_type__namespace: credentialTypeNamespace }
@@ -125,7 +125,7 @@ function CredentialLookup({
? { credential_type: credentialTypeId }
: {};
const typeKindParams = credentialTypeKind
? { credential_type__kind: credentialTypeKind }
? { credential_type__kind__in: credentialTypeKind }
: {};
const typeNamespaceParams = credentialTypeNamespace
? { credential_type__namespace: credentialTypeNamespace }
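
The kind filter now goes through the __in lookup, so a comma-separated value such as 'cloud,kubernetes' matches either kind. A rough sketch of the equivalent API query the lookup would issue; host and credentials are placeholders:

import requests

resp = requests.get(
    'https://awx.example.com/api/v2/credentials/',
    params={'credential_type__kind__in': 'cloud,kubernetes'},  # cloud OR kubernetes credential types
    auth=('admin', 'password'),
)
resp.raise_for_status()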

View File

@@ -190,6 +190,7 @@ function NotificationList({
name: t`Notification type`,
key: 'or__notification_type',
options: [
['awssns', t`AWS SNS`],
['email', t`Email`],
['grafana', t`Grafana`],
['hipchat', t`Hipchat`],

View File

@@ -12,7 +12,7 @@ const Inner = styled.div`
border-radius: 2px;
color: white;
left: 10px;
max-width: 300px;
max-width: 500px;
padding: 5px 10px;
position: absolute;
top: 10px;

View File

@@ -12,6 +12,7 @@ const GridDL = styled.dl`
column-gap: 15px;
display: grid;
grid-template-columns: max-content;
overflow-wrap: anywhere;
row-gap: 0px;
dt {
grid-column-start: 1;

View File

@@ -22,7 +22,7 @@ const ansibleDocUrls = {
constructed:
'https://docs.ansible.com/ansible/latest/collections/ansible/builtin/constructed_inventory.html',
terraform:
'https://github.com/ansible-collections/cloud.terraform/blob/stable-statefile-inventory/plugins/inventory/terraform_state.py',
'https://github.com/ansible-collections/cloud.terraform/blob/main/docs/cloud.terraform.terraform_state_inventory.rst',
};
const getInventoryHelpTextStrings = () => ({

View File

@@ -87,7 +87,7 @@ const SCMSubForm = ({ autoPopulateProject }) => {
/>
)}
<CredentialLookup
credentialTypeKind="cloud"
credentialTypeKind="cloud,kubernetes"
label={t`Credential`}
value={credentialField.value}
onChange={handleCredentialUpdate}

View File

@@ -138,6 +138,25 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
}
dataCy="nt-detail-type"
/>
{template.notification_type === 'awssns' && (
<>
<Detail
label={t`AWS Region`}
value={configuration.aws_region}
dataCy="nt-detail-aws-region"
/>
<Detail
label={t`Access Key ID`}
value={configuration.aws_access_key_id}
dataCy="nt-detail-aws-access-key-id"
/>
<Detail
label={t`SNS Topic ARN`}
value={configuration.sns_topic_arn}
dataCy="nt-detail-sns-topic-arn"
/>
</>
)}
{template.notification_type === 'email' && (
<>
<Detail
@@ -455,8 +474,8 @@ function NotificationTemplateDetail({ template, defaultMessages }) {
}
function CustomMessageDetails({ messages, defaults, type }) {
const showMessages = type !== 'webhook';
const showBodies = ['email', 'pagerduty', 'webhook'].includes(type);
const showMessages = !['awssns', 'webhook'].includes(type);
const showBodies = ['email', 'pagerduty', 'webhook', 'awssns'].includes(type);
return (
<>

View File

@@ -131,6 +131,7 @@ function NotificationTemplatesList() {
name: t`Notification type`,
key: 'or__notification_type',
options: [
['awssns', t`AWS SNS`],
['email', t`Email`],
['grafana', t`Grafana`],
['hipchat', t`Hipchat`],

View File

@@ -1,5 +1,6 @@
/* eslint-disable-next-line import/prefer-default-export */
export const NOTIFICATION_TYPES = {
awssns: 'AWS SNS',
email: 'Email',
grafana: 'Grafana',
irc: 'IRC',

View File

@@ -11,8 +11,8 @@ import getDocsBaseUrl from 'util/getDocsBaseUrl';
function CustomMessagesSubForm({ defaultMessages, type }) {
const [useCustomField, , useCustomHelpers] = useField('useCustomMessages');
const showMessages = type !== 'webhook';
const showBodies = ['email', 'pagerduty', 'webhook'].includes(type);
const showMessages = !['webhook', 'awssns'].includes(type);
const showBodies = ['email', 'pagerduty', 'webhook', 'awssns'].includes(type);
const { setFieldValue } = useFormikContext();
const config = useConfig();

View File

@@ -78,6 +78,7 @@ function NotificationTemplateFormFields({ defaultMessages, template }) {
label: t`Choose a Notification Type`,
isDisabled: true,
},
{ value: 'awssns', key: 'awssns', label: t`AWS SNS` },
{ value: 'email', key: 'email', label: t`E-mail` },
{ value: 'grafana', key: 'grafana', label: 'Grafana' },
{ value: 'irc', key: 'irc', label: 'IRC' },

View File

@@ -29,6 +29,7 @@ import Popover from '../../../components/Popover/Popover';
import getHelpText from './Notifications.helptext';
const TypeFields = {
awssns: AWSSNSFields,
email: EmailFields,
grafana: GrafanaFields,
irc: IRCFields,
@@ -58,6 +59,44 @@ TypeInputsSubForm.propTypes = {
export default TypeInputsSubForm;
function AWSSNSFields() {
return (
<>
<FormField
id="awssns-aws-region"
label={t`AWS Region`}
name="notification_configuration.aws_region"
type="text"
isRequired
/>
<FormField
id="awssns-aws-access-key-id"
label={t`Access Key ID`}
name="notification_configuration.aws_access_key_id"
type="text"
/>
<PasswordField
id="awssns-aws-secret-access-key"
label={t`Secret Access Key`}
name="notification_configuration.aws_secret_access_key"
/>
<PasswordField
id="awssns-aws-session-token"
label={t`Session Token`}
name="notification_configuration.aws_session_token"
/>
<FormField
id="awssns-sns-topic-arn"
label={t`SNS Topic ARN`}
name="notification_configuration.sns_topic_arn"
type="text"
validate={required(null)}
isRequired
/>
</>
);
}
function EmailFields() {
const helpText = getHelpText();
return (

View File

@@ -203,6 +203,39 @@
}
}
},
"awssns": {
"started": {
"body": "{{ job_metadata }}"
},
"success": {
"body": "{{ job_metadata }}"
},
"error": {
"body": "{{ job_metadata }}"
},
"workflow_approval": {
"running": {
"body": {
"body": "The approval node \"{{ approval_node_name }}\" needs review. This node can be viewed at: {{ workflow_url }}"
}
},
"approved": {
"body": {
"body": "The approval node \"{{ approval_node_name }}\" was approved. {{ workflow_url }}"
}
},
"timed_out": {
"body": {
"body": "The approval node \"{{ approval_node_name }}\" has timed out. {{ workflow_url }}"
}
},
"denied": {
"body": {
"body": "The approval node \"{{ approval_node_name }}\" was denied. {{ workflow_url }}"
}
}
}
},
"mattermost": {
"started": {
"message": "{{ job_friendly_name }} #{{ job.id }} '{{ job.name }}' {{ job.status }}: {{ url }}",

View File

@@ -1,4 +1,11 @@
const typeFieldNames = {
awssns: [
'aws_region',
'aws_access_key_id',
'aws_secret_access_key',
'aws_session_token',
'sns_topic_arn',
],
email: [
'username',
'password',

View File

@@ -78,12 +78,14 @@ function MiscAuthenticationEdit() {
default: OAUTH2_PROVIDER_OPTIONS.default.ACCESS_TOKEN_EXPIRE_SECONDS,
type: OAUTH2_PROVIDER_OPTIONS.child.type,
label: t`Access Token Expiration`,
help_text: t`Access Token Expiration in seconds`,
},
REFRESH_TOKEN_EXPIRE_SECONDS: {
...OAUTH2_PROVIDER_OPTIONS,
default: OAUTH2_PROVIDER_OPTIONS.default.REFRESH_TOKEN_EXPIRE_SECONDS,
type: OAUTH2_PROVIDER_OPTIONS.child.type,
label: t`Refresh Token Expiration`,
help_text: t`Refresh Token Expiration in seconds`,
},
AUTHORIZATION_CODE_EXPIRE_SECONDS: {
...OAUTH2_PROVIDER_OPTIONS,
@@ -91,6 +93,7 @@ function MiscAuthenticationEdit() {
OAUTH2_PROVIDER_OPTIONS.default.AUTHORIZATION_CODE_EXPIRE_SECONDS,
type: OAUTH2_PROVIDER_OPTIONS.child.type,
label: t`Authorization Code Expiration`,
help_text: t`Authorization Code Expiration in seconds`,
},
};

View File

@@ -374,6 +374,7 @@ export const CredentialType = shape({
});
export const NotificationType = oneOf([
'awssns',
'email',
'grafana',
'irc',

View File

@@ -17,7 +17,7 @@ import time
import re
from json import loads, dumps
from os.path import isfile, expanduser, split, join, exists, isdir
from os import access, R_OK, getcwd, environ
from os import access, R_OK, getcwd, environ, getenv
try:
@@ -107,7 +107,7 @@ class ControllerModule(AnsibleModule):
# Perform magic depending on whether controller_oauthtoken is a string or a dict
if self.params.get('controller_oauthtoken'):
token_param = self.params.get('controller_oauthtoken')
if type(token_param) is dict:
if isinstance(token_param, dict):
if 'token' in token_param:
self.oauth_token = self.params.get('controller_oauthtoken')['token']
else:
@@ -148,9 +148,10 @@ class ControllerModule(AnsibleModule):
# Make sure we start with /api/vX
if not endpoint.startswith("/"):
endpoint = "/{0}".format(endpoint)
prefix = self.url_prefix.rstrip("/")
if not endpoint.startswith(prefix + "/api/"):
endpoint = prefix + "/api/v2{0}".format(endpoint)
hostname_prefix = self.url_prefix.rstrip("/")
api_path = self.api_path()
if not endpoint.startswith(hostname_prefix + api_path):
endpoint = hostname_prefix + f"{api_path}v2{endpoint}"
if not endpoint.endswith('/') and '?' not in endpoint:
endpoint = "{0}/".format(endpoint)
@@ -215,7 +216,7 @@ class ControllerModule(AnsibleModule):
try:
config_data = yaml.load(config_string, Loader=yaml.SafeLoader)
# If this is an actual ini file, yaml will return the whole thing as a string instead of a dict
if type(config_data) is not dict:
if not isinstance(config_data, dict):
raise AssertionError("The yaml config file is not properly formatted as a dict.")
try_config_parsing = False
@@ -257,7 +258,7 @@ class ControllerModule(AnsibleModule):
if honorred_setting in config_data:
# Verify SSL must be a boolean
if honorred_setting == 'verify_ssl':
if type(config_data[honorred_setting]) is str:
if isinstance(config_data[honorred_setting], str):
setattr(self, honorred_setting, strtobool(config_data[honorred_setting]))
else:
setattr(self, honorred_setting, bool(config_data[honorred_setting]))
@@ -603,6 +604,14 @@ class ControllerAPIModule(ControllerModule):
status_code = response.status
return {'status_code': status_code, 'json': response_json}
def api_path(self):
default_api_path = "/api/"
if self._COLLECTION_TYPE != "awx":
default_api_path = "/api/controller/"
prefix = getenv('CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX', default_api_path)
return prefix
def authenticate(self, **kwargs):
if self.username and self.password:
# Attempt to get a token from /api/v2/tokens/ by giving it our username/password combo
@@ -613,7 +622,7 @@ class ControllerAPIModule(ControllerModule):
"scope": "write",
}
# Preserve URL prefix
endpoint = self.url_prefix.rstrip('/') + '/api/v2/tokens/'
endpoint = self.url_prefix.rstrip('/') + f'{self.api_path()}v2/tokens/'
# Post to the tokens endpoint with basic auth to try and get a token
api_token_url = (self.url._replace(path=endpoint)).geturl()
@@ -1002,7 +1011,7 @@ class ControllerAPIModule(ControllerModule):
if self.authenticated and self.oauth_token_id:
# Attempt to delete our current token from /api/v2/tokens/
# Post to the tokens endpoint with basic auth to try and get a token
endpoint = self.url_prefix.rstrip('/') + '/api/v2/tokens/{0}/'.format(self.oauth_token_id)
endpoint = self.url_prefix.rstrip('/') + f'{self.api_path()}v2/tokens/{self.oauth_token_id}/'
api_token_url = (self.url._replace(path=endpoint, query=None)).geturl()  # in error cases, fail_json exits before exception handling
try:
@@ -1038,7 +1047,10 @@ class ControllerAPIModule(ControllerModule):
# Grab our start time to compare against for the timeout
start = time.time()
result = self.get_endpoint(url)
while not result['json']['finished']:
wait_on_field = 'event_processing_finished'
if wait_on_field not in result['json']:
wait_on_field = 'finished'
while not result['json'][wait_on_field]:
# If we are past our time out fail with a message
if timeout and timeout < time.time() - start:
# Account for Legacy messages
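
Pulling the pieces of this file together: the new api_path() reads CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX, falling back to /api/ for the awx collection and /api/controller/ otherwise, and the token endpoints above are now built from it. A condensed sketch, assuming the prefix carries leading and trailing slashes like the defaults:

from os import getenv

def build_endpoint(resource, collection_type="awx"):
    # Mirrors api_path() above: the env var wins, otherwise a per-collection default.
    default_api_path = "/api/" if collection_type == "awx" else "/api/controller/"
    api_path = getenv('CONTROLLER_OPTIONAL_API_URLPATTERN_PREFIX', default_api_path)
    return f"{api_path}v2/{resource}/"

build_endpoint("job_templates")                # '/api/v2/job_templates/'
build_endpoint("job_templates", "controller")  # '/api/controller/v2/job_templates/'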

View File

@@ -163,7 +163,7 @@ def main():
for arg in ['job_type', 'limit', 'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode']:
if module.params.get(arg):
# extra_var can receive a dict or a string; if a dict, convert it to a string
if arg == 'extra_vars' and type(module.params.get(arg)) is not str:
if arg == 'extra_vars' and not isinstance(module.params.get(arg), str):
post_data[arg] = json.dumps(module.params.get(arg))
else:
post_data[arg] = module.params.get(arg)

View File

@@ -121,6 +121,7 @@ def main():
client_type = module.params.get('client_type')
organization = module.params.get('organization')
redirect_uris = module.params.get('redirect_uris')
skip_authorization = module.params.get('skip_authorization')
state = module.params.get('state')
# Attempt to look up the related items the user specified (these will fail the module if not found)
@@ -146,6 +147,8 @@ def main():
application_fields['description'] = description
if redirect_uris is not None:
application_fields['redirect_uris'] = ' '.join(redirect_uris)
if skip_authorization is not None:
application_fields['skip_authorization'] = skip_authorization
response = module.create_or_update_if_needed(application, application_fields, endpoint='applications', item_type='application', auto_exit=False)
if 'client_id' in response:

View File

@@ -56,7 +56,7 @@ import logging
# In this module we don't use EXPORTABLE_RESOURCES, we just want to validate that our installed awxkit has import/export
try:
from awxkit.api.pages.api import EXPORTABLE_RESOURCES # noqa
from awxkit.api.pages.api import EXPORTABLE_RESOURCES # noqa: F401; pylint: disable=unused-import
HAS_EXPORTABLE_RESOURCES = True
except ImportError:

View File

@@ -50,6 +50,7 @@ options:
description:
- The type of notification to be sent.
choices:
- 'awssns'
- 'email'
- 'grafana'
- 'irc'
@@ -219,7 +220,7 @@ def main():
copy_from=dict(),
description=dict(),
organization=dict(),
notification_type=dict(choices=['email', 'grafana', 'irc', 'mattermost', 'pagerduty', 'rocketchat', 'slack', 'twilio', 'webhook']),
notification_type=dict(choices=['awssns', 'email', 'grafana', 'irc', 'mattermost', 'pagerduty', 'rocketchat', 'slack', 'twilio', 'webhook']),
notification_configuration=dict(type='dict'),
messages=dict(type='dict'),
state=dict(choices=['present', 'absent', 'exists'], default='present'),

View File

@@ -19,7 +19,7 @@ from ansible.module_utils.six import raise_from
from ansible_base.rbac.models import RoleDefinition, DABPermission
from awx.main.tests.functional.conftest import _request
from awx.main.tests.functional.conftest import credentialtype_scm, credentialtype_ssh # noqa: F401; pylint: disable=unused-variable
from awx.main.tests.functional.conftest import credentialtype_scm, credentialtype_ssh # noqa: F401; pylint: disable=unused-import
from awx.main.models import (
Organization,
Project,

View File

@@ -0,0 +1 @@
plugins/modules/export.py validate-modules:nonexistent-parameter-documented # needs awxkit to construct argspec

View File

@@ -234,7 +234,7 @@ class ApiV2(base.Base):
return endpoint.get(**{identifier: value}, all_pages=True)
def export_assets(self, **kwargs):
self._cache = page.PageCache()
self._cache = page.PageCache(self.connection)
# If no resource kwargs are explicitly used, export everything.
all_resources = all(kwargs.get(resource) is None for resource in EXPORTABLE_RESOURCES)
@@ -335,7 +335,7 @@ class ApiV2(base.Base):
if name == 'roles':
indexed_roles = defaultdict(list)
for role in S:
if 'content_object' not in role:
if role.get('content_object') is None:
continue
indexed_roles[role['content_object']['type']].append(role)
self._roles.append((_page, indexed_roles))
@@ -411,7 +411,7 @@ class ApiV2(base.Base):
# FIXME: deal with pruning existing relations that do not match the import set
def import_assets(self, data):
self._cache = page.PageCache()
self._cache = page.PageCache(self.connection)
self._related = []
self._roles = []
@@ -420,11 +420,8 @@ class ApiV2(base.Base):
for resource in self._dependent_resources():
endpoint = getattr(self, resource)
# Load up existing objects, so that we can try to update or link to them
self._cache.get_page(endpoint)
imported = self._import_list(endpoint, data.get(resource) or [])
changed = changed or imported
# FIXME: should we delete existing unpatched assets?
self._assign_related()
self._assign_membership()

View File

@@ -11,7 +11,7 @@ from . import page
job_results = ('any', 'error', 'success')
notification_types = ('email', 'irc', 'pagerduty', 'slack', 'twilio', 'webhook', 'mattermost', 'grafana', 'rocketchat')
notification_types = ('awssns', 'email', 'irc', 'pagerduty', 'slack', 'twilio', 'webhook', 'mattermost', 'grafana', 'rocketchat')
class NotificationTemplate(HasCopy, HasCreate, base.Base):
@@ -58,7 +58,10 @@ class NotificationTemplate(HasCopy, HasCreate, base.Base):
if payload.notification_configuration == {}:
services = config.credentials.notification_services
if notification_type == 'email':
if notification_type == 'awssns':
fields = ('aws_region', 'aws_access_key_id', 'aws_secret_access_key', 'aws_session_token', 'sns_topic_arn')
cred = services.awssns
elif notification_type == 'email':
fields = ('host', 'username', 'password', 'port', 'use_ssl', 'use_tls', 'sender', 'recipients')
cred = services.email
elif notification_type == 'irc':

View File

@@ -11,6 +11,7 @@ from awxkit.utils import PseudoNamespace, is_relative_endpoint, are_same_endpoin
from awxkit.api import utils
from awxkit.api.client import Connection
from awxkit.api.registry import URLRegistry
from awxkit.api.resources import resources
from awxkit.config import config
import awxkit.exceptions as exc
@@ -493,10 +494,11 @@ class TentativePage(str):
class PageCache(object):
def __init__(self):
def __init__(self, connection=None):
self.options = {}
self.pages_by_url = {}
self.pages_by_natural_key = {}
self.connection = connection or Connection(config.base_url, not config.assume_untrusted)
def get_options(self, page):
url = page.endpoint if isinstance(page, Page) else str(page)
@@ -550,7 +552,31 @@ class PageCache(object):
return self.set_page(page)
def get_by_natural_key(self, natural_key):
endpoint = self.pages_by_natural_key.get(utils.freeze(natural_key))
log.debug("get_by_natural_key: %s, endpoint: %s", repr(natural_key), endpoint)
if endpoint:
return self.get_page(endpoint)
page = self.pages_by_natural_key.get(utils.freeze(natural_key))
if page is None:
# We need some way to get ahold of the top-level resource
# list endpoint from the natural_key type. The resources
# object more or less has that for each of the detail
# views. Just chop off the /<id>/ bit.
endpoint = getattr(resources, natural_key['type'], None)
if endpoint is None:
return
endpoint = ''.join([endpoint.rsplit('/', 2)[0], '/'])
page_type = get_registered_page(endpoint)
kwargs = {}
for k, v in natural_key.items():
if isinstance(v, str) and k != 'type':
kwargs[k] = v
# Do a filtered query against the list endpoint, usually
# with the name of the object but sometimes more.
list_page = page_type(self.connection, endpoint=endpoint).get(all_pages=True, **kwargs)
if 'results' in list_page:
for p in list_page.results:
self.set_page(p)
page = self.pages_by_natural_key.get(utils.freeze(natural_key))
log.debug("get_by_natural_key: %s, endpoint: %s", repr(natural_key), page)
if page:
return self.get_page(page)
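A standalone illustration of the endpoint derivation used above: the cache takes the registered detail endpoint for the natural key's type, drops the trailing /<id>/ component to get the list endpoint, and then runs a filtered query against it. The detail endpoint below is a hypothetical stand-in for an entry in awxkit's resources registry:

detail_endpoint = "/api/v2/organizations/{id}/"  # hypothetical registry entry
list_endpoint = "".join([detail_endpoint.rsplit("/", 2)[0], "/"])
print(list_endpoint)  # /api/v2/organizations/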

View File

@@ -185,7 +185,7 @@ def format_human(output, fmt):
def format_num(v):
try:
return locale.format("%.*f", (0, int(v)), True)
return locale.format_string("%.*f", (0, int(v)), True)
except (ValueError, TypeError):
if isinstance(v, (list, dict)):
return json.dumps(v)
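locale.format() was removed in Python 3.12; locale.format_string() is the supported replacement and accepts the same argument tuple used here. A minimal sketch (output depends on the active locale):

import locale

try:
    locale.setlocale(locale.LC_ALL, "")  # pick up the user's locale for grouping
except locale.Error:
    pass
print(locale.format_string("%.*f", (0, int("1234567")), True))  # e.g. 1,234,567 under en_US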

View File

@@ -23,7 +23,7 @@ idna==3.4
# via requests
imagesize==1.4.1
# via sphinx
jinja2==3.1.3
jinja2==3.1.4
# via
# -r requirements.in
# sphinx

Binary files not shown (9 new images added).

View File

@@ -29,6 +29,7 @@ You can also find lots of AWX discussion and get answers to questions at `forum.
organizations
users
teams
rbac
credentials
credential_types
credential_plugins

View File

@@ -1096,7 +1096,7 @@ Terraform State
pair: inventory source; Terraform state
This inventory source uses the `terraform_state <https://github.com/ansible-collections/cloud.terraform/blob/main/plugins/inventory/terraform_state.py>`_ inventory plugin from the `cloud.terraform <https://github.com/ansible-collections/cloud.terraform>`_ collection. The plugin will parse a terraform state file and add hosts for AWS EC2, GCE, and Azure instances.
This inventory source uses the `terraform_state <https://github.com/ansible-collections/cloud.terraform/blob/main/docs/cloud.terraform.terraform_state_inventory.rst>`_ inventory plugin from the `cloud.terraform <https://github.com/ansible-collections/cloud.terraform>`_ collection. The plugin will parse a terraform state file and add hosts for AWS EC2, GCE, and Azure instances.
1. To configure this type of sourced inventory, select **Terraform State** from the Source field.
@@ -1104,7 +1104,7 @@ This inventory source uses the `terraform_state <https://github.com/ansible-coll
3. You can optionally specify the verbosity, host filter, enabled variable/value, and update options as described in the main procedure for :ref:`adding a source <ug_add_inv_common_fields>`. For Terraform, enable **Overwrite** and **Update on launch** options.
4. Use the **Source Variables** field to override variables used by the ``controller`` inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the `terraform_state <https://github.com/ansible-collections/cloud.terraform/blob/main/plugins/inventory/terraform_state.py>`_ file for detail.
4. Use the **Source Variables** field to override variables used by the ``controller`` inventory plugin. Enter variables using either JSON or YAML syntax. Use the radio button to toggle between the two. For more information on these variables, see the `terraform_state <https://github.com/ansible-collections/cloud.terraform/blob/main/docs/cloud.terraform.terraform_state_inventory.rst>`_ file for details.
The ``backend_type`` variable is required by the Terraform state inventory plugin. This should match the remote backend configured in the Terraform backend credential; here is an example for an Amazon S3 backend:

View File

@@ -84,6 +84,7 @@ Notification Types
.. index::
pair: notifications; types
triple: notifications; types; AWS SNS
triple: notifications; types; Email
triple: notifications; types; Grafana
triple: notifications; types; IRC
@@ -101,6 +102,18 @@ Notification types supported with AWX:
Each of these has its own configuration and behavioral semantics, and testing them may need to be approached in different ways. Additionally, you can customize each type of notification down to a specific detail, or a set of criteria to trigger a notification. See :ref:`ug_custom_notifications` for more detail on configuring custom notifications. The following sections will give as much detail as possible on each type of notification.
AWS SNS
-------
The AWS SNS (https://aws.amazon.com/sns/) notification type supports sending messages to an SNS topic.
You must provide the following details to set up an SNS notification; a minimal configuration sketch follows the list:
- AWS Region
- AWS Access Key ID
- AWS Secret Access Key
- AWS SNS Topic ARN
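The field names below follow those used elsewhere in this changeset for the SNS notification type; the region, ARN, and credential values are placeholders, and the key, secret, and session token can typically be omitted when the environment already provides AWS credentials:

::

    notification_configuration = {
        "aws_region": "us-east-1",
        "aws_access_key_id": "AKIAXXXXXXXXXXXXXXXX",  # optional
        "aws_secret_access_key": "<secret>",          # optional
        "aws_session_token": "",                      # optional temporary credentials
        "sns_topic_arn": "arn:aws:sns:us-east-1:123456789012:awx-notifications",
    }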
Email
-------

View File

@@ -0,0 +1,517 @@
.. _rbac-ug:
Role-Based Access Controls
==========================
.. index::
single: role-based access controls
pair: security; RBAC
Role-Based Access Controls (RBAC) are built into AWX and allow administrators to delegate access to server inventories, organizations, and more. Administrators can also centralize the management of various credentials, allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow AWX to help you increase security and streamline management.
This chapter has two parts: the latest RBAC model (:ref:`rbac-dab-ug`) and the :ref:`existing RBAC <rbac-legacy-ug>` implementation.
.. _rbac-dab-ug:
DAB RBAC
---------
.. index::
single: DAB
single: roles
pair: DAB; RBAC
This section describes the latest changes to RBAC, which use the ``django-ansible-base`` (DAB) library to enhance existing roles, provide a unified model that is compatible with platform (enterprise) components, and allow creation of custom roles. The backend internals have changed, but those changes are not yet reflected in the AWX UI. The backend maintains a compatibility layer so that the “old” roles still exist in the API temporarily, until a fully functional, compatible UI replaces them.
New functionality, specifically custom roles, is available through direct API clients or the API browser, but the presentation in the AWX UI might not reflect the changes made in the API.
The new DAB version of RBAC allows creation of custom roles via the ``/api/v2/role_definitions/`` endpoint. These roles can only be assigned using the new endpoints, ``/api/v2/role_user_assignments/`` and ``/api/v2/role_team_assignments/``.
If you do not want to allow custom roles, you can change the setting ``ANSIBLE_BASE_ALLOW_CUSTOM_ROLES`` to ``False``. This is still a file-based setting for now.
New “add” permissions are a major highlight of this change. You could create a custom organization role that allows users to create all (or some) types of resources, and apply it to a particular organization. So instead of allowing a user to edit all projects, they can create a new project, and after creating it they automatically receive the admin role for just the objects they created.
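For illustration, a hedged sketch (using the ``requests`` library) of creating a custom role definition and assigning it to a user through these endpoints. The ``content_type`` and ``permissions`` values are placeholders; use ``OPTIONS /api/v2/role_definitions/`` to discover the exact choices. The assignment payload mirrors the example later in this chapter:

::

    import requests

    awx = "https://awx.example.com"  # placeholder controller URL
    auth = ("admin", "password")     # placeholder credentials

    # Create a custom role definition (content_type and permissions are illustrative).
    role = requests.post(
        f"{awx}/api/v2/role_definitions/",
        auth=auth,
        json={
            "name": "Organization Project Creator",
            "description": "May create projects in the organization",
            "content_type": "shared.organization",
            "permissions": ["awx.add_project", "awx.view_project"],
        },
    ).json()

    # Assign the new role to user 25 on organization 10 (IDs are placeholders).
    requests.post(
        f"{awx}/api/v2/role_user_assignments/",
        auth=auth,
        json={"user": 25, "role_definition": role["id"], "object_id": "10"},
    )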
Resource access for teams
~~~~~~~~~~~~~~~~~~~~~~~~~~
This section provides a reference for managing team roles within individual resources as shown in the new UI and the corresponding API calls.
Access the resource's **Team Access** tab to manage the team roles.
.. image:: ../common/images/rbac_jt_team_access.png
To obtain a list of team role assignments from the API:
::
GET /api/v2/role_team_assignments/?object_id=<template_id>&content_type__model=jobtemplate
The columns are arranged so that the team name appears in the first column. The role name is under ``summary_fields.role_definition.name``.
To revoke a role assignment for a team in the API:
::
DELETE /api/v2/role_team_assignments/<role_id_from_list_API_above>/
Add roles
^^^^^^^^^^
Clicking the **Add roles** button from the **Team Access** tab opens the **Add roles** wizard, where you can select the teams to which you want to add roles.
.. image:: ../common/images/rbac_team_access_add-roles.png
To list the teams from the service endpoint:
::
GET /api/v2/teams
The next step of the wizard in the controller UI is to apply roles to the selected team(s).
.. image:: ../common/images/rbac_team_access_apply-roles.png
To list available role definitions for the selected resource type in the API, issue the following, but replace ``content_type`` below to match the resource type:
::
GET /api/v2/role_definitions/?content_type__model=jobtemplate
Finally, review your selections and click **Save** to save your changes.
.. image:: ../common/images/rbac_team_access_add-roles-review.png
To assign roles to selected teams in the API, you must assign a single role to each team separately, referencing the team ID and the ID of the target resource as the ``object_id``.
Make a POST request to this resource (``jobtemplate.id`` in this example):
::
POST /api/v2/role_team_assignments/
The following shows an example of the payload sent for the POST request made above:
::
{"team": 25, "role_definition": 4, "object_id": "10"}
When changes are successfully applied via the UI, a message displays to confirm the changes:
.. image:: ../common/images/rbac_team_access_add-roles-success.png
Resource access for users
~~~~~~~~~~~~~~~~~~~~~~~~~~
This section provides a reference for managing user roles within individual resources as shown in the new UI and the corresponding API calls.
Access the resource's **User Access** tab to manage the user roles.
.. image:: ../common/images/rbac_jt_user_access.png
To obtain a list of user role assignments from the API:
::
GET /api/v2/role_user_assignments/?object_id=<template_id>&content_type__model=jobtemplate
The columns are arranged so that the user name appears in the first column. The role name is under ``summary_fields.role_definition.name``.
To revoke a role assignment for a user in the API:
::
DELETE /api/v2/role_user_assignments/<role_id_from_list_API_above>/
Add roles
^^^^^^^^^^
Clicking the **Add roles** button from the **User Access** tab opens the **Add roles** wizard, where you can select the users to whom you want to add roles.
.. image:: ../common/images/rbac_user_access_add-roles.png
To list the users from the service endpoint:
::
GET /api/v2/users
The next step of the wizard in the controller UI is to apply roles to the selected user(s).
.. image:: ../common/images/rbac_user_access_apply-roles.png
To list available role definitions for the selected resource type in the API, issue the following, but replace ``content_type`` below to match the resource type:
::
GET /api/v2/role_definitions/?content_type__model=jobtemplate
Finally, review your selections and click **Save** to save your changes.
.. image:: ../common/images/rbac_user_access_add-roles-review.png
To assign roles to selected users in the API, you must assign a single role to each user separately, referencing the user ID and the ID of the target resource as the ``object_id``.
Make a POST request to this resource (``jobtemplate.id`` in this example):
::
POST /api/v2/role_user_assignments/
The following shows an example of the payload sent for the POST request made above:
::
{"user": 25, "role_definition": 4, "object_id": "10"}
When changes are successfully applied via the UI, a message displays to confirm the changes:
.. image:: ../common/images/rbac_team_access_add-roles-success.png
Custom roles
~~~~~~~~~~~~~
.. index::
single: DAB
single: custom roles
pair: custom; roles
In the DAB RBAC model, Superusers have the ability to create, modify, and delete custom roles.
To create a custom role, click the **Create role** button from the **Roles** resource in the UI, and provide the details of the new role:
- **Name**: Required
- **Description**: Enter an arbitrary description as appropriate (optional)
- **Resource Type**: Required. Select the resource type from the drop-down menu (only one resource type per role is allowed). This corresponds to ``content_type``; see ``OPTIONS /api/v2/role_definitions`` for the available choices.
- Select permissions based on the selected resource type; the available permissions depend on the content type.
Modifying a custom role only allows you to change the permissions but does not allow changes to the content type.
To delete a custom role:
::
DELETE /api/v2/role_definitions/:id
.. _rbac-legacy-ug:
Legacy RBAC model
------------------
.. index::
single: roles
pair: legacy; RBAC
As the name implies, RBAC is role-based, and roles contain a list of permissions. This is a domain-centric concept: organization-level roles can grant a permission (like ``update_project``) to everything in that domain, including all projects in that organization.
There are a few main concepts that you should become familiar with regarding AWX's RBAC design--roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with "descendant" roles.
A role is essentially a list of permissions. Users are granted access to these capabilities and AWX's resources through the roles to which they are assigned or through roles inherited through the role hierarchy.
Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles.
Role Hierarchy and Access Inheritance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Imagine that you have an organization named "SomeCompany" and want to allow two people, "Josie" and "Carter", access to manage all the settings associated with that organization. You should make both people members of the organization's ``admin_role``.
|user-role-relationship|
.. |user-role-relationship| image:: ../common/images/user-role-relationship.png
Often, you will have many Roles in a system and you will want some roles to include all of the capabilities of other roles. For example, you may want a System Administrator to have access to everything that an Organization Administrator has access to, who has everything that a Project Administrator has access to, and so on.
This concept is referred to as the 'Role Hierarchy':
- Parent roles get all capabilities bestowed on any child roles
- Members of roles automatically get all capabilities for the role they are a member of, as well as any child roles.
The Role Hierarchy is represented by allowing Roles to have "Parent Roles". Any capability that a Role has is implicitly granted to any parent roles (or parents of those parents, and so on).
|rbac-role-hierarchy|
.. |rbac-role-hierarchy| image:: ../common/images/rbac-role-hierarchy.png
Often, you will have many Roles in a system and you will want some roles to include all of the capabilities of other roles. For example, you may want a System Administrator to have access to everything that an Organization Administrator has access to, who has everything that a Project Administrator has access to, and so on. We refer to this concept as the 'Role Hierarchy' and it is represented by allowing Roles to have "Parent Roles". Any capability that a Role has is implicitly granted to any parent roles (or parents of those parents, and so on). Of course Roles can have more than one parent, and capabilities are implicitly granted to all parents.
|rbac-heirarchy-morecomplex|
.. |rbac-heirarchy-morecomplex| image:: ../common/images/rbac-heirarchy-morecomplex.png
RBAC controls also give you the capability to explicitly permit User and Teams of Users to run playbooks against certain sets of hosts. Users and teams are restricted to just the sets of playbooks and hosts to which they are granted capabilities. And, with AWX, you can create or import as many Users and Teams as you require--create users and teams manually or import them from LDAP or Active Directory.
RBACs are easiest to think of in terms of who or what can see, change, or delete an "object" for which a specific capability is being determined.
Applying RBAC
~~~~~~~~~~~~~~~~~
The following sections cover how to apply AWX's RBAC system in your environment.
Editing Users
^^^^^^^^^^^^^^^
When editing a user, an AWX system administrator may specify the user as being either a *System Administrator* (also referred to as the Superuser) or a *System Auditor*.
- System administrators implicitly inherit all capabilities for all objects (read/write/execute) within the AWX environment.
- System Auditors implicitly inherit the read-only capability for all objects within the AWX environment.
Editing Organizations
^^^^^^^^^^^^^^^^^^^^^^^^
When editing an organization, system administrators may specify the following roles:
- One or more users as organization administrators
- One or more users as organization auditors
- And one or more users (or teams) as organization members
Users/teams that are members of an organization can view their organization administrator.
Users who are organization administrators implicitly inherit all capabilities for all objects within that AWX organization.
Users who are organization auditors implicitly inherit the read-only capability for all objects within that AWX organization.
Editing Projects in an Organization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When editing a project in an organization for which they are the administrator, system administrators and organization administrators may specify:
- One or more users/teams that are project administrators
- One or more users/teams that are project members
- And one or more users/teams that may update the project from SCM, from among the users/teams that are members of that organization.
Users who are members of a project can view their project administrators.
Project administrators implicitly inherit the capability to update the project from SCM.
Administrators can also specify one or more users/teams (from those that are members of that project) that can use that project in a job template.
Creating Inventories and Credentials within an Organization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All access that is granted to use, read, or write credentials is handled through roles, which use AWX's RBAC system to grant ownership, auditor, or usage roles.
System administrators and organization administrators may create inventories and credentials within organizations under their administrative capabilities.
Whether editing an inventory or a credential, System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) to be granted the usage capability for that inventory or credential.
System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) that have the capability to update (dynamically or manually) an inventory. Administrators can also execute ad hoc commands for an inventory.
Editing Job Templates
^^^^^^^^^^^^^^^^^^^^^^
System administrators, organization administrators, and project administrators, within a project under their administrative capabilities, may create and modify new job templates for that project.
When editing a job template, administrators (AWX, organization, and project) can select among the inventory and credentials in the organization for which they have usage capabilities or they may leave those fields blank so that they will be selected at runtime.
Additionally, they may specify one or more users/teams (from those that are members of that project) that have execution capabilities for that job template. The execution capability is valid regardless of any explicit capabilities the user/team may have been granted against the inventory or credential specified in the job template.
User View
^^^^^^^^^^^^^
A user can:
- See any organization or project for which they are a member
- Create their own credential objects which only belong to them
- See and execute any job template for which they have been granted execution capabilities
If a job template that a user has been granted execution capabilities on does not specify an inventory or credential, the user will be prompted at run-time to select among the inventory and credentials in the organization they own or have been granted usage capabilities.
Users that are job template administrators can make changes to job templates; however, to change the inventory, project, playbook, credentials, or instance groups used in the job template, the user must also have the "Use" role for the project and inventory currently being used or being set.
.. _rbac-ug-roles:
Roles
~~~~~~~~~~~~~
All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource.
Built-in roles
^^^^^^^^^^^^^^
The following table lists the RBAC system roles and a brief description of how that role is defined with regard to privileges in AWX.
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| System Role | What it can do |
+=======================================================================+==========================================================================================+
| System Administrator - System wide singleton | Manages all aspects of the system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| System Auditor - System wide singleton | Views all aspects of the system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Ad Hoc Role - Inventory | Runs ad hoc commands on an Inventory |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Admin Role - Organizations, Teams, Inventory, Projects, Job Templates | Manages all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Auditor Role - All | Views all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Execute Role - Job Templates | Runs assigned Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Member Role - Organization, Team | User is a member of a defined Organization or Team |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Read Role - Organizations, Teams, Inventory, Projects, Job Templates | Views all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Update Role - Project | Updates the Project from the configured source control management system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Update Role - Inventory | Updates the Inventory using the cloud source update system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Owner Role - Credential | Owns and manages all aspects of this Credential |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Use Role - Credential, Inventory, Project, IGs, CGs | Uses the Credential, Inventory, Project, IGs, or CGs in a Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
A Singleton Role is a special role that grants system-wide permissions. AWX currently provides two built-in Singleton Roles but the ability to create or customize a Singleton Role is not supported at this time.
Common Team Roles - "Personas"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Support personnel typically work on ensuring that AWX is available and manage it in a way that balances supportability and ease of use. Often, support will assign “Organization Owner/Admin” to users in order to allow them to create a new Organization and grant members of their team the access they need. This minimizes the effort spent supporting individual users and keeps the focus on maintaining uptime of the service and assisting users of AWX.
Below are some common roles managed by the AWX Organization:
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | System Role | | Common User | | Description |
| | (for Organizations) | | Roles | |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Owner | | Team Lead - | | This user has the ability to control access for other users in their organization. |
| | | Technical Lead | | They can add/remove and grant users specific access to projects, inventories, and job templates. |
| | | | This user also has the ability to create/remove/modify any aspect of an organizations projects, |
| | | | templates, inventories, teams, and credentials. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Auditor | | Security Engineer - | | This account can view all aspects of the organization in read-only mode. |
| | | Project Manager | | This may be good for a user who checks in and maintains compliance. |
| | | | This might also be a good role for a service account who manages or |
| | | | ships job data from AWX to some other data collector. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | All other users | | These users by default as an organization member do not receive any access to any aspect |
| | Team | | | of the organization. In order to grant them access the respective organization owner needs |
| | | | to add them to their respective team and grant them Admin, Execute, Use, Update, Ad-hoc |
| | | | permissions to each component of the organizations projects, inventories, and job templates. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Power users - | | Organization Owners can provide “admin” through the team interface, over any component |
| | Team “Owner” | | Lead Developer | | of their organization including projects, inventories, and job templates. These users are able |
| | | | to modify and utilize the respective component given access. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Developers - | | This will be the most common and allows the organization member the ability to execute |
| | Team “Execute” | | Engineers | | job templates and read permission to the specific components. This permission applies to job templates. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Developers - | | This permission applies to an organizations credentials, inventories, and projects. |
| | Team “Use” | | Engineers | | This permission allows the ability for a user to use the respective component within their job template.|
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Developers - | | This permission applies to projects. Allows the user to be able to run an SCM update on a project. |
| | Team “Update” | | Engineers | |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
Function of roles: editing and creating
------------------------------------------
Organization “resource role” functionality is specific to a certain resource type, such as workflows. Being a member of such a role usually provides two types of permissions. In the case of workflows, a user who is given the "workflow admin role" for the organization "Default":
- can create new workflows in the organization "Default"
- can edit all workflows in the "Default" organization
One exception is job templates, where having the role is independent of creation permission (more details in its own section).
Independence of resource roles and organization membership roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Resource-specific organization roles are independent of the organization roles of admin and member. Having the "workflow admin role" for the "Default" organization will not allow a user to view all users in the organization, but having a "member" role in the "Default" organization will. The two types of roles are delegated independently of each other.
Necessary permissions to edit job templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Users can edit fields not impacting job runs (non-sensitive fields) with a Job Template admin role alone. However, to edit fields that impact job runs in a job template, a user needs the following:
- **admin** role to the job template and container groups
- **use** role to related project
- **use** role to related inventory
- **use** role to related instance groups
An "organization job template admin" role was introduced, but having this role isn't sufficient by itself to edit a job template within the organization if the user does not have use role to the project / inventory / instance group or an admin role to the container group that a job template uses.
In order to delegate *full* job template control (within an organization) to a user or team, you will need to grant the team or user all 3 organization-level roles:
- job template admin
- project admin
- inventory admin
This will ensure that the user (or all users who are members of the team with these roles) have full access to modify job templates in the organization. If a job template uses an inventory or project from another organization, the user with these organization roles may still not have permission to modify that job template. For clarity of managing permissions, it is best-practice to not mix projects / inventories from different organizations.
RBAC permissions
^^^^^^^^^^^^^^^^^^^
Each role should have a content object, for instance, the org admin role has a content object of the org. To delegate a role, you need admin permission to the content object, with some exceptions that would result in you being able to reset a user's password.
**Parent** is the organization.
**Allow** is what this new permission will explicitly allow.
**Scope** is the parent resource that this new role will be created on. Example: ``Organization.project_create_role``.
An assumption is being made that the creator of the resource should be given the admin role for that resource. If there are any instances where resource creation does not also imply resource administration, they will be explicitly called out.
Here are the rules associated with each admin type:
**Project Admin**
- Allow: Create, read, update, delete any project
- Scope: Organization
- User Interface: *Project Add Screen - Organizations*
**Inventory Admin**
- Parent: Org admin
- Allow: Create, read, update, delete any inventory
- Scope: Organization
- User Interface: *Inventory Add Screen - Organizations*
.. note::
As it is with the **Use** role, if you give a user Project Admin and Inventory Admin, it allows them to create Job Templates (not workflows) for your organization.
**Credential Admin**
- Parent: Org admin
- Allow: Create, read, update, delete shared credentials
- Scope: Organization
- User Interface: *Credential Add Screen - Organizations*
**Notification Admin**
- Parent: Org admin
- Allow: Assignment of notifications
- Scope: Organization
**Workflow Admin**
- Parent: Org admin
- Allow: Create a workflow
- Scope: Organization
**Org Execute**
- Parent: Org admin
- Allow: Executing JTs and WFJTs
- Scope: Organization
The following is a sample scenario showing an organization with its roles and which resource(s) each have access to:
.. image:: ../common/images/rbac-multiple-resources-scenario.png

View File

@@ -39,320 +39,4 @@ Isolation functionality and variables
pair: isolation; functionality
pair: isolation; variables
.. include:: ../common/isolation_variables.rst
.. _rbac-ug:
Role-Based Access Controls
-----------------------------
.. index::
single: role-based access controls
pair: security; RBAC
Role-Based Access Controls (RBAC) are built into AWX and allow administrators to delegate access to server inventories, organizations, and more. Administrators can also centralize the management of various credentials, allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow AWX to help you increase security and streamline management.
RBACs are easiest to think of in terms of Roles which define precisely who or what can see, change, or delete an "object" for which a specific capability is being set. RBAC is the practice of granting roles to users or teams.
There are a few main concepts that you should become familiar with regarding AWX's RBAC design--roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with "descendant" roles.
A role is essentially a collection of capabilities. Users are granted access to these capabilities and AWX's resources through the roles to which they are assigned or through roles inherited through the role hierarchy.
Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles.
Role Hierarchy and Access Inheritance
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Imagine that you have an organization named "SomeCompany" and want to allow two people, "Josie" and "Carter", access to manage all the settings associated with that organization. You should make both people members of the organization's ``admin_role``.
|user-role-relationship|
.. |user-role-relationship| image:: ../common/images/user-role-relationship.png
Often, you will have many Roles in a system and you will want some roles to include all of the capabilities of other roles. For example, you may want a System Administrator to have access to everything that an Organization Administrator has access to, who has everything that a Project Administrator has access to, and so on.
This concept is referred to as the 'Role Hierarchy':
- Parent roles get all capabilities bestowed on any child roles
- Members of roles automatically get all capabilities for the role they are a member of, as well as any child roles.
The Role Hierarchy is represented by allowing Roles to have "Parent Roles". Any capability that a Role has is implicitly granted to any parent roles (or parents of those parents, and so on).
|rbac-role-hierarchy|
.. |rbac-role-hierarchy| image:: ../common/images/rbac-role-hierarchy.png
Often, you will have many Roles in a system and you will want some roles to include all of the capabilities of other roles. For example, you may want a System Administrator to have access to everything that an Organization Administrator has access to, who has everything that a Project Administrator has access to, and so on. We refer to this concept as the 'Role Hierarchy' and it is represented by allowing Roles to have "Parent Roles". Any capability that a Role has is implicitly granted to any parent roles (or parents of those parents, and so on). Of course Roles can have more than one parent, and capabilities are implicitly granted to all parents.
|rbac-heirarchy-morecomplex|
.. |rbac-heirarchy-morecomplex| image:: ../common/images/rbac-heirarchy-morecomplex.png
RBAC controls also give you the capability to explicitly permit User and Teams of Users to run playbooks against certain sets of hosts. Users and teams are restricted to just the sets of playbooks and hosts to which they are granted capabilities. And, with AWX, you can create or import as many Users and Teams as you require--create users and teams manually or import them from LDAP or Active Directory.
RBACs are easiest to think of in terms of who or what can see, change, or delete an "object" for which a specific capability is being determined.
Applying RBAC
~~~~~~~~~~~~~~~~~
The following sections cover how to apply AWX's RBAC system in your environment.
Editing Users
^^^^^^^^^^^^^^^
When editing a user, an AWX system administrator may specify the user as being either a *System Administrator* (also referred to as the Superuser) or a *System Auditor*.
- System administrators implicitly inherit all capabilities for all objects (read/write/execute) within the AWX environment.
- System Auditors implicitly inherit the read-only capability for all objects within the AWX environment.
Editing Organizations
^^^^^^^^^^^^^^^^^^^^^^^^
When editing an organization, system administrators may specify the following roles:
- One or more users as organization administrators
- One or more users as organization auditors
- And one or more users (or teams) as organization members
Users/teams that are members of an organization can view their organization administrator.
Users who are organization administrators implicitly inherit all capabilities for all objects within that AWX organization.
Users who are organization auditors implicitly inherit the read-only capability for all objects within that AWX organization.
Editing Projects in an Organization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When editing a project in an organization for which they are the administrator, system administrators and organization administrators may specify:
- One or more users/teams that are project administrators
- One or more users/teams that are project members
- And one or more users/teams that may update the project from SCM, from among the users/teams that are members of that organization.
Users who are members of a project can view their project administrators.
Project administrators implicitly inherit the capability to update the project from SCM.
Administrators can also specify one or more users/teams (from those that are members of that project) that can use that project in a job template.
Creating Inventories and Credentials within an Organization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All access that is granted to use, read, or write credentials is handled through roles, which use AWX's RBAC system to grant ownership, auditor, or usage roles.
System administrators and organization administrators may create inventories and credentials within organizations under their administrative capabilities.
Whether editing an inventory or a credential, System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) to be granted the usage capability for that inventory or credential.
System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) that have the capability to update (dynamically or manually) an inventory. Administrators can also execute ad hoc commands for an inventory.
Editing Job Templates
^^^^^^^^^^^^^^^^^^^^^^
System administrators, organization administrators, and project administrators, within a project under their administrative capabilities, may create and modify new job templates for that project.
When editing a job template, administrators (AWX, organization, and project) can select among the inventory and credentials in the organization for which they have usage capabilities or they may leave those fields blank so that they will be selected at runtime.
Additionally, they may specify one or more users/teams (from those that are members of that project) that have execution capabilities for that job template. The execution capability is valid regardless of any explicit capabilities the user/team may have been granted against the inventory or credential specified in the job template.
User View
^^^^^^^^^^^^^
A user can:
- See any organization or project for which they are a member
- Create their own credential objects which only belong to them
- See and execute any job template for which they have been granted execution capabilities
If a job template that a user has been granted execution capabilities on does not specify an inventory or credential, the user will be prompted at run-time to select among the inventory and credentials in the organization they own or have been granted usage capabilities.
Users that are job template administrators can make changes to job templates; however, to change the inventory, project, playbook, credentials, or instance groups used in the job template, the user must also have the "Use" role for the project and inventory currently being used or being set.
.. _rbac-ug-roles:
Roles
~~~~~~~~~~~~~
All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource.
Built-in roles
^^^^^^^^^^^^^^
The following table lists the RBAC system roles and a brief description of how that role is defined with regard to privileges in AWX.
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| System Role | What it can do |
+=======================================================================+==========================================================================================+
| System Administrator - System wide singleton | Manages all aspects of the system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| System Auditor - System wide singleton | Views all aspects of the system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Ad Hoc Role - Inventory | Runs ad hoc commands on an Inventory |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Admin Role - Organizations, Teams, Inventory, Projects, Job Templates | Manages all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Auditor Role - All | Views all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Execute Role - Job Templates | Runs assigned Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Member Role - Organization, Team | User is a member of a defined Organization or Team |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Read Role - Organizations, Teams, Inventory, Projects, Job Templates | Views all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Update Role - Project | Updates the Project from the configured source control management system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Update Role - Inventory | Updates the Inventory using the cloud source update system |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Owner Role - Credential | Owns and manages all aspects of this Credential |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
| Use Role - Credential, Inventory, Project, IGs, CGs | Uses the Credential, Inventory, Project, IGs, or CGs in a Job Template |
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
A Singleton Role is a special role that grants system-wide permissions. AWX currently provides two built-in Singleton Roles but the ability to create or customize a Singleton Role is not supported at this time.
Common Team Roles - "Personas"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Support personnel typically work on ensuring that AWX is available and manage it in a way that balances supportability and ease of use. Often, support will assign “Organization Owner/Admin” to users in order to allow them to create a new Organization and grant members of their team the access they need. This minimizes the effort spent supporting individual users and keeps the focus on maintaining uptime of the service and assisting users of AWX.
Below are some common roles managed by the AWX Organization:
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | System Role | | Common User | | Description |
| | (for Organizations) | | Roles | |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Owner | | Team Lead - | | This user has the ability to control access for other users in their organization. |
| | | Technical Lead | | They can add/remove and grant users specific access to projects, inventories, and job templates. |
| | | | This user also has the ability to create/remove/modify any aspect of an organizations projects, |
| | | | templates, inventories, teams, and credentials. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Auditor | | Security Engineer - | | This account can view all aspects of the organization in read-only mode. |
| | | Project Manager | | This may be good for a user who checks in and maintains compliance. |
| | | | This might also be a good role for a service account who manages or |
| | | | ships job data from AWX to some other data collector. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | All other users | | These users by default as an organization member do not receive any access to any aspect |
| | Team | | | of the organization. In order to grant them access the respective organization owner needs |
| | | | to add them to their respective team and grant them Admin, Execute, Use, Update, Ad-hoc |
| | | | permissions to each component of the organizations projects, inventories, and job templates. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Power users - | | Organization Owners can provide “admin” through the team interface, over any component |
| | Team “Owner” | | Lead Developer | | of their organization including projects, inventories, and job templates. These users are able |
| | | | to modify and utilize the respective component given access. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Developers - | | This will be the most common and allows the organization member the ability to execute |
| | Team “Execute” | | Engineers | | job templates and read permission to the specific components. This permission applies to job templates. |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Developers - | | This permission applies to an organizations credentials, inventories, and projects. |
| | Team “Use” | | Engineers | | This permission allows the ability for a user to use the respective component within their job template.|
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
| | Member - | | Developers - | | This permission applies to projects. Allows the user to be able to run an SCM update on a project. |
| | Team “Update” | | Engineers | |
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
Function of roles: editing and creating
------------------------------------------
Organization “resource roles” are specific to a certain resource type, such as workflows. Being a member of such a role usually provides two types of permissions. In the case of workflows, a user who is given a "workflow admin role" for the organization "Default":
- can create new workflows in the organization "Default"
- can edit all workflows in the "Default" organization
One exception is job templates, where having the role is independent of creation permission (more details in its own section below).
Independence of resource roles and organization membership roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Resource-specific organization roles are independent of the organization roles of admin and member. Having the "workflow admin role" for the "Default" organization will not allow a user to view all users in the organization, but having a "member" role in the "Default" organization will. The two types of roles are delegated independently of each other.
Necessary permissions to edit job templates
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Users can edit fields not impacting job runs (non-sensitive fields) with a Job Template admin role alone. However, to edit fields that impact job runs in a job template, a user needs the following:
- **admin** role to the job template and container groups
- **use** role to related project
- **use** role to related inventory
- **use** role to related instance groups
An "organization job template admin" role was introduced, but having this role isn't sufficient by itself to edit a job template within the organization if the user does not have use role to the project / inventory / instance group or an admin role to the container group that a job template uses.
In order to delegate *full* job template control (within an organization) to a user or team, you will need grant the team or user all 3 organization-level roles:
- job template admin
- project admin
- inventory admin
This ensures that the user (or all users who are members of the team with these roles) has full access to modify job templates in the organization. If a job template uses an inventory or project from another organization, a user with these organization roles may still not have permission to modify that job template. For clarity when managing permissions, it is best practice not to mix projects and inventories from different organizations.
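As a concrete illustration, these organization-level grants can also be made through the AWX REST API. The sketch below is a minimal example, assuming an AWX host at ``https://awx.example.com``, an OAuth2 token with sufficient privileges, and example organization/team names; verify the exact role names and endpoints against your own ``/api/v2/`` browsable API, since they can vary by version.

.. code-block:: python

   import requests

   # All of these values are placeholders for illustration.
   AWX_URL = "https://awx.example.com"
   HEADERS = {"Authorization": "Bearer REPLACE_WITH_TOKEN"}

   def get_one(endpoint, **params):
       """Return the single matching result from a paginated list endpoint."""
       resp = requests.get(f"{AWX_URL}{endpoint}", headers=HEADERS, params=params)
       resp.raise_for_status()
       results = resp.json()["results"]
       if len(results) != 1:
           raise RuntimeError(f"expected exactly one match at {endpoint} for {params}")
       return results[0]

   org = get_one("/api/v2/organizations/", name="Default")
   team = get_one("/api/v2/teams/", name="Dev Team", organization=org["id"])

   # The organization exposes its delegatable roles under object_roles/.
   roles = requests.get(
       f"{AWX_URL}/api/v2/organizations/{org['id']}/object_roles/",
       headers=HEADERS,
       params={"page_size": 200},
   ).json()["results"]

   # Grant the three org-level roles discussed above by POSTing each role id
   # to the team's roles/ list.
   for role in roles:
       if role["name"] in {"Job Template Admin", "Project Admin", "Inventory Admin"}:
           resp = requests.post(
               f"{AWX_URL}/api/v2/teams/{team['id']}/roles/",
               headers=HEADERS,
               json={"id": role["id"]},
           )
           resp.raise_for_status()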
RBAC permissions
^^^^^^^^^^^^^^^^^^^
Each role has a content object; for instance, the org admin role's content object is the organization. To delegate a role, you need admin permission to the content object, with some exceptions that would result in you being able to reset a user's password.
**Parent** is the organization.
**Allow** is what this new permission will explicitly allow.
**Scope** is the parent resource that this new role will be created on. Example: ``Organization.project_create_role``.
An assumption is being made that the creator of the resource should be given the admin role for that resource. If there are any instances where resource creation does not also imply resource administration, they will be explicitly called out.
Here are the rules associated with each admin type:
**Project Admin**
- Allow: Create, read, update, delete any project
- Scope: Organization
- User Interface: *Project Add Screen - Organizations*
**Inventory Admin**
- Parent: Org admin
- Allow: Create, read, update, delete any inventory
- Scope: Organization
- User Interface: *Inventory Add Screen - Organizations*
.. note::

   As with the **Use** role, giving a user both Project Admin and Inventory Admin allows them to create Job Templates (not workflows) for your organization.
**Credential Admin**
- Parent: Org admin
- Allow: Create, read, update, delete shared credentials
- Scope: Organization
- User Interface: *Credential Add Screen - Organizations*
**Notification Admin**
- Parent: Org admin
- Allow: Assignment of notifications
- Scope: Organization
**Workflow Admin**
- Parent: Org admin
- Allow: Create a workflow
- Scope: Organization
**Org Execute**
- Parent: Org admin
- Allow: Executing JTs and WFJTs
- Scope: Organization
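To see which of these admin-type roles are actually exposed on an organization in your installation, you can list the organization's object roles. Below is a minimal sketch along the same lines as the earlier example (the host, token, and organization id are placeholders):

.. code-block:: python

   import requests

   AWX_URL = "https://awx.example.com"   # placeholder host
   HEADERS = {"Authorization": "Bearer REPLACE_WITH_TOKEN"}
   ORG_ID = 1                            # placeholder organization id

   resp = requests.get(
       f"{AWX_URL}/api/v2/organizations/{ORG_ID}/object_roles/",
       headers=HEADERS,
       params={"page_size": 200},
   )
   resp.raise_for_status()

   # Prints entries such as "Project Admin", "Inventory Admin", "Execute", etc.,
   # along with the description AWX stores for each role.
   for role in resp.json()["results"]:
       print(f"{role['name']:<25} {role['description']}")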
The following is a sample scenario showing an organization with its roles and which resource(s) each have access to:
.. image:: ../common/images/rbac-multiple-resources-scenario.png
.. include:: ../common/isolation_variables.rst


@@ -70,6 +70,7 @@ Once a Notification Template has been created, its configuration can be tested b
The currently-defined Notification Types are:
* AWS SNS
* Email
* Slack
* Mattermost
@@ -82,6 +83,10 @@ The currently-defined Notification Types are:
Each of these has its own configuration and behavioral semantics, and testing them may need to be approached in different ways. The following sections will give as much detail as possible.
## AWS SNS
The AWS SNS notification type supports sending messages into an SNS topic.
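As an illustrative aside, you can confirm that a topic and set of credentials work before wiring them into a notification template by publishing a test message with boto3. This is only a sketch: the topic ARN and region are placeholders, and the payload AWX actually sends depends on the notification configuration.

```python
import json

import boto3

# Placeholder ARN/region; substitute the topic configured on the notification template.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:awx-notifications"

# With no explicit key/secret/session token, boto3 falls back to its default
# credential chain (environment variables, shared config, instance/pod role, ...).
sns = boto3.client("sns", region_name="us-east-1")

sns.publish(
    TopicArn=TOPIC_ARN,
    Message=json.dumps({"body": "test message for the AWX SNS notification topic"}),
)
```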
## Email
The email notification type supports a wide variety of SMTP servers and has support for SSL/TLS connections and timeouts.
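For reference, the kind of SMTP session the email notifier negotiates (TLS upgrade, authentication, a connection timeout) looks roughly like the following plain-Python sketch. Hostnames, credentials, and addresses are placeholders, and this is not AWX's own implementation:

```python
import smtplib
from email.message import EmailMessage

# All addresses and credentials below are placeholders.
msg = EmailMessage()
msg["Subject"] = "AWX notification test"
msg["From"] = "awx@example.com"
msg["To"] = "ops@example.com"
msg.set_content("Test message body")

# Connect with a timeout, upgrade the connection to TLS, authenticate, and send.
with smtplib.SMTP("smtp.example.com", 587, timeout=30) as server:
    server.starttls()
    server.login("awx@example.com", "REPLACE_WITH_PASSWORD")
    server.send_message(msg)
```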

licenses/deprecated.txt

@@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2017 Laurent LAPORTE
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,4 +1,3 @@
Copyright (C) 2016-present the asyncpg authors and contributors.
Apache License
Version 2.0, January 2004
@@ -188,8 +187,7 @@ Copyright (C) 2016-present the asyncpg authors and contributors.
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright (C) 2016-present the asyncpg authors and contributors
<see AUTHORS file>
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.

licenses/grpcio.txt

@@ -0,0 +1,610 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-----------------------------------------------------------
BSD 3-Clause License
Copyright 2016, Google Inc.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from this
software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
-----------------------------------------------------------
Mozilla Public License Version 2.0
==================================
1. Definitions
--------------
1.1. "Contributor"
means each individual or legal entity that creates, contributes to
the creation of, or owns Covered Software.
1.2. "Contributor Version"
means the combination of the Contributions of others (if any) used
by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution"
means Covered Software of a particular Contributor.
1.4. "Covered Software"
means Source Code Form to which the initial Contributor has attached
the notice in Exhibit A, the Executable Form of such Source Code
Form, and Modifications of such Source Code Form, in each case
including portions thereof.
1.5. "Incompatible With Secondary Licenses"
means
(a) that the initial Contributor has attached the notice described
in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of
version 1.1 or earlier of the License, but not also under the
terms of a Secondary License.
1.6. "Executable Form"
means any form of the work other than Source Code Form.
1.7. "Larger Work"
means a work that combines Covered Software with other material, in
a separate file or files, that is not Covered Software.
1.8. "License"
means this document.
1.9. "Licensable"
means having the right to grant, to the maximum extent possible,
whether at the time of the initial grant or subsequently, any and
all of the rights conveyed by this License.
1.10. "Modifications"
means any of the following:
(a) any file in Source Code Form that results from an addition to,
deletion from, or modification of the contents of Covered
Software; or
(b) any new file in Source Code Form that contains any Covered
Software.
1.11. "Patent Claims" of a Contributor
means any patent claim(s), including without limitation, method,
process, and apparatus claims, in any patent Licensable by such
Contributor that would be infringed, but for the grant of the
License, by the making, using, selling, offering for sale, having
made, import, or transfer of either its Contributions or its
Contributor Version.
1.12. "Secondary License"
means either the GNU General Public License, Version 2.0, the GNU
Lesser General Public License, Version 2.1, the GNU Affero General
Public License, Version 3.0, or any later versions of those
licenses.
1.13. "Source Code Form"
means the form of the work preferred for making modifications.
1.14. "You" (or "Your")
means an individual or a legal entity exercising rights under this
License. For legal entities, "You" includes any entity that
controls, is controlled by, or is under common control with You. For
purposes of this definition, "control" means (a) the power, direct
or indirect, to cause the direction or management of such entity,
whether by contract or otherwise, or (b) ownership of more than
fifty percent (50%) of the outstanding shares or beneficial
ownership of such entity.
2. License Grants and Conditions
--------------------------------
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free,
non-exclusive license:
(a) under intellectual property rights (other than patent or trademark)
Licensable by such Contributor to use, reproduce, make available,
modify, display, perform, distribute, and otherwise exploit its
Contributions, either on an unmodified basis, with Modifications, or
as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer
for sale, have made, import, and otherwise transfer either its
Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution
become effective for each Contribution on the date the Contributor first
distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under
this License. No additional rights or licenses will be implied from the
distribution or licensing of Covered Software under this License.
Notwithstanding Section 2.1(b) above, no patent license is granted by a
Contributor:
(a) for any code that a Contributor has removed from Covered Software;
or
(b) for infringements caused by: (i) Your and any other third party's
modifications of Covered Software, or (ii) the combination of its
Contributions with other software (except as part of its Contributor
Version); or
(c) under Patent Claims infringed by Covered Software in the absence of
its Contributions.
This License does not grant any rights in the trademarks, service marks,
or logos of any Contributor (except as may be necessary to comply with
the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to
distribute the Covered Software under a subsequent version of this
License (see Section 10.2) or under the terms of a Secondary License (if
permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its
Contributions are its original creation(s) or it has sufficient rights
to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under
applicable copyright doctrines of fair use, fair dealing, or other
equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
in Section 2.1.
3. Responsibilities
-------------------
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any
Modifications that You create or to which You contribute, must be under
the terms of this License. You must inform recipients that the Source
Code Form of the Covered Software is governed by the terms of this
License, and how they can obtain a copy of this License. You may not
attempt to alter or restrict the recipients' rights in the Source Code
Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code
Form, as described in Section 3.1, and You must inform recipients of
the Executable Form how they can obtain a copy of such Source Code
Form by reasonable means in a timely manner, at a charge no more
than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this
License, or sublicense it under different terms, provided that the
license for the Executable Form does not attempt to limit or alter
the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice,
provided that You also comply with the requirements of this License for
the Covered Software. If the Larger Work is a combination of Covered
Software with a work governed by one or more Secondary Licenses, and the
Covered Software is not Incompatible With Secondary Licenses, this
License permits You to additionally distribute such Covered Software
under the terms of such Secondary License(s), so that the recipient of
the Larger Work may, at their option, further distribute the Covered
Software under the terms of either this License or such Secondary
License(s).
3.4. Notices
You may not remove or alter the substance of any license notices
(including copyright notices, patent notices, disclaimers of warranty,
or limitations of liability) contained within the Source Code Form of
the Covered Software, except that You may alter any license notices to
the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support,
indemnity or liability obligations to one or more recipients of Covered
Software. However, You may do so only on Your own behalf, and not on
behalf of any Contributor. You must make it absolutely clear that any
such warranty, support, indemnity, or liability obligation is offered by
You alone, and You hereby agree to indemnify every Contributor for any
liability incurred by such Contributor as a result of warranty, support,
indemnity or liability terms You offer. You may include additional
disclaimers of warranty and limitations of liability specific to any
jurisdiction.
4. Inability to Comply Due to Statute or Regulation
---------------------------------------------------
If it is impossible for You to comply with any of the terms of this
License with respect to some or all of the Covered Software due to
statute, judicial order, or regulation then You must: (a) comply with
the terms of this License to the maximum extent possible; and (b)
describe the limitations and the code they affect. Such description must
be placed in a text file included with all distributions of the Covered
Software under this License. Except to the extent prohibited by statute
or regulation, such description must be sufficiently detailed for a
recipient of ordinary skill to be able to understand it.
5. Termination
--------------
5.1. The rights granted under this License will terminate automatically
if You fail to comply with any of its terms. However, if You become
compliant, then the rights granted under this License from a particular
Contributor are reinstated (a) provisionally, unless and until such
Contributor explicitly and finally terminates Your grants, and (b) on an
ongoing basis, if such Contributor fails to notify You of the
non-compliance by some reasonable means prior to 60 days after You have
come back into compliance. Moreover, Your grants from a particular
Contributor are reinstated on an ongoing basis if such Contributor
notifies You of the non-compliance by some reasonable means, this is the
first time You have received notice of non-compliance with this License
from such Contributor, and You become compliant prior to 30 days after
Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent
infringement claim (excluding declaratory judgment actions,
counter-claims, and cross-claims) alleging that a Contributor Version
directly or indirectly infringes any patent, then the rights granted to
You by any and all Contributors for the Covered Software under Section
2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all
end user license agreements (excluding distributors and resellers) which
have been validly granted by You or Your distributors under this License
prior to termination shall survive termination.
************************************************************************
* *
* 6. Disclaimer of Warranty *
* ------------------------- *
* *
* Covered Software is provided under this License on an "as is" *
* basis, without warranty of any kind, either expressed, implied, or *
* statutory, including, without limitation, warranties that the *
* Covered Software is free of defects, merchantable, fit for a *
* particular purpose or non-infringing. The entire risk as to the *
* quality and performance of the Covered Software is with You. *
* Should any Covered Software prove defective in any respect, You *
* (not any Contributor) assume the cost of any necessary servicing, *
* repair, or correction. This disclaimer of warranty constitutes an *
* essential part of this License. No use of any Covered Software is *
* authorized under this License except under this disclaimer. *
* *
************************************************************************
************************************************************************
* *
* 7. Limitation of Liability *
* -------------------------- *
* *
* Under no circumstances and under no legal theory, whether tort *
* (including negligence), contract, or otherwise, shall any *
* Contributor, or anyone who distributes Covered Software as *
* permitted above, be liable to You for any direct, indirect, *
* special, incidental, or consequential damages of any character *
* including, without limitation, damages for lost profits, loss of *
* goodwill, work stoppage, computer failure or malfunction, or any *
* and all other commercial damages or losses, even if such party *
* shall have been informed of the possibility of such damages. This *
* limitation of liability shall not apply to liability for death or *
* personal injury resulting from such party's negligence to the *
* extent applicable law prohibits such limitation. Some *
* jurisdictions do not allow the exclusion or limitation of *
* incidental or consequential damages, so this exclusion and *
* limitation may not apply to You. *
* *
************************************************************************
8. Litigation
-------------
Any litigation relating to this License may be brought only in the
courts of a jurisdiction where the defendant maintains its principal
place of business and such litigation shall be governed by laws of that
jurisdiction, without reference to its conflict-of-law provisions.
Nothing in this Section shall prevent a party's ability to bring
cross-claims or counter-claims.
9. Miscellaneous
----------------
This License represents the complete agreement concerning the subject
matter hereof. If any provision of this License is held to be
unenforceable, such provision shall be reformed only to the extent
necessary to make it enforceable. Any law or regulation which provides
that the language of a contract shall be construed against the drafter
shall not be used to construe this License against a Contributor.
10. Versions of the License
---------------------------
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section
10.3, no one other than the license steward has the right to modify or
publish new versions of this License. Each version will be given a
distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version
of the License under which You originally received the Covered Software,
or under the terms of any subsequent version published by the license
steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to
create a new license for such software, you may create and use a
modified version of this License if you rename the license and remove
any references to the name of the license steward (except to note that
such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary
Licenses
If You choose to distribute Source Code Form that is Incompatible With
Secondary Licenses under the terms of this version of the License, the
notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
-------------------------------------------
This Source Code Form is subject to the terms of the Mozilla Public
License, v. 2.0. If a copy of the MPL was not distributed with this
file, You can obtain one at http://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular
file, then You may include the notice in a location (such as a LICENSE
file in a relevant directory) where a recipient would be likely to look
for such a notice.
You may add additional accurate notices of copyright ownership.
Exhibit B - "Incompatible With Secondary Licenses" Notice
---------------------------------------------------------
This Source Code Form is "Incompatible With Secondary Licenses", as
defined by the Mozilla Public License, v. 2.0.


@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Some files were not shown because too many files have changed in this diff.