sean-m-ssullivan
2021-03-08 10:18:34 -06:00
319 changed files with 5031 additions and 9733 deletions


@@ -1 +1,3 @@
 awx/ui/node_modules
+awx/ui_next/node_modules
+Dockerfile
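The hunk above adds `awx/ui_next/node_modules` and `Dockerfile` to the ignore rules. If you want to confirm locally that entries like these take effect, `git check-ignore` reports which paths a `.gitignore` matches; the sketch below uses a throwaway repository purely for illustration:

```shell
# Sketch: verify the new ignore entries in a throwaway repo.
# The .gitignore contents are the ones this hunk produces.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf '%s\n' awx/ui/node_modules awx/ui_next/node_modules Dockerfile > .gitignore
# check-ignore echoes paths that are ignored (they need not exist yet)
git check-ignore Dockerfile awx/ui_next/node_modules/some-dep/index.js
```

Note that a directory pattern such as `awx/ui_next/node_modules` also covers every path beneath it.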


@@ -1,410 +1,120 @@
# Installing AWX

This document provides a guide for installing AWX.

## Table of contents

- [Installing AWX](#installing-awx)
  * [Getting started](#getting-started)
    + [Clone the repo](#clone-the-repo)
    + [AWX branding](#awx-branding)
    + [Prerequisites](#prerequisites)
    + [System Requirements](#system-requirements)
    + [Choose a deployment platform](#choose-a-deployment-platform)
    + [Official vs Building Images](#official-vs-building-images)
  * [OpenShift](#openshift)
    + [Prerequisites](#prerequisites-1)
    + [Pre-install steps](#pre-install-steps)
      - [Deploying to Minishift](#deploying-to-minishift)
      - [PostgreSQL](#postgresql)
    + [Run the installer](#run-the-installer)
    + [Post-install](#post-install)
    + [Accessing AWX](#accessing-awx)
  * [Kubernetes](#kubernetes)
    + [Prerequisites](#prerequisites-2)
    + [Pre-install steps](#pre-install-steps-1)
    + [Configuring Helm](#configuring-helm)
    + [Run the installer](#run-the-installer-1)
    + [Post-install](#post-install-1)
    + [Accessing AWX](#accessing-awx-1)
    + [SSL Termination](#ssl-termination)
- [Installing the AWX CLI](#installing-the-awx-cli)
  * [Building the CLI Documentation](#building-the-cli-documentation)

## Getting started

### Clone the repo

If you have not already done so, you will need to clone, or create a local copy of, the [AWX repo](https://github.com/ansible/awx). We generally recommend that you view the releases page:

https://github.com/ansible/awx/releases

...and clone the latest stable release, e.g.,

`git clone -b x.y.z https://github.com/ansible/awx.git`

Please note that deploying from `HEAD` (or the latest commit) is **not** stable, and that if you want to do this, you should proceed at your own risk (also see [Official vs Building Images](#official-vs-building-images) for building your own image).

For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).

Once you have a local copy, run the commands in the following sections from the root of the project tree.

### AWX branding

You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).

To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.

### Prerequisites

Before you can run a deployment, you'll need the following installed in your local environment:

- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html) version 2.8+
- [Docker](https://docs.docker.com/engine/installation/)
  + A recent version
- [docker](https://pypi.org/project/docker/) Python module
  + This is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
  + We use this module instead of `docker-py` because it is what the `docker-compose` Python module requires.
- [community.general.docker_image collection](https://docs.ansible.com/ansible/latest/collections/community/general/docker_image_module.html)
  + This is only required if you are using Ansible >= 2.10.
- [GNU Make](https://www.gnu.org/software/make/)
- [Git](https://git-scm.com/) version 1.8.4+
- Python 3.6+

### System Requirements

The system that runs the AWX service will need to satisfy the following requirements:

- At least 4GB of memory
- At least 2 CPU cores
- At least 20GB of disk space
- Running Docker, OpenShift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10.

### Choose a deployment platform

We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster or a Kubernetes cluster. The remainder of this document will walk you through the process of building the images, and deploying them to either platform.

The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.

In the sections below, you'll find deployment details and instructions for each platform:

- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)

### Official vs Building Images

When installing AWX you have the option of building your own image or using the image provided on DockerHub (see [awx](https://hub.docker.com/r/ansible/awx/)).

This is controlled by the following variables in the `inventory` file:

```
dockerhub_base=ansible
dockerhub_version=latest
```

If these variables are present, all deployments will use these hosted images. If the variables are not present, the images will be built during the install.

*dockerhub_base*
> The base location on DockerHub where the images are hosted (by default this pulls a container image named `ansible/awx:tag`)

*dockerhub_version*
> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123

To build your own container, use the `build.yml` playbook:

```
ansible-playbook tools/ansible/build.yml -e awx_version=test-build
```

The resulting image will automatically be pushed to a registry if `docker_registry` is defined.

## OpenShift

### Prerequisites

To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.

When using OpenShift for deploying AWX, make sure you have the privileges needed to add the security context `privileged`, otherwise the installation will fail. The privileged context is needed because of the use of [the bubblewrap tool](https://github.com/containers/bubblewrap) to add an additional layer of security when using containers.

You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on, the cluster.

The default resource requests per deployment are:

> Memory: 6GB
> CPU: 3 cores

This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this, as undersized instances will experience crashes and resource exhaustion.

For more detail on how resource requests are formed, see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)

### Pre-install steps

Before starting the install, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:

*openshift_host*
> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.

*openshift_skip_tls_verify*
> Boolean. Set to True if using self-signed certs.

*openshift_project*
> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.

*openshift_user*
> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.

*openshift_pg_emptydir*
> Boolean. Set to True to use an emptyDir volume when deploying the PostgreSQL pod. Note: This should only be used for demo and testing purposes.

*docker_registry*
> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.

*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.

*docker_registry_username*
> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.

#### Deploying to Minishift

Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).

The recommended minimum resources for your Minishift VM:

```bash
$ minishift start --cpus=4 --memory=8GB
```

The Minishift VM contains a Docker daemon, which you can use to build the AWX images. This is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to set up your environment:

```bash
# Set DOCKER environment variables to point to the Minishift VM
$ eval $(minishift docker-env)
```

**Note**
> If you choose not to use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry and granting the cluster access to it, or exposing the internal registry and pushing the images into it.

#### PostgreSQL

By default, AWX will deploy a PostgreSQL pod inside of your cluster. You will need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html), which is named `postgresql` by default and can be overridden by setting the `openshift_pg_pvc_name` variable. For testing and demo purposes, you may set `openshift_pg_emptydir=yes`.

If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information. When setting `pg_hostname`, the installer will assume you have configured the database in that location and will not launch the PostgreSQL pod.

### Run the installer

To start the install, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.

If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc login`.

Run the following command (docker_registry_password is optional if using official images):

```bash
# Start the install
$ ansible-playbook -i inventory install.yml -e openshift_password=developer -e docker_registry_password=$(oc whoami -t)
```

### Post-install

After the playbook run completes, check the status of the deployment by running `oc get pods`:

```bash
# View the running pods
$ oc get pods

NAME                   READY     STATUS    RESTARTS   AGE
awx-3886581826-5mv0l   4/4       Running   0          8s
postgresql-1-l85fh     1/1       Running   0          20m
```

In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:

```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
```

You will see the following indicating that database migrations are running:

```bash
Using /etc/ansible/ansible.cfg as config file
127.0.0.1 | SUCCESS => {
    "changed": false,
    "db": "awx"
}
Operations to perform:
  Synchronize unmigrated apps: solo, api, staticfiles, messages, channels, django_extensions, ui, rest_framework, polymorphic
  Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
  Creating tables...
    Running deferred SQL...
  Installing custom SQL...
Running migrations:
  Rendering model states... DONE
  Applying contenttypes.0001_initial... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0001_initial... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying taggit.0001_initial... OK
  Applying taggit.0002_auto_20150616_2121... OK
  ...
```

When you see output similar to the following, you'll know that database migrations have completed, and you can access the web interface:

```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx-3886581826-5mv0l
(changed: True)
Creating instance group tower
Added instance awx-3886581826-5mv0l to tower
```

Once database migrations complete, the web interface will be accessible.

### Accessing AWX

The AWX web interface is running in the AWX pod, behind the `awx-web-svc` service. To view the service, and its port value, run the following command:

```bash
# View available services
$ oc get services

NAME          CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
awx-web-svc   172.30.111.74   <nodes>       8052:30083/TCP   37m
postgresql    172.30.102.9    <none>        5432/TCP         38m
```

The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment, and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:

```bash
# View available routes
$ oc get routes

NAME          HOST/PORT                             PATH   SERVICES      PORT   TERMINATION   WILDCARD
awx-web-svc   awx-web-svc-awx.192.168.64.2.nip.io          awx-web-svc   http   edge/Allow    None
```

The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).

Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.

## Kubernetes

### Prerequisites

A Kubernetes deployment will require you to have access to a Kubernetes cluster as well as the following tools:

- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [helm](https://helm.sh/docs/intro/quickstart/)

The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.

The default resource requests per pod are:

> Memory: 6GB
> CPU: 3 cores

This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this, as undersized instances will experience crashes and resource exhaustion.

For more detail on how resource requests are formed, see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)

### Pre-install steps

Before starting the install process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section. Make sure the OpenShift and standalone Docker sections are commented out:

*kubernetes_context*
> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use.

*kubernetes_namespace*
> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist.

*docker_registry_**
> These settings should be used if building your own base images. You'll need access to an external registry and are responsible for making sure your cluster can talk to it and use it. If these are undefined and the *dockerhub_* configuration settings are uncommented, the images will be pulled from DockerHub instead.

### Configuring Helm

If you want the AWX installer to manage creating the database pod (rather than installing and configuring PostgreSQL on your own), you will need a working `helm` installation; you can find details here: [https://helm.sh/docs/intro/quickstart/](https://helm.sh/docs/intro/quickstart/).

You do not need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html), as Helm does it for you. However, an existing one may be used by setting the `pg_persistence_existingclaim` variable.

Newer Kubernetes clusters with RBAC enabled will need a service account; make sure to follow the instructions here: [https://helm.sh/docs/topics/rbac/](https://helm.sh/docs/topics/rbac/)

### Run the installer

After making changes to the `inventory` file, use `ansible-playbook` to begin the install:

```bash
$ ansible-playbook -i inventory install.yml
```

### Post-install

After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace awx with the namespace you used):

```bash
# View the running pods; it may take a few minutes for everything to be marked in the Running state
$ kubectl get pods --namespace awx

NAME                             READY     STATUS    RESTARTS   AGE
awx-2558692395-2r8ss             4/4       Running   0          29s
awx-postgresql-355348841-kltkn   1/1       Running   0          1m
```

### Accessing AWX

The AWX web interface is running in the AWX pod behind the `awx-web-svc` service:

```bash
# View available services
$ kubectl get svc --namespace awx

NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
awx-postgresql   ClusterIP   10.7.250.208   <none>        5432/TCP       2m
awx-web-svc      NodePort    10.7.241.35    <none>        80:30177/TCP   1m
```

The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it more explicitly. You can see what Kubernetes knows about it with:

```bash
kubectl get ing --namespace awx

NAME          HOSTS     ADDRESS      PORTS     AGE
awx-web-svc   *         35.227.x.y   80        3m
```

If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to the address and access the AWX interface. For some providers it can take a few minutes to allocate and make this accessible; for other providers it may require you to manually intervene.

### SSL Termination

Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX through HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).

Table of Contents
=================

* [Installing AWX](#installing-awx)
  * [The AWX Operator](#the-awx-operator)
  * [Quickstart with minikube](#quickstart-with-minikube)
    * [Starting minikube](#starting-minikube)
    * [Deploying the AWX Operator](#deploying-the-awx-operator)
      * [Verifying the Operator Deployment](#verifying-the-operator-deployment)
    * [Deploy AWX](#deploy-awx)
      * [Accessing AWX](#accessing-awx)
* [Installing the AWX CLI](#installing-the-awx-cli)
  * [Building the CLI Documentation](#building-the-cli-documentation)

# Installing AWX

:warning: NOTE |
--- |
If you're installing an older release of AWX (prior to 18.0), these instructions have changed. Take a look at your version specific instructions, e.g., for AWX 17.0.1, see: [https://github.com/ansible/awx/blob/17.0.1/INSTALL.md](https://github.com/ansible/awx/blob/17.0.1/INSTALL.md) |
If you're attempting to migrate an older Docker-based AWX installation, see: [Migrating Data from Local Docker](https://github.com/ansible/awx/blob/devel/tools/docker-compose/docs/data_migration.md) |

## The AWX Operator

Starting in version 18.0, the [AWX Operator](https://github.com/ansible/awx-operator) is the preferred way to install AWX.

### Quickstart with minikube

If you don't have an existing OpenShift or Kubernetes cluster, minikube is a fast and easy way to get up and running.

To install minikube, follow the steps in their [documentation](https://minikube.sigs.k8s.io/docs/start/).

#### Starting minikube

Once you have installed minikube, run the following command to start it. You may wish to customize these options.

```
$ minikube start --cpus=4 --memory=8g --addons=ingress
```

#### Deploying the AWX Operator

For a comprehensive overview of features, see [README.md](https://github.com/ansible/awx-operator/blob/devel/README.md) in the awx-operator repo. The following steps are the bare minimum to get AWX up and running.

```
$ minikube kubectl -- apply -f https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml
```

##### Verifying the Operator Deployment

After a few seconds, the operator should be up and running. Verify it by running the following command:

```
$ minikube kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
awx-operator-7c78bfbfd-xb6th   1/1     Running   0          11s
```

#### Deploy AWX

Once the Operator is running, you can now deploy AWX by creating a simple YAML file:

```
$ cat myawx.yml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
```

And then creating the AWX object in the Kubernetes API:

```
$ minikube kubectl apply -- -f myawx.yml
awx.awx.ansible.com/awx created
```

After creating the AWX object in the Kubernetes API, the operator will begin running its reconciliation loop.

To see what's going on, you can tail the logs of the operator pod (note that your pod name will be different):

```
$ minikube kubectl logs -- -f awx-operator-7c78bfbfd-xb6th
```

After a few seconds, you will see the database and application pods show up. On a fresh system, it may take a few minutes for the container images to download.

```
$ minikube kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
awx-5ffbfd489c-bvtvf           3/3     Running   0          2m54s
awx-operator-7c78bfbfd-xb6th   1/1     Running   0          6m42s
awx-postgres-0                 1/1     Running   0          2m58s
```

##### Accessing AWX

To access the AWX UI, you'll need to grab the service url from minikube:

```
$ minikube service awx-service --url
http://192.168.59.2:31868
```

On fresh installs, you will see the "AWX is currently upgrading." page until database migrations finish.

Once you are redirected to the login screen, you can now log in by obtaining the generated admin password (note: do not copy the trailing `%`):

```
$ minikube kubectl -- get secret awx-admin-password -o jsonpath='{.data.password}' | base64 --decode
b6ChwVmqEiAsil2KSpH4xGaZPeZvWnWj%
```

Now you can log in at the URL above with the username "admin" and the password above. Happy Automating!

# Installing the AWX CLI
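The quickstart above retrieves the admin password from the `awx-admin-password` secret. Kubernetes stores secret data base64-encoded, which is why the jsonpath output must be piped through `base64 --decode`; that step can be sanity-checked locally (the sample password below is just the value shown in the example output, not a real credential):

```shell
# Mimic the secret-decoding step locally: encode a sample password the way
# Kubernetes stores it, then decode it back, as the quickstart command does.
encoded=$(printf '%s' 'b6ChwVmqEiAsil2KSpH4xGaZPeZvWnWj' | base64)
printf '%s' "$encoded" | base64 --decode
```

The trailing `%` in the quickstart's output is the shell marking the absence of a final newline; it is not part of the password.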


@@ -20,7 +20,6 @@ COMPOSE_TAG ?= $(GIT_BRANCH)
 COMPOSE_HOST ?= $(shell hostname)
 VENV_BASE ?= /var/lib/awx/venv/
-COLLECTION_BASE ?= /var/lib/awx/vendor/awx_ansible_collections
 SCL_PREFIX ?=
 CELERY_SCHEDULE_FILE ?= /var/lib/awx/beat.db
@@ -62,11 +61,11 @@ WHEEL_FILE ?= $(WHEEL_NAME)-py2-none-any.whl
 I18N_FLAG_FILE = .i18n_built
 .PHONY: awx-link clean clean-tmp clean-venv requirements requirements_dev \
-	develop refresh adduser migrate dbchange runserver \
+	develop refresh adduser migrate dbchange \
 	receiver test test_unit test_coverage coverage_html \
-	dev_build release_build release_clean sdist \
-	ui-docker-machine ui-docker ui-release ui-devel \
-	ui-test ui-deps ui-test-ci VERSION docker-compose-sources
+	dev_build release_build sdist \
+	ui-release ui-devel \
+	VERSION docker-compose-sources
 clean-tmp:
 	rm -rf tmp/
@@ -115,31 +114,7 @@ guard-%:
 	exit 1; \
 	fi
-virtualenv: virtualenv_ansible virtualenv_awx
+virtualenv: virtualenv_awx
-# virtualenv_* targets do not use --system-site-packages to prevent bugs installing packages
-# but Ansible venvs are expected to have this, so that must be done after venv creation
-virtualenv_ansible:
-	if [ "$(VENV_BASE)" ]; then \
-		if [ ! -d "$(VENV_BASE)" ]; then \
-			mkdir $(VENV_BASE); \
-		fi; \
-		if [ ! -d "$(VENV_BASE)/ansible" ]; then \
-			virtualenv -p python $(VENV_BASE)/ansible && \
-			$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
-		fi; \
-	fi
-virtualenv_ansible_py3:
-	if [ "$(VENV_BASE)" ]; then \
-		if [ ! -d "$(VENV_BASE)" ]; then \
-			mkdir $(VENV_BASE); \
-		fi; \
-		if [ ! -d "$(VENV_BASE)/ansible" ]; then \
-			virtualenv -p $(PYTHON) $(VENV_BASE)/ansible; \
-			$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) $(VENV_BOOTSTRAP); \
-		fi; \
-	fi
 # flit is needed for offline install of certain packages, specifically ptyprocess
 # it is needed for setup, but not always recognized as a setup dependency
@@ -155,32 +130,6 @@ virtualenv_awx:
 	fi; \
 	fi
-# --ignore-install flag is not used because *.txt files should specify exact versions
-requirements_ansible: virtualenv_ansible
-	if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
-		cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) -r /dev/stdin ; \
-	else \
-		cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
-	fi
-	$(VENV_BASE)/ansible/bin/pip uninstall --yes -r requirements/requirements_ansible_uninstall.txt
-	# Same effect as using --system-site-packages flag on venv creation
-	rm $(shell ls -d $(VENV_BASE)/ansible/lib/python* | head -n 1)/no-global-site-packages.txt
-requirements_ansible_py3: virtualenv_ansible_py3
-	if [[ "$(PIP_OPTIONS)" == *"--no-index"* ]]; then \
-		cat requirements/requirements_ansible.txt requirements/requirements_ansible_local.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) -r /dev/stdin ; \
-	else \
-		cat requirements/requirements_ansible.txt requirements/requirements_ansible_git.txt | PYCURL_SSL_LIBRARY=$(PYCURL_SSL_LIBRARY) $(VENV_BASE)/ansible/bin/pip3 install $(PIP_OPTIONS) --no-binary $(SRC_ONLY_PKGS) -r /dev/stdin ; \
-	fi
-	$(VENV_BASE)/ansible/bin/pip3 uninstall --yes -r requirements/requirements_ansible_uninstall.txt
-	# Same effect as using --system-site-packages flag on venv creation
-	rm $(shell ls -d $(VENV_BASE)/ansible/lib/python* | head -n 1)/no-global-site-packages.txt
-requirements_ansible_dev:
-	if [ "$(VENV_BASE)" ]; then \
-		$(VENV_BASE)/ansible/bin/pip install pytest mock; \
-	fi
 # Install third-party requirements needed for AWX's environment.
 # this does not use system site packages intentionally
 requirements_awx: virtualenv_awx
@@ -194,17 +143,9 @@ requirements_awx: virtualenv_awx
 requirements_awx_dev:
 	$(VENV_BASE)/awx/bin/pip install -r requirements/requirements_dev.txt
-requirements_collections:
-	mkdir -p $(COLLECTION_BASE)
-	n=0; \
-	until [ "$$n" -ge 5 ]; do \
-		ansible-galaxy collection install -r requirements/collections_requirements.yml -p $(COLLECTION_BASE) && break; \
-		n=$$((n+1)); \
-	done
-requirements: requirements_ansible requirements_awx requirements_collections
-requirements_dev: requirements_awx requirements_ansible_py3 requirements_awx_dev requirements_ansible_dev
+requirements: requirements_awx
+requirements_dev: requirements_awx requirements_awx_dev
 requirements_test: requirements
@@ -383,7 +324,8 @@ test_collection:
 	rm -f $(shell ls -d $(VENV_BASE)/awx/lib/python* | head -n 1)/no-global-site-packages.txt
 	if [ "$(VENV_BASE)" ]; then \
 		. $(VENV_BASE)/awx/bin/activate; \
-	fi; \
+	fi && \
+	pip install ansible && \
 	py.test $(COLLECTION_TEST_DIRS) -v
# The python path needs to be modified so that the tests can find Ansible within the container # The python path needs to be modified so that the tests can find Ansible within the container
# First we will use anything explicitly set as PYTHONPATH
@@ -457,7 +399,6 @@ clean-ui:
 	rm -rf awx/ui_next/build
 	rm -rf awx/ui_next/src/locales/_build
 	rm -rf $(UI_BUILD_FLAG_FILE)
-	git checkout awx/ui_next/src/locales
 awx/ui_next/node_modules:
 	$(NPM_BIN) --prefix awx/ui_next --loglevel warn --ignore-scripts install
@@ -533,30 +474,29 @@ awx/projects:
 	@mkdir -p $@
 COMPOSE_UP_OPTS ?=
+CLUSTER_NODE_COUNT ?= 1
 docker-compose-sources:
 	ansible-playbook -i tools/docker-compose/inventory tools/docker-compose/ansible/sources.yml \
 		-e awx_image=$(DEV_DOCKER_TAG_BASE)/awx_devel \
-		-e awx_image_tag=$(COMPOSE_TAG)
+		-e awx_image_tag=$(COMPOSE_TAG) \
+		-e cluster_node_count=$(CLUSTER_NODE_COUNT)
 docker-compose: docker-auth awx/projects docker-compose-sources
-	docker-compose -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_UP_OPTS) up --no-recreate awx
+	docker-compose -f tools/docker-compose/_sources/docker-compose.yml $(COMPOSE_UP_OPTS) up
-docker-compose-cluster: docker-auth awx/projects
-	docker-compose -f tools/docker-compose-cluster.yml up
 docker-compose-credential-plugins: docker-auth awx/projects docker-compose-sources
 	echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
 	docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx
 docker-compose-test: docker-auth awx/projects docker-compose-sources
-	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx /bin/bash
+	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
 docker-compose-runtest: awx/projects docker-compose-sources
-	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx /start_tests.sh
+	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
 docker-compose-build-swagger: awx/projects docker-compose-sources
-	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx /start_tests.sh swagger
+	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports --no-deps awx_1 /start_tests.sh swagger
 detect-schema-change: genschema
 	curl https://s3.amazonaws.com/awx-public-ci-files/schema.json -o reference-schema.json
View File
@@ -1,4 +1,5 @@
-[![Gated by Zuul](https://zuul-ci.org/gated.svg)](https://ansible.softwarefactory-project.io/zuul/status)
+[![Gated by Zuul](https://zuul-ci.org/gated.svg)](https://ansible.softwarefactory-project.io/zuul/status) [![Code of Conduct](https://img.shields.io/badge/code%20of%20conduct-Ansible-yellow.svg)](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html) [![Apache v2 License](https://img.shields.io/badge/license-Apache%202.0-brightgreen.svg)](https://github.com/ansible/awx/blob/devel/LICENSE.md) [![AWX Mailing List](https://img.shields.io/badge/mailing%20list-AWX-orange.svg)](https://groups.google.com/g/awx-project)
+[![IRC Chat](https://img.shields.io/badge/IRC-%23ansible--awx-blueviolet.svg)](https://webchat.freenode.net/#ansible-awx)
<img src="https://raw.githubusercontent.com/ansible/awx-logos/master/awx/ui/client/assets/logo-login.svg?sanitize=true" width=200 alt="AWX" />
@@ -38,8 +39,3 @@ We welcome your feedback and ideas. Here's how to reach us with feedback and que
- Join the `#ansible-awx` channel on irc.freenode.net
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
-License
--------
-[Apache v2](./LICENSE.md)
View File
@@ -24,7 +24,7 @@ from rest_framework.request import clone_request
from awx.api.fields import ChoiceNullField
from awx.main.fields import JSONField, ImplicitRoleField
from awx.main.models import NotificationTemplate
-from awx.main.scheduler.kubernetes import PodManager
+from awx.main.tasks import AWXReceptorJob

class Metadata(metadata.SimpleMetadata):
@@ -209,7 +209,7 @@ class Metadata(metadata.SimpleMetadata):
                continue
            if field == "pod_spec_override":
-                meta['default'] = PodManager().pod_definition
+                meta['default'] = AWXReceptorJob().pod_definition
            # Add type choices if available from the serializer.
            if field == 'type' and hasattr(serializer, 'get_type_choices'):
View File
@@ -50,7 +50,7 @@ from awx.main.constants import (
)
from awx.main.models import (
    ActivityStream, AdHocCommand, AdHocCommandEvent, Credential, CredentialInputSource,
-    CredentialType, CustomInventoryScript, Group, Host, Instance,
+    CredentialType, CustomInventoryScript, ExecutionEnvironment, Group, Host, Instance,
    InstanceGroup, Inventory, InventorySource, InventoryUpdate,
    InventoryUpdateEvent, Job, JobEvent, JobHostSummary, JobLaunchConfig,
    JobNotificationMixin, JobTemplate, Label, Notification, NotificationTemplate,
@@ -107,6 +107,8 @@ SUMMARIZABLE_FK_FIELDS = {
                            'insights_credential_id',),
    'host': DEFAULT_SUMMARY_FIELDS,
    'group': DEFAULT_SUMMARY_FIELDS,
+    'default_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
+    'execution_environment': DEFAULT_SUMMARY_FIELDS + ('image',),
    'project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
    'source_project': DEFAULT_SUMMARY_FIELDS + ('status', 'scm_type'),
    'project_update': DEFAULT_SUMMARY_FIELDS + ('status', 'failed',),
@@ -129,7 +131,7 @@ SUMMARIZABLE_FK_FIELDS = {
    'source_script': DEFAULT_SUMMARY_FIELDS,
    'role': ('id', 'role_field'),
    'notification_template': DEFAULT_SUMMARY_FIELDS,
-    'instance_group': ('id', 'name', 'controller_id', 'is_containerized'),
+    'instance_group': ('id', 'name', 'controller_id', 'is_container_group'),
    'insights_credential': DEFAULT_SUMMARY_FIELDS,
    'source_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
    'target_credential': DEFAULT_SUMMARY_FIELDS + ('kind', 'cloud', 'credential_type_id'),
@@ -647,7 +649,7 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
    class Meta:
        model = UnifiedJobTemplate
        fields = ('*', 'last_job_run', 'last_job_failed',
-                  'next_job_run', 'status')
+                  'next_job_run', 'status', 'execution_environment')

    def get_related(self, obj):
        res = super(UnifiedJobTemplateSerializer, self).get_related(obj)
@@ -657,6 +659,9 @@ class UnifiedJobTemplateSerializer(BaseSerializer):
        res['last_job'] = obj.last_job.get_absolute_url(request=self.context.get('request'))
        if obj.next_schedule:
            res['next_schedule'] = obj.next_schedule.get_absolute_url(request=self.context.get('request'))
+        if obj.execution_environment_id:
+            res['execution_environment'] = self.reverse('api:execution_environment_detail',
+                                                        kwargs={'pk': obj.execution_environment_id})
        return res

    def get_types(self):
@@ -711,6 +716,7 @@ class UnifiedJobSerializer(BaseSerializer):
    class Meta:
        model = UnifiedJob
        fields = ('*', 'unified_job_template', 'launch_type', 'status',
+                  'execution_environment',
                  'failed', 'started', 'finished', 'canceled_on', 'elapsed', 'job_args',
                  'job_cwd', 'job_env', 'job_explanation',
                  'execution_node', 'controller_node',
@@ -748,6 +754,9 @@ class UnifiedJobSerializer(BaseSerializer):
            res['stdout'] = self.reverse('api:ad_hoc_command_stdout', kwargs={'pk': obj.pk})
        if obj.workflow_job_id:
            res['source_workflow_job'] = self.reverse('api:workflow_job_detail', kwargs={'pk': obj.workflow_job_id})
+        if obj.execution_environment_id:
+            res['execution_environment'] = self.reverse('api:execution_environment_detail',
+                                                        kwargs={'pk': obj.execution_environment_id})
        return res

    def get_summary_fields(self, obj):
@@ -1243,11 +1252,13 @@ class OrganizationSerializer(BaseSerializer):
    class Meta:
        model = Organization
-        fields = ('*', 'max_hosts', 'custom_virtualenv',)
+        fields = ('*', 'max_hosts', 'custom_virtualenv', 'default_environment',)
+        read_only_fields = ('*', 'custom_virtualenv',)

    def get_related(self, obj):
        res = super(OrganizationSerializer, self).get_related(obj)
-        res.update(dict(
+        res.update(
+            execution_environments = self.reverse('api:organization_execution_environments_list', kwargs={'pk': obj.pk}),
            projects = self.reverse('api:organization_projects_list', kwargs={'pk': obj.pk}),
            inventories = self.reverse('api:organization_inventories_list', kwargs={'pk': obj.pk}),
            job_templates = self.reverse('api:organization_job_templates_list', kwargs={'pk': obj.pk}),
@@ -1267,7 +1278,10 @@ class OrganizationSerializer(BaseSerializer):
            access_list = self.reverse('api:organization_access_list', kwargs={'pk': obj.pk}),
            instance_groups = self.reverse('api:organization_instance_groups_list', kwargs={'pk': obj.pk}),
            galaxy_credentials = self.reverse('api:organization_galaxy_credentials_list', kwargs={'pk': obj.pk}),
-        ))
+        )
+        if obj.default_environment:
+            res['default_environment'] = self.reverse('api:execution_environment_detail',
+                                                      kwargs={'pk': obj.default_environment_id})
        return res

    def get_summary_fields(self, obj):
@@ -1347,6 +1361,29 @@ class ProjectOptionsSerializer(BaseSerializer):
        return super(ProjectOptionsSerializer, self).validate(attrs)

+class ExecutionEnvironmentSerializer(BaseSerializer):
+    show_capabilities = ['edit', 'delete', 'copy']
+    managed_by_tower = serializers.ReadOnlyField()
+
+    class Meta:
+        model = ExecutionEnvironment
+        fields = ('*', 'organization', 'image', 'managed_by_tower', 'credential', 'pull')
+
+    def get_related(self, obj):
+        res = super(ExecutionEnvironmentSerializer, self).get_related(obj)
+        res.update(
+            activity_stream=self.reverse('api:execution_environment_activity_stream_list', kwargs={'pk': obj.pk}),
+            unified_job_templates=self.reverse('api:execution_environment_job_template_list', kwargs={'pk': obj.pk}),
+            copy=self.reverse('api:execution_environment_copy', kwargs={'pk': obj.pk}),
+        )
+        if obj.organization:
+            res['organization'] = self.reverse('api:organization_detail', kwargs={'pk': obj.organization.pk})
+        if obj.credential:
+            res['credential'] = self.reverse('api:credential_detail',
+                                             kwargs={'pk': obj.credential.pk})
+        return res

class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
    status = serializers.ChoiceField(choices=Project.PROJECT_STATUS_CHOICES, read_only=True)
@@ -1360,9 +1397,10 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
    class Meta:
        model = Project
-        fields = ('*', 'organization', 'scm_update_on_launch',
-                  'scm_update_cache_timeout', 'allow_override', 'custom_virtualenv',) + \
+        fields = ('*', '-execution_environment', 'organization', 'scm_update_on_launch',
+                  'scm_update_cache_timeout', 'allow_override', 'custom_virtualenv', 'default_environment') + \
                 ('last_update_failed', 'last_updated')  # Backwards compatibility
+        read_only_fields = ('*', 'custom_virtualenv',)

    def get_related(self, obj):
        res = super(ProjectSerializer, self).get_related(obj)
@@ -1386,6 +1424,9 @@ class ProjectSerializer(UnifiedJobTemplateSerializer, ProjectOptionsSerializer):
        if obj.organization:
            res['organization'] = self.reverse('api:organization_detail',
                                               kwargs={'pk': obj.organization.pk})
+        if obj.default_environment:
+            res['default_environment'] = self.reverse('api:execution_environment_detail',
+                                                      kwargs={'pk': obj.default_environment_id})
        # Backwards compatibility.
        if obj.current_update:
            res['current_update'] = self.reverse('api:project_update_detail',
@@ -1939,6 +1980,7 @@ class InventorySourceOptionsSerializer(BaseSerializer):
        fields = ('*', 'source', 'source_path', 'source_script', 'source_vars', 'credential',
                  'enabled_var', 'enabled_value', 'host_filter', 'overwrite', 'overwrite_vars',
                  'custom_virtualenv', 'timeout', 'verbosity')
+        read_only_fields = ('*', 'custom_virtualenv',)

    def get_related(self, obj):
        res = super(InventorySourceOptionsSerializer, self).get_related(obj)
@@ -2924,6 +2966,7 @@ class JobTemplateSerializer(JobTemplateMixin, UnifiedJobTemplateSerializer, JobO
                  'become_enabled', 'diff_mode', 'allow_simultaneous', 'custom_virtualenv',
                  'job_slice_count', 'webhook_service', 'webhook_credential',
        )
+        read_only_fields = ('*', 'custom_virtualenv',)

    def get_related(self, obj):
        res = super(JobTemplateSerializer, self).get_related(obj)
@@ -4731,7 +4774,7 @@ class InstanceGroupSerializer(BaseSerializer):
                    'Isolated groups have a designated controller group.'),
        read_only=True
    )
-    is_containerized = serializers.BooleanField(
+    is_container_group = serializers.BooleanField(
        help_text=_('Indicates whether instances in this group are containerized.'
                    'Containerized groups have a designated Openshift or Kubernetes cluster.'),
        read_only=True
@@ -4761,7 +4804,7 @@ class InstanceGroupSerializer(BaseSerializer):
        fields = ("id", "type", "url", "related", "name", "created", "modified",
                  "capacity", "committed_capacity", "consumed_capacity",
                  "percent_capacity_remaining", "jobs_running", "jobs_total",
-                  "instances", "controller", "is_controller", "is_isolated", "is_containerized", "credential",
+                  "instances", "controller", "is_controller", "is_isolated", "is_container_group", "credential",
                  "policy_instance_percentage", "policy_instance_minimum", "policy_instance_list",
                  "pod_spec_override", "summary_fields")
@@ -4786,17 +4829,17 @@ class InstanceGroupSerializer(BaseSerializer):
            raise serializers.ValidationError(_('Isolated instances may not be added or removed from instances groups via the API.'))
        if self.instance and self.instance.controller_id is not None:
            raise serializers.ValidationError(_('Isolated instance group membership may not be managed via the API.'))
-        if value and self.instance and self.instance.is_containerized:
+        if value and self.instance and self.instance.is_container_group:
            raise serializers.ValidationError(_('Containerized instances may not be managed via the API'))
        return value

    def validate_policy_instance_percentage(self, value):
-        if value and self.instance and self.instance.is_containerized:
+        if value and self.instance and self.instance.is_container_group:
            raise serializers.ValidationError(_('Containerized instances may not be managed via the API'))
        return value

    def validate_policy_instance_minimum(self, value):
-        if value and self.instance and self.instance.is_containerized:
+        if value and self.instance and self.instance.is_container_group:
            raise serializers.ValidationError(_('Containerized instances may not be managed via the API'))
        return value
View File
@@ -0,0 +1,20 @@
+from django.conf.urls import url
+
+from awx.api.views import (
+    ExecutionEnvironmentList,
+    ExecutionEnvironmentDetail,
+    ExecutionEnvironmentJobTemplateList,
+    ExecutionEnvironmentCopy,
+    ExecutionEnvironmentActivityStreamList,
+)
+
+urls = [
+    url(r'^$', ExecutionEnvironmentList.as_view(), name='execution_environment_list'),
+    url(r'^(?P<pk>[0-9]+)/$', ExecutionEnvironmentDetail.as_view(), name='execution_environment_detail'),
+    url(r'^(?P<pk>[0-9]+)/unified_job_templates/$', ExecutionEnvironmentJobTemplateList.as_view(), name='execution_environment_job_template_list'),
+    url(r'^(?P<pk>[0-9]+)/copy/$', ExecutionEnvironmentCopy.as_view(), name='execution_environment_copy'),
+    url(r'^(?P<pk>[0-9]+)/activity_stream/$', ExecutionEnvironmentActivityStreamList.as_view(), name='execution_environment_activity_stream_list'),
+]
+
+__all__ = ['urls']
View File
@@ -9,6 +9,7 @@ from awx.api.views import (
    OrganizationUsersList,
    OrganizationAdminsList,
    OrganizationInventoriesList,
+    OrganizationExecutionEnvironmentsList,
    OrganizationProjectsList,
    OrganizationJobTemplatesList,
    OrganizationWorkflowJobTemplatesList,
@@ -34,6 +35,7 @@ urls = [
    url(r'^(?P<pk>[0-9]+)/users/$', OrganizationUsersList.as_view(), name='organization_users_list'),
    url(r'^(?P<pk>[0-9]+)/admins/$', OrganizationAdminsList.as_view(), name='organization_admins_list'),
    url(r'^(?P<pk>[0-9]+)/inventories/$', OrganizationInventoriesList.as_view(), name='organization_inventories_list'),
+    url(r'^(?P<pk>[0-9]+)/execution_environments/$', OrganizationExecutionEnvironmentsList.as_view(), name='organization_execution_environments_list'),
    url(r'^(?P<pk>[0-9]+)/projects/$', OrganizationProjectsList.as_view(), name='organization_projects_list'),
    url(r'^(?P<pk>[0-9]+)/job_templates/$', OrganizationJobTemplatesList.as_view(), name='organization_job_templates_list'),
    url(r'^(?P<pk>[0-9]+)/workflow_job_templates/$', OrganizationWorkflowJobTemplatesList.as_view(), name='organization_workflow_job_templates_list'),
View File
@@ -42,6 +42,7 @@ from .user import urls as user_urls
from .project import urls as project_urls
from .project_update import urls as project_update_urls
from .inventory import urls as inventory_urls
+from .execution_environments import urls as execution_environment_urls
from .team import urls as team_urls
from .host import urls as host_urls
from .group import urls as group_urls
@@ -106,6 +107,7 @@ v2_urls = [
    url(r'^schedules/', include(schedule_urls)),
    url(r'^organizations/', include(organization_urls)),
    url(r'^users/', include(user_urls)),
+    url(r'^execution_environments/', include(execution_environment_urls)),
    url(r'^projects/', include(project_urls)),
    url(r'^project_updates/', include(project_update_urls)),
    url(r'^teams/', include(team_urls)),
View File
@@ -112,6 +112,7 @@ from awx.api.views.organization import ( # noqa
    OrganizationInventoriesList,
    OrganizationUsersList,
    OrganizationAdminsList,
+    OrganizationExecutionEnvironmentsList,
    OrganizationProjectsList,
    OrganizationJobTemplatesList,
    OrganizationWorkflowJobTemplatesList,
@@ -396,7 +397,7 @@ class InstanceGroupDetail(RelatedJobsPreventDeleteMixin, RetrieveUpdateDestroyAP
    permission_classes = (InstanceGroupTowerPermission,)

    def update_raw_data(self, data):
-        if self.get_object().is_containerized:
+        if self.get_object().is_container_group:
            data.pop('policy_instance_percentage', None)
            data.pop('policy_instance_minimum', None)
            data.pop('policy_instance_list', None)
@@ -685,6 +686,52 @@ class TeamAccessList(ResourceAccessList):
    parent_model = models.Team

+class ExecutionEnvironmentList(ListCreateAPIView):
+
+    always_allow_superuser = False
+    model = models.ExecutionEnvironment
+    serializer_class = serializers.ExecutionEnvironmentSerializer
+    swagger_topic = "Execution Environments"
+
+class ExecutionEnvironmentDetail(RetrieveUpdateDestroyAPIView):
+
+    always_allow_superuser = False
+    model = models.ExecutionEnvironment
+    serializer_class = serializers.ExecutionEnvironmentSerializer
+    swagger_topic = "Execution Environments"
+
+class ExecutionEnvironmentJobTemplateList(SubListAPIView):
+
+    model = models.UnifiedJobTemplate
+    serializer_class = serializers.UnifiedJobTemplateSerializer
+    parent_model = models.ExecutionEnvironment
+    relationship = 'unifiedjobtemplates'
+
+class ExecutionEnvironmentCopy(CopyAPIView):
+
+    model = models.ExecutionEnvironment
+    copy_return_serializer_class = serializers.ExecutionEnvironmentSerializer
+
+class ExecutionEnvironmentActivityStreamList(SubListAPIView):
+
+    model = models.ActivityStream
+    serializer_class = serializers.ActivityStreamSerializer
+    parent_model = models.ExecutionEnvironment
+    relationship = 'activitystream_set'
+    search_fields = ('changes',)
+
+    def get_queryset(self):
+        parent = self.get_parent_object()
+        self.check_parent_access(parent)
+        qs = self.request.user.get_queryset(self.model)
+        return qs.filter(execution_environment=parent)

class ProjectList(ListCreateAPIView):
    model = models.Project
View File
@@ -15,6 +15,7 @@ from awx.main.models import (
    Inventory,
    Host,
    Project,
+    ExecutionEnvironment,
    JobTemplate,
    WorkflowJobTemplate,
    Organization,
@@ -45,6 +46,7 @@ from awx.api.serializers import (
    RoleSerializer,
    NotificationTemplateSerializer,
    InstanceGroupSerializer,
+    ExecutionEnvironmentSerializer,
    ProjectSerializer, JobTemplateSerializer, WorkflowJobTemplateSerializer,
    CredentialSerializer
)
@@ -141,6 +143,16 @@ class OrganizationProjectsList(SubListCreateAPIView):
    parent_key = 'organization'

+class OrganizationExecutionEnvironmentsList(SubListCreateAttachDetachAPIView):
+
+    model = ExecutionEnvironment
+    serializer_class = ExecutionEnvironmentSerializer
+    parent_model = Organization
+    relationship = 'executionenvironments'
+    parent_key = 'organization'
+    swagger_topic = "Execution Environments"

class OrganizationJobTemplatesList(SubListCreateAPIView):
    model = JobTemplate

View File

@@ -100,6 +100,7 @@ class ApiVersionRootView(APIView):
        data['dashboard'] = reverse('api:dashboard_view', request=request)
        data['organizations'] = reverse('api:organization_list', request=request)
        data['users'] = reverse('api:user_list', request=request)
+        data['execution_environments'] = reverse('api:execution_environment_list', request=request)
        data['projects'] = reverse('api:project_list', request=request)
        data['project_updates'] = reverse('api:project_update_list', request=request)
        data['teams'] = reverse('api:team_list', request=request)
View File
@@ -14,6 +14,7 @@ from rest_framework.fields import ( # noqa
    BooleanField, CharField, ChoiceField, DictField, DateTimeField, EmailField,
    IntegerField, ListField, NullBooleanField
)
+from rest_framework.serializers import PrimaryKeyRelatedField  # noqa

logger = logging.getLogger('awx.conf.fields')
View File
@@ -29,9 +29,9 @@ from awx.main.utils import (
)
from awx.main.models import (
    ActivityStream, AdHocCommand, AdHocCommandEvent, Credential, CredentialType,
-    CredentialInputSource, CustomInventoryScript, Group, Host, Instance, InstanceGroup,
-    Inventory, InventorySource, InventoryUpdate, InventoryUpdateEvent, Job, JobEvent,
-    JobHostSummary, JobLaunchConfig, JobTemplate, Label, Notification,
+    CredentialInputSource, CustomInventoryScript, ExecutionEnvironment, Group, Host, Instance,
+    InstanceGroup, Inventory, InventorySource, InventoryUpdate, InventoryUpdateEvent, Job,
+    JobEvent, JobHostSummary, JobLaunchConfig, JobTemplate, Label, Notification,
    NotificationTemplate, Organization, Project, ProjectUpdate,
    ProjectUpdateEvent, Role, Schedule, SystemJob, SystemJobEvent,
    SystemJobTemplate, Team, UnifiedJob, UnifiedJobTemplate, WorkflowJob,
@@ -1308,6 +1308,54 @@ class TeamAccess(BaseAccess):
                             *args, **kwargs)

+class ExecutionEnvironmentAccess(BaseAccess):
+    """
+    I can see an execution environment when:
+     - I'm a superuser
+     - I'm a member of the same organization
+     - it is a global ExecutionEnvironment
+    I can create/change an execution environment when:
+     - I'm a superuser
+     - I'm an admin for the organization(s)
+    """
+
+    model = ExecutionEnvironment
+    select_related = ('organization',)
+    prefetch_related = ('organization__admin_role', 'organization__execution_environment_admin_role')
+
+    def filtered_queryset(self):
+        return ExecutionEnvironment.objects.filter(
+            Q(organization__in=Organization.accessible_pk_qs(self.user, 'read_role')) |
+            Q(organization__isnull=True)
+        ).distinct()
+
+    @check_superuser
+    def can_add(self, data):
+        if not data:  # So the browseable API will work
+            return Organization.accessible_objects(self.user, 'execution_environment_admin_role').exists()
+        return self.check_related('organization', Organization, data, mandatory=True,
+                                  role_field='execution_environment_admin_role')
+
+    def can_change(self, obj, data):
+        if obj.managed_by_tower:
+            raise PermissionDenied
+        if self.user.is_superuser:
+            return True
+        if obj and obj.organization_id is None:
+            raise PermissionDenied
+        if self.user not in obj.organization.execution_environment_admin_role:
+            raise PermissionDenied
+        if data and 'organization' in data:
+            new_org = get_object_from_data('organization', Organization, data, obj=obj)
+            if not new_org or self.user not in new_org.execution_environment_admin_role:
+                return False
+        return self.check_related('organization', Organization, data, obj=obj, mandatory=True,
+                                  role_field='execution_environment_admin_role')
+
+    def can_delete(self, obj):
+        return self.can_change(obj, None)

class ProjectAccess(NotificationAttachMixin, BaseAccess):
    '''
    I can see projects when:
View File
@@ -311,7 +311,7 @@ def events_table(since, full_path, until, **kwargs):
    return _copy_table(table='events', query=events_query, path=full_path)

-@register('unified_jobs_table', '1.1', format='csv', description=_('Data on jobs run'), expensive=True)
+@register('unified_jobs_table', '1.2', format='csv', description=_('Data on jobs run'), expensive=True)
def unified_jobs_table(since, full_path, until, **kwargs):
    unified_job_query = '''COPY (SELECT main_unifiedjob.id,
                   main_unifiedjob.polymorphic_ctype_id,
@@ -334,7 +334,8 @@ def unified_jobs_table(since, full_path, until, **kwargs):
                   main_unifiedjob.finished,
                   main_unifiedjob.elapsed,
                   main_unifiedjob.job_explanation,
-                   main_unifiedjob.instance_group_id
+                   main_unifiedjob.instance_group_id,
+                   main_unifiedjob.installed_collections
                   FROM main_unifiedjob
                   JOIN django_content_type ON main_unifiedjob.polymorphic_ctype_id = django_content_type.id
                   LEFT JOIN main_job ON main_unifiedjob.id = main_job.unifiedjob_ptr_id

View File

@@ -10,6 +10,7 @@ from rest_framework.fields import FloatField
 # Tower
 from awx.conf import fields, register, register_validate
+from awx.main.models import ExecutionEnvironment

 logger = logging.getLogger('awx.main.conf')

@@ -176,6 +177,18 @@
     read_only=True,
 )

+register(
+    'DEFAULT_EXECUTION_ENVIRONMENT',
+    field_class=fields.PrimaryKeyRelatedField,
+    allow_null=True,
+    default=None,
+    queryset=ExecutionEnvironment.objects.all(),
+    label=_('Global default execution environment'),
+    help_text=_('.'),
+    category=_('System'),
+    category_slug='system',
+)
+
 register(
     'CUSTOM_VENV_PATHS',
     field_class=fields.StringListPathField,

View File

@@ -6,7 +6,6 @@ import stat
 import tempfile
 import time
 import logging
-import yaml
 import datetime

 from django.conf import settings
@@ -32,7 +31,7 @@ def set_pythonpath(venv_libdir, env):

 class IsolatedManager(object):

-    def __init__(self, event_handler, canceled_callback=None, check_callback=None, pod_manager=None):
+    def __init__(self, event_handler, canceled_callback=None, check_callback=None):
         """
         :param event_handler: a callable used to persist event data from isolated nodes
         :param canceled_callback: a callable - which returns `True` or `False`
@@ -45,28 +44,12 @@
         self.started_at = None
         self.captured_command_artifact = False
         self.instance = None
-        self.pod_manager = pod_manager

     def build_inventory(self, hosts):
-        if self.instance and self.instance.is_containerized:
-            inventory = {'all': {'hosts': {}}}
-            fd, path = tempfile.mkstemp(
-                prefix='.kubeconfig', dir=self.private_data_dir
-            )
-            with open(path, 'wb') as temp:
-                temp.write(yaml.dump(self.pod_manager.kube_config).encode())
-                temp.flush()
-                os.chmod(temp.name, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
-            for host in hosts:
-                inventory['all']['hosts'][host] = {
-                    "ansible_connection": "kubectl",
-                    "ansible_kubectl_config": path,
-                }
-        else:
-            inventory = '\n'.join([
-                '{} ansible_ssh_user={}'.format(host, settings.AWX_ISOLATED_USERNAME)
-                for host in hosts
-            ])
+        inventory = '\n'.join([
+            '{} ansible_ssh_user={}'.format(host, settings.AWX_ISOLATED_USERNAME)
+            for host in hosts
+        ])

         return inventory

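With the containerized (kubectl) branch removed above, `build_inventory` now only emits a plain INI-style host list. A minimal standalone sketch of that behavior, with `ISOLATED_USERNAME` standing in for the `settings.AWX_ISOLATED_USERNAME` value (an assumption made for illustration):

```python
# Sketch of the simplified build_inventory(): every isolated host becomes a
# plain "host ansible_ssh_user=<user>" line in a static inventory.
ISOLATED_USERNAME = 'awx'  # assumed stand-in for settings.AWX_ISOLATED_USERNAME


def build_inventory(hosts):
    """Render an INI-style static inventory for the isolated hosts."""
    return '\n'.join(
        '{} ansible_ssh_user={}'.format(host, ISOLATED_USERNAME)
        for host in hosts
    )


print(build_inventory(['node1.example.com', 'node2.example.com']))
```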
View File

@@ -2,22 +2,22 @@
 # All Rights Reserved

 from django.core.management.base import BaseCommand
+from django.conf import settings

 from crum import impersonate

-from awx.main.models import User, Organization, Project, Inventory, CredentialType, Credential, Host, JobTemplate
+from awx.main.models import (
+    User, Organization, Project, Inventory, CredentialType,
+    Credential, Host, JobTemplate, ExecutionEnvironment
+)
 from awx.main.signals import disable_computed_fields


 class Command(BaseCommand):
     """Create preloaded data, intended for new installs
     """
-    help = 'Creates a preload tower data iff there is none.'
+    help = 'Creates a preload tower data if there is none.'

     def handle(self, *args, **kwargs):
-        # Sanity check: Is there already an organization in the system?
-        if Organization.objects.count():
-            print('An organization is already in the system, exiting.')
-            print('(changed: False)')
-            return
+        changed = False

         # Create a default organization as the first superuser found.
         try:
@@ -26,44 +26,62 @@ class Command(BaseCommand):
             superuser = None
         with impersonate(superuser):
             with disable_computed_fields():
-                o = Organization.objects.create(name='Default')
-                p = Project(name='Demo Project',
-                            scm_type='git',
-                            scm_url='https://github.com/ansible/ansible-tower-samples',
-                            scm_update_on_launch=True,
-                            scm_update_cache_timeout=0,
-                            organization=o)
-                p.save(skip_update=True)
-                ssh_type = CredentialType.objects.filter(namespace='ssh').first()
-                c = Credential.objects.create(credential_type=ssh_type,
-                                              name='Demo Credential',
-                                              inputs={
-                                                  'username': superuser.username
-                                              },
-                                              created_by=superuser)
-                c.admin_role.members.add(superuser)
-                public_galaxy_credential = Credential(
-                    name='Ansible Galaxy',
-                    managed_by_tower=True,
-                    credential_type=CredentialType.objects.get(kind='galaxy'),
-                    inputs = {
-                        'url': 'https://galaxy.ansible.com/'
-                    }
-                )
-                public_galaxy_credential.save()
-                o.galaxy_credentials.add(public_galaxy_credential)
-                i = Inventory.objects.create(name='Demo Inventory',
-                                             organization=o,
-                                             created_by=superuser)
-                Host.objects.create(name='localhost',
-                                    inventory=i,
-                                    variables="ansible_connection: local\nansible_python_interpreter: '{{ ansible_playbook_python }}'",
-                                    created_by=superuser)
-                jt = JobTemplate.objects.create(name='Demo Job Template',
-                                                playbook='hello_world.yml',
-                                                project=p,
-                                                inventory=i)
-                jt.credentials.add(c)
-                print('Default organization added.')
-                print('Demo Credential, Inventory, and Job Template added.')
-                print('(changed: True)')
+                if not Organization.objects.exists():
+                    o = Organization.objects.create(name='Default')
+
+                    p = Project(name='Demo Project',
+                                scm_type='git',
+                                scm_url='https://github.com/ansible/ansible-tower-samples',
+                                scm_update_on_launch=True,
+                                scm_update_cache_timeout=0,
+                                organization=o)
+                    p.save(skip_update=True)
+
+                    ssh_type = CredentialType.objects.filter(namespace='ssh').first()
+                    c = Credential.objects.create(credential_type=ssh_type,
+                                                  name='Demo Credential',
+                                                  inputs={
+                                                      'username': superuser.username
+                                                  },
+                                                  created_by=superuser)
+
+                    c.admin_role.members.add(superuser)
+
+                    public_galaxy_credential = Credential(name='Ansible Galaxy',
+                                                          managed_by_tower=True,
+                                                          credential_type=CredentialType.objects.get(kind='galaxy'),
+                                                          inputs={'url': 'https://galaxy.ansible.com/'})
+                    public_galaxy_credential.save()
+                    o.galaxy_credentials.add(public_galaxy_credential)
+
+                    i = Inventory.objects.create(name='Demo Inventory',
+                                                 organization=o,
+                                                 created_by=superuser)
+
+                    Host.objects.create(name='localhost',
+                                        inventory=i,
+                                        variables="ansible_connection: local\nansible_python_interpreter: '{{ ansible_playbook_python }}'",
+                                        created_by=superuser)
+
+                    jt = JobTemplate.objects.create(name='Demo Job Template',
+                                                    playbook='hello_world.yml',
+                                                    project=p,
+                                                    inventory=i)
+                    jt.credentials.add(c)
+
+                    print('Default organization added.')
+                    print('Demo Credential, Inventory, and Job Template added.')
+                    changed = True
+
+                default_ee = settings.AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE
+                ee, created = ExecutionEnvironment.objects.get_or_create(name='Default EE', defaults={'image': default_ee,
+                                                                                                     'managed_by_tower': True})
+                if created:
+                    changed = True
+                    print('Default Execution Environment registered.')
+
+        if changed:
+            print('(changed: True)')
+        else:
+            print('(changed: False)')

View File

@@ -237,7 +237,7 @@ class InstanceGroupManager(models.Manager):
             elif t.status == 'running':
                 # Subtract capacity from all groups that contain the instance
                 if t.execution_node not in instance_ig_mapping:
-                    if not t.is_containerized:
+                    if not t.is_container_group_task:
                         logger.warning('Detected %s running inside lost instance, '
                                        'may still be waiting for reaper.', t.log_format)
                 if t.instance_group:

View File

@@ -0,0 +1,59 @@
# Generated by Django 2.2.11 on 2020-07-08 18:42
from django.conf import settings
from django.db import migrations, models
import django.db.models.deletion
import django.db.models.expressions
import taggit.managers
class Migration(migrations.Migration):
dependencies = [
('taggit', '0003_taggeditem_add_unique_index'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('main', '0123_drop_hg_support'),
]
operations = [
migrations.CreateModel(
name='ExecutionEnvironment',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('created', models.DateTimeField(default=None, editable=False)),
('modified', models.DateTimeField(default=None, editable=False)),
('description', models.TextField(blank=True, default='')),
('image', models.CharField(help_text='The registry location where the container is stored.', max_length=1024, verbose_name='image location')),
('managed_by_tower', models.BooleanField(default=False, editable=False)),
('created_by', models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{'class': 'executionenvironment', 'model_name': 'executionenvironment', 'app_label': 'main'}(class)s_created+", to=settings.AUTH_USER_MODEL)),
('credential', models.ForeignKey(blank=True, default=None, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='executionenvironments', to='main.Credential')),
('modified_by', models.ForeignKey(default=None, editable=False, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name="{'class': 'executionenvironment', 'model_name': 'executionenvironment', 'app_label': 'main'}(class)s_modified+", to=settings.AUTH_USER_MODEL)),
('organization', models.ForeignKey(blank=True, default=None, help_text='The organization used to determine access to this execution environment.', null=True, on_delete=django.db.models.deletion.CASCADE, related_name='executionenvironments', to='main.Organization')),
('tags', taggit.managers.TaggableManager(blank=True, help_text='A comma-separated list of tags.', through='taggit.TaggedItem', to='taggit.Tag', verbose_name='Tags')),
],
options={
'ordering': (django.db.models.expressions.OrderBy(django.db.models.expressions.F('organization_id'), nulls_first=True), 'image'),
'unique_together': {('organization', 'image')},
},
),
migrations.AddField(
model_name='activitystream',
name='execution_environment',
field=models.ManyToManyField(blank=True, to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='organization',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run by this organization.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='unifiedjob',
name='execution_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The container image to be used for execution.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='unifiedjobs', to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='execution_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The container image to be used for execution.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='unifiedjobtemplates', to='main.ExecutionEnvironment'),
),
]

View File

@@ -0,0 +1,46 @@
# Generated by Django 2.2.16 on 2020-11-19 16:20
import uuid
import awx.main.fields
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0124_execution_environments'),
]
operations = [
migrations.AlterModelOptions(
name='executionenvironment',
options={'ordering': ('-created',)},
),
migrations.AddField(
model_name='executionenvironment',
name='name',
field=models.CharField(default=uuid.uuid4, max_length=512, unique=True),
preserve_default=False,
),
migrations.AddField(
model_name='organization',
name='execution_environment_admin_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role='admin_role', related_name='+', to='main.Role'),
preserve_default='True',
),
migrations.AddField(
model_name='project',
name='default_environment',
field=models.ForeignKey(blank=True, default=None, help_text='The default execution environment for jobs run using this project.', null=True, on_delete=django.db.models.deletion.SET_NULL, related_name='+', to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='credentialtype',
name='kind',
field=models.CharField(choices=[('ssh', 'Machine'), ('vault', 'Vault'), ('net', 'Network'), ('scm', 'Source Control'), ('cloud', 'Cloud'), ('registry', 'Container Registry'), ('token', 'Personal Access Token'), ('insights', 'Insights'), ('external', 'External'), ('kubernetes', 'Kubernetes'), ('galaxy', 'Galaxy/Automation Hub')], max_length=32),
),
migrations.AlterUniqueTogether(
name='executionenvironment',
unique_together=set(),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.16 on 2021-01-27 22:31
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('main', '0125_more_ee_modeling_changes'),
]
operations = [
migrations.AddField(
model_name='executionenvironment',
name='pull',
field=models.CharField(choices=[('always', 'Always pull container before running.'), ('missing', 'No pull option has been selected.'), ('never', 'Never pull container before running.')], blank=True, default='', help_text='Pull image before running?', max_length=16),
),
]

View File

@@ -0,0 +1,18 @@
# Generated by Django 2.2.16 on 2021-02-15 22:02
from django.db import migrations
def reset_pod_specs(apps, schema_editor):
InstanceGroup = apps.get_model('main', 'InstanceGroup')
InstanceGroup.objects.update(pod_spec_override="")
class Migration(migrations.Migration):
dependencies = [
('main', '0126_executionenvironment_container_options'),
]
operations = [
migrations.RunPython(reset_pod_specs)
]

View File

@@ -0,0 +1,20 @@
# Generated by Django 2.2.16 on 2021-02-18 22:57
import awx.main.fields
from django.db import migrations
import django.db.models.deletion
class Migration(migrations.Migration):
dependencies = [
('main', '0127_reset_pod_spec_override'),
]
operations = [
migrations.AlterField(
model_name='organization',
name='read_role',
field=awx.main.fields.ImplicitRoleField(editable=False, null='True', on_delete=django.db.models.deletion.CASCADE, parent_role=['member_role', 'auditor_role', 'execute_role', 'project_admin_role', 'inventory_admin_role', 'workflow_admin_role', 'notification_admin_role', 'credential_admin_role', 'job_template_admin_role', 'approval_role', 'execution_environment_admin_role'], related_name='+', to='main.Role'),
),
]

View File

@@ -0,0 +1,19 @@
# Generated by Django 2.2.16 on 2021-02-16 20:27
import awx.main.fields
from django.db import migrations
class Migration(migrations.Migration):
dependencies = [
('main', '0128_organiaztion_read_roles_ee_admin'),
]
operations = [
migrations.AddField(
model_name='unifiedjob',
name='installed_collections',
field=awx.main.fields.JSONBField(blank=True, default=dict, editable=False, help_text='The Collections names and versions installed in the execution environment.'),
),
]

View File

@@ -35,6 +35,7 @@ from awx.main.models.events import ( # noqa
 )
 from awx.main.models.ad_hoc_commands import AdHocCommand  # noqa
 from awx.main.models.schedules import Schedule  # noqa
+from awx.main.models.execution_environments import ExecutionEnvironment  # noqa
 from awx.main.models.activity_stream import ActivityStream  # noqa
 from awx.main.models.ha import ( # noqa
     Instance, InstanceGroup, TowerScheduleState,
@@ -45,7 +46,7 @@ from awx.main.models.rbac import ( # noqa
     ROLE_SINGLETON_SYSTEM_AUDITOR,
 )
 from awx.main.models.mixins import ( # noqa
-    CustomVirtualEnvMixin, ResourceMixin, SurveyJobMixin,
+    CustomVirtualEnvMixin, ExecutionEnvironmentMixin, ResourceMixin, SurveyJobMixin,
     SurveyJobTemplateMixin, TaskManagerInventoryUpdateMixin,
     TaskManagerJobMixin, TaskManagerProjectUpdateMixin,
     TaskManagerUnifiedJobMixin,
@@ -221,6 +222,7 @@ activity_stream_registrar.connect(CredentialType)
 activity_stream_registrar.connect(Team)
 activity_stream_registrar.connect(Project)
 #activity_stream_registrar.connect(ProjectUpdate)
+activity_stream_registrar.connect(ExecutionEnvironment)
 activity_stream_registrar.connect(JobTemplate)
 activity_stream_registrar.connect(Job)
 activity_stream_registrar.connect(AdHocCommand)

View File

@@ -61,6 +61,7 @@ class ActivityStream(models.Model):
     team = models.ManyToManyField("Team", blank=True)
     project = models.ManyToManyField("Project", blank=True)
     project_update = models.ManyToManyField("ProjectUpdate", blank=True)
+    execution_environment = models.ManyToManyField("ExecutionEnvironment", blank=True)
     job_template = models.ManyToManyField("JobTemplate", blank=True)
     job = models.ManyToManyField("Job", blank=True)
     workflow_job_template_node = models.ManyToManyField("WorkflowJobTemplateNode", blank=True)
@@ -74,6 +75,7 @@ class ActivityStream(models.Model):
     ad_hoc_command = models.ManyToManyField("AdHocCommand", blank=True)
     schedule = models.ManyToManyField("Schedule", blank=True)
     custom_inventory_script = models.ManyToManyField("CustomInventoryScript", blank=True)
+    execution_environment = models.ManyToManyField("ExecutionEnvironment", blank=True)
     notification_template = models.ManyToManyField("NotificationTemplate", blank=True)
     notification = models.ManyToManyField("Notification", blank=True)
     label = models.ManyToManyField("Label", blank=True)

View File

@@ -151,8 +151,8 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
         return True

     @property
-    def is_containerized(self):
-        return bool(self.instance_group and self.instance_group.is_containerized)
+    def is_container_group_task(self):
+        return bool(self.instance_group and self.instance_group.is_container_group)

     @property
     def can_run_containerized(self):
@@ -198,8 +198,8 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
     def copy(self):
         data = {}
         for field in ('job_type', 'inventory_id', 'limit', 'credential_id',
-                      'module_name', 'module_args', 'forks', 'verbosity',
-                      'extra_vars', 'become_enabled', 'diff_mode'):
+                      'execution_environment_id', 'module_name', 'module_args',
+                      'forks', 'verbosity', 'extra_vars', 'become_enabled', 'diff_mode'):
             data[field] = getattr(self, field)
         return AdHocCommand.objects.create(**data)
@@ -209,6 +209,9 @@ class AdHocCommand(UnifiedJob, JobNotificationMixin):
             self.name = Truncator(u': '.join(filter(None, (self.module_name, self.module_args)))).chars(512)
             if 'name' not in update_fields:
                 update_fields.append('name')
+        if not self.execution_environment_id:
+            self.execution_environment = self.resolve_execution_environment()
+            update_fields.append('execution_environment')
         super(AdHocCommand, self).save(*args, **kwargs)

     @property

View File

@@ -331,6 +331,7 @@ class CredentialType(CommonModelNameNotUnique):
         ('net', _('Network')),
         ('scm', _('Source Control')),
         ('cloud', _('Cloud')),
+        ('registry', _('Container Registry')),
         ('token', _('Personal Access Token')),
         ('insights', _('Insights')),
         ('external', _('External')),
@@ -528,15 +529,20 @@
                 with open(path, 'w') as f:
                     f.write(data)
                 os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
+                # FIXME: develop some better means of referencing paths inside containers
+                container_path = os.path.join(
+                    '/runner',
+                    os.path.basename(path)
+                )

                 # determine if filename indicates single file or many
                 if file_label.find('.') == -1:
-                    tower_namespace.filename = path
+                    tower_namespace.filename = container_path
                 else:
                     if not hasattr(tower_namespace, 'filename'):
                         tower_namespace.filename = TowerNamespace()
                     file_label = file_label.split('.')[1]
-                    setattr(tower_namespace.filename, file_label, path)
+                    setattr(tower_namespace.filename, file_label, container_path)

         injector_field = self._meta.get_field('injectors')
         for env_var, tmpl in self.injectors.get('env', {}).items():
@@ -564,7 +570,12 @@
         if extra_vars:
             path = build_extra_vars_file(extra_vars, private_data_dir)
-            args.extend(['-e', '@%s' % path])
+            # FIXME: develop some better means of referencing paths inside containers
+            container_path = os.path.join(
+                '/runner',
+                os.path.basename(path)
+            )
+            args.extend(['-e', '@%s' % container_path])


 class ManagedCredentialType(SimpleNamespace):
@@ -1123,7 +1134,6 @@
     },
 )

-
 ManagedCredentialType(
     namespace='kubernetes_bearer_token',
     kind='kubernetes',
@@ -1155,6 +1165,37 @@
     }
 )

+ManagedCredentialType(
+    namespace='registry',
+    kind='registry',
+    name=ugettext_noop('Container Registry'),
+    inputs={
+        'fields': [{
+            'id': 'host',
+            'label': ugettext_noop('Authentication URL'),
+            'type': 'string',
+            'help_text': ugettext_noop('Authentication endpoint for the container registry.'),
+        }, {
+            'id': 'username',
+            'label': ugettext_noop('Username'),
+            'type': 'string',
+        }, {
+            'id': 'password',
+            'label': ugettext_noop('Password'),
+            'type': 'string',
+            'secret': True,
+        }, {
+            'id': 'token',
+            'label': ugettext_noop('Access Token'),
+            'type': 'string',
+            'secret': True,
+            'help_text': ugettext_noop('A token to use to authenticate with. '
+                                       'This should not be set if username/password are being used.'),
+        }],
+        'required': ['host'],
+    }
+)
+
 ManagedCredentialType(
     namespace='galaxy_api_token',

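Several of the hunks above rewrite host-side temp file paths into paths under the execution environment's working directory, keeping only the basename. A sketch of that remapping, assuming `/runner` is the bind-mounted private data directory inside the container (as the FIXME comments in the diff imply):

```python
import os

# Assumed mount point of the job's private data dir inside the EE container.
CONTAINER_ROOT = '/runner'


def to_container_path(host_path):
    """Map a temp file written on the host to its path inside the container.

    Only the basename survives: the host-side directory layout is not
    visible from inside the container.
    """
    return os.path.join(CONTAINER_ROOT, os.path.basename(host_path))


print(to_container_path('/tmp/awx_1234/credential_99'))
```

This is the same `os.path.join('/runner', os.path.basename(path))` expression the diff repeats in the credential injectors; factoring it into a helper like this is one obvious follow-up to the FIXME notes.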
View File

@@ -35,8 +35,8 @@ def gce(cred, env, private_data_dir):
     json.dump(json_cred, f, indent=2)
     f.close()
     os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
-    env['GCE_CREDENTIALS_FILE_PATH'] = path
-    env['GCP_SERVICE_ACCOUNT_FILE'] = path
+    env['GCE_CREDENTIALS_FILE_PATH'] = os.path.join('/runner', os.path.basename(path))
+    env['GCP_SERVICE_ACCOUNT_FILE'] = os.path.join('/runner', os.path.basename(path))

     # Handle env variables for new module types.
     # This includes gcp_compute inventory plugin and
@@ -105,7 +105,8 @@ def openstack(cred, env, private_data_dir):
     yaml.safe_dump(openstack_data, f, default_flow_style=False, allow_unicode=True)
     f.close()
     os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
-    env['OS_CLIENT_CONFIG_FILE'] = path
+    # TODO: constant for container base path
+    env['OS_CLIENT_CONFIG_FILE'] = os.path.join('/runner', os.path.basename(path))


 def kubernetes_bearer_token(cred, env, private_data_dir):

View File

@@ -0,0 +1,53 @@
from django.db import models
from django.utils.translation import ugettext_lazy as _
from awx.api.versioning import reverse
from awx.main.models.base import CommonModel
__all__ = ['ExecutionEnvironment']
class ExecutionEnvironment(CommonModel):
class Meta:
ordering = ('-created',)
PULL_CHOICES = [
('always', _("Always pull container before running.")),
('missing', _("No pull option has been selected.")),
('never', _("Never pull container before running."))
]
organization = models.ForeignKey(
'Organization',
null=True,
default=None,
blank=True,
on_delete=models.CASCADE,
related_name='%(class)ss',
help_text=_('The organization used to determine access to this execution environment.'),
)
image = models.CharField(
max_length=1024,
verbose_name=_('image location'),
help_text=_("The registry location where the container is stored."),
)
managed_by_tower = models.BooleanField(default=False, editable=False)
credential = models.ForeignKey(
'Credential',
related_name='%(class)ss',
blank=True,
null=True,
default=None,
on_delete=models.SET_NULL,
)
pull = models.CharField(
max_length=16,
choices=PULL_CHOICES,
blank=True,
default='',
help_text=_('Pull image before running?'),
)
def get_absolute_url(self, request=None):
return reverse('api:execution_environment_detail', kwargs={'pk': self.pk}, request=request)

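The new model's `pull` field (`always` / `missing` / `never`, with blank meaning "unset") maps naturally onto a container runtime's pull policy. A hedged sketch of that translation, using podman's `--pull` flag as the example target; how AWX's job runner actually consumes this field is not shown in the diff:

```python
# Illustrative mapping from ExecutionEnvironment.pull to runtime CLI args.
# The '--pull' flag name follows podman; this helper is an assumption, not AWX code.
def pull_args(pull):
    """Translate an ExecutionEnvironment.pull value into container CLI args."""
    if pull not in ('always', 'missing', 'never', ''):
        raise ValueError('unexpected pull policy: %r' % pull)
    if not pull:
        return []  # blank field: defer to the runtime's default behavior
    return ['--pull', pull]


print(pull_args('always'))
```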
View File

@@ -147,6 +147,13 @@ class Instance(HasPolicyEditsMixin, BaseModel):
         return self.rampart_groups.filter(controller__isnull=False).exists()

     def refresh_capacity(self):
+        if settings.IS_K8S:
+            self.capacity = self.cpu = self.memory = self.cpu_capacity = self.mem_capacity = 0  # noqa
+            self.version = awx_application_version
+            self.save(update_fields=['capacity', 'version', 'modified', 'cpu',
+                                     'memory', 'cpu_capacity', 'mem_capacity'])
+            return
         cpu = get_cpu_capacity()
         mem = get_mem_capacity()
         if self.enabled:
@@ -247,7 +254,10 @@ class InstanceGroup(HasPolicyEditsMixin, BaseModel, RelatedJobsMixin):
         return bool(self.controller)

     @property
-    def is_containerized(self):
+    def is_container_group(self):
+        if settings.IS_K8S:
+            return True
         return bool(self.credential and self.credential.kubernetes)

     '''
@@ -306,9 +316,9 @@ def schedule_policy_task():
 @receiver(post_save, sender=InstanceGroup)
 def on_instance_group_saved(sender, instance, created=False, raw=False, **kwargs):
     if created or instance.has_policy_changes():
-        if not instance.is_containerized:
+        if not instance.is_container_group:
             schedule_policy_task()
-    elif created or instance.is_containerized:
+    elif created or instance.is_container_group:
         instance.set_default_policy_fields()
@@ -320,7 +330,7 @@ def on_instance_saved(sender, instance, created=False, raw=False, **kwargs):

 @receiver(post_delete, sender=InstanceGroup)
 def on_instance_group_deleted(sender, instance, using, **kwargs):
-    if not instance.is_containerized:
+    if not instance.is_container_group:
         schedule_policy_task()

View File

@@ -1373,6 +1373,7 @@ class PluginFileInjector(object):
     collection = None
     collection_migration = '2.9'  # Starting with this version, we use collections

+    # TODO: delete this method and update unit tests
     @classmethod
     def get_proper_name(cls):
         if cls.plugin_name is None:
@@ -1397,13 +1398,12 @@
     def inventory_as_dict(self, inventory_update, private_data_dir):
         source_vars = dict(inventory_update.source_vars_dict)  # make a copy
-        proper_name = self.get_proper_name()
         '''
         None conveys that we should use the user-provided plugin.
         Note that a plugin value of '' should still be overridden.
         '''
-        if proper_name is not None:
-            source_vars['plugin'] = proper_name
+        if self.plugin_name is not None:
+            source_vars['plugin'] = self.plugin_name
         return source_vars

     def build_env(self, inventory_update, env, private_data_dir, private_data_files):
@@ -1441,7 +1441,6 @@
     def get_plugin_env(self, inventory_update, private_data_dir, private_data_files):
         env = self._get_shared_env(inventory_update, private_data_dir, private_data_files)
-        env['ANSIBLE_COLLECTIONS_PATHS'] = settings.AWX_ANSIBLE_COLLECTIONS_PATHS
         return env

     def build_private_data(self, inventory_update, private_data_dir):
@@ -1544,7 +1543,7 @@ class openstack(PluginFileInjector):
         env = super(openstack, self).get_plugin_env(inventory_update, private_data_dir, private_data_files)
         credential = inventory_update.get_cloud_credential()
         cred_data = private_data_files['credentials']
-        env['OS_CLIENT_CONFIG_FILE'] = cred_data[credential]
+        env['OS_CLIENT_CONFIG_FILE'] = os.path.join('/runner', os.path.basename(cred_data[credential]))
         return env
@@ -1574,6 +1573,12 @@ class satellite6(PluginFileInjector):
         ret['FOREMAN_PASSWORD'] = credential.get_input('password', default='')
         return ret

+    def inventory_as_dict(self, inventory_update, private_data_dir):
+        ret = super(satellite6, self).inventory_as_dict(inventory_update, private_data_dir)
+        # this inventory plugin requires the fully qualified inventory plugin name
+        ret['plugin'] = f'{self.namespace}.{self.collection}.{self.plugin_name}'
+        return ret
+

 class tower(PluginFileInjector):
     plugin_name = 'tower'


@@ -284,7 +284,7 @@ class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, Resour
     def _get_unified_job_field_names(cls):
         return set(f.name for f in JobOptions._meta.fields) | set(
             ['name', 'description', 'organization', 'survey_passwords', 'labels', 'credentials',
-             'job_slice_number', 'job_slice_count']
+             'job_slice_number', 'job_slice_count', 'execution_environment']
         )
     @property
@@ -768,11 +768,11 @@ class Job(UnifiedJob, JobOptions, SurveyJobMixin, JobNotificationMixin, TaskMana
     @property
     def can_run_containerized(self):
-        return any([ig for ig in self.preferred_instance_groups if ig.is_containerized])
+        return any([ig for ig in self.preferred_instance_groups if ig.is_container_group])
     @property
-    def is_containerized(self):
-        return bool(self.instance_group and self.instance_group.is_containerized)
+    def is_container_group_task(self):
+        return bool(self.instance_group and self.instance_group.is_container_group)
     @property
     def preferred_instance_groups(self):
@@ -1286,6 +1286,8 @@ class SystemJob(UnifiedJob, SystemJobOptions, JobNotificationMixin):
     @property
     def task_impact(self):
+        if settings.IS_K8S:
+            return 0
         return 5
     @property


@@ -34,7 +34,7 @@ logger = logging.getLogger('awx.main.models.mixins')
 __all__ = ['ResourceMixin', 'SurveyJobTemplateMixin', 'SurveyJobMixin',
            'TaskManagerUnifiedJobMixin', 'TaskManagerJobMixin', 'TaskManagerProjectUpdateMixin',
-           'TaskManagerInventoryUpdateMixin', 'CustomVirtualEnvMixin']
+           'TaskManagerInventoryUpdateMixin', 'ExecutionEnvironmentMixin', 'CustomVirtualEnvMixin']
 class ResourceMixin(models.Model):
@@ -441,6 +441,44 @@ class TaskManagerInventoryUpdateMixin(TaskManagerUpdateOnLaunchMixin):
         abstract = True
+class ExecutionEnvironmentMixin(models.Model):
+    class Meta:
+        abstract = True
+    execution_environment = models.ForeignKey(
+        'ExecutionEnvironment',
+        null=True,
+        blank=True,
+        default=None,
+        on_delete=models.SET_NULL,
+        related_name='%(class)ss',
+        help_text=_('The container image to be used for execution.'),
+    )
+    def get_execution_environment_default(self):
+        from awx.main.models.execution_environments import ExecutionEnvironment
+        if settings.DEFAULT_EXECUTION_ENVIRONMENT is not None:
+            return settings.DEFAULT_EXECUTION_ENVIRONMENT
+        return ExecutionEnvironment.objects.filter(organization=None, managed_by_tower=True).first()
+    def resolve_execution_environment(self):
+        """
+        Return the execution environment that should be used when creating a new job.
+        """
+        if self.execution_environment is not None:
+            return self.execution_environment
+        if getattr(self, 'project_id', None) and self.project.default_environment is not None:
+            return self.project.default_environment
+        if getattr(self, 'organization', None) and self.organization.default_environment is not None:
+            return self.organization.default_environment
+        if getattr(self, 'inventory', None) and self.inventory.organization is not None:
+            if self.inventory.organization.default_environment is not None:
+                return self.inventory.organization.default_environment
+        return self.get_execution_environment_default()
 class CustomVirtualEnvMixin(models.Model):
     class Meta:
         abstract = True
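The `resolve_execution_environment` method added above establishes a fixed precedence: an explicit EE on the object itself, then the project's default, then the organization's default, then the inventory's organization default, and finally the managed global default. A minimal standalone sketch of that chain (the `SimpleNamespace` objects here are hypothetical stand-ins for the real Django models; only the precedence order is taken from the mixin):

```python
# Sketch of the execution-environment resolution chain, with fake objects
# standing in for the Django models.
from types import SimpleNamespace


def resolve_execution_environment(obj, global_default):
    # 1. explicit EE on the template/job itself wins
    if getattr(obj, 'execution_environment', None) is not None:
        return obj.execution_environment
    # 2. then the project's default_environment
    project = getattr(obj, 'project', None)
    if project is not None and project.default_environment is not None:
        return project.default_environment
    # 3. then the organization's default_environment
    org = getattr(obj, 'organization', None)
    if org is not None and org.default_environment is not None:
        return org.default_environment
    # 4. then the inventory's organization default
    inventory = getattr(obj, 'inventory', None)
    if inventory is not None and inventory.organization is not None:
        if inventory.organization.default_environment is not None:
            return inventory.organization.default_environment
    # 5. finally the managed global default
    return global_default


org = SimpleNamespace(default_environment='org-ee')
jt = SimpleNamespace(execution_environment=None, project=None,
                     organization=org, inventory=None)
print(resolve_execution_environment(jt, 'global-ee'))  # → org-ee
```

Note the real mixin checks `project_id` rather than loading the related object, avoiding a query when no project is set.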


@@ -61,6 +61,15 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
         blank=True,
         related_name='%(class)s_notification_templates_for_approvals'
     )
+    default_environment = models.ForeignKey(
+        'ExecutionEnvironment',
+        null=True,
+        blank=True,
+        default=None,
+        on_delete=models.SET_NULL,
+        related_name='+',
+        help_text=_('The default execution environment for jobs run by this organization.'),
+    )
     admin_role = ImplicitRoleField(
         parent_role='singleton:' + ROLE_SINGLETON_SYSTEM_ADMINISTRATOR,
@@ -86,6 +95,9 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
     job_template_admin_role = ImplicitRoleField(
         parent_role='admin_role',
     )
+    execution_environment_admin_role = ImplicitRoleField(
+        parent_role='admin_role',
+    )
     auditor_role = ImplicitRoleField(
         parent_role='singleton:' + ROLE_SINGLETON_SYSTEM_AUDITOR,
     )
@@ -97,7 +109,8 @@ class Organization(CommonModel, NotificationFieldsModel, ResourceMixin, CustomVi
                      'execute_role', 'project_admin_role',
                      'inventory_admin_role', 'workflow_admin_role',
                      'notification_admin_role', 'credential_admin_role',
-                     'job_template_admin_role', 'approval_role',],
+                     'job_template_admin_role', 'approval_role',
+                     'execution_environment_admin_role',],
     )
     approval_role = ImplicitRoleField(
         parent_role='admin_role',


@@ -187,6 +187,14 @@ class ProjectOptions(models.Model):
             pass
         return cred
+    def resolve_execution_environment(self):
+        """
+        Project updates, themselves, will use the default execution environment.
+        Jobs using the project can use the default_environment, but the project updates
+        are not flexible enough to allow customizing the image they use.
+        """
+        return self.get_execution_environment_default()
     def get_project_path(self, check_if_exists=True):
         local_path = os.path.basename(self.local_path)
         if local_path and not local_path.startswith('.'):
@@ -259,6 +267,15 @@ class Project(UnifiedJobTemplate, ProjectOptions, ResourceMixin, CustomVirtualEn
         app_label = 'main'
         ordering = ('id',)
+    default_environment = models.ForeignKey(
+        'ExecutionEnvironment',
+        null=True,
+        blank=True,
+        default=None,
+        on_delete=models.SET_NULL,
+        related_name='+',
+        help_text=_('The default execution environment for jobs run using this project.'),
+    )
     scm_update_on_launch = models.BooleanField(
         default=False,
         help_text=_('Update the project when a job is launched that uses the project.'),
@@ -554,6 +571,8 @@ class ProjectUpdate(UnifiedJob, ProjectOptions, JobNotificationMixin, TaskManage
     @property
     def task_impact(self):
+        if settings.IS_K8S:
+            return 0
         return 0 if self.job_type == 'run' else 1
     @property


@@ -40,6 +40,7 @@ role_names = {
     'inventory_admin_role': _('Inventory Admin'),
     'credential_admin_role': _('Credential Admin'),
     'job_template_admin_role': _('Job Template Admin'),
+    'execution_environment_admin_role': _('Execution Environment Admin'),
     'workflow_admin_role': _('Workflow Admin'),
     'notification_admin_role': _('Notification Admin'),
     'auditor_role': _('Auditor'),
@@ -60,6 +61,7 @@ role_descriptions = {
     'inventory_admin_role': _('Can manage all inventories of the %s'),
     'credential_admin_role': _('Can manage all credentials of the %s'),
     'job_template_admin_role': _('Can manage all job templates of the %s'),
+    'execution_environment_admin_role': _('Can manage all execution environments of the %s'),
     'workflow_admin_role': _('Can manage all workflows of the %s'),
     'notification_admin_role': _('Can manage all notifications of the %s'),
     'auditor_role': _('Can view all aspects of the %s'),


@@ -39,7 +39,7 @@ from awx.main.models.base import (
 from awx.main.dispatch import get_local_queuename
 from awx.main.dispatch.control import Control as ControlDispatcher
 from awx.main.registrar import activity_stream_registrar
-from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin
+from awx.main.models.mixins import ResourceMixin, TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin
 from awx.main.utils import (
     camelcase_to_underscore, get_model_for_type,
     encrypt_dict, decrypt_field, _inventory_updates,
@@ -50,7 +50,7 @@ from awx.main.utils import (
 from awx.main.constants import ACTIVE_STATES, CAN_CANCEL
 from awx.main.redact import UriCleaner, REPLACE_STR
 from awx.main.consumers import emit_channel_notification
-from awx.main.fields import JSONField, AskForField, OrderedManyToManyField
+from awx.main.fields import JSONField, JSONBField, AskForField, OrderedManyToManyField
 __all__ = ['UnifiedJobTemplate', 'UnifiedJob', 'StdoutMaxBytesExceeded']
@@ -59,7 +59,7 @@ logger_job_lifecycle = logging.getLogger('awx.analytics.job_lifecycle')
 # NOTE: ACTIVE_STATES moved to constants because it is used by parent modules
-class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, NotificationFieldsModel):
+class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, ExecutionEnvironmentMixin, NotificationFieldsModel):
     '''
     Concrete base class for unified job templates.
     '''
@@ -376,6 +376,8 @@ class UnifiedJobTemplate(PolymorphicModel, CommonModelNameNotUnique, Notificatio
         for fd, val in eager_fields.items():
             setattr(unified_job, fd, val)
+        unified_job.execution_environment = self.resolve_execution_environment()
         # NOTE: slice workflow jobs _get_parent_field_name method
         # is not correct until this is set
         if not parent_field_name:
@@ -527,7 +529,7 @@ class StdoutMaxBytesExceeded(Exception):
 class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique,
-                 UnifiedJobTypeStringMixin, TaskManagerUnifiedJobMixin):
+                 UnifiedJobTypeStringMixin, TaskManagerUnifiedJobMixin, ExecutionEnvironmentMixin):
     '''
     Concrete base class for unified job run by the task engine.
     '''
@@ -720,6 +722,12 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
         'Credential',
         related_name='%(class)ss',
     )
+    installed_collections = JSONBField(
+        blank=True,
+        default=dict,
+        editable=False,
+        help_text=_("The Collections names and versions installed in the execution environment."),
+    )
     def get_absolute_url(self, request=None):
         RealClass = self.get_real_instance_class()
@@ -1488,7 +1496,7 @@ class UnifiedJob(PolymorphicModel, PasswordFieldsModel, CommonModelNameNotUnique
         return bool(self.controller_node)
     @property
-    def is_containerized(self):
+    def is_container_group_task(self):
         return False
     def log_lifecycle(self, state, blocked_by=None):


@@ -70,7 +70,7 @@ class TaskManager():
         '''
        Init AFTER we know this instance of the task manager will run because the lock is acquired.
         '''
-        instances = Instance.objects.filter(~Q(hostname=None), capacity__gt=0, enabled=True)
+        instances = Instance.objects.filter(~Q(hostname=None), enabled=True)
         self.real_instances = {i.hostname: i for i in instances}
         instances_partial = [SimpleNamespace(obj=instance,
@@ -86,7 +86,7 @@ class TaskManager():
                                                  capacity_total=rampart_group.capacity,
                                                  consumed_capacity=0,
                                                  instances=[])
-            for instance in rampart_group.instances.filter(capacity__gt=0, enabled=True).order_by('hostname'):
+            for instance in rampart_group.instances.filter(enabled=True).order_by('hostname'):
                 if instance.hostname in instances_by_hostname:
                     self.graph[rampart_group.name]['instances'].append(instances_by_hostname[instance.hostname])
@@ -283,12 +283,12 @@ class TaskManager():
             task.controller_node = controller_node
             logger.debug('Submitting isolated {} to queue {} controlled by {}.'.format(
                          task.log_format, task.execution_node, controller_node))
-        elif rampart_group.is_containerized:
+        elif rampart_group.is_container_group:
             # find one real, non-containerized instance with capacity to
             # act as the controller for k8s API interaction
             match = None
             for group in InstanceGroup.objects.all():
-                if group.is_containerized or group.controller_id:
+                if group.is_container_group or group.controller_id:
                     continue
                 match = group.fit_task_to_most_remaining_capacity_instance(task, group.instances.all())
                 if match:
@@ -521,14 +521,17 @@ class TaskManager():
                 self.start_task(task, None, task.get_jobs_fail_chain(), None)
                 continue
             for rampart_group in preferred_instance_groups:
-                if task.can_run_containerized and rampart_group.is_containerized:
+                if task.can_run_containerized and rampart_group.is_container_group:
                     self.graph[rampart_group.name]['graph'].add_job(task)
                     self.start_task(task, rampart_group, task.get_jobs_fail_chain(), None)
                     found_acceptable_queue = True
                     break
                 remaining_capacity = self.get_remaining_capacity(rampart_group.name)
-                if not rampart_group.is_containerized and self.get_remaining_capacity(rampart_group.name) <= 0:
+                if (
+                        task.task_impact > 0 and  # project updates have a cost of zero
+                        not rampart_group.is_container_group and
+                        self.get_remaining_capacity(rampart_group.name) <= 0):
                     logger.debug("Skipping group {}, remaining_capacity {} <= 0".format(
                                  rampart_group.name, remaining_capacity))
                     continue
@@ -536,8 +539,8 @@ class TaskManager():
                 execution_instance = InstanceGroup.fit_task_to_most_remaining_capacity_instance(task, self.graph[rampart_group.name]['instances']) or \
                     InstanceGroup.find_largest_idle_instance(self.graph[rampart_group.name]['instances'])
-                if execution_instance or rampart_group.is_containerized:
-                    if not rampart_group.is_containerized:
+                if execution_instance or rampart_group.is_container_group:
+                    if not rampart_group.is_container_group:
                         execution_instance.remaining_capacity = max(0, execution_instance.remaining_capacity - task.task_impact)
                         execution_instance.jobs_running += 1
                     logger.debug("Starting {} in group {} instance {} (remaining_capacity={})".format(
@@ -594,7 +597,7 @@ class TaskManager():
         ).exclude(
             execution_node__in=Instance.objects.values_list('hostname', flat=True)
         ):
            if j.execution_node and not j.is_container_group_task:
-            if j.execution_node and not j.is_containerized:
+            if j.execution_node and not j.is_container_group_task:
                 logger.error(f'{j.execution_node} is not a registered instance; reaping {j.log_format}')
                 reap_job(j, 'failed')
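The reworked capacity check in the scheduler loop above admits zero-impact tasks (project updates have a `task_impact` of 0) even when a non-container group has no remaining capacity. A standalone sketch of just that predicate, with illustrative argument names in place of the real `task` and `rampart_group` objects:

```python
# Sketch of the revised skip condition: only positive-impact tasks on a
# non-container group with exhausted capacity are skipped.
def should_skip_group(task_impact, is_container_group, remaining_capacity):
    return (
        task_impact > 0 and            # project updates have a cost of zero
        not is_container_group and
        remaining_capacity <= 0
    )


# a normal job on a full traditional group is skipped
print(should_skip_group(task_impact=1, is_container_group=False, remaining_capacity=0))  # → True
# a project update (impact 0) is allowed through even at zero capacity
print(should_skip_group(task_impact=0, is_container_group=False, remaining_capacity=0))  # → False
# container groups are never skipped on capacity here
print(should_skip_group(task_impact=1, is_container_group=True, remaining_capacity=0))   # → False
```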


@@ -368,6 +368,7 @@ def model_serializer_mapping():
         models.Credential: serializers.CredentialSerializer,
         models.Team: serializers.TeamSerializer,
         models.Project: serializers.ProjectSerializer,
+        models.ExecutionEnvironment: serializers.ExecutionEnvironmentSerializer,
         models.JobTemplate: serializers.JobTemplateWithSpecSerializer,
         models.Job: serializers.JobSerializer,
         models.AdHocCommand: serializers.AdHocCommandSerializer,


@@ -23,6 +23,10 @@ import fcntl
 from pathlib import Path
 from uuid import uuid4
 import urllib.parse as urlparse
+import socket
+import threading
+import concurrent.futures
+from base64 import b64encode
 # Django
 from django.conf import settings
@@ -36,9 +40,6 @@ from django.core.cache import cache
 from django.core.exceptions import ObjectDoesNotExist
 from django_guid.middleware import GuidMiddleware
-# Kubernetes
-from kubernetes.client.rest import ApiException
 # Django-CRUM
 from crum import impersonate
@@ -49,6 +50,9 @@ from gitdb.exc import BadName as BadGitName
 # Runner
 import ansible_runner
+# Receptor
+from receptorctl.socket_interface import ReceptorControl
 # AWX
 from awx import __version__ as awx_application_version
 from awx.main.constants import PRIVILEGE_ESCALATION_METHODS, STANDARD_INVENTORY_UPDATE_ENV
@@ -72,9 +76,10 @@ from awx.main.dispatch import get_local_queuename, reaper
 from awx.main.utils import (update_scm_url,
                             ignore_inventory_computed_fields,
                             ignore_inventory_group_removal, extract_ansible_vars, schedule_task_manager,
-                            get_awx_version)
+                            get_awx_version,
+                            deepmerge,
+                            parse_yaml_or_json)
 from awx.main.utils.ansible import read_ansible_config
-from awx.main.utils.common import get_custom_venv_choices
 from awx.main.utils.external_logging import reconfigure_rsyslog
 from awx.main.utils.safe_yaml import safe_dump, sanitize_jinja
 from awx.main.utils.reload import stop_local_services
@@ -257,7 +262,7 @@ def apply_cluster_membership_policies():
     # On a differential basis, apply instances to non-isolated groups
     with transaction.atomic():
         for g in actual_groups:
-            if g.obj.is_containerized:
+            if g.obj.is_container_group:
                 logger.debug('Skipping containerized group {} for policy calculation'.format(g.obj.name))
                 continue
             instances_to_add = set(g.instances) - set(g.prior_instances)
@@ -502,7 +507,7 @@ def cluster_node_heartbeat():
 def awx_k8s_reaper():
     from awx.main.scheduler.kubernetes import PodManager # prevent circular import
     for group in InstanceGroup.objects.filter(credential__isnull=False).iterator():
-        if group.is_containerized:
+        if group.is_container_group:
             logger.debug("Checking for orphaned k8s pods for {}.".format(group))
             for job in UnifiedJob.objects.filter(
                     pk__in=list(PodManager.list_active_jobs(group))
@@ -887,6 +892,34 @@ class BaseTask(object):
         '''
         return os.path.abspath(os.path.join(os.path.dirname(__file__), *args))
+    def build_execution_environment_params(self, instance):
+        if settings.IS_K8S:
+            return {}
+        if instance.execution_environment_id is None:
+            from awx.main.signals import disable_activity_stream
+            with disable_activity_stream():
+                self.instance = instance = self.update_model(
+                    instance.pk, execution_environment=instance.resolve_execution_environment())
+        image = instance.execution_environment.image
+        params = {
+            "container_image": image,
+            "process_isolation": True,
+            "container_options": ['--user=root'],
+        }
+        pull = instance.execution_environment.pull
+        if pull:
+            params['container_options'].append(f'--pull={pull}')
+        if settings.AWX_PROOT_SHOW_PATHS:
+            params['container_volume_mounts'] = []
+            for this_path in settings.AWX_PROOT_SHOW_PATHS:
+                params['container_volume_mounts'].append(f'{this_path}:{this_path}:Z')
+        return params
     def build_private_data(self, instance, private_data_dir):
         '''
         Return SSH private key data (only if stored in DB as ssh_key_data).
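`build_execution_environment_params` translates an `ExecutionEnvironment` row into keyword arguments for the runner invocation. A standalone sketch of the same mapping, with a plain dict standing in for the model and a tuple for `AWX_PROOT_SHOW_PATHS` (both are assumptions for illustration):

```python
# Sketch of the param-building logic above; `ee` stands in for the
# ExecutionEnvironment model and `show_paths` for AWX_PROOT_SHOW_PATHS.
def build_ee_params(ee, show_paths=()):
    params = {
        'container_image': ee['image'],
        'process_isolation': True,
        'container_options': ['--user=root'],
    }
    if ee.get('pull'):
        # pull policy is passed straight through to the container runtime
        params['container_options'].append(f"--pull={ee['pull']}")
    if show_paths:
        # each exposed path is bind-mounted at the same location, SELinux-relabeled
        params['container_volume_mounts'] = [f'{p}:{p}:Z' for p in show_paths]
    return params


params = build_ee_params({'image': 'quay.io/example/ee:latest', 'pull': 'always'},
                         show_paths=['/opt/data'])
print(params['container_options'])  # → ['--user=root', '--pull=always']
```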
@@ -981,46 +1014,6 @@ class BaseTask(object):
         Build ansible yaml file filled with extra vars to be passed via -e@file.yml
         '''
-    def build_params_process_isolation(self, instance, private_data_dir, cwd):
-        '''
-        Build ansible runner .run() parameters for process isolation.
-        '''
-        process_isolation_params = dict()
-        if self.should_use_proot(instance):
-            local_paths = [private_data_dir]
-            if cwd != private_data_dir and Path(private_data_dir) not in Path(cwd).parents:
-                local_paths.append(cwd)
-            show_paths = self.proot_show_paths + local_paths + \
-                settings.AWX_PROOT_SHOW_PATHS
-            pi_path = settings.AWX_PROOT_BASE_PATH
-            if not self.instance.is_isolated() and not self.instance.is_containerized:
-                pi_path = tempfile.mkdtemp(
-                    prefix='ansible_runner_pi_',
-                    dir=settings.AWX_PROOT_BASE_PATH
-                )
-                os.chmod(pi_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
-                self.cleanup_paths.append(pi_path)
-            process_isolation_params = {
-                'process_isolation': True,
-                'process_isolation_path': pi_path,
-                'process_isolation_show_paths': show_paths,
-                'process_isolation_hide_paths': [
-                    settings.AWX_PROOT_BASE_PATH,
-                    '/etc/tower',
-                    '/etc/ssh',
-                    '/var/lib/awx',
-                    '/var/log',
-                    settings.PROJECTS_ROOT,
-                    settings.JOBOUTPUT_ROOT,
-                ] + getattr(settings, 'AWX_PROOT_HIDE_PATHS', None) or [],
-                'process_isolation_ro_paths': [settings.ANSIBLE_VENV_PATH, settings.AWX_VENV_PATH],
-            }
-            if getattr(instance, 'ansible_virtualenv_path', settings.ANSIBLE_VENV_PATH) != settings.ANSIBLE_VENV_PATH:
-                process_isolation_params['process_isolation_ro_paths'].append(instance.ansible_virtualenv_path)
-        return process_isolation_params
     def build_params_resource_profiling(self, instance, private_data_dir):
         resource_profiling_params = {}
         if self.should_use_resource_profiling(instance):
@@ -1031,6 +1024,8 @@ class BaseTask(object):
             results_dir = os.path.join(private_data_dir, 'artifacts/playbook_profiling')
             if not os.path.isdir(results_dir):
                 os.makedirs(results_dir, stat.S_IREAD | stat.S_IWRITE | stat.S_IEXEC)
+            # FIXME: develop some better means of referencing paths inside containers
+            container_results_dir = os.path.join('/runner', 'artifacts/playbook_profiling')
             logger.debug('Collected the following resource profiling intervals: cpu: {} mem: {} pid: {}'
                          .format(cpu_poll_interval, mem_poll_interval, pid_poll_interval))
@@ -1040,7 +1035,7 @@ class BaseTask(object):
                 'resource_profiling_cpu_poll_interval': cpu_poll_interval,
                 'resource_profiling_memory_poll_interval': mem_poll_interval,
                 'resource_profiling_pid_poll_interval': pid_poll_interval,
-                'resource_profiling_results_dir': results_dir})
+                'resource_profiling_results_dir': container_results_dir})
         return resource_profiling_params
@@ -1063,30 +1058,18 @@ class BaseTask(object):
         os.chmod(path, stat.S_IRUSR)
         return path
-    def add_ansible_venv(self, venv_path, env, isolated=False):
-        env['VIRTUAL_ENV'] = venv_path
-        env['PATH'] = os.path.join(venv_path, "bin") + ":" + env['PATH']
-        venv_libdir = os.path.join(venv_path, "lib")
-        if not isolated and (
-            not os.path.exists(venv_libdir) or
-            os.path.join(venv_path, '') not in get_custom_venv_choices()
-        ):
-            raise InvalidVirtualenvError(_(
-                'Invalid virtual environment selected: {}'.format(venv_path)
-            ))
-        isolated_manager.set_pythonpath(venv_libdir, env)
     def add_awx_venv(self, env):
         env['VIRTUAL_ENV'] = settings.AWX_VENV_PATH
-        env['PATH'] = os.path.join(settings.AWX_VENV_PATH, "bin") + ":" + env['PATH']
+        if 'PATH' in env:
+            env['PATH'] = os.path.join(settings.AWX_VENV_PATH, "bin") + ":" + env['PATH']
+        else:
+            env['PATH'] = os.path.join(settings.AWX_VENV_PATH, "bin")
     def build_env(self, instance, private_data_dir, isolated, private_data_files=None):
         '''
         Build environment dictionary for ansible-playbook.
         '''
-        env = dict(os.environ.items())
+        env = {}
         # Add ANSIBLE_* settings to the subprocess environment.
         for attr in dir(settings):
             if attr == attr.upper() and attr.startswith('ANSIBLE_'):
@@ -1094,14 +1077,9 @@ class BaseTask(object):
         # Also set environment variables configured in AWX_TASK_ENV setting.
         for key, value in settings.AWX_TASK_ENV.items():
             env[key] = str(value)
-        # Set environment variables needed for inventory and job event
-        # callbacks to work.
-        # Update PYTHONPATH to use local site-packages.
-        # NOTE:
-        # Derived class should call add_ansible_venv() or add_awx_venv()
-        if self.should_use_proot(instance):
-            env['PROOT_TMP_DIR'] = settings.AWX_PROOT_BASE_PATH
         env['AWX_PRIVATE_DATA_DIR'] = private_data_dir
         return env
     def should_use_resource_profiling(self, job):
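The rewritten `build_env` starts from an empty dict rather than copying `os.environ`, so only whitelisted values (uppercase `ANSIBLE_*` settings, `AWX_TASK_ENV`, and the private data dir) reach the job process. A standalone sketch, with a `SimpleNamespace` standing in for Django settings:

```python
# Sketch of the rewritten build_env: the job environment no longer inherits
# os.environ; `settings` here is a fake stand-in for Django settings.
from types import SimpleNamespace


def build_env(settings, private_data_dir):
    env = {}
    # Add ANSIBLE_* settings to the subprocess environment.
    for attr in dir(settings):
        if attr == attr.upper() and attr.startswith('ANSIBLE_'):
            env[attr] = str(getattr(settings, attr))
    # Also set environment variables configured in AWX_TASK_ENV.
    for key, value in settings.AWX_TASK_ENV.items():
        env[key] = str(value)
    env['AWX_PRIVATE_DATA_DIR'] = private_data_dir
    return env


settings = SimpleNamespace(ANSIBLE_FORCE_COLOR=True,
                           AWX_TASK_ENV={'HTTP_PROXY': 'proxy:3128'})
env = build_env(settings, '/tmp/awx_123')
print(sorted(env))  # → ['ANSIBLE_FORCE_COLOR', 'AWX_PRIVATE_DATA_DIR', 'HTTP_PROXY']
```

Starting from an empty dict is what makes the conditional `PATH` handling in `add_awx_venv` necessary: the environment may no longer contain a `PATH` at all.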
@@ -1129,12 +1107,13 @@ class BaseTask(object):
                 for hostname, hv in script_data.get('_meta', {}).get('hostvars', {}).items()
             }
         json_data = json.dumps(script_data)
-        handle, path = tempfile.mkstemp(dir=private_data_dir)
-        f = os.fdopen(handle, 'w')
-        f.write('#! /usr/bin/env python\n# -*- coding: utf-8 -*-\nprint(%r)\n' % json_data)
-        f.close()
-        os.chmod(path, stat.S_IRUSR | stat.S_IXUSR | stat.S_IWUSR)
-        return path
+        path = os.path.join(private_data_dir, 'inventory')
+        os.makedirs(path, mode=0o700)
+        fn = os.path.join(path, 'hosts')
+        with open(fn, 'w') as f:
+            os.chmod(fn, stat.S_IRUSR | stat.S_IXUSR | stat.S_IWUSR)
+            f.write('#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\nprint(%r)\n' % json_data)
+        return fn
     def build_args(self, instance, private_data_dir, passwords):
         raise NotImplementedError
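The inventory now lands at a fixed `<private_data_dir>/inventory/hosts` path (a layout ansible-runner recognizes) instead of an anonymous `mkstemp` file. A standalone sketch of the same write, using a throwaway temp directory:

```python
# Sketch of the new on-disk inventory layout: an executable
# <private_data_dir>/inventory/hosts script that prints the serialized data.
import json
import os
import stat
import tempfile

script_data = {'all': {'hosts': ['host1']}, '_meta': {'hostvars': {}}}
json_data = json.dumps(script_data)

private_data_dir = tempfile.mkdtemp()
path = os.path.join(private_data_dir, 'inventory')
os.makedirs(path, mode=0o700)
fn = os.path.join(path, 'hosts')
with open(fn, 'w') as f:
    # owner read/write/execute only, as in the diff
    os.chmod(fn, stat.S_IRUSR | stat.S_IXUSR | stat.S_IWUSR)
    f.write('#! /usr/bin/env python3\n# -*- coding: utf-8 -*-\nprint(%r)\n' % json_data)

# the script body embeds the JSON as a repr'd string literal
with open(fn) as f:
    body = f.read()
print(json_data in body)  # → True
```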
@@ -1205,17 +1184,17 @@ class BaseTask(object):
instance.log_lifecycle("finalize_run") instance.log_lifecycle("finalize_run")
job_profiling_dir = os.path.join(private_data_dir, 'artifacts/playbook_profiling') job_profiling_dir = os.path.join(private_data_dir, 'artifacts/playbook_profiling')
awx_profiling_dir = '/var/log/tower/playbook_profiling/' awx_profiling_dir = '/var/log/tower/playbook_profiling/'
collections_info = os.path.join(private_data_dir, 'artifacts/', 'collections.json')
if not os.path.exists(awx_profiling_dir): if not os.path.exists(awx_profiling_dir):
os.mkdir(awx_profiling_dir) os.mkdir(awx_profiling_dir)
if os.path.isdir(job_profiling_dir): if os.path.isdir(job_profiling_dir):
shutil.copytree(job_profiling_dir, os.path.join(awx_profiling_dir, str(instance.pk))) shutil.copytree(job_profiling_dir, os.path.join(awx_profiling_dir, str(instance.pk)))
if os.path.exists(collections_info):
if instance.is_containerized: with open(collections_info) as ee_json_info:
from awx.main.scheduler.kubernetes import PodManager # prevent circular import ee_collections_info = json.loads(ee_json_info.read())
pm = PodManager(instance) instance.installed_collections = ee_collections_info
logger.debug(f"Deleting pod {pm.pod_name}") instance.save(update_fields=['installed_collections'])
pm.delete()
     def event_handler(self, event_data):
         #

@@ -1355,16 +1334,6 @@ class BaseTask(object):
         Run the job/task and capture its output.
         '''
         self.instance = self.model.objects.get(pk=pk)
-        containerized = self.instance.is_containerized
-        pod_manager = None
-        if containerized:
-            # Here we are trying to launch a pod before transitioning the job into a running
-            # state. For some scenarios (like waiting for resources to become available) we do this
-            # rather than marking the job as error or failed. This is not always desirable. Cases
-            # such as invalid authentication should surface as an error.
-            pod_manager = self.deploy_container_group_pod(self.instance)
-            if not pod_manager:
-                return
         # self.instance because of the update_model pattern and when it's used in callback handlers
         self.instance = self.update_model(pk, status='running',
@@ -1423,12 +1392,8 @@ class BaseTask(object):
         passwords = self.build_passwords(self.instance, kwargs)
         self.build_extra_vars_file(self.instance, private_data_dir)
         args = self.build_args(self.instance, private_data_dir, passwords)
-        cwd = self.build_cwd(self.instance, private_data_dir)
         resource_profiling_params = self.build_params_resource_profiling(self.instance,
                                                                          private_data_dir)
-        process_isolation_params = self.build_params_process_isolation(self.instance,
-                                                                       private_data_dir,
-                                                                       cwd)
         env = self.build_env(self.instance, private_data_dir, isolated,
                              private_data_files=private_data_files)
         self.safe_env = build_safe_env(env)
@@ -1451,27 +1416,17 @@ class BaseTask(object):
params = { params = {
'ident': self.instance.id, 'ident': self.instance.id,
'private_data_dir': private_data_dir, 'private_data_dir': private_data_dir,
'project_dir': cwd,
'playbook': self.build_playbook_path_relative_to_cwd(self.instance, private_data_dir), 'playbook': self.build_playbook_path_relative_to_cwd(self.instance, private_data_dir),
'inventory': self.build_inventory(self.instance, private_data_dir), 'inventory': self.build_inventory(self.instance, private_data_dir),
'passwords': expect_passwords, 'passwords': expect_passwords,
'envvars': env, 'envvars': env,
'event_handler': self.event_handler,
'cancel_callback': self.cancel_callback,
'finished_callback': self.finished_callback,
'status_handler': self.status_handler,
'settings': { 'settings': {
'job_timeout': self.get_instance_timeout(self.instance), 'job_timeout': self.get_instance_timeout(self.instance),
'suppress_ansible_output': True, 'suppress_ansible_output': True,
**process_isolation_params,
**resource_profiling_params, **resource_profiling_params,
}, },
} }
if containerized:
# We don't want HOME passed through to container groups.
params['envvars'].pop('HOME')
if isinstance(self.instance, AdHocCommand): if isinstance(self.instance, AdHocCommand):
params['module'] = self.build_module_name(self.instance) params['module'] = self.build_module_name(self.instance)
params['module_args'] = self.build_module_args(self.instance) params['module_args'] = self.build_module_args(self.instance)
@@ -1483,6 +1438,9 @@ class BaseTask(object):
             # Disable Ansible fact cache.
             params['fact_cache_type'] = ''

+        if self.instance.is_container_group_task or settings.IS_K8S:
+            params['envvars'].pop('HOME', None)
+
         '''
         Delete parameters if the values are None or empty array
         '''
@@ -1491,37 +1449,24 @@ class BaseTask(object):
                 del params[v]

         self.dispatcher = CallbackQueueDispatcher()
-        if self.instance.is_isolated() or containerized:
-            module_args = None
-            if 'module_args' in params:
-                # if it's adhoc, copy the module args
-                module_args = ansible_runner.utils.args2cmdline(
-                    params.get('module_args'),
-                )
-            shutil.move(
-                params.pop('inventory'),
-                os.path.join(private_data_dir, 'inventory')
-            )
-            ansible_runner.utils.dump_artifacts(params)
-            isolated_manager_instance = isolated_manager.IsolatedManager(
-                self.event_handler,
-                canceled_callback=lambda: self.update_model(self.instance.pk).cancel_flag,
-                check_callback=self.check_handler,
-                pod_manager=pod_manager
-            )
-            status, rc = isolated_manager_instance.run(self.instance,
-                                                       private_data_dir,
-                                                       params.get('playbook'),
-                                                       params.get('module'),
-                                                       module_args,
-                                                       ident=str(self.instance.pk))
-            self.finished_callback(None)
-        else:
-            res = ansible_runner.interface.run(**params)
-            status = res.status
-            rc = res.rc

         self.instance.log_lifecycle("running_playbook")
+        if isinstance(self.instance, SystemJob):
+            cwd = self.build_cwd(self.instance, private_data_dir)
+            res = ansible_runner.interface.run(project_dir=cwd,
+                                               event_handler=self.event_handler,
+                                               finished_callback=self.finished_callback,
+                                               status_handler=self.status_handler,
+                                               **params)
+        else:
+            receptor_job = AWXReceptorJob(self, params)
+            res = receptor_job.run()
+
+            if not res:
+                return
+
+        status = res.status
+        rc = res.rc

         if status == 'timeout':
             self.instance.job_explanation = "Job terminated due to timeout"
@@ -1569,37 +1514,6 @@ class BaseTask(object):
             raise AwxTaskError.TaskError(self.instance, rc)

-    def deploy_container_group_pod(self, task):
-        from awx.main.scheduler.kubernetes import PodManager  # Avoid circular import
-        pod_manager = PodManager(self.instance)
-        try:
-            log_name = task.log_format
-            logger.debug(f"Launching pod for {log_name}.")
-            pod_manager.deploy()
-        except (ApiException, Exception) as exc:
-            if isinstance(exc, ApiException) and exc.status == 403:
-                try:
-                    if 'exceeded quota' in json.loads(exc.body)['message']:
-                        # If the k8s cluster does not have capacity, we move the
-                        # job back into pending and wait until the next run of
-                        # the task manager. This does not exactly play well with
-                        # our current instance group precendence logic, since it
-                        # will just sit here forever if kubernetes returns this
-                        # error.
-                        logger.warn(exc.body)
-                        logger.warn(f"Could not launch pod for {log_name}. Exceeded quota.")
-                        self.update_model(task.pk, status='pending')
-                        return
-                except Exception:
-                    logger.exception(f"Unable to handle response from Kubernetes API for {log_name}.")
-
-            logger.exception(f"Error when launching pod for {log_name}")
-            self.update_model(task.pk, status='error', result_traceback=traceback.format_exc())
-            return
-
-        self.update_model(task.pk, execution_node=pod_manager.pod_name)
-        return pod_manager
@@ -1690,7 +1604,6 @@ class RunJob(BaseTask):
                                               private_data_files=private_data_files)
         if private_data_files is None:
             private_data_files = {}
-        self.add_ansible_venv(job.ansible_virtualenv_path, env, isolated=isolated)
         # Set environment variables needed for inventory and job event
         # callbacks to work.
         env['JOB_ID'] = str(job.pk)
@@ -1709,13 +1622,17 @@ class RunJob(BaseTask):
         cp_dir = os.path.join(private_data_dir, 'cp')
         if not os.path.exists(cp_dir):
             os.mkdir(cp_dir, 0o700)
-        env['ANSIBLE_SSH_CONTROL_PATH_DIR'] = cp_dir
+        # FIXME: more elegant way to manage this path in container
+        env['ANSIBLE_SSH_CONTROL_PATH_DIR'] = '/runner/cp'

         # Set environment variables for cloud credentials.
         cred_files = private_data_files.get('credentials', {})
         for cloud_cred in job.cloud_credentials:
             if cloud_cred and cloud_cred.credential_type.namespace == 'openstack':
-                env['OS_CLIENT_CONFIG_FILE'] = cred_files.get(cloud_cred, '')
+                env['OS_CLIENT_CONFIG_FILE'] = os.path.join(
+                    '/runner',
+                    os.path.basename(cred_files.get(cloud_cred, ''))
+                )

         for network_cred in job.network_credentials:
             env['ANSIBLE_NET_USERNAME'] = network_cred.get_input('username', default='')
@@ -1746,7 +1663,8 @@ class RunJob(BaseTask):
                 for path in config_values[config_setting].split(':'):
                     if path not in paths:
                         paths = [config_values[config_setting]] + paths
-            paths = [os.path.join(private_data_dir, folder)] + paths
+            # FIXME: again, figure out more elegant way for inside container
+            paths = [os.path.join('/runner', folder)] + paths
             env[env_key] = os.pathsep.join(paths)
         return env
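Several hunks in this commit make the same change: the first entry on a colon-separated search-path variable now points inside the container (`/runner/<folder>`) instead of into the host's private data dir. A simplified sketch of that prepend (the env var name is just an example, and the original's duplicate-avoidance loop is condensed here):

```python
import os

def prepend_runner_path(env, key, folder):
    """Put the in-container copy of `folder` first on a separator-joined
    search path, keeping any existing entries."""
    container_path = os.path.join('/runner', folder)
    # Drop empties and any stale copy of the container path before prepending.
    existing = [p for p in env.get(key, '').split(os.pathsep) if p and p != container_path]
    env[key] = os.pathsep.join([container_path] + existing)
    return env[key]

env = {'ANSIBLE_COLLECTIONS_PATHS': '/usr/share/ansible/collections'}
merged = prepend_runner_path(env, 'ANSIBLE_COLLECTIONS_PATHS', 'requirements_collections')
```

The container path must come first so content synced into the private data dir wins over anything baked into the image.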
@@ -1875,10 +1793,26 @@ class RunJob(BaseTask):
         '''
         Return whether this task should use proot.
         '''
-        if job.is_containerized:
+        if job.is_container_group_task:
             return False
         return getattr(settings, 'AWX_PROOT_ENABLED', False)

+    def build_execution_environment_params(self, instance):
+        if settings.IS_K8S:
+            return {}
+
+        params = super(RunJob, self).build_execution_environment_params(instance)
+
+        # If this has an insights agent and it is not already mounted then show it
+        insights_dir = os.path.dirname(settings.INSIGHTS_SYSTEM_ID_FILE)
+        if instance.use_fact_cache and os.path.exists(insights_dir):
+            logger.info('not parent of others')
+            params.setdefault('container_volume_mounts', [])
+            params['container_volume_mounts'].extend([
+                f"{insights_dir}:{insights_dir}:Z",
+            ])
+
+        return params
+
     def pre_run_hook(self, job, private_data_dir):
         super(RunJob, self).pre_run_hook(job, private_data_dir)
         if job.inventory is None:
@@ -1989,10 +1923,10 @@ class RunJob(BaseTask):
             return
         if job.use_fact_cache:
             job.finish_job_fact_cache(
-                os.path.join(private_data_dir, 'artifacts', str(job.id), 'fact_cache'),
+                os.path.join(private_data_dir, 'artifacts', 'fact_cache'),
                 fact_modification_times,
             )
-        if isolated_manager_instance and not job.is_containerized:
+        if isolated_manager_instance and not job.is_container_group_task:
             isolated_manager_instance.cleanup()
         try:
@@ -2068,7 +2002,6 @@ class RunProjectUpdate(BaseTask):
         env = super(RunProjectUpdate, self).build_env(project_update, private_data_dir,
                                                       isolated=isolated,
                                                       private_data_files=private_data_files)
-        self.add_ansible_venv(settings.ANSIBLE_VENV_PATH, env)
         env['ANSIBLE_RETRY_FILES_ENABLED'] = str(False)
         env['ANSIBLE_ASK_PASS'] = str(False)
         env['ANSIBLE_BECOME_ASK_PASS'] = str(False)
@@ -2202,6 +2135,14 @@ class RunProjectUpdate(BaseTask):
         elif project_update.project.allow_override:
             # If branch is override-able, do extra fetch for all branches
             extra_vars['scm_refspec'] = 'refs/heads/*:refs/remotes/origin/*'
+
+        if project_update.scm_type == 'archive':
+            # for raw archive, prevent error moving files between volumes
+            extra_vars['ansible_remote_tmp'] = os.path.join(
+                project_update.get_project_path(check_if_exists=False),
+                '.ansible_awx', 'tmp'
+            )
+
         self._write_extra_vars_file(private_data_dir, extra_vars)

     def build_cwd(self, project_update, private_data_dir):
@@ -2330,10 +2271,14 @@ class RunProjectUpdate(BaseTask):
         # re-create root project folder if a natural disaster has destroyed it
         if not os.path.exists(settings.PROJECTS_ROOT):
             os.mkdir(settings.PROJECTS_ROOT)
+        project_path = instance.project.get_project_path(check_if_exists=False)
+        if not os.path.exists(project_path):
+            os.makedirs(project_path)  # used as container mount
+
         self.acquire_lock(instance)
         self.original_branch = None
         if instance.scm_type == 'git' and instance.branch_override:
-            project_path = instance.project.get_project_path(check_if_exists=False)
             if os.path.exists(project_path):
                 git_repo = git.Repo(project_path)
                 if git_repo.head.is_detached:
@@ -2349,7 +2294,7 @@ class RunProjectUpdate(BaseTask):
         # the project update playbook is not in a git repo, but uses a vendoring directory
         # to be consistent with the ansible-runner model,
-        # that is moved into the runner projecct folder here
+        # that is moved into the runner project folder here
         awx_playbooks = self.get_path_to('..', 'playbooks')
         copy_tree(awx_playbooks, os.path.join(private_data_dir, 'project'))
@@ -2484,6 +2429,20 @@ class RunProjectUpdate(BaseTask):
         '''
         return getattr(settings, 'AWX_PROOT_ENABLED', False)

+    def build_execution_environment_params(self, instance):
+        if settings.IS_K8S:
+            return {}
+
+        params = super(RunProjectUpdate, self).build_execution_environment_params(instance)
+        project_path = instance.get_project_path(check_if_exists=False)
+        cache_path = instance.get_cache_path()
+        params.setdefault('container_volume_mounts', [])
+        params['container_volume_mounts'].extend([
+            f"{project_path}:{project_path}:Z",
+            f"{cache_path}:{cache_path}:Z",
+        ])
+        return params
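The new `build_execution_environment_params` overrides in this commit all follow one pattern: bind-mount host directories into the execution-environment container as podman/docker-style `src:dst:Z` specs, and do nothing under Kubernetes (where the pod spec takes over). A standalone sketch of that accumulation, with illustrative paths:

```python
def build_volume_mount_params(project_path, cache_path, is_k8s=False):
    """Collect 'src:dst:Z' bind-mount specs; on Kubernetes the pod spec
    handles mounts instead, so return no params at all."""
    if is_k8s:
        return {}
    params = {}
    params.setdefault('container_volume_mounts', [])
    params['container_volume_mounts'].extend([
        f"{project_path}:{project_path}:Z",  # project checkout, same path inside the EE
        f"{cache_path}:{cache_path}:Z",      # roles/collections cache
    ])
    return params

params = build_volume_mount_params('/var/lib/awx/projects/_6__demo',
                                   '/var/lib/awx/projects/.__awx_cache/_6__demo')
```

Mounting source and destination at the same path keeps host-side paths stored in the database valid inside the container; the `:Z` suffix relabels for SELinux.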
 @task(queue=get_local_queuename)
 class RunInventoryUpdate(BaseTask):

@@ -2492,18 +2451,6 @@ class RunInventoryUpdate(BaseTask):
     event_model = InventoryUpdateEvent
     event_data_key = 'inventory_update_id'

-    # TODO: remove once inv updates run in containers
-    def should_use_proot(self, inventory_update):
-        '''
-        Return whether this task should use proot.
-        '''
-        return getattr(settings, 'AWX_PROOT_ENABLED', False)
-
-    # TODO: remove once inv updates run in containers
-    @property
-    def proot_show_paths(self):
-        return [settings.AWX_ANSIBLE_COLLECTIONS_PATHS]
-
     def build_private_data(self, inventory_update, private_data_dir):
         """
         Return private data needed for inventory update.
@@ -2530,17 +2477,13 @@ class RunInventoryUpdate(BaseTask):
         are accomplished by the inventory source injectors (in this method)
         or custom credential type injectors (in main run method).
         """
-        env = super(RunInventoryUpdate, self).build_env(inventory_update,
-                                                        private_data_dir,
-                                                        isolated,
-                                                        private_data_files=private_data_files)
+        env = super(RunInventoryUpdate, self).build_env(
+            inventory_update, private_data_dir, isolated,
+            private_data_files=private_data_files)
         if private_data_files is None:
             private_data_files = {}
-        # TODO: remove once containers replace custom venvs
-        self.add_ansible_venv(inventory_update.ansible_virtualenv_path, env, isolated=isolated)
-        # Legacy environment variables, were used as signal to awx-manage command
-        # now they are provided in case some scripts may be relying on them
+        # Pass inventory source ID to inventory script.
         env['INVENTORY_SOURCE_ID'] = str(inventory_update.inventory_source_id)
         env['INVENTORY_UPDATE_ID'] = str(inventory_update.pk)
         env.update(STANDARD_INVENTORY_UPDATE_ENV)
@@ -2578,7 +2521,8 @@ class RunInventoryUpdate(BaseTask):
                 for path in config_values[config_setting].split(':'):
                     if path not in paths:
                         paths = [config_values[config_setting]] + paths
-            paths = [os.path.join(private_data_dir, folder)] + paths
+            # FIXME: containers
+            paths = [os.path.join('/runner', folder)] + paths
             env[env_key] = os.pathsep.join(paths)
         return env
@@ -2606,17 +2550,20 @@ class RunInventoryUpdate(BaseTask):
         args = ['ansible-inventory', '--list', '--export']

         # Add arguments for the source inventory file/script/thing
-        source_location = self.pseudo_build_inventory(inventory_update, private_data_dir)
+        rel_path = self.pseudo_build_inventory(inventory_update, private_data_dir)
+        container_location = os.path.join('/runner', rel_path)  # TODO: make container paths elegant
+        source_location = os.path.join(private_data_dir, rel_path)
         args.append('-i')
-        args.append(source_location)
+        args.append(container_location)
         args.append('--output')
-        args.append(os.path.join(private_data_dir, 'artifacts', 'output.json'))
+        args.append(os.path.join('/runner', 'artifacts', 'output.json'))

         if os.path.isdir(source_location):
-            playbook_dir = source_location
+            playbook_dir = container_location
         else:
-            playbook_dir = os.path.dirname(source_location)
+            playbook_dir = os.path.dirname(container_location)
         args.extend(['--playbook-dir', playbook_dir])

         if inventory_update.verbosity:
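The hunk above keeps two views of the same inventory source: `source_location` (host side, used for `os.path.isdir` checks) and `container_location` (under `/runner`, what `ansible-inventory` sees inside the container). A sketch of that translation, assuming the private data dir is mounted at `/runner`:

```python
import os

def split_locations(private_data_dir, rel_path, mount_point='/runner'):
    """Return the host-side path (for existence checks on the control node)
    and the container-side path (for the command line run inside the EE)."""
    source_location = os.path.join(private_data_dir, rel_path)
    container_location = os.path.join(mount_point, rel_path)
    return source_location, container_location

source, container = split_locations('/tmp/awx_42_xyz', 'inventory/hosts')
```

Returning a relative path from `pseudo_build_inventory` (rather than an absolute one, as before) is what makes this dual mapping possible.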
@@ -2647,8 +2594,10 @@ class RunInventoryUpdate(BaseTask):
             with open(inventory_path, 'w') as f:
                 f.write(content)
             os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
+            rel_path = injector.filename
         elif src == 'scm':
-            inventory_path = os.path.join(private_data_dir, 'project', inventory_update.source_path)
+            rel_path = os.path.join('project', inventory_update.source_path)
         elif src == 'custom':
             handle, inventory_path = tempfile.mkstemp(dir=private_data_dir)
             f = os.fdopen(handle, 'w')
@@ -2657,7 +2606,9 @@ class RunInventoryUpdate(BaseTask):
             f.write(inventory_update.source_script.script)
             f.close()
             os.chmod(inventory_path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IXUSR)
-        return inventory_path
+            rel_path = os.path.split(inventory_path)[-1]
+
+        return rel_path

     def build_cwd(self, inventory_update, private_data_dir):
         '''
@@ -2666,9 +2617,10 @@ class RunInventoryUpdate(BaseTask):
         - SCM, where source needs to live in the project folder
         '''
         src = inventory_update.source
+        container_dir = '/runner'  # TODO: make container paths elegant
         if src == 'scm' and inventory_update.source_project_update:
-            return os.path.join(private_data_dir, 'project')
-        return private_data_dir
+            return os.path.join(container_dir, 'project')
+        return container_dir

     def build_playbook_path_relative_to_cwd(self, inventory_update, private_data_dir):
         return None
@@ -2853,7 +2805,6 @@ class RunAdHocCommand(BaseTask):
         env = super(RunAdHocCommand, self).build_env(ad_hoc_command, private_data_dir,
                                                      isolated=isolated,
                                                      private_data_files=private_data_files)
-        self.add_ansible_venv(settings.ANSIBLE_VENV_PATH, env)
         # Set environment variables needed for inventory and ad hoc event
         # callbacks to work.
         env['AD_HOC_COMMAND_ID'] = str(ad_hoc_command.pk)
@@ -2867,7 +2818,8 @@ class RunAdHocCommand(BaseTask):
         cp_dir = os.path.join(private_data_dir, 'cp')
         if not os.path.exists(cp_dir):
             os.mkdir(cp_dir, 0o700)
-        env['ANSIBLE_SSH_CONTROL_PATH'] = cp_dir
+        # FIXME: more elegant way to manage this path in container
+        env['ANSIBLE_SSH_CONTROL_PATH'] = '/runner/cp'
         return env
@@ -2974,7 +2926,7 @@ class RunAdHocCommand(BaseTask):
         '''
         Return whether this task should use proot.
         '''
-        if ad_hoc_command.is_containerized:
+        if ad_hoc_command.is_container_group_task:
             return False
         return getattr(settings, 'AWX_PROOT_ENABLED', False)
@@ -2991,6 +2943,9 @@ class RunSystemJob(BaseTask):
     event_model = SystemJobEvent
     event_data_key = 'system_job_id'

+    def build_execution_environment_params(self, system_job):
+        return {}
+
     def build_args(self, system_job, private_data_dir, passwords):
         args = ['awx-manage', system_job.job_type]
         try:
@@ -3022,10 +2977,13 @@ class RunSystemJob(BaseTask):
         return path

     def build_env(self, instance, private_data_dir, isolated=False, private_data_files=None):
-        env = super(RunSystemJob, self).build_env(instance, private_data_dir,
-                                                  isolated=isolated,
-                                                  private_data_files=private_data_files)
-        self.add_awx_venv(env)
+        base_env = super(RunSystemJob, self).build_env(
+            instance, private_data_dir, isolated=isolated,
+            private_data_files=private_data_files)
+        # TODO: this is able to run by turning off isolation
+        # the goal is to run it in a container instead
+        env = dict(os.environ.items())
+        env.update(base_env)
         return env

     def build_cwd(self, instance, private_data_dir):
@@ -3103,3 +3061,235 @@ def deep_copy_model_obj(
     permission_check_func(creater, copy_mapping.values())
     if isinstance(new_obj, Inventory):
         update_inventory_computed_fields.delay(new_obj.id)
+
+
+class AWXReceptorJob:
+    def __init__(self, task=None, runner_params=None):
+        self.task = task
+        self.runner_params = runner_params
+        self.unit_id = None
+
+        if self.task and not self.task.instance.is_container_group_task:
+            execution_environment_params = self.task.build_execution_environment_params(self.task.instance)
+            self.runner_params['settings'].update(execution_environment_params)
+
+    def run(self):
+        # We establish a connection to the Receptor socket
+        receptor_ctl = ReceptorControl('/var/run/receptor/receptor.sock')
+
+        try:
+            return self._run_internal(receptor_ctl)
+        finally:
+            # Make sure to always release the work unit if we established it
+            if self.unit_id is not None:
+                receptor_ctl.simple_command(f"work release {self.unit_id}")
+
+    def _run_internal(self, receptor_ctl):
+        # Create a socketpair. Where the left side will be used for writing our payload
+        # (private data dir, kwargs). The right side will be passed to Receptor for
+        # reading.
+        sockin, sockout = socket.socketpair()
+
+        threading.Thread(target=self.transmit, args=[sockin]).start()
+
+        # submit our work, passing
+        # in the right side of our socketpair for reading.
+        result = receptor_ctl.submit_work(worktype=self.work_type,
+                                          payload=sockout.makefile('rb'),
+                                          params=self.receptor_params)
+        self.unit_id = result['unitid']
+
+        sockin.close()
+        sockout.close()
+
+        resultsock, resultfile = receptor_ctl.get_work_results(self.unit_id,
+                                                               return_socket=True,
+                                                               return_sockfile=True)
+
+        # Both "processor" and "cancel_watcher" are spawned in separate threads.
+        # We wait for the first one to return. If cancel_watcher returns first,
+        # we yank the socket out from underneath the processor, which will cause it
+        # to exit. A reference to the processor_future is passed into the cancel_watcher_future,
+        # which exits if the job has finished normally. The context manager ensures we do not
+        # leave any threads laying around.
+        with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
+            processor_future = executor.submit(self.processor, resultfile)
+            cancel_watcher_future = executor.submit(self.cancel_watcher, processor_future)
+            futures = [processor_future, cancel_watcher_future]
+            first_future = concurrent.futures.wait(futures,
+                                                   return_when=concurrent.futures.FIRST_COMPLETED)
+
+            res = list(first_future.done)[0].result()
+            if res.status == 'canceled':
+                receptor_ctl.simple_command(f"work cancel {self.unit_id}")
+                resultsock.shutdown(socket.SHUT_RDWR)
+                resultfile.close()
+            elif res.status == 'error':
+                # TODO: There should be a more efficient way of getting this information
+                receptor_work_list = receptor_ctl.simple_command("work list")
+                detail = receptor_work_list[self.unit_id]['Detail']
+                if 'exceeded quota' in detail:
+                    logger.warn(detail)
+                    log_name = self.task.instance.log_format
+                    logger.warn(f"Could not launch pod for {log_name}. Exceeded quota.")
+                    self.task.update_model(self.task.instance.pk, status='pending')
+                    return
+                raise RuntimeError(detail)
+
+        return res
+
+    # Spawned in a thread so Receptor can start reading before we finish writing, we
+    # write our payload to the left side of our socketpair.
+    def transmit(self, _socket):
+        if not settings.IS_K8S and self.work_type == 'local':
+            self.runner_params['only_transmit_kwargs'] = True
+
+        ansible_runner.interface.run(streamer='transmit',
+                                     _output=_socket.makefile('wb'),
+                                     **self.runner_params)
+
+        # Socket must be shutdown here, or the reader will hang forever.
+        _socket.shutdown(socket.SHUT_WR)
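`transmit` above streams the runner payload through one end of a `socket.socketpair()` while Receptor reads the other end; shutting down the write side is what signals EOF to the reader. A minimal standalone sketch of that writer-thread pattern, with no Receptor involved:

```python
import socket
import threading

def stream_payload(payload: bytes) -> bytes:
    """Write to one end of a socketpair from a thread while the other end is
    read to EOF -- the same shape as transmit() feeding Receptor."""
    sockin, sockout = socket.socketpair()

    def transmit(sock):
        with sock.makefile('wb') as w:
            w.write(payload)
        # Without this shutdown, the reader below would block forever
        # waiting for EOF.
        sock.shutdown(socket.SHUT_WR)

    # Writer runs concurrently so the reader can start before writing finishes.
    threading.Thread(target=transmit, args=[sockin]).start()
    with sockout.makefile('rb') as r:
        received = r.read()
    sockin.close()
    sockout.close()
    return received

data = stream_payload(b'private-data-dir payload')
```

The concurrency matters for large payloads: if writer and reader ran sequentially in one thread, a payload bigger than the socket buffer would deadlock.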
+    def processor(self, resultfile):
+        return ansible_runner.interface.run(streamer='process',
+                                            quiet=True,
+                                            _input=resultfile,
+                                            event_handler=self.task.event_handler,
+                                            finished_callback=self.task.finished_callback,
+                                            status_handler=self.task.status_handler,
+                                            **self.runner_params)
+
+    @property
+    def receptor_params(self):
+        if self.task.instance.is_container_group_task:
+            spec_yaml = yaml.dump(self.pod_definition, explicit_start=True)
+
+            receptor_params = {
+                "secret_kube_pod": spec_yaml,
+            }
+
+            if self.credential:
+                kubeconfig_yaml = yaml.dump(self.kube_config, explicit_start=True)
+                receptor_params["secret_kube_config"] = kubeconfig_yaml
+        else:
+            private_data_dir = self.runner_params['private_data_dir']
+            receptor_params = {
+                "params": f"--private-data-dir={private_data_dir}"
+            }
+
+        return receptor_params
+
+    @property
+    def work_type(self):
+        if self.task.instance.is_container_group_task:
+            if self.credential:
+                work_type = 'kubernetes-runtime-auth'
+            else:
+                work_type = 'kubernetes-incluster-auth'
+        else:
+            work_type = 'local'
+
+        return work_type
+
+    def cancel_watcher(self, processor_future):
+        while True:
+            if processor_future.done():
+                return processor_future.result()
+
+            if self.task.cancel_callback():
+                result = namedtuple('result', ['status', 'rc'])
+                return result('canceled', 1)
+
+            time.sleep(1)
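The processor and `cancel_watcher` above race as two futures; whichever completes first determines the outcome, and the watcher polls the cancel flag once a second, returning a synthetic `('canceled', 1)` result if it fires. A self-contained sketch of that race (faster poll interval for demonstration):

```python
import concurrent.futures
import time
from collections import namedtuple

Result = namedtuple('Result', ['status', 'rc'])

def run_with_cancel_watcher(work, cancel_callback, poll=0.01):
    """Race `work` against a polling cancel watcher and return whichever
    future completes first, as in _run_internal() above."""
    def cancel_watcher(processor_future):
        while True:
            if processor_future.done():
                # Normal completion: hand back the real result.
                return processor_future.result()
            if cancel_callback():
                # Cancellation: fabricate a canceled result.
                return Result('canceled', 1)
            time.sleep(poll)

    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as executor:
        processor_future = executor.submit(work)
        watcher_future = executor.submit(cancel_watcher, processor_future)
        done = concurrent.futures.wait([processor_future, watcher_future],
                                       return_when=concurrent.futures.FIRST_COMPLETED)
        return list(done.done)[0].result()

finished = run_with_cancel_watcher(lambda: Result('successful', 0), lambda: False)
canceled = run_with_cancel_watcher(lambda: time.sleep(0.3) or Result('successful', 0),
                                   lambda: True)
```

Note the watcher also exits when the work finishes normally, so the executor's context manager never waits on a stuck thread.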
+    @property
+    def pod_definition(self):
+        default_pod_spec = {
+            "apiVersion": "v1",
+            "kind": "Pod",
+            "metadata": {
+                "namespace": settings.AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE
+            },
+            "spec": {
+                "containers": [{
+                    "image": settings.AWX_CONTAINER_GROUP_DEFAULT_IMAGE,
+                    "name": 'worker',
+                    "args": ['ansible-runner', 'worker']
+                }]
+            }
+        }
+
+        pod_spec_override = {}
+        if self.task and self.task.instance.instance_group.pod_spec_override:
+            pod_spec_override = parse_yaml_or_json(
+                self.task.instance.instance_group.pod_spec_override)
+        pod_spec = {**default_pod_spec, **pod_spec_override}
+
+        if self.task:
+            pod_spec['metadata'] = deepmerge(
+                pod_spec.get('metadata', {}),
+                dict(name=self.pod_name,
+                     labels={
+                         'ansible-awx': settings.INSTALL_UUID,
+                         'ansible-awx-job-id': str(self.task.instance.id)
+                     }))
+
+        return pod_spec
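`pod_definition` layers three things: a default pod spec from settings, an admin-supplied `pod_spec_override` applied as a shallow `{**default, **override}` merge, and AWX-controlled metadata deep-merged last so the pod name and tracking labels always win. A sketch with a toy `deepmerge` (a stand-in for AWX's helper, assumed equivalent; namespace, image, and IDs below are illustrative):

```python
def deepmerge(a, b):
    """Recursive dict merge, values from b winning on conflict -- a minimal
    stand-in for AWX's deepmerge utility."""
    out = dict(a)
    for k, v in b.items():
        out[k] = deepmerge(out[k], v) if isinstance(out.get(k), dict) and isinstance(v, dict) else v
    return out

default_pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"namespace": "awx"},                       # illustrative default
    "spec": {"containers": [{"image": "example/ee:latest",  # illustrative image
                             "name": "worker",
                             "args": ["ansible-runner", "worker"]}]},
}
# The override replaces whole top-level keys (shallow merge)...
pod_spec = {**default_pod_spec, **{"metadata": {"namespace": "jobs"}}}
# ...then AWX's own metadata is deep-merged on top so name and labels survive.
pod_spec['metadata'] = deepmerge(pod_spec.get('metadata', {}),
                                 dict(name="awx-job-123",
                                      labels={"ansible-awx-job-id": "123"}))
```

The asymmetry is deliberate: an operator may replace the whole `spec`, but AWX still needs its name and labels present to track and clean up the pod.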
+    @property
+    def pod_name(self):
+        return f"awx-job-{self.task.instance.id}"
+
+    @property
+    def credential(self):
+        return self.task.instance.instance_group.credential
+
+    @property
+    def namespace(self):
+        return self.pod_definition['metadata']['namespace']
+
+    @property
+    def kube_config(self):
+        host_input = self.credential.get_input('host')
+        config = {
+            "apiVersion": "v1",
+            "kind": "Config",
+            "preferences": {},
+            "clusters": [
+                {
+                    "name": host_input,
+                    "cluster": {
+                        "server": host_input
+                    }
+                }
+            ],
+            "users": [
+                {
+                    "name": host_input,
+                    "user": {
+                        "token": self.credential.get_input('bearer_token')
+                    }
+                }
+            ],
+            "contexts": [
+                {
+                    "name": host_input,
+                    "context": {
+                        "cluster": host_input,
+                        "user": host_input,
+                        "namespace": self.namespace
+                    }
+                }
+            ],
+            "current-context": host_input
+        }
+
+        if self.credential.get_input('verify_ssl') and 'ssl_ca_cert' in self.credential.inputs:
+            config["clusters"][0]["cluster"]["certificate-authority-data"] = b64encode(
+                self.credential.get_input('ssl_ca_cert').encode()  # encode to bytes
+            ).decode()  # decode the base64 data into a str
+        else:
+            config["clusters"][0]["cluster"]["insecure-skip-tls-verify"] = True
+
+        return config
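`kube_config` builds an in-memory kubeconfig from the container group's credential; when TLS verification is on, the CA certificate is embedded as `certificate-authority-data`, which the kubeconfig format expects base64-encoded. A sketch of just that branch (the PEM text below is a placeholder, not a real certificate):

```python
from base64 import b64decode, b64encode

def embed_ca_data(cluster, ssl_ca_cert, verify_ssl):
    """Either embed the CA cert (base64 of the PEM text) or mark the cluster
    insecure, mirroring the verify_ssl branch above."""
    if verify_ssl and ssl_ca_cert:
        cluster["certificate-authority-data"] = b64encode(
            ssl_ca_cert.encode()  # str -> bytes for b64encode
        ).decode()                # base64 bytes -> str for the YAML document
    else:
        cluster["insecure-skip-tls-verify"] = True
    return cluster

pem = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
verified = embed_ca_data({"server": "https://k8s.example:6443"}, pem, verify_ssl=True)
insecure = embed_ca_data({"server": "https://k8s.example:6443"}, pem, verify_ssl=False)
```

Decoding the base64 value recovers the original PEM text, which is how clients consume the field.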

View File

@@ -255,7 +255,7 @@ def test_instance_group_update_fields(patch, instance, instance_group, admin, co
     # policy_instance_ variables can only be updated in instance groups that are NOT containerized
     # instance group (not containerized)
     ig_url = reverse("api:instance_group_detail", kwargs={'pk': instance_group.pk})
-    assert not instance_group.is_containerized
+    assert not instance_group.is_container_group
     assert not containerized_instance_group.is_isolated
     resp = patch(ig_url, {'policy_instance_percentage':15}, admin, expect=200)
     assert 15 == resp.data['policy_instance_percentage']
@@ -266,7 +266,7 @@ def test_instance_group_update_fields(patch, instance, instance_group, admin, co
     # containerized instance group
     cg_url = reverse("api:instance_group_detail", kwargs={'pk': containerized_instance_group.pk})
-    assert containerized_instance_group.is_containerized
+    assert containerized_instance_group.is_container_group
     assert not containerized_instance_group.is_isolated
     resp = patch(cg_url, {'policy_instance_percentage':15}, admin, expect=400)
     assert ["Containerized instances may not be managed via the API"] == resp.data['policy_instance_percentage']
@@ -291,4 +291,3 @@ def test_containerized_group_default_fields(instance_group, kube_credential):
     assert ig.policy_instance_list == []
     assert ig.policy_instance_minimum == 0
     assert ig.policy_instance_percentage == 0

View File

@@ -1,6 +1,3 @@
-import os
-
-from backports.tempfile import TemporaryDirectory
 import pytest
 
 # AWX
@@ -10,7 +7,6 @@ from awx.main.models import Job, JobTemplate, CredentialType, WorkflowJobTemplat
 from awx.main.migrations import _save_password_keys as save_password_keys
 
 # Django
-from django.conf import settings
 from django.apps import apps
 
 # DRF
@@ -302,61 +298,6 @@ def test_save_survey_passwords_on_migration(job_template_with_survey_passwords):
     assert job.survey_passwords == {'SSN': '$encrypted$', 'secret_key': '$encrypted$'}
 
 
-@pytest.mark.django_db
-@pytest.mark.parametrize('access', ["superuser", "admin", "peon"])
-def test_job_template_custom_virtualenv(get, patch, organization_factory, job_template_factory, alice, access):
-    objs = organization_factory("org", superusers=['admin'])
-    jt = job_template_factory("jt", organization=objs.organization,
-                              inventory='test_inv', project='test_proj').job_template
-    user = alice
-    if access == "superuser":
-        user = objs.superusers.admin
-    elif access == "admin":
-        jt.admin_role.members.add(alice)
-    else:
-        jt.read_role.members.add(alice)
-
-    with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as temp_dir:
-        os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
-        url = reverse('api:job_template_detail', kwargs={'pk': jt.id})
-        if access == "peon":
-            patch(url, {'custom_virtualenv': temp_dir}, user=user, expect=403)
-            assert 'custom_virtualenv' not in get(url, user=user)
-            assert JobTemplate.objects.get(pk=jt.id).custom_virtualenv is None
-        else:
-            patch(url, {'custom_virtualenv': temp_dir}, user=user, expect=200)
-            assert get(url, user=user).data['custom_virtualenv'] == os.path.join(temp_dir, '')
-
-
-@pytest.mark.django_db
-def test_job_template_invalid_custom_virtualenv(get, patch, organization_factory,
-                                                job_template_factory):
-    objs = organization_factory("org", superusers=['admin'])
-    jt = job_template_factory("jt", organization=objs.organization,
-                              inventory='test_inv', project='test_proj').job_template
-
-    url = reverse('api:job_template_detail', kwargs={'pk': jt.id})
-    resp = patch(url, {'custom_virtualenv': '/foo/bar'}, user=objs.superusers.admin, expect=400)
-    assert resp.data['custom_virtualenv'] == [
-        '/foo/bar is not a valid virtualenv in {}'.format(settings.BASE_VENV_PATH)
-    ]
-
-
-@pytest.mark.django_db
-@pytest.mark.parametrize('value', ["", None])
-def test_job_template_unset_custom_virtualenv(get, patch, organization_factory,
-                                              job_template_factory, value):
-    objs = organization_factory("org", superusers=['admin'])
-    jt = job_template_factory("jt", organization=objs.organization,
-                              inventory='test_inv', project='test_proj').job_template
-
-    url = reverse('api:job_template_detail', kwargs={'pk': jt.id})
-    resp = patch(url, {'custom_virtualenv': value}, user=objs.superusers.admin, expect=200)
-    assert resp.data['custom_virtualenv'] is None
-
-
 @pytest.mark.django_db
 def test_jt_organization_follows_project(post, patch, admin_user):
     org1 = Organization.objects.create(name='foo1')

View File

@@ -1,11 +1,6 @@
 # Copyright (c) 2015 Ansible, Inc.
 # All Rights Reserved.
 
-# Python
-import os
-from backports.tempfile import TemporaryDirectory
-from django.conf import settings
-
 import pytest
 
 # AWX
@@ -242,32 +237,6 @@ def test_delete_organization_xfail2(delete, organization):
     delete(reverse('api:organization_detail', kwargs={'pk': organization.id}), user=None, expect=401)
 
 
-@pytest.mark.django_db
-def test_organization_custom_virtualenv(get, patch, organization, admin):
-    with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as temp_dir:
-        os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
-        url = reverse('api:organization_detail', kwargs={'pk': organization.id})
-        patch(url, {'custom_virtualenv': temp_dir}, user=admin, expect=200)
-        assert get(url, user=admin).data['custom_virtualenv'] == os.path.join(temp_dir, '')
-
-
-@pytest.mark.django_db
-def test_organization_invalid_custom_virtualenv(get, patch, organization, admin):
-    url = reverse('api:organization_detail', kwargs={'pk': organization.id})
-    resp = patch(url, {'custom_virtualenv': '/foo/bar'}, user=admin, expect=400)
-    assert resp.data['custom_virtualenv'] == [
-        '/foo/bar is not a valid virtualenv in {}'.format(settings.BASE_VENV_PATH)
-    ]
-
-
-@pytest.mark.django_db
-@pytest.mark.parametrize('value', ["", None])
-def test_organization_unset_custom_virtualenv(get, patch, organization, admin, value):
-    url = reverse('api:organization_detail', kwargs={'pk': organization.id})
-    resp = patch(url, {'custom_virtualenv': value}, user=admin, expect=200)
-    assert resp.data['custom_virtualenv'] is None
-
-
 @pytest.mark.django_db
 def test_organization_delete(delete, admin, organization, organization_jobs_successful):
     url = reverse('api:organization_detail', kwargs={'pk': organization.id})

View File

@@ -1,7 +1,3 @@
-import os
-
-from backports.tempfile import TemporaryDirectory
-from django.conf import settings
 import pytest
 
 from awx.api.versioning import reverse
@@ -21,32 +17,6 @@ class TestInsightsCredential:
               expect=400)
 
 
-@pytest.mark.django_db
-def test_project_custom_virtualenv(get, patch, project, admin):
-    with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as temp_dir:
-        os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
-        url = reverse('api:project_detail', kwargs={'pk': project.id})
-        patch(url, {'custom_virtualenv': temp_dir}, user=admin, expect=200)
-        assert get(url, user=admin).data['custom_virtualenv'] == os.path.join(temp_dir, '')
-
-
-@pytest.mark.django_db
-def test_project_invalid_custom_virtualenv(get, patch, project, admin):
-    url = reverse('api:project_detail', kwargs={'pk': project.id})
-    resp = patch(url, {'custom_virtualenv': '/foo/bar'}, user=admin, expect=400)
-    assert resp.data['custom_virtualenv'] == [
-        '/foo/bar is not a valid virtualenv in {}'.format(settings.BASE_VENV_PATH)
-    ]
-
-
-@pytest.mark.django_db
-@pytest.mark.parametrize('value', ["", None])
-def test_project_unset_custom_virtualenv(get, patch, project, admin, value):
-    url = reverse('api:project_detail', kwargs={'pk': project.id})
-    resp = patch(url, {'custom_virtualenv': value}, user=admin, expect=200)
-    assert resp.data['custom_virtualenv'] is None
-
-
 @pytest.mark.django_db
 def test_no_changing_overwrite_behavior_if_used(post, patch, organization, admin_user):
     r1 = post(

View File

@@ -52,6 +52,7 @@ from awx.main.models.events import (
 from awx.main.models.workflow import WorkflowJobTemplate
 from awx.main.models.ad_hoc_commands import AdHocCommand
 from awx.main.models.oauth import OAuth2Application as Application
+from awx.main.models.execution_environments import ExecutionEnvironment
 
 __SWAGGER_REQUESTS__ = {}
@@ -850,3 +851,8 @@ def slice_job_factory(slice_jt_factory):
             node.save()
         return slice_job
     return r
+
+
+@pytest.fixture
+def execution_environment(organization):
+    return ExecutionEnvironment.objects.create(name="test-ee", description="test-ee", organization=organization)

View File

@@ -29,8 +29,8 @@ def containerized_job(default_instance_group, kube_credential, job_template_fact
 
 @pytest.mark.django_db
 def test_containerized_job(containerized_job):
-    assert containerized_job.is_containerized
-    assert containerized_job.instance_group.is_containerized
+    assert containerized_job.is_container_group_task
+    assert containerized_job.instance_group.is_container_group
     assert containerized_job.instance_group.credential.kubernetes

View File

@@ -90,6 +90,7 @@ def test_default_cred_types():
         'kubernetes_bearer_token',
         'net',
         'openstack',
+        'registry',
         'rhv',
         'satellite6',
         'scm',

View File

@@ -0,0 +1,19 @@
+import pytest
+
+from awx.main.models import (ExecutionEnvironment)
+
+
+@pytest.mark.django_db
+def test_execution_environment_creation(execution_environment, organization):
+    execution_env = ExecutionEnvironment.objects.create(
+        name='Hello Environment',
+        image='',
+        organization=organization,
+        managed_by_tower=False,
+        credential=None,
+        pull='missing'
+    )
+    assert type(execution_env) is type(execution_environment)
+    assert execution_env.organization == organization
+    assert execution_env.name == 'Hello Environment'
+    assert execution_env.pull == 'missing'

View File

@@ -6,7 +6,7 @@ import re
 from collections import namedtuple
 
 from awx.main.tasks import RunInventoryUpdate
-from awx.main.models import InventorySource, Credential, CredentialType, UnifiedJob
+from awx.main.models import InventorySource, Credential, CredentialType, UnifiedJob, ExecutionEnvironment
 from awx.main.constants import CLOUD_PROVIDERS, STANDARD_INVENTORY_UPDATE_ENV
 from awx.main.tests import data
@@ -110,7 +110,8 @@ def read_content(private_data_dir, raw_env, inventory_update):
             continue  # Ansible runner
         abs_file_path = os.path.join(private_data_dir, filename)
         file_aliases[abs_file_path] = filename
-        if abs_file_path in inverse_env:
+        runner_path = os.path.join('/runner', os.path.basename(abs_file_path))
+        if runner_path in inverse_env:
             referenced_paths.add(abs_file_path)
             alias = 'file_reference'
             for i in range(10):
@@ -121,7 +122,7 @@ def read_content(private_data_dir, raw_env, inventory_update):
                 raise RuntimeError('Test not able to cope with >10 references by env vars. '
                                    'Something probably went very wrong.')
             file_aliases[abs_file_path] = alias
-            for env_key in inverse_env[abs_file_path]:
+            for env_key in inverse_env[runner_path]:
                 env[env_key] = '{{{{ {} }}}}'.format(alias)
         try:
             with open(abs_file_path, 'r') as f:
@@ -182,6 +183,8 @@ def create_reference_data(source_dir, env, content):
 @pytest.mark.django_db
 @pytest.mark.parametrize('this_kind', CLOUD_PROVIDERS)
 def test_inventory_update_injected_content(this_kind, inventory, fake_credential_factory):
+    ExecutionEnvironment.objects.create(name='test EE', managed_by_tower=True)
+
     injector = InventorySource.injectors[this_kind]
     if injector.plugin_name is None:
         pytest.skip('Use of inventory plugin is not enabled for this source')
@@ -197,12 +200,14 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
     inventory_update = inventory_source.create_unified_job()
     task = RunInventoryUpdate()
 
-    def substitute_run(envvars=None, **_kw):
+    def substitute_run(awx_receptor_job):
        """This method will replace run_pexpect
        instead of running, it will read the private data directory contents
        It will make assertions that the contents are correct
        If MAKE_INVENTORY_REFERENCE_FILES is set, it will produce reference files
        """
+        envvars = awx_receptor_job.runner_params['envvars']
         private_data_dir = envvars.pop('AWX_PRIVATE_DATA_DIR')
         assert envvars.pop('ANSIBLE_INVENTORY_ENABLED') == 'auto'
         set_files = bool(os.getenv("MAKE_INVENTORY_REFERENCE_FILES", 'false').lower()[0] not in ['f', '0'])
@@ -214,9 +219,6 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
             f"'{inventory_filename}' file not found in inventory update runtime files {content.keys()}"
 
         env.pop('ANSIBLE_COLLECTIONS_PATHS', None)  # collection paths not relevant to this test
-        env.pop('PYTHONPATH')
-        env.pop('VIRTUAL_ENV')
-        env.pop('PROOT_TMP_DIR')
         base_dir = os.path.join(DATA, 'plugins')
         if not os.path.exists(base_dir):
             os.mkdir(base_dir)
@@ -256,6 +258,6 @@ def test_inventory_update_injected_content(this_kind, inventory, fake_credential
     # Also do not send websocket status updates
     with mock.patch.object(UnifiedJob, 'websocket_emit_status', mock.Mock()):
         # The point of this test is that we replace run with assertions
-        with mock.patch('awx.main.tasks.ansible_runner.interface.run', substitute_run):
+        with mock.patch('awx.main.tasks.AWXReceptorJob.run', substitute_run):
             # so this sets up everything for a run and then yields control over to substitute_run
             task.run(inventory_update.pk)

View File

@@ -49,7 +49,7 @@ def test_python_and_js_licenses():
 
     def read_api_requirements(path):
         ret = {}
-        for req_file in ['requirements.txt', 'requirements_ansible.txt', 'requirements_git.txt', 'requirements_ansible_git.txt']:
+        for req_file in ['requirements.txt', 'requirements_git.txt']:
             fname = '%s/%s' % (path, req_file)
             for reqt in parse_requirements(fname, session=''):

View File

@@ -40,7 +40,7 @@ def project_update(mocker):
 @pytest.fixture
 def job(mocker, job_template, project_update):
     return mocker.MagicMock(pk=5, job_template=job_template, project_update=project_update,
-                            workflow_job_id=None)
+                            workflow_job_id=None, execution_environment_id=None)
 
 @pytest.fixture

View File

@@ -11,7 +11,7 @@ class FakeObject(object):
 
 class Job(FakeObject):
     task_impact = 43
-    is_containerized = False
+    is_container_group_task = False
 
     def log_format(self):
         return 'job 382 (fake)'

View File

@@ -6,7 +6,6 @@ import os
 import shutil
 import tempfile
 
-from backports.tempfile import TemporaryDirectory
 import fcntl
 from unittest import mock
 import pytest
@@ -19,6 +18,7 @@ from awx.main.models import (
     AdHocCommand,
     Credential,
     CredentialType,
+    ExecutionEnvironment,
     Inventory,
     InventorySource,
     InventoryUpdate,
@@ -347,11 +347,12 @@ def pytest_generate_tests(metafunc):
 )
 
 
-def parse_extra_vars(args):
+def parse_extra_vars(args, private_data_dir):
     extra_vars = {}
     for chunk in args:
-        if chunk.startswith('@/tmp/'):
-            with open(chunk.strip('@'), 'r') as f:
+        if chunk.startswith('@/runner/'):
+            local_path = os.path.join(private_data_dir, os.path.basename(chunk.strip('@')))
+            with open(local_path, 'r') as f:
                 extra_vars.update(yaml.load(f, Loader=SafeLoader))
     return extra_vars
@@ -546,44 +547,6 @@ class TestGenericRun():
             job_cwd='/foobar', job_env={'switch': 'blade', 'foot': 'ball', 'secret_key': 'redacted_value'})
 
-    def test_uses_process_isolation(self, settings):
-        job = Job(project=Project(), inventory=Inventory())
-        task = tasks.RunJob()
-        task.should_use_proot = lambda instance: True
-        task.instance = job
-
-        private_data_dir = '/foo'
-        cwd = '/bar'
-
-        settings.AWX_PROOT_HIDE_PATHS = ['/AWX_PROOT_HIDE_PATHS1', '/AWX_PROOT_HIDE_PATHS2']
-        settings.ANSIBLE_VENV_PATH = '/ANSIBLE_VENV_PATH'
-        settings.AWX_VENV_PATH = '/AWX_VENV_PATH'
-
-        process_isolation_params = task.build_params_process_isolation(job, private_data_dir, cwd)
-        assert True is process_isolation_params['process_isolation']
-        assert process_isolation_params['process_isolation_path'].startswith(settings.AWX_PROOT_BASE_PATH), \
-            "Directory where a temp directory will be created for the remapping to take place"
-        assert private_data_dir in process_isolation_params['process_isolation_show_paths'], \
-            "The per-job private data dir should be in the list of directories the user can see."
-        assert cwd in process_isolation_params['process_isolation_show_paths'], \
-            "The current working directory should be in the list of directories the user can see."
-
-        for p in [settings.AWX_PROOT_BASE_PATH,
-                  '/etc/tower',
-                  '/etc/ssh',
-                  '/var/lib/awx',
-                  '/var/log',
-                  settings.PROJECTS_ROOT,
-                  settings.JOBOUTPUT_ROOT,
-                  '/AWX_PROOT_HIDE_PATHS1',
-                  '/AWX_PROOT_HIDE_PATHS2']:
-            assert p in process_isolation_params['process_isolation_hide_paths']
-        assert 9 == len(process_isolation_params['process_isolation_hide_paths'])
-        assert '/ANSIBLE_VENV_PATH' in process_isolation_params['process_isolation_ro_paths']
-        assert '/AWX_VENV_PATH' in process_isolation_params['process_isolation_ro_paths']
-        assert 2 == len(process_isolation_params['process_isolation_ro_paths'])
-
     @mock.patch('os.makedirs')
     def test_build_params_resource_profiling(self, os_makedirs):
         job = Job(project=Project(), inventory=Inventory())
@@ -597,7 +560,7 @@ class TestGenericRun():
         assert resource_profiling_params['resource_profiling_cpu_poll_interval'] == '0.25'
         assert resource_profiling_params['resource_profiling_memory_poll_interval'] == '0.25'
         assert resource_profiling_params['resource_profiling_pid_poll_interval'] == '0.25'
-        assert resource_profiling_params['resource_profiling_results_dir'] == '/fake_private_data_dir/artifacts/playbook_profiling'
+        assert resource_profiling_params['resource_profiling_results_dir'] == '/runner/artifacts/playbook_profiling'
 
     @pytest.mark.parametrize("scenario, profiling_enabled", [
@@ -656,34 +619,13 @@ class TestGenericRun():
         env = task.build_env(job, private_data_dir)
 
         assert env['FOO'] == 'BAR'
 
-    def test_valid_custom_virtualenv(self, patch_Job, private_data_dir):
-        job = Job(project=Project(), inventory=Inventory())
-
-        with TemporaryDirectory(dir=settings.BASE_VENV_PATH) as tempdir:
-            job.project.custom_virtualenv = tempdir
-            os.makedirs(os.path.join(tempdir, 'lib'))
-            os.makedirs(os.path.join(tempdir, 'bin', 'activate'))
-
-            task = tasks.RunJob()
-            env = task.build_env(job, private_data_dir)
-
-            assert env['PATH'].startswith(os.path.join(tempdir, 'bin'))
-            assert env['VIRTUAL_ENV'] == tempdir
-
-    def test_invalid_custom_virtualenv(self, patch_Job, private_data_dir):
-        job = Job(project=Project(), inventory=Inventory())
-        job.project.custom_virtualenv = '/var/lib/awx/venv/missing'
-
-        task = tasks.RunJob()
-        with pytest.raises(tasks.InvalidVirtualenvError) as e:
-            task.build_env(job, private_data_dir)
-        assert 'Invalid virtual environment selected: /var/lib/awx/venv/missing' == str(e.value)
-
 
+@pytest.mark.django_db
 class TestAdhocRun(TestJobExecution):
 
     def test_options_jinja_usage(self, adhoc_job, adhoc_update_model_wrapper):
+        ExecutionEnvironment.objects.create(name='test EE', managed_by_tower=True)
+
         adhoc_job.module_args = '{{ ansible_ssh_pass }}'
         adhoc_job.websocket_emit_status = mock.Mock()
         adhoc_job.send_notification_templates = mock.Mock()
@@ -1203,7 +1145,9 @@ class TestJobCredentials(TestJobExecution):
         credential.credential_type.inject_credential(
             credential, env, safe_env, [], private_data_dir
         )
-        json_data = json.load(open(env['GCE_CREDENTIALS_FILE_PATH'], 'rb'))
+        runner_path = env['GCE_CREDENTIALS_FILE_PATH']
+        local_path = os.path.join(private_data_dir, os.path.basename(runner_path))
+        json_data = json.load(open(local_path, 'rb'))
         assert json_data['type'] == 'service_account'
         assert json_data['private_key'] == self.EXAMPLE_PRIVATE_KEY
         assert json_data['client_email'] == 'bob'
@@ -1306,7 +1250,11 @@ class TestJobCredentials(TestJobExecution):
             credential, env, {}, [], private_data_dir
         )
 
-        shade_config = open(env['OS_CLIENT_CONFIG_FILE'], 'r').read()
+        # convert container path to host machine path
+        config_loc = os.path.join(
+            private_data_dir, os.path.basename(env['OS_CLIENT_CONFIG_FILE'])
+        )
+        shade_config = open(config_loc, 'r').read()
         assert shade_config == '\n'.join([
             'clouds:',
             '  devstack:',
@@ -1344,7 +1292,7 @@ class TestJobCredentials(TestJobExecution):
         )
 
         config = configparser.ConfigParser()
-        config.read(env['OVIRT_INI_PATH'])
+        config.read(os.path.join(private_data_dir, os.path.basename(env['OVIRT_INI_PATH'])))
         assert config.get('ovirt', 'ovirt_url') == 'some-ovirt-host.example.org'
         assert config.get('ovirt', 'ovirt_username') == 'bob'
         assert config.get('ovirt', 'ovirt_password') == 'some-pass'
@@ -1577,7 +1525,7 @@ class TestJobCredentials(TestJobExecution):
         credential.credential_type.inject_credential(
             credential, {}, {}, args, private_data_dir
         )
-        extra_vars = parse_extra_vars(args)
+        extra_vars = parse_extra_vars(args, private_data_dir)
         assert extra_vars["api_token"] == "ABC123"
         assert hasattr(extra_vars["api_token"], '__UNSAFE__')
@@ -1612,7 +1560,7 @@ class TestJobCredentials(TestJobExecution):
             credential.credential_type.inject_credential(
                 credential, {}, {}, args, private_data_dir
             )
-            extra_vars = parse_extra_vars(args)
+            extra_vars = parse_extra_vars(args, private_data_dir)
             assert extra_vars["turbo_button"] == "True"
             return ['successful', 0]
@@ -1647,7 +1595,7 @@ class TestJobCredentials(TestJobExecution):
         credential.credential_type.inject_credential(
             credential, {}, {}, args, private_data_dir
        )
-        extra_vars = parse_extra_vars(args)
+        extra_vars = parse_extra_vars(args, private_data_dir)
         assert extra_vars["turbo_button"] == "FAST!"
@@ -1687,7 +1635,7 @@ class TestJobCredentials(TestJobExecution):
             credential, {}, {}, args, private_data_dir
         )
-        extra_vars = parse_extra_vars(args)
+        extra_vars = parse_extra_vars(args, private_data_dir)
         assert extra_vars["password"] == "SUPER-SECRET-123"
 
     def test_custom_environment_injectors_with_file(self, private_data_dir):
@@ -1722,7 +1670,8 @@ class TestJobCredentials(TestJobExecution):
             credential, env, {}, [], private_data_dir
         )
 
-        assert open(env['MY_CLOUD_INI_FILE'], 'r').read() == '[mycloud]\nABC123'
+        path = os.path.join(private_data_dir, os.path.basename(env['MY_CLOUD_INI_FILE']))
+        assert open(path, 'r').read() == '[mycloud]\nABC123'
 
     def test_custom_environment_injectors_with_unicode_content(self, private_data_dir):
         value = 'Iñtërnâtiônàlizætiøn'
@@ -1746,7 +1695,8 @@ class TestJobCredentials(TestJobExecution):
             credential, env, {}, [], private_data_dir
         )
 
-        assert open(env['MY_CLOUD_INI_FILE'], 'r').read() == value
+        path = os.path.join(private_data_dir, os.path.basename(env['MY_CLOUD_INI_FILE']))
+        assert open(path, 'r').read() == value
 
     def test_custom_environment_injectors_with_files(self, private_data_dir):
         some_cloud = CredentialType(
@@ -1786,8 +1736,10 @@ class TestJobCredentials(TestJobExecution):
             credential, env, {}, [], private_data_dir
         )
 
-        assert open(env['MY_CERT_INI_FILE'], 'r').read() == '[mycert]\nCERT123'
-        assert open(env['MY_KEY_INI_FILE'], 'r').read() == '[mykey]\nKEY123'
+        cert_path = os.path.join(private_data_dir, os.path.basename(env['MY_CERT_INI_FILE']))
+        key_path = os.path.join(private_data_dir, os.path.basename(env['MY_KEY_INI_FILE']))
+        assert open(cert_path, 'r').read() == '[mycert]\nCERT123'
+        assert open(key_path, 'r').read() == '[mykey]\nKEY123'
 
     def test_multi_cloud(self, private_data_dir):
         gce = CredentialType.defaults['gce']()
@@ -1826,7 +1778,8 @@ class TestJobCredentials(TestJobExecution):
         assert env['AZURE_AD_USER'] == 'bob'
         assert env['AZURE_PASSWORD'] == 'secret'
 
-        json_data = json.load(open(env['GCE_CREDENTIALS_FILE_PATH'], 'rb'))
+        path = os.path.join(private_data_dir, os.path.basename(env['GCE_CREDENTIALS_FILE_PATH']))
+        json_data = json.load(open(path, 'rb'))
         assert json_data['type'] == 'service_account'
         assert json_data['private_key'] == self.EXAMPLE_PRIVATE_KEY
         assert json_data['client_email'] == 'bob'
@@ -1971,29 +1924,6 @@ class TestProjectUpdateCredentials(TestJobExecution):
         ]
     }
 
-    def test_process_isolation_exposes_projects_root(self, private_data_dir, project_update):
-        task = tasks.RunProjectUpdate()
-        task.revision_path = 'foobar'
-        task.instance = project_update
-        ssh = CredentialType.defaults['ssh']()
-        project_update.scm_type = 'git'
-        project_update.credential = Credential(
-            pk=1,
-            credential_type=ssh,
-        )
-        process_isolation = task.build_params_process_isolation(job, private_data_dir, 'cwd')
-
-        assert process_isolation['process_isolation'] is True
-        assert settings.PROJECTS_ROOT in process_isolation['process_isolation_show_paths']
-
-        task._write_extra_vars_file = mock.Mock()
-        with mock.patch.object(Licenser, 'validate', lambda *args, **kw: {}):
-            task.build_extra_vars_file(project_update, private_data_dir)
-
-        call_args, _ = task._write_extra_vars_file.call_args_list[0]
-        _, extra_vars = call_args
-
     def test_username_and_password_auth(self, project_update, scm_type):
         task = tasks.RunProjectUpdate()
         ssh = CredentialType.defaults['ssh']()
@@ -2107,7 +2037,8 @@ class TestInventoryUpdateCredentials(TestJobExecution):
         assert '-i' in ' '.join(args)
         script = args[args.index('-i') + 1]
-        with open(script, 'r') as f:
+        host_script = script.replace('/runner', private_data_dir)
+        with open(host_script, 'r') as f:
             assert f.read() == inventory_update.source_script.script
         assert env['FOO'] == 'BAR'
         if with_credential:
@@ -2307,7 +2238,8 @@ class TestInventoryUpdateCredentials(TestJobExecution):
         private_data_files = task.build_private_data_files(inventory_update, private_data_dir)
         env = task.build_env(inventory_update, private_data_dir, False, private_data_files)
 
-        shade_config = open(env['OS_CLIENT_CONFIG_FILE'], 'r').read()
+        path = os.path.join(private_data_dir, os.path.basename(env['OS_CLIENT_CONFIG_FILE']))
+        shade_config = open(path, 'r').read()
         assert '\n'.join([
             'clouds:',
             '  devstack:',

View File

@@ -9,9 +9,6 @@ import json
 import yaml
 from unittest import mock
 
-from backports.tempfile import TemporaryDirectory
-from django.conf import settings
-
 from rest_framework.exceptions import ParseError
 
 from awx.main.utils import common
@@ -194,24 +191,3 @@ def test_extract_ansible_vars():
     redacted, var_list = common.extract_ansible_vars(json.dumps(my_dict))
     assert var_list == set(['ansible_connetion_setting'])
     assert redacted == {"foobar": "baz"}
-
-
-def test_get_custom_venv_choices():
-    bundled_venv = os.path.join(settings.BASE_VENV_PATH, 'ansible', '')
-    assert sorted(common.get_custom_venv_choices()) == [bundled_venv]
-
-    with TemporaryDirectory(dir=settings.BASE_VENV_PATH, prefix='tmp') as temp_dir:
-        os.makedirs(os.path.join(temp_dir, 'bin', 'activate'))
-
-        custom_venv_dir = os.path.join(temp_dir, 'custom')
-        custom_venv_1 = os.path.join(custom_venv_dir, 'venv-1')
-        custom_venv_awx = os.path.join(custom_venv_dir, 'custom', 'awx')
-        os.makedirs(os.path.join(custom_venv_1, 'bin', 'activate'))
-        os.makedirs(os.path.join(custom_venv_awx, 'bin', 'activate'))
-
-        assert sorted(common.get_custom_venv_choices([custom_venv_dir])) == [
-            bundled_venv,
-            os.path.join(temp_dir, ''),
-            os.path.join(custom_venv_1, '')
-        ]


@@ -55,7 +55,8 @@ __all__ = [
'model_instance_diff', 'parse_yaml_or_json', 'RequireDebugTrueOrTest', 'model_instance_diff', 'parse_yaml_or_json', 'RequireDebugTrueOrTest',
'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError', 'has_model_field_prefetched', 'set_environ', 'IllegalArgumentError',
'get_custom_venv_choices', 'get_external_account', 'task_manager_bulk_reschedule', 'get_custom_venv_choices', 'get_external_account', 'task_manager_bulk_reschedule',
'schedule_task_manager', 'classproperty', 'create_temporary_fifo', 'truncate_stdout' 'schedule_task_manager', 'classproperty', 'create_temporary_fifo', 'truncate_stdout',
'deepmerge'
] ]
@@ -1079,3 +1080,21 @@ def truncate_stdout(stdout, size):
set_count += 1 set_count += 1
return stdout + u'\u001b[0m' * (set_count - reset_count) return stdout + u'\u001b[0m' * (set_count - reset_count)
def deepmerge(a, b):
"""
Merge dict structures and return the result.
>>> a = {'first': {'all_rows': {'pass': 'dog', 'number': '1'}}}
>>> b = {'first': {'all_rows': {'fail': 'cat', 'number': '5'}}}
>>> import pprint; pprint.pprint(deepmerge(a, b))
{'first': {'all_rows': {'fail': 'cat', 'number': '5', 'pass': 'dog'}}}
"""
if isinstance(a, dict) and isinstance(b, dict):
return dict([(k, deepmerge(a.get(k), b.get(k)))
for k in set(a.keys()).union(b.keys())])
elif b is None:
return a
else:
return b
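The merge semantics of the new `deepmerge` helper can be exercised standalone; this is a re-implementation for illustration, matching the doctest above:

```python
def deepmerge(a, b):
    """Recursively merge dict structures; values from b win on conflict."""
    if isinstance(a, dict) and isinstance(b, dict):
        # take the union of keys and recurse, so nested dicts are
        # merged key-by-key instead of replaced wholesale
        return {k: deepmerge(a.get(k), b.get(k)) for k in set(a) | set(b)}
    elif b is None:
        # key only present in a: keep a's value
        return a
    else:
        # scalars (or mismatched types): b overrides a
        return b


a = {'first': {'all_rows': {'pass': 'dog', 'number': '1'}}}
b = {'first': {'all_rows': {'fail': 'cat', 'number': '5'}}}
merged = deepmerge(a, b)
print(merged['first']['all_rows'])
# e.g. {'fail': 'cat', 'number': '5', 'pass': 'dog'} (key order may vary)
```

Note that `b` wins for `number`, while keys unique to either side survive.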


@@ -32,7 +32,7 @@ def construct_rsyslog_conf_template(settings=settings):
'$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf', '$IncludeConfig /var/lib/awx/rsyslog/conf.d/*.conf',
f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa f'main_queue(queue.spoolDirectory="{spool_directory}" queue.maxdiskspace="{max_disk_space}g" queue.type="Disk" queue.filename="awx-external-logger-backlog")', # noqa
'module(load="imuxsock" SysSock.Use="off")', 'module(load="imuxsock" SysSock.Use="off")',
'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on")', 'input(type="imuxsock" Socket="' + settings.LOGGING['handlers']['external_logger']['address'] + '" unlink="on" RateLimit.Burst="0")',
'template(name="awx" type="string" string="%rawmsg-after-pri%")', 'template(name="awx" type="string" string="%rawmsg-after-pri%")',
]) ])


@@ -3,7 +3,8 @@
# Python # Python
import logging import logging
import os.path import sys
import traceback
# Django # Django
from django.conf import settings from django.conf import settings
@@ -21,27 +22,31 @@ class RSysLogHandler(logging.handlers.SysLogHandler):
super(RSysLogHandler, self)._connect_unixsocket(address) super(RSysLogHandler, self)._connect_unixsocket(address)
self.socket.setblocking(False) self.socket.setblocking(False)
def handleError(self, record):
# for any number of reasons, rsyslogd has gone to lunch;
# this usually means that it's just been restarted (due to
# a configuration change); unfortunately, we can't log that
# because...rsyslogd is down (and would just put us back down this
# code path)
# as a fallback, it makes the most sense to just write the
# messages to sys.stderr (which will end up in supervisord logs,
# and in containerized installs, cascaded down to pod logs)
# because the alternative is blocking the
# socket.send() in the Python process, which we definitely don't
# want to do
msg = f'{record.asctime} ERROR rsyslogd was unresponsive: '
exc = traceback.format_exc()
try:
msg += exc.splitlines()[-1]
except Exception:
msg += exc
msg = '\n'.join([msg, record.msg, ''])
sys.stderr.write(msg)
def emit(self, msg): def emit(self, msg):
if not settings.LOG_AGGREGATOR_ENABLED: if not settings.LOG_AGGREGATOR_ENABLED:
return return
if not os.path.exists(settings.LOGGING['handlers']['external_logger']['address']): return super(RSysLogHandler, self).emit(msg)
return
try:
return super(RSysLogHandler, self).emit(msg)
except ConnectionRefusedError:
# rsyslogd has gone to lunch; this generally means that it's just
# been restarted (due to a configuration change)
# unfortunately, we can't log that because...rsyslogd is down (and
# would just put us back down this code path)
pass
except BlockingIOError:
# for <some reason>, rsyslogd is no longer reading from the domain socket, and
# we're unable to write any more to it without blocking (we've seen this behavior
# from time to time when logging is totally misconfigured;
# in this scenario, it also makes more sense to just drop the messages,
# because the alternative is blocking the socket.send() in the
# Python process, which we definitely don't want to do)
pass
class SpecialInventoryHandler(logging.Handler): class SpecialInventoryHandler(logging.Handler):
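The change above replaces per-exception swallowing in `emit` with a `handleError` override that falls back to `sys.stderr`. A minimal sketch of that pattern (not the actual AWX handler; the simulated `ConnectionRefusedError` stands in for an unresponsive rsyslogd socket):

```python
import logging
import sys
import traceback


class FallbackHandler(logging.Handler):
    """If emitting a record fails, write it to sys.stderr instead of
    raising or silently dropping it (stderr ends up in supervisord or
    pod logs)."""

    def emit(self, record):
        try:
            # simulate the log destination being down
            raise ConnectionRefusedError('rsyslogd is restarting')
        except Exception:
            self.handleError(record)

    def handleError(self, record):
        msg = 'ERROR log destination was unresponsive: '
        exc = traceback.format_exc()
        try:
            # keep only the final "ExceptionType: message" line
            msg += exc.splitlines()[-1]
        except Exception:
            msg += exc
        sys.stderr.write(msg + '\n' + record.getMessage() + '\n')


logger = logging.getLogger('fallback-demo')
logger.addHandler(FallbackHandler())
logger.error('this message survives via stderr')
```

The advantage over the old `except: pass` blocks is that failed messages remain visible somewhere instead of vanishing.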


@@ -24,9 +24,7 @@
tasks: tasks:
- name: delete project directory before update - name: delete project directory before update
file: command: "rm -rf {{project_path}}/*" # volume mounted, cannot delete folder itself
path: "{{project_path|quote}}"
state: absent
tags: tags:
- delete - delete
@@ -57,6 +55,8 @@
force: "{{scm_clean}}" force: "{{scm_clean}}"
username: "{{scm_username|default(omit)}}" username: "{{scm_username|default(omit)}}"
password: "{{scm_password|default(omit)}}" password: "{{scm_password|default(omit)}}"
# must be in_place because folder pre-existing, because it is mounted
in_place: true
environment: environment:
LC_ALL: 'en_US.UTF-8' LC_ALL: 'en_US.UTF-8'
register: svn_result register: svn_result
@@ -206,6 +206,9 @@
ANSIBLE_FORCE_COLOR: false ANSIBLE_FORCE_COLOR: false
ANSIBLE_COLLECTIONS_PATHS: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections" ANSIBLE_COLLECTIONS_PATHS: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/requirements_collections"
GIT_SSH_COMMAND: "ssh -o StrictHostKeyChecking=no" GIT_SSH_COMMAND: "ssh -o StrictHostKeyChecking=no"
# Put the local tmp directory in same volume as collection destination
# otherwise, files cannot be moved across volumes and will cause an error
ANSIBLE_LOCAL_TEMP: "{{projects_root}}/.__awx_cache/{{local_path}}/stage/tmp"
when: when:
- "ansible_version.full is version_compare('2.9', '>=')" - "ansible_version.full is version_compare('2.9', '>=')"


@@ -59,11 +59,23 @@ DATABASES = {
} }
} }
# Whether or not the deployment is a K8S-based deployment
# In K8S-based deployments, instances have zero capacity - all playbook
# automation is intended to flow through defined Container Groups that
# interface with some (or some set of) K8S api (which may or may not include
# the K8S cluster where awx itself is running)
IS_K8S = False
# TODO: remove this setting in favor of a default execution environment
AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE = 'quay.io/ansible/awx-ee'
AWX_CONTAINER_GROUP_K8S_API_TIMEOUT = 10 AWX_CONTAINER_GROUP_K8S_API_TIMEOUT = 10
AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES = 100 AWX_CONTAINER_GROUP_POD_LAUNCH_RETRIES = 100
AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY = 5 AWX_CONTAINER_GROUP_POD_LAUNCH_RETRY_DELAY = 5
AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE = 'default' AWX_CONTAINER_GROUP_DEFAULT_NAMESPACE = os.getenv('MY_POD_NAMESPACE', 'default')
AWX_CONTAINER_GROUP_DEFAULT_IMAGE = 'ansible/ansible-runner'
# TODO: remove this setting in favor of a default execution environment
AWX_CONTAINER_GROUP_DEFAULT_IMAGE = AWX_EXECUTION_ENVIRONMENT_DEFAULT_IMAGE
# Internationalization # Internationalization
# https://docs.djangoproject.com/en/dev/topics/i18n/ # https://docs.djangoproject.com/en/dev/topics/i18n/
@@ -173,6 +185,7 @@ REMOTE_HOST_HEADERS = ['REMOTE_ADDR', 'REMOTE_HOST']
PROXY_IP_ALLOWED_LIST = [] PROXY_IP_ALLOWED_LIST = []
CUSTOM_VENV_PATHS = [] CUSTOM_VENV_PATHS = []
DEFAULT_EXECUTION_ENVIRONMENT = None
# Note: This setting may be overridden by database settings. # Note: This setting may be overridden by database settings.
STDOUT_MAX_BYTES_DISPLAY = 1048576 STDOUT_MAX_BYTES_DISPLAY = 1048576
@@ -679,7 +692,7 @@ AD_HOC_COMMANDS = [
'win_user', 'win_user',
] ]
INV_ENV_VARIABLE_BLOCKED = ("HOME", "USER", "_", "TERM") INV_ENV_VARIABLE_BLOCKED = ("HOME", "USER", "_", "TERM", "PATH")
# ---------------- # ----------------
# -- Amazon EC2 -- # -- Amazon EC2 --
@@ -783,6 +796,8 @@ TOWER_URL_BASE = "https://towerhost"
INSIGHTS_URL_BASE = "https://example.org" INSIGHTS_URL_BASE = "https://example.org"
INSIGHTS_AGENT_MIME = 'application/example' INSIGHTS_AGENT_MIME = 'application/example'
# See https://github.com/ansible/awx-facts-playbooks
INSIGHTS_SYSTEM_ID_FILE='/etc/redhat-access-insights/machine-id'
TOWER_SETTINGS_MANIFEST = {} TOWER_SETTINGS_MANIFEST = {}


@@ -177,15 +177,6 @@ CELERYBEAT_SCHEDULE.update({ # noqa
CLUSTER_HOST_ID = socket.gethostname() CLUSTER_HOST_ID = socket.gethostname()
if 'Docker Desktop' in os.getenv('OS', ''):
os.environ['SDB_NOTIFY_HOST'] = 'docker.for.mac.host.internal'
else:
try:
os.environ['SDB_NOTIFY_HOST'] = os.popen('ip route').read().split(' ')[2]
except Exception:
pass
AWX_CALLBACK_PROFILE = True AWX_CALLBACK_PROFILE = True
if 'sqlite3' not in DATABASES['default']['ENGINE']: # noqa if 'sqlite3' not in DATABASES['default']['ENGINE']: # noqa


@@ -8,6 +8,7 @@ WORKDIR /ui_next
ADD public public ADD public public
ADD package.json package.json ADD package.json package.json
ADD package-lock.json package-lock.json ADD package-lock.json package-lock.json
ADD .linguirc .linguirc
COPY ${NPMRC_FILE} .npmrc COPY ${NPMRC_FILE} .npmrc
RUN npm install RUN npm install
ADD src src ADD src src


@@ -86,7 +86,7 @@ Instances of orgs list include:
**Instance Groups list** **Instance Groups list**
- Name - search is ?name=ig - Name - search is ?name=ig
- ? is_containerized boolean choice (doesn't work right now in API but will soon) - search is ?is_containerized=true - ? is_container_group boolean choice (doesn't work right now in API but will soon) - search is ?is_container_group=true
- ? credential name - search is ?credentials__name=kubey - ? credential name - search is ?credentials__name=kubey
Instance of instance groups list include: Instance of instance groups list include:
@@ -136,7 +136,7 @@ Instance of team lists include:
**Credentials list** **Credentials list**
- Name - Name
- ? Type (dropdown on right with different types) - ? Type (dropdown on right with different types)
- ? Created by (username) - ? Created by (username)
- ? Modified by (username) - ? Modified by (username)
@@ -273,7 +273,7 @@ For the UI url params, we want to only encode those params that aren't defaults,
#### mergeParams vs. replaceParams #### mergeParams vs. replaceParams
**mergeParams** is used to support putting values with the same key
From a UX perspective, we wanted to be able to support searching on the same key multiple times (i.e. searching for things like `?foo=bar&foo=baz`). We do this by creating an array of all values. i.e.:
@@ -361,7 +361,7 @@ Smart search will be able to craft the tag through various states. Note that th
"instance_groups__search" "instance_groups__search"
], ],
``` ```
PHASE 3: keys, given by object key names for data.actions.GET
- type is given for each key which we could use to help craft the value - type is given for each key which we could use to help craft the value
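The mergeParams behavior described above (the real helper lives in the UI's `qs` util, in JavaScript) can be sketched in Python: repeated keys accumulate into an array of values rather than overwriting each other.

```python
def merge_params(old, new):
    """Merge query-param dicts; duplicate keys collect into lists."""
    merged = dict(old)
    for key, value in new.items():
        if key in merged:
            # normalize the existing value to a list, then append
            existing = merged[key] if isinstance(merged[key], list) else [merged[key]]
            merged[key] = existing + [value]
        else:
            merged[key] = value
    return merged


params = merge_params({'foo': 'bar'}, {'foo': 'baz'})
print(params)  # {'foo': ['bar', 'baz']} -> serializes to ?foo=bar&foo=baz
```

replaceParams, by contrast, would simply let the new value overwrite the old one.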


@@ -55,6 +55,11 @@
"react-scripts": "^3.4.4" "react-scripts": "^3.4.4"
}, },
"scripts": { "scripts": {
"prelint": "lingui compile",
"prestart": "lingui compile",
"prestart-instrumented": "lingui compile",
"pretest": "lingui compile",
"pretest-watch": "lingui compile",
"start": "PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts start", "start": "PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts start",
"start-instrumented": "DEBUG=instrument-cra PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts -r @cypress/instrument-cra start", "start-instrumented": "DEBUG=instrument-cra PORT=3001 HTTPS=true DANGEROUSLY_DISABLE_HOST_CHECK=true react-scripts -r @cypress/instrument-cra start",
"build": "INLINE_RUNTIME_CHUNK=false react-scripts build", "build": "INLINE_RUNTIME_CHUNK=false react-scripts build",


@@ -7,6 +7,7 @@ import CredentialInputSources from './models/CredentialInputSources';
import CredentialTypes from './models/CredentialTypes'; import CredentialTypes from './models/CredentialTypes';
import Credentials from './models/Credentials'; import Credentials from './models/Credentials';
import Dashboard from './models/Dashboard'; import Dashboard from './models/Dashboard';
import ExecutionEnvironments from './models/ExecutionEnvironments';
import Groups from './models/Groups'; import Groups from './models/Groups';
import Hosts from './models/Hosts'; import Hosts from './models/Hosts';
import InstanceGroups from './models/InstanceGroups'; import InstanceGroups from './models/InstanceGroups';
@@ -50,6 +51,7 @@ const CredentialInputSourcesAPI = new CredentialInputSources();
const CredentialTypesAPI = new CredentialTypes(); const CredentialTypesAPI = new CredentialTypes();
const CredentialsAPI = new Credentials(); const CredentialsAPI = new Credentials();
const DashboardAPI = new Dashboard(); const DashboardAPI = new Dashboard();
const ExecutionEnvironmentsAPI = new ExecutionEnvironments();
const GroupsAPI = new Groups(); const GroupsAPI = new Groups();
const HostsAPI = new Hosts(); const HostsAPI = new Hosts();
const InstanceGroupsAPI = new InstanceGroups(); const InstanceGroupsAPI = new InstanceGroups();
@@ -94,6 +96,7 @@ export {
CredentialTypesAPI, CredentialTypesAPI,
CredentialsAPI, CredentialsAPI,
DashboardAPI, DashboardAPI,
ExecutionEnvironmentsAPI,
GroupsAPI, GroupsAPI,
HostsAPI, HostsAPI,
InstanceGroupsAPI, InstanceGroupsAPI,


@@ -0,0 +1,10 @@
import Base from '../Base';
class ExecutionEnvironments extends Base {
constructor(http) {
super(http);
this.baseUrl = '/api/v2/execution_environments/';
}
}
export default ExecutionEnvironments;


@@ -30,6 +30,18 @@ class Organizations extends InstanceGroupsMixin(NotificationsMixin(Base)) {
}); });
} }
readExecutionEnvironments(id, params) {
return this.http.get(`${this.baseUrl}${id}/execution_environments/`, {
params,
});
}
readExecutionEnvironmentsOptions(id, params) {
return this.http.options(`${this.baseUrl}${id}/execution_environments/`, {
params,
});
}
createUser(id, data) { createUser(id, data) {
return this.http.post(`${this.baseUrl}${id}/users/`, data); return this.http.post(`${this.baseUrl}${id}/users/`, data);
} }


@@ -167,9 +167,10 @@ function CredentialsStep({ i18n }) {
const hasSameVaultID = val => const hasSameVaultID = val =>
val?.inputs?.vault_id !== undefined && val?.inputs?.vault_id !== undefined &&
val?.inputs?.vault_id === item?.inputs?.vault_id; val?.inputs?.vault_id === item?.inputs?.vault_id;
const hasSameKind = val => val.kind === item.kind; const hasSameCredentialType = val =>
val.credential_type === item.credential_type;
const newItems = field.value.filter(i => const newItems = field.value.filter(i =>
isVault ? !hasSameVaultID(i) : !hasSameKind(i) isVault ? !hasSameVaultID(i) : !hasSameCredentialType(i)
); );
newItems.push(item); newItems.push(item);
helpers.setValue(newItems); helpers.setValue(newItems);


@@ -0,0 +1,195 @@
import React, { useCallback, useEffect } from 'react';
import { string, func, bool, oneOfType, number } from 'prop-types';
import { useLocation } from 'react-router-dom';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { FormGroup, Tooltip } from '@patternfly/react-core';
import { ExecutionEnvironmentsAPI, ProjectsAPI } from '../../api';
import { ExecutionEnvironment } from '../../types';
import { getQSConfig, parseQueryString, mergeParams } from '../../util/qs';
import Popover from '../Popover';
import OptionsList from '../OptionsList';
import useRequest from '../../util/useRequest';
import Lookup from './Lookup';
import LookupErrorMessage from './shared/LookupErrorMessage';
const QS_CONFIG = getQSConfig('execution_environments', {
page: 1,
page_size: 5,
order_by: 'name',
});
function ExecutionEnvironmentLookup({
globallyAvailable,
i18n,
isDefaultEnvironment,
isDisabled,
onBlur,
onChange,
organizationId,
popoverContent,
projectId,
tooltip,
value,
}) {
const location = useLocation();
const {
request: fetchProject,
error: fetchProjectError,
isLoading: fetchProjectLoading,
result: project,
} = useRequest(
useCallback(async () => {
if (!projectId) {
return {};
}
const { data } = await ProjectsAPI.readDetail(projectId);
return data;
}, [projectId]),
{
project: null,
}
);
useEffect(() => {
fetchProject();
}, [fetchProject]);
const {
result: {
executionEnvironments,
count,
relatedSearchableKeys,
searchableKeys,
},
request: fetchExecutionEnvironments,
error,
isLoading,
} = useRequest(
useCallback(async () => {
const params = parseQueryString(QS_CONFIG, location.search);
const globallyAvailableParams = globallyAvailable
? { or__organization__isnull: 'True' }
: {};
const organizationIdParams =
organizationId || project?.organization
? { or__organization__id: organizationId }
: {};
const [{ data }, actionsResponse] = await Promise.all([
ExecutionEnvironmentsAPI.read(
mergeParams(params, {
...globallyAvailableParams,
...organizationIdParams,
})
),
ExecutionEnvironmentsAPI.readOptions(),
]);
return {
executionEnvironments: data.results,
count: data.count,
relatedSearchableKeys: (
actionsResponse?.data?.related_search_fields || []
).map(val => val.slice(0, -8)),
searchableKeys: Object.keys(
actionsResponse.data.actions?.GET || {}
).filter(key => actionsResponse.data.actions?.GET[key].filterable),
};
}, [location, globallyAvailable, organizationId, project]),
{
executionEnvironments: [],
count: 0,
relatedSearchableKeys: [],
searchableKeys: [],
}
);
useEffect(() => {
fetchExecutionEnvironments();
}, [fetchExecutionEnvironments]);
const renderLookup = () => (
<>
<Lookup
id="execution-environments"
header={i18n._(t`Execution Environments`)}
value={value}
onBlur={onBlur}
onChange={onChange}
qsConfig={QS_CONFIG}
isLoading={isLoading || fetchProjectLoading}
isDisabled={isDisabled}
renderOptionsList={({ state, dispatch, canDelete }) => (
<OptionsList
value={state.selectedItems}
options={executionEnvironments}
optionCount={count}
searchColumns={[
{
name: i18n._(t`Name`),
key: 'name__icontains',
isDefault: true,
},
]}
sortColumns={[
{
name: i18n._(t`Name`),
key: 'name',
},
]}
searchableKeys={searchableKeys}
relatedSearchableKeys={relatedSearchableKeys}
multiple={state.multiple}
header={i18n._(t`Execution Environment`)}
name="executionEnvironments"
qsConfig={QS_CONFIG}
readOnly={!canDelete}
selectItem={item => dispatch({ type: 'SELECT_ITEM', item })}
deselectItem={item => dispatch({ type: 'DESELECT_ITEM', item })}
/>
)}
/>
</>
);
return (
<FormGroup
fieldId="execution-environment-lookup"
label={
isDefaultEnvironment
? i18n._(t`Default Execution Environment`)
: i18n._(t`Execution Environment`)
}
labelIcon={popoverContent && <Popover content={popoverContent} />}
>
{isDisabled ? (
<Tooltip content={tooltip}>{renderLookup()}</Tooltip>
) : (
renderLookup()
)}
<LookupErrorMessage error={error || fetchProjectError} />
</FormGroup>
);
}
ExecutionEnvironmentLookup.propTypes = {
value: ExecutionEnvironment,
popoverContent: string,
onChange: func.isRequired,
isDefaultEnvironment: bool,
projectId: oneOfType([number, string]),
organizationId: oneOfType([number, string]),
};
ExecutionEnvironmentLookup.defaultProps = {
popoverContent: '',
isDefaultEnvironment: false,
value: null,
projectId: null,
organizationId: null,
};
export default withI18n()(ExecutionEnvironmentLookup);


@@ -0,0 +1,100 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import ExecutionEnvironmentLookup from './ExecutionEnvironmentLookup';
import { ExecutionEnvironmentsAPI, ProjectsAPI } from '../../api';
jest.mock('../../api');
const mockedExecutionEnvironments = {
count: 1,
results: [
{
id: 2,
name: 'Foo',
image: 'quay.io/ansible/awx-ee',
pull: 'missing',
},
],
};
const executionEnvironment = {
id: 42,
name: 'Bar',
image: 'quay.io/ansible/bar',
pull: 'missing',
};
describe('ExecutionEnvironmentLookup', () => {
let wrapper;
beforeEach(() => {
ExecutionEnvironmentsAPI.read.mockResolvedValue(
mockedExecutionEnvironments
);
ProjectsAPI.read.mockResolvedValue({
data: {
count: 1,
results: [
{
id: 1,
name: 'Fuz',
},
],
},
});
});
afterEach(() => {
jest.clearAllMocks();
wrapper.unmount();
});
test('should render successfully', async () => {
ExecutionEnvironmentsAPI.readOptions.mockReturnValue({
data: {
actions: {
GET: {},
POST: {},
},
related_search_fields: [],
},
});
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentLookup
isDefaultEnvironment
value={executionEnvironment}
onChange={() => {}}
/>
);
});
wrapper.update();
expect(ExecutionEnvironmentsAPI.read).toHaveBeenCalledTimes(2);
expect(wrapper.find('ExecutionEnvironmentLookup')).toHaveLength(1);
expect(
wrapper.find('FormGroup[label="Default Execution Environment"]').length
).toBe(1);
expect(
wrapper.find('FormGroup[label="Execution Environment"]').length
).toBe(0);
});
test('should fetch execution environments', async () => {
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentLookup
value={executionEnvironment}
onChange={() => {}}
/>
);
});
expect(ExecutionEnvironmentsAPI.read).toHaveBeenCalledTimes(2);
expect(
wrapper.find('FormGroup[label="Default Execution Environment"]').length
).toBe(0);
expect(
wrapper.find('FormGroup[label="Execution Environment"]').length
).toBe(1);
});
});


@@ -30,6 +30,7 @@ function OrganizationLookup({
history, history,
autoPopulate, autoPopulate,
isDisabled, isDisabled,
helperText,
}) { }) {
const autoPopulateLookup = useAutoPopulateLookup(onChange); const autoPopulateLookup = useAutoPopulateLookup(onChange);
@@ -79,6 +80,7 @@ function OrganizationLookup({
isRequired={required} isRequired={required}
validated={isValid ? 'default' : 'error'} validated={isValid ? 'default' : 'error'}
label={i18n._(t`Organization`)} label={i18n._(t`Organization`)}
helperText={helperText}
> >
<Lookup <Lookup
isDisabled={isDisabled} isDisabled={isDisabled}

View File

@@ -7,3 +7,4 @@ export { default as CredentialLookup } from './CredentialLookup';
export { default as ApplicationLookup } from './ApplicationLookup'; export { default as ApplicationLookup } from './ApplicationLookup';
export { default as HostFilterLookup } from './HostFilterLookup'; export { default as HostFilterLookup } from './HostFilterLookup';
export { default as OrganizationLookup } from './OrganizationLookup'; export { default as OrganizationLookup } from './OrganizationLookup';
export { default as ExecutionEnvironmentLookup } from './ExecutionEnvironmentLookup';


@@ -20,11 +20,11 @@ const WarningMessage = styled(Alert)`
margin-top: 10px; margin-top: 10px;
`; `;
const requireNameOrUsername = props => { const requiredField = props => {
const { name, username } = props; const { name, username, image } = props;
if (!name && !username) { if (!name && !username && !image) {
return new Error( return new Error(
`One of 'name' or 'username' is required by ItemToDelete component.` `One of 'name', 'username' or 'image' is required by ItemToDelete component.`
); );
} }
if (name) { if (name) {
@@ -47,13 +47,24 @@ const requireNameOrUsername = props => {
'ItemToDelete' 'ItemToDelete'
); );
} }
if (image) {
checkPropTypes(
{
image: string,
},
{ image: props.image },
'prop',
'ItemToDelete'
);
}
return null; return null;
}; };
const ItemToDelete = shape({ const ItemToDelete = shape({
id: number.isRequired, id: number.isRequired,
name: requireNameOrUsername, name: requiredField,
username: requireNameOrUsername, username: requiredField,
image: requiredField,
summary_fields: shape({ summary_fields: shape({
user_capabilities: shape({ user_capabilities: shape({
delete: bool.isRequired, delete: bool.isRequired,
@@ -171,7 +182,7 @@ function ToolbarDeleteButton({
<div>{i18n._(t`This action will delete the following:`)}</div> <div>{i18n._(t`This action will delete the following:`)}</div>
{itemsToDelete.map(item => ( {itemsToDelete.map(item => (
<span key={item.id}> <span key={item.id}>
<strong>{item.name || item.username}</strong> <strong>{item.name || item.username || item.image}</strong>
<br /> <br />
</span> </span>
))} ))}

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long


@@ -2,13 +2,13 @@ import { t } from '@lingui/macro';
import ActivityStream from './screens/ActivityStream'; import ActivityStream from './screens/ActivityStream';
import Applications from './screens/Application'; import Applications from './screens/Application';
import Credentials from './screens/Credential';
import CredentialTypes from './screens/CredentialType'; import CredentialTypes from './screens/CredentialType';
import Credentials from './screens/Credential';
import Dashboard from './screens/Dashboard'; import Dashboard from './screens/Dashboard';
import ExecutionEnvironments from './screens/ExecutionEnvironment';
import Hosts from './screens/Host'; import Hosts from './screens/Host';
import InstanceGroups from './screens/InstanceGroup'; import InstanceGroups from './screens/InstanceGroup';
import Inventory from './screens/Inventory'; import Inventory from './screens/Inventory';
import { Jobs } from './screens/Job';
import ManagementJobs from './screens/ManagementJob'; import ManagementJobs from './screens/ManagementJob';
import NotificationTemplates from './screens/NotificationTemplate'; import NotificationTemplates from './screens/NotificationTemplate';
import Organizations from './screens/Organization'; import Organizations from './screens/Organization';
@@ -19,6 +19,7 @@ import Teams from './screens/Team';
import Templates from './screens/Template'; import Templates from './screens/Template';
import Users from './screens/User'; import Users from './screens/User';
import WorkflowApprovals from './screens/WorkflowApproval'; import WorkflowApprovals from './screens/WorkflowApproval';
import { Jobs } from './screens/Job';
// Ideally, this should just be a regular object that we export, but we // Ideally, this should just be a regular object that we export, but we
// need the i18n. When lingui3 arrives, we will be able to import i18n // need the i18n. When lingui3 arrives, we will be able to import i18n
@@ -138,6 +139,11 @@ function getRouteConfig(i18n) {
path: '/applications', path: '/applications',
screen: Applications, screen: Applications,
}, },
{
title: i18n._(t`Execution Environments`),
path: '/execution_environments',
screen: ExecutionEnvironments,
},
], ],
}, },
{ {


@@ -0,0 +1,126 @@
import React, { useEffect, useCallback } from 'react';
import {
Link,
Redirect,
Route,
Switch,
useLocation,
useParams,
} from 'react-router-dom';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Card, PageSection } from '@patternfly/react-core';
import { CaretLeftIcon } from '@patternfly/react-icons';
import useRequest from '../../util/useRequest';
import { ExecutionEnvironmentsAPI } from '../../api';
import RoutedTabs from '../../components/RoutedTabs';
import ContentError from '../../components/ContentError';
import ContentLoading from '../../components/ContentLoading';
import ExecutionEnvironmentDetails from './ExecutionEnvironmentDetails';
import ExecutionEnvironmentEdit from './ExecutionEnvironmentEdit';
function ExecutionEnvironment({ i18n, setBreadcrumb }) {
const { id } = useParams();
const { pathname } = useLocation();
const {
isLoading,
error: contentError,
request: fetchExecutionEnvironments,
result: executionEnvironment,
} = useRequest(
useCallback(async () => {
const { data } = await ExecutionEnvironmentsAPI.readDetail(id);
return data;
}, [id]),
null
);
useEffect(() => {
fetchExecutionEnvironments();
}, [fetchExecutionEnvironments, pathname]);
useEffect(() => {
if (executionEnvironment) {
setBreadcrumb(executionEnvironment);
}
}, [executionEnvironment, setBreadcrumb]);
const tabsArray = [
{
name: (
<>
<CaretLeftIcon />
{i18n._(t`Back to execution environments`)}
</>
),
link: '/execution_environments',
id: 99,
},
{
name: i18n._(t`Details`),
link: `/execution_environments/${id}/details`,
id: 0,
},
];
if (!isLoading && contentError) {
return (
<PageSection>
<Card>
<ContentError error={contentError}>
{contentError.response?.status === 404 && (
<span>
{i18n._(t`Execution environment not found.`)}{' '}
<Link to="/execution_environments">
{i18n._(t`View all execution environments`)}
</Link>
</span>
)}
</ContentError>
</Card>
</PageSection>
);
}
let cardHeader = <RoutedTabs tabsArray={tabsArray} />;
if (pathname.endsWith('edit')) {
cardHeader = null;
}
return (
<PageSection>
<Card>
{cardHeader}
{isLoading && <ContentLoading />}
{!isLoading && executionEnvironment && (
<Switch>
<Redirect
from="/execution_environments/:id"
to="/execution_environments/:id/details"
exact
/>
{executionEnvironment && (
<>
<Route path="/execution_environments/:id/edit">
<ExecutionEnvironmentEdit
executionEnvironment={executionEnvironment}
/>
</Route>
<Route path="/execution_environments/:id/details">
<ExecutionEnvironmentDetails
executionEnvironment={executionEnvironment}
/>
</Route>
</>
)}
</Switch>
)}
</Card>
</PageSection>
);
}
export default withI18n()(ExecutionEnvironment);


@@ -0,0 +1,50 @@
import React, { useState } from 'react';
import { Card, PageSection } from '@patternfly/react-core';
import { useHistory } from 'react-router-dom';
import { ExecutionEnvironmentsAPI } from '../../../api';
import { Config } from '../../../contexts/Config';
import { CardBody } from '../../../components/Card';
import ExecutionEnvironmentForm from '../shared/ExecutionEnvironmentForm';
function ExecutionEnvironmentAdd() {
const history = useHistory();
const [submitError, setSubmitError] = useState(null);
const handleSubmit = async values => {
try {
const { data: response } = await ExecutionEnvironmentsAPI.create({
...values,
credential: values.credential?.id,
organization: values.organization?.id,
});
history.push(`/execution_environments/${response.id}/details`);
} catch (error) {
setSubmitError(error);
}
};
const handleCancel = () => {
history.push(`/execution_environments`);
};
return (
<PageSection>
<Card>
<CardBody>
<Config>
{({ me }) => (
<ExecutionEnvironmentForm
onSubmit={handleSubmit}
submitError={submitError}
onCancel={handleCancel}
me={me || {}}
/>
)}
</Config>
</CardBody>
</Card>
</PageSection>
);
}
export default ExecutionEnvironmentAdd;


@@ -0,0 +1,109 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { createMemoryHistory } from 'history';
import {
mountWithContexts,
waitForElement,
} from '../../../../testUtils/enzymeHelpers';
import { ExecutionEnvironmentsAPI } from '../../../api';
import ExecutionEnvironmentAdd from './ExecutionEnvironmentAdd';
jest.mock('../../../api');
const mockMe = {
is_superuser: true,
is_system_auditor: false,
};
const executionEnvironmentData = {
name: 'Test EE',
credential: 4,
description: 'A simple EE',
image: 'https://registry.com/image/container',
pull: 'one',
};
const mockOptions = {
data: {
actions: {
POST: {
pull: {
choices: [
['one', 'One'],
['two', 'Two'],
['three', 'Three'],
],
},
},
},
},
};
ExecutionEnvironmentsAPI.readOptions.mockResolvedValue(mockOptions);
ExecutionEnvironmentsAPI.create.mockResolvedValue({
data: {
id: 42,
},
});
describe('<ExecutionEnvironmentAdd/>', () => {
let wrapper;
let history;
beforeEach(async () => {
history = createMemoryHistory({
initialEntries: ['/execution_environments'],
});
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentAdd me={mockMe} />, {
context: { router: { history } },
});
});
});
afterEach(() => {
jest.clearAllMocks();
wrapper.unmount();
});
test('handleSubmit should call the api and redirect to details page', async () => {
await act(async () => {
wrapper.find('ExecutionEnvironmentForm').prop('onSubmit')({
executionEnvironmentData,
});
});
wrapper.update();
expect(ExecutionEnvironmentsAPI.create).toHaveBeenCalledWith({
executionEnvironmentData,
});
expect(history.location.pathname).toBe(
'/execution_environments/42/details'
);
});
test('handleCancel should return the user back to the execution environments list', async () => {
await waitForElement(wrapper, 'ContentLoading', el => el.length === 0);
wrapper.find('Button[aria-label="Cancel"]').simulate('click');
expect(history.location.pathname).toEqual('/execution_environments');
});
test('failed form submission should show an error message', async () => {
const error = {
response: {
data: { detail: 'An error occurred' },
},
};
ExecutionEnvironmentsAPI.create.mockImplementationOnce(() =>
Promise.reject(error)
);
await act(async () => {
wrapper.find('ExecutionEnvironmentForm').invoke('onSubmit')(
executionEnvironmentData
);
});
wrapper.update();
expect(wrapper.find('FormSubmitError').length).toBe(1);
});
});


@@ -0,0 +1 @@
export { default } from './ExecutionEnvironmentAdd';


@@ -0,0 +1,138 @@
import React, { useCallback } from 'react';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Link, useHistory } from 'react-router-dom';
import { Button, Label } from '@patternfly/react-core';
import AlertModal from '../../../components/AlertModal';
import { CardBody, CardActionsRow } from '../../../components/Card';
import DeleteButton from '../../../components/DeleteButton';
import {
Detail,
DetailList,
UserDateDetail,
} from '../../../components/DetailList';
import useRequest, { useDismissableError } from '../../../util/useRequest';
import { toTitleCase } from '../../../util/strings';
import { ExecutionEnvironmentsAPI } from '../../../api';
function ExecutionEnvironmentDetails({ executionEnvironment, i18n }) {
const history = useHistory();
const {
id,
name,
image,
description,
pull,
organization,
summary_fields,
} = executionEnvironment;
const {
request: deleteExecutionEnvironment,
isLoading,
error: deleteError,
} = useRequest(
useCallback(async () => {
await ExecutionEnvironmentsAPI.destroy(id);
history.push(`/execution_environments`);
}, [id, history])
);
const { error, dismissError } = useDismissableError(deleteError);
return (
<CardBody>
<DetailList>
<Detail
label={i18n._(t`Name`)}
value={name}
dataCy="execution-environment-detail-name"
/>
<Detail
label={i18n._(t`Image`)}
value={image}
dataCy="execution-environment-detail-image"
/>
<Detail
label={i18n._(t`Description`)}
value={description}
dataCy="execution-environment-detail-description"
/>
<Detail
label={i18n._(t`Organization`)}
value={
organization ? (
<Link
to={`/organizations/${summary_fields.organization.id}/details`}
>
{summary_fields.organization.name}
</Link>
) : (
i18n._(t`Globally Available`)
)
}
dataCy="execution-environment-detail-organization"
/>
<Detail
label={i18n._(t`Pull`)}
value={pull === '' ? i18n._(t`Missing`) : toTitleCase(pull)}
dataCy="execution-environment-pull"
/>
{executionEnvironment.summary_fields.credential && (
<Detail
label={i18n._(t`Credential`)}
value={
<Label variant="outline" color="blue">
{executionEnvironment.summary_fields.credential.name}
</Label>
}
dataCy="execution-environment-credential"
/>
)}
<UserDateDetail
label={i18n._(t`Created`)}
date={executionEnvironment.created}
user={executionEnvironment.summary_fields.created_by}
dataCy="execution-environment-created"
/>
<UserDateDetail
label={i18n._(t`Last Modified`)}
date={executionEnvironment.modified}
user={executionEnvironment.summary_fields.modified_by}
dataCy="execution-environment-modified"
/>
</DetailList>
<CardActionsRow>
<Button
aria-label={i18n._(t`edit`)}
component={Link}
to={`/execution_environments/${id}/edit`}
ouiaId="edit-button"
>
{i18n._(t`Edit`)}
</Button>
<DeleteButton
name={image}
modalTitle={i18n._(t`Delete Execution Environment`)}
onConfirm={deleteExecutionEnvironment}
isDisabled={isLoading}
ouiaId="delete-button"
>
{i18n._(t`Delete`)}
</DeleteButton>
</CardActionsRow>
{error && (
<AlertModal
isOpen={error}
onClose={dismissError}
title={i18n._(t`Error`)}
variant="error"
>
{i18n._(t`Failed to delete execution environment.`)}
</AlertModal>
)}
</CardBody>
);
}
export default withI18n()(ExecutionEnvironmentDetails);


@@ -0,0 +1,138 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { createMemoryHistory } from 'history';
import { mountWithContexts } from '../../../../testUtils/enzymeHelpers';
import { ExecutionEnvironmentsAPI } from '../../../api';
import ExecutionEnvironmentDetails from './ExecutionEnvironmentDetails';
jest.mock('../../../api');
const executionEnvironment = {
id: 17,
type: 'execution_environment',
url: '/api/v2/execution_environments/17/',
related: {
created_by: '/api/v2/users/1/',
modified_by: '/api/v2/users/1/',
activity_stream: '/api/v2/execution_environments/17/activity_stream/',
unified_job_templates:
'/api/v2/execution_environments/17/unified_job_templates/',
credential: '/api/v2/credentials/4/',
},
summary_fields: {
credential: {
id: 4,
name: 'Container Registry',
},
created_by: {
id: 1,
username: 'admin',
first_name: '',
last_name: '',
},
modified_by: {
id: 1,
username: 'admin',
first_name: '',
last_name: '',
},
},
name: 'Default EE',
created: '2020-09-17T20:14:15.408782Z',
modified: '2020-09-17T20:14:15.408802Z',
description: 'Foo',
organization: null,
image: 'https://localhost:90/12345/ma',
managed_by_tower: false,
credential: 4,
};
describe('<ExecutionEnvironmentDetails/>', () => {
let wrapper;
test('should render details properly', async () => {
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentDetails
executionEnvironment={executionEnvironment}
/>
);
});
wrapper.update();
expect(wrapper.find('Detail[label="Image"]').prop('value')).toEqual(
executionEnvironment.image
);
expect(wrapper.find('Detail[label="Description"]').prop('value')).toEqual(
'Foo'
);
expect(wrapper.find('Detail[label="Organization"]').prop('value')).toEqual(
'Globally Available'
);
expect(
wrapper.find('Detail[label="Credential"]').prop('value').props.children
).toEqual(executionEnvironment.summary_fields.credential.name);
const dates = wrapper.find('UserDateDetail');
expect(dates).toHaveLength(2);
expect(dates.at(0).prop('date')).toEqual(executionEnvironment.created);
expect(dates.at(1).prop('date')).toEqual(executionEnvironment.modified);
});
test('should render organization detail', async () => {
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentDetails
executionEnvironment={{
...executionEnvironment,
organization: 1,
summary_fields: {
organization: { id: 1, name: 'Bar' },
credential: {
id: 4,
name: 'Container Registry',
},
},
}}
/>
);
});
wrapper.update();
expect(wrapper.find('Detail[label="Image"]').prop('value')).toEqual(
executionEnvironment.image
);
expect(wrapper.find('Detail[label="Description"]').prop('value')).toEqual(
'Foo'
);
expect(wrapper.find(`Detail[label="Organization"] dd`).text()).toBe('Bar');
expect(
wrapper.find('Detail[label="Credential"]').prop('value').props.children
).toEqual(executionEnvironment.summary_fields.credential.name);
const dates = wrapper.find('UserDateDetail');
expect(dates).toHaveLength(2);
expect(dates.at(0).prop('date')).toEqual(executionEnvironment.created);
expect(dates.at(1).prop('date')).toEqual(executionEnvironment.modified);
});
test('expected api call is made for delete', async () => {
const history = createMemoryHistory({
initialEntries: ['/execution_environments/42/details'],
});
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentDetails
executionEnvironment={executionEnvironment}
/>,
{
context: { router: { history } },
}
);
});
await act(async () => {
wrapper.find('DeleteButton').invoke('onConfirm')();
});
expect(ExecutionEnvironmentsAPI.destroy).toHaveBeenCalledTimes(1);
expect(history.location.pathname).toBe('/execution_environments');
});
});


@@ -0,0 +1 @@
export { default } from './ExecutionEnvironmentDetails';


@@ -0,0 +1,47 @@
import React, { useState } from 'react';
import { useHistory } from 'react-router-dom';
import { CardBody } from '../../../components/Card';
import { ExecutionEnvironmentsAPI } from '../../../api';
import ExecutionEnvironmentForm from '../shared/ExecutionEnvironmentForm';
import { Config } from '../../../contexts/Config';
function ExecutionEnvironmentEdit({ executionEnvironment }) {
const history = useHistory();
const [submitError, setSubmitError] = useState(null);
const detailsUrl = `/execution_environments/${executionEnvironment.id}/details`;
const handleSubmit = async values => {
try {
await ExecutionEnvironmentsAPI.update(executionEnvironment.id, {
...values,
credential: values.credential ? values.credential.id : null,
organization: values.organization ? values.organization.id : null,
});
history.push(detailsUrl);
} catch (error) {
setSubmitError(error);
}
};
const handleCancel = () => {
history.push(detailsUrl);
};
return (
<CardBody>
<Config>
{({ me }) => (
<ExecutionEnvironmentForm
executionEnvironment={executionEnvironment}
onSubmit={handleSubmit}
submitError={submitError}
onCancel={handleCancel}
me={me || {}}
/>
)}
</Config>
</CardBody>
);
}
export default ExecutionEnvironmentEdit;


@@ -0,0 +1,130 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { createMemoryHistory } from 'history';
import { mountWithContexts } from '../../../../testUtils/enzymeHelpers';
import { ExecutionEnvironmentsAPI } from '../../../api';
import ExecutionEnvironmentEdit from './ExecutionEnvironmentEdit';
jest.mock('../../../api');
const mockMe = {
is_superuser: true,
is_system_auditor: false,
};
const executionEnvironmentData = {
id: 42,
credential: { id: 4 },
description: 'A simple EE',
image: 'https://registry.com/image/container',
pull: 'one',
name: 'Test EE',
};
const updateExecutionEnvironmentData = {
image: 'https://registry.com/image/container2',
description: 'Updated new description',
};
const mockOptions = {
data: {
actions: {
POST: {
pull: {
choices: [
['one', 'One'],
['two', 'Two'],
['three', 'Three'],
],
},
},
},
},
};
ExecutionEnvironmentsAPI.readOptions.mockResolvedValue(mockOptions);
describe('<ExecutionEnvironmentEdit/>', () => {
let wrapper;
let history;
beforeAll(async () => {
history = createMemoryHistory();
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentEdit
executionEnvironment={executionEnvironmentData}
me={mockMe}
/>,
{
context: { router: { history } },
}
);
});
});
afterAll(() => {
jest.clearAllMocks();
wrapper.unmount();
});
test('handleSubmit should call the api and redirect to details page', async () => {
await act(async () => {
wrapper.find('ExecutionEnvironmentForm').invoke('onSubmit')(
updateExecutionEnvironmentData
);
});
wrapper.update();
expect(ExecutionEnvironmentsAPI.update).toHaveBeenCalledWith(42, {
...updateExecutionEnvironmentData,
credential: null,
organization: null,
});
expect(history.location.pathname).toEqual(
'/execution_environments/42/details'
);
});
test('should navigate to execution environments details when cancel is clicked', async () => {
await act(async () => {
wrapper.find('button[aria-label="Cancel"]').prop('onClick')();
});
expect(history.location.pathname).toEqual(
'/execution_environments/42/details'
);
});
test('should navigate to execution environments detail after successful submission', async () => {
await act(async () => {
wrapper.find('ExecutionEnvironmentForm').invoke('onSubmit')(
updateExecutionEnvironmentData
);
});
wrapper.update();
expect(wrapper.find('FormSubmitError').length).toBe(0);
expect(history.location.pathname).toEqual(
'/execution_environments/42/details'
);
});
test('failed form submission should show an error message', async () => {
const error = {
response: {
data: { detail: 'An error occurred' },
},
};
ExecutionEnvironmentsAPI.update.mockImplementationOnce(() =>
Promise.reject(error)
);
await act(async () => {
wrapper.find('ExecutionEnvironmentForm').invoke('onSubmit')(
updateExecutionEnvironmentData
);
});
wrapper.update();
expect(wrapper.find('FormSubmitError').length).toBe(1);
});
});


@@ -0,0 +1 @@
export { default } from './ExecutionEnvironmentEdit';


@@ -0,0 +1,188 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import {
mountWithContexts,
waitForElement,
} from '../../../../testUtils/enzymeHelpers';
import { ExecutionEnvironmentsAPI } from '../../../api';
import ExecutionEnvironmentList from './ExecutionEnvironmentList';
jest.mock('../../../api/models/ExecutionEnvironments');
const executionEnvironments = {
data: {
results: [
{
name: 'Foo',
id: 1,
image: 'https://registry.com/r/image/manifest',
organization: null,
credential: null,
url: '/api/v2/execution_environments/1/',
summary_fields: { user_capabilities: { edit: true, delete: true } },
},
{
name: 'Bar',
id: 2,
image: 'https://registry.com/r/image2/manifest',
organization: null,
credential: null,
url: '/api/v2/execution_environments/2/',
summary_fields: { user_capabilities: { edit: false, delete: true } },
},
],
count: 2,
},
};
const options = { data: { actions: { POST: true } } };
describe('<ExecutionEnvironmentList/>', () => {
beforeEach(() => {
ExecutionEnvironmentsAPI.read.mockResolvedValue(executionEnvironments);
ExecutionEnvironmentsAPI.readOptions.mockResolvedValue(options);
});
afterEach(() => {
jest.clearAllMocks();
});
let wrapper;
test('should mount successfully', async () => {
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentList />);
});
await waitForElement(
wrapper,
'ExecutionEnvironmentList',
el => el.length > 0
);
});
test('should have data fetched and render 2 rows', async () => {
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentList />);
});
await waitForElement(
wrapper,
'ExecutionEnvironmentList',
el => el.length > 0
);
expect(wrapper.find('ExecutionEnvironmentListItem').length).toBe(2);
expect(ExecutionEnvironmentsAPI.read).toBeCalled();
expect(ExecutionEnvironmentsAPI.readOptions).toBeCalled();
});
test('should delete items successfully', async () => {
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentList />);
});
await waitForElement(
wrapper,
'ExecutionEnvironmentList',
el => el.length > 0
);
await act(async () => {
wrapper
.find('ExecutionEnvironmentListItem')
.at(0)
.invoke('onSelect')();
});
wrapper.update();
await act(async () => {
wrapper
.find('ExecutionEnvironmentListItem')
.at(1)
.invoke('onSelect')();
});
wrapper.update();
await act(async () => {
wrapper.find('ToolbarDeleteButton').invoke('onDelete')();
});
expect(ExecutionEnvironmentsAPI.destroy).toHaveBeenCalledTimes(2);
});
test('should render deletion error modal', async () => {
ExecutionEnvironmentsAPI.destroy.mockRejectedValue(
new Error({
response: {
config: {
method: 'DELETE',
url: '/api/v2/execution_environments',
},
data: 'An error occurred',
},
})
);
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentList />);
});
await waitForElement(wrapper, 'ExecutionEnvironmentList', el => el.length > 0);
wrapper
.find('ExecutionEnvironmentListItem')
.at(0)
.find('input')
.simulate('change', 'a');
wrapper.update();
expect(
wrapper
.find('ExecutionEnvironmentListItem')
.at(0)
.find('input')
.prop('checked')
).toBe(true);
await act(async () =>
wrapper.find('Button[aria-label="Delete"]').prop('onClick')()
);
wrapper.update();
await act(async () =>
wrapper.find('Button[aria-label="confirm delete"]').prop('onClick')()
);
wrapper.update();
expect(wrapper.find('ErrorDetail').length).toBe(1);
});
test('should thrown content error', async () => {
ExecutionEnvironmentsAPI.read.mockRejectedValue(
new Error({
response: {
config: {
method: 'GET',
url: '/api/v2/execution_environments',
},
data: 'An error occurred',
},
})
);
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentList />);
});
await waitForElement(
wrapper,
'ExecutionEnvironmentList',
el => el.length > 0
);
expect(wrapper.find('ContentError').length).toBe(1);
});
test('should not render add button', async () => {
ExecutionEnvironmentsAPI.read.mockResolvedValue(executionEnvironments);
ExecutionEnvironmentsAPI.readOptions.mockResolvedValue({
data: { actions: { POST: false } },
});
await act(async () => {
wrapper = mountWithContexts(<ExecutionEnvironmentList />);
});
await waitForElement(wrapper, 'ExecutionEnvironmentList', el => el.length > 0);
expect(wrapper.find('ToolbarAddButton').length).toBe(0);
});
});


@@ -0,0 +1,221 @@
import React, { useEffect, useCallback } from 'react';
import { useLocation, useRouteMatch } from 'react-router-dom';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Card, PageSection } from '@patternfly/react-core';
import { ExecutionEnvironmentsAPI } from '../../../api';
import { getQSConfig, parseQueryString } from '../../../util/qs';
import useRequest, { useDeleteItems } from '../../../util/useRequest';
import useSelected from '../../../util/useSelected';
import {
ToolbarDeleteButton,
ToolbarAddButton,
} from '../../../components/PaginatedDataList';
import PaginatedTable, {
HeaderRow,
HeaderCell,
} from '../../../components/PaginatedTable';
import ErrorDetail from '../../../components/ErrorDetail';
import AlertModal from '../../../components/AlertModal';
import DatalistToolbar from '../../../components/DataListToolbar';
import ExecutionEnvironmentsListItem from './ExecutionEnvironmentListItem';
const QS_CONFIG = getQSConfig('execution_environments', {
page: 1,
page_size: 20,
order_by: 'name',
});
function ExecutionEnvironmentList({ i18n }) {
const location = useLocation();
const match = useRouteMatch();
const {
error: contentError,
isLoading,
request: fetchExecutionEnvironments,
result: {
executionEnvironments,
executionEnvironmentsCount,
actions,
relatedSearchableKeys,
searchableKeys,
},
} = useRequest(
useCallback(async () => {
const params = parseQueryString(QS_CONFIG, location.search);
const [response, responseActions] = await Promise.all([
ExecutionEnvironmentsAPI.read(params),
ExecutionEnvironmentsAPI.readOptions(),
]);
return {
executionEnvironments: response.data.results,
executionEnvironmentsCount: response.data.count,
actions: responseActions.data.actions,
relatedSearchableKeys: (
responseActions?.data?.related_search_fields || []
).map(val => val.slice(0, -8)),
searchableKeys: Object.keys(
responseActions.data.actions?.GET || {}
).filter(key => responseActions.data.actions?.GET[key].filterable),
};
}, [location]),
{
executionEnvironments: [],
executionEnvironmentsCount: 0,
actions: {},
relatedSearchableKeys: [],
searchableKeys: [],
}
);
useEffect(() => {
fetchExecutionEnvironments();
}, [fetchExecutionEnvironments]);
const { selected, isAllSelected, handleSelect, setSelected } = useSelected(
executionEnvironments
);
const {
isLoading: deleteLoading,
deletionError,
deleteItems: deleteExecutionEnvironments,
clearDeletionError,
} = useDeleteItems(
useCallback(async () => {
await Promise.all(
selected.map(({ id }) => ExecutionEnvironmentsAPI.destroy(id))
);
}, [selected]),
{
qsConfig: QS_CONFIG,
allItemsSelected: isAllSelected,
fetchItems: fetchExecutionEnvironments,
}
);
const handleDelete = async () => {
await deleteExecutionEnvironments();
setSelected([]);
};
const canAdd = actions && actions.POST;
return (
<>
<PageSection>
<Card>
<PaginatedTable
contentError={contentError}
hasContentLoading={isLoading || deleteLoading}
items={executionEnvironments}
itemCount={executionEnvironmentsCount}
pluralizedItemName={i18n._(t`Execution Environments`)}
qsConfig={QS_CONFIG}
onRowClick={handleSelect}
toolbarSearchableKeys={searchableKeys}
toolbarRelatedSearchableKeys={relatedSearchableKeys}
toolbarSearchColumns={[
{
name: i18n._(t`Name`),
key: 'name__icontains',
isDefault: true,
},
{
name: i18n._(t`Image`),
key: 'image__icontains',
},
]}
toolbarSortColumns={[
{
name: i18n._(t`Image`),
key: 'image',
},
{
name: i18n._(t`Created`),
key: 'created',
},
{
name: i18n._(t`Organization`),
key: 'organization',
},
{
name: i18n._(t`Description`),
key: 'description',
},
]}
headerRow={
<HeaderRow qsConfig={QS_CONFIG}>
<HeaderCell sortKey="name">{i18n._(t`Name`)}</HeaderCell>
<HeaderCell>{i18n._(t`Image`)}</HeaderCell>
<HeaderCell>{i18n._(t`Organization`)}</HeaderCell>
<HeaderCell>{i18n._(t`Actions`)}</HeaderCell>
</HeaderRow>
}
renderToolbar={props => (
<DatalistToolbar
{...props}
showSelectAll
isAllSelected={isAllSelected}
onSelectAll={isSelected =>
setSelected(isSelected ? [...executionEnvironments] : [])
}
qsConfig={QS_CONFIG}
additionalControls={[
...(canAdd
? [
<ToolbarAddButton
key="add"
linkTo={`${match.url}/add`}
/>,
]
: []),
<ToolbarDeleteButton
key="delete"
onDelete={handleDelete}
itemsToDelete={selected}
pluralizedItemName={i18n._(t`Execution Environments`)}
/>,
]}
/>
)}
renderRow={(executionEnvironment, index) => (
<ExecutionEnvironmentsListItem
key={executionEnvironment.id}
rowIndex={index}
executionEnvironment={executionEnvironment}
detailUrl={`${match.url}/${executionEnvironment.id}/details`}
onSelect={() => handleSelect(executionEnvironment)}
isSelected={selected.some(
row => row.id === executionEnvironment.id
)}
/>
)}
emptyStateControls={
canAdd && (
<ToolbarAddButton key="add" linkTo={`${match.url}/add`} />
)
}
/>
</Card>
</PageSection>
<AlertModal
aria-label={i18n._(t`Deletion error`)}
isOpen={deletionError}
onClose={clearDeletionError}
title={i18n._(t`Error`)}
variant="error"
>
{i18n._(t`Failed to delete one or more execution environments`)}
<ErrorDetail error={deletionError} />
</AlertModal>
</>
);
}
export default withI18n()(ExecutionEnvironmentList);


@@ -0,0 +1,79 @@
import React from 'react';
import { string, bool, func } from 'prop-types';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Link } from 'react-router-dom';
import { Button } from '@patternfly/react-core';
import { Tr, Td } from '@patternfly/react-table';
import { PencilAltIcon } from '@patternfly/react-icons';
import { ActionsTd, ActionItem } from '../../../components/PaginatedTable';
import { ExecutionEnvironment } from '../../../types';
function ExecutionEnvironmentListItem({
executionEnvironment,
detailUrl,
isSelected,
onSelect,
i18n,
rowIndex,
}) {
const labelId = `check-action-${executionEnvironment.id}`;
return (
<Tr id={`ee-row-${executionEnvironment.id}`}>
<Td
select={{
rowIndex,
isSelected,
onSelect,
disable: false,
}}
dataLabel={i18n._(t`Selected`)}
/>
<Td id={labelId} dataLabel={i18n._(t`Name`)}>
<Link to={`${detailUrl}`}>
<b>{executionEnvironment.name}</b>
</Link>
</Td>
<Td id={labelId} dataLabel={i18n._(t`Image`)}>
{executionEnvironment.image}
</Td>
<Td id={labelId} dataLabel={i18n._(t`Organization`)}>
{executionEnvironment.organization ? (
<Link
to={`/organizations/${executionEnvironment?.summary_fields?.organization?.id}/details`}
>
<b>{executionEnvironment?.summary_fields?.organization?.name}</b>
</Link>
) : (
i18n._(t`Globally Available`)
)}
</Td>
<ActionsTd dataLabel={i18n._(t`Actions`)} gridColumns="auto 40px">
<ActionItem
visible={executionEnvironment.summary_fields.user_capabilities.edit}
tooltip={i18n._(t`Edit Execution Environment`)}
>
<Button
aria-label={i18n._(t`Edit Execution Environment`)}
variant="plain"
component={Link}
to={`/execution_environments/${executionEnvironment.id}/edit`}
>
<PencilAltIcon />
</Button>
</ActionItem>
</ActionsTd>
</Tr>
);
}
ExecutionEnvironmentListItem.propTypes = {
executionEnvironment: ExecutionEnvironment.isRequired,
detailUrl: string.isRequired,
isSelected: bool.isRequired,
onSelect: func.isRequired,
};
export default withI18n()(ExecutionEnvironmentListItem);


@@ -0,0 +1,74 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import { mountWithContexts } from '../../../../testUtils/enzymeHelpers';
import ExecutionEnvironmentListItem from './ExecutionEnvironmentListItem';
describe('<ExecutionEnvironmentListItem/>', () => {
let wrapper;
const executionEnvironment = {
name: 'Foo',
id: 1,
image: 'https://registry.com/r/image/manifest',
organization: null,
credential: null,
summary_fields: { user_capabilities: { edit: true } },
};
test('should mount successfully', async () => {
await act(async () => {
wrapper = mountWithContexts(
<table>
<tbody>
<ExecutionEnvironmentListItem
executionEnvironment={executionEnvironment}
detailUrl="execution_environments/1/details"
isSelected={false}
onSelect={() => {}}
/>
</tbody>
</table>
);
});
expect(wrapper.find('ExecutionEnvironmentListItem').length).toBe(1);
});
test('should render the proper data', async () => {
await act(async () => {
wrapper = mountWithContexts(
<table>
<tbody>
<ExecutionEnvironmentListItem
executionEnvironment={executionEnvironment}
detailUrl="execution_environments/1/details"
isSelected={false}
onSelect={() => {}}
/>
</tbody>
</table>
);
});
expect(
wrapper
.find('Td')
.at(1)
.text()
).toBe(executionEnvironment.name);
expect(
wrapper
.find('Td')
.at(2)
.text()
).toBe(executionEnvironment.image);
expect(
wrapper
.find('Td')
.at(3)
.text()
).toBe('Globally Available');
expect(wrapper.find('PencilAltIcon').exists()).toBeTruthy();
});
});


@@ -0,0 +1 @@
export { default } from './ExecutionEnvironmentList';


@@ -0,0 +1,56 @@
import React, { useState, useCallback } from 'react';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Route, Switch } from 'react-router-dom';
import ExecutionEnvironment from './ExecutionEnvironment';
import ExecutionEnvironmentAdd from './ExecutionEnvironmentAdd';
import ExecutionEnvironmentList from './ExecutionEnvironmentList';
import ScreenHeader from '../../components/ScreenHeader/ScreenHeader';
function ExecutionEnvironments({ i18n }) {
const [breadcrumbConfig, setBreadcrumbConfig] = useState({
'/execution_environments': i18n._(t`Execution environments`),
'/execution_environments/add': i18n._(t`Create Execution Environment`),
});
const buildBreadcrumbConfig = useCallback(
executionEnvironments => {
if (!executionEnvironments) {
return;
}
setBreadcrumbConfig({
'/execution_environments': i18n._(t`Execution environments`),
'/execution_environments/add': i18n._(t`Create Execution Environment`),
[`/execution_environments/${executionEnvironments.id}`]: `${executionEnvironments.name}`,
[`/execution_environments/${executionEnvironments.id}/edit`]: i18n._(
t`Edit details`
),
[`/execution_environments/${executionEnvironments.id}/details`]: i18n._(
t`Details`
),
});
},
[i18n]
);
return (
<>
<ScreenHeader
streamType="execution_environment"
breadcrumbConfig={breadcrumbConfig}
/>
<Switch>
<Route path="/execution_environments/add">
<ExecutionEnvironmentAdd />
</Route>
<Route path="/execution_environments/:id">
<ExecutionEnvironment setBreadcrumb={buildBreadcrumbConfig} />
</Route>
<Route path="/execution_environments">
<ExecutionEnvironmentList />
</Route>
</Switch>
</>
);
}
export default withI18n()(ExecutionEnvironments);


@@ -0,0 +1,25 @@
import React from 'react';
import { mountWithContexts } from '../../../testUtils/enzymeHelpers';
import ExecutionEnvironments from './ExecutionEnvironments';
describe('<ExecutionEnvironments/>', () => {
let pageWrapper;
let pageSections;
beforeEach(() => {
pageWrapper = mountWithContexts(<ExecutionEnvironments />);
pageSections = pageWrapper.find('PageSection');
});
afterEach(() => {
pageWrapper.unmount();
});
test('initially renders without crashing', () => {
expect(pageWrapper.length).toBe(1);
expect(pageSections.length).toBe(1);
expect(pageSections.first().props().variant).toBe('light');
});
});


@@ -0,0 +1 @@
export { default } from './ExecutionEnvironments';


@@ -0,0 +1,211 @@
import React, { useCallback, useEffect } from 'react';
import { func, shape } from 'prop-types';
import { Formik, useField, useFormikContext } from 'formik';
import { withI18n } from '@lingui/react';
import { t } from '@lingui/macro';
import { Form, FormGroup } from '@patternfly/react-core';
import { ExecutionEnvironmentsAPI } from '../../../api';
import CredentialLookup from '../../../components/Lookup/CredentialLookup';
import FormActionGroup from '../../../components/FormActionGroup';
import FormField, { FormSubmitError } from '../../../components/FormField';
import AnsibleSelect from '../../../components/AnsibleSelect';
import { FormColumnLayout } from '../../../components/FormLayout';
import { OrganizationLookup } from '../../../components/Lookup';
import ContentError from '../../../components/ContentError';
import ContentLoading from '../../../components/ContentLoading';
import { required } from '../../../util/validators';
import useRequest from '../../../util/useRequest';
function ExecutionEnvironmentFormFields({
i18n,
me,
options,
executionEnvironment,
}) {
const [credentialField] = useField('credential');
const [organizationField, organizationMeta, organizationHelpers] = useField({
name: 'organization',
validate:
!me?.is_superuser &&
required(i18n._(t`Select a value for this field`), i18n),
});
const { setFieldValue } = useFormikContext();
const onCredentialChange = useCallback(
value => {
setFieldValue('credential', value);
},
[setFieldValue]
);
const onOrganizationChange = useCallback(
value => {
setFieldValue('organization', value);
},
[setFieldValue]
);
const [
containerOptionsField,
containerOptionsMeta,
containerOptionsHelpers,
] = useField({
name: 'pull',
});
const containerPullChoices = options?.actions?.POST?.pull?.choices.map(
([value, label]) => ({ value, label, key: value })
);
return (
<>
<FormField
id="execution-environment-name"
label={i18n._(t`Name`)}
name="name"
type="text"
validate={required(null, i18n)}
isRequired
/>
<FormField
id="execution-environment-image"
label={i18n._(t`Image name`)}
name="image"
type="text"
validate={required(null, i18n)}
isRequired
tooltip={i18n._(
t`The registry location where the container is stored.`
)}
/>
<FormGroup
fieldId="execution-environment-container-options"
helperTextInvalid={containerOptionsMeta.error}
validated={
!containerOptionsMeta.touched || !containerOptionsMeta.error
? 'default'
: 'error'
}
label={i18n._(t`Pull`)}
>
<AnsibleSelect
{...containerOptionsField}
id="container-pull-options"
data={containerPullChoices}
onChange={(event, value) => {
containerOptionsHelpers.setValue(value);
}}
/>
</FormGroup>
<FormField
id="execution-environment-description"
label={i18n._(t`Description`)}
name="description"
type="text"
/>
<OrganizationLookup
helperTextInvalid={organizationMeta.error}
isValid={!organizationMeta.touched || !organizationMeta.error}
onBlur={() => organizationHelpers.setTouched()}
onChange={onOrganizationChange}
value={organizationField.value}
required={!me?.is_superuser}
helperText={
me?.is_superuser
? i18n._(
t`Leave this field blank to make the execution environment globally available.`
)
: null
}
autoPopulate={!me?.is_superuser ? !executionEnvironment?.id : null}
/>
<CredentialLookup
label={i18n._(t`Registry credential`)}
onChange={onCredentialChange}
value={credentialField.value}
/>
</>
);
}
function ExecutionEnvironmentForm({
executionEnvironment = {},
onSubmit,
onCancel,
submitError,
me,
...rest
}) {
const {
isLoading,
error,
request: fetchOptions,
result: options,
} = useRequest(
useCallback(async () => {
const res = await ExecutionEnvironmentsAPI.readOptions();
const { data } = res;
return data;
}, []),
null
);
useEffect(() => {
fetchOptions();
}, [fetchOptions]);
if (isLoading || !options) {
return <ContentLoading />;
}
if (error) {
return <ContentError error={error} />;
}
const initialValues = {
name: executionEnvironment.name || '',
image: executionEnvironment.image || '',
pull: executionEnvironment?.pull || '',
description: executionEnvironment.description || '',
credential: executionEnvironment.summary_fields?.credential || null,
organization: executionEnvironment.summary_fields?.organization || null,
};
return (
<Formik initialValues={initialValues} onSubmit={values => onSubmit(values)}>
{formik => (
<Form autoComplete="off" onSubmit={formik.handleSubmit}>
<FormColumnLayout>
<ExecutionEnvironmentFormFields
me={me}
options={options}
executionEnvironment={executionEnvironment}
{...rest}
/>
{submitError && <FormSubmitError error={submitError} />}
<FormActionGroup
onCancel={onCancel}
onSubmit={formik.handleSubmit}
/>
</FormColumnLayout>
</Form>
)}
</Formik>
);
}
ExecutionEnvironmentForm.propTypes = {
executionEnvironment: shape({}),
onCancel: func.isRequired,
onSubmit: func.isRequired,
submitError: shape({}),
};
ExecutionEnvironmentForm.defaultProps = {
executionEnvironment: {},
submitError: null,
};
export default withI18n()(ExecutionEnvironmentForm);


@@ -0,0 +1,163 @@
import React from 'react';
import { act } from 'react-dom/test-utils';
import {
mountWithContexts,
waitForElement,
} from '../../../../testUtils/enzymeHelpers';
import { ExecutionEnvironmentsAPI } from '../../../api';
import ExecutionEnvironmentForm from './ExecutionEnvironmentForm';
jest.mock('../../../api');
const mockMe = {
is_superuser: true,
is_super_auditor: false,
};
const executionEnvironment = {
id: 16,
name: 'Test EE',
type: 'execution_environment',
pull: 'one',
url: '/api/v2/execution_environments/16/',
related: {
created_by: '/api/v2/users/1/',
modified_by: '/api/v2/users/1/',
activity_stream: '/api/v2/execution_environments/16/activity_stream/',
unified_job_templates:
'/api/v2/execution_environments/16/unified_job_templates/',
credential: '/api/v2/credentials/4/',
},
summary_fields: {
credential: {
id: 4,
name: 'Container Registry',
},
},
created: '2020-09-17T16:06:57.346128Z',
modified: '2020-09-17T16:06:57.346147Z',
description: 'A simple EE',
organization: null,
image: 'https://registry.com/image/container',
managed_by_tower: false,
credential: 4,
};
const mockOptions = {
data: {
actions: {
POST: {
pull: {
choices: [
['one', 'One'],
['two', 'Two'],
['three', 'Three'],
],
},
},
},
},
};
describe('<ExecutionEnvironmentForm/>', () => {
let wrapper;
let onCancel;
let onSubmit;
beforeEach(async () => {
onCancel = jest.fn();
onSubmit = jest.fn();
ExecutionEnvironmentsAPI.readOptions.mockResolvedValue(mockOptions);
await act(async () => {
wrapper = mountWithContexts(
<ExecutionEnvironmentForm
onCancel={onCancel}
onSubmit={onSubmit}
executionEnvironment={executionEnvironment}
options={mockOptions}
me={mockMe}
/>
);
});
await waitForElement(wrapper, 'ContentLoading', el => el.length === 0);
});
afterEach(() => {
jest.clearAllMocks();
wrapper.unmount();
});
test('Initially renders successfully', () => {
expect(wrapper.length).toBe(1);
});
test('should display form fields properly', () => {
expect(wrapper.find('FormGroup[label="Image name"]').length).toBe(1);
expect(wrapper.find('FormGroup[label="Description"]').length).toBe(1);
expect(wrapper.find('CredentialLookup').length).toBe(1);
});
test('should call onSubmit when form submitted', async () => {
expect(onSubmit).not.toHaveBeenCalled();
await act(async () => {
wrapper.find('button[aria-label="Save"]').simulate('click');
});
expect(onSubmit).toHaveBeenCalledTimes(1);
});
test('should update form values', async () => {
await act(async () => {
wrapper.find('input#execution-environment-name').simulate('change', {
target: {
value: 'Updated EE Name',
name: 'name',
},
});
wrapper.find('input#execution-environment-image').simulate('change', {
target: {
value: 'https://registry.com/image/container2',
name: 'image',
},
});
wrapper
.find('input#execution-environment-description')
.simulate('change', {
target: { value: 'New description', name: 'description' },
});
wrapper.find('CredentialLookup').invoke('onBlur')();
wrapper.find('CredentialLookup').invoke('onChange')({
id: 99,
name: 'credential',
});
wrapper.find('OrganizationLookup').invoke('onBlur')();
wrapper.find('OrganizationLookup').invoke('onChange')({
id: 3,
name: 'organization',
});
});
wrapper.update();
expect(wrapper.find('OrganizationLookup').prop('value')).toEqual({
id: 3,
name: 'organization',
});
expect(
wrapper.find('input#execution-environment-image').prop('value')
).toEqual('https://registry.com/image/container2');
expect(
wrapper.find('input#execution-environment-description').prop('value')
).toEqual('New description');
expect(wrapper.find('CredentialLookup').prop('value')).toEqual({
id: 99,
name: 'credential',
});
});
test('should call handleCancel when Cancel button is clicked', async () => {
expect(onCancel).not.toHaveBeenCalled();
wrapper.find('button[aria-label="Cancel"]').invoke('onClick')();
expect(onCancel).toBeCalled();
});
});


@@ -32,7 +32,7 @@ const instanceGroup = {
   controller: null,
   is_controller: false,
   is_isolated: false,
-  is_containerized: true,
+  is_container_group: true,
   credential: 71,
   policy_instance_percentage: 0,
   policy_instance_minimum: 0,


@@ -37,7 +37,7 @@ function ContainerGroupEdit({ instanceGroup }) {
     try {
       await InstanceGroupsAPI.update(instanceGroup.id, {
         name: values.name,
-        credential: values.credential.id,
+        credential: values.credential ? values.credential.id : null,
         pod_spec_override: values.override ? values.pod_spec_override : null,
       });
       history.push(detailsIUrl);


@@ -31,7 +31,7 @@ const instanceGroup = {
   controller: null,
   is_controller: false,
   is_isolated: false,
-  is_containerized: true,
+  is_container_group: true,
   credential: 71,
   policy_instance_percentage: 0,
   policy_instance_minimum: 0,
