More AWX docs edits
commit 860715d088, parent e2be392f31
@@ -1,14 +1,14 @@
## Introduction

Starting from Tower 3.3, OAuth2 will be used as the new means of token-based authentication. Users
will be able to manage OAuth2 tokens as well as applications, a server-side representation of API
clients used to generate tokens. With OAuth2, a user can authenticate by passing a token as part of
the HTTP authentication header. The token can be scoped to have more restrictive permissions on top of
the base RBAC permissions of the user. Refer to [RFC 6749](https://tools.ietf.org/html/rfc6749) for
more details of the OAuth2 specification.


## Basic Usage

To get started using OAuth2 tokens to access the browsable API, this document will walk through the steps of acquiring a token and using it.

1. Make an application with `authorization_grant_type` set to 'password'. HTTP POST the following to the `/api/v2/applications/` endpoint (supplying your own `organization-id`):
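The body of that POST is cut off by the hunk boundary above. Purely as an illustration, such a request might look like the following; apart from `authorization_grant_type` and the organization ID, the field names are assumptions rather than values taken from this document:

```
curl -k -u <username>:<password> -H "Content-Type: application/json" -X POST \
    -d '{"name": "My API Client", "organization": <organization-id>, "client_type": "confidential", "authorization_grant_type": "password"}' \
    https://<awx>/api/v2/applications/
```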
@@ -51,18 +51,18 @@ curl -H "Authorization: Bearer <token-value>" -X DELETE https://<awx>/api/v2/tok

## More Information

#### Managing OAuth2 Applications and Tokens

Applications and tokens can be managed as a top-level resource at `/api/v2/applications` and
`/api/v2/tokens`. These resources can also be accessed relative to the user at
`/api/v2/users/N/<resource>`. Applications can be created by making a POST to either `api/v2/applications`
or `/api/v2/users/N/applications`.

Each OAuth2 application represents a specific API client on the server side. For an API client to use the API via an application token,
it must first have an application and issue an access token.

Individual applications will be accessible via their primary keys:
`/api/v2/applications/<pk>/`. Here is a typical application:
```
{
    "id": 1,
@@ -105,8 +105,8 @@ Individual applications will be accessible via their primary keys:
```
In the above example, `user` is the primary key of the user associated with this application and `name` is
a human-readable identifier for the application. The other fields, like `client_id` and
`redirect_uris`, are mainly used for OAuth2 authorization, which will be covered later in the 'Using
OAuth2 Token System' section.

Fields `client_id` and `client_secret` are immutable identifiers of applications, and will be
generated during creation; fields `user` and `authorization_grant_type`, on the other hand, are
@@ -127,7 +127,7 @@ token scope; or POSTing to `/api/v2/applications/<pk>/tokens/` by providing only
the parent application will be automatically linked.

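For example, a sketch of creating a write-scoped token under an existing application, using the endpoints and fields described above (authentication details are placeholders):

```
curl -k -u <username>:<password> -H "Content-Type: application/json" -X POST \
    -d '{"scope": "write"}' \
    https://<awx>/api/v2/applications/<pk>/tokens/
```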
Individual tokens will be accessible via their primary keys at
`/api/v2/tokens/<pk>/`. Here is a typical token:
```
{
    "id": 4,
@@ -162,7 +162,7 @@ Individual tokens will be accessible via their primary keys at
    "scope": "read"
},
```
For an OAuth2 token, the only fully mutable fields are `scope` and `description`. The `application`
field is *immutable on update*, and all other fields are totally immutable and will be auto-populated
during creation.
* `user` - this field corresponds to the user the token is created for
@@ -181,8 +181,8 @@ endpoint. Personal access tokens can be identified by the `application` field b
- Other normal users will only be able to see and manipulate their own tokens.
> Note: Users can see the token or refresh-token _value_ at the time of creation ONLY.

#### Using OAuth2 Token System for Personal Access Tokens (PAT)
The most common usage of OAuth2 is authenticating users. The `token` field of a token is used
as part of the HTTP authentication header, in the format `Authorization: Bearer <token field value>`. This _Bearer_
token can be obtained by doing a curl to the `/api/o/token/` endpoint. For example:
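The curl command itself falls outside this hunk. As a hedged sketch of the standard OAuth2 password-grant request against `/api/o/token/` (client and user credentials are placeholders; the parameter names follow the Django OAuth Toolkit conventions referenced later in this document):

```
curl -k -X POST \
    -u "<client_id>:<client_secret>" \
    -d "grant_type=password&username=<username>&password=<password>&scope=write" \
    https://<awx>/api/o/token/
```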
@@ -195,17 +195,17 @@ Here is an example of using that PAT to access an API endpoint using `curl`:
```
curl -H "Authorization: Bearer kqHqxfpHGRRBXLNCOXxT5Zt3tpJogn" http://<awx>/api/v2/credentials/
```

According to the OAuth2 specification, users should be able to acquire, revoke and refresh an access
token. In AWX the equivalent, and easiest, way of doing that is creating a token, deleting
a token, and deleting a token quickly followed by creating a new one.

The specification also provides standard ways of doing this. RFC 6749 elaborates
on those topics, but in summary, an OAuth2 token is officially acquired via authorization, using
authorization information provided by applications (the special application fields mentioned above).
There are dedicated endpoints for authorization and acquiring tokens. The `token` endpoint
is also responsible for token refresh, and token revocation can be done by the dedicated token revoke endpoint.

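A hedged sketch of those two flows, assuming the Django OAuth Toolkit default paths (`/api/o/token/` for refresh and `/api/o/revoke_token/` for revocation) and placeholder values:

```
# Refresh an access token (standard grant_type=refresh_token flow)
curl -k -X POST \
    -u "<client_id>:<client_secret>" \
    -d "grant_type=refresh_token&refresh_token=<refresh-token-value>" \
    https://<awx>/api/o/token/

# Revoke an access token
curl -k -X POST \
    -u "<client_id>:<client_secret>" \
    -d "token=<access-token-value>" \
    https://<awx>/api/o/revoke_token/
```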
In AWX, our OAuth2 system is built on top of
[Django OAuth Toolkit](https://django-oauth-toolkit.readthedocs.io/en/latest/), which provides full
support for standard authorization, token revocation and refresh. AWX implements them and puts the related
endpoints under `/api/o/`. Detailed examples of the most typical usage of those endpoints
@@ -215,9 +215,9 @@ are available as description text of `/api/o/`. See below for information on App

#### Token Scope Mask Over RBAC System

The scope of an OAuth2 token is a space-separated string composed of keywords like 'read' and 'write'.
These keywords are configurable and used to specify the permission level of the authenticated API client.
For the initial OAuth2 implementation, we use the simplest scope configuration, where the only
valid scope keywords are 'read' and 'write'.

Read and write scopes provide a mask layer over the RBAC permission system of AWX. Specifically, a
@@ -226,7 +226,7 @@ scope gives the authenticated user only read permissions the RBAC system provide

For example, if a user has admin permission to a job template, he/she can see, modify, launch
and delete the job template if authenticated via session or basic auth. On the other hand, if the user
is authenticated using an OAuth2 token, and the related token scope is 'read', the user can only see but
not manipulate or launch the job template, despite being an admin. If the token scope is
'write' or 'read write', she can take full advantage of the job template as its admin. Note that 'write'
implies 'read' as well.
@@ -234,10 +234,10 @@ implies 'read' as well.

## Application Functions

This page lists OAuth2 utility endpoints used for authorization, token refresh and revocation.
Note endpoints other than `/api/o/authorize/` are not meant to be used in browsers and do not
support HTTP GET. The endpoints here strictly follow the
[RFC specs for OAuth2](https://tools.ietf.org/html/rfc6749), so please use that for detailed
reference. Below are some examples to demonstrate the typical usage of these endpoints in
the AWX context (note that the AWX net location defaults to `http://localhost:8013` in these examples).

@@ -392,10 +392,10 @@ at `/api/v2/tokens/`.

## Acceptance Criteria

* All CRUD operations for OAuth2 applications and tokens should function as described.
* RBAC rules applied to OAuth2 applications and tokens should behave as described.
* A default application should be auto-created for each new user.
* Incoming requests using an unexpired OAuth2 token correctly in the authentication header should be able
to successfully authenticate themselves.
* Token scope mask over RBAC should work as described.
* The Tower configuration setting `OAUTH2_PROVIDER` should be configurable and function as described.
@@ -1,43 +1,42 @@
## Introduction

Before Tower 3.3, an auth token was used as the main authentication method. Starting from Tower 3.3,
session-based authentication will take its place as the main authentication method, and the auth token
will be replaced by OAuth2 tokens.

Session authentication is a safer way of utilizing HTTP(S) cookies. Theoretically, the user can provide authentication information, like username and password, as part of the
`Cookie` header, but this method is vulnerable to cookie hijacks, where crackers can see and steal user
information from the cookie payload.

Session authentication, on the other hand, sets a single `session_id` cookie. The `session_id`
is *a random string which will be mapped to user authentication information by the server*. Crackers who
hijack cookies will only get the `session_id` itself, which does not imply any critical user info, is valid only for
a limited time, and can be revoked at any time.

> Note: The CSRF token will by default allow HTTP. To increase security, the `CSRF_COOKIE_SECURE` setting should
be set to True.


## Usage

In session authentication, users log in using the `/api/login/` endpoint. A GET to `/api/login/` displays the
login page of the API browser:



Users should enter a correct username and password before clicking the 'LOG IN' button, which fires a POST
to `/api/login/` to actually log the user in. The return code of a successful login is 302, meaning upon
successful login, the browser will be redirected; the redirect destination is determined by the `next` form
item described below.

It should be noted that the POST body of `/api/login/` is *not* in JSON, but in HTTP form format. Four items should
be provided in the form (a `curl` sketch follows the list):
* `username`: The username of the user trying to log in.
* `password`: The password of the user trying to log in.
* `next`: The path of the redirect destination; in the API browser, `"/api/"` is used.
* `csrfmiddlewaretoken`: The CSRF token, usually populated by using the Django template tag `{% csrf_token %}`.
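As a hedged illustration of that form POST with `curl` (cookie handling and CSRF extraction are simplified; the hostname and credentials are placeholders):

```
# GET the login page first so that the server sets a csrftoken cookie
curl -k -c cookies.txt https://<tower>/api/login/

# POST the form fields; csrfmiddlewaretoken must match the csrftoken cookie value
curl -k -b cookies.txt -c cookies.txt -X POST https://<tower>/api/login/ \
    --data-urlencode "username=<username>" \
    --data-urlencode "password=<password>" \
    --data-urlencode "next=/api/" \
    --data-urlencode "csrfmiddlewaretoken=<csrftoken value from cookies.txt>"
```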

The `session_id` is provided as a return `Set-Cookie` header. Here is a typical one:
```
Set-Cookie: sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; expires=Tue, 21-Nov-2017 16:33:13 GMT; httponly; Max-Age=1209600; Path=/
```
@@ -47,40 +46,42 @@ session cookie value, expiration date, duration, etc.

The duration of the cookie is configurable by the Tower configuration setting `SESSION_COOKIE_AGE` under
the `authentication` category. It is an integer denoting the number of seconds the session cookie should
live. The default session cookie age is two weeks.

After a valid session is acquired, a client should provide the `session_id` as a cookie for subsequent requests
in order to be authenticated. For example:
```
Cookie: sessionid=lwan8l5ynhrqvps280rg5upp7n3yp6ds; ...
```

Users should use the `/api/logout/` endpoint to log out. In the API browser, a logged-in user can do that by
simply clicking the logout button on the nav bar. Under the hood, the click issues a GET to `/api/logout/`.
Upon success, the server will invalidate the current session, and the response header will indicate that the client should
delete the session cookie. The user should no longer try using this invalid session.
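Continuing the illustrative `curl` session from the login example above, logging out is simply:

```
curl -k -b cookies.txt https://<tower>/api/logout/
```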

The duration of a session is constant. However, a user can extend the expiration date of a valid session
by performing a session acquire with the existing session provided.

A Tower configuration setting, `SESSIONS_PER_USER` under the `authentication` category, is used to set the
maximum number of valid sessions a user can have at the same time. For example, if `SESSIONS_PER_USER`
is set to three and the same user is logged in from five different places, the earliest two sessions created will be invalidated. Tower will try
broadcasting, via websocket, to all available clients. The websocket message body will contain a list of
invalidated sessions. If a client finds its session in that list, it should try logging out.
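Both settings live in the `authentication` settings category. A hedged example of adjusting them through the API, assuming the category is exposed at `/api/v2/settings/authentication/` (that path is an assumption; the values mirror the two-week default and the three-session example above):

```
curl -k -u <admin-username>:<password> -H "Content-Type: application/json" -X PATCH \
    -d '{"SESSION_COOKIE_AGE": 1209600, "SESSIONS_PER_USER": 3}' \
    https://<tower>/api/v2/settings/authentication/
```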

Unlike tokens, sessions are meant to be short-lived and UI-only; therefore, whenever a user's password
is updated, all sessions she owned will be invalidated and deleted.


## Acceptance Criteria
* Users should be able to log in via the `/api/login/` endpoint by correctly providing all necessary fields.
* Logged-in users should be able to authenticate themselves by providing correct session auth info.
* Logged-in users should be able to log out via `/api/logout/`.
* The duration of a session cookie should be configurable by `SESSION_COOKIE_AGE`.
* The maximum number of concurrent logins for one user should be configurable by `SESSIONS_PER_USER`,
and over-limit user sessions should be warned via websocket.
* When a user's password is changed, all her sessions should be invalidated and deleted.
* Users should not be able to authenticate by HTTP(S) request nor websocket connection using invalid
sessions.
* No existing behavior, like job runs, inventory updates or the callback receiver, should be affected
by session auth.
@@ -1,7 +1,7 @@
# TACACS+
[Terminal Access Controller Access-Control System Plus (TACACS+)](https://en.wikipedia.org/wiki/TACACS) is a protocol developed by Cisco to handle remote authentication and related services for networked access control through a centralized server. Specifically, TACACS+ provides authentication, authorization and accounting (AAA) services. Ansible Tower currently utilizes its authentication service.

TACACS+ is configured via Tower configuration and is available under `/api/v2/settings/tacacsplus/`. Here is a typical configuration with every configurable field included:
```
{
    "TACACSPLUS_HOST": "127.0.0.1",
@@ -11,7 +11,7 @@ TACACS+ is configured by Tower configuration and is available under `/api/<versi
    "TACACSPLUS_AUTH_PROTOCOL": "ascii"
}
```
Each field is explained below:

| Field Name                    | Field Value Type    | Field Value Default | Description                                                          |
|-------------------------------|---------------------|---------------------|----------------------------------------------------------------------|
@@ -19,28 +19,31 @@ Below explains each field:
| `TACACSPLUS_PORT`             | Integer             | 49                  | Port number of TACACS+ server.                                       |
| `TACACSPLUS_SECRET`           | String              | '' (empty string)   | Shared secret for authenticating to TACACS+ server.                  |
| `TACACSPLUS_SESSION_TIMEOUT`  | Integer             | 5                   | TACACS+ session timeout value in seconds.                            |
| `TACACSPLUS_AUTH_PROTOCOL`    | String with choices | 'ascii'             | The authentication protocol used by TACACS+ client (choices are `ascii` and `pap`). |

Under the hood, Tower uses the [open-source TACACS+ Python client](https://github.com/ansible/tacacs_plus) to communicate with the remote TACACS+ server. During authentication, Tower passes the username and password to the TACACS+ client, which packs up the auth information and sends it to the TACACS+ server. Based on what the server returns, Tower will invalidate the login attempt if authentication fails. If authentication passes, Tower will create the user if she does not exist in the database, and log the user in.
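For example, a sketch of applying the configuration shown above through that settings endpoint (admin credentials and host are placeholders):

```
curl -k -u <admin-username>:<password> -H "Content-Type: application/json" -X PATCH \
    -d '{"TACACSPLUS_HOST": "127.0.0.1", "TACACSPLUS_PORT": 49, "TACACSPLUS_SECRET": "<shared-secret>", "TACACSPLUS_SESSION_TIMEOUT": 5, "TACACSPLUS_AUTH_PROTOCOL": "ascii"}' \
    https://<tower>/api/v2/settings/tacacsplus/
```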

## Test Environment Setup

The suggested TACACS+ server for testing is the [shrubbery TACACS+ daemon](http://www.shrubbery.net/tac_plus/). It is supposed to run on a CentOS machine. A verified candidate is the CentOS 6.3 AMI in the AWS EC2 Community AMIs (search for `CentOS 6.3 x86_64 HVM - Minimal with cloud-init aws-cfn-bootstrap and ec2-api-tools`). Note that it is required to keep TCP port 49 open, since it's the default port used by the TACACS+ daemon.

We provide [a playbook](https://github.com/jangsutsr/ansible-role-tacacs) to install a working TACACS+ server. Here is a typical test setup using the provided playbook:

1. In AWS EC2, spawn the CentOS 6 machine.
2. In Tower, create a test project using the stand-alone playbook inventory.
3. In Tower, create a test inventory whose only host is the spawned CentOS machine.
4. In Tower, create and run a job template using the created project and inventory, with parameters set up as below:



The playbook creates a user named 'tower' with an ascii password defaulting to 'login' (modifiable by the extra var `ascii_password`) and a pap password defaulting to 'papme' (modifiable by the extra var `pap_password`). In order to configure the TACACS+ server to meet custom test needs, modify the server-side file `/etc/tac_plus.conf` and run `sudo service tac_plus restart` to restart the daemon. Details on how to modify the config file can be found [here](http://manpages.ubuntu.com/manpages/xenial/man5/tac_plus.conf.5.html).


## Acceptance Criteria

* All specified Tower configuration fields should be shown and configurable as documented.
* A user defined by the TACACS+ server should be able to log into Tower.
* A user not defined by the TACACS+ server should not be able to log into Tower via TACACS+.
* A user existing on the TACACS+ server but not in Tower should be created after the first successful login.
* The TACACS+ backend should stop an authentication attempt after the configured timeout and should not block the authentication pipeline in any case.
* If exceptions occur on the TACACS+ server side, the exception details should be logged in Tower, and Tower should not authenticate that user via TACACS+.
1 docs/credentials/README.md (new file)
@@ -0,0 +1 @@
This folder contains documentation related to credentials in AWX / Ansible Tower.
227 docs/credentials/multi_credential_assignment.md (new file)
@@ -0,0 +1,227 @@
Multi-Credential Assignment
===========================

AWX has added support for assigning zero or more credentials to Job Templates and Inventory Updates via a singular, unified interface.

Background
----------

Prior to AWX (Tower 3.2), Job Templates had a certain set of requirements surrounding their relation to Credentials:

* All Job Templates (and Jobs) were required to have exactly *one* Machine/SSH
  or Vault credential (or one of both).
* All Job Templates (and Jobs) could have zero or more "extra" Credentials.
* These extra Credentials represented "Cloud" and "Network" credentials that
  could be used to provide authentication to external services via environment
  variables (*e.g.*, `AWS_ACCESS_KEY_ID`).

This model required a variety of disjoint interfaces for specifying Credentials on a Job Template. For example, to modify assignment of Machine/SSH and Vault credentials, you would change the Credential key itself:

`PATCH /api/v2/job_templates/N/ {'credential': X, 'vault_credential': Y}`

Modifying `extra_credentials` was accomplished on a separate API endpoint via association/disassociation actions:

```
POST /api/v2/job_templates/N/extra_credentials {'associate': true, 'id': Z}
POST /api/v2/job_templates/N/extra_credentials {'disassociate': true, 'id': Z}
```

This model lacked the ability to associate multiple Vault credentials with a playbook run, a use case supported by Ansible core from Ansible 2.4 onwards.

This model also was a stumbling block for certain playbook execution workflows.
For example, some users wanted to run playbooks with `connection:local` that
only interacted with some cloud service via a cloud Credential. In this
scenario, users often generated a "dummy" Machine/SSH Credential to attach to
the Job Template simply to satisfy the requirement on the model.


Important Changes
-----------------

Job Templates now have a single interface for Credential assignment:

`GET /api/v2/job_templates/N/credentials/`

Users can associate and disassociate credentials using `POST` requests to this
interface, similar to the behavior in the now-deprecated `extra_credentials`
endpoint:

```
POST /api/v2/job_templates/N/credentials/ {'associate': true, 'id': X}
POST /api/v2/job_templates/N/credentials/ {'disassociate': true, 'id': Y}
```

Under this model, a Job Template is considered valid even when it has _zero_ Credentials assigned to it.

Launch Time Considerations
--------------------------

Prior to this change, Job Templates had a configurable attribute,
`ask_credential_on_launch`. This value was used at launch time to determine
which missing credential values were necessary for launch - this was primarily
used as a mechanism for users to specify an SSH (or Vault) credential to satisfy
the minimum Credential requirement.

Under the new unified Credential list model, this attribute still exists, but it
is no longer bound to a notion of "requiring" a Credential. Now when
`ask_credential_on_launch` is `True`, it signifies that users may (if they
wish) specify a list of credentials at launch time to override those defined on
the Job Template:

`POST /api/v2/job_templates/N/launch/ {'credentials': [A, B, C]}`

If `ask_credential_on_launch` is `False`, it signifies that custom `credentials`
provided in the payload to `POST /api/v2/job_templates/N/launch/` will be
ignored.

Under this model, the only purpose of `ask_credential_on_launch` is to signal
that API clients should prompt the user for (optional) changes at launch time.

Backwards Compatibility Concerns
--------------------------------
Requests to update `JobTemplate.credential` and `JobTemplate.vault_credential`
will no longer work. Example request format:

`PATCH /api/v2/job_templates/N/ {'credential': X, 'vault_credential': Y}`

This request will have no effect because support for using these
fields has been removed.

The relationship `extra_credentials` is deprecated but still supported for now.
Clients should favor the `credentials` relationship instead.

`GET` requests to `/api/v2/job_templates/N/` and `/api/v2/jobs/N/`
will include this via `related_fields`:

```
{
    "related": {
        ...
        "credentials": "/api/v2/job_templates/5/credentials/",
        "extra_credentials": "/api/v2/job_templates/5/extra_credentials/",
    }
}
```

...and `summary_fields`, which is not included in list views:

```
{
    "summary_fields": {
        "credentials": [
            {
                "description": "",
                "credential_type_id": 5,
                "id": 2,
                "kind": "aws",
                "name": "some-aws"
            },
            {
                "description": "",
                "credential_type_id": 10,
                "id": 4,
                "kind": "gce",
                "name": "some-gce"
            }
        ],
        "extra_credentials": [
            {
                "description": "",
                "credential_type_id": 5,
                "id": 2,
                "kind": "aws",
                "name": "some-aws"
            },
            {
                "description": "",
                "credential_type_id": 10,
                "id": 4,
                "kind": "gce",
                "name": "some-gce"
            }
        ],
    }
}
```

The only difference between `credentials` and `extra_credentials` is that the
latter is filtered to only show "cloud" type credentials, whereas the former
can be used to manage all types of related credentials.

The `/api/v2/job_templates/N/launch/` endpoint no longer provides
backwards compatible support for specifying credentials at launch time
via the `credential` or `vault_credential` fields.
The launch endpoint can still accept a list under the `extra_credentials` key,
but this is deprecated in favor of `credentials`.


Specifying Multiple Vault Credentials
-------------------------------------
One interesting use case supported by the new "zero or more credentials" model
is the ability to assign multiple Vault credentials to a Job Template run.

This specific use case covers Ansible's support for multiple Vault passwords for
a playbook run (since Ansible 2.4):
http://docs.ansible.com/ansible/latest/vault.html#vault-ids-and-multiple-vault-passwords

Vault credentials in AWX now have an optional field, `vault_id`, which is
analogous to the `--vault-id` argument to `ansible-playbook`. To run
a playbook which makes use of multiple Vault passwords:

1. Make a Vault credential in Tower for each Vault password; specify the Vault
   ID as a field on the credential and input the password (which will be
   encrypted and stored).
2. Assign multiple Vault credentials to the job template via the new
   `credentials` endpoint:

```
POST /api/v2/job_templates/N/credentials/

{
    'associate': true,
    'id': X
}
```

3. Launch the Job Template, and `ansible-playbook` will be invoked with
   multiple `--vault-id` arguments (an illustrative invocation is shown below).
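For reference, the resulting command is roughly equivalent to invoking `ansible-playbook` with one `--vault-id` flag per assigned Vault credential; the vault IDs `dev` and `prod` and the playbook name below are illustrative only:

```
ansible-playbook site.yml --vault-id dev@prompt --vault-id prod@prompt
```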
Prompted Vault Credentials
--------------------------
Vault credentials can have passwords that are marked as "Prompt on launch".
When this is the case, the launch endpoint of any related Job Templates will
communicate necessary Vault passwords via the `passwords_needed_to_start` key:

```
GET /api/v2/job_templates/N/launch/
{
    'passwords_needed_to_start': [
        'vault_password.X',
        'vault_password.Y',
    ]
}
```

...where `X` and `Y` are primary keys of the associated Vault credentials.

```
POST /api/v2/job_templates/N/launch/
{
    'credential_passwords': {
        'vault_password.X': 'first-vault-password',
        'vault_password.Y': 'second-vault-password'
    }
}
```

Inventory Source Credentials
----------------------------

Inventory sources and the inventory updates that they spawn also use the same
relationship. The new endpoints for this are:
- `/api/v2/inventory_sources/N/credentials/` and
- `/api/v2/inventory_updates/N/credentials/`

Most cloud sources will continue to adhere to the constraint that they
must have a single credential that corresponds to their cloud type.
However, this relationship allows users to associate multiple vault
credentials of different IDs to inventory sources.
@@ -5,29 +5,29 @@ Django Debug Toolbar (DDT)
----------------
This is a useful tool for examining SQL queries, performance, headers, requests, signals, cache, logging, and more.

To enable DDT, you need to set your `INTERNAL_IPS` to the IP address of your load balancer. This can be overridden in `local_settings`.
This IP address can be found by making a GET to any page on the browsable API and looking for a line like this in the standard output:
```
awx_1 | 14:42:08 uwsgi.1 | 172.18.0.1 GET /api/v2/tokens/ - HTTP/1.1 200
```

Whitelist this IP address by adding it to the `INTERNAL_IPS` variable in `local_settings`, then navigate to the API and you should see DDT on the
right side. If you don't see it, make sure to set `DEBUG=True`.
> Note that enabling DDT is detrimental to the performance of AWX and adds overhead to every API request. It is
recommended to keep this turned off when you are not using it.

SQL Debugging
-------------
AWX includes a powerful tool for tracking slow queries across all of its Python processes.
As the AWX user, run:

```
$ awx-manage profile_sql --threshold 2 --minutes 5
```

...where `threshold` is the max query time in seconds, and `minutes` is the number of minutes to record.
For the next five minutes (in this example), any AWX Python process that generates a SQL query
that runs for >2s will be recorded in a `.sqlite` database in `/var/log/tower/profile`.

This is a useful tool for logging all queries at a per-process level, or filtering and searching for
queries within a certain code branch. For example, if you observed that certain HTTP requests were
@@ -38,7 +38,7 @@ $ sqlite3 -column -header /var/log/tower/profile/uwsgi.sqlite
sqlite> .schema queries
CREATE TABLE queries (
    id INTEGER PRIMARY KEY,
    version TEXT,             # the AWX version
    pid INTEGER,              # the pid of the process
    stamp DATETIME DEFAULT CURRENT_TIMESTAMP,
    argv REAL,                # argv of the process
@@ -90,7 +90,7 @@ a `continue` command.
To simplify remote debugging session management, Tower's development
environment comes with tooling that can automatically discover open
remote debugging sessions and automatically connect to them. From your *host*
machine (*i.e.*, _outside_ of the development container), you can run:

```
sdb-listen
```
@@ -1,4 +1,5 @@
# AWX as an Ansible Fact Cache

AWX can store and retrieve per-host facts via an Ansible Fact Cache Plugin.
This behavior is configurable on a per-job-template basis. When enabled, AWX
will serve fact requests for all Hosts in an Inventory related to the Job
@@ -7,14 +8,14 @@ having access to the entire Inventory of Host facts.

## AWX Fact Cache Implementation Details
### AWX Injection
In order to understand the behavior of AWX as a fact cache, you will need to
understand how fact caching is achieved in AWX. When a Job launches with
`use_fact_cache=True`, AWX will write all `ansible_facts` associated with
each Host in the associated Inventory as JSON files on the local file system
(one JSON file per host). Jobs invoked with `use_fact_cache=False` will not
write `ansible_facts` files.

### Ansible Plugin Usage
When `use_fact_cache=True`, Ansible will be configured to use the `jsonfile`
cache plugin. Any `get()` call to the fact cache interface in Ansible will
result in a JSON file lookup for the host-specific set of facts. Any `set()`
@@ -30,13 +31,13 @@ subsequent playbook runs, AWX will _only_ inject cached facts that are _newer_
than `settings.ANSIBLE_FACT_CACHE_TIMEOUT` seconds.

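For reference, the `jsonfile` cache behavior described above can be reproduced outside AWX with Ansible's standard cache-plugin environment variables; the directory, timeout, and playbook name below are illustrative only:

```
export ANSIBLE_CACHE_PLUGIN=jsonfile
export ANSIBLE_CACHE_PLUGIN_CONNECTION=/tmp/facts_cache   # one JSON file per host is written here
export ANSIBLE_CACHE_PLUGIN_TIMEOUT=86400                 # seconds before a cached fact file is considered stale
ansible-playbook -i inventory site.yml
```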
## AWX Fact Logging
New and changed facts will be logged via AWX's logging facility, specifically
to the `system_tracking` namespace or logger. The logging payload will include
the fields `host_name`, `inventory_id`, and `ansible_facts`, where
`ansible_facts` is a dictionary of all Ansible facts for `host_name` in AWX
Inventory `inventory_id`.

## Integration Testing
* Ensure `clear_facts` sets `hosts/<id>/ansible_facts` to `{}`.
* Ensure that `gather_facts: False` does NOT result in clearing existing facts.
* Ensure that when a host fact timeout is reached, the facts are not used from the cache.
@ -1,15 +1,23 @@
|
|||||||
# Insights Integration
|
# Insights Integration
|
||||||
Insights provides remediation playbooks that Tower executes. Tower also provides a view of Insights discovered misconfigurations and problems via proxying Insights API requests. Credentials to access the Insights API are stored in Tower and can be related to an Inventory in which Insights hosts are presumed to exist. This same Insights Credential can be attached to a Tower Project. The Tower Project will pull the Insights rememdiation plans whenever a Project Update is ran.
|
Insights provides remediation playbooks that Tower executes. Tower also provides a view of Insights-discovered misconfigurations and problems via proxying Insights API requests. Credentials to access the Insights API are stored in Tower and can be related to an Inventory in which Insights hosts are presumed to exist. This same Insights Credential can be attached to a Tower Project. The Tower Project will pull the Insights remediation plans whenever a Project Update runs.
|
||||||
|
|
||||||
|
|
||||||
## Tower Insights Credential
|
## Tower Insights Credential
|
||||||
|
|
||||||
Tower has an Insights Credential to store the username and password to be used when accessing the Insights API. The Insights Credential is a new `CredentialType` in the Tower system. The Insights Credential can be associated with an Insights Project and any non-smart Inventory.
|
Tower has an Insights Credential to store the username and password to be used when accessing the Insights API. The Insights Credential is a new `CredentialType` in the Tower system. The Insights Credential can be associated with an Insights Project and any non-smart Inventory.
|
||||||
|
|
||||||
|
|
||||||
## Tower Recognized Insights Host
|
## Tower Recognized Insights Host
|
||||||
Tower considers a Host an Insights Host when the attribute `insights_system_id` on the Host is set to something other than `null`. The `insights_system_id` is used to identify the host in the Insights system when making proxied requests to the Insights API. The `insights_system_id` is set as a result of the fact scan playbook being ran. Specifically, as a result of the `scan_insights` Ansible module being called. The `scan_insights` module will read the value from the file `/etc/redhat-access-insights/machine-id` on a host. If found, the value will be assigned to the `insights_system_id` for that host. The host would then be considered an Insights host.
|
|
||||||
|
Tower considers a host an Insights Host when the attribute `insights_system_id` on the host is set to something other than `null`. The `insights_system_id` is used to identify the host in the Insights system when making proxied requests to the Insights API. The `insights_system_id` is set as a result of the fact scan playbook that is running (specifically, as a result of the `scan_insights` Ansible module being called). The `scan_insights` module will read the value from the file `/etc/redhat-access-insights/machine-id` on a host. If found, the value will be assigned to the `insights_system_id` for that host. The host would then be considered an Insights host.
|
||||||
|
|
||||||
|
|
||||||
## Tower Insights Proxy
|
## Tower Insights Proxy
|
||||||
Insights data for a Tower recognized Insights Host can be gotten from the insights endpoint hanging off the host detail endpoint. Each time the insights endpoint has a `GET` request issued to it the backend issues a request to the Insights API for the `insights_system_id`. The response is then returned in the same get/response cycle. The response contains Insights details for the host including (1) the current plans, (2) reports, and (3) rules.
|
|
||||||
|
Insights data for a Tower-recognized Insights Host can be acquired from the insights endpoint hanging off of the host detail endpoint. Each time the Insights endpoint has a `GET` request issued to it, the backend issues a request to the Insights API for the `insights_system_id`. The response is then returned in the same get/response cycle. The response contains Insights details for the host including (1) the current plans, (2) reports, and (3) rules.
|
||||||
`/api/v2/hosts/<id>/insights/`
|
`/api/v2/hosts/<id>/insights/`
|
||||||
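For illustration, a hedged example of querying this proxy endpoint with an OAuth2 bearer token (the host ID and token values are placeholders):

```bash
# Fetch the proxied Insights data (plans, reports, rules) for host 42 (placeholder ID).
curl -s -H "Authorization: Bearer <token>" https://<tower>/api/v2/hosts/42/insights/
```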
|
|
||||||
|
|
||||||
## Tower Insights Project
|
## Tower Insights Project
|
||||||
Tower has a Project exclusively for Insights. Projects of type Insights can attach a special Insights Credential. The Insights Credential is used for accessing the Insights API in the Project Update playbook. The `scm_revision` for an Insights Project differs from traditional SCM backed projects. The `scm_revision` is the Insights `ETag` http header value returned when making a plan requests to the Insights API during a Project update run. The `ETag` communications a version derived from the response data. During a Project update, the Project's `scm_revision` will be updated with the new `ETag`. The `ETag` will also be written to disk in the Project directory as `.scm_revision`. The Project update will download the remediation playbooks if the `.scm_revision` does not equal the `ETag`.
|
|
||||||
|
Tower has a Project exclusively for Insights. Projects of type Insights can attach a special Insights Credential. The Insights Credential is used for accessing the Insights API in the Project Update playbook. The `scm_revision` for an Insights Project differs from traditional SCM-backed projects. The `scm_revision` is the Insights `ETag` HTTP header value returned when making a plan request to the Insights API during a Project Update run. The `ETag` communicates a version derived from the response data. During a Project Update, the Project's `scm_revision` will be updated with the new `ETag`. The `ETag` will also be written to disk in the Project directory as `.scm_revision`. The Project Update will download the remediation playbooks if the `.scm_revision` does not equal the `ETag`.
|
||||||
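As a minimal sketch of the result of that behavior (the project ID, token, and the JSON handling shown here are placeholders for illustration, not the Project Update implementation):

```bash
# After an Insights Project Update, the stored ETag is visible as the project's scm_revision.
curl -s -H "Authorization: Bearer <token>" https://<tower>/api/v2/projects/42/ \
  | python -c 'import json, sys; print(json.load(sys.stdin)["scm_revision"])'
```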
|
|||||||
1
docs/inventory/README.md
Normal file
1
docs/inventory/README.md
Normal file
@ -0,0 +1 @@
|
|||||||
|
This folder contains documentation related to inventories in AWX / Ansible Tower.
|
||||||
@ -1,61 +1,70 @@
|
|||||||
# Transition to Ansible Inventory Plugins
|
# Transition to Ansible Inventory Plugins
|
||||||
Inventory updates have changed from using scripts which are vendored as executable Python scripts to using dynamically-generated YAML files which conform to the specifications of the `auto` inventory plugin. These are then parsed by their respective inventory plugin.
|
|
||||||
|
Inventory updates have changed from using scripts, which are vendored as executable Python scripts, to using dynamically-generated YAML files which conform to the specifications of the `auto` inventory plugin. These are then parsed by their respective inventory plugin.
|
||||||
|
|
||||||
The major organizational change is that the inventory plugins are part of the Ansible core distribution, whereas the same logic used to be a part of AWX source.
|
The major organizational change is that the inventory plugins are part of the Ansible core distribution, whereas the same logic used to be a part of AWX source.
|
||||||
|
|
||||||
|
|
||||||
## Prior Background for Transition
|
## Prior Background for Transition
|
||||||
|
|
||||||
AWX used to maintain logic that parsed `.ini` inventory file contents, in addition to interpreting the JSON output of scripts, re-calling with the `--host` option in cases where the `_meta.hostvars` key was not provided.
|
AWX used to maintain logic that parsed `.ini` inventory file contents, in addition to interpreting the JSON output of scripts, re-calling with the `--host` option in cases where the `_meta.hostvars` key was not provided.
|
||||||
|
|
||||||
|
|
||||||
### Switch to Ansible Inventory
|
### Switch to Ansible Inventory
|
||||||
|
|
||||||
The CLI entry point `ansible-inventory` was introduced in Ansible 2.4. In Tower 3.2, inventory imports began running this command as an intermediary between the inventory and the import's logic to save content to database. Using `ansible-inventory` eliminates the need to maintain source-specific logic, relying on Ansible's code instead. This also allows us to count on a consistent data structure outputted from `ansible-inventory`. There are many valid structures that a script can provide, but the output from `ansible-inventory` will always be the same, thus the AWX logic to parse the content is simplified. This is why even scripts must be ran through the `ansible-inventory` CLI.
|
The CLI entry point `ansible-inventory` was introduced in Ansible 2.4. In Tower 3.2, inventory imports began running this command as an intermediary between the inventory and the import's logic to save content to the database. Using `ansible-inventory` eliminates the need to maintain source-specific logic, relying on Ansible's code instead. This also enables consistent data structure output from `ansible-inventory`. There are many valid structures that a script can provide, but the output from `ansible-inventory` will always be the same, thus the AWX logic to parse the content is simplified. This is why even scripts must be run through the `ansible-inventory` CLI.
|
||||||
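For reference, this is roughly the kind of invocation the import relies on; a hedged example against an arbitrary inventory file (the path is a placeholder):

```bash
# Parse any supported inventory source and emit the consistent JSON structure
# that the import logic consumes.
ansible-inventory -i ./inventories/example.yml --list
```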
|
|
||||||
Along with this switchover, a backported version of `ansible-inventory` was provided, which supports Ansible versions 2.2 and 2.3.
|
Along with this switchover, a backported version of `ansible-inventory` was provided, which supports Ansible versions 2.2 and 2.3.
|
||||||
|
|
||||||
|
|
||||||
### Removal of Backport
|
### Removal of Backport
|
||||||
|
|
||||||
In AWX 3.0.0 (and Tower 3.5), the backport of `ansible-inventory` was removed, and support for using custom virtual environments was added. This set the minimum version of Ansible necessary to run _any_ inventory update to 2.4.
|
In AWX 3.0.0 (and Tower 3.5), the backport of `ansible-inventory` was removed, and support for using custom virtual environments was added. This set the minimum version of Ansible necessary to run _any_ inventory update to 2.4.
|
||||||
|
|
||||||
|
|
||||||
## Inventory Plugin Versioning
|
## Inventory Plugin Versioning
|
||||||
|
|
||||||
Beginning in Ansible 2.5, inventory sources in Ansible started migrating away from "contrib" scripts (meaning they lived in the contrib folder) to the inventory plugin model.
|
Beginning in Ansible 2.5, inventory sources in Ansible started migrating away from `contrib` scripts (meaning they lived in the `contrib` folder) to the inventory plugin model.
|
||||||
|
|
||||||
In AWX 4.0.0 (and Tower 3.5) inventory source types start to switchover to plugins, provided that sufficient compatibility is in place for the version of Ansible present in the local virtualenv.
|
In AWX 4.0.0 (and Tower 3.5) inventory source types start to switch over to plugins, provided that sufficient compatibility is in place for the version of Ansible present in the local virtualenv.
|
||||||
|
|
||||||
To see what version the plugin transition will happen, see `awx/main/models/inventory.py` and look for the source name as a subclass of `PluginFileInjector`, and there should be an `initial_version` which is the first version that testing deemed to have sufficient parity in the content its inventory plugin returns. For example, `openstack` will begin using the inventory plugin in Ansible version 2.8. If you run an openstack inventory update with Ansible 2.7.x or lower, it will use the script.
|
To see in which version the plugin transition will happen, see `awx/main/models/inventory.py` and look for the source name as a subclass of `PluginFileInjector`; there should be an `initial_version`, which is the first version that was deemed (via testing) to have sufficient parity in the content its inventory plugin returns. For example, `openstack` will begin using the inventory plugin in Ansible version 2.8. If you run an OpenStack inventory update with Ansible 2.7.x or lower, it will use the script.
|
||||||
|
|
||||||
### Sunsetting the scripts
|
|
||||||
|
|
||||||
The eventual goal is for all source types to have moved to plugins. For any given source, after the `initial_version` for plugin use is higher than the lowest supported Ansible version, the script can be removed and the logic for script credential injection will also be removed.
|
The eventual goal is for all source types to have moved to plugins. For any given source, after the `initial_version` for plugin use is higher than the lowest supported Ansible version, the script can be removed and the logic for script credential injection will also be removed.
|
||||||
|
|
||||||
For example, after AWX no longer supports Ansible 2.7, the script `awx/plugins/openstack_inventory.py` will be removed.
|
For example, after AWX no longer supports Ansible 2.7, the script `awx/plugins/openstack_inventory.py` will be removed.
|
||||||
|
|
||||||
|
|
||||||
## Changes to Expect in Imports
|
## Changes to Expect in Imports
|
||||||
|
|
||||||
An effort was made to keep imports working in the exact same way after the switchover. However, the inventory plugins are a fundamental rewrite and many elements of default behavior have changed. These changes also include many backward-incompatible changes. Because of this, what you get via an inventory import will be a superset of what you get from the script but will not match the default behavior you would get from the inventory plugin on the CLI.
|
An effort was made to keep imports working in the exact same way after the switchover. However, the inventory plugins are a fundamental rewrite and many elements of default behavior have changed. These changes also include many backward-incompatible changes. Because of this, what you get via an inventory import will be a superset of what you get from the script but will not match the default behavior you would get from the inventory plugin on the CLI.
|
||||||
|
|
||||||
Due to the fact that inventory plugins add additional variables, if you downgrade Ansible, you should turn on `overwrite` and `overwrite_vars` to get rid of stale variables (and potentially groups) no longer returned by the import.
|
Due to the fact that inventory plugins add additional variables, if you downgrade Ansible, you should turn on `overwrite` and `overwrite_vars` to get rid of stale variables (and potentially groups) no longer returned by the import.
|
||||||
|
|
||||||
|
|
||||||
### Changes for Compatibility
|
### Changes for Compatibility
|
||||||
|
|
||||||
Programatically-generated examples of inventory file syntax used in updates (with dummy data) can be found in `awx/main/tests/data/inventory/scripts`, these demonstrate the inventory file syntax used to restore old behavior from the inventory scripts.
|
Programmatically generated examples of inventory file syntax used in updates (with dummy data) can be found in `awx/main/tests/data/inventory/scripts`. These demonstrate the inventory file syntax used to restore old behavior from the inventory scripts.
|
||||||
|
|
||||||
|
|
||||||
#### Hostvar Keys and Values
|
#### Hostvar Keys and Values
|
||||||
|
|
||||||
More hostvars will appear if the inventory plugins are used. To maintain backward compatibility, the old names are added back where they have the same meaning as a variable returned by the plugin. New names are not removed.
|
More `hostvars` will appear if the inventory plugins are used. To maintain backward compatibility, the old names are added back where they have the same meaning as a variable returned by the plugin. New names are not removed.
|
||||||
|
|
||||||
|
A small number of `hostvars` will be lost because of general deprecation needs.
|
||||||
|
|
||||||
A small number of hostvars will be lost because of general deprecation needs.
|
|
||||||
|
|
||||||
#### Host Names
|
#### Host Names
|
||||||
|
|
||||||
In many cases, the host names will change. In all cases, accurate host tracking will still be maintained via the host `instance_id`. (after: https://github.com/ansible/awx/pull/3362)
|
In many cases, the host names will change. In all cases, accurate host tracking will still be maintained via the host `instance_id`.
|
||||||
|
|
||||||
|
|
||||||
## Writing Your Own Inventory File
|
## Writing Your Own Inventory File
|
||||||
|
|
||||||
If you do not want any of this compatibility-related functionality, then you can add an SCM inventory source that points to your own file. You can also apply a credential of a `managed_by_tower` type to that inventory source that matches the credential you are using, as long as it is not `gce` or `openstack`.
|
If you do not want any of this compatibility-related functionality, then you can add an SCM inventory source that points to your own file. You can also apply a credential of a `managed_by_tower` type to that inventory source that matches the credential you are using, as long as it is not `gce` or `openstack`.
|
||||||
|
|
||||||
All other sources provide _secrets_ via environment variables. These can be re-used without any problems for SCM-based inventory, and your inventory file can be used securely to specify non-sensitive configuration details such as the `keyed_groups` (to provide) or hostvars (to construct).
|
All other sources provide _secrets_ via environment variables. These can be re-used without any problems for SCM-based inventory, and your inventory file can be used securely to specify non-sensitive configuration details such as the `keyed_groups` (to provide) or `hostvars` (to construct).
|
||||||
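As a sketch of what such a committed, non-sensitive inventory file might look like (assuming the `aws_ec2` inventory plugin; the file path and option values here are illustrative only):

```bash
# Write an example plugin-based inventory file into the project checkout.
# Secrets stay in the credential's environment variables; only non-sensitive
# configuration such as keyed_groups lives in the file.
cat > inventories/aws_ec2.yml <<'EOF'
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  - key: tags
    prefix: tag
EOF
```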
|
|
||||||
|
|
||||||
## Notes on Technical Implementation of Injectors
|
## Notes on Technical Implementation of Injectors
|
||||||
|
|
||||||
@ -72,6 +81,7 @@ The way this data is applied to the environment (including files and environment
|
|||||||
|
|
||||||
With plugins, the inventory file may reference files that contain secrets from the credential. With scripts, typically an environment variable will reference a filename that contains a ConfigParser format file with parameters for the update, and possibly including fields from the credential.
|
With plugins, the inventory file may reference files that contain secrets from the credential. With scripts, typically an environment variable will reference a filename that contains a ConfigParser format file with parameters for the update, and possibly including fields from the credential.
|
||||||
|
|
||||||
|
|
||||||
**Caution:** Please do not put secrets from the credential into the inventory file for the plugin. Right now there appears to be no need to do this, and by using environment variables to specify secrets, this keeps open the possibility of showing the inventory file contents to the user as a latter enhancement.
|
**Caution:** Please do not put secrets from the credential into the inventory file for the plugin. Right now there appears to be no need to do this, and using environment variables to specify secrets keeps open the possibility of showing the inventory file contents to the user as a later enhancement.
|
||||||
|
|
||||||
Logic for setup for inventory updates using both plugins and scripts live in the inventory injector class, specific to the source type.
|
Logic for setting up inventory updates using both plugins and scripts lives in the inventory injector class, specific to the source type.
|
||||||
@ -15,16 +15,16 @@ from `InventorySource` completely in Tower 3.3. As a result the related field on
|
|||||||
Facts generated by an Ansible playbook during a Job Template run are stored by Tower into the database
|
Facts generated by an Ansible playbook during a Job Template run are stored by Tower into the database
|
||||||
whenever `use_fact_cache=True` is set per-Job-Template. New facts are merged with existing
|
whenever `use_fact_cache=True` is set per-Job-Template. New facts are merged with existing
|
||||||
facts and are per-host. These stored facts can be used to filter hosts via the
|
facts and are per-host. These stored facts can be used to filter hosts via the
|
||||||
`/api/v2/hosts` endpoint, using the GET query parameter `host_filter` i.e.
|
`/api/v2/hosts` endpoint, using the GET query parameter `host_filter` *i.e.*,
|
||||||
`/api/v2/hosts?host_filter=ansible_facts__ansible_processor_vcpus=8`
|
`/api/v2/hosts?host_filter=ansible_facts__ansible_processor_vcpus=8`
|
||||||
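A hedged example of such a request with curl (the token is a placeholder; the filter value generally needs URL-encoding):

```bash
# List hosts whose cached fact ansible_processor_vcpus equals 8.
curl -s -H "Authorization: Bearer <token>" \
  "https://<tower>/api/v2/hosts/?host_filter=ansible_facts__ansible_processor_vcpus=8"
```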
|
|
||||||
The grammer of `host_filter` allows for:
|
The grammar of `host_filter` allows for:
|
||||||
* grouping via `()`
|
* grouping via `()`
|
||||||
* the boolean `and` operator
|
* the boolean `and` operator
|
||||||
* `__` to reference related fields in relational fields
|
* `__` to reference related fields in relational fields
|
||||||
* `__` is used on `ansible_facts` to separate keys in a JSON key path
|
* `__` is used on `ansible_facts` to separate keys in a JSON key path
|
||||||
* `[]` is used to denote a json array in the path specification
|
* `[]` is used to denote a JSON array in the path specification
|
||||||
* `""` can be used in the value when spaces are wanted in the value
|
* `""` can be used in the value when spaces are utilized
|
||||||
* "classic" Django queries may be embedded in the `host_filter`
|
* "classic" Django queries may be embedded in the `host_filter`
|
||||||
|
|
||||||
Examples:
|
Examples:
|
||||||
@ -40,16 +40,15 @@ Examples:
|
|||||||
|
|
||||||
## Smart Inventory
|
## Smart Inventory
|
||||||
Starting in Tower 3.2, Tower will support the ability to define a _Smart Inventory_.
|
Starting in Tower 3.2, Tower will support the ability to define a _Smart Inventory_.
|
||||||
You will define the inventories using the same language we currently support
|
Users will define the inventories using the same language that is currently supported
|
||||||
in our _Smart Search_.
|
in _Smart Search_.
|
||||||
|
|
||||||
### Inventory Changes
|
### Inventory Changes
|
||||||
* The `Inventory` model has a new field called `kind`. The default of this field will be blank
|
* The `Inventory` model has a new field called `kind`. The default of this field will be blank
|
||||||
for normal inventories and set to `smart` for smart inventories.
|
for normal inventories and set to `smart` for smart inventories.
|
||||||
|
|
||||||
* `Inventory` model has a new field called `host_filter`. The default of this field will be blank
|
* `Inventory` model has a new field called `host_filter`. The default of this field will be blank
|
||||||
for normal inventories. When `host_filter` is set AND the inventory `kind` is set to `smart`
|
for normal inventories. When `host_filter` is set AND the inventory `kind` is set to `smart`, this combination makes a _Smart Inventory_.
|
||||||
is the combination that makes a _Smart Inventory_.
|
|
||||||
|
|
||||||
* `Host` model has a new field called `smart_inventories`. This field uses the `SmartInventoryMemberships`
|
* `Host` model has a new field called `smart_inventories`. This field uses the `SmartInventoryMemberships`
|
||||||
lookup table to provide a set of all of the _Smart Inventory_ a host is a part of. The memberships
|
lookup table to provide a set of all of the _Smart Inventory_ a host is a part of. The memberships
|
||||||
@ -58,12 +57,12 @@ are generated by the `update_host_smart_inventory_memberships` task. The task is
|
|||||||
* Existing Host is changed (update/delete).
|
* Existing Host is changed (update/delete).
|
||||||
* New Smart Inventory is added.
|
* New Smart Inventory is added.
|
||||||
* Existing Smart Inventory is changed (update/delete).
|
* Existing Smart Inventory is changed (update/delete).
|
||||||
* NOTE: This task is only run if the `AWX_REBUILD_SMART_MEMBERSHIP` is set to True. It defaults to False.
|
* **NOTE:** This task is only run if `AWX_REBUILD_SMART_MEMBERSHIP` is set to `True`. It defaults to `False`.
|
||||||
|
|
||||||
### Smart Filter (host_filter)
|
### Smart Filter (`host_filter`)
|
||||||
The `SmartFilter` class handles our translation of the smart search string. We store the
|
The `SmartFilter` class handles our translation of the smart search string. We store the
|
||||||
filter value in the `host_filter` field for an inventory. This value should be expressed
|
filter value in the `host_filter` field for an inventory. This value should be expressed
|
||||||
the same way we express our existing smart searches.
|
the same way as existing smart searches.
|
||||||
|
|
||||||
host_filter="search=foo"
|
host_filter="search=foo"
|
||||||
host_filter="group__search=bar"
|
host_filter="group__search=bar"
|
||||||
@ -82,12 +81,12 @@ Creating a new _Smart Inventory_ for all of our GCE and EC2 groups might look li
|
|||||||
}
|
}
|
||||||
|
|
||||||
### More On Searching
|
### More On Searching
|
||||||
The `host_filter` you set will search over the entirety of the hosts you have
|
The `host_filter` that is set will search over the entirety of the hosts the user has
|
||||||
access to in Tower. If you want to restrict your search in anyway, you will
|
access to in Tower. If the user wants to restrict their search in any way, they will
|
||||||
want to declare that in your host filter.
|
want to declare that in their host filter.
|
||||||
|
|
||||||
For example, if you want to restrict the search to only hosts in an inventory
|
For example, to restrict the search to only hosts in an inventory
|
||||||
named "US-East", you would create a `host_filter` that looked something like this:
|
named "US-East", create a `host_filter` that looks something like this:
|
||||||
|
|
||||||
{
|
{
|
||||||
"name": "NYC Hosts",
|
"name": "NYC Hosts",
|
||||||
@ -96,12 +95,13 @@ named "US-East", you would create a `host_filter` that looked something like thi
|
|||||||
...
|
...
|
||||||
}
|
}
|
||||||
|
|
||||||
In the above example, you are limiting the search to the "US-East" inventory and
|
In the above example, the search is limited to the "US-East" inventory and
|
||||||
hosts with a name containing "nyc".
|
hosts with a name containing "nyc".
|
||||||
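Putting the pieces together, a hedged example of creating a Smart Inventory through the API (the organization ID, token, and the illustrative `host_filter` value are placeholders; the `kind` and `host_filter` fields are the ones described above):

```bash
# Create a Smart Inventory whose membership is driven entirely by the host_filter.
curl -s -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
  -X POST -d '{"name": "Example Smart Inventory", "organization": 1, "kind": "smart", "host_filter": "search=nyc"}' \
  https://<tower>/api/v2/inventories/
```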
|
|
||||||
|
|
||||||
### Acceptance Critera
|
### Acceptance Criteria
|
||||||
When verifying acceptance we should ensure the following statements are true:
|
|
||||||
|
When verifying acceptance, ensure the following statements are true:
|
||||||
``
|
``
|
||||||
|
|
||||||
* `Inventory` has a new field named `kind` that defaults to empty and
|
* `Inventory` has a new field named `kind` that defaults to empty and
|
||||||
@ -116,23 +116,21 @@ search that is set in the `host_filter`.
|
|||||||
* Not allow creation of Inventory Sources
|
* Not allow creation of Inventory Sources
|
||||||
|
|
||||||
### API Concerns
|
### API Concerns
|
||||||
There are no breaking or backwards incompatible changes for this feature.
|
There are no breaking or backwards-incompatible changes for this feature.
|
||||||
|
|
||||||
|
|
||||||
## Other Changes
|
## Other Changes
|
||||||
|
|
||||||
### Inventory update all inventory_sources
|
### Inventory Updates All `inventory_sources`
|
||||||
A new endpoint `/api/v2/inventories/:id/update_inventory_sources` has been added. This endpoint
|
A new endpoint `/api/v2/inventories/:id/update_inventory_sources` has been added. This endpoint
|
||||||
functions in the same way that `/api/v2/inventory_source/:id/update` functions for a single
|
functions in the same way that `/api/v2/inventory_source/:id/update` functions for a single
|
||||||
`InventorySource` with the exception that it updates all of the inventory sources for the
|
`InventorySource` with the exception that it updates all of the inventory sources for the
|
||||||
`Inventory`.
|
`Inventory`.
|
||||||
|
|
||||||
`HTTP GET /api/v2/inventories/:id/update_inventory_sources` will list all of the inventory
|
`HTTP GET /api/v2/inventories/:id/update_inventory_sources` will list all of the inventory
|
||||||
sources and if they will be updated when a POST to the same endpoint is made. The result of
|
sources and whether or not they will be updated when a POST to the same endpoint is made. The result of
|
||||||
this request will look like this:
|
this request will look like this:
|
||||||
|
|
||||||
> *Note:* All manual inventory sources (source='') will be ignored by the update_inventory_sources endpoint.
|
|
||||||
|
|
||||||
{
|
{
|
||||||
results: [
|
results: [
|
||||||
"inventory_source": 1, "can_update": True,
|
"inventory_source": 1, "can_update": True,
|
||||||
@ -140,7 +138,9 @@ this request will look like this:
|
|||||||
]
|
]
|
||||||
}
|
}
|
||||||
|
|
||||||
When making a POST to the same endpoint, the response will contain a status as well as the job ID for the update.
|
> *Note:* All manual inventory sources (`source=''`) will be ignored by the `update_inventory_sources` endpoint.
|
||||||
|
|
||||||
|
When making a POST to the same endpoint, the response will contain a status as well as the job ID for the update:
|
||||||
|
|
||||||
POST /api/v2/inventories/:id/update_inventory_sources
|
POST /api/v2/inventories/:id/update_inventory_sources
|
||||||
|
|
||||||
@ -152,7 +152,7 @@ When making a POST to the same endpoint, the response will contain a status as w
|
|||||||
}
|
}
|
||||||
|
|
||||||
|
|
||||||
Response code from this action will be:
|
The response code from this action will be:
|
||||||
|
|
||||||
- 200 if all inventory source updates were successful
|
- 200 if all inventory source updates were successful
|
||||||
- 202 if some inventory source updates were successful, but some failed
|
- 202 if some inventory source updates were successful, but some failed
|
||||||
@ -160,8 +160,6 @@ Response code from this action will be:
|
|||||||
- 400 if there are no inventory sources in the inventory
|
- 400 if there are no inventory sources in the inventory
|
||||||
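A hedged curl example of triggering the updates described in this section (the inventory ID and token are placeholders):

```bash
# Kick off updates for every non-manual inventory source in inventory 5.
curl -s -H "Authorization: Bearer <token>" -X POST \
  https://<tower>/api/v2/inventories/5/update_inventory_sources/
```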
|
|
||||||
|
|
||||||
### Background deletion of Inventory
|
### Background Deletion of Inventory
|
||||||
|
|
||||||
If a DELETE request is submitted to an inventory, the field `pending_delete` will be True until a separate task fully completes the task of deleting the inventory and all its contents.
|
If a DELETE request is submitted to an inventory, the field `pending_delete` will be `True` until a separate background task finishes deleting the inventory and all of its contents.
|
||||||
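A hedged illustration of that flow (the inventory ID and token are placeholders):

```bash
# Request deletion, then observe pending_delete on the inventory detail
# until the background task finishes removing it.
curl -s -H "Authorization: Bearer <token>" -X DELETE https://<tower>/api/v2/inventories/5/
curl -s -H "Authorization: Bearer <token>" https://<tower>/api/v2/inventories/5/   # "pending_delete": true
```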
|
|
||||||
### InventorySource Hosts and Groups read-only
|
|
||||||
@ -9,9 +9,9 @@ Fields that should be specified on creation of SCM inventory source:
|
|||||||
|
|
||||||
- `source_project` - project to use
|
- `source_project` - project to use
|
||||||
- `source_path` - relative path inside of the project indicating a
|
- `source_path` - relative path inside of the project indicating a
|
||||||
directory or a file, if left blank, "" is still a relative path
|
directory or a file; if left blank, `""` is still a relative path
|
||||||
indicating the root directory of the project
|
indicating the root directory of the project
|
||||||
- the `source` field should be set to "scm"
|
- the `source` field should be set to `"scm"`
|
||||||
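A hedged example of creating such an inventory source via the API (the IDs, name, path, and token are placeholders, and the `/api/v2/inventory_sources/` endpoint is an assumption; the fields shown are the ones listed above):

```bash
# Create an SCM inventory source reading inventories/hosts.yml from project 7.
curl -s -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
  -X POST -d '{"name": "SCM inventory example", "inventory": 3, "source": "scm", "source_project": 7, "source_path": "inventories/hosts.yml"}' \
  https://<tower>/api/v2/inventory_sources/
```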
|
|
||||||
Additionally:
|
Additionally:
|
||||||
|
|
||||||
@ -30,27 +30,29 @@ in turn, trigger an update of the inventory source.
|
|||||||
Also, with this flag set, an update _of the project_ is
|
Also, with this flag set, an update _of the project_ is
|
||||||
scheduled immediately after creation of the inventory source.
|
scheduled immediately after creation of the inventory source.
|
||||||
Also, if this flag is set, no inventory updates will be triggered
|
Also, if this flag is set, no inventory updates will be triggered
|
||||||
_unless the scm revision of the project changes_.
|
_unless the SCM revision of the project changes_.
|
||||||
|
|
||||||
|
|
||||||
### RBAC
|
### RBAC
|
||||||
|
|
||||||
User needs `use` role to the project in order to use it as a source
|
The user needs the `use` role for the project in order to utilize it as a source
|
||||||
project for inventory (this entails permission to run arbitrary scripts).
|
project for the inventory (this entails permission to run arbitrary scripts).
|
||||||
To update the project, they need `update` permission to the project,
|
To update the project, they need `update` permission to the project,
|
||||||
even if the update is done indirectly.
|
even if the update is done indirectly.
|
||||||
|
|
||||||
|
|
||||||
### Inventory File Suggestions
|
### Inventory File Suggestions
|
||||||
|
|
||||||
The project should show a listing of suggested inventory locations, at the
|
The project should show a listing of suggested inventory locations at the `/projects/N/inventories/` endpoint, but this is not a comprehensive list of
|
||||||
endpoint `/projects/N/inventories/`, but this is not a comprehensive list of
|
|
||||||
all paths that could be used as an Ansible inventory because of the wide
|
all paths that could be used as an Ansible inventory because of the wide
|
||||||
range of inclusion criteria. The list will also max out at 50 entries.
|
range of inclusion criteria. The list will also max out at 50 entries.
|
||||||
The user should be allowed to specify a location manually in the UI.
|
The user should be allowed to specify a location manually in the UI.
|
||||||
This listing should be refreshed to latest SCM info on a project update.
|
This listing should be refreshed to the latest SCM info on a project update.
|
||||||
|
|
||||||
If no inventory sources use a project as an SCM inventory source, then
|
If no inventory sources use a project as an SCM inventory source, then
|
||||||
the inventory listing may not be refreshed on update.
|
the inventory listing may not be refreshed on update.
|
||||||
|
|
||||||
|
|
||||||
### Still-to-come 3.2 Changes
|
### Still-to-come 3.2 Changes
|
||||||
|
|
||||||
As a part of a different feature, it is planned to have all inventory sources
|
As a part of a different feature, it is planned to have all inventory sources
|
||||||
@ -58,28 +60,30 @@ inside of an inventory all update with a single button click. When this
|
|||||||
happens for an inventory containing an SCM inventory source, it should
|
happens for an inventory containing an SCM inventory source, it should
|
||||||
update the project.
|
update the project.
|
||||||
|
|
||||||
|
|
||||||
### Inventory Source Restriction
|
### Inventory Source Restriction
|
||||||
|
|
||||||
Since automatic inventory updates (triggered by a project update) do not
|
Since automatic inventory updates (triggered by a project update) do not
|
||||||
go through the task system, typical protection against conflicting updates
|
go through the task system, typical protection against conflicting updates
|
||||||
is not available. To avoid problems, only 1 inventory source is allowed for
|
is not available. To avoid problems, only one inventory source is allowed for
|
||||||
inventories that use this feature. That means that if an inventory source
|
inventories that use this feature. That means that if an inventory source
|
||||||
has `source=scm` and `update_on_project_update=true`, it can be the only
|
has `source=scm` and `update_on_project_update=true`, it can be the only
|
||||||
inventory source for its inventory.
|
inventory source for its inventory.
|
||||||
|
|
||||||
|
|
||||||
## Supported File Syntax
|
## Supported File Syntax
|
||||||
|
|
||||||
> Any Inventory Ansible supports should be supported by this feature
|
> Any Inventory Ansible supports should be supported by this feature.
|
||||||
|
|
||||||
This is accomplished by making use of the `ansible-inventory` command.
|
This is accomplished by making use of the `ansible-inventory` command.
|
||||||
the inventory import tower-manage command will check for the existence
|
The inventory import `tower-manage` command will check for the existence
|
||||||
of `ansible-inventory` and if it is not present, it will call a backported
|
of `ansible-inventory` and if it is not present, it will call a backported
|
||||||
version of it. The backport is maintained as its own GPL3 licensed
|
version of it. The backport is maintained as its own GPL3 licensed
|
||||||
repository.
|
repository.
|
||||||
|
|
||||||
https://github.com/ansible/ansible-inventory-backport
|
https://github.com/ansible/ansible-inventory-backport
|
||||||
|
|
||||||
Because the internal mechanism is different, we need some coverage
|
Because the internal mechanism is different, there needs to be some coverage
|
||||||
testing with Ansible versions pre-2.4 and after.
|
testing with Ansible versions pre-2.4 and after.
|
||||||
|
|
||||||
### Vars Restrictions
|
### Vars Restrictions
|
||||||
@ -106,14 +110,14 @@ will consistently utilize group-level variables.
|
|||||||
Some test scenarios to look at:
|
Some test scenarios to look at:
|
||||||
- Test projects that use scripts
|
- Test projects that use scripts
|
||||||
- Test projects that have multiple inventory files in a directory,
|
- Test projects that have multiple inventory files in a directory,
|
||||||
group_vars, host_vars, etc.
|
`group_vars`, `host_vars`, etc.
|
||||||
- Test scripts in the project repo
|
- Test scripts in the project repo
|
||||||
- Test scripts that use environment variables provided by a credential
|
- Test scripts that use environment variables provided by a credential
|
||||||
in Tower
|
in Tower
|
||||||
- Test multiple inventories that use the same project, pointing to different
|
- Test multiple inventories that use the same project, pointing to different
|
||||||
files / directories inside of the project
|
files / directories inside of the project
|
||||||
- Feature works correctly even if project doesn't have any playbook files
|
- Feature works correctly even if project doesn't have any playbook files
|
||||||
- File related errors should surface as inventory import failures
|
- File-related errors should surface as inventory import failures
|
||||||
+ missing file
|
+ missing file
|
||||||
+ invalid syntax in file
|
+ invalid syntax in file
|
||||||
- If the project SCM update encounters errors, it should not run the
|
- If the project SCM update encounters errors, it should not run the
|
||||||
@ -125,29 +129,28 @@ The API guide should summarize what is in the use details.
|
|||||||
Once the UI implementation is done, the product docs should cover its
|
Once the UI implementation is done, the product docs should cover its
|
||||||
standard use.
|
standard use.
|
||||||
|
|
||||||
## Update-on-launch
|
## Update-On-Launch
|
||||||
|
|
||||||
If the SCM inventory source is configured to follow the project updates,
|
If the SCM inventory source is configured to follow the project updates,
|
||||||
the `update_on_launch` field can not to be set to True. This is because
|
the `update_on_launch` field cannot be set to `True`. This is because
|
||||||
of concerns related to the task manager job dependency tree.
|
of concerns related to the task manager job dependency tree.
|
||||||
|
|
||||||
We should document the alternatives for a user to accomplish the same thing
|
Below are some alternative methods that allow a user to accomplish the same thing in a different way:
|
||||||
through in a different way.
|
|
||||||
|
|
||||||
### Alternative 1: Use same project for playbook
|
### Alternative 1: Use same project for playbook
|
||||||
|
|
||||||
You can make a job template that uses a project as well as an inventory
|
A user can make a job template that uses a project as well as an inventory
|
||||||
that updates from that same project. In this case, you can set the project
|
that updates from that same project. In this case, they can set the project
|
||||||
to `update_on_launch`, in which case it will trigger an inventory update
|
to `update_on_launch`, in which case it will trigger an inventory update
|
||||||
if needed.
|
if needed.
|
||||||
|
|
||||||
### Alternative 2: Use the project in a workflow
|
### Alternative 2: Use the project in a workflow
|
||||||
|
|
||||||
If you must use a different project for the playbook than for the inventory
|
If a user must utilize a different project for the playbook than for the inventory
|
||||||
source, then you can still place the project in a workflow and then have
|
source, then they can still place the project in a workflow and then have
|
||||||
a job template run on success of the project update.
|
a job template run on success of the project update.
|
||||||
|
|
||||||
This is guaranteed to have the inventory update "on time" (by this we mean
|
This is guaranteed to have the inventory update "on time" (this means
|
||||||
that the inventory changes are complete before the job template is launched),
|
that the inventory changes are complete before the job template is launched),
|
||||||
because the project does not transition to the completed state
|
because the project does not transition to the completed state
|
||||||
until the inventory update is finished.
|
until the inventory update is finished.
|
||||||
@ -168,4 +171,3 @@ that contains the `source_project` of the inventory source.
|
|||||||
If the inventory source is not configured to update on project update,
|
If the inventory source is not configured to update on project update,
|
||||||
then it will inherit the allowed instance groups from its inventory,
|
then it will inherit the allowed instance groups from its inventory,
|
||||||
like all other inventory syncs.
|
like all other inventory syncs.
|
||||||
|
|
||||||
@ -1,22 +1,22 @@
|
|||||||
## Job Branch Override
|
## Job Branch Override
|
||||||
|
|
||||||
Background: Projects specify the branch, tag, or reference to use from source control
|
_Background:_ Projects specify the branch, tag, or reference to use from source control
|
||||||
in the `scm_branch` field.
|
in the `scm_branch` field.
|
||||||
|
|
||||||
This "Branch Override" feature allows project admins to delegate branch selection to
|
This "Branch Override" feature allows project admins to delegate branch selection to
|
||||||
admins of job templates that use that project (requiring only project
|
admins of Job Templates that use that project (requiring only project
|
||||||
`use_role`). Admins of job templates can further
|
`use_role`). Admins of Job Templates can further
|
||||||
delegate that ability to users executing the job template
|
delegate that ability to users executing the Job Template
|
||||||
(requiring only job template `execute_role`) by enabling
|
(requiring only Job Template `execute_role`) by enabling
|
||||||
`ask_scm_branch_on_launch` on the job template.
|
`ask_scm_branch_on_launch` on the Job Template.
|
||||||
|
|
||||||
### Source Tree Copy Behavior
|
### Source Tree Copy Behavior
|
||||||
|
|
||||||
Background: Every job run has its own private data directory.
|
_Background:_ Every job run has its own private data directory.
|
||||||
This folder is temporary, cleaned up at the end of the job run.
|
This folder is temporary, cleaned up at the end of the job run.
|
||||||
|
|
||||||
This directory contains a copy of the project source tree for the given
|
This directory contains a copy of the project source tree for the given
|
||||||
`scm_branch` the job is running.
|
`scm_branch` while the job is running.
|
||||||
|
|
||||||
A new shallow copy is made for every job run.
|
A new shallow copy is made for every job run.
|
||||||
Jobs are free to make changes to the project folder and make use of those
|
Jobs are free to make changes to the project folder and make use of those
|
||||||
@ -26,24 +26,25 @@ changes while it is still running.
|
|||||||
|
|
||||||
With the introduction of this feature, the function of `scm_clean` is watered
|
With the introduction of this feature, the function of `scm_clean` is watered
|
||||||
down. It will still be possible to enable this function, and it will be
|
down. It will still be possible to enable this function, and it will be
|
||||||
passed through as a parameter to the playbook as a tool for trouble shooting.
|
passed through as a parameter to the playbook as a tool for troubleshooting.
|
||||||
Two notable cases that lose support are documented here.
|
Two notable cases that lose support are documented below:
|
||||||
|
|
||||||
1) Setting `scm_clean` to `true` will no longer persist changes between job runs.
|
1) Setting `scm_clean` to `true` will no longer persist changes between job runs.
|
||||||
|
|
||||||
That means that jobs that rely on content which is not committed to source
|
This means that jobs that rely on content which is not committed to source
|
||||||
control may fail now.
|
control may fail now.
|
||||||
|
|
||||||
2) Because it is a shallow copy, this folder will not contain the full
|
2) Because it is a shallow copy, this folder will not contain the full
|
||||||
git history for git project types.
|
git history for git project types.
|
||||||
|
|
||||||
|
|
||||||
### Project Revision Concerns
|
### Project Revision Concerns
|
||||||
|
|
||||||
Background of how normal project updates work:
|
_Background:_
|
||||||
The revision of the default branch (specified as `scm_branch` of the project)
|
The revision of the default branch (specified as `scm_branch` of the project)
|
||||||
is stored when updated, and jobs using that project will employ this revision.
|
is stored when updated, and jobs using that project will employ this revision.
|
||||||
|
|
||||||
Providing a non-default `scm_branch` in a job comes with some restrictions
|
Providing a non-default `scm_branch` in a job comes with some restrictions,
|
||||||
which are unlike the normal update behavior.
|
which are unlike the normal update behavior.
|
||||||
If `scm_branch` is a branch identifier (not a commit hash or tag), then
|
If `scm_branch` is a branch identifier (not a commit hash or tag), then
|
||||||
the newest revision is pulled from the source control remote immediately
|
the newest revision is pulled from the source control remote immediately
|
||||||
@ -60,8 +61,9 @@ project default branch.
|
|||||||
The `scm_branch` field is not validated, so the project must update
|
The `scm_branch` field is not validated, so the project must update
|
||||||
to assure it is valid.
|
to assure it is valid.
|
||||||
If `scm_branch` is provided or prompted for, the `playbook` field of
|
If `scm_branch` is provided or prompted for, the `playbook` field of
|
||||||
job templates will not be validated, and users will have to launch
|
Job Templates will not be validated, and users will have to launch
|
||||||
the job template in order to verify presence of the expected playbook.
|
the Job Template in order to verify presence of the expected playbook.
|
||||||
|
|
||||||
|
|
||||||
### Git Refspec
|
### Git Refspec
|
||||||
|
|
||||||
@ -99,7 +101,7 @@ no matter what is used for `scm_refspec`.
|
|||||||
|
|
||||||
The `scm_refspec` will affect which `scm_branch` fields can be used as overrides.
|
The `scm_refspec` will affect which `scm_branch` fields can be used as overrides.
|
||||||
For example, you could set up a project that allows branch override with the
|
For example, you could set up a project that allows branch override with the
|
||||||
1st or 2nd refspec example, then use this in a job template
|
first or second refspec example, then use this in a Job Template
|
||||||
that prompts for `scm_branch`, then a client could launch the job template when
|
that prompts for `scm_branch`, then a client could launch the Job Template when
|
||||||
a new pull request is created, providing the branch `pull/N/head`,
|
a new pull request is created, providing the branch `pull/N/head`,
|
||||||
then the job template would run against the provided github pull request reference.
|
then the Job Template would run against the provided GitHub pull request reference.
|
||||||
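A hedged sketch of that launch request (the Job Template ID, pull request number, and token are placeholders, and the standard launch endpoint is assumed; it relies on `ask_scm_branch_on_launch` being enabled as described above):

```bash
# Launch the Job Template against a pull request reference provided at launch time.
curl -s -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
  -X POST -d '{"scm_branch": "pull/123/head"}' \
  https://<tower>/api/v2/job_templates/42/launch/
```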
|
|||||||
@ -1,16 +1,22 @@
|
|||||||
## Ansible Callback and Job Events
|
## Ansible Callback and Job Events
|
||||||
There is no concept of a job event in Ansible. Job Events are json structures, created when Ansible calls the Tower callback plugin hooks (i.e. `v2_playbook_on_task_start`, `v2_runner_on_ok`, etc.). The Job Event data structures contain data from the parameters of the callback hooks plus unique ids that reference other Job Events. There is usually a 1-1 relationship between a Job Event and an Ansible callback plugin function call.
|
|
||||||
|
There is no concept of a job event in Ansible. Job Events are JSON structures, created when Ansible calls the Tower callback plugin hooks (*i.e.*, `v2_playbook_on_task_start`, `v2_runner_on_ok`, etc.). The Job Event data structures contain data from the parameters of the callback hooks plus unique IDs that reference other Job Events. There is usually a one-to-one relationship between a Job Event and an Ansible callback plugin function call.
|
||||||
|
|
||||||
|
|
||||||
## Job Event Relationships
|
## Job Event Relationships
|
||||||
The Job Event relationship is strictly hierarchical. In the example details below, each Job Event bullet is related to the previous Job Event to form a hierarchy.
|
|
||||||
|
|
||||||
* There is always 1 and only 1 `v2_playbook_on_start` event and it is the first event.
|
The Job Event relationship is strictly hierarchical. In the example details below, each Job Event bullet is related to the previous Job Event to form a hierarchy:
|
||||||
* `v2_playbook_on_play_start` is generated once per-play in the playbook. 2 such events would be generated from the playbook example below.
|
|
||||||
* The `v2_playbook_on_task_start` function is called once for each task under the default execution strategy. Other execution strategies (i.e. free or serial) can result in the `v2_playbook_on_task_start` function being multiple times, one for each host. Tower only creates a Job Event for the **first** `v2_playbook_on_task_start` call. Subsequent calls for the same task do **not** result in Job Events being created.
|
* There is always one and only one `v2_playbook_on_start` event and it is the first event.
|
||||||
* `v2_runner_on_[ok, failed, skipped, unreachable, retry, item_on_ok, item_on_failed, item_on_skipped]` One `v2_runner_on_...` Job Event will be created for each `v2_playbook_on_task_start` event.
|
* `v2_playbook_on_play_start` is generated once per-play in the playbook; two such events would be generated from the playbook example below.
|
||||||
|
* The `v2_playbook_on_task_start` function is called once for each task under the default execution strategy. Other execution strategies (*e.g.*, free or serial) can result in the `v2_playbook_on_task_start` function being called multiple times, once for each host. Tower only creates a Job Event for the **first** `v2_playbook_on_task_start` call. Subsequent calls for the same task do **not** result in Job Events being created.
|
||||||
|
* `v2_runner_on_[ok, failed, skipped, unreachable, retry, item_on_ok, item_on_failed, item_on_skipped]`; one `v2_runner_on_...` Job Event will be created for each `v2_playbook_on_task_start` event.
|
||||||
|
|
||||||
|
|
||||||
## Example
|
## Example
|
||||||
Below is an example inventory and playbook outline along with an example of Job Events generated and their hierarchical relationship.
|
|
||||||
|
Below is an example inventory and playbook outline, along with the Job Events generated and their hierarchical relationship:
|
||||||
|
|
||||||
```
|
```
|
||||||
# inventory
|
# inventory
|
||||||
[tower]
|
[tower]
|
||||||
@ -45,7 +51,8 @@ hostC
|
|||||||
when: inventory_hostname == 'C'
|
when: inventory_hostname == 'C'
|
||||||
```
|
```
|
||||||
|
|
||||||
Below is a visualization of how Job Events are related to form a hierarchy given a run of the playbook above.
|
Below is a visualization of how Job Events are related to form a hierarchy given a run of the playbook above:
|
||||||
|
|
||||||
```
|
```
|
||||||
`-- playbook_on_start
|
`-- playbook_on_start
|
||||||
|-- playbook_on_play_start-preflight
|
|-- playbook_on_play_start-preflight
|
||||||
@ -69,8 +76,12 @@ Below is a visualization of how Job Events are related to form a hierarchy given
|
|||||||
`-- runner_on_ok_hostC
|
`-- runner_on_ok_hostC
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
## Job Event Creation Patterns
|
## Job Event Creation Patterns
|
||||||
Ansible execution strategy heavily influences the creation order of Job Events. The above examples of Job Events creation an hierarchy are also the order in which they are created when the Ansible default execution strategy is used. When other strategies like free and serial are used, the order in which Job Events are created is slightly different. Let's take the previous example playbook and Job Events and show the order in which the Job Events may be created when the free strategy is used. Notice how `runner_on_*` Job Events can be created **after** a `playbook_on_task_start` for the next task runs. This is not the case for the default Ansible execution strategy. Under the default Ansible execution strategy, all `runner_on_*` Job Events will be created before the next `playbook_on_task_start` is generated.
|
|
||||||
|
The Ansible execution strategy heavily influences the creation order of Job Events. The above examples of Job Events creation and hierarchy are also the order in which they are created when the Ansible default execution strategy is used. When other strategies like `free` and `serial` are used, the order in which Job Events are created is slightly different.
|
||||||
|
|
||||||
|
Let's take the previous example playbook and Job Events and show the order in which the Job Events may be created when the free strategy is used. Notice how `runner_on_*` Job Events can be created **after** a `playbook_on_task_start` for the next task runs. This is not the case for the default Ansible execution strategy. Under the default Ansible execution strategy, all `runner_on_*` Job Events will be created before the next `playbook_on_task_start` is generated:
|
||||||
|
|
||||||
```
|
```
|
||||||
playbook_on_start
|
playbook_on_start
|
||||||
@ -95,9 +106,13 @@ playbook_on_play_start-install
|
|||||||
runner_on_ok_hostA (install_tower)
|
runner_on_ok_hostA (install_tower)
|
||||||
```
|
```
|
||||||
|
|
||||||
|
|
||||||
## Testing
|
## Testing
|
||||||
|
|
||||||
A management command for event replay exists for replaying jobs at varying speeds and other parameters. Run `awx-manage replay_job_events --help` for additional usage information. To prepare the UI for event replay, load the page for a finished job and then append `_debug` as a parameter to the url.
|
A management command for event replay exists for replaying jobs at varying speeds and with other parameters. Run `awx-manage replay_job_events --help` for additional usage information. To prepare the UI for event replay, load the page for a finished job and then append `_debug` as a parameter to the URL.
|
||||||
|
|
||||||
|
|
||||||
## Code References
|
## Code References
|
||||||
* More comprehensive list of Job Events and the hierarchy they form https://github.com/ansible/awx/blob/devel/awx/main/models/jobs.py#L870
|
|
||||||
* Exhaustive list of Job Events in Tower https://github.com/ansible/awx/blob/devel/awx/main/models/jobs.py#L900
|
* For a more comprehensive list of Job Events and the hierarchy they form, go here: https://github.com/ansible/awx/blob/devel/awx/main/models/jobs.py#L870
|
||||||
|
* Exhaustive list of Job Events in Tower: https://github.com/ansible/awx/blob/devel/awx/main/models/jobs.py#L900
|
||||||
|
|||||||
@ -1,13 +1,15 @@
|
|||||||
# Job Slicing Overview
|
# Job Slicing Overview
|
||||||
|
|
||||||
Ansible, by default, runs jobs from a single control instance. At best a single Ansible job can be sliced up on a single system via forks but this doesn't fully take advantage of AWX's ability to distribute work to multiple nodes in a cluster.
|
Ansible, by default, runs jobs from a single control instance. At best, a single Ansible job can be sliced up on a single system via forks, but this doesn't fully take advantage of AWX's ability to distribute work to multiple nodes in a cluster.
|
||||||
|
|
||||||
Job Slicing solves this by adding a Job Template field `job_slice_count`. This field specifies the number of **Jobs** to slice the Ansible run into. When this number is greater than 1 ``AWX`` will generate a **Workflow** from a **JobTemplate** instead of a **Job**. The **Inventory** will be distributed evenly amongst the slice jobs. The workflow job is then started and proceeds as though it were a normal workflow. The API will return either a **Job** resource (if `job_slice_count` < 2) or a **WorkflowJob** resource otherwise. Likewise, the UI will redirect to the appropriate screen to display the status of the run.
|
Job Slicing solves this problem by adding a Job Template field `job_slice_count`. This field specifies the number of **Jobs** to slice the Ansible run into. When this number is greater than one, `AWX` will generate a **Workflow** from a **Job Template** instead of a **Job**. The **Inventory** will be distributed evenly amongst the sliced jobs. The workflow job is then started and proceeds as though it were a normal workflow. The API will return either a **Job** resource (if `job_slice_count` < 2) or a **WorkflowJob** resource otherwise. Likewise, the UI will redirect to the appropriate screen to display the status of the run.
|
||||||
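A hedged example of turning an existing Job Template into a sliced run (the Job Template ID, slice count, and token are placeholders):

```bash
# Slice the next run of Job Template 42 into 3 jobs; launching it then returns
# a WorkflowJob instead of a Job.
curl -s -H "Authorization: Bearer <token>" -H "Content-Type: application/json" \
  -X PATCH -d '{"job_slice_count": 3}' https://<tower>/api/v2/job_templates/42/
```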
|
|
||||||
## Implications for Job execution
|
|
||||||
|
|
||||||
When jobs are sliced they can run on any Tower node and some may not run at the same time. Because of this, anything that relies on setting/sliced state (using modules such as ``set_fact``) will not work as expected. It's reasonable to expect that not all jobs will actually run at the same time (if there is not enough capacity in the system for example)
|
## Implications for Job Execution
|
||||||
|
|
||||||
|
When jobs are sliced, they can run on any Tower node; however, some may not run at the same time. Because of this, anything that relies on setting/sliced state (using modules such as `set_fact`) will not work as expected. It's reasonable to expect that not all jobs will actually run at the same time (*e.g.*, if there is not enough capacity in the system).
|
||||||
|
|
||||||
|
|
||||||
## Simultaneous Execution Behavior
|
## Simultaneous Execution Behavior
|
||||||
|
|
||||||
By default Job Templates aren't normally configured to execute simultaneously (``allow_simultaneous`` must be checked). Slicing overrides this behavior and implies ``allow_simultaneous`` even if that setting is unchecked.
|
By default, Job Templates aren't normally configured to execute simultaneously (`allow_simultaneous` must be checked). Slicing overrides this behavior and implies `allow_simultaneous`, even if that setting is not selected.
|
||||||
|
|||||||
@ -1,35 +1,32 @@
|
|||||||
# Integration with Third-Party Log Aggregators
|
# Integration With Third-Party Log Aggregators
|
||||||
|
|
||||||
This feature builds in the capability to send detailed logs to several kinds
|
This feature builds in the capability to send detailed logs to several kinds
|
||||||
of 3rd party external log aggregation services. Services connected to this
|
of third-party external log aggregation services. Services connected to this
|
||||||
data feed should be useful in order to gain insights into Tower usage
|
data feed should be useful in order to gain insights into Tower usage
|
||||||
or technical trends. The data is intended to be
|
or technical trends. The data is intended to be
|
||||||
sent in JSON format via three ways: over a HTTP connection, a direct TCP
|
sent in JSON format in one of three ways: over an HTTP connection, a direct TCP
|
||||||
connection or a direct UDP connection. It uses minimal service-specific
|
connection, or a direct UDP connection. It uses minimal service-specific
|
||||||
tweaks engineered in a custom handler or via an imported library.
|
tweaks engineered in a custom handler or via an imported library.
|
||||||
|
|
||||||
|
|
||||||
## Loggers
|
## Loggers
|
||||||
|
|
||||||
This features introduces several new loggers which are intended to
|
This feature introduces several new loggers which are intended to
|
||||||
deliver a large amount of information in a predictable structured format,
|
deliver a large amount of information in a predictable and structured format,
|
||||||
following the same structure as one would expect if obtaining the data
|
following the same structure as one would expect if obtaining the data
|
||||||
from the API. These data loggers are the following.
|
from the API. These data loggers are the following:
|
||||||
|
|
||||||
- awx.analytics.job_events
|
- `awx.analytics.job_events` - Data returned from the Ansible callback module
|
||||||
- Data returned from the Ansible callback module
|
- `awx.analytics.activity_stream` - Record of changes to the objects within the Ansible Tower app
|
||||||
- awx.analytics.activity_stream
|
- `awx.analytics.system_tracking` - Data gathered by Ansible scan modules ran by scan job templates
|
||||||
- Record of changes to the objects within the Ansible Tower app
|
|
||||||
- awx.analytics.system_tracking
|
|
||||||
- Data gathered by Ansible scan modules ran by scan job templates
|
|
||||||
|
|
||||||
These loggers only use log-level of INFO.
|
These loggers only use log-level of `INFO`. Additionally, the standard Tower logs are deliverable through this
|
||||||
|
same mechanism. It should be obvious to the user how to enable or disable
|
||||||
Additionally, the standard Tower logs are be deliverable through this
|
each of these five sources of data without manipulating a complex dictionary
|
||||||
same mechanism. It should be obvious to the user how to enable to disable
|
in their local settings file, as well as adjust the log level consumed
|
||||||
each of these 5 sources of data without manipulating a complex dictionary
|
|
||||||
in their local settings file, as well as adjust the log-level consumed
|
|
||||||
from the standard Tower logs.
|
from the standard Tower logs.
|
||||||
|
|
||||||
|
|
||||||
## Supported Services
|
## Supported Services
|
||||||
|
|
||||||
Committed to support:
|
Committed to support:
|
||||||
@ -45,18 +42,19 @@ Have tested:
Considered, but have not tested:

- Datadog
- Red Hat Common Logging via Logstash connector


### Elastic Search Instructions

In the development environment, the server can be started up with the
log aggregation services attached via the Makefile targets. This starts
up the three associated services of Logstash, Elastic Search, and Kibana
as their own separate containers.

In addition to running these services, it establishes connections to the
`tower_tools` containers as needed. This is derived from the [`docker-elk`
project](https://github.com/deviantony/docker-elk):

```bash
# Start a single server with links
@ -65,12 +63,12 @@ make docker-compose-elk
make docker-compose-cluster-elk
```

For more instructions on getting started with the environment that this example spins
up, also refer to the instructions in [`/tools/elastic/README.md`](https://github.com/ansible/awx/blob/devel/tools/elastic/README.md).

If you were to start from scratch, standing up your own version of the Elastic
Stack, then the only change you should need is to add the following lines
to the Logstash `logstash.conf` file:

```
filter {
@ -80,72 +78,77 @@ filter {
}
```


#### Debugging and Pitfalls

Backward-incompatible changes were introduced with Elastic 5.0.0, and
customers may need different configurations depending on what
versions they are using.


# Log Message Schema

Common schema for all loggers:

| Field | Information |
|-------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `cluster_host_id` | (string) Unique identifier of the host within the Tower cluster |
| `level` | (choice of `DEBUG`, `INFO`, `WARNING`, `ERROR`, etc.) Standard Python log level, roughly reflecting the significance of the event; all of the data loggers (as a part of this feature) use the `INFO` level, but the other Tower logs will use different levels as appropriate |
| `logger_name` | (string) Name of the logger we use in the settings, *e.g.*, `awx.analytics.activity_stream` |
| `@timestamp` | (datetime) Time of the log |
| `path` | (string) File path in the code where the log was generated |
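
For orientation, here is a hedged example of a single record following this common schema, written as a Python dictionary (the field names come from the table above; all values are invented for illustration):

```python
# Illustrative record only -- field names from the common schema, made-up values.
example_record = {
    "cluster_host_id": "towerhost.example.com",
    "level": "INFO",
    "logger_name": "awx.analytics.activity_stream",
    "@timestamp": "2018-10-04T18:13:11.961Z",
    "path": "awx/api/generics.py",
}
```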

## Activity Stream Schema

| Field | Information |
|-------------------|-------------------------------------------------------------------------------------------------------------------------|
| (common) | This uses all the fields common to all loggers listed above |
| actor | (string) Username of the user who took the action documented in the log |
| changes | (string) Summary of the fields changed and their old and new values |
| operation | (choice of several options) The basic category of the change logged in the Activity Stream, for instance, "associate" |
| object1 | (string) Information about the primary object being operated on, consistent with what we show in the Activity Stream |
| object2 | (string) If applicable, the second object involved in the action |


## Job Event Schema

This logger echoes the data being saved into Job Events, except when they
would otherwise conflict with expected standard fields from the logger (in which case the fields are named differently).
Notably, the field `host` on the `job_event` model is given as `event_host`.
There is also a sub-dictionary field `event_data` within the payload,
which will contain different fields depending on the specifics of the
Ansible event.

This logger also includes the common fields.


## Scan / Fact / System Tracking Data Schema

These contain a detailed dictionary-type field for either services,
packages, or files.

| Field | Information |
|--------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| (common) | This uses all the fields common to all loggers listed above |
| services | (dict, optional) For services scans, this field is included and has keys based on the name of the service. **NOTE:** Periods are disallowed in names by Elastic Search and are replaced with `"_"` by our log formatter |
| packages | (dict, optional) Included for log messages from package scans |
| files | (dict, optional) Included for log messages from file scans |
| host | (str) Name of the host that the scan applies to |
| inventory_id | (int) ID of the Inventory that the host is inside of |


## Job Status Changes

This is intended to be a lower-volume source of information about
changes in job states compared to Job Events, and is also intended to
capture changes to types of Unified Jobs other than Job Template-based
jobs.

In addition to the common fields, these logs include fields present on
the job model.


## Tower Logs

In addition to the common fields, this will contain a `msg` field with
@ -153,6 +156,7 @@ the log message. Errors contain a separate `traceback` field.
These logs can be enabled or disabled in CTiT (Configure Tower in Tower) by adding them to, or removing them from,
the setting `LOG_AGGREGATOR_LOGGERS`.
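
As a hedged sketch, the relevant settings entries might look something like this (the logger names are taken from the list earlier in this document; the exact names and defaults on your cluster may differ):

```python
# Hypothetical local settings snippet for selecting which loggers are sent.
LOG_AGGREGATOR_ENABLED = True
LOG_AGGREGATOR_LOGGERS = [
    "awx",               # the standard Tower logs
    "activity_stream",   # awx.analytics.activity_stream
    "job_events",        # awx.analytics.job_events
    "system_tracking",   # awx.analytics.system_tracking
]
```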


# Configuring Inside of Tower

Parameters needed in order to configure the connection to the log
@ -167,8 +171,8 @@ supported services:
- A flag to indicate how system tracking records will be sent
- Selecting which loggers to send
- Enabling sending logs
- Connection type (HTTPS, TCP, or UDP)
- Timeout value if the connection type is based on the TCP protocol (HTTPS and TCP)

Some settings for the log handler will not be exposed to the user via
this mechanism. For example, threading (enabled).
@ -186,28 +190,29 @@ connection, Port field is supposed to be provided and Host field is supposed to
contain the hostname only. If instead a URL is entered in the Host field, its hostname
portion will be extracted as the actual hostname.


# Acceptance Criteria Notes

**Connection:** Testers need to replicate the documented steps for setting up
and connecting with a destination log aggregation service, if that is
an officially supported service. That will involve 1) configuring the
settings, as documented, 2) taking some action in Tower that causes a log
message from each type of data logger to be sent, and 3) verifying that
the content is present in the log aggregation service.

**Schema:** After the connection steps are completed, a tester will need to create
an index. We need to confirm that no errors are thrown in this process.
It also needs to be confirmed that the schema is consistent with the
documentation. In the case of Splunk, we need basic confirmation that
the data is compatible with the existing app schema.

**Tower logs:** Formatting of the traceback message is a known issue in several
open-source log handlers, so we should confirm that server errors result
in the log aggregator receiving a well-formatted multi-line string
with the traceback message.

Log messages should be sent outside of the
request-response cycle. For example, Loggly examples use
`requests_futures.sessions.FuturesSession`, which does some
threading work to fire the message without interfering with other
operations. A timeout on the part of the log aggregation service should
@ -1,8 +1,11 @@
Starting from API V2, the Named URL feature lets users access Tower resources via resource-specific, human-readable identifiers. Previously, the only way of accessing a resource object without an auxiliary query string was via its primary key number (*e.g.*, via the URL path `/api/v2/hosts/2/`). Now users can use a named URL to do the same thing, for example, via the URL path `/api/v2/hosts/host_name++inv_name++org_name/`.


## Usage

There are two named-URL-related Tower configuration settings available under `/api/v2/settings/named-url/`: `NAMED_URL_FORMATS` and `NAMED_URL_GRAPH_NODES`.

`NAMED_URL_FORMATS` is a *read-only* key-value pair list of all available named URL identifier formats. A typical `NAMED_URL_FORMATS` looks like this:
```
"NAMED_URL_FORMATS": {
    "job_templates": "<name>",
@ -27,49 +30,64 @@ There are two named-URL-related Tower configuration setting available under `/ap
```

For each item in `NAMED_URL_FORMATS`, the key is the API name of a resource that can have a named URL, and the value is a string indicating how to form a human-readable unique identifier for that resource. A typical procedure for composing the named URL for a specific resource object using `NAMED_URL_FORMATS` is given below.

Suppose that a user wants to manually determine the named URL for a label with `id` `5`. She should first look up the `labels` field of `NAMED_URL_FORMATS` and get the identifier format `<name>++<organization.name>`. The first part of the format is `<name>`, which indicates that she should get the label resource detail, `/api/v2/labels/5/`, and look for the `name` field in the returned JSON.

Suppose the label has a `name` field with value `'Foo'`; then the first part of our unique identifier is `Foo`. The second part of the format is the double plus `++`. That is the delimiter that separates different parts of a unique identifier, so simply append it to get `Foo++`.

The third part of the format is `<organization.name>`, which indicates that the field is not in the current label object under investigation, but in an organization which the label object points to. Thus, as the format indicates, the user should look up `organization` in the `related` field of the current returned JSON. That field may or may not exist; if it exists, follow the URL given in that field, say `/api/v2/organizations/3/`, to get the detail of the specific organization, extract its `name` field (*e.g.*, `'Default'`), and append it to the current unique identifier. Since `<organization.name>` is the last part of the format, we end up with the full unique identifier for the underlying label and have our named URL ready: `/api/v2/labels/Foo++Default/`.

In the case where `organization` does not exist in the `related` field of the label object detail, we append the empty string `''` instead, which essentially does not alter the current identifier. So `Foo++` becomes the final unique identifier, and the generated named URL is `/api/v2/labels/Foo++/`.
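
The same walk-through can be expressed as a short script. This is a hedged sketch only (the host and token are placeholders, error handling is omitted, and the reserved-character escaping described below is not applied):

```python
# Compose the named URL for label 5 by following its NAMED_URL_FORMATS format,
# <name>++<organization.name>, exactly as described above.
import requests

AWX_HOST = "https://awx.example.com"                    # placeholder
HEADERS = {"Authorization": "Bearer <token-value>"}     # placeholder token

label = requests.get(f"{AWX_HOST}/api/v2/labels/5/", headers=HEADERS).json()
identifier = label["name"] + "++"                       # <name> plus the ++ delimiter

org_url = label.get("related", {}).get("organization")  # may be absent
if org_url:
    org = requests.get(f"{AWX_HOST}{org_url}", headers=HEADERS).json()
    identifier += org["name"]                           # <organization.name>

print(f"{AWX_HOST}/api/v2/labels/{identifier}/")        # e.g. .../labels/Foo++Default/
```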

An important aspect of generating unique identifiers for named URLs is dealing with reserved characters. Because the identifier is part of a URL, the following characters reserved by the URL standard should be escaped to their percent encodings: `;/?:@=&[]`. For example, if an organization is named `;/?:@=&[]`, its unique identifier should be `%3B%2F%3F%3A%40%3D%26%5B%5D`. Another special character is `+`, which is not reserved by the URL standard but is used by named URLs to link different parts of an identifier. It is escaped as `[+]`. For example, if an organization is named `[+]`, its unique identifier is `%5B[+]%5D`, where the original `[` and `]` are percent-encoded and `+` is converted to `[+]`.
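
A hedged sketch of that escaping rule (the helper name is invented for illustration; AWX's real implementation lives in the named URL code itself):

```python
# Escape one part of a named URL identifier: percent-encode the URL-reserved
# characters ;/?:@=&[] and replace + with [+], as described above.
RESERVED_CHARS = ";/?:@=&[]"

def escape_identifier_part(text: str) -> str:
    escaped = []
    for ch in text:
        if ch in RESERVED_CHARS:
            escaped.append("%{:02X}".format(ord(ch)))
        elif ch == "+":
            escaped.append("[+]")
        else:
            escaped.append(ch)
    return "".join(escaped)

assert escape_identifier_part(";/?:@=&[]") == "%3B%2F%3F%3A%40%3D%26%5B%5D"
assert escape_identifier_part("[+]") == "%5B[+]%5D"
```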

`NAMED_URL_FORMATS` exclusively lists every resource that can have a named URL; any resource not listed there has no named URL. `NAMED_URL_FORMATS` alone should be instructive enough for users to compose human-readable unique identifiers and named URLs themselves. For convenience, every object of a resource that can have a named URL will have a related field `named_url` that displays that object's named URL. Users can simply copy-paste that field for their custom usages. Also, users are expected to see indications in the help text of the API browser if a resource object has a named URL.

Although `NAMED_URL_FORMATS` is immutable on the user side, it will be automatically modified and expanded over time, reflecting underlying resource modification and expansion. Please consult `NAMED_URL_FORMATS` on the same Tower cluster against which you want to use the named URL feature.

`NAMED_URL_GRAPH_NODES` is another *read-only* list of key-value pairs that exposes the internal graph data structure that Tower uses to manage named URLs. This is not supposed to be human-readable but should be used for programmatically generating named URLs. An example script for generating a named URL given the primary key of an arbitrary resource object that can have a named URL (using info provided by `NAMED_URL_GRAPH_NODES`) can be found at `/tools/scripts/pk_to_named_url.py`.


## Identifier Format Protocol

Resources in Tower are identifiable by their unique keys, which are basically tuples of resource fields. Every Tower resource is guaranteed to have its primary key number alone as a unique key, but there might be multiple other unique keys.

A resource can generate identifier formats, and thus have a named URL, if it contains at least one unique key that satisfies the rules below:

1. The key *contains and only contains* fields that are either the `name` field, or text fields with a finite number of possible choices (like the credential type resource's `kind` field).
2. The only allowed exception to the first rule is a many-to-one related field relating to a resource *other than self* which is also allowed to have a slug.

Here is an example for understanding the rules: Suppose Tower has resources `Foo` and `Bar`; both `Foo` and `Bar` contain a `name` field and a `choice` field that can only have the value `'yes'` or `'no'`. Additionally, resource `Foo` contains a many-to-one field (a foreign key) relating to `Bar`, say `fk`. `Foo` has a unique key tuple `(name, choice, fk)` and `Bar` has a unique key tuple `(name, choice)`. Apparently `Bar` can have a named URL because it satisfies rule 1. On the other hand, `Foo` can also have a named URL, because although `Foo` breaks rule 1, the extra field breaking rule 1 is the `fk` field, which is many-to-one-related to `Bar`, and `Bar` can have a named URL.

For resources satisfying rule 1 above, their human-readable unique identifiers are combinations of the unique key's fields, delimited by `+`. Specifically, resource `Bar` above will have the slug format `<name>+<choice>`. Note that the field order matters in the slug format: the `name` field always comes first if present, followed by all the rest of the fields arranged in lexicographic order of field name. For example, if `Bar` also has an `a_choice` field satisfying rule 1 and the unique key becomes `(name, choice, a_choice)`, its slug format becomes `<name>+<a_choice>+<choice>`.

For resources satisfying rule 2 above instead, if we trace back via the extra foreign key fields, we end up getting a tree of resources that altogether identify objects of that resource. In order to generate the identifier format, each resource in the traceback tree generates its own standalone format in the way described in the last paragraph, using all fields but the foreign keys. Finally, all parts are combined by `++` in the following order:

* Put the standalone format as the first identifier component.
* Recursively generate unique identifiers for each resource the underlying resource points to via a foreign key (a child of a traceback tree node).
* Treat the generated unique identifiers as the remaining identifier components. Sort them in lexicographic order of the corresponding foreign key.
* Combine all components together using `++` to generate the final identifier format.

Back to the example above, when generating the identifier format for resource `Foo`, we first generate the standalone formats, `<name>+<choice>` for `Foo` and `<fk.name>+<fk.choice>` for `Bar`, then combine them together to get `<name>+<choice>++<fk.name>+<fk.choice>`.

When generating identifiers according to the given identifier format, there are cases where a foreign key might point nowhere. In this case, we substitute the part of the format corresponding to the resource the foreign key should point to with an empty string `''`. For example, if a `Foo` object has `name` set to `'alice'`, `choice` set to `'yes'`, but its `fk` field is `None`, its identifier will look like `alice+yes++`.


## Implementation Overview

Module `awx.main.utils.named_url_graph` stands at the core of the named URL implementation. It exposes a single public function, `generate_graph`. `generate_graph` accepts a list of Tower models that might have named URLs (meaning they have corresponding endpoints under `/api/v2/`), filters out those that are unable to have named URLs, and connects the rest together into a named URL graph. The graph is available as a settings option, `NAMED_URL_GRAPH`, and each node of it contains all the info needed to generate named URL identifier formats and parse incoming named URL identifiers.

`generate_graph` will run only once for each Tower WSGI process. This is guaranteed by putting the function call inside `__init__` of `URLModificationMiddleware`. When an incoming request enters `URLModificationMiddleware`, the part of its URL path that could contain a valid named URL identifier is extracted and processed to find the (possible) corresponding resource object. The internal process is basically crawling against part of the named URL graph. If the object is found, the identifier part of the URL path is converted to the object's primary key. From then on, Tower can treat the request as one with an old-style URL.


## Acceptance Criteria

In general, acceptance should follow what's in the "Usage" section. The contents of the "Identifier Format Protocol" section should not be relevant.

* The classical way of getting objects via primary keys should behave the same.
* The Tower configuration part of named URLs should work as described. In particular, `NAMED_URL_FORMATS` should be immutable on the user side and display accurate named URL identifier format info.
* `NAMED_URL_FORMATS` should be exclusive, meaning resources specified in `NAMED_URL_FORMATS` should have named URLs, and resources not specified there should *not* have named URLs.
* If a resource can have a named URL, its objects should have a `named_url` field which represents the object-specific named URL. That field should only be visible under the detail view, not the list view.
* A user following the rules specified in `NAMED_URL_FORMATS` should be able to generate a named URL exactly the same as the `named_url` field.
* A user should be able to access specified resource objects via an accurately-generated named URL. This includes not only the object itself but also its related URLs; for example, if `/api/v2/res_name/obj_slug/` is valid, then `/api/v2/res_name/obj_slug/related_res_name/` should also be valid.
* A user should not be able to access specified resource objects if the given named URL is inaccurate: for example, reserved characters not correctly escaped, or a component whose corresponding foreign key points nowhere not replaced by an empty string.
* A user should be able to dynamically generate named URLs by utilizing `NAMED_URL_GRAPH_NODES`.
@ -9,7 +9,8 @@ A Notification Template is an instance of a notification type (Email, Slack, Web
At a high level, the typical notification task flow is:

* User creates a `NotificationTemplate` at `/api/v2/notification_templates/`.
* User assigns the notification to any of the various objects that support it (all variants of Job Templates as well as organizations and projects) and at the appropriate trigger level for which they want the notification (error, success, or start). For example, a user may wish to assign a particular Notification Template to trigger when `Job Template 1` fails.


## Templated notification messages
@ -53,19 +54,21 @@ Notification templates assigned at certain levels will inherit notifications def
* Inventory Updates will use notifications defined on the Organization that the Inventory is in.
* Ad hoc commands will use notifications defined on the Organization with which that Inventory is associated.


## Workflow

When a job starts, succeeds, or fails, the running, success, or error handler, respectively, will pull a list of relevant notifications using the procedure defined above. It then creates a Notification object for each one containing relevant details about the job and then **sends** it to the destination (email addresses, Slack channel(s), SMS numbers, etc.). These Notification objects are available as related resources on job types (Jobs, Inventory Updates, Project Updates), and also at `/api/v2/notifications`. You may also see which notifications have been sent by examining a job's related resources.

Notifications can succeed or fail, but that will _not_ cause the associated job to succeed or fail. The status of a notification can be viewed at its detail endpoint: `/api/v2/notifications/<n>`.


## Testing Notifications Before Using Them

Once a Notification Template has been created, its configuration can be tested by utilizing the endpoint at `/api/v2/notification_templates/<n>/test`. This will emit a test notification given the configuration defined by the notification template. These test notifications will also appear in the notifications list at `/api/v2/notifications`.
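
For example, a hedged sketch of triggering such a test through the API (the host, token, and template ID are placeholders):

```python
# Illustrative only: emit a test notification for Notification Template 42.
import requests

resp = requests.post(
    "https://awx.example.com/api/v2/notification_templates/42/test/",
    headers={"Authorization": "Bearer <token-value>"},
)
print(resp.status_code)  # the resulting notification also appears under /api/v2/notifications/
```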

# Notification Types

The currently-defined Notification Types are:

* Email
* Slack
@ -97,7 +100,7 @@ The following should be performed for good acceptance:
Set up a local SMTP mail service. Some options are listed below:

* Postfix service on Galaxy: https://galaxy.ansible.com/debops/postfix/
* Mailtrap has a good free plan that should provide all of the features necessary: https://mailtrap.io/
* Another option is to use a Docker container: `docker run --network="tools_default" -p 25:25 -e maildomain=mail.example.com -e smtp_user=user:pwd --name postfix -d catatnight/postfix`
@ -214,17 +217,17 @@ There are a few modern IRC servers to choose from. [InspIRCd](http://www.inspirc

## Webhook

The webhook notification type in Ansible Tower provides a simple interface for sending `POST`s to a predefined web service. Tower will `POST` to this address using the `application/json` content type, with the data payload containing all relevant details in JSON format.

The parameters are fairly straightforward:

* `url`: The full URL that will be `POST`ed to
* `headers`: Headers in JSON form where the keys and values are strings. For example: `{"Authentication": "988881adc9fc3655077dc2d4d757d480b5ea0e11", "MessageType": "Test"}`
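
As an illustration, here is a hedged sketch of supplying these parameters when creating a webhook Notification Template via the API (the host, token, organization ID, and destination URL are placeholders; check the browsable API on your own installation for the exact fields your version expects):

```python
# Illustrative only: create a webhook-type Notification Template.
import requests

resp = requests.post(
    "https://awx.example.com/api/v2/notification_templates/",
    headers={"Authorization": "Bearer <token-value>"},
    json={
        "name": "Example webhook notifier",
        "organization": 1,                   # placeholder organization ID
        "notification_type": "webhook",
        "notification_configuration": {
            "url": "https://hooks.example.com/awx",  # where Tower will POST
            "headers": {"MessageType": "Test"},       # extra headers, as described above
        },
    },
)
print(resp.status_code)
```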

### Test Considerations

* Test both HTTP and HTTPS services; also, specifically test HTTPS with a self-signed certificate.
* Verify that the headers and payload are present, that the payload is JSON, and that the content type is specifically `application/json`.


### Test Service
@ -240,7 +243,7 @@ Note that this won't respond correctly to the notification, so it will yield an

https://gist.github.com/matburt/73bfbf85c2443f39d272

The link below shows how to define an endpoint and parse headers and JSON content. It doesn't show how to configure Flask for HTTPS, but that is fairly straightforward:

http://flask.pocoo.org/snippets/111/
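
In the same spirit as that snippet, here is a minimal, hedged sketch of a local receiver for inspecting webhook `POST`s during testing (plain HTTP only; the route and port are arbitrary):

```python
# Minimal webhook receiver for manual testing: print the headers and JSON payload.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    print(dict(request.headers))   # should include any custom headers configured on the notification
    print(request.get_json())      # body should arrive as application/json
    return jsonify({"ok": True}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8085)
```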

You can also link an `httpbin` service to the development environment for testing webhooks using:
@ -1,62 +1,77 @@
# AWX

AWX provides a web interface and distributed task engine for scheduling and
running Ansible playbooks. As such, it relies heavily on the interfaces
provided by Ansible. This document provides a birds-eye view of the notable
touchpoints between AWX and Ansible.


## Terminology

AWX has a variety of concepts which map to components of Ansible, or
which further abstract them to provide functionality on top of Ansible. A few
of the most notable ones are:


### Projects

Projects represent a collection of Ansible playbooks. Most AWX users create
Projects that import periodically from source control systems (such as git,
mercurial, or subversion repositories). This import is accomplished via an
Ansible playbook included with AWX (which makes use of the various source
control management modules in Ansible).


### Inventories

AWX manages Inventories, Groups, and Hosts, and provides a RESTful interface
that maps to static and dynamic Ansible inventories. Inventory data can
be entered into AWX manually, but many users perform Inventory Syncs to import
inventory data from a variety of external sources.


### Job Templates

A Job Template is a definition and set of parameters for running
`ansible-playbook`. It defines metadata about a given playbook run, such as:

* a named identifier
* an associated inventory to run against
* the project and `.yml` playbook to run
* a variety of other options which map directly to `ansible-playbook`
  arguments (`extra_vars`, verbosity, forks, limit, etc.)


### Credentials

AWX stores sensitive credential data which can be attached to `ansible-playbook`
processes that it runs. This data can be oriented towards SSH connection
authentication (usernames, passwords, SSH keys and passphrases),
Ansible-specific prompts (such as Vault passwords), or environmental
authentication values which various Ansible modules depend on (such as setting
`AWS_ACCESS_KEY_ID` in an environment variable, or specifying
`ansible_ssh_user` as an extra variable).


## Canonical Example

Bringing all of this terminology together, a "Getting Started Using AWX" exercise might
involve:

* Creating a new Project that imports playbooks from, for example, a remote git repository
* Manually creating or importing an Inventory which defines where the playbook(s) will run
* Optionally, saving a Credential which contains SSH authentication details for
  the host(s) where the playbook will run
* Creating a Job Template that specifies which Project and playbook to run and
  where to run it (Inventory), and any necessary Credentials (*e.g.*, SSH
  authentication)
* Launching the Job Template and viewing the results


## AWX's Interaction with Ansible

The touchpoints between AWX and Ansible are mostly encompassed by
everything that happens *after* a job is started in AWX. Specifically, this
includes:

* Any time a Job Template is launched
@ -64,49 +79,57 @@ includes:
* Any time an Inventory Sync is performed
* Any time an Ad Hoc Command is run


### Spawning Ansible Processes

AWX relies on a handful of stable interfaces in its interaction with Ansible.
The first of these are the actual CLIs for `ansible-playbook` and
`ansible-inventory`.

When a Job Template or Project Update is run in AWX, an actual
`ansible-playbook` command is composed and spawned in a pseudoterminal on one
of the servers/containers that make up the AWX installation. This process runs
until completion (or until a configurable timeout), and the return code,
`stdout`, and `stderr` of the process are recorded in the AWX database. Ad hoc
commands work the same way, though they spawn `ansible` processes instead of
`ansible-playbook`.
Similarly, when an Inventory Sync runs, an actual `ansible-inventory` process
|
Similarly, when an Inventory Sync runs, an actual `ansible-inventory` process
|
||||||
runs, and its output is parsed and persisted into the awx database as Hosts and
|
runs, and its output is parsed and persisted into the AWX database as Hosts and
|
||||||
Groups.
|
Groups.
|
||||||
|
|
||||||
awx relies on stability in CLI behavior to function properly across Ansible
|
AWX relies on stability in CLI behavior to function properly across Ansible
|
||||||
releases; this includes the actual CLI arguments _and_ the behavior of task
|
releases; this includes the actual CLI arguments _and_ the behavior of task
|
||||||
execution and prompts (such as password, become, and Vault prompts).
|
execution and prompts (such as password, `become`, and Vault prompts).
|
||||||
|
|
||||||
|
|
||||||
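To make the pseudoterminal mechanics concrete, here is a minimal sketch (not AWX's actual implementation) of spawning `ansible-playbook` under a pty with the `pexpect` library and capturing the pieces of data described above; the inventory and playbook paths are placeholders.

```python
import pexpect

# Spawn ansible-playbook in a pseudoterminal, roughly as AWX does for job runs.
child = pexpect.spawn(
    "ansible-playbook",
    ["-i", "hosts.ini", "site.yml"],   # placeholder inventory and playbook
    encoding="utf-8",
    timeout=3600,                      # analogous to a configurable job timeout
)

# Read the combined terminal output until the process exits.
output = child.read()
child.close()

# These are the values persisted for the job: its output and return code.
print("return code:", child.exitstatus)
print(output)
```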
### Capturing Event Data

AWX applies an Ansible callback plugin to all `ansible-playbook` and `ansible`
processes it spawns. This allows Ansible events to be captured and persisted
into the AWX database; this process is what drives the "streaming" web UI
you'll see if you launch a job from the AWX web interface and watch its results
appear on the screen. AWX relies on stability in this plugin interface, the
hierarchy of emitted events based on strategy, and _especially_ the structure
of event data to work across Ansible releases:



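For illustration, here is a minimal sketch of a callback plugin written against Ansible's public callback interface; it only prints a couple of event types, whereas AWX's real plugin serializes every event and forwards it for persistence.

```python
# callback_plugins/minimal_events.py - a toy example, not AWX's plugin
from ansible.plugins.callback import CallbackBase


class CallbackModule(CallbackBase):
    """Capture a few Ansible events and print them as structured data."""

    CALLBACK_VERSION = 2.0
    CALLBACK_TYPE = "notification"
    CALLBACK_NAME = "minimal_events"

    def v2_playbook_on_task_start(self, task, is_conditional):
        print({"event": "task_start", "task": task.get_name()})

    def v2_runner_on_ok(self, result):
        print({
            "event": "runner_on_ok",
            "host": result._host.get_name(),
            "result": result._result,
        })

    def v2_runner_on_failed(self, result, ignore_errors=False):
        print({"event": "runner_on_failed", "host": result._host.get_name()})
```

In a real deployment the plugin also has to be enabled through Ansible's callback configuration so that it is loaded for every spawned process.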
### Fact Caching

AWX provides a custom fact caching implementation that allows users to store
facts for playbook runs across subsequent Job Template runs. Specifically, AWX
makes use of the `jsonfile` fact cache plugin; after `ansible-playbook` runs
have exited, AWX consumes the entire `jsonfile` cache and persists it in the
AWX database. On subsequent Job Template runs, prior `jsonfile` caches are
restored to the local file system so the new `ansible-playbook` process makes
use of them.

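As a rough sketch of the "consume the cache" step: the `jsonfile` plugin writes one JSON document per host into the directory given by `fact_caching_connection`, so harvesting it after a run is essentially reading those files back. The directory path below is a placeholder.

```python
import json
import os

CACHE_DIR = "/tmp/facts_cache"   # placeholder for fact_caching_connection


def harvest_fact_cache(cache_dir):
    """Read every per-host JSON file the jsonfile cache plugin wrote."""
    facts = {}
    for hostname in os.listdir(cache_dir):
        path = os.path.join(cache_dir, hostname)
        with open(path) as f:
            facts[hostname] = json.load(f)
    return facts


# After an ansible-playbook run with fact caching enabled, this dictionary
# is the per-host data that would be persisted to the database.
all_facts = harvest_fact_cache(CACHE_DIR)
print(list(all_facts))
```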
### Environment-Based Configuration

AWX injects credentials and module configuration for a number of Ansible
modules via environment variables. Examples include:

* `ANSIBLE_NET_*` and other well-known environment variables for network device authentication
* Cloud provider authentication variables (`AWS_ACCESS_KEY_ID`, `GCE_EMAIL`, etc...)
* SSH-oriented configuration flags, such as `ANSIBLE_SSH_CONTROL_PATH`

AWX relies on stability in these configuration options to reliably support
credential injection for supported Ansible modules.

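A minimal sketch of what "injection via environment variables" amounts to: build an environment dictionary containing the credential values and hand it to the spawned Ansible process. The variable values below are obviously placeholders.

```python
import os
import subprocess


def run_playbook_with_env(playbook, extra_env):
    """Spawn ansible-playbook with credential values injected as env vars."""
    env = os.environ.copy()
    env.update(extra_env)
    return subprocess.run(["ansible-playbook", playbook], env=env).returncode


# Placeholder credential material; AWX derives these from stored Credentials.
rc = run_playbook_with_env("site.yml", {
    "AWS_ACCESS_KEY_ID": "AKIA...",
    "AWS_SECRET_ACCESS_KEY": "not-a-real-secret",
    "ANSIBLE_SSH_CONTROL_PATH": "/tmp/cp/%h-%p-%r",
})
print("return code:", rc)
```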
## Process Isolation Overview

In older versions of Ansible Tower, we used a system called `proot` to isolate Tower job processes from the rest of the system.

Tower 3.1 and later switched to using `bubblewrap`, which is a much lighter-weight and better-maintained process isolation system.

Tower 3.5 and later uses the process isolation feature in Ansible Runner to achieve process isolation.


### Activating Process Isolation

`bubblewrap` is enabled by default; it can be turned off via Tower Config or from a Tower settings file:

    AWX_PROOT_ENABLED = False

Process isolation, when enabled, will be used for the following Job Types:

* Job Templates - Launching jobs from regular job templates
* Ad-hoc Commands - Launching ad-hoc commands against one or more hosts in inventory


### Tunables

Process Isolation will, by default, hide the following directories from the tasks mentioned above:

* `/etc/tower` - To prevent exposing Tower configuration
* `/var/lib/awx` - With the exception of the current project being used (for regular job templates)
* `/var/log`
* `/tmp` (or whatever the system temp directory is) - With the exception of the process's own temp files

If there is other information on the system that is sensitive and should be hidden, it can be added via the Tower Configuration Screen
or by updating the following entry in a Tower settings file:

    AWX_PROOT_HIDE_PATHS = ['/list/of/', '/paths']

If there are any directories that should specifically be exposed, that can be set via:

    AWX_PROOT_SHOW_PATHS = ['/list/of/', '/paths']

By default, the system will use the system's temp directory (`/tmp` by default) as its staging area. This can be changed via the following setting:

    AWX_PROOT_BASE_PATH = "/opt/tmp"

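Putting the tunables above together, a local settings override might look like the following sketch; the paths shown are examples only, and the exact settings file location depends on the installation.

```python
# Example local settings override read by the Tower/AWX settings machinery;
# every value here is illustrative only.
AWX_PROOT_ENABLED = True

# Additional sensitive locations to hide from isolated job processes.
AWX_PROOT_HIDE_PATHS = ['/etc/ssl/private', '/opt/secrets']

# Locations that jobs legitimately need to read despite isolation.
AWX_PROOT_SHOW_PATHS = ['/opt/shared-roles']

# Staging area used while building the isolated environment.
AWX_PROOT_BASE_PATH = '/opt/tmp'
```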
### Project Folder Isolation

In AWX versions newer than 6.0.0, the project folder is copied for each job run.

# Prometheus Container

## Development

AWX comes with an example Prometheus container and `make` target. To use it:

1. Edit `tools/prometheus/prometheus.yml` and update the `basic_auth` section
   to specify a valid user/password for an AWX user you've created.
   Alternatively, you can provide an OAuth2 token (which can be generated at
   `/api/v2/users/N/personal_tokens/`).

   > Note: By default, the config assumes a user with username=admin and password=password.

2. Start the Prometheus container:
   `make prometheus`
3. The Prometheus UI will now be accessible at `http://localhost:9090/graph`.

There should be no extra setup needed. You can try executing this query in the
UI to get back the number of active sessions: `awx_sessions_total`

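If you prefer to check the metric from a script rather than the graph UI, a quick sketch against Prometheus's standard HTTP query API looks like this, assuming the container from `make prometheus` is listening on localhost:9090.

```python
import requests

# Query the Prometheus HTTP API for the AWX session metric.
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "awx_sessions_total"},
)
resp.raise_for_status()

for sample in resp.json()["data"]["result"]:
    # Each result carries the metric labels and the latest [timestamp, value].
    print(sample["metric"], sample["value"])
```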
## Launch-time Configurations / Prompting

Admins of templates in AWX have the option to allow fields to be overwritten
by user-provided values at the time of launch. The job that runs will
then use the launch-time values in lieu of the template values.

Fields that can be prompted for, and the corresponding `"ask_"` variables
(which exist on the template and must be set to `true` to enable prompting),
are the following:

##### Standard Pattern With Character Fields

- `ask_<variable>_on_launch` allows use of `<variable>`

The standard pattern applies to the following fields:

- `job_type`
- `skip_tags`
- `verbosity`
- `scm_branch`

##### Non-Standard Cases

- `ask_variables_on_launch` allows unrestricted use of `extra_vars`
- `ask_tags_on_launch` allows use of `job_tags`
- Enabled survey allows restricted use of `extra_vars`, only for variables in the survey (with qualifiers)
- `ask_credential_on_launch` allows use of `credentials`
- `ask_inventory_on_launch` allows use of `inventory`

Surveys are a special case of prompting for variables - applying a survey to
a template whitelists variable names in the survey spec (requires the survey
spec to exist and `survey_enabled` to be true). On the other hand,
if `ask_variables_on_launch` is true, users can provide any variables in
`extra_vars`.

Prompting enablement for all types of credentials is controlled by `ask_credential_on_launch`.
Clients can manually provide a list of credentials of any type, but only one of _each_ type, in
`credentials` on a POST to the launch endpoint.
If the job is being spawned by a saved launch configuration (such as a schedule),
credentials are managed by the many-to-many relationship `credentials` relative
to that saved configuration. The credentials in this relationship will either add to the job template's
credential list, or replace a credential in the job template's list if it
is the same type.

### Manual Use of Prompts

Fields enabled as prompts in the template can be used for the following
actions in the API:

- POST to `/api/v2/job_templates/N/launch/` (can accept all prompt-able fields)
- POST to `/api/v2/workflow_job_templates/N/launch/` (can accept certain fields, see `workflow.md`)
- POST to `/api/v2/system_job_templates/N/launch/` (can accept certain fields, with no user configuration)

When launching manually, certain restrictions apply to the use of credentials:

- If providing the deprecated `extra_credentials`, this becomes the "legacy" method;
it imposes additional restrictions on relaunch,
and is mutually exclusive with the use of the `credentials` field
- If providing `credentials`, existing credentials on the job template may
only be removed if replaced by another credential of the same type;
this is so that relaunch will use the up-to-date credential on the template
if it has been edited since the prior launch

#### Data Rules for Prompts

For the POST action to launch, data for "prompts" are provided as top-level
keys in the request data. A list may also be
provided for `credentials`, which is otherwise not possible in AWX API design.
The list of credentials provided in the POST data will become the list
for the spawned job.

Values of `null` are not allowed; if the field is not being overridden,
the key should not be given in the payload. A `400` should be returned if
this is done.

Example:

POST to `/api/v2/job_templates/N/launch/` with data:
```
{
  "job_type": "check",
  "credentials": [1, 2, 4, 5]
}
```

...where the job template has credentials `[2, 3, 5]`, and the credential types
are the following:

- 1 - gce
- 3 - gce
- 4 - aws
- 5 - openstack

Assuming that the job template is configured to prompt for all of these
fields, here is what happens in this action:

- `job_type` of the job takes the value of "check"
- `extra_vars` of the job template will be used without any overrides

If `extra_vars` in the request data contains some keys, these will
be combined with the job template `extra_vars` dictionary, with the
request data taking precedence.

Provided credentials will replace any job template credentials of the same
exclusive type. In the example, the job template
Credential 3 was replaced with the provided Credential 1: a job
may only use one GCE credential, because these two credentials define the
same environment variables and configuration file.
If the job had not provided Credential 1, a 400 error would have been
returned because the job must contain the same types of credentials as its
job template.

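A quick sketch of performing the launch above from a script; this is an illustration only, and the host, template id, credential ids, and auth are placeholders.

```python
import requests

AWX = "https://awx.example.org"            # placeholder host
AUTH = ("admin", "password")               # placeholder credentials

payload = {
    "job_type": "check",
    "credentials": [1, 2, 4, 5],           # ids from the example above
}

r = requests.post(
    f"{AWX}/api/v2/job_templates/42/launch/", json=payload, auth=AUTH
)

if r.status_code == 400:
    # e.g. a prompted field was null, or credential types did not line up
    print("launch rejected:", r.json())
else:
    body = r.json()
    # Fields the template did not allow to be prompted come back here.
    print("ignored fields:", body.get("ignored_fields"))
    print("spawned job id:", body.get("job"))
```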
### Saved Launch-time Configurations

Several other mechanisms which automatically launch jobs can apply prompts
at launch-time that are saved in advance:

- Workflow nodes
- Schedules
- Job relaunch / re-scheduling
- (partially) Workflow job templates

In the case of workflow nodes and schedules, the prompted fields are saved
directly on the model. Those models include Workflow Job Template Nodes,
Workflow Job Nodes, and Schedules. The prompts a job was launched with are also saved,
and only used to prepare the correct launch-time configuration for subsequent
re-launch and re-scheduling of the job. To see these prompts for a particular
job, do a GET to `/api/v2/jobs/N/create_schedule/`.

#### Workflow Node Launch Configuration

Workflow job nodes will combine `extra_vars` from their parent workflow job
with the variables saved on the node itself.

If the node's job template has `ask_inventory_on_launch` set to false and
the node provides an inventory, this resource will not be used in the spawned
job. If a user creates a node that would do this, a 400 response will be returned.

#### Workflow Job Template Prompts

Workflow job templates are different from other cases because they do not have a
template directly linked, so their prompts are a form of action-at-a-distance.
When the node's prompts are gathered to spawn its job, any prompts from the workflow job
will take precedence over the node's value.

As a special exception, `extra_vars` from a workflow will not obey the job template survey
and prompting rules, both for historical and ease-of-understanding reasons.
This behavior may change in the future.

Other than that exception, job template prompting rules are still adhered to when
a job is spawned.

#### Job Relaunch and Re-scheduling

Job relaunch does not allow a user to provide any prompted fields at the time of relaunch.
Relaunching will re-apply all the prompts used at the
time of the original launch. This means that:

- All prompt restrictions apply as if the job was being launched with the
current job template (even if it has been modified)
- RBAC rules for prompted resources still apply

Those same rules apply when creating a schedule from the
`/api/v2/schedule_job/` endpoint.

Jobs orphaned by a deleted job template can be relaunched,
but only with Organization or System Administrator privileges.

#### Credential Password Prompting Restriction

Credential passwords cannot be saved as part
of a saved launch-time configuration. This is for security reasons.

Credential passwords _can_ be provided at time of relaunch.

### Validation

The general rule for validation is that a spawned job may only differ from its
template in fields that the template allows to be prompted for.
In other words, if no prompts (including surveys) are configured, a job
must be identical to the template it was created from, for all fields
that become `ansible-playbook` options.

#### Disallowed Fields

If a manual launch provides fields not allowed by the rules of the template,
the behavior is:

- Launches without those fields, ignoring them
- Lists the fields in `ignored_fields` in the POST response

#### Data Type Validation

All fields provided on launch, or saved in a launch-time configuration,
are subject to the same data type validation as
if saving to the job template model. For example, only certain values of
`job_type` or `verbosity` are accepted.

Surveys impose additional restrictions, and violations of the survey
validation rules will prevent launch from proceeding.

#### Fields Required on Launch

Failing to provide required variables also results in a validation error
when manually launching. It will also result in a 400 error if the user
fails to provide those fields when saving a workflow job template node or schedule.

#### Broken Saved Configurations

If a saved configuration references resources that are no longer valid when the job is
launched (a typical example is a null `inventory`), then the job should be
created in an "error" state with `job_explanation` containing a summary
of what happened.

### Scenarios to Cover

**Variable Precedence**
- Schedule has survey answers for workflow job template survey
- Workflow job template has node that has answers to job template survey
- On launch, the schedule answers override all others

**Survey Password Durability**
- Schedule has survey password answers from workflow job template survey
- Workflow job template node has answers to different password questions from job template survey
- Saving with `"$encrypted$"` value will either:
  - become a no-op, removing the key if a valid question default exists
  - replace with the database value if question was previously answered
- Final job it spawns has both answers encrypted

**POST to Associate Credential to Workflow Job Template Node**
- Requires admin to the workflow job template and execute to the job template
- This is in addition to the restriction of `ask_credential_on_launch`

**Credentials Merge Behavior**
- Job template has machine & cloud credentials, set to prompt for credential on launch
- Schedule for job template provides no credentials
- Spawned job still uses all job template credentials

**Credentials Deprecated Behavior**
- Manual launch providing `"extra_credentials": []` should launch with no job credentials
- Such jobs cannot have schedules created from them

Starting from Tower 3.3 and API v2, users are able to copy some existing resource objects to quickly
create new resource objects via POSTing to the corresponding `/copy/` endpoint. A new `CopyAPIView` class
is introduced as the base view class for `/copy/` endpoints. It mimics the process of manually fetching
fields from the existing object to create a new object, plus the ability to automatically detect sub
structures of existing objects and make a background task-based deep copy when necessary.


## Usage

If an AWX resource can be copied, all of its object detail API views will have a related URL field
`"copy"`, which has the form `/api/v2/<resource name>/<object pk>/copy/`. A GET to this endpoint
will return `can_copy`, which is a boolean indicating whether the current user can execute a copy
operation; POSTing to this endpoint actually copies the resource object. One field, `name`, is required;
this will later be used as the name of the created copy. Upon success, a 201 will be returned, along
with the created copy (see the sketch at the end of this section).

For some resources like credentials, the copy process is not time-consuming, thus the entire copy
process will take place in the request-response cycle, and the created object copy is returned as a
POST response.

For some other resources like inventories, the copy process can take longer, depending on the number
of sub-objects to copy (this will be explained later in this document). Thus, although the created copy will be returned, the
copy process is not finished yet. All sub-objects (like all hosts and groups of an inventory) will
not be created until after the background copy task finishes successfully.

Currently, the available copiable resources are:

- job templates
- projects
- notifications
- inventory scripts

For most of the resources above, only the object to be copied itself will be copied; for some resources
like inventories, however, sub-resources belonging to the resource will also be copied to maintain the
full functionality of the copied new resource. Specifically:

- When an inventory is copied, all of its hosts, groups and inventory sources are copied.
- When a workflow job template is copied, all of its workflow job template nodes are copied.

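As a rough sketch of the usage described above (the host, object id, and auth are placeholders):

```python
import requests

AWX = "https://awx.example.org"          # placeholder host
AUTH = ("admin", "password")             # placeholder credentials
COPY_URL = f"{AWX}/api/v2/job_templates/42/copy/"

# GET tells us whether the current user is allowed to copy this object.
check = requests.get(COPY_URL, auth=AUTH)
check.raise_for_status()

if check.json().get("can_copy"):
    # POST with the one required field, `name`, performs the copy;
    # a 201 response carries the newly created object.
    created = requests.post(
        COPY_URL, json={"name": "Copy of demo template"}, auth=AUTH
    )
    print(created.status_code, created.json().get("id"))
```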
## How to Add a Copy Endpoint for a Resource

The copy behavior of different resources largely follows the same pattern; therefore, a unified way of
enabling copy capability for resources is available for developers.

First, create a `/copy/` URL endpoint for the target resource.

Second, create a view class as the handler for the `/copy/` endpoint. This view class should be subclassed
from `awx.api.generics.CopyAPIView`. Here is an example:
```python
class JobTemplateCopy(CopyAPIView):

    model = JobTemplate
    copy_return_serializer_class = JobTemplateSerializer
```

Note the above example declares a custom class attribute `copy_return_serializer_class`. This attribute
is used by `CopyAPIView` to render the created copy in the POST response, so in most cases the value should
be the same as the `serializer_class` of the corresponding resource detail view; for example, here the value is the
`serializer_class` of `JobTemplateDetail`.

Third, for the underlying model of the resource, add two macros, `FIELDS_TO_PRESERVE_AT_COPY` and
`FIELDS_TO_DISCARD_AT_COPY`, as needed. Here is an example:
```python
class JobTemplate(UnifiedJobTemplate, JobOptions, SurveyJobTemplateMixin, ResourceMixin):

    FIELDS_TO_PRESERVE_AT_COPY = ['labels', 'instance_groups', 'credentials', 'survey_spec']

    # Other code
```

Lastly, unit test the copy behavior of the new endpoint in `/awx/main/tests/functional`, and
update docs (like this doc).

Fields in `FIELDS_TO_PRESERVE_AT_COPY` must be solid model fields, while fields in
`FIELDS_TO_DISCARD_AT_COPY` do not need to be. Note that there are hidden fields not visible from the model
definition, namely reverse relationships and fields inherited from super classes or mix-ins. A helper
script, `tools/scripts/list_fields.py`, is available to inspect a model and list details of all its
available fields:
```
# In shell_plus
>>> from list_fields import pretty_print_model_fields
>>> pretty_print_model_fields(JobTemplate)
```

`CopyAPIView` will automatically detect the sub-objects of an object, and do a deep copy of all sub-objects
as a background task. There are sometimes permission issues with sub-object copy. For example,
when copying the nodes of a workflow job template, there are cases where the user performing the copy has no use
permission on the related credential and inventory of some nodes, and those fields should then be
`None`. In order to do that, the developer should provide a static method `deep_copy_permission_check_func`
on the corresponding copy view:
```python
class WorkflowJobTemplateCopy(WorkflowsEnforcementMixin, CopyAPIView):

    @staticmethod
    def deep_copy_permission_check_func(user, new_objs):
        # Check `user`'s permissions against the new sub-objects here.
        ...

    # Other code
```
The static method `deep_copy_permission_check_func` must have, and only have, two arguments: `user`, the
user performing the copy, and `new_objs`, a list of all sub-objects of the created copy. Sub-objects in
`new_objs` are initially populated disregarding any permission constraints; the developer shall check
`user`'s permission against these new sub-objects and unlink related objects or send
warning logs as necessary. `deep_copy_permission_check_func` should not return anything.

Lastly, the macro `REENCRYPTION_BLACKLIST_AT_COPY` is available as part of a model definition. It is a
list of field names which will escape re-encryption during copy. An example is the `extra_data` field
of workflow job template nodes.

## Acceptance Criteria

* Credentials should be able to copy themselves. The behavior of copying credential A shall be exactly
the same as creating a credential B with all necessary fields for creation coming from credential A.
* Inventories should be able to copy themselves. The behavior of copying inventory A shall be exactly
the same as creating an inventory B with all necessary fields for creation coming from inventory A. Other
than that, inventory B should inherit A's `instance_groups`, and have exactly the same host and group
structures as A.
* Inventory scripts should be able to copy themselves. The behavior of copying inventory script A
shall be exactly the same as creating an inventory script B with all necessary fields for creation
coming from inventory script A.
* Job templates should be able to copy themselves. The behavior of copying job template A
shall be exactly the same as creating a job template B with all necessary fields for creation
coming from job template A. Other than that, job template B should inherit A's `labels`,
`instance_groups`, `credentials` and `survey_spec`.
* Notification templates should be able to copy themselves. The behavior of copying notification
template A shall be exactly the same as creating a notification template B with all necessary fields
for creation coming from notification template A.
* Projects should be able to copy themselves. The behavior of copying project A shall be the
same as creating a project B with all necessary fields for creation coming from project A, except for
`local_path`, which will be populated by the triggered project update. Other than that, project B
should inherit A's `labels`, `instance_groups` and `credentials`.
* Workflow job templates should be able to copy themselves. The behavior of copying workflow job
template A shall be exactly the same as creating a workflow job template B with all necessary fields
for creation coming from workflow job template A. Other than that, workflow job template B should
inherit A's `labels`, `instance_groups`, `credentials` and `survey_spec`, and have exactly the
same workflow job template node structure as A.
* In all copy processes, the `name` field of the created copy of the original object should be customizable in the POST body.
* The permission for a user to make a copy of an existing resource object should be the same as the
permission for a user to create a brand new resource object using fields from the existing object.
* The RBAC behavior of the original workflow job template `/copy/` should be preserved. That is, if the
user has no permission to access the related project and credential of a workflow job template
node, the copied workflow job template node should have those fields empty.

## Scheduled Jobs

AWX allows jobs to run on a schedule (with optional recurrence rules) via
an `HTTP POST` to a variety of API endpoints:

    HTTP POST /api/v2/job_templates/N/schedules/
    {
        "name": "Example Schedule",
        "rrule": "DTSTART:20300115T120000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=7"
    }

The specific example above would run a job every day - for seven consecutive days - starting
on January 15th, 2030 at noon (UTC).

## Specifying Timezones

`DTSTART` values provided to AWX _must_ provide timezone information (they may
not be naive dates).

For UTC dates, `DTSTART` values should be denoted with the `Z` suffix:

    DTSTART:20300115T120000Z

A list of _valid_ zone identifiers (which can vary by system) can be retrieved from the API.

## UNTIL and Timezones

`DTSTART` values provided to AWX _must_ provide timezone information (they may
not be naive dates).

Additionally, RFC5545 specifies that when `DTSTART` is timezone-aware, any
`UNTIL` value in the `RRULE` must be specified in UTC.

Not Valid:

`DTSTART;TZID=America/New_York:20180601T120000 RRULE:FREQ=DAILY;INTERVAL=1;UNTIL=20180606T170000`

## Previewing Schedules

AWX provides an endpoint for previewing the future dates and times for
a specified `RRULE`. A list of the next _ten_ occurrences will be returned in
local and UTC time.

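A sketch of exercising this preview feature from a script; the exact endpoint path is an assumption here (it is typically exposed as `/api/v2/schedules/preview/`, but check the browsable API of your AWX version), and the host and auth are placeholders.

```python
import requests

AWX = "https://awx.example.org"     # placeholder host
AUTH = ("admin", "password")        # placeholder credentials

rrule = "DTSTART:20300115T120000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=7"

# Assumed preview endpoint; it accepts an rrule and echoes back upcoming runs.
resp = requests.post(
    f"{AWX}/api/v2/schedules/preview/", json={"rrule": rrule}, auth=AUTH
)
resp.raise_for_status()

preview = resp.json()
print(preview.get("utc"))      # next occurrences in UTC
print(preview.get("local"))    # the same occurrences in local time
```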
## RRULE Limitations

The following aspects of `RFC5545` are _not_ supported by AWX schedules:

* Strings with more than a single `DTSTART:` component
* Strings with more than a single `RRULE` component
* The use of `COUNT=` in an `RRULE` with a value over 999

## Implementation Details

Any time an `awx.model.Schedule` is saved with a valid `rrule` value, the
`dateutil` library is used to burst out a list of all occurrences. From here,
the following dates are saved in the database:

* `main_schedule.dtend` - the _last_ datetime in the list of all occurrences (coerced to UTC)
* `main_schedule.next_run` - the _next_ datetime in the list after `utcnow()` (coerced to UTC)

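A minimal sketch of the "burst out occurrences" step using `dateutil` directly (the rule is the example from earlier in this document; note that `rrulestr` expects the `DTSTART` and `RRULE` components on separate lines):

```python
from datetime import datetime, timezone
from dateutil.rrule import rrulestr

rule = "DTSTART:20300115T120000Z\nRRULE:FREQ=DAILY;INTERVAL=1;COUNT=7"

# Burst the recurrence rule into concrete occurrences.
occurrences = list(rrulestr(rule))

dtstart = occurrences[0]                      # first occurrence
dtend = occurrences[-1]                       # last occurrence
now = datetime.now(timezone.utc)
next_run = next((o for o in occurrences if o > now), None)

print(dtstart, dtend, next_run)
```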
AWX makes use of [Celery Periodic Tasks
(celerybeat)](http://docs.celeryproject.org/en/latest/userguide/periodic-tasks.html)
to run a periodic task that discovers new jobs that need to run at a regular
interval (by default, every 30 seconds). When this task starts, it queries the
database for any schedules that are due to run.

Tower configuration gives Tower users the ability to adjust multiple runtime parameters of Tower.

## Usage Manual

#### To Use:
The REST endpoint for CRUD operations against Tower configurations can be found at `/api/v2/settings/`. A GET to that endpoint will return a list of available Tower configuration categories and their URLs, such as `"system": "/api/v2/settings/system/"`. The URL given to each category is the endpoint for CRUD operations against individual settings under that category.

A GET to a category endpoint (for example, `/api/v2/settings/system/`) returns each setting in that category along with its current value.
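A small sketch of reading and changing a setting through this interface; the host, auth, and the particular value being written are placeholders, and updates follow normal REST `PATCH` semantics on the category endpoint.

```python
import requests

AWX = "https://awx.example.org"     # placeholder host
AUTH = ("admin", "password")        # placeholder credentials

# List the available configuration categories and their URLs.
categories = requests.get(f"{AWX}/api/v2/settings/", auth=AUTH)
print(categories.json())

# Read every setting in one category...
system = requests.get(f"{AWX}/api/v2/settings/system/", auth=AUTH)
print(system.json())

# ...and update a single value with a PATCH (value shown is a placeholder).
requests.patch(
    f"{AWX}/api/v2/settings/system/",
    json={"TOWER_URL_BASE": "https://awx.example.org"},
    auth=AUTH,
)
```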
|||||||
@ -1,25 +1,32 @@
## Tower Workflow Overview

Workflows are structured compositions of Tower job resources. The only job of a workflow is to trigger other jobs in specific orders to achieve certain goals, such as tracking the full set of jobs that were part of a release process as a single unit.

A workflow has an associated tree-graph that is composed of multiple nodes. Each node in the tree has one associated job template (job template, inventory update, project update, or workflow job template) along with related resources that, if defined, will override the associated job template resources (*i.e.*, credential, inventory, etc.) if the job template associated with the node is selected to run.


## Usage Manual

### Workflow Create-Read-Update-Delete (CRUD)
Like other job resources, workflow jobs are created from workflow job templates. The API exposes common fields similar to job templates, including labels, schedules, notification templates, extra variables and survey specifications. In addition, the workflow graph nodes can be accessed in the API via the related `workflow_nodes` field.

The CRUD operations against a workflow job template and its corresponding workflow jobs are almost identical to those of normal job templates and related jobs.

By default, organization administrators have full control over all workflow job templates under the same organization, and they share these abilities with users who have the `workflow_admin_role` in that organization. Permissions can be further delegated to other users via the workflow job template roles.
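A rough sketch of that CRUD symmetry, assuming an OAuth2 token; the organization ID, template name, and `<id>` are placeholders:

```
# Create a workflow job template
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X POST -d '{"name": "Release Pipeline", "organization": 1}' \
     https://<awx>/api/v2/workflow_job_templates/

# Read it back and list its graph nodes via the related `workflow_nodes` field
curl -H "Authorization: Bearer <token-value>" https://<awx>/api/v2/workflow_job_templates/<id>/
curl -H "Authorization: Bearer <token-value>" https://<awx>/api/v2/workflow_job_templates/<id>/workflow_nodes/
```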

### Workflow Nodes

Workflow Nodes are containers of workflow-spawned job resources and function as nodes of workflow decision trees. As with the workflow itself, the two types of workflow nodes are workflow job template nodes and workflow job nodes.

Workflow job template nodes are listed and created under the `/workflow_job_templates/\d+/workflow_nodes/` endpoint, to be associated with the underlying workflow job template, or directly under the `/workflow_job_template_nodes/` endpoint. The most important fields of a workflow job template node are `success_nodes`, `failure_nodes`, `always_nodes`, `unified_job_template` and `workflow_job_template`. The first three are lists of workflow job template nodes that, in union, form the set of all of its child nodes; specifically, `success_nodes` are triggered when the parent node's job succeeds, `failure_nodes` are triggered when the parent node's job fails, and `always_nodes` are triggered regardless of whether the parent job succeeds or fails. The latter two fields reference the job template resource the node contains and the workflow job template it belongs to.
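A sketch of building a small graph this way (all IDs are placeholders):

```
# Add a node to workflow job template 10 that runs unified job template 42
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X POST -d '{"unified_job_template": 42}' \
     https://<awx>/api/v2/workflow_job_templates/10/workflow_nodes/

# Make node 6 a child of node 5, to run only when node 5's job succeeds
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X POST -d '{"id": 6}' \
     https://<awx>/api/v2/workflow_job_template_nodes/5/success_nodes/
```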

#### Workflow Launch Configuration

Workflow job templates can contain launch configuration items. So far, these only include:

- `extra_vars`
- `inventory`
- `limit`
@ -31,7 +38,7 @@ a survey, in the same way that job templates work.

Workflow nodes may also contain the launch-time configuration for the job it will spawn.
As such, they share all the properties common to all saved launch configurations.

When a workflow job template is launched, a workflow job is created. If the workflow
job template is set to prompt for a value, then the user may provide this on launch,
and the workflow job will assume the user-provided value.
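A sketch of providing prompted values at launch time, assuming the workflow job template has the corresponding `ask_*_on_launch` flags enabled; the IDs and values are placeholders:

```
# Launch workflow job template 10, supplying an inventory and extra_vars at launch time
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X POST -d '{"inventory": 3, "extra_vars": {"release": "1.2.3"}}' \
     https://<awx>/api/v2/workflow_job_templates/10/launch/
```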
@ -39,7 +46,7 @@ A workflow job node is created for each WFJT node and all fields from the WFJT n

If the workflow job and the node both specify the same prompt, then the workflow job
takes precedence and its value will be used. In either case, if the job template
the node references does not have the related prompting field set to `true`
(such as `ask_inventory_on_launch`), then the prompt will be ignored, and the
job template default, if it exists, will be used instead.
@ -47,10 +54,11 @@ See the document on saved launch configurations for how these are processed
when the job is launched, and the API validation involved in building
the launch configurations on workflow nodes.


#### Workflows as Workflow Nodes

A workflow can be added as a node in another workflow. The child workflow is the associated
`unified_job_template` that the node references when that node is added to the parent workflow.
When the parent workflow dispatches that node, then the child workflow will begin running, and
the parent will resume execution of that branch when the child workflow finishes.
Branching into success / failed pathways is decided based on the status of the child workflow.
@ -59,6 +67,7 @@ In the event that spawning the workflow would result in recursion, the child wor
will be marked as failed with a message explaining that recursion was detected.
This is to prevent saturation of the task system with an infinite chain of workflows.
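A sketch of nesting one workflow inside another; the node's `unified_job_template` simply points at the child workflow job template (IDs are placeholders):

```
# Add workflow job template 20 as a node inside workflow job template 10
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X POST -d '{"unified_job_template": 20}' \
     https://<awx>/api/v2/workflow_job_templates/10/workflow_nodes/
```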

#### Workflow Approval Nodes

The workflow approval node feature enables users to add approval steps between the nodes of a workflow, so that a user (as long as they have approval permissions, explained in further detail below) can give the "yes" or "no" to continue on to the next step in the workflow.
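Assuming the workflow approvals API that backs this feature, acting on a pending step might look like the following sketch; the endpoint and ID are assumptions rather than something stated in the text above:

```
# Approve or deny a pending workflow approval (assumed endpoint; <id> is a placeholder)
curl -H "Authorization: Bearer <token-value>" -X POST https://<awx>/api/v2/workflow_approvals/<id>/approve/
curl -H "Authorization: Bearer <token-value>" -X POST https://<awx>/api/v2/workflow_approvals/<id>/deny/
```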
@ -86,20 +95,24 @@ A timeout (in minutes and seconds) can be set for each approval node. These fiel

### DAG Formation and Restrictions
The directed acyclic graph (DAG) structure of a workflow is enforced by associating workflow job template nodes via the `/workflow_job_template_nodes/\d+/*_nodes/` endpoints, where `*` is one of `success`, `failure` or `always`. One restriction is enforced when setting up new connections: because the graph must remain a DAG, no association that would introduce a cycle is allowed.
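For instance, if node 5 already has node 6 among its `success_nodes`, linking back in the other direction should be rejected. A sketch, with placeholder IDs and the 400-level refusal as the expected outcome:

```
# Node 5 -> node 6 already exists; node 6 -> node 5 would close a cycle,
# so this request is expected to fail with a 400-level response
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X POST -d '{"id": 5}' \
     https://<awx>/api/v2/workflow_job_template_nodes/6/success_nodes/
```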

### Workflow Run Details
A typical workflow run starts by either POSTing to the `/workflow_job_templates/\d+/launch/` endpoint, or being triggered automatically by a related schedule. First, the workflow job template creates a workflow job, and all related workflow job template nodes create workflow job nodes. Right after that, all root nodes are populated with corresponding job resources and start running. If nothing goes wrong, each decision tree will follow its own route to completion. The entire workflow finishes running when all of its decision trees complete.

As stated, workflow job templates can be created with populated `extra_vars`. These `extra_vars` are combined with the `extra_vars` of any job template launched by the workflow with higher variable precedence, meaning they will overwrite job template variables with the same name. Note that before the `extra_vars` set is applied as runtime job extra variables, it might be expanded and overwritten by the cumulative job artifacts of ancestor nodes. 'Cumulative' here means that children overwrite parents: for example, if a node has a parent node and a grandparent node, and both ancestors generate job artifacts, then the job artifacts of the grandparent node are overwritten by those of the parent node to form the set of cumulative job artifacts for the current node.

Job resources spawned by workflow jobs are necessary for workflows to run correctly. Therefore, the deletion of spawned job resources is blocked while the underlying workflow job is executing.
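A small worked illustration of that precedence; the variable names and values are invented for the example:

```
# Job template defaults:             version=0.0.0
# Workflow job template extra_vars:  version=1.2.3  notify=team
# Grandparent node artifacts:        build_id=100
# Parent node artifacts:             build_id=101
#
# Cumulative artifacts for the current node (child overwrites parent): build_id=101
# Runtime extra variables for the node's job:  version=1.2.3  notify=team  build_id=101
#   (the workflow's version=1.2.3 overrides the job template's version=0.0.0)
```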

Other than success and failure, a workflow-spawned job resource can also end with status 'error' or 'canceled'. When a workflow-spawned job resource errors or is canceled, it is treated the same as a failure. If the unified job template of the node is null (which could be a result of deleting the unified job template or copying a workflow when the user lacks necessary permissions to use the resource), then the node will be treated as 'failed' and the failure paths will continue to execute.

A workflow job itself can also be canceled. In this case, all of its spawned job resources will be canceled (if cancellation is allowed) and the paths that follow them stop executing.
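A sketch of requesting cancellation; the `/cancel/` endpoint and its `can_cancel` field are assumptions based on the usual unified-job pattern, and the ID is a placeholder:

```
# Ask whether the workflow job can currently be canceled (the response carries "can_cancel")
curl -H "Authorization: Bearer <token-value>" https://<awx>/api/v2/workflow_jobs/<id>/cancel/

# Request cancellation of the running workflow job
curl -H "Authorization: Bearer <token-value>" -X POST https://<awx>/api/v2/workflow_jobs/<id>/cancel/
```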

Like job templates, workflow job templates can be associated with notification templates, and notifications work exactly the same as for job templates. One distinction is the notification message body: a workflow job sends a notification body that contains not only its own status, but also the status of all of its spawned jobs. A typical notification body looks like this:

```
Workflow job summary:

@ -109,22 +122,28 @@ Workflow job summary:
...
```

Starting from Tower 3.2, workflow jobs support simultaneous job runs just like ordinary jobs. This is controlled by the `allow_simultaneous` field of the underlying workflow job template. By default, simultaneous workflow job runs are disabled and users should be prudent in enabling this functionality, since the performance boost of simultaneous workflow runs will only manifest when a large portion of the jobs contained by a workflow allow simultaneous runs. Otherwise, some workflow jobs are expected to be long-running, since their spawned jobs can sit in the pending state for a long time.
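Enabling it is a single field update on the workflow job template (a sketch; the ID is a placeholder):

```
# Allow this workflow job template to have more than one workflow job running at once
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" \
     -X PATCH -d '{"allow_simultaneous": true}' \
     https://<awx>/api/v2/workflow_job_templates/10/
```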

A workflow job is marked as failed if a job spawned by the workflow job fails without a failure handler. A failure handler is a `failure` or `always` link in the workflow job template. A job that is canceled is, effectively, considered a failure for the purposes of determining whether a job node has failed.
### Workflow Copy and Relaunch
Other than the normal way of creating workflow job templates, it is also possible to copy existing workflow job templates. The resulting new workflow job template will be mostly identical to the original, except for the `name` field, which has a suffix appended to indicate that it is a copy.

Workflow job templates can be copied by POSTing to the `/workflow_job_templates/\d+/copy/` endpoint. After the copy finishes, the resulting new workflow job template will have identical fields, including description, `extra_vars`, and survey-related fields (`survey_spec` and `survey_enabled`). More importantly, the workflow job template nodes of the original workflow job template, as well as the topology they form, will be copied. Note that there are RBAC restrictions on copying workflow job template nodes. A workflow job template is allowed to be copied if the user has permission to add an equivalent workflow job template. If the user performing the copy does not have access to a node's related resources (job template, inventory, or credential), those related fields will be null in the copy's version of the node. Schedules and notification templates of the original workflow job template will not be copied nor shared, and the name of the created workflow job template is the original name plus a specially formatted suffix that indicates its copy origin as well as the copy time, such as `'copy_from_name@10:30:00 am'`.

Workflow jobs cannot be copied directly; instead, a workflow job is implicitly copied when it needs to relaunch. Relaunching an existing workflow job is done by POSTing to the `/workflow_jobs/\d+/relaunch/` endpoint. What happens next is that the original workflow job's prompts are re-applied to its workflow job template to create a new workflow job. Finally, the full-fledged new workflow job is triggered to run, thus fulfilling the purpose of the relaunch. Survey password-type answers should also be redacted in the relaunched version of the workflow job.
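A sketch of both operations (IDs are placeholders):

```
# Copy an existing workflow job template
curl -H "Authorization: Bearer <token-value>" -X POST https://<awx>/api/v2/workflow_job_templates/10/copy/

# Relaunch an existing workflow job
curl -H "Authorization: Bearer <token-value>" -X POST https://<awx>/api/v2/workflow_jobs/123/relaunch/
```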

### Artifacts
Support for artifacts starts in Ansible and is carried through in Tower. The `set_stats` module is invoked by users, in a playbook, to register facts. Facts are passed in via the `data:` argument. Note that the default `set_stats` parameters are the correct ones to work with Tower (*i.e.*, `per_host: no`). Now that facts are registered, we will describe how facts are used. In Ansible, registered facts are "returned" to the callback plugin(s) via the `playbook_on_stats` event. Ansible users can configure whether or not they want the facts displayed through the global `show_custom_stats` configuration. Note that `show_custom_stats` does not affect the artifact feature of Tower; it only controls the display of `set_stats` fact data in Ansible output (including the output of Ansible playbooks run in Tower). Tower uses a custom callback plugin that gathers the fact data set via `set_stats` in the `playbook_on_stats` handler and "ships" it back to Tower, saves it in the database, and makes it available on the job endpoint via the variable `artifacts`. The semantics and usage of `artifacts` throughout a workflow are described elsewhere in this document.
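Once a job that called `set_stats` has finished, its registered facts can be read back from the job record. A sketch; the fact shown and the job ID are illustrative:

```
# The playbook registered facts along the lines of:  set_stats: { data: { build_id: 101 } }
# They then appear on the spawned job's API record under the `artifacts` field:
curl -H "Authorization: Bearer <token-value>" https://<awx>/api/v2/jobs/<id>/
```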

### Workflow Run Example
To best understand the nuances of workflow run logic, we will look at an example workflow run as it progresses through the 'running' state. In the workflow examples below, nodes are labeled `<do_not_run, job_status, node_id>`, where `do_not_run` can be `RUN` or `DNR`: `DNR` means 'do not run the node' and `RUN` means 'run the node'. Nodes start out with `do_not_run = False`, depicted as `RUN` in the pictures below. When nodes are known to not run, they will be marked `DNR` and that state will not change. `job_status` is the status of the job associated with the node. `node_id` is the unique ID for the workflow job node.

<p align="center">
<img src="img/workflow_step0.png">
@ -132,60 +151,66 @@ To best understand the nuances of workflow run logic we will look at an example
</p>
<p align="center">
<img src="img/workflow_step1.png">
Root nodes are selected to run. A root node is a node with no incoming nodes. Node 0 is selected to run and results in a status of `'successful'`. Nodes 1, 4, and 5 are marked `'DNR'` because they are in the failure path. Node 6 is not marked `'DNR'` because nodes 2 and 3 may run and result in node 6 running. The same reasoning is why nodes 7, 8, and 9 are not marked `'DNR'`.
</p>
<p align="center">
<img src="img/workflow_step2.png">
Nodes 2 and 3 are selected to run and their job results are both `'successful'`. Node 6 is not marked `'DNR'` because node 3 will trigger node 6.
</p>
<p align="center">
<img src="img/workflow_step3.png">
Node 6 is selected to run and the job results in `'failed'`. Node 8 is marked `'DNR'` because of the success path. Nodes 7 and 8 will be run in the next cycle.
</p>
<p align="center">
<img src="img/workflow_step4.png">
Nodes 7 and 8 are selected to run and their job results are both `'successful'`.
</p>

The resulting state of the workflow job run above would be `'successful'`. Although individual nodes fail, the overall workflow job status is `'successful'` because all individual node failures have error handling paths (`failure_nodes` or `always_nodes`).

## Test Coverage

### CRUD-related
* Verify that CRUD operations on all workflow resources are working properly. Note that workflow job nodes cannot be created or deleted independently, but verifications are needed to ensure that when a workflow job is deleted, all of its related workflow job nodes are also deleted.
* Verify the RBAC properties of workflow resources. Specifically:
  * Workflow job templates are accessible only to superusers (system admins and admins of the same organization), and to system auditors and auditors of the same organization with read permission only.
  * Workflow job read and delete permissions follow from the associated workflow job template.
  * Workflow job relaunch permission consists of the union of execute permission on its associated workflow job template and the permission to re-create all the nodes inside of the workflow job.
  * Workflow job template nodes derive their permission rules for creation and editing from both their associated workflow job template and their unified job template.
  * Workflow job template nodes can be deleted with admin permission on their workflow job template (even lacking permission to the node's job template).
  * Workflow job nodes are viewable if their workflow job is viewable.
  * No CRUD actions are possible on workflow job nodes by any user, and they may only be deleted by deleting their workflow job.
  * Workflow jobs can be deleted by superusers and org admins of the organization of the associated workflow job template, and no one else.
* Verify that workflow job template nodes can be created under, or (dis)associated with, workflow job templates.
* Verify that only the permitted job template types can be associated with a workflow job template node. Currently the permitted types are *job templates, inventory sources, projects, and workflow job templates*.
* Verify that workflow job template nodes under the same workflow job template can be associated to form the parent-child relationships of decision trees. Specifically, one node takes another as its child node by POSTing the other node's ID to one of the three endpoints: `/success_nodes/`, `/failure_nodes/` and `/always_nodes/`.
* Verify that workflow job template nodes are not allowed to have invalid associations (*i.e.*, cycles); any attempt to create one will trigger a 400-level response.
* Verify that a workflow job template can be successfully copied, and that the created workflow job template does not miss any field that should be copied or intentionally modified.
* Verify that if a user has no access to any of the related resources of a workflow job template node, that node will not be copied and will have `null` as a placeholder.
* Verify that `artifacts` is populated when `set_stats` is used in Ansible >= v2.2.1.0-0.3.rc3.

### Task-related

* Verify that workflow jobs can be launched by POSTing to the `/workflow_job_templates/\d+/launch/` endpoint.
* Verify that schedules can be successfully (dis)associated with a workflow job template, and that workflow jobs can be triggered by the schedule of the associated workflow job template at the specified time.
* Verify that extra variables work for workflow job templates as described. Specifically, verify the role of workflow job extra variables as a set of global runtime variables over all of its spawned jobs.
* Verify that extra variables of a workflow job node are correctly overwritten in order by the cumulative job artifacts of ancestors, and that the overwrite policy of cumulative job artifacts is correct (artifacts of the parent overwrite artifacts of the grandparent).
* Verify that during a workflow job run, all of its decision trees follow their correct paths of execution. Unwarranted behaviors include a child node executing before its parent and the wrong path being selected (*failure nodes* executed when the parent node *succeeds*, and so on).
* Verify that a subtree of execution will never start if its root node runs into an internal error (*as opposed to ending with a failure*).
* Verify that a subtree of execution will never start if its root node is successfully canceled.
* Verify that cancelling a workflow job that is cancellable will consequently cancel any of its cancellable spawned jobs and thus interrupt the whole workflow execution.
* Verify that during a workflow job run, deleting its spawned jobs is prohibited.
* Verify that at the beginning of each spawned job run, its prompted fields will be populated by the wrapping workflow job node with the correct values. For example, related `credentials` of the workflow job node go to `credentials` of the spawned job.
* Verify that notification templates can be successfully (dis)associated with a workflow job template. Later, when its spawned workflow jobs finish running, verify that the correct type of notifications will be sent according to the job status.
* Verify that a workflow job can be successfully relaunched.

## Test Notes
* Please apply a non-trivial topology when testing a workflow run. A non-trivial topology for a workflow job template should include:
  * Multiple decision trees.
  * Relatively large height in each decision tree.
  * All three types of relationships (`success`, `failure` and `always`).