Create AWX docsite with RST content (#14328)
Co-authored-by: Thanhnguyet Vo <tvo@ansible.com>
Co-authored-by: TVo <thavo@redhat.com>

110  docs/docsite/rst/userguide/applications_auth.rst  Normal file
@@ -0,0 +1,110 @@
.. _ug_applications_auth:

Applications
=====================

.. index::
   single: token authentication
   single: authentication
   pair: applications; authentication
   pair: applications; tokens


Configuring token-based authentication makes it easier for external applications such as ServiceNow and Jenkins to integrate with AWX. OAuth 2 allows you to use tokens to share certain data with an application without disclosing login information, and these tokens can be scoped as "read-only". You create an application that is representative of the external application you are integrating with, then use it to create tokens for that application to use on behalf of the users of the external application.

Having these tokens associated with an application resource gives you the ability to manage all tokens issued for a particular application more easily. By separating token issuance under Applications, you can revoke all tokens based on the Application without having to revoke all tokens in the system.

When integrating an external web app with AWX, that web app may need to create OAuth2 tokens on behalf of its users. Creating an application with the Authorization Code grant type is the preferred way to do this because:

- external applications can obtain a token for users, using their credentials
- tokens issued for a particular application are compartmentalized, so they can be managed easily (for example, you can revoke all tokens associated with that application)
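
Once an application exists and a token has been issued for it, the external service authenticates its API requests by presenting that token as a bearer token. A minimal sketch (the host name and token value are placeholders for your environment):

::

    curl -H "Authorization: Bearer <oauth2-token-value>" https://awx.example.org/api/v2/me/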


Getting Started with Applications
-----------------------------------

.. index::
   pair: applications; getting started

Access the Applications page by clicking **Applications** from the left navigation bar. The Applications page displays a searchable list of all available Applications currently managed by AWX and can be sorted by **Name**.

|Applications - home with example apps|

.. |Applications - home with example apps| image:: ../common/images/apps-list-view-examples.png

If no applications exist yet, only a gray box with a message prompting you to add applications displays.

.. image:: ../common/images/apps-list-view-empty.png


.. _ug_applications_auth_create:

Create a new application
------------------------------

.. index::
   pair: applications; create
   pair: applications; adding new

Token-based authentication for users can be configured in the Applications window.

1. In the AWX User Interface, click **Applications** from the left navigation bar.

The Applications window opens.

2. Click the **Add** button located in the upper right corner of the Applications window.

The New Application window opens.

|Create application|

.. |Create application| image:: ../common/images/apps-create-new.png

3. Enter the following details in the **Create New Application** window:

- **Name** (required): provide a name for the application you want to create
- **Description**: optionally provide a short description for your application
- **Organization** (required): provide the organization with which this application is associated
- **Authorization Grant Type** (required): select one of the grant types the user will use to acquire tokens for this application. Refer to :ref:`grant types <ag_oauth2_token_auth_grant_types>` in the Applications section of the |ata|.
- **Redirect URIs**: provide a list of allowed URIs, separated by spaces. This is required if you specified the grant type to be **Authorization code**.
- **Client Type** (required): select the level of security of the client device

4. When done, click **Save**, or click **Cancel** to abandon your changes. Upon saving, the client ID displays in a pop-up window.

.. image:: ../common/images/apps-client-id-popup.png


Applications - Tokens
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: applications; tokens
   pair: applications; adding tokens

Selecting the **Tokens** view displays a list of the users that have tokens to access the application.

|Applications - tokens list|

.. |Applications - tokens list| image:: ../common/images/apps-tokens-list-view-examples.png

A token can only access resources that its associated user can access, and can be limited further by specifying the scope of the token.


.. _ug_tokens_auth_create:

Add Tokens
^^^^^^^^^^^^^^^^^^^^^^

Tokens are added through the Users screen and can be associated with an application at that time. Specifying an application can be performed directly in the user's token settings. You can create a token for *your* user in the Tokens configuration tab, meaning only you can create and see your tokens in your own user screen. To add a token:

1. Access the Users list view by clicking **Users** from the left navigation bar, then click on your user to configure your OAuth 2 tokens.

.. note::

   You can only create OAuth 2 Tokens for your user via the API or UI, which means you can only access your own user profile in order to configure or view your tokens. If you are an admin and need to create or remove tokens for other users, see the revoke and create commands in the :ref:`Token and session management <ag_token_utility>` section of the |ata|.

.. include:: ../common/add-token.rst

To verify that the application in the example above now shows the user with the appropriate token, go to the **Tokens** tab of the Applications window:

.. image:: ../common/images/apps-tokens-list-view-example2.png
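
An OAuth2 token can also be created programmatically against the token endpoint of the API. A minimal sketch, assuming basic authentication is enabled, that the requesting user will own the token, and that the application created earlier has ID 2 (adjust the host, credentials, and IDs for your environment):

::

    curl -u <username>:<password> -X POST \
         -H "Content-Type: application/json" \
         -d '{"description": "ServiceNow integration", "application": 2, "scope": "read"}' \
         https://awx.example.org/api/v2/tokens/
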

85  docs/docsite/rst/userguide/best_practices.rst  Normal file
@@ -0,0 +1,85 @@
Best Practices
--------------

.. index::
   single: best practices

Use Source Control
~~~~~~~~~~~~~~~~~~

.. index::
   pair: best practices; source control

While AWX supports playbooks stored directly on the server, best practice is to store your playbooks, roles, and any associated details in source control. This way you have an audit trail describing when and why you changed the rules that are automating your infrastructure. Plus, it allows for easy sharing of playbooks with other parts of your infrastructure or team.

Ansible file and directory structure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: best practices; file and directory structure

Please review the `Ansible Tips and Tricks <https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html>`_ from the Ansible documentation. If creating a common set of roles to use across projects, these should be accessed via source control submodules, or a common location such as ``/opt``. Projects should not expect to import roles or content from other projects.

.. note::
   Playbooks should not use the ``vars_prompt`` feature, as AWX does not interactively allow for ``vars_prompt`` questions. If you must use ``vars_prompt``, refer to and make use of the :ref:`ug_surveys` functionality.

.. note::
   Playbooks should not use the ``pause`` feature of Ansible without a timeout, as AWX does not allow for interactively cancelling a pause. If you must use ``pause``, ensure that you set a timeout.

Jobs use the playbook directory as the current working directory, although they should be coded to use the ``playbook_dir`` variable rather than relying on this.
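
For example, the following tasks are a minimal sketch of both recommendations: the pause is bounded by a timeout, and a file is referenced relative to ``playbook_dir`` rather than the current working directory (the ``vars/extra_settings.yml`` path is illustrative):

.. code-block:: yaml

   - name: Wait for services to settle, but never indefinitely
     ansible.builtin.pause:
       seconds: 30

   - name: Load settings shipped alongside the playbook
     ansible.builtin.include_vars:
       file: "{{ playbook_dir }}/vars/extra_settings.yml"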

Use Dynamic Inventory Sources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: best practices; dynamic inventory sources

If you have an external source of truth for your infrastructure, whether it is a cloud provider or a local CMDB, it is best to define an inventory sync process and use the support for dynamic inventory (including cloud inventory sources). This ensures your inventory is always up to date.

.. include:: ../common/overwrite_var_note_2-4-0.rst

Variable Management for Inventory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: best practices; variable inventory management

Keeping variable data along with the hosts and groups definitions (see the inventory editor) is encouraged, rather than using ``group_vars/`` and ``host_vars/``. If you use dynamic inventory sources, AWX can sync such variables with the database as long as the **Overwrite Variables** option is not set.

Autoscaling
~~~~~~~~~~~~

.. index::
   pair: best practices; autoscaling

Using the "callback" feature to allow newly booting instances to request configuration is very useful for auto-scaling scenarios or provisioning integration.

Larger Host Counts
~~~~~~~~~~~~~~~~~~

.. index::
   pair: best practices; host counts, larger

Consider setting "forks" on a job template to larger values to increase parallelism of execution runs. For more information on tuning Ansible, see `the Ansible blog <http://www.ansible.com/blog/ansible-performance-tuning>`__.
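
The ``forks`` value can also be raised through the API rather than the UI. A minimal sketch, assuming job template ID 7 and a valid OAuth2 token (both placeholders):

::

    curl -X PATCH -H "Authorization: Bearer <oauth2-token-value>" \
         -H "Content-Type: application/json" \
         -d '{"forks": 50}' \
         https://awx.example.org/api/v2/job_templates/7/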

Continuous integration / Continuous Deployment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: best practices; integration, continuous
   pair: best practices; deployment, continuous

For a Continuous Integration system, such as Jenkins, to spawn a job, it should make a curl request to the job template's launch endpoint. The credentials attached to the job template should not require prompting for any particular passwords. Refer to the `CLI documentation`_ for configuration and usage instructions.

.. _`CLI documentation`: https://docs.ansible.com/automation-controller/latest/html/controllercli/usage.html
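
A minimal sketch of such a launch request, assuming job template ID 7 and an OAuth2 token with write scope (both placeholders):

::

    curl -X POST -H "Authorization: Bearer <oauth2-token-value>" \
         https://awx.example.org/api/v2/job_templates/7/launch/
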

358  docs/docsite/rst/userguide/credential_plugins.rst  Normal file
@@ -0,0 +1,358 @@
.. _ug_credential_plugins:

Secret Management System
=============================

.. index::
   single: credentials
   pair: credential; plugins
   pair: secret management; credential

Users and admins upload machine and cloud credentials so that automation can access machines and external services on their behalf. By default, sensitive credential values (such as SSH passwords, SSH private keys, and API tokens for cloud services) are stored in the database after being encrypted. With external credentials backed by credential plugins, you can map credential fields (like a password or an SSH private key) to values stored in a :term:`secret management system` instead of providing them to AWX directly. AWX provides integrations with the following secret management systems:

- Centrify Vault Credential Provider Lookup
- CyberArk Central Credential Provider Lookup (CCP)
- CyberArk Conjur Secrets Manager Lookup
- HashiCorp Vault Key-Value Store (KV)
- HashiCorp Vault SSH Secrets Engine
- Microsoft Azure Key Management System (KMS)
- Thycotic DevOps Secrets Vault
- Thycotic Secret Server

These external secret values will be fetched prior to running a playbook that needs them. For more information on specifying these credentials in the User Interface, see :ref:`ug_credentials`.

Configure and link secret lookups
-----------------------------------

When you configure AWX to pull a secret from a third-party system, you are in essence linking credential fields to external systems. To link a credential field to a value stored in an external system, select the external credential corresponding to that system and provide :term:`metadata` to look up the desired value. The metadata input fields are part of the :term:`external credential type` definition of the :term:`source credential`.

AWX provides a :term:`credential plugin` interface that gives developers, integrators, admins, and power-users the ability to add new external credential types, extending support to additional secret management systems. For more detail, see the `development docs for credential plugins`_.

.. _`development docs for credential plugins`: https://github.com/ansible/awx/blob/devel/docs/credentials/credential_plugins.md


Use the AWX User Interface to configure and use each of the supported third-party secret management systems.

1. First, create an external credential for authenticating with the secret management system. At minimum, provide a name for the external credential and select one of the following for the **Credential Type**:

.. contents::
   :local:

2. Navigate to the credential form of the target credential and link one or more input fields to the external credential along with metadata for locating the secret in the external system. In this example, the *Demo Credential* is the target credential.

.. _ag_credential_plugins_link_step:

3. For any of the fields below the **Type Details** area that you want to link to the external credential, click the |key| button of the input field. You are prompted to set the input source to use to retrieve your secret information.

.. |key| image:: ../common/images/key-mgmt-button.png

.. image:: ../common/images/credentials-link-credential-prompt.png

4. Select the credential you want to link to, and click **Next**. This takes you to the **Metadata** tab of the input source. This example shows the Metadata prompt for HashiVault Secret Lookup. Metadata is specific to the input source you select. See the :ref:`ug_metadata_creds_inputs` table for details.

.. image:: ../common/images/credentials-link-metadata-prompt.png

5. Click **Test** to verify the connection to the secret management system. If the lookup is unsuccessful, an error message like this one displays:

.. image:: ../common/images/credentials-link-metadata-test-error.png

6. When done, click **OK**. This closes the prompt window and returns you to the Details screen of your target credential. **Repeat these steps**, starting with :ref:`step 3 above <ag_credential_plugins_link_step>`, to complete the remaining input fields for the target credential. By linking the information in this manner, AWX retrieves sensitive information, such as usernames, passwords, keys, certificates, and tokens, from the third-party management systems and populates that data into the remaining fields of the target credential form.

7. If necessary, supply any information manually for those fields that do not use linking as a way of retrieving sensitive information. Refer to the appropriate :ref:`ug_credentials_cred_types` for more detail about each of the fields.

8. Click **Save** when done.
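
Linking can also be performed through the API via the credential input sources endpoint. The following is a minimal sketch, not a definitive recipe: it assumes a target credential with ID 5, a HashiCorp Vault lookup credential with ID 6, and metadata field names (``secret_path``, ``secret_key``) as used by the HashiCorp Vault KV plugin; verify the exact metadata keys against the Metadata prompt for your input source.

::

    curl -X POST -H "Authorization: Bearer <oauth2-token-value>" \
         -H "Content-Type: application/json" \
         -d '{"target_credential": 5, "source_credential": 6, "input_field_name": "password",
              "metadata": {"secret_path": "/path/username", "secret_key": "password"}}' \
         https://awx.example.org/api/v2/credential_input_sources/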

.. _ug_metadata_creds_inputs:

Metadata for credential input sources
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

**Centrify Vault Credential Provider Lookup**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Account Name (Required)
     - Name of the system account or domain associated with Centrify Vault.
   * - System Name
     - Specify the name used by the Centrify portal.

**CyberArk Central Credential Provider Lookup**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Object Query (Required)
     - Lookup query for the object.
   * - Object Query Format
     - Select ``Exact`` for a specific secret name, or ``Regexp`` for a secret that has a dynamically generated name.
   * - Object Property
     - Specifies the name of the property to return (e.g., ``UserName``, ``Address``, etc.) other than the default of ``Content``.
   * - Reason
     - If required per the object's policy, supply a reason for checking out the secret, as CyberArk logs those.

**CyberArk Conjur Secrets Lookup**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Secret Identifier
     - The identifier for the secret.
   * - Secret Version
     - Specify a version of the secret, if necessary; otherwise, leave it empty to use the latest version.

**HashiVault Secret Lookup**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Name of Secret Backend
     - Specify the name of the KV backend to use. Leave it blank to use the first path segment of the **Path to Secret** field instead.
   * - Path to Secret (required)
     - Specify the path to where the secret information is stored; for example, ``/path/username``.
   * - Key Name (required)
     - Specify the name of the key to look up the secret information.
   * - Secret Version (V2 Only)
     - Specify a version if necessary; otherwise, leave it empty to use the latest version.
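
To make the HashiVault metadata concrete, the sketch below shows how a secret written with the Vault CLI could map onto these fields, assuming a KV version 2 engine mounted at ``my_engine`` (all names and values are illustrative):

::

    # Store the secret in Vault
    vault secrets enable -path=my_engine kv-v2
    vault kv put my_engine/tower/deploy_user password='s3cr3t'

    # Corresponding lookup metadata in AWX:
    #   Name of Secret Backend: my_engine
    #   Path to Secret:         /tower/deploy_user
    #   Key Name:               password
    #   Secret Version:         (leave blank for the latest version)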

**HashiCorp Signed SSH**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Unsigned Public Key (required)
     - Specify the public key of the certificate you want to get signed. It needs to be present in the authorized keys file of the target host(s).
   * - Path to Secret (required)
     - Specify the path to where the secret information is stored; for example, ``/path/username``.
   * - Role Name (required)
     - A role is a collection of SSH settings and parameters that are stored in HashiCorp Vault. Typically, you can specify several of them with different privileges, timeouts, and so on. For example, you could have a role that is allowed to get a certificate signed for root, and other, less privileged ones.
   * - Valid Principals
     - Specify a user (or users), other than the default, for whom you are requesting Vault to authorize the certificate for the stored key. HashiCorp Vault has a default user for whom it signs (e.g., ec2-user).

**Azure KMS**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Secret Name (required)
     - The actual name of the secret as it is referenced in Azure's Key Vault app.
   * - Secret Version
     - Specify a version of the secret, if necessary; otherwise, leave it empty to use the latest version.

**Thycotic DevOps Secrets Vault**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Secret Path (required)
     - Specify the path to where the secret information is stored (e.g., /path/username).

**Thycotic Secret Server**

.. list-table::
   :widths: 25 50
   :header-rows: 1

   * - Metadata
     - Description
   * - Secret ID (required)
     - The identifier for the secret.
   * - Secret Field
     - Specify the field to be used from the secret.

.. _ug_credentials_centrify:

Centrify Vault Credential Provider Lookup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   pair: credential types; Centrify

You need the Centrify Vault web service running to store secrets in order for this integration to work. When **Centrify Vault Credential Provider Lookup** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Centrify Tenant URL** (required): provide the URL used for communicating with Centrify's secret management system
- **Centrify API User** (required): provide the username
- **Centrify API Password** (required): provide the password
- **OAuth2 Application ID**: specify the identifier associated with the OAuth2 client
- **OAuth2 Scope**: specify the scope of the OAuth2 client


Below is an example of a configured Centrify Vault Credential Provider Lookup credential.

.. image:: ../common/images/credentials-create-centrify-vault-credential.png


.. _ug_credentials_cyberarkccp:

CyberArk Central Credential Provider (CCP) Lookup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: CyberArk CCP
   pair: credential; CyberArk CCP

You need the CyberArk Central Credential Provider web service running to store secrets in order for this integration to work. When **CyberArk Central Credential Provider Lookup** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **CyberArk CCP URL** (required): provide the URL used for communicating with CyberArk CCP's secret management system; it must include the URL scheme (http, https, etc.)
- **Web Service ID**: optionally specify the identifier for the web service; leaving it blank defaults to AIMWebService
- **Application ID** (required): specify the identifier given by CyberArk CCP services
- **Client Key**: paste the client key if provided by CyberArk
- **Client Certificate**: include the ``BEGIN CERTIFICATE`` and ``END CERTIFICATE`` lines when pasting the certificate, if provided by CyberArk
- **Verify SSL Certificates**: this option is only available when the URL uses HTTPS. Check this option to verify that the server's SSL certificate is valid and trusted. Environments that use internal or private CAs should leave this option unchecked to disable verification.

Below is an example of a configured CyberArk CCP credential.

.. image:: ../common/images/credentials-create-cyberark-ccp-credential.png


.. _ug_credentials_cyberarkconjur:

CyberArk Conjur Secrets Manager Lookup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: CyberArk Conjur
   pair: credential; CyberArk Conjur

With a Conjur Cloud tenant available to target, configure the CyberArk Conjur Secrets Lookup external management system credential plugin as documented.

When **CyberArk Conjur Secrets Manager Lookup** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Conjur URL** (required): provide the URL used for communicating with CyberArk Conjur's secret management system; it must include the URL scheme (http, https, etc.)
- **API Key** (required): provide the key given by your Conjur admin
- **Account** (required): the organization's account name
- **Username** (required): the specific authenticated user for this service
- **Public Key Certificate**: include the ``BEGIN CERTIFICATE`` and ``END CERTIFICATE`` lines when pasting the public key, if provided by CyberArk

Below is an example of a configured CyberArk Conjur credential.

.. image:: ../common/images/credentials-create-cyberark-conjur-credential.png


.. _ug_credentials_hashivault:

HashiCorp Vault Secret Lookup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: HashiCorp Secret Lookup
   pair: credential; HashiCorp KV

When **HashiCorp Vault Secret Lookup** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Server URL** (required): provide the URL used for communicating with HashiCorp Vault's secret management system
- **Token**: specify the access token used to authenticate to HashiCorp's server
- **CA Certificate**: specify the CA certificate used to verify HashiCorp's server
- **Approle Role_ID**: specify the ID for Approle authentication
- **Approle Secret_ID**: specify the corresponding secret ID for Approle authentication
- **Path to Approle Auth**: specify a path if other than the default path of ``/approle``
- **API Version** (required): select v1 for static lookups and v2 for versioned lookups

For more detail about Approle and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_. Below is an example of a configured HashiCorp Vault Secret Lookup credential.

.. image:: ../common/images/credentials-create-hashicorp-kv-credential.png


.. _ug_credentials_hashivaultssh:

HashiCorp Vault Signed SSH
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: HashiCorp SSH Secrets Engine
   pair: credential; HashiCorp SSH Secrets Engine

When **HashiCorp Vault Signed SSH** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Server URL** (required): provide the URL used for communicating with HashiCorp Signed SSH's secret management system
- **Token**: specify the access token used to authenticate to HashiCorp's server
- **CA Certificate**: specify the CA certificate used to verify HashiCorp's server
- **Approle Role_ID**: specify the ID for Approle authentication
- **Approle Secret_ID**: specify the corresponding secret ID for Approle authentication
- **Path to Approle Auth**: specify a path if other than the default path of ``/approle``

For more detail about Approle and its fields, refer to the `Vault documentation for Approle Auth Method <https://www.vaultproject.io/docs/auth/approle>`_.

Below is an example of a configured HashiCorp SSH Secrets Engine credential.

.. image:: ../common/images/credentials-create-hashicorp-ssh-credential.png


.. _ug_credentials_azurekeyvault:

Microsoft Azure Key Vault
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: MS Azure KMS
   pair: credential; MS Azure KMS
   triple: credential; Azure; KMS

When **Microsoft Azure Key Vault** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Vault URL (DNS Name)** (required): provide the URL used for communicating with MS Azure's key management system
- **Client ID** (required): provide the identifier as obtained from Azure Active Directory
- **Client Secret** (required): provide the secret as obtained from Azure Active Directory
- **Tenant ID** (required): provide the unique identifier that is associated with an Azure Active Directory instance within an Azure subscription
- **Cloud Environment**: select the applicable cloud environment to apply

Below is an example of a configured Microsoft Azure KMS credential.

.. image:: ../common/images/credentials-create-azure-kms-credential.png


.. _ug_credentials_thycoticvault:

Thycotic DevOps Secrets Vault
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: Thycotic DevOps Secrets Vault
   pair: credential; Thycotic DevOps Secrets Vault

When **Thycotic DevOps Secrets Vault** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Tenant** (required): provide the URL used for communicating with Thycotic's secret management system
- **Top-level Domain (TLD)**: provide the top-level domain designation (e.g., com, edu, org) associated with the secret vault you want to integrate
- **Client ID** (required): provide the identifier as obtained from the Thycotic secret management system
- **Client Secret** (required): provide the secret as obtained from the Thycotic secret management system

Below is an example of a configured Thycotic DevOps Secrets Vault credential.

.. image:: ../common/images/credentials-create-thycotic-devops-credential.png


.. _ug_credentials_thycoticserver:

Thycotic Secret Server
^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
   single: Thycotic Secret Server
   pair: credential; Thycotic Secret Server

When **Thycotic Secret Server** is selected for **Credential Type**, provide the following metadata to properly configure your lookup:

- **Secret Server URL** (required): provide the URL used for communicating with the Thycotic Secret Server management system
- **Username** (required): specify the authenticated user for this service
- **Password** (required): provide the password associated with the user

Below is an example of a configured Thycotic Secret Server credential.

.. image:: ../common/images/credentials-create-thycotic-server-credential.png

321  docs/docsite/rst/userguide/credential_types.rst  Normal file
@@ -0,0 +1,321 @@
.. _ug_credential_types:

Custom Credential Types
==========================

.. index::
   single: credential types

As an administrator with superuser access, you can define a custom credential type in a standard format using a YAML/JSON-like definition, allowing the assignment of new credential types to jobs and inventory updates. This allows you to define a custom credential type that works in ways similar to existing credential types. For example, you could create a custom credential type that injects an API token for a third-party web service into an environment variable, which your playbook or custom inventory script could consume.

Custom credentials support the following ways of injecting their authentication information:

- Environment variables
- Ansible extra variables
- File-based templating (i.e., generating ``.ini`` or ``.conf`` files that contain credential values)

You can attach one SSH credential and multiple cloud credentials to a Job Template. Each cloud credential must be of a different type. In other words, only one AWS credential, one GCE credential, and so on, are allowed. Vault credentials and machine credentials are separate entities.

.. note::

   When creating a new credential type, you are responsible for avoiding collisions in the ``extra_vars``, ``env``, and file namespaces. Also, avoid environment variable or extra variable names that start with ``ANSIBLE_`` because they are reserved. You must have Superuser permissions to be able to create and edit a credential type (``CredentialType``) and to be able to view the ``CredentialType.injection`` field.


Content sourcing from collections
-----------------------------------

A "managed" credential type of ``kind=galaxy`` represents a content source for fetching collections defined in ``requirements.yml`` when project updates are run (e.g., galaxy.ansible.com, cloud.redhat.com, an on-premise Automation Hub). This credential type represents a URL and (optional) authentication details necessary to construct the environment variables used when a project update runs ``ansible-galaxy collection install``, as described in the Ansible documentation, `Configuring the ansible-galaxy client <https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#configuring-the-ansible-galaxy-client>`_. It has fields which map directly to the per-server configuration options exposed to the Ansible Galaxy CLI. An endpoint in the API reflects an ordered list of these credentials at the Organization level:

::

    /api/v2/organizations/N/galaxy_credentials/
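
For context, a minimal ``requirements.yml`` that such a content source would be used to satisfy might look like the following (the collection names and version constraint are only examples):

.. code-block:: yaml

   ---
   collections:
     - name: community.general
     - name: ansible.posix
       version: ">=1.5.0"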

When AWX is installed or upgraded, existing Galaxy-oriented setting values are migrated in such a way that, post-upgrade, proper credentials are created and attached to every Organization. After upgrading to the latest version, every organization that existed prior to the upgrade now has a list of (one or more) "Galaxy" credentials associated with it.

Additionally, post-upgrade, these settings are no longer visible (or editable) from the ``/api/v2/settings/jobs/`` endpoint.

AWX continues to fetch roles directly from public Galaxy even if galaxy.ansible.com is not the first credential in the list for the Organization. The global "Galaxy" settings are no longer configured at the jobs level, but at the Organization level in the User Interface. The Organization's Add and Edit windows have an optional **Credential** lookup field for credentials of ``kind=galaxy``.

.. image:: ../common/images/organizations-galaxy-credentials.png

It is very important to specify the order of these credentials, as order sets precedence for the sync and lookup of the content.
For more information, see :ref:`ug_organizations_create`.
For detail on how to set up a project using collections, see :ref:`ug_collections_usage`.


Backwards-Compatible API Considerations
-----------------------------------------

.. index::
   pair: credential types; API considerations

Support for version 2 of the API (``api/v2/``) means a one-to-many relationship for Job Templates to credentials (including multi-cloud support). Credentials can be filtered using the v2 API:

::

    $ curl "https://awx.example.org/api/v2/credentials/?credential_type__namespace=aws"


In the V2 CredentialType model, the relationships are defined as follows:

.. list-table::
   :widths: 25 50

   * - Machine
     - SSH
   * - Vault
     - Vault
   * - Network
     - Sets environment variables (e.g., ``ANSIBLE_NET_AUTHORIZE``)
   * - SCM
     - Source Control
   * - Cloud
     - EC2, AWS, and many others
   * - Insights
     - Insights
   * - Galaxy
     - galaxy.ansible.com, cloud.redhat.com, on-premise Automation Hub

.. _ug_content_verification:

Content verification
---------------------

AWX uses GNU Privacy Guard (GPG) to verify content. For more information, refer to `The GNU Privacy Handbook <https://www.gnupg.org/gph/en/manual/c14.html#:~:text=GnuPG%20uses%20public%2Dkey%20cryptography,the%20user%20wants%20to%20communicate>`_.
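
As a quick orientation, generating a keypair and exporting the public key (the value that is later added to AWX as a GPG credential) follows the standard GnuPG workflow. A minimal sketch with the ``gpg`` CLI; the email address is a placeholder, and :ref:`ug_content_signing` remains the authoritative procedure:

::

    # Generate a new keypair
    gpg --quick-generate-key "you@example.com"

    # Export the public key in ASCII-armored form
    gpg --armor --export you@example.com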


Getting Started with Credential Types
---------------------------------------

.. index::
   pair: credentials; getting started

Access the Credential Types page by clicking **Credential Types** from the left navigation bar. If no custom credential types have been created, the Credential Types view will not have any to display and will prompt you to add one:

|Credential Types - home empty|

.. |Credential Types - home empty| image:: ../common/images/credential-types-home-empty.png


If credential types have been created, this page displays a list of all existing and available Credential Types.

|Credential Types - home with example credential types|

.. |Credential Types - home with example credential types| image:: ../common/images/credential-types-home-with-example-types.png

To view more information about a credential type, click on its name or the Edit (|edit|) button from the **Actions** column.

.. |edit| image:: ../common/images/edit-button.png

Each credential type displays its own unique configurations in the **Input Configuration** field and the **Injector Configuration** field, if applicable. Both YAML and JSON formats are supported in the configuration fields.


Create a New Credential Type
--------------------------------

.. index::
   pair: credential types; creating new

To create a new credential type:

1. Click the **Add** button in the **Credential Types** screen.

|Create new credential type|

.. |Create new credential type| image:: ../common/images/credential-types-create-new.png

2. Enter the appropriate details in the **Name** and **Description** fields.

.. note::

   When creating a new credential type, do not use reserved variable names that start with ``ANSIBLE_`` for the **INPUT** and **INJECTOR** names and IDs, as they are invalid for custom credential types.

3. In the **Input Configuration** field, specify an input schema which defines a set of ordered fields for that type. The format can be in YAML or JSON, as shown:

**YAML**

.. code-block:: yaml

   fields:
     - type: string
       id: username
       label: Username
     - type: string
       id: password
       label: Password
       secret: true
   required:
     - username
     - password

**JSON**

.. code-block:: json

   {
     "fields": [
       {
         "type": "string",
         "id": "username",
         "label": "Username"
       },
       {
         "secret": true,
         "type": "string",
         "id": "password",
         "label": "Password"
       }
     ],
     "required": ["username", "password"]
   }

The configuration in JSON format below shows each field and how it is used:

.. code-block:: text

   {
     "fields": [{
       "id": "api_token",               # required - a unique name used to
                                        # reference the field value

       "label": "API Token",            # required - a unique label for the
                                        # field

       "help_text": "User-facing short text describing the field.",

       "type": ("string" | "boolean")   # defaults to 'string'

       "choices": ["A", "B", "C"]       # (only applicable to `type=string`)

       "format": "ssh_private_key"      # optional, can be used to enforce data
                                        # format validity for SSH private key
                                        # data (only applicable to `type=string`)

       "secret": true,                  # if true, the field value will be encrypted

       "multiline": false               # if true, the field should be rendered
                                        # as multi-line for input entry
                                        # (only applicable to `type=string`)
     },{
       # field 2...
     },{
       # field 3...
     }],

     "required": ["api_token"]          # optional; one or more fields can be marked as required
   },

When ``type=string``, fields can optionally specify multiple choice options:

.. code-block:: text

   {
     "fields": [{
       "id": "api_token",      # required - a unique name used to reference the field value
       "label": "API Token",   # required - a unique label for the field
       "type": "string",
       "choices": ["A", "B", "C"]
     }]
   },


4. In the **Injector Configuration** field, enter environment variables or extra variables that specify the values a credential type can inject. The format can be in YAML or JSON (see examples in the previous step). The configuration in JSON format below shows each field and how it is used:

.. code-block:: json

   {
     "file": {
       "template": "[mycloud]\ntoken={{ api_token }}"
     },
     "env": {
       "THIRD_PARTY_CLOUD_API_TOKEN": "{{ api_token }}"
     },
     "extra_vars": {
       "some_extra_var": "{{ username }}:{{ password }}"
     }
   }

Credential Types can also generate temporary files to support .ini files or certificate/key data:

.. code-block:: json

   {
     "file": {
       "template": "[mycloud]\ntoken={{ api_token }}"
     },
     "env": {
       "MY_CLOUD_INI_FILE": "{{ awx.filename }}"
     }
   }

In this example, AWX will write a temporary file that contains:

.. code-block:: text

   [mycloud]
   token=SOME_TOKEN_VALUE

The absolute file path to the generated file will be stored in an environment variable named ``MY_CLOUD_INI_FILE``.
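
To illustrate how injected values are consumed, a playbook task can read the environment variable or the generated file path at run time. A minimal sketch whose variable names follow the injector examples above:

.. code-block:: yaml

   - name: Read the injected token from the environment
     ansible.builtin.set_fact:
       cloud_api_token: "{{ lookup('ansible.builtin.env', 'THIRD_PARTY_CLOUD_API_TOKEN') }}"

   - name: Show where the generated .ini file was written
     ansible.builtin.debug:
       msg: "Credential file is at {{ lookup('ansible.builtin.env', 'MY_CLOUD_INI_FILE') }}"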

An example of referencing multiple files in a custom credential template is as follows:

**Inputs**

.. code-block:: json

   {
     "fields": [{
       "id": "cert",
       "label": "Certificate",
       "type": "string"
     },{
       "id": "key",
       "label": "Key",
       "type": "string"
     }]
   }

**Injectors**

.. code-block:: json

   {
     "file": {
       "template.cert_file": "[mycert]\n{{ cert }}",
       "template.key_file": "[mykey]\n{{ key }}"
     },
     "env": {
       "MY_CERT_INI_FILE": "{{ awx.filename.cert_file }}",
       "MY_KEY_INI_FILE": "{{ awx.filename.key_file }}"
     }
   }


5. Click **Save** when done.

6. Scroll down to the bottom of the screen and your newly created credential type appears in the list of credential types:

|New credential type|

.. |New credential type| image:: ../common/images/credential-types-new-listed.png

Click |edit| to modify the credential type options under the **Actions** column.

.. note::

   In the Edit screen, you can modify the details or delete the credential type. If the **Delete** button is grayed out, it indicates that the credential type is being used by a credential, and you must remove the credential type from all the credentials that use it before you can delete it. Below is an example of such a message:

.. image:: ../common/images/credential-types-delete-confirmation.png


7. Verify that the newly created credential type can be selected from the **Credential Type** selection window when creating a new credential:

|Verify new credential type|

.. |Verify new credential type| image:: ../common/images/credential-types-new-listed-verify.png

For details on how to create a new credential, see :ref:`ug_credentials`.

684  docs/docsite/rst/userguide/credentials.rst  Normal file
@@ -0,0 +1,684 @@
.. _ug_credentials:

Credentials
===============

.. index::
   single: credentials

Credentials are utilized for authentication when launching Jobs against machines, synchronizing with inventory sources, and importing project content from a version control system.

You can grant users and teams the ability to use these credentials without actually exposing the credential to the user. If a user moves to a different team or leaves the organization, you don't have to re-key all of your systems just because that credential was available in AWX.

.. note::

   AWX encrypts passwords and key information in the database and never makes secret information visible via the API. See :ref:`ag_secret_handling` in the |ata| for details.

.. _how_credentials_work:

Understanding How Credentials Work
------------------------------------

.. index::
   pair: credentials; how they work

AWX uses SSH to connect to remote hosts (or the Windows equivalent). In order to pass the key from AWX to SSH, the key must be decrypted before it can be written to a named pipe. AWX then uses that pipe to send the key to SSH (so that it is never written to disk).

If passwords are used, AWX handles those by responding directly to the password prompt and decrypting the password before writing it to the prompt.

.. (commented out from note above, add back once api guide is extended again) Refer to :ref:`Unified Job List API Endpoint <unified_job_list>` in the |atapi| for more information.


Getting Started with Credentials
----------------------------------

.. index::
   pair: credentials; getting started

Click **Credentials** from the left navigation bar to access the Credentials page. The Credentials page displays a searchable list of all available Credentials and can be sorted by **Name**.

|Credentials - home with example credentials|

.. |Credentials - home with example credentials| image:: ../common/images/credentials-demo-edit-details.png

Credentials added to a Team are made available to all members of the Team, whereas credentials added to a User are only available to that specific User by default.

To help you get started, a Demo Credential has been created for your use.

Clicking on the link for the **Demo Credential** takes you to the **Details** view of this Credential.

|Credentials - home with demo credential details|

.. |Credentials - home with demo credential details| image:: ../common/images/credentials-home-with-demo-credential-details.png


Clicking the **Access** tab shows you users and teams associated with this Credential and their granted roles (owner, admin, auditor, etc.)


|Credentials - home with permissions credential details|

.. |Credentials - home with permissions credential details| image:: ../common/images/credentials-home-with-permissions-detail.png

.. note::

   A credential with roles associated will retain them even after the credential has been reassigned to another organization.

You can click the **Add** button to assign this **Demo Credential** to additional users. If no users exist, add them from the **Users** menu and refer to the :ref:`ug_users` section for further detail.


Clicking the **Job Templates** tab shows you the job templates associated with this Credential and which jobs recently ran using this particular credential.


.. image:: ../common/images/credentials-home-with-jt-detail.png

You can click the **Add** button to assign this **Demo Credential** to additional job templates. Refer to the :ref:`ug_JobTemplates` section for further detail on creating a new job template.

.. _ug_credentials_add:

Add a New Credential
----------------------

.. index::
   pair: credentials; adding new

To create a new credential:

1. Click the **Add** button from the **Credentials** screen.

|Create credential|

.. |Create credential| image:: ../common/images/credentials-create-credential.png

2. Enter the name for your new credential in the **Name** field.

3. Optionally enter a description and enter or select the name of the organization with which the credential is associated.

.. note::

   A credential with a set of permissions associated with one organization will remain even after the credential is reassigned to another organization.


4. Enter or select the credential type you want to create.

.. image:: ../common/images/credential-types-drop-down-menu.png

5. Enter the appropriate details depending on the type of credential selected, as described in the next section, :ref:`ug_credentials_cred_types`.

6. Click **Save** when done.
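
Credentials can also be created through the API. The following is a minimal sketch rather than a definitive recipe: it assumes the Machine credential type has ID 1 in your installation, and the organization ID, name, and input values are placeholders:

::

    curl -X POST -H "Authorization: Bearer <oauth2-token-value>" \
         -H "Content-Type: application/json" \
         -d '{"name": "Example Machine Credential", "credential_type": 1, "organization": 1,
              "inputs": {"username": "admin", "password": "example-password"}}' \
         https://awx.example.org/api/v2/credentials/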
|
||||
|
||||
|
||||
.. _ug_credentials_cred_types:
|
||||
|
||||
Credential Types
|
||||
-----------------
|
||||
|
||||
.. index::
|
||||
single: credentials; types
|
||||
single: credential types
|
||||
|
||||
The following credential types are supported with AWX:
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
The credential types associated with Centrify, CyberArk, HashiCorp Vault, Microsoft Azure Key Management System (KMS), and Thycotic are part of the credential plugins capability that allows an external system to lookup your secrets information. See the :ref:`ug_credential_plugins` section for further detail.
|
||||
|
||||
|
||||
.. _ug_credentials_aws:
|
||||
|
||||
Amazon Web Services
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; Amazon Web Services
|
||||
|
||||
Selecting this credential type enables synchronization of cloud inventory with Amazon Web Services.
|
||||
|
||||
AWX uses the following environment variables for AWS credentials and are fields prompted in the user interface:
|
||||
|
||||
::
|
||||
|
||||
AWS_ACCESS_KEY_ID
|
||||
AWS_SECRET_ACCESS_KEY
|
||||
AWS_SECURITY_TOKEN
|
||||
|
||||
|Credentials - create AWS credential|
|
||||
|
||||
.. |Credentials - create AWS credential| image:: ../common/images/credentials-create-aws-credential.png
|
||||
|
||||
Traditional Amazon Web Services credentials consist of the AWS **Access Key** and **Secret Key**.
|
||||
|
||||
AWX provides support for EC2 STS tokens (sometimes referred to as IAM STS credentials). Security Token Service (STS) is a web service that enables you to request temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users. To learn more about the IAM/EC2 STS Token, refer to: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
|
||||
|
||||
.. note::
|
||||
|
||||
If the value of your tags in EC2 contain booleans (yes/no/true/false), you must remember to quote them.
|
||||
|
||||
.. warning::
|
||||
|
||||
To use implicit IAM role credentials, do not attach AWS cloud credentials in AWX when relying on IAM roles to access the AWS API. While it may seem to make sense to attach your AWS cloud credential to your job template, doing so will force the use of your AWS credentials and will not "fall through" to use your IAM role credentials (this is due to the use of the boto library.)
|
||||
|
||||
|
||||
Ansible Galaxy/Automation Hub API Token
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
.. index::
|
||||
pair: credential types; Galaxy
|
||||
pair: credential types; Automation Hub
|
||||
|
||||
Selecting this credential allows AWX to access Galaxy or use a collection published on a local |ah|. See :ref:`ug_collections_usage` for detail. Entering the Galaxy server URL is the only required value on this screen.
|
||||
|
||||
|Credentials - create galaxy credential|
|
||||
|
||||
.. |Credentials - create galaxy credential| image:: ../common/images/credentials-create-galaxy-credential.png
|
||||
|
||||
To populate the **Galaxy Server URL** and the **Auth Server URL** fields, look for the corresponding fields of the |ah| section of the `Red Hat Hybrid Cloud Console <https://console.redhat.com/ansible/automation-hub/token>`_ labeled **Server URL** and **SSO URL**, respectively.
|
||||
|
||||
.. image:: ../common/images/hub-console-tokens-page.png
|
||||
|
||||
|
||||
Centrify Vault Credential Provider Lookup
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_centrify` for more detail.
|
||||
|
||||
|
||||
Container Registry
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
.. index::
|
||||
pair: credential types; Container Registry
|
||||
|
||||
Selecting this credential allows AWX to access a collection of container images. See `What is a container registry? <https://www.redhat.com/en/topics/cloud-native-apps/what-is-a-container-registry>`_ for more information.
|
||||
|
||||
Aside from specifying a name, the **Authentication URL** is the only required field on this screen, and it is already pre-populated with a default value. You may change this default by specifying the authentication endpoint for a different container registry.
|
||||
|
||||
|Credentials - create container credential|
|
||||
|
||||
.. |Credentials - create container credential| image:: ../common/images/credentials-create-container-credential.png
|
||||
|
||||
|
||||
CyberArk Central Credential Provider Lookup
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_cyberarkccp` for more detail.
|
||||
|
||||
|
||||
CyberArk Conjur Secrets Manager Lookup
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_cyberarkconjur` for more detail.
|
||||
|
||||
|
||||
.. _ug_credentials_github:
|
||||
|
||||
GitHub Personal Access Token
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
.. index::
|
||||
pair: credential types; GitHub PAT
|
||||
|
||||
Selecting this credential allows you to access GitHub using a Personal Access Token (PAT), which is obtained through GitHub. See :ref:`ug_webhooks` for detail. Entering the provided token is the only required value in this screen.
|
||||
|
||||
|Credentials - create GitHub credential|
|
||||
|
||||
.. |Credentials - create GitHub credential| image:: ../common/images/credentials-create-webhook-github-credential.png
|
||||
|
||||
GitHub PAT credentials require a value in the **Token** field, which is provided in your GitHub profile settings.
|
||||
|
||||
This credential can be used for establishing an API connection to GitHub for use in webhook listener jobs, to post status updates.
|
||||
|
||||
|
||||
.. _ug_credentials_gitlab:
|
||||
|
||||
GitLab Personal Access Token
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
.. index::
|
||||
pair: credential types; GitLab PAT
|
||||
|
||||
Selecting this credential allows you to access GitLab using a Personal Access Token (PAT), which is obtained through GitLab. See :ref:`ug_webhooks` for detail. Entering the provided token is the only required value in this screen.
|
||||
|
||||
|Credentials - create GitLab credential|
|
||||
|
||||
.. |Credentials - create GitLab credential| image:: ../common/images/credentials-create-webhook-gitlab-credential.png
|
||||
|
||||
GitLab PAT credentials require a value in the **Token** field, which is provided in your GitLab profile settings.
|
||||
|
||||
This credential can be used for establishing an API connection to GitLab for use in webhook listener jobs, to post status updates.
|
||||
|
||||
|
||||
Google Compute Engine
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; Google Compute Engine
|
||||
|
||||
Selecting this credential type enables synchronization of cloud inventory with Google Compute Engine (GCE).
|
||||
|
||||
AWX uses the following environment variables for GCE credentials and are fields prompted in the user interface:
|
||||
|
||||
::
|
||||
|
||||
GCE_EMAIL
|
||||
GCE_PROJECT
|
||||
GCE_CREDENTIALS_FILE_PATH
|
||||
|
||||
|Credentials - create GCE credential|
|
||||
|
||||
.. |Credentials - create GCE credential| image:: ../common/images/credentials-create-gce-credential.png
|
||||
|
||||
GCE credentials have the following inputs:
|
||||
|
||||
- **Service Account Email Address**: The email address assigned to the Google Compute Engine **service account**.
|
||||
- **Project**: Optionally provide the GCE assigned identification or the unique project ID you provided at project creation time.
|
||||
- **Service Account JSON File**: Optionally upload a GCE service account file. Use the folder (|file-browser|) icon to browse for the file that contains the special account information that can be used by services and applications running on your GCE instance to interact with other Google Cloud Platform APIs. This grants permissions to the service account and virtual machine instances.
|
||||
- **RSA Private Key**: The PEM file associated with the service account email.
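For reference, these fields map onto the options of Ansible's ``google.cloud.gcp_compute`` inventory plugin, which handles GCE inventory data. Below is a minimal, hedged sketch of a standalone plugin configuration with placeholder project and file names; it only illustrates the mapping and is not AWX's exact generated configuration.

::

   # inventory.gcp.yml -- hypothetical standalone inventory plugin configuration
   plugin: google.cloud.gcp_compute
   projects:
     - my-gce-project                                     # "Project" field
   auth_kind: serviceaccount
   service_account_file: /path/to/service-account.json    # "Service Account JSON File" field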
|
||||
|
||||
.. |file-browser| image:: ../common/images/file-browser-button.png
|
||||
|
||||
|
||||
GPG Public Key
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; GPG public key
|
||||
|
||||
Selecting this credential type allows you to create a credential that gives AWX the ability to verify the integrity of the project when syncing from source control.
|
||||
|
||||
|Credentials - create GPG credential|
|
||||
|
||||
.. |Credentials - create GPG credential| image:: ../common/images/credentials-create-gpg-credential.png
|
||||
|
||||
See :ref:`ug_content_signing` for detailed information on how to generate a valid keypair, use the CLI tool to sign content, and how to add the public key to AWX.
|
||||
|
||||
|
||||
HashiCorp Vault Secret Lookup
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_hashivault` for more detail.
|
||||
|
||||
|
||||
HashiCorp Vault Signed SSH
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_hashivaultssh` for more detail.
|
||||
|
||||
|
||||
Insights
|
||||
^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; insights
|
||||
|
||||
Selecting this credential type enables synchronization of cloud inventory with Red Hat Insights.
|
||||
|
||||
|Credentials - create Insights credential|
|
||||
|
||||
.. |Credentials - create Insights credential| image:: ../common/images/credentials-create-insights-credential.png
|
||||
|
||||
Insights credentials consist of the Insights **Username** and **Password**, which are the user's Red Hat Customer Portal Account username and password.
|
||||
|
||||
|
||||
Machine
|
||||
^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; machine
|
||||
|
||||
|
||||
Machine credentials enable AWX to invoke Ansible on hosts under your management. Just like using Ansible on the command line, you can specify the SSH username, optionally provide a password, an SSH key, a key password, or even have AWX prompt the user for their password at deployment time. They define ssh and user-level privilege escalation access for playbooks, and are used when submitting jobs to run playbooks on a remote host. Network connections (``httpapi``, ``netconf``, and ``network_cli``) use **Machine** for the credential type.
|
||||
|
||||
Machine/SSH credentials do not use environment variables. Instead, they pass the username via the ``ansible -u`` flag, and interactively write the SSH password when the underlying SSH client prompts for it.
|
||||
|
||||
|Credentials - create machine credential|
|
||||
|
||||
.. |Credentials - create machine credential| image:: ../common/images/credentials-create-machine-credential.png
|
||||
|
||||
Machine credentials have several attributes that may be configured:
|
||||
|
||||
- **Username**: The username to be used for SSH authentication.
|
||||
- **Password**: The actual password to be used for SSH authentication. This password will be stored encrypted in the database, if entered. Alternatively, you can configure AWX to ask the user for the password at launch time by selecting **Prompt on launch**. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation.
|
||||
- **SSH Private Key**: Copy or drag-and-drop the SSH private key for the machine credential.
|
||||
- **Private Key Passphrase**: If the SSH Private Key used is protected by a password, you can configure a Key Password for the private key. This password will be stored encrypted in the database, if entered. Alternatively, you can configure AWX to ask the user for the password at launch time by selecting **Prompt on launch**. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation.
|
||||
- **Privilege Escalation Method**: Specifies the type of escalation privilege to assign to specific users. This is equivalent to specifying the ``--become-method=BECOME_METHOD`` parameter, where ``BECOME_METHOD`` could be any of the typical methods described below, or a custom method you've written. Begin entering the name of the method, and the appropriate name auto-populates.
|
||||
|
||||
.. image:: ../common/images/credentials-create-machine-credential-priv-escalation.png
|
||||
|
||||
|
||||
- empty selection: If a task/play has ``become`` set to ``yes`` and is used with an empty selection, then it will default to ``sudo``
|
||||
- **sudo**: Performs single commands with super user (root user) privileges
|
||||
- **su**: Switches to the super user (root user) account (or to other user accounts)
|
||||
- **pbrun**: Requests that an application or command be run in a controlled account and provides for advanced root privilege delegation and keylogging
|
||||
- **pfexec**: Executes commands with predefined process attributes, such as specific user or group IDs
|
||||
- **dzdo**: An enhanced version of sudo that uses RBAC information in Centrify's Active Directory service (see Centrify's `site on DZDO <http://community.centrify.com/t5/Centrify-Server-Suite/FAQ-What-is-DirectAuthorize-dzdo-dzwin/td-p/21193>`_)
|
||||
- **pmrun**: Requests that an application is run in a controlled account (refer to `Privilege Manager for Unix 6.0 <http://documents.software.dell.com/privilege-manager-for-unix/6.0/administrators-guide/privilege-manager-programs/pmrun>`_)
|
||||
- **runas**: Allows you to run as the current user
|
||||
- **enable**: Switches to elevated permissions on a network device
|
||||
- **doas**: Allows your remote/login user to execute commands as another user via the doas ("Do as user") utility
|
||||
- **ksu**: Allows your remote/login user to execute commands as another user via Kerberos access
|
||||
- **machinectl**: Allows you to manage containers via the systemd machine manager
|
||||
- **sesu**: Allows your remote/login user to execute commands as another user via the CA Privileged Access Manager
|
||||
|
||||
.. note::
|
||||
Custom ``become`` plugins are available only starting with Ansible 2.8. For more detail on this concept, refer to `Understanding Privilege Escalation <https://docs.ansible.com/ansible/latest/user_guide/become.html>`_ and the `list of become plugins <https://docs.ansible.com/ansible/latest/plugins/become.html#plugin-list>`_.
|
||||
|
||||
- **Privilege Escalation Username**: This field is only seen if an option for privilege escalation is selected. Enter the username to use with escalation privileges on the remote system.
|
||||
- **Privilege Escalation Password**: This field is only seen if an option for privilege escalation is selected. Enter the actual password to be used to authenticate the user via the selected privilege escalation type on the remote system. This password will be stored encrypted in the database, if entered. Alternatively, you may configure AWX to ask the user for the password at launch time by selecting **Prompt on launch**. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation.
|
||||
|
||||
.. note::
|
||||
Sudo Password must be used in combination with SSH passwords or SSH Private Keys, since AWX must first establish an authenticated SSH connection with the host prior to invoking sudo to change to the sudo user.
|
||||
|
||||
.. warning::
|
||||
Credentials which are used in *Scheduled Jobs* must not be configured as "**Prompt on launch**".
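In playbook terms, the privilege escalation method selected on a Machine credential corresponds to Ansible's ``become`` keywords (the equivalent of the ``--become-method`` flag noted above). The following is a minimal, hedged sketch; the group, service, and method names are placeholders, and it simply shows the playbook-level equivalent of the credential's escalation settings.

::

   ---
   - name: Restart a service with privilege escalation
     hosts: webservers            # placeholder inventory group
     become: yes
     become_method: sudo          # equivalent to the method chosen on the credential
     tasks:
       - name: Restart nginx
         ansible.builtin.service:
           name: nginx
           state: restarted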
|
||||
|
||||
|
||||
Microsoft Azure Key Vault
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_azurekeyvault` for more detail.
|
||||
|
||||
|
||||
Microsoft Azure Resource Manager
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; Microsoft Azure Resource Manager
|
||||
|
||||
Selecting this credential type enables synchronization of cloud inventory with Microsoft Azure Resource Manager.
|
||||
|
||||
|Credentials - create Azure credential|
|
||||
|
||||
.. |Credentials - create Azure credential| image:: ../common/images/credentials-create-azure-credential.png
|
||||
|
||||
Microsoft Azure Resource Manager credentials have several attributes that may be configured:
|
||||
|
||||
- **Subscription ID**: The Subscription UUID for the Microsoft Azure account (required).
|
||||
- **Username**: The username to use to connect to the Microsoft Azure account.
|
||||
- **Password**: The password to use to connect to the Microsoft Azure account.
|
||||
- **Client ID**: The Client ID for the Microsoft Azure account.
|
||||
- **Client Secret**: The Client Secret for the Microsoft Azure account.
|
||||
- **Tenant ID**: The Tenant ID for the Microsoft Azure account.
|
||||
- **Azure Cloud Environment**: The variable associated with Azure cloud or Azure stack environments.
|
||||
|
||||
These fields are equivalent to the variables in the API. To pass service principal credentials, define the following variables::
|
||||
|
||||
AZURE_CLIENT_ID
|
||||
AZURE_SECRET
|
||||
AZURE_SUBSCRIPTION_ID
|
||||
AZURE_TENANT
|
||||
AZURE_CLOUD_ENVIRONMENT
|
||||
|
||||
To pass an Active Directory username/password pair, define the following variables::
|
||||
|
||||
AZURE_AD_USER
|
||||
AZURE_PASSWORD
|
||||
AZURE_SUBSCRIPTION_ID
|
||||
|
||||
|
||||
You can also pass credentials as parameters to a task within a playbook. The order of precedence is parameters, then environment variables, and finally a file found in your home directory.
|
||||
|
||||
To pass credentials as parameters to a task, use the following parameters for service principal credentials::
|
||||
|
||||
client_id
|
||||
secret
|
||||
subscription_id
|
||||
tenant
|
||||
azure_cloud_environment
|
||||
|
||||
Or, pass the following parameters for Active Directory username/password::
|
||||
|
||||
ad_user
|
||||
password
|
||||
subscription_id
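As an illustration of the task-parameter approach described above, the following is a hedged sketch using a module from the ``azure.azcollection`` collection; the module choice, resource group name, and variable names are placeholders, and secrets should come from encrypted variables rather than plain text.

::

   - name: Look up a resource group using explicit service principal parameters
     azure.azcollection.azure_rm_resourcegroup_info:
       name: my-resource-group                        # placeholder
       client_id: "{{ my_azure_client_id }}"
       secret: "{{ my_azure_secret }}"
       subscription_id: "{{ my_azure_subscription_id }}"
       tenant: "{{ my_azure_tenant }}"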
|
||||
|
||||
|
||||
Network
|
||||
^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; network
|
||||
|
||||
Select the Network credential type **only** if you are using a ``local`` connection with ``provider`` to use Ansible networking modules to connect to and manage networking devices. When connecting to network devices, the credential type must match the connection type:
|
||||
|
||||
- For ``local`` connections using ``provider``, credential type should be **Network**
|
||||
- For all other network connections (``httpapi``, ``netconf``, and ``network_cli``), credential type should be **Machine**
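For the second case above, the connection details typically come from inventory variables while the **Machine** credential supplies the username and password. The following is a hedged sketch of such group variables; the platform and escalation settings are placeholders and depend on the devices you manage.

::

   # group_vars/routers.yml -- hypothetical inventory variables
   ansible_connection: ansible.netcommon.network_cli
   ansible_network_os: cisco.ios.ios     # set to the platform in use
   ansible_become: yes
   ansible_become_method: enable         # matches the "enable" escalation method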
|
||||
|
||||
For an overview of connection types available for network devices, refer to `Multiple Communication Protocols`_.
|
||||
|
||||
.. _`Multiple Communication Protocols`: https://docs.ansible.com/ansible/devel/network/getting_started/network_differences.html#multiple-communication-protocols
|
||||
|
||||
AWX uses the following environment variables for Network credentials, which correspond to fields prompted for in the user interface:
|
||||
|
||||
::
|
||||
|
||||
ANSIBLE_NET_USERNAME
|
||||
ANSIBLE_NET_PASSWORD
|
||||
|
||||
|Credentials - create network credential|
|
||||
|
||||
.. |Credentials - create network credential| image:: ../common/images/credentials-create-network-credential.png
|
||||
|
||||
|
||||
Network credentials have several attributes that may be configured:
|
||||
|
||||
- **Username**: The username to use in conjunction with the network device (required).
|
||||
- **Password**: The password to use in conjunction with the network device.
|
||||
- **SSH Private Key**: Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the network via SSH.
|
||||
- **Private Key Passphrase**: The actual passphrase for the private key to be used to authenticate the user to the network via SSH.
|
||||
- **Authorize**: Select this from the Options field to control whether or not to enter privileged mode.
|
||||
- If **Authorize** is checked, enter a password in the **Authorize Password** field to access privileged mode.
|
||||
|
||||
For more information, refer to the *Inside Playbook* blog, `Porting Ansible Network Playbooks with New Connection Plugins`_.
|
||||
|
||||
.. _`Porting Ansible Network Playbooks with New Connection Plugins`: https://www.ansible.com/blog/porting-ansible-network-playbooks-with-new-connection-plugins
|
||||
|
||||
.. _ug_credentials_ocp_k8s:
|
||||
|
||||
OpenShift or Kubernetes API Bearer Token
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; OpenShift
|
||||
pair: credential types; Kubernetes
|
||||
pair: credential types; API bearer token
|
||||
|
||||
|
||||
Selecting this credential type allows you to create instance groups that point to a Kubernetes or OpenShift container. For more information about this concept, refer to :ref:`ag_ext_exe_env`.
|
||||
|
||||
|Credentials - create Containers credential|
|
||||
|
||||
.. |Credentials - create Containers credential| image:: ../common/images/credentials-create-containers-credential.png
|
||||
|
||||
Container credentials have the following inputs:
|
||||
|
||||
- **OpenShift or Kubernetes API Endpoint** (required): the endpoint to be used to connect to an OpenShift or Kubernetes container
|
||||
- **API Authentication Bearer Token** (required): The token to use to authenticate the connection
|
||||
- **Verify SSL**: Optionally check this option to verify that the server's SSL certificate is valid and trusted. Environments that use internal or private CAs should leave this option unchecked to disable verification.
|
||||
- **Certificate Authority Data**: include the ``BEGIN CERTIFICATE`` and ``END CERTIFICATE`` lines when pasting the certificate, if provided
|
||||
|
||||
|
||||
.. include:: ../common/get-creds-from-service-account.rst
|
||||
|
||||
|
||||
OpenStack
|
||||
^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; OpenStack
|
||||
|
||||
Selecting this credential type enables synchronization of cloud inventory with OpenStack.
|
||||
|
||||
|Credentials - create OpenStack credential|
|
||||
|
||||
.. |Credentials - create OpenStack credential| image:: ../common/images/credentials-create-openstack-credential.png
|
||||
|
||||
OpenStack credentials have the following inputs:
|
||||
|
||||
- **Username**: The username to use to connect to OpenStack.
|
||||
- **Password (API Key)**: The password or API key to use to connect to OpenStack.
|
||||
- **Host (Authentication URL)**: The host to be used for authentication.
|
||||
- **Project (Tenant Name)**: The Tenant name or Tenant ID used for OpenStack. This value is usually the same as the username.
|
||||
- **Project (Domain Name)**: Optionally provide the project name associated with your domain.
|
||||
- **Domain name**: Optionally provide the FQDN to be used to connect to OpenStack.
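These fields roughly correspond to the standard ``clouds.yaml`` structure used by the OpenStack SDK and Ansible's OpenStack content. The following is a hedged sketch with placeholder values; the field-to-key mapping is approximate and shown only for orientation.

::

   # clouds.yaml -- hypothetical standalone equivalent of the credential fields
   clouds:
     mycloud:
       auth:
         auth_url: https://openstack.example.com:5000/v3   # Host (Authentication URL)
         username: demo                                    # Username
         password: secret                                  # Password (API Key)
         project_name: demo                                # Project (Tenant Name)
         project_domain_name: Default                      # Project (Domain Name)
         user_domain_name: Default                         # Domain Name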
|
||||
|
||||
If you are interested in using OpenStack Cloud Credentials, refer to :ref:`ug_CloudCredentials` in this guide for more information, including a sample playbook.
|
||||
|
||||
|
||||
Red Hat Ansible Automation Platform
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; automation platform
|
||||
|
||||
Selecting this credential allows you to access a Red Hat Ansible Automation Platform instance.
|
||||
|
||||
.. image:: ../common/images/credentials-create-at-credential.png
|
||||
|
||||
The Red Hat Ansible Automation Platform credentials have the following inputs that are required:
|
||||
|
||||
- **Red Hat Ansible Automation Platform**: The base URL or IP address of the other instance to connect to.
|
||||
- **Username**: The username to use to connect to it.
|
||||
- **Password**: The password to use to connect to it.
|
||||
- **OAuth Token**: If a username and password are not used, provide an OAuth token to use to authenticate.
|
||||
|
||||
|
||||
Red Hat Satellite 6
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; Red Hat Satellite
|
||||
|
||||
Selecting this credential type enables synchronization of cloud inventory with Red Hat Satellite 6.
|
||||
|
||||
AWX writes a Satellite configuration file based on fields prompted in the user interface. The absolute path to the file is set in the following environment variable:
|
||||
|
||||
::
|
||||
|
||||
FOREMAN_INI_PATH
|
||||
|
||||
|Credentials - create Red Hat Satellite 6 credential|
|
||||
|
||||
.. |Credentials - create Red Hat Satellite 6 credential| image:: ../common/images/credentials-create-rh-sat-credential.png
|
||||
|
||||
|
||||
Satellite credentials have the following inputs that are required:
|
||||
|
||||
- **Satellite 6 URL**: The Satellite 6 URL or IP address to connect to.
|
||||
- **Username**: The username to use to connect to Satellite 6.
|
||||
- **Password**: The password to use to connect to Satellite 6.
|
||||
|
||||
|
||||
Red Hat Virtualization
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; oVirt
|
||||
pair: credential types; rhv
|
||||
pair: credential types; Red Hat Virtualization
|
||||
|
||||
This credential allows AWX to access Ansible's ``oVirt4.py`` dynamic inventory plugin, which is managed by Red Hat Virtualization (RHV).
|
||||
|
||||
AWX uses the following environment variables for Red Hat Virtualization credentials, which correspond to fields in the user interface:
|
||||
|
||||
::
|
||||
|
||||
OVIRT_URL
|
||||
OVIRT_USERNAME
|
||||
OVIRT_PASSWORD
|
||||
|
||||
|Credentials - create rhv credential|
|
||||
|
||||
.. |Credentials - create rhv credential| image:: ../common/images/credentials-create-rhv-credential.png
|
||||
|
||||
RHV credentials have the following inputs:
|
||||
|
||||
- **Host (Authentication URL)**: The host URL or IP address to connect to. In order to sync with the inventory, the credential URL needs to include the ``ovirt-engine/api`` path.
|
||||
- **Username**: The username to use to connect to oVirt4. This needs to include the domain profile to succeed, for example ``username@ovirt.host.com``.
|
||||
- **Password**: The password to use to connect to it.
|
||||
- **CA File**: Optionally provide an absolute path to the oVirt certificate file (it may end in ``.pem``, ``.cer`` and ``.crt`` extensions, but preferably ``.pem`` for consistency)
|
||||
|
||||
|
||||
Source Control
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; source control
|
||||
|
||||
SCM (source control) credentials are used with Projects to clone and update local source code repositories from a remote revision control system such as Git or Subversion.
|
||||
|
||||
|Credentials - create SCM credential|
|
||||
|
||||
.. |Credentials - create SCM credential| image:: ../common/images/credentials-create-scm-credential.png
|
||||
|
||||
|
||||
Source Control credentials have several attributes that may be configured:
|
||||
|
||||
- **Username**: The username to use in conjunction with the source control system.
|
||||
- **Password**: The password to use in conjunction with the source control system.
|
||||
- **SCM Private Key**: Copy or drag-and-drop the actual SSH Private Key to be used to authenticate the user to the source control system via SSH.
|
||||
- **Private Key Passphrase**: If the SSH Private Key used is protected by a passphrase, you may configure a Key Passphrase for the private key.
|
||||
|
||||
.. note::
|
||||
|
||||
Source Control credentials cannot be configured as "**Prompt on launch**".
|
||||
If you are using a GitHub account for a Source Control credential and you have 2FA (Two-Factor Authentication) enabled on your account, you will need to use your Personal Access Token in the password field rather than your account password.
|
||||
|
||||
|
||||
Thycotic DevOps Secrets Vault
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_thycoticvault` for more detail.
|
||||
|
||||
|
||||
Thycotic Secret Server
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
This is considered part of the secret management capability. See :ref:`ug_credentials_thycoticserver` for more detail.
|
||||
|
||||
|
||||
Vault
|
||||
^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; Vault
|
||||
|
||||
Selecting this credential type enables synchronization of inventory with Ansible Vault.
|
||||
|
||||
|Credentials - create Vault credential|
|
||||
|
||||
.. |Credentials - create Vault credential| image:: ../common/images/credentials-create-vault-credential.png
|
||||
|
||||
|
||||
Vault credentials require the **Vault Password** and an optional **Vault Identifier** if applying multi-Vault credentialing. For more information on AWX Multi-Vault support, refer to the :ref:`ag_multi_vault` section of the |ata|.
|
||||
|
||||
You may configure AWX to ask the user for the password at launch time by selecting **Prompt on launch**. In these cases, a dialog opens when the job is launched, prompting the user to enter the password and password confirmation.
|
||||
|
||||
.. warning::
|
||||
|
||||
Credentials which are used in *Scheduled Jobs* must not be configured as "**Prompt on launch**".
|
||||
|
||||
For more information about Ansible Vault, refer to: http://docs.ansible.com/ansible/playbooks_vault.html
|
||||
|
||||
|
||||
VMware vCenter
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credential types; VMware
|
||||
|
||||
Selecting this credential type enables synchronization of inventory with VMware vCenter.
|
||||
|
||||
AWX uses the following environment variables for VMware vCenter credentials, which correspond to fields prompted for in the user interface:
|
||||
|
||||
::
|
||||
|
||||
VMWARE_HOST
|
||||
VMWARE_USER
|
||||
VMWARE_PASSWORD
|
||||
VMWARE_VALIDATE_CERTS
|
||||
|
||||
|
||||
|Credentials - create VMware credential|
|
||||
|
||||
.. |Credentials - create VMware credential| image:: ../common/images/credentials-create-vmware-credential.png
|
||||
|
||||
VMware credentials have the following inputs that are required:
|
||||
|
||||
- **vCenter Host**: The vCenter hostname or IP address to connect to.
|
||||
- **Username**: The username to use to connect to vCenter.
|
||||
- **Password**: The password to use to connect to vCenter.
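For reference, these fields map onto the options of the ``community.vmware.vmware_vm_inventory`` plugin, which handles vCenter inventory data. The following is a minimal, hedged sketch of a standalone plugin configuration; the hostname, account, and variable names are placeholders, and secrets should not be hard-coded.

::

   # inventory.vmware.yml -- hypothetical standalone inventory plugin configuration
   plugin: community.vmware.vmware_vm_inventory
   hostname: vcenter.example.com            # vCenter Host
   username: svc-awx@vsphere.local          # Username
   password: "{{ vault_vcenter_password }}" # Password
   validate_certs: false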
|
||||
|
||||
.. note::
|
||||
|
||||
If the VMware guest tools are not running on the instance, VMware inventory sync may not return an IP address for that instance.
|
||||
|
||||
318
docs/docsite/rst/userguide/ee_reference.rst
Normal file
@@ -0,0 +1,318 @@
|
||||
|
||||
Execution Environment Setup Reference
|
||||
=======================================
|
||||
|
||||
This section contains reference information associated with the definition of an |ee|.
|
||||
|
||||
You define the content of your execution environment in a YAML file. By default, this file is called ``execution-environment.yml``. This file tells Ansible Builder how to create the build instruction file (Containerfile for Podman, Dockerfile for Docker) and build context for your container image.
|
||||
|
||||
.. note::
|
||||
|
||||
This page documents the definition schema for Ansible Builder 3.x. If you are running an older version of Ansible Builder, you need an older schema version. Please consult older versions of the docs for more information. We recommend using version 3, which offers substantially more configurable options and functionality than previous versions.
|
||||
|
||||
.. _ref_ee_definition:
|
||||
|
||||
Execution environment definition
|
||||
---------------------------------
|
||||
|
||||
A definition file is a ``.yml`` file that is required to build an image for an |ee|. Below is a sample version 3 |ee| definition schema file. To use Ansible Builder 3.x, you must specify the schema version. If your |ee| file does not specify ``version: 3``, Ansible Builder will assume you want version 1.
|
||||
|
||||
::
|
||||
|
||||
---
|
||||
version: 3
|
||||
|
||||
build_arg_defaults:
|
||||
ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre'
|
||||
|
||||
dependencies:
|
||||
galaxy: requirements.yml
|
||||
python:
|
||||
- six
|
||||
- psutil
|
||||
system: bindep.txt
|
||||
|
||||
images:
|
||||
base_image:
|
||||
name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest
|
||||
|
||||
additional_build_files:
|
||||
- src: files/ansible.cfg
|
||||
dest: configs
|
||||
|
||||
additional_build_steps:
|
||||
prepend_galaxy:
|
||||
- ADD _build/configs/ansible.cfg ~/.ansible.cfg
|
||||
|
||||
prepend_final: |
|
||||
RUN whoami
|
||||
RUN cat /etc/os-release
|
||||
append_final:
|
||||
- RUN echo This is a post-install command!
|
||||
- RUN ls -la /etc
|
||||
|
||||
|
||||
Configuration options
|
||||
----------------------
|
||||
|
||||
You may use the configuration YAML keys listed here in your v3 |ee| definition file. The Ansible Builder 3.x execution environment definition file accepts seven top-level sections:
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
additional_build_files
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Specifies files to be added to the build context directory. These can then be referenced or copied by ``additional_build_steps`` during any build stage. The format is a list of dictionary values, each with a ``src`` and ``dest`` key and value.
|
||||
|
||||
Each list item must be a dictionary containing the following (non-optional) keys:
|
||||
|
||||
**src**
|
||||
Specifies the source file(s) to copy into the build context directory. This may either be an absolute path (e.g., ``/home/user/.ansible.cfg``), or a path that is relative to the |ee| file. Relative paths may be a glob expression matching one or more files (e.g. ``files/*.cfg``). Note that an absolute path may **not** include a regular expression. If ``src`` is a directory, the entire contents of that directory are copied to ``dest``.
|
||||
|
||||
**dest**
|
||||
Specifies a subdirectory path underneath the ``_build`` subdirectory of the build context directory that should contain the source file(s) (e.g., ``files/configs``). This may not be an absolute path or contain ``..`` within the path. This directory will be created for you if it does not exist.
|
||||
|
||||
.. note::
|
||||
When using an ``ansible.cfg`` file to pass a token and other settings for a private account to an |ah| server, listing the config file path here (as a string) will enable it to be included as a build argument in the initial phase of the build.
|
||||
|
||||
|
||||
additional_build_steps
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Specifies custom build commands for any build phase. These commands will be inserted directly into the build instruction file for the container runtime (e.g., Containerfile or Dockerfile). The commands must conform to any rules required by the containerization tool.
|
||||
|
||||
You can add build steps before or after any stage of the image creation process. For example, if you need ``git`` to be installed before you install your dependencies, you can add a build step at the end of the base build stage.
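Continuing the ``git`` example, a hedged sketch of such a step is shown below. This assumes the generated build instruction file exposes the package manager path as ``$PKGMGR``; if it does not in your version, substitute ``dnf`` or ``microdnf`` directly.

::

   additional_build_steps:
     append_base:
       - RUN $PKGMGR install -y git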
|
||||
|
||||
Below are the valid keys for this section. Each supports either a multi-line string, or a list of strings.
|
||||
|
||||
**prepend_base**
|
||||
Commands to insert before building of the base image.
|
||||
|
||||
**append_base**
|
||||
Commands to insert after building of the base image.
|
||||
|
||||
**prepend_galaxy**
|
||||
Commands to insert before building of the galaxy image.
|
||||
|
||||
**append_galaxy**
|
||||
Commands to insert after building of the galaxy image.
|
||||
|
||||
**prepend_builder**
|
||||
Commands to insert before building of the builder image.
|
||||
|
||||
**append_builder**
|
||||
Commands to insert after building of the builder image.
|
||||
|
||||
**prepend_final**
|
||||
Commands to insert before building of the final image.
|
||||
|
||||
**append_final**
|
||||
Commands to insert after building of the final image.
|
||||
|
||||
|
||||
build_arg_defaults
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
Specifies default values for build args as a dictionary. This is an alternative to using the ``--build-arg`` CLI flag.
|
||||
|
||||
Build arguments used by ``ansible-builder`` are the following:
|
||||
|
||||
**ANSIBLE_GALAXY_CLI_COLLECTION_OPTS**
|
||||
Allows the user to pass the ``--pre`` flag (or others) to enable the installation of pre-release collections.
|
||||
|
||||
**ANSIBLE_GALAXY_CLI_ROLE_OPTS**
|
||||
This allows the user to pass any flags, such as ``--no-deps``, to the role installation.
|
||||
|
||||
**PKGMGR_PRESERVE_CACHE**
|
||||
This controls how often the package manager cache is cleared during the image build process. If this value is not set, which is the default, the cache is cleared frequently. If it is set to the string ``always``, the cache is never cleared. Any other value forces the cache to be cleared only after the system dependencies are installed in the final build stage.
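A hedged example that sets defaults for all three build arguments described above:

::

   build_arg_defaults:
     ANSIBLE_GALAXY_CLI_COLLECTION_OPTS: '--pre'
     ANSIBLE_GALAXY_CLI_ROLE_OPTS: '--no-deps'
     PKGMGR_PRESERVE_CACHE: 'always'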
|
||||
|
||||
Ansible Builder hard-codes values given inside of ``build_arg_defaults`` into the build instruction file, so they will persist if you run your container build manually.
|
||||
|
||||
If you specify the same variable in the |ee| definition and at the command line with the CLI ``--build-arg`` flag, the CLI value will take higher precedence (the CLI value will override the value in the |ee| definition).
|
||||
|
||||
.. _ref_collections_metadata:
|
||||
|
||||
dependencies
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
Specifies dependencies to install into the final image, including ``ansible-core``, ``ansible-runner``, Python packages, system packages, and Ansible Collections. Ansible Builder automatically installs dependencies for any Ansible Collections you install.
|
||||
|
||||
In general, you can use standard syntax to constrain package versions. Use the same syntax you would pass to ``dnf``, ``pip``, ``ansible-galaxy``, or any other package management utility. You can also define your packages or collections in separate files and reference those files in the ``dependencies`` section of your |ee| definition file.
|
||||
|
||||
The following keys are valid for this section:
|
||||
|
||||
**ansible_core**
|
||||
The version of the ``ansible-core`` Python package to be installed. This value is a dictionary with a single key, ``package_pip``. The ``package_pip`` value is passed directly to pip for installation and can be in any format that pip supports. Below are some example values:
|
||||
::
|
||||
|
||||
ansible_core:
|
||||
package_pip: ansible-core
|
||||
ansible_core:
|
||||
package_pip: ansible-core==2.14.3
|
||||
ansible_core:
|
||||
package_pip: https://github.com/example_user/ansible/archive/refs/heads/ansible.tar.gz
|
||||
|
||||
**ansible_runner**
|
||||
The version of the Ansible Runner Python package to be installed. This value is a dictionary with a single key, ``package_pip``. The ``package_pip`` value is passed directly to pip for installation and can be in any format that pip supports. Below are some example values:
|
||||
::
|
||||
|
||||
ansible_runner:
|
||||
package_pip: ansible-runner
|
||||
ansible_runner:
|
||||
package_pip: ansible-runner==2.3.2
|
||||
ansible_runner:
|
||||
package_pip: https://github.com/example_user/ansible-runner/archive/refs/heads/ansible-runner.tar.gz
|
||||
|
||||
|
||||
**galaxy**
|
||||
Ansible Collections to be installed from Galaxy. This may be a filename, a dictionary, or a multi-line string representation of an Ansible Galaxy ``requirements.yml`` file (see below for examples). Read more about the requirements file format in the `Galaxy user guide <https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#install-multiple-collections-with-a-requirements-file>`_.
|
||||
|
||||
**python**
|
||||
The Python installation requirements. This may either be a filename, or a list of requirements (see below for an example). Ansible Builder combines all the Python requirements files from all collections into a single file using the ``requirements-parser`` library. This library supports complex syntax, including references to other files. If multiple collections require the same *package name*, Ansible Builder combines them into a single entry and combines the constraints. Certain package names are specifically *ignored* by ``ansible-builder``, meaning that Ansible Builder does not include them in the combined file of Python dependencies, even if a collection lists them as dependencies. These include test packages and packages that provide Ansible itself. The full list can be found in ``EXCLUDE_REQUIREMENTS`` in ``src/ansible_builder/_target_scripts/introspect.py``. If you need to include one of these ignored package names, use the ``--user-pip`` option of the ``introspect`` command to list it in the user requirements file. Packages supplied this way are not processed against the list of excluded Python packages.
|
||||
|
||||
**python_interpreter**
|
||||
A dictionary that defines the Python system package name to be installed by dnf (``package_system``) and/or a path to the Python interpreter to be used (``python_path``).
|
||||
|
||||
**system**
|
||||
The system packages to be installed, in bindep format. This may either be a filename, or a list of requirements (see below for an example). For more information about bindep, refer to the `OpenDev documentation <https://docs.opendev.org/opendev/bindep/latest/readme.html>`_.
|
||||
For system packages, use the ``bindep`` format to specify cross-platform requirements, so they can be installed by whichever package management system the execution environment uses. Collections should specify necessary requirements for ``[platform:rpm]``. Ansible Builder combines system package entries from multiple collections into a single file. Only requirements with *no* profiles (runtime requirements) are installed to the image. Entries from multiple collections which are outright duplicates of each other may be consolidated in the combined file.
|
||||
|
||||
The following example uses filenames that contain the various dependencies:
|
||||
|
||||
::
|
||||
|
||||
dependencies:
|
||||
python: requirements.txt
|
||||
system: bindep.txt
|
||||
galaxy: requirements.yml
|
||||
ansible_core:
|
||||
package_pip: ansible-core==2.14.2
|
||||
ansible_runner:
|
||||
package_pip: ansible-runner==2.3.1
|
||||
python_interpreter:
|
||||
package_system: "python310"
|
||||
python_path: "/usr/bin/python3.10"
|
||||
|
||||
And this example uses inline values:
|
||||
|
||||
::
|
||||
|
||||
dependencies:
|
||||
python:
|
||||
- pywinrm
|
||||
system:
|
||||
- iputils [platform:rpm]
|
||||
galaxy:
|
||||
collections:
|
||||
- name: community.windows
|
||||
- name: ansible.utils
|
||||
version: 2.10.1
|
||||
ansible_core:
|
||||
package_pip: ansible-core==2.14.2
|
||||
ansible_runner:
|
||||
package_pip: ansible-runner==2.3.1
|
||||
python_interpreter:
|
||||
package_system: "python310"
|
||||
python_path: "/usr/bin/python3.10"
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
If any of these dependency files (``requirements.txt``, ``bindep.txt``, and ``requirements.yml``) are in the ``build_ignore`` of the collection, the build will not work correctly.
|
||||
|
||||
Collection maintainers can verify that ansible-builder recognizes the requirements they expect by using the ``introspect`` command, for example:
|
||||
|
||||
::
|
||||
|
||||
ansible-builder introspect --sanitize ~/.ansible/collections/
|
||||
|
||||
The ``--sanitize`` option reviews all of the collection requirements and removes duplicates. It also removes any Python requirements that should normally be excluded. Use the ``-v3`` option to ``introspect`` to see logging messages about requirements that are being excluded.
|
||||
|
||||
|
||||
images
|
||||
~~~~~~~
|
||||
|
||||
Specifies the base image to be used. At a minimum you **MUST** specify a source, image, and tag for the base image. The base image provides the operating system and may also provide some packages. We recommend using the standard ``host/namespace/container:tag`` syntax to specify images. You may use Podman or Docker shortcut syntax instead, but the full definition is more reliable and portable.
|
||||
|
||||
Valid keys for this section are:
|
||||
|
||||
**base_image**
|
||||
A dictionary defining the parent image for the execution environment. A ``name`` key must be supplied with the container image to use. Use the ``signature_original_name`` key if the image is mirrored within your repository, but signed with the original image's signature key.
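A hedged sketch of this section is shown below; the image names are illustrative only.

::

   images:
     base_image:
       # the mirrored copy that is actually pulled
       name: registry.example.com/mirrors/awx-ee:latest
       # the image it was signed as, per the description above
       signature_original_name: quay.io/ansible/awx-ee:latest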
|
||||
|
||||
image verification
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
You can verify signed container images if you are using the ``podman`` container runtime. Set the ``container-policy`` CLI option to control how this data is used in relation to a Podman ``policy.json`` file for container image signature validation.
|
||||
|
||||
- ``ignore_all`` policy: Generate a ``policy.json`` file in the build context directory where no signature validation is performed.
|
||||
- ``system`` policy: Signature validation is performed using pre-existing ``policy.json`` files in standard system locations. ``ansible-builder`` assumes no responsibility for the content within these files, and the user has complete control over the content.
|
||||
- ``signature_required`` policy: ``ansible-builder`` will use the container image definitions here to generate a ``policy.json`` file in the build context directory that will be used during the build to validate the images.
|
||||
|
||||
|
||||
options
|
||||
~~~~~~~~
|
||||
|
||||
A dictionary of keywords/options that can affect builder runtime functionality. Valid keys for this section are:
|
||||
|
||||
**container_init**
|
||||
A dictionary with keys that allow for customization of the container ``ENTRYPOINT`` and ``CMD`` directives (and related behaviors). Customizing these behaviors is an advanced task, and may result in subtle, difficult-to-debug failures. As the provided defaults for this section control a number of intertwined behaviors, overriding any value will skip all remaining defaults in this dictionary. Valid keys are:
|
||||
|
||||
**cmd**
|
||||
Literal value for the ``CMD`` Containerfile directive. The default value is ``["bash"]``.
|
||||
|
||||
**entrypoint**
|
||||
Literal value for the ``ENTRYPOINT`` Containerfile directive. The default entrypoint behavior handles signal propagation to subprocesses, as well as attempting to ensure at runtime that the container user has a proper environment with a valid writeable home directory, represented in ``/etc/passwd``, with the ``HOME`` environment variable set to match. The default entrypoint script may emit warnings to ``stderr`` in cases where it is unable to suitably adjust the user runtime environment. This behavior can be ignored or elevated to a fatal error; consult the source for the ``entrypoint`` target script for more details. The default value is ``["/opt/builder/bin/entrypoint", "dumb-init"]``.
|
||||
|
||||
**package_pip**
|
||||
Package to install via pip for entrypoint support. This package will be installed in the final build image. The default value is ``dumb-init==1.2.5``.
|
||||
|
||||
**package_manager_path**
|
||||
A string with the path to the package manager (dnf or microdnf) to use. The default is ``/usr/bin/dnf``. This value will be used to install a Python interpreter, if specified in ``dependencies``, and during the build phase by the ``assemble`` script.
|
||||
|
||||
**skip_ansible_check**
|
||||
This boolean value controls whether or not the check for an installation of Ansible and Ansible Runner is performed on the final image. Set this value to ``True`` to not perform this check. The default is ``False``.
|
||||
|
||||
**relax_passwd_permissions**
|
||||
This boolean value controls whether the ``root`` group (GID 0) is explicitly granted write permission to ``/etc/passwd`` in the final container image. The default entrypoint script may attempt to update ``/etc/passwd`` under some container runtimes with dynamically created users to ensure a fully-functional POSIX user environment and home directory. Disabling this capability can cause failures of software features that require users to be listed in ``/etc/passwd`` with a valid and writeable home directory (eg, ``async`` in ansible-core, and the ``~username`` shell expansion). The default is ``True``.
|
||||
|
||||
**workdir**
|
||||
Default current working directory for new processes started under the final container image. Some container runtimes also use this value as ``HOME`` for dynamically-created users in the ``root`` (GID 0) group. When this value is specified, the directory will be created (if it doesn't already exist), set to ``root`` group ownership, and ``rwx`` group permissions recursively applied to it. The default value is ``/runner``.
|
||||
|
||||
**user**
|
||||
This sets the username or UID to use as the default user for the final container image. The default value is ``1000``.
|
||||
|
||||
Example options section:
|
||||
::
|
||||
|
||||
options:
|
||||
container_init:
|
||||
package_pip: dumb-init>=1.2.5
|
||||
entrypoint: '["dumb-init"]'
|
||||
cmd: '["csh"]'
|
||||
package_manager_path: /usr/bin/microdnf
|
||||
relax_passwd_permissions: false
|
||||
skip_ansible_check: true
|
||||
workdir: /myworkdir
|
||||
user: bob
|
||||
|
||||
|
||||
version
|
||||
~~~~~~~~
|
||||
|
||||
An integer value that sets the schema version of the execution environment definition file. Defaults to ``1``. Must be ``3`` if you are using Ansible Builder 3.x.
|
||||
|
||||
|
||||
Default execution environment for AWX
|
||||
--------------------------------------
|
||||
|
||||
The example in ``test/data/pytz`` requires the ``awx.awx`` collection in the |ee| definition. The lookup plugin ``awx.awx.tower_schedule_rrule`` requires the PyPI ``pytz`` and another library to work. If the ``test/data/pytz/execution-environment.yml`` file is provided to the ``ansible-builder build`` command, then it will install the collection inside the image, read the ``requirements.txt`` file inside of the collection, and then install ``pytz`` into the image.
|
||||
|
||||
The image produced can be used inside of an ``ansible-runner`` project by placing these variables inside the ``env/settings`` file, inside of the private data directory.
|
||||
|
||||
::
|
||||
|
||||
---
|
||||
container_image: image-name
|
||||
process_isolation_executable: podman # or docker
|
||||
process_isolation: true
|
||||
|
||||
The ``awx.awx`` collection is a subset of content included in the default AWX |ee|. More details can be found in the `awx-ee repository <https://github.com/ansible/awx-ee>`_.
|
||||
182
docs/docsite/rst/userguide/execution_environments.rst
Normal file
@@ -0,0 +1,182 @@
|
||||
.. _ug_execution_environments:
|
||||
|
||||
Execution Environments
|
||||
======================
|
||||
|
||||
.. index::
|
||||
single: execution environment
|
||||
pair: add; execution environment
|
||||
pair: jobs; add execution environment
|
||||
|
||||
|
||||
.. include:: ../common/execution_environs.rst
|
||||
|
||||
.. _ug_build_ees:
|
||||
|
||||
Building an Execution Environment
|
||||
---------------------------------
|
||||
|
||||
.. index::
|
||||
single: execution environment
|
||||
pair: build; execution environment
|
||||
|
||||
|
||||
Using Ansible content that depends on non-default dependencies (custom virtual environments) can be tricky. Packages must be installed on each node, play nicely with other software installed on the host system, and be kept in sync. Previously, jobs ran inside of a virtual environment at ``/var/lib/awx/venv/ansible`` by default, which was pre-loaded with dependencies for ansible-runner and certain types of Ansible content used by the Ansible control machine.
|
||||
|
||||
To help simplify this process, container images can be built that serve as Ansible `control nodes <https://docs.ansible.com/ansible/latest/network/getting_started/basic_concepts.html#control-node>`_. These container images are referred to as automation |ees|; you can create them with ansible-builder, and ansible-runner can then make use of those images.
|
||||
|
||||
Install ansible-builder
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
In order to build images, an installation of either Podman or Docker is required, along with the ansible-builder Python package. The ``--container-runtime`` option needs to correspond to the Podman or Docker executable you intend to use.
|
||||
|
||||
Refer to the latest `Quickstart for Ansible Builder <https://ansible.readthedocs.io/projects/builder/en/latest/#quickstart-for-ansible-builder>`_ for detail.
|
||||
|
||||
.. _build_ee:
|
||||
|
||||
Build an execution environment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Ansible-builder is used to create an |ee|.
|
||||
|
||||
An |ee| is expected to contain:
|
||||
|
||||
- Ansible
|
||||
- Ansible Runner
|
||||
- Ansible Collections
|
||||
- Python and/or system dependencies of:
|
||||
|
||||
- modules/plugins in collections
|
||||
- content in ansible-base
|
||||
- custom user needs
|
||||
|
||||
Building a new |ee| involves a definition (a ``.yml`` file) that specifies which content you would like to include in your |ee|, such as collections, Python requirements, and system-level packages. The output generated when migrating to |ees| contains some of the required data, which can be piped to a file or pasted into this definition file.
|
||||
|
||||
|
||||
Run the builder
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
Once you have created a definition, use this procedure to build your |ee|.
|
||||
|
||||
The ``ansible-builder build`` command takes an |ee| definition as an input. It outputs the build context necessary for building an |ee| image, and proceeds with building that image. The image can be re-built with the build context elsewhere, and produces the same result. By default, it looks for a file named ``execution-environment.yml`` in the current directory.
|
||||
|
||||
For illustration purposes, the following example ``execution-environment.yml`` file is used as a starting point:
|
||||
|
||||
::
|
||||
|
||||
---
|
||||
version: 3
|
||||
dependencies:
|
||||
galaxy: requirements.yml
|
||||
|
||||
The content of ``requirements.yml``:
|
||||
|
||||
::
|
||||
|
||||
---
|
||||
collections:
|
||||
- name: awx.awx
|
||||
|
||||
To build an |ee| using the files above, run:
|
||||
|
||||
::
|
||||
|
||||
$ ansible-builder build
|
||||
...
|
||||
STEP 7: COMMIT my-awx-ee
|
||||
--> 09c930f5f6a
|
||||
09c930f5f6ac329b7ddb321b144a029dbbfcc83bdfc77103968b7f6cdfc7bea2
|
||||
Complete! The build context can be found at: context
|
||||
|
||||
In addition to producing a ready-to-use container image, ``ansible-builder`` preserves the build context, which can be used to rebuild the image at a different time and/or location with the tooling of your choice, such as ``docker build`` or ``podman build``.
|
||||
|
||||
For additional information about the ``ansible-builder build`` command, refer to Ansible's `CLI Usage <https://ansible.readthedocs.io/projects/builder/en/latest/usage/#cli-usage>`_ documentation.
|
||||
|
||||
Use an execution environment in jobs
|
||||
------------------------------------
|
||||
|
||||
In order to use an |ee| in a job, a few components are required:
|
||||
|
||||
- An |ee| must have been created using |ab|. See :ref:`build_ee` for detail. Once an |ee| is created, you can use it to run jobs. Use the AWX user interface to specify the |ee| to use in your job templates.
|
||||
|
||||
- Depending on whether an |ee| is made available for global use or tied to an organization, you must have the appropriate level of administrator privileges in order to use an |ee| in a job. |Ees| tied to an organization require Organization administrators to be able to run jobs with those |ees|.
|
||||
|
||||
- Before running a job or job template that uses an |ee| that has a credential assigned to it, be sure that the credential contains a username, host, and password.
|
||||
|
||||
1. Click **Execution Environments** from the left navigation bar of the AWX user interface.
|
||||
|
||||
2. Add an |ee| by selecting the **Add** button.
|
||||
|
||||
3. Enter the appropriate details into the following fields:
|
||||
|
||||
- **Name**: Enter a name for the |ee| (required).
|
||||
- **Image**: Enter the image name (required). The image name requires its full location (repo), the registry, image name, and version tag, in the format ``repo/project/image-name:tag``; for example, ``quay.io/ansible/awx-ee:latest``.
|
||||
- **Pull**: optionally choose the type of pull when running jobs:
|
||||
|
||||
- **Always pull container before running**: Pulls the latest image file for the container.
|
||||
- **Only pull the image if not present before running**: Only pulls the image if it is not already present before running.
|
||||
- **Never pull container before running**: Never pull the latest version of the container image.
|
||||
|
||||
- **Description**: optional.
|
||||
- **Organization**: optionally assign the organization to specifically use this |ee|. To make the |ee| available for use across multiple organizations, leave this field blank.
|
||||
- **Registry credential**: If the image has a protected container registry, provide the credential to access it.
|
||||
|
||||
.. image:: ../common/images/ee-new-ee-form-filled.png
|
||||
|
||||
4. Click **Save**.
|
||||
|
||||
Now your newly added |ee| is ready to be used in a job template. To add an |ee| to a job template, specify it in the **Execution Environment** field of the job template, as shown in the example below. For more information on setting up a job template, see :ref:`ug_JobTemplates` in the |atu|.
|
||||
|
||||
.. image:: ../common/images/job-template-with-example-ee-selected.png
|
||||
|
||||
Once you have added an |ee| to a job template, you can see those templates listed in the **Templates** tab of the |ee|:
|
||||
|
||||
.. image:: ../common/images/ee-details-templates-list.png
|
||||
|
||||
|
||||
Execution environment mount options
|
||||
-----------------------------------
|
||||
|
||||
.. index::
|
||||
pair: mount options; execution environment
|
||||
pair: system trust store; execution environment
|
||||
|
||||
.. https://github.com/ansible/product-docs/issues/1647 and https://github.com/ansible/awx/issues/10787
|
||||
|
||||
Rebuilding an |ee| is one way to add certs, but inheriting certs from the host provides a more convenient solution.
|
||||
|
||||
Additionally, you may customize |ee| mount options and mount paths in the **Paths to expose to isolated jobs** field of the Job Settings page, where it supports podman-style volume mount syntax. Refer to the `Podman documentation <https://docs.podman.io/en/latest/markdown/podman-run.1.html#volume-v-source-volume-host-dir-container-dir-options>`_ for detail.
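For example, the host's CA trust bundle can be exposed read-only to job containers. The following is a hedged sketch of that setting as it might be supplied through the API or a settings file, assuming the UI field is backed by the ``AWX_ISOLATION_SHOW_PATHS`` setting; the paths are illustrative.

::

   AWX_ISOLATION_SHOW_PATHS:
     - /etc/pki/ca-trust:/etc/pki/ca-trust:O   # host CA trust, mounted read-only
     - /usr/share/pki:/usr/share/pki:O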
|
||||
|
||||
In some cases where the ``/etc/ssh/*`` files were added to the |ee| image due to customization of an |ee|, an SSH error may occur. For example, exposing the ``/etc/ssh/ssh_config.d:/etc/ssh/ssh_config.d:O`` path allows the container to be mounted, but the ownership permissions are not mapped correctly.
|
||||
|
||||
If you encounter this error, or have upgraded from an older version of AWX, perform the following steps:
|
||||
|
||||
1. Change the container ownership on the mounted volume to ``root``.
|
||||
|
||||
2. In the **Paths to expose to isolated jobs** field of the Job Settings page, using the current example, expose the path as such:
|
||||
|
||||
.. image:: ../common/images/settings-paths2expose-iso-jobs.png
|
||||
|
||||
.. note::
|
||||
|
||||
The ``:O`` option is only supported for directories. It is highly recommended that you be as specific as possible, especially when specifying system paths. Mounting ``/etc`` or ``/usr`` directly has impacts that make it difficult to troubleshoot.
|
||||
|
||||
This informs podman to run a command similar to the example below, where the configuration is mounted and the ``ssh`` command works as expected.
|
||||
|
||||
::
|
||||
|
||||
podman run -v /ssh_config:/etc/ssh/ssh_config.d/:O ...
|
||||
|
||||
|
||||
.. https://github.com/ansible/awx/issues/11600
|
||||
|
||||
To expose isolated paths in OpenShift or Kubernetes containers as HostPath, assume the following configuration:
|
||||
|
||||
.. image:: ../common/images/settings-paths2expose-iso-jobs-mount-containers.png
|
||||
|
||||
Use the **Expose host paths for Container Groups** toggle to enable it.
|
||||
|
||||
Once the playbook runs, the resulting Pod spec will display similar to the example below. Note the details of the ``volumeMounts`` and ``volumes`` sections.
|
||||
|
||||
.. image:: ../common/images/mount-containers-playbook-run-podspec.png
|
||||
|
||||
175
docs/docsite/rst/userguide/glossary.rst
Normal file
@@ -0,0 +1,175 @@
|
||||
Glossary
|
||||
========
|
||||
|
||||
.. index::
|
||||
single: glossary
|
||||
|
||||
.. glossary::
|
||||
|
||||
Ad Hoc
|
||||
Refers to running Ansible to perform some quick command, using /usr/bin/ansible, rather than the orchestration language, which is /usr/bin/ansible-playbook. An example of an ad hoc command might be rebooting 50 machines in your infrastructure. Anything you can do ad hoc can be accomplished by writing a Playbook, and Playbooks can also glue lots of other operations together.
|
||||
|
||||
Callback Plugin
|
||||
Refers to some user-written code that can intercept results from Ansible and do something with them. Some supplied examples in the GitHub project perform custom logging, send email, or even play sound effects.
|
||||
|
||||
Control Groups
|
||||
Also known as '*cgroups*', a control group is a feature in the Linux kernel that allows resources to be grouped and allocated to run certain processes. In addition to assigning resources to processes, cgroups can also report actual resource usage by all processes running inside of the cgroup.
|
||||
|
||||
Check Mode
|
||||
Refers to running Ansible with the ``--check`` option, which does not make any changes on the remote systems, but only outputs the changes that might occur if the command ran without this flag. This is analogous to so-called “dry run” modes in other systems, though the user should be warned that this does not take into account unexpected command failures or cascade effects (which is true of similar modes in other systems). Use this to get an idea of what might happen, but it is not a substitute for a good staging environment.
|
||||
|
||||
Container Groups
|
||||
Container Groups are a type of Instance Group that specify a configuration for provisioning a pod in a Kubernetes or OpenShift cluster where a job is run. These pods are provisioned on-demand and exist only for the duration of the playbook run.
|
||||
|
||||
Credentials
|
||||
Authentication details that may be utilized by AWX to launch jobs against machines, to synchronize with inventory sources, and to import project content from a version control system.
|
||||
|
||||
Credential Plugin
|
||||
Python code that contains definitions for an external credential type, its metadata fields, and the code needed for interacting with a secret management system.
|
||||
|
||||
Distributed Job
|
||||
A job that consists of a job template, an inventory, and slice size. When executed, a distributed job slices each inventory into a number of “slice size” chunks, which are then used to run smaller job slices.
|
||||
|
||||
External Credential Type
|
||||
A managed credential type for AWX used for authenticating with a secret management system.
|
||||
|
||||
Facts
|
||||
Facts are simply things that are discovered about remote nodes. While they can be used in playbooks and templates just like variables, facts are things that are inferred, rather than set. Facts are automatically discovered when running plays by executing the internal setup module on the remote nodes. You never have to call the setup module explicitly, it just runs, but it can be disabled to save time if it is not needed. For the convenience of users who are switching from other configuration management systems, the fact module also pulls in facts from the ‘ohai’ and ‘facter’ tools if they are installed, which are fact libraries from Chef and Puppet, respectively.
|
||||
|
||||
Forks
|
||||
Ansible and AWX talk to remote nodes in parallel, and the level of parallelism can be set several ways--during the creation or editing of a Job Template, by passing ``--forks``, or by editing the default in a configuration file. The default is a very conservative 5 forks, though if you have a lot of RAM, you can easily set this to a value like 50 for increased parallelism.
|
||||
|
||||
Group
|
||||
A set of hosts in Ansible that can be addressed as a set, of which many may exist within a single Inventory.
|
||||
|
||||
Group Vars
|
||||
The ``group_vars/`` files are files that live in a directory alongside an inventory file, with an optional filename named after each group. This is a convenient place to put variables that will be provided to a given group, especially complex data structures, so that these variables do not have to be embedded in the inventory file or playbook.
|
||||
|
||||
Handlers
|
||||
Handlers are just like regular tasks in an Ansible playbook (see Tasks), but are only run if the Task contains a “notify” directive and also indicates that it changed something. For example, if a config file is changed then the task referencing the config file templating operation may notify a service restart handler. This means services can be bounced only if they need to be restarted. Handlers can be used for things other than service restarts, but service restarts are the most common usage.
|
||||
|
||||
Host
|
||||
A system managed by AWX, which may include a physical, virtual, cloud-based server, or other device. Typically an operating system instance. Hosts are contained in Inventory. Sometimes referred to as a "node".
|
||||
|
||||
Host Specifier
|
||||
Each Play in Ansible maps a series of tasks (which define the role, purpose, or orders of a system) to a set of systems. This “hosts:” directive in each play is often called the hosts specifier. It may select one system, many systems, one or more groups, or even some hosts that are in one group and explicitly not in another.
|
||||
|
||||
Instance Group
|
||||
A group that contains instances for use in a clustered environment. An instance group provides the ability to group instances based on policy.
|
||||
|
||||
Inventory
|
||||
A collection of hosts against which Jobs may be launched.
|
||||
|
||||
Inventory Script
|
||||
A very simple program (or a complicated one) that looks up hosts, group membership for hosts, and variable information from an external resource--whether that be a SQL database, a CMDB solution, or something like LDAP. This concept was adapted from Puppet (where it is called an “External Nodes Classifier”) and works more or less exactly the same way.
|
||||
|
||||
Inventory Source
|
||||
Information about a cloud or other script that should be merged into the current inventory group, resulting in the automatic population of Groups, Hosts, and variables about those groups and hosts.
|
||||
|
||||
Job
|
||||
One of many background tasks launched by AWX, this is usually the instantiation of a Job Template; the launch of an Ansible playbook. Other types of jobs include inventory imports, project synchronizations from source control, or administrative cleanup actions.
|
||||
|
||||
Job Detail
|
||||
The history of running a particular job, including its output and success/failure status.
|
||||
|
||||
Job Slice
|
||||
See :term:`Distributed Job`.
|
||||
|
||||
Job Template
|
||||
The combination of an Ansible playbook and the set of parameters required to launch it.
|
||||
|
||||
JSON
|
||||
Ansible and AWX use JSON for return data from remote modules. This allows modules to be written in any language, not just Python.
|
||||
|
||||
Mesh
|
||||
Describes a network consisting of nodes. Communication between nodes is established at the transport layer by protocols such as TCP, UDP, or Unix sockets. See also :term:`node`.
|
||||
|
||||
Metadata
|
||||
Information for locating a secret in the external system once authenticated. The user provides this information when linking an external credential to a target credential field.
|
||||
|
||||
Node
|
||||
A node corresponds to entries in the instance database model, or the ``/api/v2/instances/`` endpoint, and is a machine participating in the cluster / mesh. The unified jobs API reports ``awx_node`` and ``execution_node`` fields. The execution node is where the job runs, and the AWX node interfaces between the job and server functions.
|
||||
|
||||
+-----------+------------------------------------------------------------------------------------------------------+
|
||||
| Node Type | Description |
|
||||
+===========+======================================================================================================+
|
||||
| Control | Nodes that run persistent |aap| services, and delegate jobs to hybrid and execution nodes |
|
||||
+-----------+------------------------------------------------------------------------------------------------------+
|
||||
| Hybrid | Nodes that run persistent |aap| services and execute jobs |
|
||||
+-----------+------------------------------------------------------------------------------------------------------+
|
||||
| Hop | Used for relaying across the mesh only |
|
||||
+-----------+------------------------------------------------------------------------------------------------------+
|
||||
| Execution | Nodes that run jobs delivered from control nodes (jobs submitted from the user's Ansible automation) |
|
||||
+-----------+------------------------------------------------------------------------------------------------------+
|
||||
|
||||
Notification Template
|
||||
An instance of a notification type (Email, Slack, Webhook, etc.) with a name, description, and a defined configuration.
|
||||
|
||||
Notification
|
||||
A manifestation of the notification template; for example, when a job fails a notification is sent using the configuration defined by the notification template.
|
||||
|
||||
Notify
|
||||
The act of a task registering a change event and informing a handler task that another action needs to be run at the end of the play. If a handler is notified by multiple tasks, it will still be run only once. Handlers are run in the order they are listed, not in the order that they are notified.
|
||||
|
||||
Organization
|
||||
A logical collection of Users, Teams, Projects, and Inventories. The highest level in the AWX object hierarchy is the Organization.
|
||||
|
||||
Organization Administrator
|
||||
An AWX user with the rights to modify the Organization's membership and settings, including making new users and projects within that organization. An organization admin can also grant permissions to other users within the organization.
|
||||
|
||||
Permissions
|
||||
The set of privileges assigned to Users and Teams that provide the ability to read, modify, and administer Projects, Inventories, and other AWX objects.
|
||||
|
||||
Plays
|
||||
A playbook is a list of plays. A play is minimally a mapping between a set of hosts selected by a host specifier (usually chosen by groups, but sometimes by hostname globs) and the tasks which run on those hosts to define the role that those systems will perform. There can be one or many plays in a playbook.
|
||||
|
||||
Playbook
|
||||
An Ansible playbook. Refer to http://docs.ansible.com/ for more information.
|
||||
|
||||
Policy
|
||||
Policies dictate how instance groups behave and how jobs are executed.
|
||||
|
||||
Project
|
||||
A logical collection of Ansible playbooks, represented in AWX.
|
||||
|
||||
Roles
|
||||
Roles are units of organization in Ansible and AWX. Assigning a role to a group of hosts (or a set of groups, or host patterns, etc.) implies that they should implement a specific behavior. A role may include applying certain variable values, certain tasks, and certain handlers--or just one or more of these things. Because of the file structure associated with a role, roles become redistributable units that allow you to share behavior among playbooks--or even with other users.
|
||||
|
||||
Secret Management System
|
||||
A server or service for securely storing and controlling access to tokens, passwords, certificates, encryption keys, and other sensitive data.
|
||||
|
||||
Schedule
|
||||
The calendar of dates and times for which a job should run automatically.
|
||||
|
||||
Sliced Job
|
||||
See :term:`Distributed Job`.
|
||||
|
||||
Source Credential
|
||||
An external credential that is linked to the field of a target credential.
|
||||
|
||||
Sudo
|
||||
Ansible does not require root logins and, since it is daemonless, does not require root level daemons (which can be a security concern in sensitive environments). Ansible can log in and perform many operations wrapped in a ``sudo`` command, and can work with both password-less and password-based sudo. Some operations that do not normally work with ``sudo`` (like ``scp`` file transfer) can be achieved with Ansible’s *copy*, *template*, and *fetch* modules while running in ``sudo`` mode.
|
||||
|
||||
Superuser
|
||||
An admin of the AWX server who has permission to edit any object in the system, whether or not it is associated with an organization. Superusers can create organizations and other superusers.
|
||||
|
||||
Survey
|
||||
Questions asked by a job template at job launch time, configurable on the job template.
|
||||
|
||||
Target Credential
|
||||
A non-external credential with an input field that is linked to an external credential.
|
||||
|
||||
Team
|
||||
A sub-division of an Organization with associated Users, Projects, Credentials, and Permissions. Teams provide a means to implement role-based access control schemes and delegate responsibilities across Organizations.
|
||||
|
||||
User
|
||||
An AWX operator with associated permissions and credentials.
|
||||
|
||||
Webhook
|
||||
Webhooks allow communication and information sharing between apps. They are used to respond to commits pushed to SCMs and launch job templates or workflow templates.
|
||||
|
||||
Workflow Job Template
|
||||
A set consisting of any combination of job templates, project syncs, and inventory syncs, linked together in order to execute them as a single unit.
|
||||
|
||||
YAML
|
||||
Ansible and AWX use YAML to define playbook configuration languages and also variable files. YAML has a minimum of syntax, is very clean, and is easy for people to skim. It is a good data format for configuration files and humans, but is also machine readable. YAML is fairly popular in the dynamic language community and the format has libraries available for serialization in many languages (Python, Perl, Ruby, etc.).
|
||||
44
docs/docsite/rst/userguide/index.rst
Normal file
44
docs/docsite/rst/userguide/index.rst
Normal file
@@ -0,0 +1,44 @@
|
||||
.. _ug_start:
|
||||
|
||||
==========
|
||||
User Guide
|
||||
==========
|
||||
|
||||
User Guide
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
:numbered:
|
||||
|
||||
self
|
||||
overview
|
||||
logging_in
|
||||
main_menu
|
||||
search_sort
|
||||
organizations
|
||||
users
|
||||
teams
|
||||
credentials
|
||||
credential_types
|
||||
credential_plugins
|
||||
applications_auth
|
||||
execution_environments
|
||||
ee_reference
|
||||
projects
|
||||
project-sign
|
||||
inventories
|
||||
inventory_plugins_templates
|
||||
job_templates
|
||||
job_slices
|
||||
workflows
|
||||
workflow_templates
|
||||
instance_groups
|
||||
jobs
|
||||
webhooks
|
||||
notifications
|
||||
notification_parameters_supported
|
||||
scheduling
|
||||
insights
|
||||
best_practices
|
||||
security
|
||||
glossary
|
||||
128
docs/docsite/rst/userguide/insights.rst
Normal file
128
docs/docsite/rst/userguide/insights.rst
Normal file
@@ -0,0 +1,128 @@
|
||||
.. _insights:
|
||||
|
||||
Setting up Insights Remediations
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: projects; Insights
|
||||
|
||||
|
||||
|At| supports integration with Red Hat Insights. Once a host is registered with Insights, it will be continually scanned for vulnerabilities and known configuration conflicts. Each of the found problems may have an associated fix in the form of an Ansible playbook. Insights users create a maintenance plan to group the fixes and, ultimately, create a playbook to mitigate the problems. |At| tracks the maintenance plan playbooks via an Insights project. Authentication to Insights via Basic Auth is backed by a special Insights Credential, which must first be established in |at|. To ultimately run an Insights Maintenance Plan, you need an Insights project and an Insights inventory.
|
||||
|
||||
|
||||
Create Insights Credential
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: credentials; Insights
|
||||
|
||||
To create a new credential for use with Insights:
|
||||
|
||||
1. Click **Credentials** from the left navigation bar to access the Credentials page.
|
||||
|
||||
2. Click the **Add** button located in the upper right corner of the Credentials screen.
|
||||
|
||||
3. Enter the name of the credential to be used in the **Name** field.
|
||||
|
||||
4. Optionally enter a description for this credential in the **Description** field.
|
||||
|
||||
5. In the **Organization** field, optionally enter the name of the organization with which the credential is associated, or click the |search| button and select it from the pop-up window.
|
||||
|
||||
.. |search| image:: ../common/images/search-button.png
|
||||
|
||||
6. In the **Credential Type** field, enter **Insights** or select it from the drop-down list.
|
||||
|
||||
.. image:: ../common/images/credential-types-popup-window-insights.png
|
||||
|
||||
7. Enter a valid Insights credential in the **Username** and **Password** fields.
|
||||
|
||||
|Credentials - create with demo insights credentials|
|
||||
|
||||
.. |Credentials - create with demo insights credentials| image:: ../common/images/insights-create-with-demo-credentials.png
|
||||
|
||||
8. Click **Save** when done.
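
If you prefer to script this step, the same credential can also be created through the AWX REST API. The following is a minimal sketch using Python and the ``requests`` library; the AWX URL, token, organization ID, and the exact input field names are assumptions that you would replace with values from your own installation.

::

    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token

    # Look up the ID of the built-in "Insights" credential type.
    cred_type = requests.get(
        f"{AWX_URL}/api/v2/credential_types/",
        headers=HEADERS,
        params={"name": "Insights"},
    ).json()["results"][0]["id"]

    # Create the credential; the organization ID and input field names are assumptions.
    resp = requests.post(
        f"{AWX_URL}/api/v2/credentials/",
        headers=HEADERS,
        json={
            "name": "Demo Insights Credential",
            "organization": 1,
            "credential_type": cred_type,
            "inputs": {"username": "insights-user", "password": "insights-password"},
        },
    )
    resp.raise_for_status()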
|
||||
|
||||
|
||||
Create an Insights Project
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: project; Insights
|
||||
|
||||
To create a new Insights project:
|
||||
|
||||
1. Click **Projects** from the left navigation bar to access the Projects page.
|
||||
|
||||
2. Click the **Add** button located in the upper right corner of the Projects screen.
|
||||
|
||||
3. Enter the appropriate details into the required fields, at minimum. Note the following fields requiring specific Insights-related entries:
|
||||
|
||||
- **Name**: Enter the name for your Insights project.
|
||||
- **Organization**: Enter the name of the organization associated with this project, or click the |search| button and select it from the pop-up window.
|
||||
- **SCM Type**: Select **Red Hat Insights**.
|
||||
- Upon selecting the SCM type, the **Source Details** field expands.
|
||||
|
||||
4. The **Credential** field is pre-populated with the Insights credential you previously created. If not, enter the credential, or click the |search| button and select it from the pop-up window.
|
||||
|
||||
5. Click to select the update option(s) for this project from the **Options** field, and provide any additional values, if applicable. For information about each option, click the tooltip |tooltip| next to the options.
|
||||
|
||||
.. |tooltip| image:: ../common/images/tooltips-icon.png
|
||||
|
||||
|Insights - create demo insights project form|
|
||||
|
||||
.. |Insights - create demo insights project form| image:: ../common/images/insights-create-project-insights-form.png
|
||||
|
||||
6. Click **Save** when done.
|
||||
|
||||
All SCM/Project syncs occur automatically the first time you save a new project. However, if you want them to be updated to what is current in Insights, manually update the SCM-based project by clicking the |update| button under the project's available Actions.
|
||||
|
||||
.. |update| image:: ../common/images/update-button.png
|
||||
|
||||
This process syncs your Insights project with your Insights account. Notice that the status dot beside the name of the project updates once the sync has run.
|
||||
|
||||
|Insights - demo insights project success|
|
||||
|
||||
.. |Insights - demo insights project success| image:: ../common/images/insights-create-project-insights-succeed.png
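
As an alternative to clicking the update button in the UI, a project sync can also be requested by POSTing to the project's ``update`` endpoint in the API. The sketch below is illustrative only; the host, token, and project ID are placeholders.

::

    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token
    PROJECT_ID = 42                                        # ID of the Insights project (assumed)

    # Trigger an SCM update (sync) of the Insights project.
    resp = requests.post(f"{AWX_URL}/api/v2/projects/{PROJECT_ID}/update/", headers=HEADERS)
    resp.raise_for_status()
    print("Project update job:", resp.json()["id"])        # ID of the resulting project update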
|
||||
|
||||
|
||||
Create Insights Inventory
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: inventory; Insights
|
||||
|
||||
The Insights playbook contains a ``hosts:`` line where the value is the hostname that Insights itself knows about, which may be different from the hostname that AWX knows about. To use an Insights playbook, you will need an Insights inventory.
|
||||
|
||||
To create a new inventory for use with Insights, see :ref:`ug_source_insights`.
|
||||
|
||||
Remediate Insights Inventory
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: inventory; Insights
|
||||
|
||||
Remediation of an Insights inventory allows AWX to run Insights playbooks with a single click. This is done by creating a Job Template to run the Insights remediation.
|
||||
|
||||
1. Click **Job Templates** from the left navigation bar to access the Job Templates page.
|
||||
|
||||
2. Create a new Job Template, entering the appropriate details into the required fields, at minimum. Note the following fields requiring specific Insights-related entries:
|
||||
|
||||
- **Name**: Enter the name of your Maintenance Plan.
|
||||
- **Job Type**: If not already populated, select **Run** from the drop-down menu list.
|
||||
- **Inventory**: Select the Insights Inventory you previously created.
|
||||
- **Project**: Select the Insights project you previously created.
|
||||
- **Playbook**: Select a playbook associated with the Maintenance Plan you want to run from the drop-down menu list.
|
||||
- **Credential**: Enter the credential to use for this project or click the |search| button and select it from the pop-up window. The credential does not have to be an Insights credential.
|
||||
- **Verbosity**: Keep the default setting, or select the desired verbosity from the drop-down menu list.
|
||||
|
||||
|Insights - maintenance plan template filled|
|
||||
|
||||
.. |Insights - maintenance plan template filled| image:: ../common/images/insights-create-new-job-template-maintenance-plan-filled.png
|
||||
|
||||
3. Click **Save** when done.
|
||||
|
||||
4. Click the |launch| icon to launch the job template.
|
||||
|
||||
.. |launch| image:: ../common/images/launch-button.png
|
||||
|
||||
Once complete, the job results display in the Job Details page.
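
The remediation can also be launched programmatically. The sketch below is a hedged example: it assumes a job template ID and connection details, POSTs to the template's ``launch`` endpoint, and polls the resulting job until it finishes.

::

    import time
    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token
    TEMPLATE_ID = 7                                        # maintenance plan job template ID (assumed)

    # Launch the job template and poll the job until it reaches a final state.
    job = requests.post(f"{AWX_URL}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
                        headers=HEADERS).json()
    job_url = f"{AWX_URL}/api/v2/jobs/{job['job']}/"
    while True:
        status = requests.get(job_url, headers=HEADERS).json()["status"]
        if status in ("successful", "failed", "error", "canceled"):
            break
        time.sleep(5)
    print("Job finished with status:", status)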
|
||||
99
docs/docsite/rst/userguide/instance_groups.rst
Normal file
99
docs/docsite/rst/userguide/instance_groups.rst
Normal file
@@ -0,0 +1,99 @@
|
||||
.. _ug_instance_groups:
|
||||
|
||||
Instance Groups
|
||||
====================
|
||||
|
||||
.. index::
|
||||
single: instance groups
|
||||
|
||||
An :term:`Instance Group` provides the ability to group instances in a clustered environment. Additionally, policies dictate how instance groups behave and how jobs are executed. The following view displays the capacity levels based on policy algorithms:
|
||||
|
||||
|Instance Group policy example|
|
||||
|
||||
.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
|
||||
|
||||
For more information about the policy or rules associated with instance groups, see the :ref:`ag_instance_groups` section of the |ata|.
|
||||
|
||||
If you want to connect your instance group to a container, refer to :ref:`ag_container_groups` for further detail.
|
||||
|
||||
For an in-depth discussion on these concepts, refer to the *Feature Spotlight: Instance Groups and Isolated Nodes* `blog
|
||||
<https://www.ansible.com/blog/ansible-tower-feature-spotlight-instance-groups-and-isolated-nodes>`_.
|
||||
|
||||
|
||||
.. _ug_instance_groups_create:
|
||||
|
||||
Create an instance group
|
||||
--------------------------
|
||||
|
||||
To create a new instance group:
|
||||
|
||||
1. Click **Instance Groups** from the left navigation menu to open the Instance Groups configuration window.
|
||||
|
||||
2. Click **Add** and select **Add instance group**.
|
||||
|
||||
|IG - create new IG|
|
||||
|
||||
.. |IG - create new IG| image:: ../common/images/instance-group-create-new-ig.png
|
||||
|
||||
3. Enter the appropriate details into the following fields:
|
||||
|
||||
- **Name**. Enter a unique name for the instance group. The name cannot be *awx*.
|
||||
- **Policy instance minimum**. Enter the minimum number of instances to automatically assign to this group when new instances come online.
|
||||
- **Policy instance percentage**. Use the slider to select a minimum percentage of instances to automatically assign to this group when new instances come online.
|
||||
|
||||
.. note::
|
||||
|
||||
Policy instance fields are not required to create a new instance group. If you do not specify values, then the Policy instance minimum and Policy instance percentage default to 0.
|
||||
|
||||
- **Max concurrent jobs**. Specify the maximum number of jobs that can run concurrently in this instance group.
|
||||
- **Max forks**. Specify the maximum number of forks that can be consumed by jobs running in this instance group.
|
||||
|
||||
.. note::
|
||||
|
||||
The default value of 0 for **Max concurrent jobs** and **Max forks** denotes no limit. See :ref:`ag_instancegrp_cpacity` in the |ata| for more detail.
|
||||
|
||||
4. Click **Save**.
|
||||
|
||||
Once the instance group is successfully created, the **Details** tab of the newly created instance group remains, allowing you to review and edit your instance group information. This is the same screen that opens when the **Edit** (|edit-button|) button is clicked from the **Instance Groups** list view. You can also edit **Instances** and review **Jobs** associated with this instance group.
|
||||
|
||||
.. |edit-button| image:: ../common/images/edit-button.png
|
||||
|
||||
|IG - example IG successfully created|
|
||||
|
||||
.. |IG - example IG successfully created| image:: ../common/images/instance-group-example-ig-successfully-created.png
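
For reference, the same instance group can be created through the API. This is a sketch only; the field names mirror the UI options described above, and the connection details are placeholders.

::

    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token

    # Create an instance group using the policy and limit fields shown in the UI form.
    resp = requests.post(
        f"{AWX_URL}/api/v2/instance_groups/",
        headers=HEADERS,
        json={
            "name": "example-instance-group",
            "policy_instance_minimum": 0,
            "policy_instance_percentage": 0,
            "max_concurrent_jobs": 0,   # 0 means no limit
            "max_forks": 0,             # 0 means no limit
        },
    )
    resp.raise_for_status()
    print("Created instance group", resp.json()["id"])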
|
||||
|
||||
|
||||
Associate instances to an instance group
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To associate instances to an instance group:
|
||||
|
||||
1. Click the **Instances** tab of the Instance Group window and click the **Associate** button.
|
||||
|
||||
2. Click the checkbox next to one or more available instances from the list to select the instance(s) you want to associate with the instance group.
|
||||
|
||||
|IG - select instances|
|
||||
|
||||
.. |IG - select instances| image:: ../common/images/instance-group-assoc-instances.png
|
||||
|
||||
3. In the following example, the instances added to the instance group display along with information about their capacity.
|
||||
|
||||
This view also allows you to edit some key attributes associated with the instances in your instance group:
|
||||
|
||||
|IG - instances in IG callouts|
|
||||
|
||||
.. |IG - instances in IG callouts| image:: ../common/images/instance-group-instances-example-callouts.png
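
Association can likewise be scripted by POSTing the instance's ID to the instance group's ``instances`` endpoint. The IDs and connection details in this sketch are assumptions.

::

    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token
    GROUP_ID = 3                                           # instance group ID (assumed)
    INSTANCE_ID = 5                                        # instance ID (assumed)

    # Associate the instance with the instance group.
    resp = requests.post(
        f"{AWX_URL}/api/v2/instance_groups/{GROUP_ID}/instances/",
        headers=HEADERS,
        json={"id": INSTANCE_ID},
    )
    resp.raise_for_status()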
|
||||
|
||||
|
||||
View jobs associated with an instance group
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
To view the jobs associated with the instance group, click the **Jobs** tab of the Instance Group window and then click **Expanded** to expand the view to show details about each job.
|
||||
|
||||
|IG - instances jobs|
|
||||
|
||||
.. |IG - instances jobs| image:: ../common/images/instance-group-jobs-list.png
|
||||
|
||||
Each job displays the job status, ID, and name; the type of job; the time started and completed; who started the job; and applicable resources associated with it, such as the template, inventory, project, |ee|, etc.
|
||||
|
||||
The instances are run in accordance with instance group policies. Refer to :ref:`Instance Group Policies <ag_instance_group_policies>` in the |ata|.
|
||||
1231
docs/docsite/rst/userguide/inventories.rst
Normal file
1231
docs/docsite/rst/userguide/inventories.rst
Normal file
File diff suppressed because it is too large
Load Diff
394
docs/docsite/rst/userguide/inventory_plugins_templates.rst
Normal file
394
docs/docsite/rst/userguide/inventory_plugins_templates.rst
Normal file
@@ -0,0 +1,394 @@
|
||||
.. _ir_inv_plugin_templates_reference:
|
||||
|
||||
Supported Inventory Plugin Templates
|
||||
==============================================
|
||||
|
||||
.. index::
|
||||
pair: templates;inventory plugins
|
||||
|
||||
Upon upgrade, existing configurations will be migrated to a new format that produces backwards-compatible inventory output. Use the templates below to aid in migrating your inventories to the new-style inventory plugin output.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
|
||||
Amazon Web Services EC2
|
||||
------------------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; Amazon Web Services
|
||||
pair: inventories; aws
|
||||
pair: inventory plugins; aws
|
||||
|
||||
::
|
||||
|
||||
compose:
|
||||
ansible_host: public_ip_address
|
||||
ec2_account_id: owner_id
|
||||
ec2_ami_launch_index: ami_launch_index | string
|
||||
ec2_architecture: architecture
|
||||
ec2_block_devices: dict(block_device_mappings | map(attribute='device_name') | list | zip(block_device_mappings | map(attribute='ebs.volume_id') | list))
|
||||
ec2_client_token: client_token
|
||||
ec2_dns_name: public_dns_name
|
||||
ec2_ebs_optimized: ebs_optimized
|
||||
ec2_eventsSet: events | default("")
|
||||
ec2_group_name: placement.group_name
|
||||
ec2_hypervisor: hypervisor
|
||||
ec2_id: instance_id
|
||||
ec2_image_id: image_id
|
||||
ec2_instance_profile: iam_instance_profile | default("")
|
||||
ec2_instance_type: instance_type
|
||||
ec2_ip_address: public_ip_address
|
||||
ec2_kernel: kernel_id | default("")
|
||||
ec2_key_name: key_name
|
||||
ec2_launch_time: launch_time | regex_replace(" ", "T") | regex_replace("(\+)(\d\d):(\d)(\d)$", ".\g<2>\g<3>Z")
|
||||
ec2_monitored: monitoring.state in ['enabled', 'pending']
|
||||
ec2_monitoring_state: monitoring.state
|
||||
ec2_persistent: persistent | default(false)
|
||||
ec2_placement: placement.availability_zone
|
||||
ec2_platform: platform | default("")
|
||||
ec2_private_dns_name: private_dns_name
|
||||
ec2_private_ip_address: private_ip_address
|
||||
ec2_public_dns_name: public_dns_name
|
||||
ec2_ramdisk: ramdisk_id | default("")
|
||||
ec2_reason: state_transition_reason
|
||||
ec2_region: placement.region
|
||||
ec2_requester_id: requester_id | default("")
|
||||
ec2_root_device_name: root_device_name
|
||||
ec2_root_device_type: root_device_type
|
||||
ec2_security_group_ids: security_groups | map(attribute='group_id') | list | join(',')
|
||||
ec2_security_group_names: security_groups | map(attribute='group_name') | list | join(',')
|
||||
ec2_sourceDestCheck: source_dest_check | default(false) | lower | string
|
||||
ec2_spot_instance_request_id: spot_instance_request_id | default("")
|
||||
ec2_state: state.name
|
||||
ec2_state_code: state.code
|
||||
ec2_state_reason: state_reason.message if state_reason is defined else ""
|
||||
ec2_subnet_id: subnet_id | default("")
|
||||
ec2_tag_Name: tags.Name
|
||||
ec2_virtualization_type: virtualization_type
|
||||
ec2_vpc_id: vpc_id | default("")
|
||||
filters:
|
||||
instance-state-name:
|
||||
- running
|
||||
groups:
|
||||
ec2: true
|
||||
hostnames:
|
||||
- network-interface.addresses.association.public-ip
|
||||
- dns-name
|
||||
- private-dns-name
|
||||
keyed_groups:
|
||||
- key: image_id | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: images
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: placement.availability_zone
|
||||
parent_group: zones
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: ec2_account_id | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: accounts
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: ec2_state | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: instance_states
|
||||
prefix: instance_state
|
||||
- key: platform | default("undefined") | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: platforms
|
||||
prefix: platform
|
||||
- key: instance_type | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: types
|
||||
prefix: type
|
||||
- key: key_name | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: keys
|
||||
prefix: key
|
||||
- key: placement.region
|
||||
parent_group: regions
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: security_groups | map(attribute="group_name") | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
|
||||
parent_group: security_groups
|
||||
prefix: security_group
|
||||
- key: dict(tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list | zip(tags.values()
|
||||
| map("regex_replace", "[^A-Za-z0-9\_]", "_") | list))
|
||||
parent_group: tags
|
||||
prefix: tag
|
||||
- key: tags.keys() | map("regex_replace", "[^A-Za-z0-9\_]", "_") | list
|
||||
parent_group: tags
|
||||
prefix: tag
|
||||
- key: vpc_id | regex_replace("[^A-Za-z0-9\_]", "_")
|
||||
parent_group: vpcs
|
||||
prefix: vpc_id
|
||||
- key: placement.availability_zone
|
||||
parent_group: '{{ placement.region }}'
|
||||
prefix: ''
|
||||
separator: ''
|
||||
plugin: amazon.aws.aws_ec2
|
||||
use_contrib_script_compatible_sanitization: true
|
||||
|
||||
|
||||
Google Compute Engine
|
||||
----------------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; Google Compute Engine
|
||||
pair: inventories; gce
|
||||
pair: inventory plugins; gce
|
||||
|
||||
::
|
||||
|
||||
auth_kind: serviceaccount
|
||||
compose:
|
||||
ansible_ssh_host: networkInterfaces[0].accessConfigs[0].natIP | default(networkInterfaces[0].networkIP)
|
||||
gce_description: description if description else None
|
||||
gce_id: id
|
||||
gce_image: image
|
||||
gce_machine_type: machineType
|
||||
gce_metadata: metadata.get("items", []) | items2dict(key_name="key", value_name="value")
|
||||
gce_name: name
|
||||
gce_network: networkInterfaces[0].network.name
|
||||
gce_private_ip: networkInterfaces[0].networkIP
|
||||
gce_public_ip: networkInterfaces[0].accessConfigs[0].natIP | default(None)
|
||||
gce_status: status
|
||||
gce_subnetwork: networkInterfaces[0].subnetwork.name
|
||||
gce_tags: tags.get("items", [])
|
||||
gce_zone: zone
|
||||
hostnames:
|
||||
- name
|
||||
- public_ip
|
||||
- private_ip
|
||||
keyed_groups:
|
||||
- key: gce_subnetwork
|
||||
prefix: network
|
||||
- key: gce_private_ip
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: gce_public_ip
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: machineType
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: zone
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: gce_tags
|
||||
prefix: tag
|
||||
- key: status | lower
|
||||
prefix: status
|
||||
- key: image
|
||||
prefix: ''
|
||||
separator: ''
|
||||
plugin: google.cloud.gcp_compute
|
||||
retrieve_image_info: true
|
||||
use_contrib_script_compatible_sanitization: true
|
||||
|
||||
|
||||
Microsoft Azure Resource Manager
|
||||
---------------------------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; Microsoft Azure Resource Manager
|
||||
pair: inventories; azure
|
||||
pair: inventory plugins; azure
|
||||
|
||||
::
|
||||
|
||||
conditional_groups:
|
||||
azure: true
|
||||
default_host_filters: []
|
||||
fail_on_template_errors: false
|
||||
hostvar_expressions:
|
||||
computer_name: name
|
||||
private_ip: private_ipv4_addresses[0] if private_ipv4_addresses else None
|
||||
provisioning_state: provisioning_state | title
|
||||
public_ip: public_ipv4_addresses[0] if public_ipv4_addresses else None
|
||||
public_ip_id: public_ip_id if public_ip_id is defined else None
|
||||
public_ip_name: public_ip_name if public_ip_name is defined else None
|
||||
tags: tags if tags else None
|
||||
type: resource_type
|
||||
keyed_groups:
|
||||
- key: location
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: tags.keys() | list if tags else []
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: security_group
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: resource_group
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: os_disk.operating_system_type
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: dict(tags.keys() | map("regex_replace", "^(.*)$", "\1_") | list | zip(tags.values() | list)) if tags else []
|
||||
prefix: ''
|
||||
separator: ''
|
||||
plain_host_names: true
|
||||
plugin: azure.azcollection.azure_rm
|
||||
use_contrib_script_compatible_sanitization: true
|
||||
|
||||
VMware vCenter
|
||||
---------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; VMware vCenter
|
||||
pair: inventories; vmware
|
||||
pair: inventory plugins; vmware
|
||||
|
||||
::
|
||||
|
||||
compose:
|
||||
ansible_host: guest.ipAddress
|
||||
ansible_ssh_host: guest.ipAddress
|
||||
ansible_uuid: 99999999 | random | to_uuid
|
||||
availablefield: availableField
|
||||
configissue: configIssue
|
||||
configstatus: configStatus
|
||||
customvalue: customValue
|
||||
effectiverole: effectiveRole
|
||||
guestheartbeatstatus: guestHeartbeatStatus
|
||||
layoutex: layoutEx
|
||||
overallstatus: overallStatus
|
||||
parentvapp: parentVApp
|
||||
recenttask: recentTask
|
||||
resourcepool: resourcePool
|
||||
rootsnapshot: rootSnapshot
|
||||
triggeredalarmstate: triggeredAlarmState
|
||||
filters:
|
||||
- runtime.powerState == "poweredOn"
|
||||
keyed_groups:
|
||||
- key: config.guestId
|
||||
prefix: ''
|
||||
separator: ''
|
||||
- key: '"templates" if config.template else "guests"'
|
||||
prefix: ''
|
||||
separator: ''
|
||||
plugin: community.vmware.vmware_vm_inventory
|
||||
properties:
|
||||
- availableField
|
||||
- configIssue
|
||||
- configStatus
|
||||
- customValue
|
||||
- datastore
|
||||
- effectiveRole
|
||||
- guestHeartbeatStatus
|
||||
- layout
|
||||
- layoutEx
|
||||
- name
|
||||
- network
|
||||
- overallStatus
|
||||
- parentVApp
|
||||
- permission
|
||||
- recentTask
|
||||
- resourcePool
|
||||
- rootSnapshot
|
||||
- snapshot
|
||||
- triggeredAlarmState
|
||||
- value
|
||||
- capability
|
||||
- config
|
||||
- guest
|
||||
- runtime
|
||||
- storage
|
||||
- summary
|
||||
strict: false
|
||||
with_nested_properties: true
|
||||
|
||||
|
||||
.. _ir_plugin_satellite:
|
||||
|
||||
Red Hat Satellite 6
|
||||
---------------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; Red Hat Satellite 6
|
||||
pair: inventories; satellite
|
||||
pair: inventory plugins; satellite
|
||||
|
||||
::
|
||||
|
||||
group_prefix: foreman_
|
||||
keyed_groups:
|
||||
- key: foreman['environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_') | regex_replace('none', '')
|
||||
prefix: foreman_environment_
|
||||
separator: ''
|
||||
- key: foreman['location_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
|
||||
prefix: foreman_location_
|
||||
separator: ''
|
||||
- key: foreman['organization_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
|
||||
prefix: foreman_organization_
|
||||
separator: ''
|
||||
- key: foreman['content_facet_attributes']['lifecycle_environment_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
|
||||
prefix: foreman_lifecycle_environment_
|
||||
separator: ''
|
||||
- key: foreman['content_facet_attributes']['content_view_name'] | lower | regex_replace(' ', '') | regex_replace('[^A-Za-z0-9_]', '_')
|
||||
prefix: foreman_content_view_
|
||||
separator: ''
|
||||
legacy_hostvars: true
|
||||
plugin: theforeman.foreman.foreman
|
||||
validate_certs: false
|
||||
want_facts: true
|
||||
want_hostcollections: false
|
||||
want_params: true
|
||||
|
||||
|
||||
OpenStack
|
||||
----------
|
||||
|
||||
.. index::
|
||||
pair: inventories; OpenStack
|
||||
pair: inventories; OpenStack
|
||||
pair: inventory plugins; OpenStack
|
||||
|
||||
::
|
||||
|
||||
expand_hostvars: true
|
||||
fail_on_errors: true
|
||||
inventory_hostname: uuid
|
||||
plugin: openstack.cloud.openstack
|
||||
|
||||
|
||||
Red Hat Virtualization
|
||||
-----------------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; Red Hat Virtualization
|
||||
pair: inventories; rhv
|
||||
pair: inventory plugins; rhv
|
||||
|
||||
::
|
||||
|
||||
compose:
|
||||
ansible_host: (devices.values() | list)[0][0] if devices else None
|
||||
keyed_groups:
|
||||
- key: cluster
|
||||
prefix: cluster
|
||||
separator: _
|
||||
- key: status
|
||||
prefix: status
|
||||
separator: _
|
||||
- key: tags
|
||||
prefix: tag
|
||||
separator: _
|
||||
ovirt_hostname_preference:
|
||||
- name
|
||||
- fqdn
|
||||
ovirt_insecure: false
|
||||
plugin: ovirt.ovirt.ovirt
|
||||
|
||||
|
||||
Red Hat Ansible Automation Platform
|
||||
----------------------------------------
|
||||
|
||||
.. index::
|
||||
pair: inventories; Red Hat Ansible Automation Platform
|
||||
pair: inventory plugins; Red Hat Ansible Automation Platform
|
||||
|
||||
::
|
||||
|
||||
include_metadata: true
|
||||
inventory_id: <inventory_id or url_quoted_named_url>
|
||||
plugin: awx.awx.tower
|
||||
validate_certs: <true or false>
|
||||
61
docs/docsite/rst/userguide/job_branching.rst
Normal file
61
docs/docsite/rst/userguide/job_branching.rst
Normal file
@@ -0,0 +1,61 @@
|
||||
.. _ug_job_branching:
|
||||
|
||||
Projects specify the branch, tag, or reference to use from source control in the ``scm_branch`` field. These are represented by the values specified in the Project Details fields as shown.
|
||||
|
||||
.. image:: ../common/images/projects-create-scm-project-branching-emphasized.png
|
||||
|
||||
Projects have the option to "Allow Branch Override". When checked, project admins can delegate branch selection to the job templates that use that project (requiring only project ``use_role``).
|
||||
|
||||
.. image:: ../common/images/projects-create-scm-project-branch-override-checked.png
|
||||
|
||||
|
||||
|
||||
Source tree copy behavior
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Every job run has its own private data directory. This directory contains a copy of the project source tree for the given
|
||||
``scm_branch`` the job is running. Jobs are free to make changes to the project folder and make use of those changes while the job is still running. This folder is temporary and is cleaned up at the end of the job run.
|
||||
|
||||
If **Clean** is checked, AWX discards modified files in its local copy of the repository through use of the ``force`` parameter in its respective Ansible modules pertaining to `git`_ or `Subversion`_.
|
||||
|
||||
.. _`git`: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/git_module.html#parameters
|
||||
.. _`Subversion`: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/subversion_module.html#parameters
|
||||
|
||||
.. image:: ../common/images/projects-create-scm-project-clean-checked.png
|
||||
|
||||
|
||||
Project revision behavior
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Typically, during a project update, the revision of the default branch (specified in the **SCM Branch** field of the project)
|
||||
is stored when updated, and jobs using that project will employ this revision. If a non-default **SCM Branch** (not a commit hash or tag) is provided for a job, the newest revision is pulled from the source control remote immediately before the job starts.
|
||||
This revision is shown in the **Source Control Revision** field of the job and its respective project update.
|
||||
|
||||
.. image:: ../common/images/jobs-output-branch-override-example.png
|
||||
|
||||
Consequently, offline job runs are impossible for non-default branches. To be sure that a job is running a static version from source control, use tags or commit hashes. Project updates do not save the revision of all branches, only the project default branch.
|
||||
|
||||
The **SCM Branch** field is not validated, so the project must be updated to ensure it is valid. If this field is provided or prompted for, the **Playbook** field of job templates will not be validated, and you will have to launch the job template in order to verify the presence of the expected playbook.
|
||||
|
||||
Git Refspec
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: git refspec; templates
|
||||
|
||||
The **SCM Refspec** field specifies which extra references the update should download from the remote. Examples are:
|
||||
|
||||
1. ``refs/*:refs/remotes/origin/*``: fetches all references, including remotes of the remote
|
||||
2. ``refs/pull/*:refs/remotes/origin/pull/*`` (GitHub-specific): fetches all refs for all pull requests
|
||||
3. ``refs/pull/62/head:refs/remotes/origin/pull/62/head``: fetches the ref for that one GitHub pull request
|
||||
|
||||
For large projects, you should consider the performance impact of using the first or second example above.
|
||||
|
||||
The **SCM Refspec** parameter affects the availability of the project branch, and can allow access to references not otherwise available. The examples above allow the user to supply a pull request from the **SCM Branch**, which would
|
||||
not be possible without the **SCM Refspec** field.
|
||||
|
||||
The Ansible git module fetches ``refs/heads/*`` by default. This means that a project's branches and tags (and commit hashes therein) can be used as the SCM Branch if **SCM Refspec** is blank. The value specified in the **SCM Refspec** field affects which **SCM Branch** fields can be used as overrides. Project updates (of any type) will perform an extra ``git fetch`` command to pull that refspec from the remote.
|
||||
|
||||
For example: You could set up a project that allows branch override with the first or second refspec example --> Use this in a job template that prompts for the **SCM Branch** --> A client could launch the job template when a new pull request is created, providing the branch ``pull/N/head`` --> The job template would run against the provided GitHub pull request reference.
|
||||
|
||||
For more information on the Ansible git module, see https://docs.ansible.com/ansible/latest/collections/ansible/builtin/git_module.html.
|
||||
109
docs/docsite/rst/userguide/job_capacity.rst
Normal file
109
docs/docsite/rst/userguide/job_capacity.rst
Normal file
@@ -0,0 +1,109 @@
|
||||
|
||||
This section describes how to determine capacity for instance groups and its impact on your jobs. For container groups, see :ref:`ag_container_capacity` in the |ata|.
|
||||
|
||||
The AWX capacity system determines how many jobs can run on an instance given the amount of resources available to the instance and the size of the jobs that are running (referred to as *Impact*). The algorithm used to determine this is based entirely on two things:
|
||||
|
||||
- How much memory is available to the system (``mem_capacity``)
|
||||
- How much CPU is available to the system (``cpu_capacity``)
|
||||
|
||||
Capacity also impacts Instance Groups. Since groups are made up of instances, and instances can be assigned to multiple groups, the impact to one instance can potentially affect the overall capacity of other groups.
|
||||
|
||||
Instance Groups (not instances themselves) can be assigned to be used by jobs at various levels (see :ref:`ag_clustering`). When the Task Manager is preparing its graph to determine which group a job will run on, it will commit the capacity of an Instance Group to a job that hasn't started or isn't ready to start yet.
|
||||
|
||||
Finally, in smaller configurations, if only one instance is available for a job to run, the Task Manager will allow that job to run on the instance even if it pushes the instance over capacity. This guarantees that jobs themselves won't get stuck as a result of an under-provisioned system.
|
||||
|
||||
Therefore, Capacity and Impact are not a zero-sum system relative to jobs and instances/Instance Groups.
|
||||
|
||||
For information on sliced jobs and their impact to capacity, see :ref:`ug_job_slice_execution`.
|
||||
|
||||
|
||||
|
||||
Resource determination for capacity algorithm
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
The capacity algorithms are defined in order to determine how many forks a system is capable of running simultaneously. This controls how many systems Ansible itself will communicate with simultaneously. Increasing the number of forks an AWX system is running will, in general, allow jobs to run faster by performing more work in parallel. The trade-off is that this will increase the load on the system, which could cause work to slow down overall.
|
||||
|
||||
AWX can operate in two modes when determining capacity. ``mem_capacity`` (the default) will allow you to over-commit CPU resources while protecting the system from running out of memory. If most of your work is not CPU-bound, then selecting this mode will maximize the number of forks. The second mode, ``cpu_capacity``, instead limits forks based on the available CPU cores, which avoids over-committing the CPU for workloads that are CPU-bound; it is described below.
|
||||
|
||||
|
||||
Memory relative capacity
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
``mem_capacity`` is calculated relative to the amount of memory needed per fork. Taking into account the overhead for internal components, this comes out to be about 100MB per fork. When considering the amount of memory available to Ansible jobs, the capacity algorithm will reserve 2GB of memory to account for the presence of other services. The algorithm formula for this is:
|
||||
|
||||
::
|
||||
|
||||
(mem - 2048) / mem_per_fork
|
||||
|
||||
As an example:
|
||||
|
||||
::
|
||||
|
||||
(4096 - 2048) / 100 == ~20
|
||||
|
||||
Therefore, a system with 4GB of memory would be capable of running 20 forks. The value ``mem_per_fork`` can be controlled by setting the settings value (or environment variable) ``SYSTEM_TASK_FORKS_MEM``, which defaults to 100.
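
As a quick illustration, the following sketch reproduces the memory-based calculation in Python; the 2048 MB reservation and 100 MB per fork are the defaults described above.

::

    def mem_capacity(total_mem_mb, mem_per_fork=100, reserved_mb=2048):
        """Forks supported by memory: (mem - 2048) / mem_per_fork."""
        return (total_mem_mb - reserved_mb) // mem_per_fork

    print(mem_capacity(4096))   # -> 20 forks on a 4GB system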
|
||||
|
||||
|
||||
|
||||
CPU relative capacity
|
||||
^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Often, Ansible workloads can be fairly CPU-bound. In these cases, sometimes reducing the simultaneous workload allows more tasks to run faster and reduces the average time-to-completion of those jobs.
|
||||
|
||||
Just as the ``mem_capacity`` algorithm uses the amount of memory needed per fork, the ``cpu_capacity`` algorithm looks at the amount of CPU resources needed per fork. The baseline value for this is 4 forks per core. The algorithm formula for this is:
|
||||
|
||||
::
|
||||
|
||||
cpus * fork_per_cpu
|
||||
|
||||
For example, a 4-core system:
|
||||
|
||||
::
|
||||
|
||||
4 * 4 == 16
|
||||
|
||||
The value ``fork_per_cpu`` can be controlled by setting the settings value (or environment variable) ``SYSTEM_TASK_FORKS_CPU`` which defaults to 4.
|
||||
|
||||
|
||||
Capacity job impacts
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
When selecting the capacity, it's important to understand how each job type affects capacity.
|
||||
|
||||
It's helpful to understand what forks mean to Ansible: https://www.ansible.com/blog/ansible-performance-tuning (see the section on "Know Your Forks").
|
||||
|
||||
The default forks value for Ansible is 5. However, if AWX knows that you're running against fewer systems than that, then the actual concurrency value will be lower.
|
||||
|
||||
When a job is run, AWX will add 1 to the number of forks selected to compensate for the Ansible parent process. So if you are running a playbook against 5 systems with a forks value of 5, then the actual forks value from the perspective of Job Impact will be 6.
|
||||
|
||||
|
||||
Impact of job types in AWX
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Jobs and Ad-hoc jobs follow the above model, forks + 1. If you set a fork value on your job template, your job capacity value will be the minimum of the forks value supplied, and the number of hosts that you have, plus one. The plus one is to account for the parent Ansible process.
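
In other words, the impact of a run can be sketched as the smaller of the template's forks value and the number of targeted hosts, plus one for the parent process.

::

    def job_impact(forks, num_hosts):
        """Capacity consumed by a job or ad hoc run: min(forks, hosts) + 1."""
        return min(forks, num_hosts) + 1

    print(job_impact(5, 3))    # -> 4: only 3 hosts, so 3 forks are used, plus the parent
    print(job_impact(5, 50))   # -> 6: the forks value caps concurrency at 5, plus the parent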
|
||||
|
||||
Instance capacity determines which jobs get assigned to any specific instance. Jobs and ad hoc commands use more capacity if they have a higher forks value.
|
||||
|
||||
Other job types have a fixed impact:
|
||||
|
||||
- Inventory Updates: 1
|
||||
- Project Updates: 1
|
||||
- System Jobs: 5
|
||||
|
||||
If you don’t set a forks value on your job template, your job will use Ansible’s default forks value of five. Even though Ansible defaults to five forks, it will use fewer if your job has fewer than five hosts. In general, setting a forks value higher than what the system is capable of could cause trouble by running out of memory or over-committing CPU. So, the job template fork values that you use should fit on the system. If you have playbooks using 1000 forks but none of your systems individually has that much capacity, then your systems are undersized and at risk of performance or resource issues.
|
||||
|
||||
|
||||
Selecting the right capacity
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Selecting a capacity out of the CPU-bound or the memory-bound capacity limits is, in essence, selecting between the minimum or maximum number of forks. In the above examples, the CPU capacity would allow a maximum of 16 forks while the memory capacity would allow 20. For some systems, the disparity between these can be large, and often you may want a balance between the two.
|
||||
|
||||
The instance field ``capacity_adjustment`` allows you to select how much of one or the other you want to consider. It is represented as a value between 0.0 and 1.0. If set to a value of 1.0, then the largest value will be used. The above example involves memory capacity, so a value of 20 forks would be selected. If set to a value of 0.0 then the smallest value will be used. A value of 0.5 would be a 50/50 balance between the two algorithms which would be 18:
|
||||
|
||||
::
|
||||
|
||||
16 + (20 - 16) * 0.5 == 18
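
Putting the pieces together, a rough sketch of the blended calculation (using the example 4GB, 4-core system from above) looks like this:

::

    def blended_capacity(cpus, total_mem_mb, adjustment,
                         forks_per_cpu=4, mem_per_fork=100, reserved_mb=2048):
        """Blend the CPU-based and memory-based fork counts by capacity_adjustment."""
        cpu_forks = cpus * forks_per_cpu                          # 4 * 4 == 16
        mem_forks = (total_mem_mb - reserved_mb) // mem_per_fork  # (4096 - 2048) / 100 == 20
        low, high = min(cpu_forks, mem_forks), max(cpu_forks, mem_forks)
        return low + (high - low) * adjustment

    print(blended_capacity(4, 4096, 0.5))   # -> 18.0, matching the example above
    print(blended_capacity(4, 4096, 1.0))   # -> 20.0, the larger (memory) value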
|
||||
|
||||
To view or edit the capacity in the user interface, select the **Instances** tab of the Instance Group.
|
||||
|
||||
.. image:: ../common/images/instance-group-instances-capacity-callouts.png
|
||||
102
docs/docsite/rst/userguide/job_slices.rst
Normal file
102
docs/docsite/rst/userguide/job_slices.rst
Normal file
@@ -0,0 +1,102 @@
|
||||
.. _ug_job_slice:
|
||||
|
||||
Job Slicing
|
||||
===============
|
||||
|
||||
.. index::
|
||||
single: job slice
|
||||
single: job splitting
|
||||
pair: job types; slice
|
||||
pair: job types; splitting
|
||||
pair: job types; distributed
|
||||
|
||||
A :term:`sliced job` refers to the concept of a distributed job. Distributed jobs are used for running a job across a very large number of hosts, allowing you to run multiple ansible-playbook runs, each on a subset of an inventory, that can be scheduled in parallel across a cluster.
|
||||
|
||||
|
||||
By default, Ansible runs jobs from a single control instance. For jobs that do not require cross-host orchestration, job slicing takes advantage of AWX's ability to distribute work to multiple nodes in a cluster. Job slicing works by adding a Job Template field ``job_slice_count``, which specifies the number of jobs into which to slice the Ansible run. When this number is greater than 1, AWX will generate a workflow from a job template instead of a job. The inventory will be distributed evenly amongst the slice jobs. The workflow job is then started, and proceeds as though it were a normal workflow. When launching a job, the API will return either a job resource (if ``job_slice_count = 1``) or a workflow job resource. The corresponding User Interface will redirect to the appropriate screen to display the status of the run.
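
Slicing is driven by the ``job_slice_count`` field mentioned above. As a hedged sketch, the field can be set (and the template launched) through the API as follows; the IDs and connection details are placeholders.

::

    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token
    TEMPLATE_ID = 12                                       # job template ID (assumed)

    # Slice the inventory into 3 chunks for this template.
    requests.patch(f"{AWX_URL}/api/v2/job_templates/{TEMPLATE_ID}/",
                   headers=HEADERS, json={"job_slice_count": 3}).raise_for_status()

    # Launching now produces a workflow job rather than a plain job, per the behavior above.
    launched = requests.post(f"{AWX_URL}/api/v2/job_templates/{TEMPLATE_ID}/launch/",
                             headers=HEADERS).json()
    print(launched.get("id"), launched.get("type"))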
|
||||
|
||||
|
||||
Job slice considerations
|
||||
--------------------------
|
||||
|
||||
Consider the following when setting up job slices:
|
||||
|
||||
- A sliced job creates a workflow job, which in turn creates jobs.
|
||||
|
||||
- A job slice consists of a job template, an inventory, and a slice count.
|
||||
|
||||
- When executed, a sliced job splits each inventory into a number of "slice size" chunks. It then queues jobs of ansible-playbook runs on each chunk of the appropriate inventory. The inventory fed into ansible-playbook is a pared-down version of the original inventory that only contains the hosts in that particular slice. The completed sliced jobs that display on the Jobs list are labeled accordingly, with the number of sliced jobs that have run:
|
||||
|
||||
.. image:: ../common/images/sliced-job-shown-jobs-list-view.png
|
||||
|
||||
|
||||
- These sliced jobs follow normal scheduling behavior (number of forks, queuing due to capacity, assignation to instance groups based on inventory mapping).
|
||||
|
||||
.. note::
|
||||
|
||||
.. include:: ../common/job-slicing-rule.rst
|
||||
|
||||
- Sliced job templates with prompts and/or extra variables behave the same as standard job templates, applying all variables and limits to the entire set of slice jobs in the resulting workflow job. However, when passing a limit to a Sliced Job, if the limit causes slices to have no hosts assigned, those slices will fail, causing the overall job to fail.
|
||||
|
||||
|
||||
- The job status of a distributed (sliced) job is calculated in the same manner as for workflow jobs: it fails if there are any unhandled failures in its sub-jobs.
|
||||
|
||||
.. warning::
|
||||
|
||||
Any job that intends to orchestrate across hosts (rather than just applying changes to individual hosts) should not be configured as a slice job. Any job that does may fail, and AWX will not attempt to discover or account for playbooks that fail when run as slice jobs.
|
||||
|
||||
|
||||
.. _ug_job_slice_execution:
|
||||
|
||||
Job slice execution behavior
|
||||
--------------------------------
|
||||
|
||||
When jobs are sliced, they can run on any node and some may not run at the same time (insufficient capacity in the system, for example). When slice jobs are running, job details display the workflow and job slice(s) currently running, as well as a link to view their details individually.
|
||||
|
||||
.. image:: ../common/images/sliced-job-shown-jobs-output-view.png
|
||||
|
||||
By default, job templates are not configured to execute simultaneously (``allow_simultaneous`` must be checked in the API or **Enable Concurrent Jobs** enabled in the UI). Slicing overrides this behavior and implies ``allow_simultaneous`` even if that setting is unchecked. See :ref:`ug_JobTemplates` for information on how to specify this, as well as the number of job slices on your job template configuration.
|
||||
|
||||
The :ref:`ug_JobTemplates` section provides additional detail on performing the following operations in the User Interface:
|
||||
|
||||
- Launch workflow jobs with a job template that has a slice number greater than one
|
||||
- Cancel the whole workflow or individual jobs after launching a slice job template
|
||||
- Relaunch the whole workflow or individual jobs after slice jobs finish running
|
||||
- View the details about the workflow and slice jobs after launching a job template
|
||||
- Search slice jobs specifically after you create them (see subsequent section, :ref:`ug_job_slice_search`)
|
||||
|
||||
|
||||
.. _ug_job_slice_search:
|
||||
|
||||
Search job slices
|
||||
-------------------
|
||||
|
||||
To make it easier to find slice jobs, use the Search functionality to apply a search filter to:
|
||||
|
||||
- job lists to show only slice jobs
|
||||
- job lists to show only parent workflow jobs of job slices
|
||||
- job templates lists to only show job templates that produce slice jobs
|
||||
|
||||
|
||||
To show only slice jobs in job lists, as in most cases, you can filter either on the type (jobs, in this case) or on ``unified_jobs``:
|
||||
|
||||
::
|
||||
|
||||
/api/v2/jobs/?job_slice_count__gt=1
|
||||
|
||||
|
||||
To show only parent workflow jobs of job slices:
|
||||
|
||||
::
|
||||
|
||||
/api/v2/workflow_jobs/?job_template__isnull=false
|
||||
|
||||
|
||||
To show only job templates that produce slice jobs:
|
||||
|
||||
::
|
||||
|
||||
/api/v2/job_templates/?job_slice_count__gt=1
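
These filters are ordinary query strings, so any HTTP client can use them. A minimal sketch with Python and ``requests`` follows; the host and token are placeholders.

::

    import requests

    AWX_URL = "https://awx.example.com"                   # placeholder AWX host
    HEADERS = {"Authorization": "Bearer <oauth2-token>"}  # placeholder OAuth2 token

    # List job templates configured to produce slice jobs.
    templates = requests.get(
        f"{AWX_URL}/api/v2/job_templates/",
        headers=HEADERS,
        params={"job_slice_count__gt": 1},
    ).json()["results"]
    for jt in templates:
        print(jt["id"], jt["name"], jt["job_slice_count"])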
|
||||
|
||||
|
||||
|
||||
997
docs/docsite/rst/userguide/job_templates.rst
Normal file
997
docs/docsite/rst/userguide/job_templates.rst
Normal file
@@ -0,0 +1,997 @@
|
||||
.. _ug_JobTemplates:

Job Templates
===============

.. index::
   single: job templates
   pair: job types; run
   pair: job types; check
   pair: job types; scan
   single: system tracking; scan job

A :term:`job template` is a definition and set of parameters for running an Ansible job. Job templates are useful to execute the same job many times. Job templates also encourage the reuse of Ansible playbook content and collaboration between teams.

The **Templates** menu opens a list of the job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the timestamp of the last job that ran using that template. You can click **Expanded** (arrow next to each entry) to view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template.

|Job templates - home with example job template|

.. |Job templates - home with example job template| image:: ../common/images/job-templates-home-with-example-job-template.png

From this screen, you can launch (|launch|), edit (|edit|), and copy (|copy|) a job template. To delete a job template, you must select one or more templates and click the **Delete** button. Before deleting a job template, be sure it is not used in a workflow job template.

.. |edit| image:: ../common/images/edit-button.png

.. |delete| image:: ../common/images/delete-button.png

.. include:: ../common/work_items_deletion_warning.rst

.. note::

   Job templates can be used to build a workflow template. Templates that show the Workflow Visualizer (|wf-viz-icon|) icon next to them are workflow templates. Clicking the icon allows you to graphically build a workflow. Many parameters in a job template allow you to enable **Prompt on Launch**; these can be modified at the workflow level and do not affect the values assigned at the job template level. For instructions, see the :ref:`ug_wf_editor` section.

.. |wf-viz-icon| image:: ../common/images/wf-viz-icon.png

Create a Job Template
-----------------------

To create a new job template:

1. Click the **Add** button then select **Job Template** from the menu list.

2. Enter the appropriate details into the following fields:

.. note::

   If a field has the **Prompt on launch** checkbox selected, launching the job will prompt you for the value for that field upon launch. Most prompted values will override any values set in the job template; exceptions are noted below.

.. list-table::
   :widths: 10 40 25
   :header-rows: 1

   * - Field
     - Options
     - Prompt on Launch
   * - **Name**
     - Enter a name for the job.
     - N/A
   * - **Description**
     - Enter an arbitrary description as appropriate (optional).
     - N/A
   * - **Job Type**
     - Choose a job type:

       * **Run**: Execute the playbook when launched, running Ansible tasks on the selected hosts.
       * **Check**: Perform a "dry run" of the playbook and report changes that would be made without actually making them. Tasks that do not support check mode will be skipped and will not report potential changes.

       More information on job types can be found in the `Playbook execution <https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_intro.html#playbook-execution>`_ section of the Ansible documentation.

     - Yes
   * - **Inventory**
     - Choose the inventory to be used with this job template from the inventories available to the currently logged in user. A System Administrator must grant you or your team permissions to be able to use certain inventories in a job template.
     - Yes. The Inventory prompt shows up as its own step in a subsequent prompt window.
   * - **Project**
     - Choose the project to be used with this job template from the projects available to the currently logged in user.
     - N/A
   * - **SCM Branch**
     - This field is only present if you chose a project that allows branch override. Specify the overriding branch to use in your job run. If left blank, the specified SCM branch (or commit hash or tag) from the project is used. For more detail, see :ref:`job branch overriding <ug_job_branching>`.
     - Yes
   * - **Execution Environment**
     - Select the container image to be used to run this job. A project must be selected before you can select an |ee|.
     - Yes. The |ee| prompt shows up as its own step in a subsequent prompt window.
   * - **Playbook**
     - Choose the playbook to be launched with this job template from the available playbooks. This field automatically populates with the names of the playbooks found in the project base path for the selected project. Alternatively, you can enter the name of a playbook that is not listed (such as ``foo.yml``) to run with this job template. If you enter a filename that is not valid, the template will display an error, or cause the job to fail.
     - N/A
   * - **Credentials**
     - Click the |search| button to open a separate window. Choose the credential from the available options to be used with this job template. Use the drop-down menu list to filter by credential type if the list is extensive. Some credential types are not listed because they do not apply to certain job templates.
     - * If selected, upon launching a job template that has a default credential, you can only replace that default credential with another credential of the same type. An example of such a message:

         ::

            Job Template default credentials must be replaced
            with one of the same type. Please select a credential
            for the following types in order to proceed: Machine.

       * Alternatively, you can add more credentials as you see fit.
       * The Credentials prompt shows up as its own step in a subsequent prompt window.
   * - **Labels**
     - * Optionally supply labels that describe this job template, such as "dev" or "test". Labels can be used to group and filter job templates and completed jobs in the display.
       * Labels are created when they are added to the Job Template. Labels are associated to a single Organization using the Project that is provided in the Job Template. Members of the Organization can create labels on a Job Template if they have edit permissions (such as the admin role).
       * Once the Job Template is saved, the labels appear in the Job Templates overview in the *Expanded* view.
       * Click the (|x|) beside a label to remove it. When a label is removed, it is no longer associated with that particular Job or Job Template, but it will remain associated with any other jobs that reference it.
       * Jobs inherit labels from the Job Template at the time of launch. If a label is deleted from a Job Template, it is also deleted from the Job.
     - * If selected, even if a default value is supplied, you will be prompted upon launch to supply additional labels if needed.
       * You will not be able to delete existing labels - clicking (|x-circle|) only removes the newly added labels, not existing default labels.
   * - **Variables**
     - * Pass extra command line variables to the playbook. This is the ``-e`` or ``--extra-vars`` command line parameter for ``ansible-playbook`` that is documented in the Ansible documentation at `Passing Variables on the Command Line <https://docs.ansible.com/ansible/latest/reference_appendices/general_precedence.html>`_.
       * Provide key/value pairs using either YAML or JSON. These variables have the highest precedence and override other variables specified elsewhere. An example value might be:

         ::

            git_branch: production
            release_version: 1.5
     - Yes. If you want to be able to specify ``extra_vars`` on a schedule, you must select **Prompt on Launch** for **Variables** on the job template, or enable a survey on the job template; answered survey questions then become ``extra_vars``.
   * - **Forks**
     - The number of parallel or simultaneous processes to use while executing the playbook. A value of zero uses the Ansible default setting, which is 5 parallel processes unless overridden in ``/etc/ansible/ansible.cfg``.
     - Yes
   * - **Limit**
     - A host pattern to further constrain the list of hosts managed or affected by the playbook. Multiple patterns can be separated by colons (``:``). As with core Ansible, ``a:b`` means "in group a or b", ``a:b:&c`` means "in a or b but must be in c", and ``a:!b`` means "in a, and definitely not in b". For more information and examples refer to `Patterns <https://docs.ansible.com/ansible/latest/inventory_guide/intro_patterns.html>`_ in the Ansible documentation.
     - Yes
   * - **Verbosity**
     - Control the level of output Ansible produces as the playbook executes. Choose the verbosity from Normal to various Verbose or Debug settings. This only appears in the "details" report view. Verbose logging includes the output of all commands. Debug logging is exceedingly verbose and includes information on SSH operations that can be useful in certain support instances. Most users do not need to see debug mode output.

       .. warning::

          Verbosity 5 causes AWX to block heavily when jobs are running, which could delay reporting that the job has finished (even though it has) and can cause the browser tab to lock up.
     - Yes
   * - **Job Slicing**
     - Specify the number of slices you want this job template to run. Each slice will run the same tasks against a portion of the inventory. For more information about job slices, see :ref:`ug_job_slice`.
     - Yes
   * - **Timeout**
     - Allows you to specify the length of time (in seconds) that the job may run before it is canceled. Some caveats for setting the timeout value:

       * There is a global timeout defined in the settings which defaults to 0, indicating no timeout.
       * A negative timeout (<0) on a job template is a true "no timeout" on the job.
       * A timeout of 0 on a job template defaults the job to the global timeout (which is no timeout by default).
       * A positive timeout sets the timeout for that job template.
     - Yes
   * - **Show Changes**
     - Allows you to see the changes made by Ansible tasks.
     - Yes
   * - **Instance Groups**
     - Choose :ref:`ag_instance_groups` to associate with this job template. If the list is extensive, use the |search| to narrow the options. Note that job template instance groups contribute to the job scheduling criteria; see :ref:`ag_instance_groups_job_runtime_behavior` and :ref:`ag_instance_groups_control_where_job_runs` for rules. A System Administrator must grant you or your team permissions to be able to use an instance group in a job template. Use of a container group requires admin rights.
     - * Yes. If selected, you are providing the job's preferred instance groups in order of preference. If the first group is out of capacity, subsequent groups in the list will be considered until one with capacity is available, at which point that group will be selected to run the job.
       * If you prompt for an instance group, what you enter replaces the normal instance group hierarchy and overrides all of the organizations' and inventories' instance groups.
       * The Instance Groups prompt shows up as its own step in a subsequent prompt window.
   * - **Job Tags**
     - Begin typing and select the **Create x** drop-down to specify which parts of the playbook should be executed. For more information and examples refer to `Tags <https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html>`_ in the Ansible documentation.
     - Yes
   * - **Skip Tags**
     - Begin typing and select the **Create x** drop-down to specify certain tasks or parts of the playbook to skip. For more information and examples refer to `Tags <https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html>`_ in the Ansible documentation.
     - Yes

.. |search| image:: ../common/images/search-button.png

.. |x-circle| image:: ../common/images/x-delete-button.png

.. |x| image:: ../common/images/x-button.png

3. **Options**: Specify options for launching this template, if necessary.

- **Privilege Escalation**: If checked, you enable this playbook to run as an administrator. This is the equivalent of passing the ``--become`` option to the ``ansible-playbook`` command.
- **Provisioning Callbacks**: If checked, you enable a host to call back to AWX via the REST API and invoke the launch of a job from this job template. Refer to :ref:`ug_provisioning_callbacks` for additional information.
- **Enable Webhook**: Turns on the ability to interface with a predefined SCM system web service that is used to launch a job template. Currently supported SCM systems are GitHub and GitLab.

  .. _ug_jt_enable_webhooks:

  If you enable webhooks, other fields display, prompting for additional information:

  .. image:: ../common/images/job-templates-options-webhooks.png

  - **Webhook Service**: Select which service to listen for webhooks from.
  - **Webhook URL**: Automatically populated with the URL for the webhook service to POST requests to.
  - **Webhook Key**: Generated shared secret to be used by the webhook service to sign payloads sent to AWX. This must be configured in the settings on the webhook service in order for AWX to accept webhooks from this service.
  - **Webhook Credential**: Optionally, provide a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. Before you can select it, the credential must exist. See :ref:`ug_credentials_cred_types` to create one.

  For additional information on setting up webhooks, see :ref:`ug_webhooks`.

- **Concurrent Jobs**: If checked, you are allowing jobs in the queue to run simultaneously if they are not dependent on one another. Check this box if you want to run job slices simultaneously. Refer to :ref:`ug_job_concurrency` for additional information.
- **Enable Fact Storage**: When checked, AWX will store gathered facts for all hosts in an inventory related to the job running.
- **Prevent Instance Group Fallback**: Check this option to allow only the instance groups listed in the **Instance Groups** field above to execute the job. If unchecked, all available instances in the execution pool will be used based on the hierarchy described in :ref:`ag_instance_groups_control_where_job_runs`. Click the |help| icon for more information.

.. |help| image:: ../common/images/tooltips-icon.png

|Job templates - create new job template|

.. |Job templates - create new job template| image:: ../common/images/job-templates-create-new-job-template.png

4. When you have completed configuring the details of the job template, click **Save**.

Saving the template does not exit the job template page, but advances to the Job Template Details tab for viewing. After saving the template, you can click **Launch** to launch the job, or click **Edit** to add or change the attributes of the template, such as permissions and notifications, to view completed jobs, and to add a survey (if the job type is not a scan). You must first save the template prior to launching, otherwise the **Launch** button remains grayed out.

.. image:: ../common/images/job-templates-job-template-details.png

You can verify that the template is saved when the newly created template appears on the Templates list view.


Add Permissions
-----------------------

.. include:: ../common/permissions.rst

Work with Notifications
-------------------------

Clicking the **Notifications** tab allows you to review any notification integrations you have set up and their statuses, if they have run.

.. image:: ../common/images/job-template-completed-notifications-view.png

Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see :ref:`ug_notifications_on_off`.

If no notifications have been set up, click the **Add** button to create a new notification. Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types and extended messaging.


View Completed Jobs
---------------------

The **Completed Jobs** tab provides the list of jobs that have run from this template. Click **Expanded** to view details of each job, including its status, ID, and name; type of job, time started and completed, who started the job; and which template, inventory, project, and credential were used. You can filter the list of completed jobs using any of these criteria.

.. image:: ../common/images/job-template-completed-jobs-view.png

Sliced jobs that display on this list are labeled accordingly, with the number of sliced jobs that have run:

.. image:: ../common/images/sliced-job-shown-jobs-list-view.png

Scheduling
-----------

.. index::
   pair: job templates; scheduling

Access the schedules for a particular job template from the **Schedules** tab.

|Job Templates - schedule launch|

.. |Job Templates - schedule launch| image:: ../common/images/job-templates-schedules.png


Schedule a Job Template
~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: job templates; scheduling
   single: scheduling; add new

To schedule a job template run, click the **Schedules** tab.

- If schedules are already set up, review, edit, or enable/disable your schedule preferences.
- If schedules have not been set up, refer to :ref:`ug_scheduling` for more information.

If **Prompt on Launch** was selected for the **Credentials** field, and you create or edit scheduling information for your job template, a **Prompt** button displays at the bottom of the Schedules form. You will not be able to remove the default machine credential in the Prompt dialog without replacing it with another machine credential before you can save it. Below is an example of such a message:

.. note::

   To be able to set ``extra_vars`` on schedules, you must select **Prompt on Launch** for **Variables** on the job template, or configure and enable a survey on the job template; answered survey questions then become ``extra_vars``.

.. _ug_surveys:

Surveys
---------

.. index::
   pair: job templates; surveys

Job types of Run or Check will provide a way to set up surveys in the Job Template creation or editing screens. Surveys set extra variables for the playbook similar to the way 'Prompt for Extra Variables' does, but in a user-friendly question and answer way. Surveys also allow for validation of user input. Click the **Survey** tab to create a survey.

Use cases for surveys are numerous. An example might be if operations wanted to give developers a "push to stage" button they could run without advanced Ansible knowledge. When launched, this task could prompt for answers to questions such as, "What tag should we release?"

Many types of questions can be asked, including multiple-choice questions.


.. _ug_surveys_create:

Create a Survey
~~~~~~~~~~~~~~~~~~~

.. index::
   single: job templates; survey creation
   single: surveys; creation

To create a survey:

1. Click the **Survey** tab and click the **Add** button.

2. A survey can consist of any number of questions. For each question, enter the following information:

- **Question**: The question to ask the user.
- **Description**: (optional) A description of what's being asked of the user.
- **Answer Variable Name**: The Ansible variable name to store the user's response in. This is the variable to be used by the playbook. Variable names cannot contain spaces.
- **Answer Type**: Choose from the following question types.

  - *Text*: A single line of text. You can set the minimum and maximum length (in characters) for this answer.
  - *Textarea*: A multi-line text field. You can set the minimum and maximum length (in characters) for this answer.
  - *Password*: Responses are treated as sensitive information, much like an actual password is treated. You can set the minimum and maximum length (in characters) for this answer.
  - *Multiple Choice (single select)*: A list of options, of which only one can be selected at a time. Enter the options, one per line, in the **Multiple Choice Options** box.
  - *Multiple Choice (multiple select)*: A list of options, any number of which can be selected at a time. Enter the options, one per line, in the **Multiple Choice Options** box.
  - *Integer*: An integer number. You can set the minimum and maximum length (in characters) for this answer.
  - *Float*: A decimal number. You can set the minimum and maximum length (in characters) for this answer.

- **Required**: Whether or not an answer to this question is required from the user.
- **Minimum length** and **Maximum length**: Specify if a certain length in the answer is required.
- **Default answer**: The default answer to the question. This value is pre-filled in the interface and is used if the answer is not provided by the user.

.. image:: ../common/images/job-template-create-survey.png

3. Once you have entered the question information, click **Save** to add the question.

The survey question displays in the Survey list. For any question, you can click |edit| to edit the question, check the box next to each question and click **Delete** to delete the question, or use the toggle button at the top of the screen to enable or disable the survey prompt(s).

|job-template-completed-survey|

.. |job-template-completed-survey| image:: ../common/images/job-template-completed-survey.png

If you have more than one survey question, use the **Edit Order** button to rearrange the order of the questions by clicking and dragging on the grid icon.

.. image:: ../common/images/job-template-rearrange-survey.png

4. Click the **Add** button to add additional questions.


Optional Survey Questions
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   single: job templates; survey optional questions
   single: surveys; optional questions

The **Required** setting on a survey question determines whether the answer is optional or not for the user interacting with it.

Behind the scenes, optional survey variables can be passed to the playbook in ``extra_vars``, even when they aren't filled in.

- If a non-text variable (input type) is marked as optional, and is not filled in, no survey ``extra_var`` is passed to the playbook.

- If a text input or text area input is marked as optional, is not filled in, and has a minimum length greater than 0, no survey ``extra_var`` is passed to the playbook.

- If a text input or text area input is marked as optional, is not filled in, and has a minimum length of 0, that survey ``extra_var`` is passed to the playbook, with the value set to an empty string ("").

Launch a Job Template
--------------------------

.. index::
   pair: job templates; jobs, launching

A major benefit of AWX is the push-button deployment of Ansible playbooks. You can easily configure a template to store all the parameters you would normally pass to ``ansible-playbook`` on the command line--not just the playbooks, but the inventory, credentials, extra variables, and all options and settings you can specify on the command line.

Easier deployments drive consistency, by running your playbooks the same way each time, and allow you to delegate responsibilities--even users who aren't Ansible experts can run playbooks written by others.

Launch a job template in any of the following ways:

- Access the job template list from the **Templates** menu on the left navigation bar, or while in the Job Template Details view, scroll to the bottom to access the |launch| button from the list of templates.

.. image:: ../common/images/job-templates-home-with-example-job-template-launch.png

- While in the Job Template Details view of the job template you want to launch, click **Launch**.

.. |launch| image:: ../common/images/launch-button.png

A job may require additional information to run. The following data may be requested at launch:

- Credentials that were set up
- The option ``Prompt on Launch`` is selected for any parameter
- Passwords or passphrases that have been set to **Ask**
- A survey, if one has been configured for the job template
- Extra variables, if requested by the job template

.. note::

   If a job has user-provided values, then those are respected upon relaunch. If the user did not specify a value, then the job uses the default value from the job template. Jobs are not relaunched as-is. They are relaunched with the user prompts re-applied to the job template.

Below is an example job launch that prompts for Job Tags, and runs the example survey created in :ref:`ug_surveys`.

|job-launch-with-prompt-job-tags|

.. |job-launch-with-prompt-job-tags| image:: ../common/images/job-launch-with-prompt-at-launch-jobtags.png

|job-launch-with-prompt-survey|

.. |job-launch-with-prompt-survey| image:: ../common/images/job-launch-with-prompt-at-launch-survey.png

.. note::

   Providing values on one tab, going back to a previous tab, and then continuing on to the next tab will result in having to re-provide values on the rest of the tabs. Make sure you fill in the tabs in the order the prompts appear to avoid this.

Along with any extra variables set in the job template and survey, AWX automatically adds the following variables to the job environment. Note that ``awx_*`` variables are defined by the system and cannot be overridden. Variables about the job context, like ``awx_job_template_name``, are not affected if they are set in ``extra_vars``.

- ``awx_job_id``: The Job ID for this job run
- ``awx_job_launch_type``: The description to indicate how the job was started:

  - **manual**: Job was started manually by a user.
  - **relaunch**: Job was started via relaunch.
  - **callback**: Job was started via host callback.
  - **scheduled**: Job was started from a schedule.
  - **dependency**: Job was started as a dependency of another job.
  - **workflow**: Job was started from a workflow job.
  - **sync**: Job was started from a project sync.
  - **scm**: Job was created as an Inventory SCM sync.

- ``awx_job_template_id``: The Job Template ID that this job run uses
- ``awx_job_template_name``: The Job Template name that this job uses
- ``awx_execution_node``: The Execution Node name that launched this job
- ``awx_project_revision``: The revision identifier for the source tree that this particular job uses (it is also the same as the job's field ``scm_revision``)
- ``awx_project_scm_branch``: The configured default project SCM branch for the project the job template uses
- ``awx_job_scm_branch``: If the SCM Branch is overwritten by the job, the value is shown here
- ``awx_user_email``: The user email of the AWX user that started this job. This is not available for callback or scheduled jobs.
- ``awx_user_first_name``: The user's first name of the AWX user that started this job. This is not available for callback or scheduled jobs.
- ``awx_user_id``: The user ID of the AWX user that started this job. This is not available for callback or scheduled jobs.
- ``awx_user_last_name``: The user's last name of the AWX user that started this job. This is not available for callback or scheduled jobs.
- ``awx_user_name``: The user name of the AWX user that started this job. This is not available for callback or scheduled jobs.
- ``awx_schedule_id``: If applicable, the ID of the schedule that launched this job
- ``awx_schedule_name``: If applicable, the name of the schedule that launched this job
- ``awx_workflow_job_id``: If applicable, the ID of the workflow job that launched this job
- ``awx_workflow_job_name``: If applicable, the name of the workflow job that launched this job. Note this is also the same as the workflow job template.
- ``awx_inventory_id``: If applicable, the ID of the inventory this job uses
- ``awx_inventory_name``: If applicable, the name of the inventory this job uses

For compatibility, all variables are also given an "awx" prefix, for example, ``awx_job_id``.

Upon launch, AWX automatically redirects the web browser to the Job Status page for this job under the **Jobs** tab.

.. note::

   You can re-launch the most recent job from the list view to re-run on all hosts or just failed hosts in the specified inventory. Refer to :ref:`ug_jobs` in the |atu| for more detail.

When slice jobs are running, job lists display the workflow and job slices, as well as a link to view their details individually.

.. image:: ../common/images/sliced-job-shown-jobs-list-view.png

.. _ug_JobTemplates_bulk_api:

.. note::

   You may launch jobs in bulk using the newly added endpoint in the API, ``/api/v2/bulk/job_launch``. This endpoint accepts JSON and you can specify a list of unified job templates (such as job templates, project updates, etc.) to launch. The user must have the appropriate permission to launch all the jobs. Either all jobs are launched, or an error is returned indicating why the operation was not able to complete. Use the **OPTIONS** request to return the relevant schema. For more information, see the `Bulk endpoint <https://docs.ansible.com/automation-controller/latest/html/controllerapi/api_ref.html#/Bulk>`_ of the *Reference* section of the |atapi|.

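As a rough sketch of what a bulk launch might look like, the following ``curl`` request launches two job templates in one call. The hostname, ``$TOKEN``, the IDs, and the exact field names are illustrative placeholders; use an **OPTIONS** request against ``/api/v2/bulk/job_launch`` to confirm the schema your AWX version expects:

.. code-block:: bash

    # Launch two unified job templates (IDs 7 and 10) as a single bulk job.
    # awx.example.com, $TOKEN, and the template IDs are placeholder values.
    curl -s -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -X POST -d '{"name": "Example bulk launch",
                      "jobs": [{"unified_job_template": 7},
                               {"unified_job_template": 10}]}' \
         "https://awx.example.com/api/v2/bulk/job_launch/"
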
Copy a Job Template
---------------------

If you choose to copy a Job Template, the copy **does not** include any associated schedule, notifications, or permissions. Schedules and notifications must be recreated by the user or admin creating the copy of the Job Template. The user copying the Job Template will be granted the admin permission, but no other permissions are assigned (copied) to the Job Template.

1. Access the job template list from the **Templates** menu on the left navigation bar, or while in the Job Template Details view, scroll to the bottom to access it from the list of templates.

|Job templates - home with example job template|

2. Click the |copy| button associated with the template you want to copy.

.. |copy| image:: ../common/images/copy-button.png

The new template, with the name of the template from which you copied it plus a timestamp, displays in the list of templates.

3. Click to open the new template and click **Edit**.

4. Replace the contents of the **Name** field with a new name, and provide or modify the entries in the other fields to complete this page.

5. Click **Save** when done.

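If you prefer to copy a template through the API rather than the UI, the copy endpoint under each job template can be used. This is a minimal sketch; ``awx.example.com``, ``$TOKEN``, and the template ID ``7`` are placeholder values:

.. code-block:: bash

    # Create a copy of job template 7 with a new name.
    curl -s -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -X POST -d '{"name": "Demo Job Template (copy)"}' \
         "https://awx.example.com/api/v2/job_templates/7/copy/"
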
.. _ug_jobtemplates_scanjobs:


Fact Scan Playbooks
~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: system tracking; fact scan playbook
   pair: fact scan job; playbook

The scan job playbook, ``scan_facts.yml``, contains invocations of three ``fact scan modules`` - packages, services, and files - along with Ansible's standard fact gathering. The ``scan_facts.yml`` playbook file looks like the following:

.. code-block:: yaml

    - hosts: all
      vars:
        scan_use_checksum: false
        scan_use_recursive: false
      tasks:
        - scan_packages:
        - scan_services:
        - scan_files:
            paths: '{{ scan_file_paths }}'
            get_checksum: '{{ scan_use_checksum }}'
            recursive: '{{ scan_use_recursive }}'
          when: scan_file_paths is defined

The ``scan_files`` fact module is the only module that accepts parameters, passed via ``extra_vars`` on the scan job template.

.. code-block:: yaml

    scan_file_paths: '/tmp/'
    scan_use_checksum: true
    scan_use_recursive: true

- The ``scan_file_paths`` parameter may have multiple settings (such as ``/tmp/`` or ``/var/log``).
- The ``scan_use_checksum`` and ``scan_use_recursive`` parameters may also be set to false or omitted. An omission is the same as a false setting.

Scan job templates should enable ``become`` and use credentials for which ``become`` is a possibility. You can enable ``become`` by checking **Enable Privilege Escalation** in the Options section of the job template:

.. image:: ../common/images/job-templates-create-new-job-template-become.png

Supported OSes for ``scan_facts.yml``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If you use the ``scan_facts.yml`` playbook with fact caching enabled, ensure that your OS is supported:

- |rhel| 5, 6, & 7
- OEL 6 & 7
- SLES 11 & 12
- Debian 6, 7, 8
- Fedora 22, 23, 24
- Amazon Linux 2016.03
- Windows Server 2008 and later

Note that some of these operating systems may require initial configuration in order to be able to run Python and/or have access to the Python packages (such as ``python-apt``) that the scan modules depend on.

Pre-scan Setup
~~~~~~~~~~~~~~~~~~

The following are examples of playbooks that configure certain distributions so that scan jobs can be run against them.

**Bootstrap Fedora (23, 24)**

::

    ---

    - name: Get Fedora ready
      hosts: all
      become: yes
      gather_facts: no

      tasks:

        - name: Update packages
          raw: dnf -y update

        - name: Install python-simplejson
          raw: dnf -y install python-simplejson

        - name: Install rpm-python
          raw: dnf -y install rpm-python

Custom Fact Scans
~~~~~~~~~~~~~~~~~~~

.. index::
   pair: system tracking; custom fact scans
   pair: fact scan job; custom
   pair: custom fact scans; playbook

A playbook for a custom fact scan is similar to the example of the Fact Scan Playbook above. As an example, a playbook that only uses a custom ``scan_foo`` Ansible fact module would look like this:

*scan_custom.yml*:

.. code-block:: yaml

    - hosts: all
      gather_facts: false
      tasks:
        - scan_foo:

*scan_foo.py*:

.. code-block:: python

    # Import AnsibleModule so the module can run under Ansible.
    from ansible.module_utils.basic import AnsibleModule


    def main():
        module = AnsibleModule(
            argument_spec=dict())

        foo = [
            {
                "hello": "world"
            },
            {
                "foo": "bar"
            }
        ]
        results = dict(ansible_facts=dict(foo=foo))
        module.exit_json(**results)


    if __name__ == '__main__':
        main()

To use a custom fact module, ensure that it lives in the ``/library/`` subdirectory of the Ansible project used in the scan job template. This fact scan module is very simple, returning a hard-coded set of facts:

.. code-block:: json

    [
        {
            "hello": "world"
        },
        {
            "foo": "bar"
        }
    ]

Refer to the `Creating an info or a facts module <https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#creating-an-info-or-a-facts-module>`_ section of the Ansible documentation for more information.

.. _ug_fact_caching:

Fact Caching
----------------

.. index::
   pair: fact caching; playbook
   pair: facts; scan job templates

AWX can store and retrieve facts on a per-host basis through an Ansible Fact Cache plugin. This behavior is configurable on a per-job-template basis. Fact caching is turned off by default but can be enabled to serve fact requests for all hosts in an inventory related to the job running. This allows you to use job templates with ``--limit`` while still having access to the entire inventory of host facts. A global timeout setting, which the plugin enforces per host, can be specified (in seconds) through the Jobs settings menu:

.. image:: ../common/images/configure-awx-jobs-fact-cache-timeout.png

Upon launching a job that uses fact cache (``use_fact_cache=True``), AWX will store all ``ansible_facts`` associated with each host in the inventory associated with the job. The Ansible Fact Cache plugin that ships with AWX will only be enabled on jobs with fact cache enabled (``use_fact_cache=True``).

When a job that has fact cache enabled (``use_fact_cache=True``) finishes running, AWX will restore all records for the hosts in the inventory. Any records with update times *newer* than the currently stored facts per-host will be updated in the database.

New and changed facts will be logged via AWX's logging facility, specifically to the ``system_tracking`` namespace or logger. The logging payload will include the fields:

- ``host_name``
- ``inventory_id``
- ``ansible_facts``

where ``ansible_facts`` is a dictionary of all Ansible facts for ``host_name`` in the AWX inventory, ``inventory_id``.

.. note::

   If a hostname includes a forward slash (``/``), fact cache will not work for that host. If you have an inventory with 100 hosts and one host has a ``/`` in the name, 99 of those hosts will still collect facts.

Benefits of Fact Caching
~~~~~~~~~~~~~~~~~~~~~~~~~~

Fact caching saves a significant amount of time over running fact gathering. If you have a playbook in a job that runs against a thousand hosts and forks, you could easily spend 10 minutes gathering facts across all of those hosts. But if you run a job on a regular basis, the first run of it caches these facts and the next run will just pull them from the database. This cuts the runtime of jobs against large inventories, including Smart Inventories, by an enormous magnitude.

.. note::

   Do not modify the ``ansible.cfg`` file to apply fact caching. Custom fact caching could conflict with AWX's fact caching feature. It is recommended to use the fact caching module that comes with AWX.

You can choose to use cached facts in your job by enabling it in the **Options** field of the Job Templates window.

.. image:: ../common/images/job-templates-options-use-factcache.png

To clear facts, you need to run the Ansible ``clear_facts`` `meta task`_. Below is an example playbook that uses the Ansible ``clear_facts`` meta task.

.. _`meta task`: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/meta_module.html#examples

::

    - hosts: all
      gather_facts: false
      tasks:
        - name: Clear gathered facts from all currently targeted hosts
          meta: clear_facts

You can find the API endpoint for fact caching at:

.. code-block:: text

    http://<awx server name>/api/v2/hosts/x/ansible_facts

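For illustration, you could retrieve the cached facts for a single host with a request like the one below. This is only a sketch; ``awx.example.com``, ``$TOKEN``, and the host ID ``1`` are placeholder values for your own environment:

.. code-block:: bash

    # Fetch the stored ansible_facts for host ID 1.
    # Requires a credential (here an OAuth2 token) with read access to the host.
    curl -s -H "Authorization: Bearer $TOKEN" \
         "https://awx.example.com/api/v2/hosts/1/ansible_facts/"
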
.. _ug_CloudCredentials:

Utilizing Cloud Credentials
------------------------------

.. index::
   pair: job templates; cloud credentials

Cloud Credentials can be used when syncing a respective cloud inventory. Cloud Credentials may also be associated with a Job Template and included in the runtime environment for use by a playbook. The following Cloud Credentials are currently supported:

.. contents::
   :local:

OpenStack
~~~~~~~~~~~~

.. index::
   pair: cloud credentials; OpenStack

The sample playbook below invokes the ``nova_compute`` Ansible OpenStack cloud module and requires credentials to do anything meaningful; specifically, it requires the following information: ``auth_url``, ``username``, ``password``, and ``project_name``. These fields are made available to the playbook via the environmental variable ``OS_CLIENT_CONFIG_FILE``, which points to a YAML file written by AWX based on the contents of the cloud credential. This sample playbook loads the YAML file into the Ansible variable space.

``OS_CLIENT_CONFIG_FILE`` example:

::

    clouds:
      devstack:
        auth:
          auth_url: http://devstack.yoursite.com:5000/v2.0/
          username: admin
          password: your_password_here
          project_name: demo

Playbook example:

::

    - hosts: all
      gather_facts: false
      vars:
        config_file: "{{ lookup('env', 'OS_CLIENT_CONFIG_FILE') }}"
        nova_tenant_name: demo
        nova_image_name: "cirros-0.3.2-x86_64-uec"
        nova_instance_name: autobot
        nova_instance_state: 'present'
        nova_flavor_name: m1.nano

        nova_group:
          group_name: antarctica
          instance_name: deceptacon
          instance_count: 3
      tasks:
        - debug: msg="{{ config_file }}"
        - stat: path="{{ config_file }}"
          register: st
        - include_vars: "{{ config_file }}"
          when: st.stat.exists and st.stat.isreg

        - name: "Print out clouds variable"
          debug: msg="{{ clouds|default('No clouds found') }}"

        - name: "Setting nova instance state to: {{ nova_instance_state }}"
          local_action:
            module: nova_compute
            login_username: "{{ clouds.devstack.auth.username }}"
            login_password: "{{ clouds.devstack.auth.password }}"

Amazon Web Services
~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: cloud credentials; AWS

Amazon Web Services cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

- ``AWS_ACCESS_KEY_ID``
- ``AWS_SECRET_ACCESS_KEY``

All of the AWS modules will implicitly use these credentials when run via AWX without having to set the ``aws_access_key_id`` or ``aws_secret_access_key`` module options.


Google
~~~~~~~~

.. index::
   pair: cloud credentials; Google

Google cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

- ``GCE_EMAIL``
- ``GCE_PROJECT``
- ``GCE_CREDENTIALS_FILE_PATH``

All of the Google modules will implicitly use these credentials when run via AWX without having to set the ``service_account_email``, ``project_id``, or ``pem_file`` module options.

Azure
~~~~~~~

.. index::
   pair: cloud credentials; MS Azure

Azure cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

- ``AZURE_SUBSCRIPTION_ID``
- ``AZURE_CERT_PATH``

All of the Azure modules implicitly use these credentials when run via AWX without having to set the ``subscription_id`` or ``management_cert_path`` module options.

VMware
~~~~~~~~

.. index::
   pair: cloud credentials; VMware

VMware cloud credentials are exposed as the following environment variables during playbook execution (in the job template, choose the cloud credential needed for your setup):

- ``VMWARE_USER``
- ``VMWARE_PASSWORD``
- ``VMWARE_HOST``

The sample playbook below demonstrates usage of these credentials:

::

    - vsphere_guest:
        vcenter_hostname: "{{ lookup('env', 'VMWARE_HOST') }}"
        username: "{{ lookup('env', 'VMWARE_USER') }}"
        password: "{{ lookup('env', 'VMWARE_PASSWORD') }}"
        guest: newvm001
        from_template: yes
        template_src: linuxTemplate
        cluster: MainCluster
        resource_pool: "/Resources"
        vm_extra_config:
          folder: MyFolder

.. _ug_provisioning_callbacks:

Provisioning Callbacks
-----------------------

.. index::
   pair: job templates; provisioning callbacks

Provisioning callbacks are a feature of AWX that allow a host to initiate a playbook run against itself, rather than waiting for a user to launch a job to manage the host from the AWX UI. Please note that provisioning callbacks are *only* used to run playbooks on the calling host. Provisioning callbacks are meant for cloud bursting (that is, new instances that need client-to-server communication for configuration, such as transmitting an authorization key), not for running a job against another host. This provides for automatically configuring a system after it has been provisioned by another system (such as AWS auto-scaling, or an OS provisioning system like kickstart or preseed), or for launching a job programmatically without invoking the AWX API directly. The Job Template launched only runs against the host requesting the provisioning.

Frequently this would be accessed via a firstboot type script, or from cron.

To enable callbacks, check the *Provisioning Callbacks* checkbox in the Job Template. This displays the **Provisioning Callback URL** for this job template.

.. note::

   If you intend to use AWX's provisioning callback feature with a dynamic inventory, Update on Launch should be set for the inventory group used in the Job Template.

.. image:: ../common/images/provisioning-callbacks-config.png

Callbacks also require a Host Config Key, to ensure that foreign hosts with the URL cannot request configuration. Please provide a custom value for Host Config Key. The host key may be reused across multiple hosts to apply this job template against multiple hosts. Should you wish to control which hosts are able to request configuration, the key may be changed at any time.

To callback manually via REST, look at the callback URL in the UI, which is of the form:

::

    https://<AWX_SERVER_NAME>/api/v2/job_templates/7/callback/

The '7' in this sample URL is the job template ID in AWX.

The request from the host must be a POST. Here is an example using curl (all on a single line):

.. code-block:: bash

    curl -k -f -i -H 'Content-Type:application/json' -XPOST -d '{"host_config_key": "redhat"}' \
         https://<AWX_SERVER_NAME>/api/v2/job_templates/7/callback/

The requesting host must be defined in your inventory for the callback to succeed. If AWX fails to locate the host either by name or IP address in one of your defined inventories, the request is denied. When running a Job Template in this way, the host initiating the playbook run against itself must be in the inventory. If the host is missing from the inventory, the Job Template will fail with a "No Hosts Matched" type error message.

.. note::

   If your host is not in inventory and ``Update on Launch`` is set for the inventory group, AWX attempts to update the cloud-based inventory source before running the callback.

Successful requests result in an entry on the Jobs tab, where the results and history can be viewed.

While the callback can be accessed via REST, the suggested method of using the callback is to use one of the example scripts that ships with AWX - ``/usr/share/awx/request_awx_configuration.sh`` (Linux/UNIX) or ``/usr/share/awx/request_awx_configuration.ps1`` (Windows). Usage is described in the source code of the file by passing the ``-h`` flag, as shown below:

::

    ./request_awx_configuration.sh -h
    Usage: ./request_awx_configuration.sh <options>

    Request server configuration from AWX.

    OPTIONS:
       -h      Show this message
       -s      AWX server (e.g. https://ac.example.com) (required)
       -k      Allow insecure SSL connections and transfers
       -c      Host config key (required)
       -t      Job template ID (required)
       -e      Extra variables

This script has some intelligence; it knows how to retry commands and is therefore a more robust way to use callbacks than a simple curl request. As written, the script retries once per minute for up to ten minutes.

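For example, a firstboot script on the provisioned host might call the script as follows. The server URL, host config key, and template ID are placeholder values for your own environment:

.. code-block:: bash

    # Ask AWX to run job template 7 against this host, using the host config key.
    # -k allows an insecure SSL connection; omit it if your certificate is trusted.
    /usr/share/awx/request_awx_configuration.sh \
        -k \
        -s https://<AWX_SERVER_NAME> \
        -c redhat \
        -t 7
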
.. note::

   Please note that this is an example script. You should edit this script if you need more dynamic behavior when detecting failure scenarios, as any non-200 error code may not be a transient error requiring retry.

Most likely you will use callbacks with dynamic inventory in AWX, such as pulling cloud inventory from one of the supported cloud providers. In these cases, along with setting *Update On Launch*, be sure to configure an inventory cache timeout for the inventory source, to avoid hammering your cloud provider's API endpoints. Since the ``request_awx_configuration.sh`` script polls once per minute for up to ten minutes, a suggested cache invalidation time for inventory (configured on the inventory source itself) would be one or two minutes.

While we recommend against running the ``request_awx_configuration.sh`` script from a cron job, a suggested cron interval would be perhaps every 30 minutes. Repeated configuration can be easily handled by scheduling in AWX, so the primary use of callbacks by most users is to enable a base image that is bootstrapped into the latest configuration upon coming online. To do so, running at first boot is a better practice. First boot scripts are just simple init scripts that typically self-delete, so you would set up an init script that calls a copy of the ``request_awx_configuration.sh`` script and make that into an autoscaling image.

Passing Extra Variables to Provisioning Callbacks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: provisioning callbacks; extra variables
   pair: callbacks; extra variables

Just as you can pass ``extra_vars`` in a regular Job Template, you can also pass them to provisioning callbacks. To pass ``extra_vars``, the data sent must be part of the body of the POST request as application/json (as the content type). Use the following JSON format as an example when adding your own ``extra_vars`` to be passed:

::

    '{"extra_vars": {"variable1":"value1","variable2":"value2",...}}'

You can also pass extra variables to the Job Template call using ``curl``, as shown in the following example:

.. code-block:: bash

    root@localhost:~$ curl -f -H 'Content-Type: application/json' -XPOST \
                    -d '{"host_config_key": "redhat", "extra_vars": "{\"foo\": \"bar\"}"}' \
                    https://<AWX_SERVER_NAME>/api/v2/job_templates/7/callback

For more information, refer to :ref:`Launching Jobs with Curl <launch_jobs_curl>`.

.. _ug_jobtemplates_extravars:

Extra Variables
-------------------

.. index::
   pair: job templates; survey extra variables
   pair: surveys; extra variables

.. note::

   ``extra_vars`` passed to the job launch API are only honored if one of the following is true:

   - They correspond to variables in an enabled survey
   - ``ask_variables_on_launch`` is set to True

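As a minimal sketch of the second case, when **Prompt on Launch** is enabled for **Variables** (``ask_variables_on_launch``), a job template can be launched through the API with extra variables supplied in the request body. The hostname, ``$TOKEN``, and the template ID are placeholder values:

.. code-block:: bash

    # Launch job template 7, passing extra_vars in the POST body.
    # Only honored because ask_variables_on_launch is true (or a survey allows it).
    curl -s -H "Authorization: Bearer $TOKEN" \
         -H "Content-Type: application/json" \
         -X POST -d '{"extra_vars": {"git_branch": "production", "release_version": "1.5"}}' \
         "https://awx.example.com/api/v2/job_templates/7/launch/"
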
When you pass survey variables, they are passed as extra variables (``extra_vars``) within AWX. This can be tricky, as passing extra variables to a job template (as you would do with a survey) can override other variables being passed from the inventory and project.

For example, say that you have a variable defined on an inventory as ``debug = true``. It is entirely possible that this variable, ``debug = true``, can be overridden in a job template survey.

To ensure that the variables you need to pass are not overridden, ensure they are included by redefining them in the survey. Keep in mind that extra variables can be defined at the inventory, group, and host levels.

If specifying the ``ALLOW_JINJA_IN_EXTRA_VARS`` parameter, refer to the :ref:`AWX Tips and Tricks <ag_tips_jinja_extravars>` section of the |ata| to configure it in the Jobs Settings screen of the AWX UI.

.. index::
   pair: job templates; job variables
   single: job templates, overview
   single: job templates, hierarchy
   single: variable precedence
   single: extra_vars

.. note::

   The Job Template extra variables dictionary is merged with the Survey variables.

Here are some simplified examples of extra_vars in YAML and JSON formats:

The configuration in YAML format:

::

    launch_to_orbit: true
    satellites:
      - sputnik
      - explorer
      - satcom

The configuration in JSON format:

::

    {
      "launch_to_orbit": true,
      "satellites": ["sputnik", "explorer", "satcom"]
    }

The following image notes the behavior (hierarchy) of variable precedence in AWX as it compares to variable precedence in Ansible.

**AWX Variable Precedence Hierarchy (last listed wins)**

.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy.png

Relaunching Job Templates
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. index::
   pair: job templates; relaunch

A relaunch, as opposed to a manual launch of a job, is denoted by setting ``launch_type`` to ``relaunch``. The relaunch behavior deviates from the launch behavior in that it **does not** inherit ``extra_vars``.

Job relaunching does not go through the inherit logic. It uses the same ``extra_vars`` that were calculated for the job being relaunched.

For example, say that you launch a Job Template with no ``extra_vars``, which results in the creation of a Job called **j1**. Next, say that you edit the Job Template and add in some ``extra_vars`` (such as adding ``"{ "hello": "world" }"``).

Relaunching **j1** results in the creation of **j2**, but because there is no inherit logic and **j1** had no ``extra_vars``, **j2** will not have any ``extra_vars``.

To continue upon this example, if you launched the Job Template with the ``extra_vars`` you added after the creation of **j1**, the relaunch job created (**j3**) will include the ``extra_vars``. And relaunching **j3** results in the creation of **j4**, which would also include ``extra_vars``.

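For illustration, a job can also be relaunched through the API by POSTing to its relaunch endpoint; the resulting job reports ``launch_type: relaunch``. This is a sketch with placeholder hostname, ``$TOKEN``, and job ID:

.. code-block:: bash

    # Relaunch job 101; the new job reuses the extra_vars calculated for job 101.
    curl -s -H "Authorization: Bearer $TOKEN" \
         -X POST \
         "https://awx.example.com/api/v2/jobs/101/relaunch/"
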
318
docs/docsite/rst/userguide/jobs.rst
Normal file
@@ -0,0 +1,318 @@

.. _ug_jobs:

Jobs
=====

.. index::
   single: jobs

A :term:`job` is an instance of AWX launching an Ansible playbook against an inventory of hosts.

The **Jobs** link displays a list of jobs and their statuses--shown as completed successfully or failed, or as an active (running) job. The default view is collapsed (**Compact**) with the job name, status, job type, and start/finish times, but you can expand to see more information. You can sort this list by various criteria, and perform a search to filter the jobs of interest.

|Jobs - home with example job|

.. |Jobs - home with example job| image:: ../common/images/jobs-home-with-example-job.png

.. image:: ../common/images/jobs-list-all-expanded.png

Actions you can take from this screen include viewing the details and standard output of a particular job, relaunching (|launch|) jobs, or removing selected jobs. The relaunch operation only applies to relaunches of playbook runs and does not apply to project/inventory updates, system jobs, workflow jobs, etc.

.. _ug_job_results:
|
||||
|
||||
.. index::
|
||||
single: jobs; results
|
||||
single: job results
|
||||
|
||||
When a job relaunches, you are directed to the Jobs Output screen as the job runs. Clicking on any type of job also takes you to the Job Output View for that job, where you can filter jobs by various criteria:
|
||||
|
||||
|
||||
.. image:: ../common/images/job-details-view-filters.png
|
||||
|
||||
|
||||
- The **Stdout** option is the default display that shows the job processes and output
|
||||
- The **Event** option allows you to filter by the event(s) of interest, such as errors, host failures, host retries, items skipped, etc. You can include as many events in the filter as necessary.
|
||||
|
||||
.. image:: ../common/images/job-details-view-filters-examples.png
|
||||
|
||||
- The **Advanced** option is a refined search that allows you a combination of including or excluding criteria, searching by key, or by lookup type. For details about using Search, refer to the :ref:`ug_search` chapter.
|
||||
|
||||
|
||||
Inventory Sync Jobs
|
||||
----------------------
|
||||
|
||||
.. index::
|
||||
pair: job results; inventory sync
|
||||
|
||||
.. to reproduce: create a custom script and use it when adding a custom inventory source for an inventory.
|
||||
|
||||
When an inventory sync is executed, the full results automatically display in the Output tab. This shows the same information you would see if you ran it through the Ansible command line, and can be useful for debugging. ``ANSIBLE_DISPLAY_ARGS_TO_STDOUT`` is set to ``False`` by default for all playbook runs, matching Ansible's default behavior. This means task arguments are not shown in task headers in the Job Detail interface, to avoid leaking certain sensitive module parameters to stdout. If you wish to restore the prior behavior (despite the security implications), you can set ``ANSIBLE_DISPLAY_ARGS_TO_STDOUT`` to ``True`` via the ``AWX_TASK_ENV`` configuration setting. For more details, refer to the `ANSIBLE_DISPLAY_ARGS_TO_STDOUT`_ documentation.
|
||||
|
||||
.. _`ANSIBLE_DISPLAY_ARGS_TO_STDOUT`: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#envvar-ANSIBLE_DISPLAY_ARGS_TO_STDOUT
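
As a sketch only: ``AWX_TASK_ENV`` is a dictionary of environment variables applied to job runs, so enabling this behavior in the Jobs settings (UI or API) would look roughly like the following JSON; verify the exact setting form against your AWX version:

::

    {
        "AWX_TASK_ENV": {
            "ANSIBLE_DISPLAY_ARGS_TO_STDOUT": "True"
        }
    }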
|
||||
|
||||
|
||||
The icons at the top right corner of the Output tab allow you to relaunch (|launch|), download (|download|) the job output, or delete (|delete|) the job.
|
||||
|
||||
.. |launch| image:: ../common/images/launch-button.png
|
||||
.. |delete| image:: ../common/images/delete-button.png
|
||||
.. |cancel| image:: ../common/images/job-cancel-button.png
|
||||
.. |download| image:: ../common/images/download.png
|
||||
|
||||
|
||||
|job details example of inventory sync|
|
||||
|
||||
.. |job details example of inventory sync| image:: ../common/images/jobs-show-job-results-for-inv-sync.png
|
||||
|
||||
|
||||
.. note:: An inventory update can be performed while a related job is running. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
|
||||
|
||||
|
||||
Inventory sync details
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Access the **Details** tab to view details about the job execution.
|
||||
|
||||
.. image:: ../common/images/jobs-show-job-details-for-inv-sync.png
|
||||
|
||||
Notable details of the job executed are:
|
||||
|
||||
- **Status**: Can be any of the following:
|
||||
|
||||
- **Pending** - The inventory sync has been created, but not queued or started yet. Any job, not just inventory source syncs, will stay in pending until it is actually ready to be run by the system. Reasons for inventory source syncs not being ready include dependencies that are currently running (all dependencies must be completed before the next step can execute) or insufficient capacity in the locations where it is configured to run.
|
||||
- **Waiting** - The inventory sync is in the queue waiting to be executed.
|
||||
- **Running** - The inventory sync is currently in progress.
|
||||
- **Successful** - The inventory sync job succeeded.
|
||||
- **Failed** - The inventory sync job failed.
|
||||
|
||||
- **Inventory**: The name of the associated inventory group.
|
||||
|
||||
- **Source**: The type of cloud inventory.
|
||||
|
||||
- **Inventory Source Project**: The project used as the source of this inventory sync job.
|
||||
|
||||
- **Execution Environment**: The |ee| used.
|
||||
|
||||
- **Execution node**: The node used to execute the job.
|
||||
|
||||
- **Instance Group**: The name of the instance group used with this job (AWX is the default instance group).
|
||||
|
||||
By clicking on these items, where appropriate, you can view the corresponding job templates, projects, and other objects.
|
||||
|
||||
|
||||
SCM Inventory Jobs
|
||||
--------------------
|
||||
|
||||
.. index::
|
||||
pair: job results; SCM inventory
|
||||
|
||||
When an inventory sourced from an SCM is executed, the full results automatically display in the Output tab. This shows the same information you would see if you ran it through the Ansible command line, and can be useful for debugging. The icons at the top right corner of the Output tab allow you to relaunch (|launch|), download (|download|) the job output, or delete (|delete|) the job.
|
||||
|
||||
.. image:: ../common/images/jobs-show-job-results-for-scm-job.png
|
||||
|
||||
SCM inventory details
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Access the **Details** tab to view details about the job execution and its associated project.
|
||||
|
||||
.. image:: ../common/images/jobs-show-job-details-for-scm-job.png
|
||||
|
||||
Notable details of the job executed are:
|
||||
|
||||
- **Status**: Can be any of the following:
|
||||
|
||||
- **Pending** - The SCM job has been created, but not queued or started yet. Any job, not just SCM jobs, will stay in pending until it is actually ready to be run by the system. Reasons for SCM jobs not being ready include dependencies that are currently running (all dependencies must be completed before the next step can execute) or insufficient capacity in the locations where it is configured to run.
|
||||
- **Waiting** - The SCM job is in the queue waiting to be executed.
|
||||
- **Running** - The SCM job is currently in progress.
|
||||
- **Successful** - The last SCM job succeeded.
|
||||
- **Failed** - The last SCM job failed.
|
||||
|
||||
|
||||
- **Job Type**: SCM jobs display Source Control Update.
|
||||
|
||||
- **Project**: The name of the project.
|
||||
|
||||
- **Project Status**: Indicates whether the associated project was successfully updated.
|
||||
|
||||
- **Revision**: Indicates the revision number of the sourced project that was used in this job.
|
||||
|
||||
- **Execution Environment**: Specifies the |ee| used to run this job.
|
||||
|
||||
- **Execution Node**: Indicates the node on which the job ran.
|
||||
|
||||
- **Instance Group**: Indicates the instance group on which the job ran, if specified.
|
||||
|
||||
- **Job Tags**: Tags show the various job operations executed.
|
||||
|
||||
|
||||
By clicking on these items, where appropriate, you can view the corresponding job templates, projects, and other objects.
|
||||
|
||||
|
||||
Playbook Run Jobs
|
||||
-------------------
|
||||
|
||||
.. index::
|
||||
pair: jobs results; playbook run
|
||||
|
||||
When a playbook is executed, the full results automatically display in the Output tab. This shows the same information you would see if you ran it through the Ansible command line, and can be useful for debugging.
|
||||
|
||||
.. image:: ../common/images/jobs-show-job-results-for-example-job.png
|
||||
|
||||
The events summary captures a tally of events that were run as part of this playbook:
|
||||
|
||||
- the number of times this playbook has run in the **Plays** field
|
||||
|
||||
- the number of tasks associated with this playbook in the **Tasks** field
|
||||
|
||||
- the number of hosts associated with this playbook in the **Hosts** field
|
||||
|
||||
- the amount of time it took to complete the playbook run in the **Elapsed** field
|
||||
|
||||
.. image:: ../common/images/jobs-events-summary.png
|
||||
|
||||
|
||||
The icons next to the events summary allow you to relaunch (|launch|), download (|download|) the job output, or delete (|delete|) the job.
|
||||
|
||||
The host status bar runs across the top of the Output view. Hover over a section of the host status bar and the number of hosts associated with that particular status displays.
|
||||
|
||||
|Job - All Host Events|
|
||||
|
||||
.. |Job - All Host Events| image:: ../common/images/job-all-host-events.png
|
||||
|
||||
|
||||
The output for a Playbook job is also accessible after launching a job from the **Jobs** tab of its Job Templates page.
|
||||
|
||||
Click the various line-item tasks in the output to view their host details.
|
||||
|
||||
Search
|
||||
~~~~~~~
|
||||
|
||||
Use Search to look up specific events, hostnames, and their statuses. To filter only certain hosts with a particular status, specify one of the following valid statuses:
|
||||
|
||||
- **OK**: the playbook task returned "Ok".
|
||||
- **Changed**: the playbook task actually executed. Since Ansible tasks should be written to be idempotent, tasks may exit successfully without executing anything on the host. In these cases, the task would return Ok, but not Changed.
|
||||
- **Failed**: the task failed. Further playbook execution was stopped for this host.
|
||||
- **Unreachable**: the host was unreachable from the network or had another fatal error associated with it.
|
||||
- **Skipped**: the playbook task was skipped because no change was necessary for the host to reach the target state.
|
||||
- **Rescued**: introduced in Ansible 2.8, this shows the tasks that failed and then executed a rescue section.
|
||||
- **Ignored**: introduced in Ansible 2.8, this shows the tasks that failed and have ``ignore_errors: yes`` configured.
|
||||
|
||||
These statuses also display at the bottom of each Stdout pane, in a group of "stats" called the Host Summary fields.
|
||||
|
||||
.. image:: ../common/images/job-std-out-host-summary-rescued-ignored.png
|
||||
|
||||
|
||||
The example below shows a search with only unreachable hosts.
|
||||
|
||||
.. image:: ../common/images/job-std-out-filter-failed.png
|
||||
|
||||
For more details about using the Search, refer to the :ref:`ug_search` chapter.
|
||||
|
||||
The standard output view displays all the events that occur on a particular job. By default, all rows are expanded so that all the details are displayed. Use the collapse-all button (|collapse-all|) to switch to a view that only contains the headers for plays and tasks. Click the (|expand-all|) button to view all lines of the standard output.
|
||||
|
||||
.. |collapse-all| image:: ../common/images/job-details-view-std-out-collapse-all-icon.png
|
||||
.. |expand-all| image:: ../common/images/job-details-view-std-out-expand-all-icon.png
|
||||
|
||||
Alternatively, you can display all the details of a specific play or task by clicking on the arrow icons next to them. Click an arrow from sideways to downward to expand the lines associated with that play or task. Click the arrow back to the sideways position to collapse and hide the lines.
|
||||
|
||||
|
||||
.. image:: ../common/images/job-details-view-std-out-expand-collapse-icons.png
|
||||
|
||||
|
||||
Things to note when viewing details in the expand/collapse mode:
|
||||
|
||||
|
||||
- Each displayed line that is not collapsed has a corresponding line number and start time.
|
||||
|
||||
- An expand/collapse icon is at the start of any play or task after the play or task has completed.
|
||||
|
||||
- If querying for a particular play or task, it will appear collapsed at the end of its completed process.
|
||||
|
||||
- In some cases, an error message will appear, stating that the output may be too large to display. This occurs when there are more than 4000 events. Use the search and filter for specific events to bypass the error.
|
||||
|
||||
Click on a line of an event from the **Standard Out** pane and a **Host Events** dialog displays in a separate window. This window shows the host that was affected by that particular event.
|
||||
|
||||
.. note:: Upgrading to the latest versions of AWX involves progressively migrating all historical playbook output and events. This migration process is gradual, and happens automatically in the background after installation is complete. Installations with very large amounts of historical job output (tens, or hundreds of GB of output) may notice missing job output until migration is complete. Most recent data will show up at the top of the output, followed by older events. Migrating jobs with a large amount of events may take longer than jobs with a smaller amount.
|
||||
|
||||
|
||||
Host Details
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
single: jobs; host events
|
||||
|
||||
The **Host Details** dialog shows information about the host affected by the selected event and its associated play and task:
|
||||
|
||||
- the **Host**
|
||||
- the **Status**
|
||||
- the type of run in the **Play** field
|
||||
- the type of **Task**
|
||||
- if applicable, the Ansible **Module** for the task, and any *arguments* for that module
|
||||
|
||||
.. image:: ../common/images/job-details-host-hostevent.png
|
||||
|
||||
|
||||
To view the results in JSON format, click on the **JSON** tab. To view the output of the task, click **Standard Out**. To view errors from the output, click **Standard Error**.
|
||||
|
||||
|
||||
|
||||
Playbook run details
|
||||
~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Access the **Details** tab to view details about the job execution.
|
||||
|
||||
.. image:: ../common/images/jobs-show-job-details-for-example-job.png
|
||||
|
||||
|
||||
Notable details of the job executed are:
|
||||
|
||||
- **Status**: Can be any of the following:
|
||||
|
||||
- **Pending** - The playbook run has been created, but not queued or started yet. Any job, not just playbook runs, will stay in pending until it is actually ready to be run by the system. Reasons for playbook runs not being ready include dependencies that are currently running (all dependencies must be completed before the next step can execute) or insufficient capacity in the locations where it is configured to run.
|
||||
- **Waiting** - The playbook run is in the queue waiting to be executed.
|
||||
- **Running** - The playbook run is currently in progress.
|
||||
- **Successful** - The last playbook run succeeded.
|
||||
- **Failed** - The last playbook run failed.
|
||||
|
||||
- **Job Template**: The name of the job template from which this job was launched.
|
||||
|
||||
- **Inventory**: The inventory selected to run this job against.
|
||||
|
||||
- **Project**: The name of the project associated with the launched job.
|
||||
|
||||
- **Project Status**: The status of the project associated with the launched job.
|
||||
|
||||
- **Playbook**: The playbook used to launch this job.
|
||||
|
||||
- **Execution Environment**: The name of the |ee| used in this job.
|
||||
|
||||
- **Container Group**: The name of the container group used in this job.
|
||||
|
||||
- **Credentials**: The credential(s) used in this job.
|
||||
|
||||
- **Extra Variables**: Any extra variables passed when creating the job template are displayed here.
|
||||
|
||||
|
||||
By clicking on these items, where appropriate, you can view the corresponding job templates, projects, and other objects.
|
||||
|
||||
|
||||
|
||||
.. _ug_job_concurrency:
|
||||
|
||||
AWX Capacity Determination and Job Impact
|
||||
-------------------------------------------
|
||||
|
||||
.. index::
|
||||
pair: jobs; capacity
|
||||
pair: jobs; forks
|
||||
|
||||
.. include:: job_capacity.rst
|
||||
|
||||
|
||||
Job branch overriding
|
||||
----------------------
|
||||
|
||||
.. index::
|
||||
pair: job branch; overriding
|
||||
|
||||
.. include:: job_branching.rst
|
||||
19
docs/docsite/rst/userguide/logging_in.rst
Normal file
@@ -0,0 +1,19 @@
|
||||
|
||||
Logging In
|
||||
----------
|
||||
|
||||
.. index::
|
||||
single: logging in
|
||||
|
||||
To log in, browse to the user interface at: ``http://<server name>/``
|
||||
|
||||
|Login form|
|
||||
|
||||
.. |Login form| image:: ../common/images/login-form.png
|
||||
|
||||
Log in using a valid username and password.
|
||||
|
||||
|
||||
The default username is ``admin``, and the default password is set during installation. To configure users and passwords, access **Users** from the left navigation bar.
|
||||
|
||||
|
||||
176
docs/docsite/rst/userguide/main_menu.rst
Normal file
@@ -0,0 +1,176 @@
|
||||
.. _ug_ui_main:
|
||||
|
||||
The User Interface
|
||||
======================
|
||||
|
||||
.. index::
|
||||
pair: main menu; dashboard
|
||||
|
||||
|
||||
The User Interface offers a friendly graphical framework for your IT orchestration needs. The left navigation bar provides quick access to resources, such as **Projects**, **Inventories**, **Job Templates**, and **Jobs**.
|
||||
|
||||
.. note::
|
||||
|
||||
The new AWX User Interface is available for tech preview and is subject to change in a future release. To preview the new UI, click the **Enable Preview of New User Interface** toggle to **On** from the Miscellaneous System option of the Settings menu.
|
||||
|
||||
.. image:: ../common/images/configure-awx-system-misc-preview-newui.png
|
||||
|
||||
After saving, log out and log back in to access the new UI from the preview banner. To return to the current UI, click the link on the top banner where indicated.
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-preview-banner.png
|
||||
|
||||
Across the top-right side of the interface, you can access your user profile, the About page, view related documentation, and log out. Right below these options, you can view the activity stream for that user by clicking on the Activity Stream |activitystream| button.
|
||||
|
||||
.. |activitystream| image:: ../common/images/activitystream.png
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-top-nav.png
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
Views
|
||||
-------
|
||||
|
||||
.. index::
|
||||
single: dashboard
|
||||
pair: views; jobs
|
||||
pair: views; schedule
|
||||
|
||||
The User Interface provides several options for viewing information.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
|
||||
Dashboard view
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: dashboard; job status
|
||||
pair: dashboard; jobs tab
|
||||
pair: dashboard; schedule status
|
||||
pair: dashboard; host count
|
||||
|
||||
|
||||
|
||||
The **Dashboard** view begins with a summary of your hosts, inventories, and projects. Each of these is linked to the corresponding objects for easy access.
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-topsummary.png
|
||||
|
||||
|
||||
On the main Dashboard screen, a summary appears listing your current **Job Status**. The **Job Status** graph displays the number of successful and failed jobs over a specified time period. You can choose to limit the job types that are viewed, and to change the time horizon of the graph.
|
||||
|
||||
Also available for view are summaries of **Recent Jobs** and **Recent Templates** on their respective tabs.
|
||||
|
||||
The **Recent Jobs** section displays the most recently run jobs, their status, and the time at which they ran.
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-recent-jobs.png
|
||||
|
||||
The **Recent Templates** section of this display shows a summary of the most recently used templates. You can also access this summary by clicking **Templates** from the left navigation bar.
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-recent-templates.png
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
Clicking on **Dashboard** from the left navigation bar or the AWX logo at any time returns you to the Dashboard.
|
||||
|
||||
|
||||
Jobs view
|
||||
~~~~~~~~~~
|
||||
|
||||
Access the **Jobs** view by clicking **Jobs** from the left navigation bar. This view shows all the jobs that have run, including projects, templates, management jobs, SCM updates, playbook runs, etc.
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-jobs-view.png
|
||||
|
||||
|
||||
Schedules view
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Access the Schedules view by clicking **Schedules** from the left navigation bar. This view shows all the scheduled jobs that are configured.
|
||||
|
||||
|
||||
.. image:: ../common/images/ug-dashboard-schedule-view.png
|
||||
|
||||
|
||||
.. _ug_activitystreams:
|
||||
|
||||
Activity Stream
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
single: activity stream
|
||||
|
||||
|
||||
Most screens have an Activity Stream (|activitystream|) button. Clicking this brings up the
|
||||
**Activity Stream** for this object.
|
||||
|
||||
|
||||
|Users - Activity Stream|
|
||||
|
||||
.. |Users - Activity Stream| image:: ../common/images/users-activity-stream.png
|
||||
|
||||
An Activity Stream shows all changes for a particular object. For each change, the Activity Stream shows the time of the event, the user that
|
||||
initiated the event, and the action. The information displayed varies depending on the type of event. Clicking on the Examine (|examine|) button shows the event log for the change.
|
||||
|
||||
.. |examine| image:: ../common/images/examine-button.png
|
||||
|
||||
|event log|
|
||||
|
||||
.. |event log| image:: ../common/images/activity-stream-event-log.png
|
||||
|
||||
The Activity Stream can be filtered by the initiating user (or the
|
||||
system, if it was system initiated), and by any related object,
|
||||
such as a particular credential, job template, or schedule.
|
||||
|
||||
The Activity Stream on the main Dashboard shows the Activity Stream for the entire instance. Most pages allow viewing an activity stream filtered for that specific object.
|
||||
|
||||
|
||||
Workflow Approvals
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Access this view to see your workflow approval queue. The list contains actions that require you to approve or deny before a job can proceed. See
|
||||
:ref:`ug_wf_approval_nodes` for further detail.
|
||||
|
||||
|
||||
Resources and Access
|
||||
---------------------
|
||||
|
||||
The **Resources** and **Access** menus provide you access to the various components of AWX and allow you to configure who has permissions for those resources:
|
||||
|
||||
- Templates (:ref:`ug_JobTemplates` and :ref:`ug_wf_templates`)
|
||||
- :ref:`ug_credentials`
|
||||
- :ref:`ug_projects`
|
||||
- :ref:`ug_inventories`
|
||||
- :ref:`Hosts <ug_inventories_add_host>`
|
||||
- :ref:`ug_organizations`
|
||||
- :ref:`ug_users`
|
||||
- :ref:`ug_teams`
|
||||
|
||||
|
||||
|
||||
Administration
|
||||
---------------
|
||||
|
||||
.. index::
|
||||
single: admin menu
|
||||
|
||||
The **Administration** menu provides access to the various administrative options. From here, you can create, view, and edit:
|
||||
|
||||
- :ref:`ug_credential_types`
|
||||
- :ref:`ug_notifications`
|
||||
- :ref:`ag_management_jobs`
|
||||
- :ref:`ug_instance_groups`
|
||||
- :ref:`Instances <ag_instances>`
|
||||
- :ref:`ug_applications_auth`
|
||||
- :ref:`ug_execution_environments`
|
||||
- :ref:`ag_topology_viewer`
|
||||
|
||||
|
||||
The Settings Menu
|
||||
===================
|
||||
|
||||
Configuring global and system-level settings is accomplished through the **Settings** menu, which is described in further detail in the following section. The **Settings** menu offers access to administrative configuration options.
|
||||
|
||||
.. include:: ../common/settings-menu.rst
|
||||
125
docs/docsite/rst/userguide/notification_parameters_supported.rst
Normal file
@@ -0,0 +1,125 @@
|
||||
.. _ir_notifications_reference:
|
||||
|
||||
Supported Attributes for Custom Notifications
|
||||
==============================================
|
||||
|
||||
.. index::
|
||||
pair: notification;attributes
|
||||
pair: notification messages;custom
|
||||
|
||||
|
||||
This section describes the list of supported job attributes and the proper syntax for constructing the message text for notifications. The supported job attributes are:
|
||||
|
||||
- ``allow_simultaneous`` - (boolean) indicates if multiple jobs can run simultaneously from the JT associated with this job
|
||||
- ``awx_node`` - (string) the instance that managed the isolated execution environment
|
||||
- ``created`` - (datetime) timestamp when this job was created
|
||||
- ``custom_virtualenv`` - (string) custom virtual environment used to execute job
|
||||
- ``description`` - (string) optional description of the job
|
||||
- ``diff_mode`` - (boolean) if enabled, textual changes made to any templated files on the host are shown in the standard output
|
||||
- ``elapsed`` - (decimal) elapsed time in seconds that the job ran
|
||||
- ``execution_node`` - (string) node the job executed on
|
||||
- ``failed`` - (boolean) true if job failed
|
||||
- ``finished`` - (datetime) date and time the job finished execution
|
||||
- ``force_handlers`` - (boolean) when handlers are forced, they will run when notified even if a task fails on that host (note that some conditions - e.g. unreachable hosts - can still prevent handlers from running)
|
||||
- ``forks`` - (int) number of forks requested for job
|
||||
- ``id`` - (int) database id for this job
|
||||
- ``job_explanation`` - (string) status field to indicate the state of the job if it wasn't able to run and capture stdout
|
||||
- ``job_slice_count`` - (integer) if run as part of a sliced job, the total number of slices (if 1, job is not part of a sliced job)
|
||||
- ``job_slice_number`` - (integer) if run as part of a sliced job, the ID of the inventory slice operated on (if not part of a sliced job, attribute is not used)
|
||||
- ``job_tags`` - (string) only tasks with specified tags will execute
|
||||
- ``job_type`` - (choice) run, check, or scan
|
||||
- ``launch_type`` - (choice) manual, relaunch, callback, scheduled, dependency, workflow, sync, or scm
|
||||
- ``limit`` - (string) playbook execution limited to this set of hosts, if specified
|
||||
- ``modified`` - (datetime) timestamp when this job was last modified
|
||||
- ``name`` - (string) name of this job
|
||||
- ``playbook`` - (string) playbook executed
|
||||
- ``scm_revision`` - (string) scm revision from the project used for this job, if available
|
||||
- ``skip_tags`` - (string) playbook execution skips over this set of tag(s), if specified
|
||||
- ``start_at_task`` - (string) playbook execution begins at the task matching this name, if specified
|
||||
- ``started`` - (datetime) date and time the job was queued for starting
|
||||
- ``status`` - (choice) new, pending, waiting, running, successful, failed, error, canceled
|
||||
- ``timeout`` - (int) amount of time (in seconds) to run before the task is canceled
|
||||
- ``type`` - (choice) data type for this job
|
||||
- ``url`` - (string) URL for this job
|
||||
- ``use_fact_cache`` - (boolean) if enabled for job, AWX acts as an Ansible Fact Cache Plugin, persisting facts at the end of a playbook run to the database and caching facts for use by Ansible
|
||||
- ``verbosity`` - (choice) 0 through 5 (corresponding to Normal through WinRM Debug)
|
||||
- ``host_status_counts`` (count of hosts uniquely assigned to each status)
|
||||
- ``skipped`` (integer)
|
||||
- ``ok`` (integer)
|
||||
- ``changed`` (integer)
|
||||
- ``failures`` (integer)
|
||||
- ``dark`` (integer)
|
||||
- ``processed`` (integer)
|
||||
- ``rescued`` (integer)
|
||||
- ``ignored`` (integer)
|
||||
- ``failed`` (boolean)
|
||||
- ``summary_fields``:
|
||||
- ``inventory``
|
||||
- ``id`` - (integer) database ID for inventory
|
||||
- ``name`` - (string) name of the inventory
|
||||
- ``description`` - (string) optional description of the inventory
|
||||
- ``has_active_failures`` - (boolean) (deprecated) flag indicating whether any hosts in this inventory have failed
|
||||
- ``total_hosts`` - (deprecated) (int) total number of hosts in this inventory.
|
||||
- ``hosts_with_active_failures`` - (deprecated) (int) number of hosts in this inventory with active failures
|
||||
- ``total_groups`` - (deprecated) (int) total number of groups in this inventory
|
||||
- ``groups_with_active_failures`` - (deprecated) (int) number of groups in this inventory with active failures
|
||||
- ``has_inventory_sources`` - (deprecated) (boolean) flag indicating whether this inventory has external inventory sources
|
||||
- ``total_inventory_sources`` - (int) total number of external inventory sources configured within this inventory
|
||||
- ``inventory_sources_with_failures`` - (int) number of external inventory sources in this inventory with failures
|
||||
- ``organization_id`` - (id) organization containing this inventory
|
||||
- ``kind`` - (choice) (empty string) (indicating hosts have direct link with inventory) or 'smart'
|
||||
- ``project``
|
||||
- ``id`` - (int) database ID for project
|
||||
- ``name`` - (string) name of the project
|
||||
- ``description`` - (string) optional description of the project
|
||||
- ``status`` - (choices) one of new, pending, waiting, running, successful, failed, error, canceled, never updated, ok, or missing
|
||||
- ``scm_type`` - (choice) one of (empty string), git, hg, svn, insights
|
||||
- ``job_template``
|
||||
- ``id`` - (int) database ID for job template
|
||||
- ``name`` - (string) name of job template
|
||||
- ``description`` - (string) optional description for the job template
|
||||
- ``unified_job_template``
|
||||
- ``id`` - (int) database ID for unified job template
|
||||
- ``name`` - (string) name of unified job template
|
||||
- ``description`` - (string) optional description for the unified job template
|
||||
- ``unified_job_type`` - (choice) unified job type (job, workflow_job, project_update, etc.)
|
||||
- ``instance_group``
|
||||
- ``id`` - (int) database ID for instance group
|
||||
- ``name`` - (string) name of instance group
|
||||
- ``created_by``
|
||||
- ``id`` - (int) database ID of user that launched the operation
|
||||
- ``username`` - (string) username that launched the operation
|
||||
- ``first_name`` - (string) first name
|
||||
- ``last_name`` - (string) last name
|
||||
- ``labels``
|
||||
- ``count`` - (int) number of labels
|
||||
- ``results`` - list of dictionaries representing labels (e.g. {"id": 5, "name": "database jobs"})
|
||||
|
||||
Information about a job can be referenced in a custom notification message using grouped curly braces ``{{ }}``. Specific job attributes are accessed using dotted notation, for example ``{{ job.summary_fields.inventory.name }}``. Any characters used in front or around the braces, or plain text, can be added for clarification, such as '#' for job ID and single-quotes to denote some descriptor. Custom messages can include a number of variables throughout the message::
|
||||
|
||||
{{ job_friendly_name }} {{ job.id }} ran on {{ job.execution_node }} in {{ job.elapsed }} seconds.
|
||||
|
||||
In addition to the job attributes, there are some other variables that can be added to the template:
|
||||
|
||||
- ``approval_node_name`` - (string) the approval node name
|
||||
- ``approval_status`` - (choice) one of approved, denied, and timed_out
|
||||
- ``url`` - (string) URL of the job for which the notification is emitted (this applies to start, success, fail, and approval notifications)
|
||||
- ``workflow_url`` - (string) URL to the relevant approval node. This allows the notification recipient to go to the relevant workflow job page to see what's going on (i.e., ``This node can be viewed at: {{ workflow_url }}``). In cases of approval-related notifications, both ``url`` and ``workflow_url`` are the same.
|
||||
- ``job_friendly_name`` - (string) the friendly name of the job
|
||||
- ``job_metadata`` - (string) job metadata as a JSON string, for example::
|
||||
|
||||
{'url': 'https://awxhost/$/jobs/playbook/13',
|
||||
'traceback': '',
|
||||
'status': 'running',
|
||||
'started': '2019-08-07T21:46:38.362630+00:00',
|
||||
'project': 'Stub project',
|
||||
'playbook': 'ping.yml',
|
||||
'name': 'Stub Job Template',
|
||||
'limit': '',
|
||||
'inventory': 'Stub Inventory',
|
||||
'id': 42,
|
||||
'hosts': {},
|
||||
'friendly_name': 'Job',
|
||||
'finished': False,
|
||||
'credential': 'Stub credential',
|
||||
'created_by': 'admin'}
|
||||
559
docs/docsite/rst/userguide/notifications.rst
Normal file
@@ -0,0 +1,559 @@
|
||||
.. _ug_notifications:
|
||||
|
||||
Notifications
|
||||
***************
|
||||
|
||||
.. index::
|
||||
pair: jobs; notifications
|
||||
pair: notifications; inventory sources
|
||||
pair: notifications; groups
|
||||
pair: notifications; template
|
||||
|
||||
|
||||
A :term:`Notification Template` is an instance of a :term:`Notification` type (Email, Slack, Webhook, etc.) with a name, description, and a defined configuration.
|
||||
|
||||
For example:
|
||||
|
||||
- A username, password, server, and recipients are needed for an Email notification template
|
||||
- The token and a list of channels are needed for a Slack notification template
|
||||
- The URL and Headers are needed for a Webhook notification template
|
||||
|
||||
A Notification is a manifestation of the notification template; for example, when a job fails, a notification is sent using the configuration defined by the notification template.
|
||||
|
||||
At a high level, the typical flow for the notification system works as follows:
|
||||
|
||||
- A user creates a notification template using the REST API at the ``/api/v2/notification_templates`` endpoint (either through the API or through the UI).
|
||||
- A user assigns the notification template to any of the various objects that support it (all variants of job templates as well as organizations and projects) and at the appropriate trigger level for which they want the notification (started, success, or error). For example, a user may wish to assign a particular notification template to trigger when Job Template 1 fails, in which case they will associate the notification template with the job template at the ``/api/v2/job_templates/n/notification_templates_error`` API endpoint, as sketched after this list.
|
||||
- You can set notifications on job start, not just job end. Users and teams are also able to define their own notifications that can be attached to arbitrary jobs.
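
As an illustration of the association step above, the notification template is typically attached by POSTing a small JSON body containing its ID to the job template's ``notification_templates_error`` endpoint mentioned in the flow; the ID shown here is hypothetical:

::

    {
        "id": 3
    }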
|
||||
|
||||
|
||||
Notification Hierarchy
|
||||
=============================
|
||||
|
||||
.. index::
|
||||
pair: notifications; hierarchy
|
||||
|
||||
Notification templates assigned at certain levels will inherit templates defined on parent objects as such:
|
||||
|
||||
- Job Templates will use notification templates defined on them, as well as inherit notification templates from the Project used by the Job Template and from the Organization that it is listed under (via the Project).
|
||||
- Project Updates will use notification templates defined on the project and will inherit notification templates from the Organization associated with it
|
||||
- Inventory Updates will use notification templates defined on the Organization that it is listed under
|
||||
- Ad-hoc commands will use notification templates defined on the Organization that the inventory is associated with
|
||||
|
||||
Workflow
|
||||
==========
|
||||
|
||||
.. index::
|
||||
pair: notifications; template workflow
|
||||
|
||||
When a job succeeds or fails, the error or success handler will pull a list of relevant notification templates using the procedure defined above. It will then create a Notification object for each one containing relevant details about the job and send it to the destination (email addresses, Slack channel(s), SMS numbers, etc.). These Notification objects are available as related resources on job types (jobs, inventory updates, project updates), and also at ``/api/v2/notifications``. You may also see what notifications have been sent from a notification template by examining its related resources.
|
||||
|
||||
If a notification fails, it will not impact the job associated with it or cause it to fail. The status of the notification can be viewed at its detail endpoint (``/api/v2/notifications/<n>``).
|
||||
|
||||
|
||||
.. _ug_notifications_create:
|
||||
|
||||
Create a Notification Template
|
||||
===============================
|
||||
|
||||
.. index::
|
||||
pair: notifications; create template
|
||||
pair: template; notifications
|
||||
|
||||
|
||||
To create a Notification Template:
|
||||
|
||||
1. Click **Notifications** from the left navigation bar.
|
||||
|
||||
2. Click the **Add** button.
|
||||
|
||||
.. image:: ../common/images/notifications-template-add-new.png
|
||||
|
||||
3. Enter the name of the notification and a description in their respective fields, and specify the organization (required) it belongs to.
|
||||
|
||||
4. Choose a type of notification from the **Type** drop-down menu. Refer to the subsequent sections for additional information.
|
||||
|
||||
5. Once all required information is complete, click **Save** to add the notification.
|
||||
|
||||
|
||||
.. _ug_notifications_types:
|
||||
|
||||
Notification Types
|
||||
====================
|
||||
|
||||
.. index::
|
||||
pair: notifications; types
|
||||
triple: notifications; types; Email
|
||||
triple: notifications; types; Grafana
|
||||
triple: notifications; types; IRC
|
||||
triple: notifications; types; Mattermost
|
||||
triple: notifications; types; pagerduty
|
||||
triple: notifications; types; Rocket.Chat
|
||||
triple: notifications; types; Slack
|
||||
triple: notifications; types; Twilio
|
||||
triple: notifications; types; Webhook
|
||||
|
||||
Notification types supported with AWX:
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
Each of these has its own configuration and behavioral semantics, and testing them may need to be approached in different ways. Additionally, you can customize each type of notification down to a specific detail, or a set of criteria to trigger a notification. See :ref:`ug_custom_notifications` for more detail on configuring custom notifications. The following sections will give as much detail as possible on each type of notification.
|
||||
|
||||
|
||||
Email
|
||||
-------
|
||||
|
||||
The email notification type supports a wide variety of SMTP servers and has support for TLS/SSL connections.
|
||||
|
||||
You must provide the following details to set up an email notification:
|
||||
|
||||
- Host
|
||||
- Recipient list
|
||||
- Sender email
|
||||
- Port
|
||||
- Timeout (in seconds): allows you to specify up to 120 seconds, the length of time AWX may attempt connecting to the email server before giving up.
|
||||
|
||||
.. image:: ../common/images/notification-template-email.png
|
||||
|
||||
Grafana
|
||||
------------
|
||||
|
||||
Grafana is a fairly straightforward integration. First, create an API Key in the `Grafana system`_ (this is the token that is given to AWX).
|
||||
|
||||
.. _`Grafana system`: http://docs.grafana.org/tutorials/api_org_token_howto/
|
||||
|
||||
You must provide the following details to set up a Grafana notification:
|
||||
|
||||
- Grafana URL: The URL of the Grafana API service, generally ``http://yourcompany.grafana.com``.
|
||||
- Grafana API Key: The user must first create an API Key in the Grafana system (this is the token that is given to AWX).
|
||||
|
||||
The other options of note are:
|
||||
|
||||
- ID of the Dashboard: When you created an API Key for the Grafana account, you can set up a dashboard with its own unique ID.
|
||||
- ID of the Panel: If you added panels and graphs to your Grafana interface, you can specify its ID here.
|
||||
- Tags for the Annotation: Enter keywords that help identify the type(s) of event(s) of the notification you are configuring.
|
||||
- Disable SSL Verification: SSL verification is on by default, but you can choose to turn off verification of the authenticity of the target's certificate. Environments that use internal or private CAs should select this option to disable verification.
|
||||
|
||||
.. image:: ../common/images/notification-template-grafana.png
|
||||
|
||||
|
||||
IRC
|
||||
-----
|
||||
|
||||
The IRC notification takes the form of an IRC bot that will connect, deliver its messages to channel(s) or individual user(s), and then disconnect. The notification bot also supports SSL authentication. The bot does not currently support Nickserv identification. If a channel or user does not exist or is not online, the Notification will not fail; the failure scenario is reserved specifically for connectivity.
|
||||
|
||||
Connectivity information is straightforward:
|
||||
|
||||
- IRC Server Password (optional): IRC servers can require a password to connect. If the server does not require one, leave blank
|
||||
- IRC Server Port: The IRC server Port
|
||||
- IRC Server Address: The host name or address of the IRC server
|
||||
- IRC Nick: The bot's nickname once it connects to the server
|
||||
- Destination Channels or Users: A list of users and/or channels to which to send the notification.
|
||||
- SSL Connection (optional): Should the bot use SSL when connecting
|
||||
|
||||
|
||||
.. image:: ../common/images/notification-template-irc.png
|
||||
|
||||
Mattermost
|
||||
------------
|
||||
|
||||
The Mattermost notification type provides a simple interface to Mattermost's messaging and collaboration workspace. The parameters that can be specified are:
|
||||
|
||||
- Target URL (required): The full URL that will be POSTed to
|
||||
- Username
|
||||
- Channel
|
||||
- Icon URL: specifies the icon to display for this notification
|
||||
- Disable SSL Verification: Turns off verification of the authenticity of the target's certificate. Environments that use internal or private CA's should select this option to disable verification.
|
||||
|
||||
.. image:: ../common/images/notification-template-mattermost.png
|
||||
|
||||
|
||||
PagerDuty
|
||||
------------
|
||||
|
||||
PagerDuty is a fairly straightforward integration. First, create an API Key in the `PagerDuty system`_ (this is the token that is given to AWX) and then create a "Service" which provides an "Integration Key" that will also be given to AWX. The other required options are:
|
||||
|
||||
.. _`PagerDuty system`: https://support.pagerduty.com/docs/generating-api-keys
|
||||
|
||||
- API Token: The user must first create an API Key in the PagerDuty system (this is the token that is given to AWX).
|
||||
- PagerDuty Subdomain: When you sign up for the PagerDuty account, you receive a unique subdomain to communicate with. For instance, if you signed up as "testuser", the web dashboard will be at ``testuser.pagerduty.com`` and you will give the API ``testuser`` as the subdomain (not the full domain).
|
||||
- API Service/Integration Key
|
||||
- Client Identifier: This will be sent along with the alert content to the pagerduty service to help identify the service that is using the api key/service. This is helpful if multiple integrations are using the same API key and service.
|
||||
|
||||
.. image:: ../common/images/notification-template-pagerduty.png
|
||||
|
||||
Rocket.Chat
|
||||
-------------
|
||||
|
||||
The Rocket.Chat notification type provides an interface to Rocket.Chat's collaboration and communication platform. The parameters that can be specified are:
|
||||
|
||||
- Target URL (required): The full URL that will be POSTed to
|
||||
- Username
|
||||
- Icon URL: specifies the icon to display for this notification
|
||||
- Disable SSL Verification: Turns off verification of the authenticity of the target's certificate. Environments that use internal or private CA's should select this option to disable verification.
|
||||
|
||||
.. image:: ../common/images/notification-template-rocketchat.png
|
||||
|
||||
|
||||
Slack
|
||||
-----
|
||||
|
||||
Slack, a collaborative team communication and messaging tool, is pretty easy to configure.
|
||||
|
||||
You must supply the following to set up Slack notifications:
|
||||
|
||||
- A Slack app (refer to the `Basic App Setup <https://api.slack.com/authentication/basics>`_ page of the Slack documentation for information on how to create one)
|
||||
|
||||
- A token (refer to `Enabling Interactions with Bots <https://api.slack.com/bot-users>`_ and specific details on bot tokens on the `Token Types <https://api.slack.com/authentication/token-types#bot>`_ documentation page)
|
||||
|
||||
Once you have a bot/app set up, you must navigate to "Your Apps", click on the newly-created app and then go to **Add features and functionality**, which allows you to configure incoming webhooks, bots, and permissions; as well as **Install your app to your workspace**.
|
||||
|
||||
You must also invite the notification bot to join the channel(s) in question in Slack. Note that private messages are not supported.
|
||||
|
||||
.. image:: ../common/images/notification-template-slack.png
|
||||
|
||||
|
||||
Twilio
|
||||
--------
|
||||
|
||||
Twilio is a Voice and SMS automation service. Once you are signed in, you must create a phone number from which the message will be sent. You can then define a "Messaging Service" under Programmable SMS and associate the number you created before with it.
|
||||
|
||||
Note that you may need to verify this number or some other information before you are allowed to use it to send to any numbers. The Messaging Service does not need a status callback URL, nor does it need the ability to process inbound messages.
|
||||
|
||||
Under your individual (or sub) account settings, you will have API credentials. Twilio uses two credentials to determine which account an API request is coming from: the “Account SID”, which acts as a username, and the “Auth Token”, which acts as a password.
|
||||
|
||||
To set up Twilio, provide the following details:
|
||||
|
||||
- Account Token
|
||||
- Source Phone Number (this is the number associated with the messaging service above and must be given in the form of "+15556667777")
|
||||
- Destination SMS number (this will be the list of numbers to receive the SMS and should be the 10-digit phone number)
|
||||
- Account SID
|
||||
|
||||
.. image:: ../common/images/notification-template-twilio.png
|
||||
|
||||
|
||||
Webhook
|
||||
---------
|
||||
|
||||
The webhook notification type provides a simple interface for sending POSTs to a predefined web service. AWX will POST to this address using the application/json content type, with the data payload containing all relevant details in JSON format. Some web service APIs expect HTTP requests to be in a certain format with certain fields. You can configure more of the webhook notification in the following ways:
|
||||
|
||||
- configure the HTTP method (using **POST** or **PUT**)
|
||||
- body of the outgoing request
|
||||
- configure authentication (using basic auth)
|
||||
|
||||
The parameters for configuring webhooks are:
|
||||
|
||||
- Username
|
||||
- Basic Auth Password
|
||||
- Target URL (required): The full URL to which the webhook notification will be PUT or POSTed.
|
||||
- Disable SSL Verification: SSL verification is on by default, but you can choose to turn off verification of the authenticity of the target’s certificate. Environments that use internal or private CA’s should select this option to disable verification.
|
||||
- HTTP Headers (required): Headers in JSON form where the keys and values are strings.
|
||||
For example, ``{"Authentication": "988881adc9fc3655077dc2d4d757d480b5ea0e11", "MessageType": "Test"}``
|
||||
- HTTP Method (required). Select the method for your webhook:
|
||||
|
||||
- POST: Creates a new resource. Also acts as a catch-all for operations that do not fit into the other categories. It is likely you need to POST unless you know your webhook service expects a PUT.
|
||||
- PUT: Updates a specific resource (by an identifier) or a collection of resources. PUT can also be used to create a specific resource if the resource identifier is known beforehand.
|
||||
|
||||
|
||||
.. image:: ../common/images/notification-template-webhook.png
|
||||
|
||||
|
||||
Webhook payloads
|
||||
^^^^^^^^^^^^^^^^^
|
||||
|
||||
By default, AWX sends the following data to the webhook endpoint:
|
||||
|
||||
::
|
||||
|
||||
job id
|
||||
name
|
||||
url
|
||||
created_by
|
||||
started
|
||||
finished
|
||||
status
|
||||
traceback
|
||||
inventory
|
||||
project
|
||||
playbook
|
||||
credential
|
||||
limit
|
||||
extra_vars
|
||||
hosts
|
||||
http method
|
||||
|
||||
An example of a ``started`` notification sent via webhook message, as returned by AWX:
|
||||
|
||||
::
|
||||
|
||||
{"id": 38, "name": "Demo Job Template", "url": "https://host/#/jobs/playbook/38", "created_by": "bianca", "started":
|
||||
"2020-07-28T19:57:07.888193+00:00", "finished": null, "status": "running", "traceback": "", "inventory": "Demo Inventory",
|
||||
"project": "Demo Project", "playbook": "hello_world.yml", "credential": "Demo Credential", "limit": "", "extra_vars": "{}",
|
||||
"hosts": {}}POST / HTTP/1.1
|
||||
|
||||
|
||||
By default, AWX sends the following data to the webhook endpoint for a ``success``/``fail`` status:
|
||||
|
||||
::
|
||||
|
||||
job id
|
||||
name
|
||||
url
|
||||
created_by
|
||||
started
|
||||
finished
|
||||
status
|
||||
traceback
|
||||
inventory
|
||||
project
|
||||
playbook
|
||||
credential
|
||||
limit
|
||||
extra_vars
|
||||
hosts
|
||||
|
||||
|
||||
An example of a ``success``/``fail`` notification sent via webhook message, as returned by AWX:
|
||||
|
||||
::
|
||||
|
||||
{"id": 46, "name": "AWX-Collection-tests-awx_job_wait-long_running-XVFBGRSAvUUIrYKn", "url": "https://host/#/jobs/playbook/46",
|
||||
"created_by": "bianca", "started": "2020-07-28T20:43:36.966686+00:00", "finished": "2020-07-28T20:43:44.936072+00:00", "status": "failed",
|
||||
"traceback": "", "inventory": "Demo Inventory", "project": "AWX-Collection-tests-awx_job_wait-long_running-JJSlglnwtsRJyQmw", "playbook":
|
||||
"fail.yml", "credential": null, "limit": "", "extra_vars": "{\"sleep_interval\": 300}", "hosts": {"localhost": {"failed": true, "changed": 0,
|
||||
"dark": 0, "failures": 1, "ok": 1, "processed": 1, "skipped": 0, "rescued": 0, "ignored": 0}}}
|
||||
|
||||
|
||||
.. _ug_custom_notifications:
|
||||
|
||||
Create custom notifications
|
||||
=============================
|
||||
|
||||
You can :ref:`customize the text content <ir_notifications_reference>` of each of the :ref:`ug_notifications_types` by enabling the **Customize Messages** portion at the bottom of the notifications form using the toggle button.
|
||||
|
||||
.. image:: ../common/images/notification-template-customize.png
|
||||
|
||||
You can provide a custom message for various job events:
|
||||
|
||||
- Start
|
||||
- Success
|
||||
- Error
|
||||
- Workflow approved
|
||||
- Workflow denied
|
||||
- Workflow running
|
||||
- Workflow timed out
|
||||
|
||||
The message forms vary depending on the type of notification you are configuring. For example, messages for email and PagerDuty notifications have the appearance of a typical email form with a subject and body, in which case, AWX displays the fields as **Message** and **Message Body**. Other notification types only expect a **Message** for each type of event:
|
||||
|
||||
.. image:: ../common/images/notification-template-customize-simple.png
|
||||
|
||||
The **Message** fields are pre-populated with a template containing a top-level variable, ``job`` coupled with an attribute, such as ``id`` or ``name``, for example. Templates are enclosed in curly braces and may draw from a fixed set of fields provided by AWX, as shown in the pre-populated **Messages** fields.
|
||||
|
||||
.. image:: ../common/images/notification-template-customize-simple-syntax.png
|
||||
|
||||
This pre-populated field suggests commonly displayed messages to a recipient who is notified of an event. You can, however, customize these messages with different criteria by adding your own attribute(s) for the job as needed. Custom notification messages are rendered using Jinja - the same templating engine used by Ansible playbooks.
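
For example, a customized one-line **Message** built from the supported attributes might look like the following (the choice and order of attributes is illustrative only):

::

    {{ job_friendly_name }} '{{ job.name }}' #{{ job.id }} ended with status {{ job.status }} after {{ job.elapsed }} seconds: {{ url }}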
|
||||
|
||||
Messages and message bodies have different types of content:
|
||||
|
||||
- messages will always just be strings (one-liners only; new lines are not allowed)
|
||||
|
||||
- message bodies will be either a dictionary or block of text:
|
||||
|
||||
- the message body for *Webhooks* and *PagerDuty* uses dictionary definitions. The default message body for these is ``{{ job_metadata }}``; you can either leave that as is or provide your own dictionary
|
||||
|
||||
- the message body for email uses a block of text or a multi-line string. The default message body is:
|
||||
|
||||
.. code-block:: text
|
||||
|
||||
{{ job_friendly_name }} #{{ job.id }} had status {{ job.status }}, view details at {{ url }} {{ job_metadata }}
|
||||
|
||||
You can tweak this text (leaving ``{{ job_metadata }}`` in, or dropping it altogether). Since the body is a block of text, it can really be any string you want.
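
For example, a customized email **Message Body** might combine individual attributes with ``{{ job_metadata }}``; the layout below is only a sketch:

::

    {{ job_friendly_name }} #{{ job.id }} '{{ job.name }}' finished with status {{ job.status }}.

    Inventory: {{ job.summary_fields.inventory.name }}
    Details: {{ url }}

    {{ job_metadata }}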
|
||||
|
||||
``{{ job_metadata }}`` gets rendered as a dictionary containing fields that describe the job being executed. In all cases, ``{{ job_metadata }}`` will include the following fields:
|
||||
|
||||
- ``id``
|
||||
- ``name``
|
||||
- ``url``
|
||||
- ``created_by``
|
||||
- ``started``
|
||||
- ``finished``
|
||||
- ``status``
|
||||
- ``traceback``
|
||||
|
||||
.. note::
|
||||
|
||||
At the present time, you cannot query individual fields within ``{{ job_metadata }}``. When using ``{{ job_metadata }}`` in a notification template, all data
|
||||
will be returned.
|
||||
|
||||
The resulting dictionary will look something like this:
|
||||
|
||||
::
|
||||
|
||||
{"id": 18,
|
||||
"name": "Project - Space Procedures",
|
||||
"url": "https://host/#/jobs/project/18",
|
||||
"created_by": "admin",
|
||||
"started": "2019-10-26T00:20:45.139356+00:00",
|
||||
"finished": "2019-10-26T00:20:55.769713+00:00",
|
||||
"status": "successful",
|
||||
"traceback": ""
|
||||
}
|
||||
|
||||
If ``{{ job_metadata }}`` is rendered in a job, it will include the following additional fields:
|
||||
|
||||
- ``inventory``
|
||||
- ``project``
|
||||
- ``playbook``
|
||||
- ``credential``
|
||||
- ``limit``
|
||||
- ``extra_vars``
|
||||
- ``hosts``
|
||||
|
||||
|
|
||||
|
||||
The resulting dictionary will look something like:
|
||||
|
||||
::
|
||||
|
||||
{"id": 12,
|
||||
"name": "JobTemplate - Launch Rockets",
|
||||
"url": "https://host/#/jobs/playbook/12",
|
||||
"created_by": "admin",
|
||||
"started": "2019-10-26T00:02:07.943774+00:00",
|
||||
"finished": null,
|
||||
"status": "running",
|
||||
"traceback": "",
|
||||
"inventory": "Inventory - Fleet",
|
||||
"project": "Project - Space Procedures",
|
||||
"playbook": "launch.yml",
|
||||
"credential": "Credential - Mission Control",
|
||||
"limit": "",
|
||||
"extra_vars": "{}",
|
||||
"hosts": {}
|
||||
}
|
||||
|
||||
If ``{{ job_metadata }}`` is rendered in a workflow job, it will include the following additional field:
|
||||
|
||||
- ``body`` (this will enumerate all the nodes in the workflow job and includes a description of the job associated with each node)
|
||||
|
||||
|
|
||||
|
||||
The resulting dictionary will look something like this:
|
||||
|
||||
::
|
||||
|
||||
{"id": 14,
|
||||
"name": "Workflow Job Template - Launch Mars Mission",
|
||||
"url": "https://host/#/workflows/14",
|
||||
"created_by": "admin",
|
||||
"started": "2019-10-26T00:11:04.554468+00:00",
|
||||
"finished": "2019-10-26T00:11:24.249899+00:00",
|
||||
"status": "successful",
|
||||
"traceback": "",
|
||||
"body": "Workflow job summary:
|
||||
|
||||
node #1 spawns job #15, \"Assemble Fleet JT\", which finished with status successful.
|
||||
node #2 spawns job #16, \"Mission Start approval node\", which finished with status successful.\n
|
||||
node #3 spawns job #17, \"Deploy Fleet\", which finished with status successful."
|
||||
}
|
||||
|
||||
|
||||
For more detail, refer to `Using variables with Jinja2`_.
|
||||
|
||||
.. _`Using variables with Jinja2`: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#using-variables-with-jinja2
|
||||
|
||||
|
||||
AWX requires valid syntax in order to retrieve the correct data to display the messages. For a list of supported attributes and the proper syntax construction, refer to the :ref:`ir_notifications_reference` section of this guide.
|
||||
|
||||
|
||||
If you create a notification template that uses invalid syntax or references unusable fields, an error message displays indicating the nature of the error. If you delete a notification’s custom message, the default message is shown in its place.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
If you save the notifications template without editing the custom message (or edit and revert back to the default values), the **Details** screen assumes the defaults and will not display the custom message tables. If you edit and save any of the values, the entire table displays in the **Details** screen.
|
||||
|
||||
.. image:: ../common/images/notifications-with-without-messages.png
|
||||
|
||||
|
||||
.. _ug_notifications_on_off:
|
||||
|
||||
Enable and Disable Notifications
|
||||
==================================
|
||||
|
||||
In addition to being notified when a job succeeds or fails at the end of a job run, you can configure notifications to be sent when a specific job starts. Some behaviors to keep in mind:
|
||||
|
||||
- if a workflow template (WFJT) has notification on start enabled, and a job template (JT) within that workflow also has notification on start enabled, you will receive notifications for both
|
||||
- you can enable notifications to run on many JTs within a WFJT
|
||||
- you can enable notifications to run on a sliced job template (SJT) start and each slice will generate a notification
|
||||
- when you enable a notification to run on job start, and that notification gets deleted, the JT continues to run, but will result in an error message
|
||||
|
||||
You can enable notifications on job start, job success, and job failure, or any combination thereof, from the **Notifications** tab of the following resources:
|
||||
|
||||
- Job Template
|
||||
- Workflow Template
|
||||
- Projects (shown in the example below)
|
||||
- Inventory Source
|
||||
- Organizations
|
||||
|
||||
.. image:: ../common/images/projects-notifications-example-list.png
|
||||
|
||||
|
||||
For workflow templates that have approval nodes, in addition to *Start*, *Success*, and *Failure*, you can enable or disable certain approval-related events:
|
||||
|
||||
.. image:: ../common/images/wf-template-completed-notifications-view.png
|
||||
|
||||
Refer to :ref:`ug_wf_approval_nodes` for additional detail on working with these types of nodes.
|
||||
|
||||
|
||||
Configure the ``host`` hostname for notifications
|
||||
========================================================
|
||||
|
||||
.. index::
|
||||
pair: notifications; hostname configuration
|
||||
|
||||
In the :ref:`System Settings <configure_awx_system>`, you can replace the default value in the **Base URL of the service** field with your preferred hostname to change the notification hostname.
|
||||
|
||||
.. image:: ../common/images/configure-awx-system-misc-baseurl.png
|
||||
|
||||
Refreshing your license also changes the notification hostname. New installations of AWX should not have to set the hostname for notifications.
|
||||
|
||||
Reset the ``AWX_URL_BASE``
|
||||
------------------------------
|
||||
|
||||
.. index::
|
||||
pair: notifications; troubleshooting AWX_URL_BASE
|
||||
pair: notifications; resetting the AWX_URL_BASE
|
||||
|
||||
|
||||
AWX primarily determines the base URL (``AWX_URL_BASE``) by looking at an incoming request and setting the server address based on that request.
|
||||
|
||||
AWX takes settings values from the database first. If no settings values are found, it falls back to using the values from the settings files. If a user posts a license by navigating to the AWX host's IP address, the posted license is written to the settings entry in the database.
|
||||
|
||||
To change the ``AWX_URL_BASE`` if the wrong address has been picked up, navigate to **Miscellaneous System settings** from the Settings menu using the DNS entry you wish to appear in notifications, and re-add your license.
|
||||
|
||||
|
||||
|
||||
Notifications API
|
||||
====================
|
||||
|
||||
.. index::
|
||||
pair: notifications; API endpoints
|
||||
|
||||
Use the ``started``, ``success``, or ``error`` endpoints:
|
||||
|
||||
::
|
||||
|
||||
    /api/v2/organizations/N/notification_templates_started/
    /api/v2/organizations/N/notification_templates_success/
    /api/v2/organizations/N/notification_templates_error/
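For example, you can associate an existing notification template with one of these endpoints by POSTing its ID. This is only a hedged sketch; the hostname, token, and IDs below are placeholders:

::

    # Associate notification template 5 with organization 1 so it fires on job start
    curl -s -X POST https://awx.example.com/api/v2/organizations/1/notification_templates_started/ \
         -H "Authorization: Bearer $AWX_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"id": 5}'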
|
||||
|
||||
Additionally, the ``../../../N/notification_templates_started`` endpoints have **GET** and **POST** actions for:
|
||||
|
||||
- Organizations
|
||||
- Projects
|
||||
- Inventory Sources
|
||||
- Job Templates
|
||||
- System Job Templates
|
||||
- Workflow Job Templates
|
||||
113
docs/docsite/rst/userguide/organizations.rst
Normal file
@@ -0,0 +1,113 @@
|
||||
.. _ug_organizations:
|
||||
|
||||
Organizations
|
||||
===============
|
||||
|
||||
.. index::
|
||||
single: organizations
|
||||
|
||||
An :term:`Organization` is a logical collection of **Users**, **Teams**, **Projects**, and **Inventories**, and is the highest level in the AWX object hierarchy.
|
||||
|
||||
|awx hierarchy|
|
||||
|
||||
.. |awx hierarchy| image:: ../common/images/AWXHierarchy.png
|
||||
|
||||
Access the Organizations page by clicking **Organizations** from the left navigation bar. The Organizations page displays all of the existing organizations for your installation. Organizations can be searched by **Name** or **Description**. Modify and remove organizations using the **Edit** and **Delete** buttons.
|
||||
|
||||
.. note::
|
||||
A default organization is automatically created.
|
||||
|
||||
|Organizations - home showing example organization|
|
||||
|
||||
.. |Organizations - home showing example organization| image:: ../common/images/organizations-home-showing-example-organization.png
|
||||
|
||||
From this list view, you can edit the details of an organization (|edit button|) from the **Actions** menu.
|
||||
|
||||
.. |edit button| image:: ../common/images/edit-button.png
|
||||
|
||||
.. _ug_organizations_create:
|
||||
|
||||
Creating a New Organization
|
||||
------------------------------
|
||||
|
||||
1. You can create a new organization by clicking the **Add** button.
|
||||
|
||||
|Organizations - new organization form|
|
||||
|
||||
.. |Organizations - new organization form| image:: ../common/images/organizations-new-organization-form.png
|
||||
|
||||
2. An organization has several attributes that may be configured:
|
||||
|
||||
- Enter the **Name** for your organization (required).
|
||||
- Enter a **Description** for the organization.
|
||||
- Enter **Instance Groups** on which to run this organization.
|
||||
- Enter the name of the execution environment or search for an existing **Execution Environment** on which to run this organization. See :ref:`ug_execution_environments` for more information.
|
||||
- If used, enter the **Galaxy Credentials** or search from a list of existing ones.
|
||||
|
||||
3. Click **Save** to finish creating the organization.
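If you prefer to script this step, an organization can also be created through the AWX REST API. The following is a hedged sketch; the hostname, token, and field values are placeholders:

::

    curl -s -X POST https://awx.example.com/api/v2/organizations/ \
         -H "Authorization: Bearer $AWX_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"name": "Engineering", "description": "Example organization"}'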
|
||||
|
||||
Once created, AWX displays the Organization details and allows you to manage access and execution environments for the organization.
|
||||
|
||||
|Organizations - show record for example organization|
|
||||
|
||||
.. |Organizations - show record for example organization| image:: ../common/images/organizations-show-record-for-example-organization.png
|
||||
|
||||
|
||||
From the **Details** tab, you can edit or delete the organization.
|
||||
|
||||
.. include:: ../common/work_items_deletion_warning.rst
|
||||
|
||||
|
||||
|
||||
Work with Access
|
||||
------------------
|
||||
|
||||
.. index::
|
||||
pair: organizations; users
|
||||
pair: organizations; access
|
||||
pair: organizations; teams
|
||||
|
||||
|
||||
Clicking on **Access** (beside **Details** when viewing your organization), displays all the Users associated with this Organization and their roles.
|
||||
|
||||
|Organizations - show users for example organization|
|
||||
|
||||
.. |Organizations - show users for example organization| image:: ../common/images/organizations-show-users-permissions-organization.png
|
||||
|
||||
While you can manage the user membership for this Organization here, you can also manage user membership on a per-user basis from the Users page by clicking **Users** from the left navigation bar. Organizations have a unique set of roles not described here. You can assign specific users certain levels of permissions within your organization, or allow them to act as an admin for a particular resource. Refer to :ref:`rbac-ug` for more information.
|
||||
|
||||
Clicking on a user brings up that user's details, allowing you to review, grant, edit, and remove associated permissions for that user. For more information, refer to :ref:`ug_users`.
|
||||
|
||||
Add a User or Team
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
In order to add a user or team to an organization, the user or team must already be created. See :ref:`ug_users_create` and :ref:`ug_team_create` for additional detail. To add existing users or teams to the Organization:
|
||||
|
||||
.. include:: ../common/permissions.rst
|
||||
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
A user or team with roles associated will retain them even after they have been reassigned to another organization.
|
||||
|
||||
|
||||
|
||||
Work with Notifications
|
||||
--------------------------------
|
||||
|
||||
.. index::
|
||||
pair: organizations; notifications
|
||||
|
||||
Clicking the **Notifications** tab allows you to review any notification integrations you have set up.
|
||||
|
||||
.. image:: ../common/images/organizations-notifications-samples-list.png
|
||||
|
||||
Use the toggles to enable or disable the notifications to use with your particular organization. For more detail, see :ref:`ug_notifications_on_off`.
|
||||
|
||||
If no notifications have been set up, you must create them from the **Notifications** option on the left navigation bar.
|
||||
|
||||
.. image:: ../common/images/organization-notifications-empty.png
|
||||
|
||||
Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.
|
||||
|
||||
294
docs/docsite/rst/userguide/overview.rst
Normal file
@@ -0,0 +1,294 @@
|
||||
.. _overview:
|
||||
|
||||
Overview
|
||||
========
|
||||
|
||||
.. index::
|
||||
pair: overview; features
|
||||
|
||||
Thank you for your interest in AWX. AWX makes it possible for users across an organization to share, vet, and manage automation content by means of a simple, powerful, and agentless technical implementation. IT managers can provide guidelines on how automation is applied to individual teams. Meanwhile, automation developers retain the freedom to write tasks that use existing knowledge, without the operational overhead of conforming to complex tools and frameworks. It is a more secure and stable foundation for deploying end-to-end automation solutions, from hybrid cloud to the edge.
|
||||
|
||||
AWX allows users to define, operate, scale, and delegate automation across their organization.
|
||||
|
||||
|
||||
Real-time Playbook Output and Exploration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; real-time playbook
|
||||
|
||||
Watch playbooks run in real time, seeing each host as they check in. Easily go back and explore the results for specific tasks and hosts in great detail. Search for specific plays or hosts and see just those results, or quickly zero in on errors that need to be corrected.
|
||||
|
||||
"Push Button" Automation
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; automation
|
||||
|
||||
Access your favorite projects and re-trigger execution from the web interface with a minimum of clicking. AWX will ask for input variables, prompt for your credentials, kick off and monitor the job, and display results and host history over time.
|
||||
|
||||
|
||||
Enhanced and Simplified Role-Based Access Control and Auditing
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; role-based access control
|
||||
|
||||
AWX allows for the granting of permissions to perform a specific task (such as to view, create, or modify a file) to different teams or explicit users through role-based access control (RBAC).
|
||||
|
||||
Keep some projects private, while allowing some users to edit inventory and others to run playbooks against only certain systems--either in check (dry run) or live mode. You can also allow certain users to use credentials without exposing the credentials to them. Regardless of what you do, AWX records the history of operations and who made them--including objects edited and jobs launched.
|
||||
|
||||
Based on user feedback, AWX both expands and simplifies its role-based access control. No longer is job template visibility configured via a combination of permissions on inventory, projects, and credentials. If you want to give any user or team permissions to use a job template, just assign permissions directly on the job template. Similarly, credentials are now full objects in AWX's RBAC system, and can be assigned to multiple users and/or teams for use.
|
||||
|
||||
AWX includes an ‘Auditor’ type, who can see all aspects of the system's automation, but has no permission to run or change automation, for those that need a system-level auditor. (This may also be useful for a service account that scrapes automation information from the REST API.) Refer to :ref:`rbac-ug` for more information.
|
||||
|
||||
Subsequent releases of AWX provide more granular permissions, making it easier to delegate inside your organizations and remove automation bottlenecks.
|
||||
|
||||
|
||||
Cloud & Autoscaling Flexibility
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; autoscaling flexibility
|
||||
pair: features; cloud flexibility
|
||||
|
||||
AWX features a powerful provisioning callback feature that allows nodes to request configuration on demand. While optional, this is an ideal solution for a cloud auto-scaling scenario, integrating with provisioning servers like Cobbler, or when dealing with managed systems with unpredictable uptimes. Requiring no management software to be installed on remote nodes, the callback solution can be triggered via a simple call to 'curl' or 'wget', and is easily embeddable in init scripts, kickstarts, or preseeds. Access is controlled such that only machines in inventory can request configuration.
|
||||
|
||||
|
||||
The Ideal RESTful API
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; RESTful API
|
||||
|
||||
The AWX REST API is the ideal RESTful API for a systems management application, with all resources fully discoverable, paginated, searchable, and well modeled. A styled API browser allows API exploration from the API root at ``http://<server name>/api/``, showing off every resource and relation. Everything that can be done in the user interface can be done in the API - and more.
|
||||
|
||||
Backup and Restore
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; backup and restore
|
||||
|
||||
The ability to backup and restore your system(s) has been integrated into the AWX setup playbook, making it easy for you to backup and replicate your instance as needed.
|
||||
|
||||
Ansible Galaxy Integration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; Ansible Galaxy integration
|
||||
|
||||
When it comes to describing your automation, everyone repeats the DRY mantra--"Don’t Repeat Yourself." Using centralized copies of Ansible roles, such as in Ansible Galaxy, allows you to bring that philosophy to your playbooks. By including an Ansible Galaxy requirements.yml file in your project directory, AWX automatically fetches the roles your playbook needs from Galaxy, GitHub, or your local source control. Refer to :ref:`ug_galaxy` for more information.
|
||||
|
||||
Inventory Support for OpenStack
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; OpenStack inventory support
|
||||
|
||||
|
||||
Ansible is committed to making OpenStack simple for everyone to use. As part of that, dynamic inventory support has been added for OpenStack. This allows you to easily target any of the virtual machines or images that you’re running in your OpenStack cloud.
|
||||
|
||||
|
||||
Remote Command Execution
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; remote command execution
|
||||
|
||||
Often, you just need to do a simple task on a few hosts, whether it's adding a single user, updating a single security vulnerability, or restarting a misbehaving service. AWX includes remote command execution--any task that you can describe as a single Ansible play can be run on a host or group of hosts in your inventory, allowing you to manage your systems quickly and easily. Plus, it is all backed by an RBAC engine and detailed audit logging, removing any questions regarding who has done what to what machines.
|
||||
|
||||
|
||||
System Tracking
|
||||
~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; system tracking
|
||||
pair: features; fact cache
|
||||
|
||||
|
||||
You can collect facts by using the fact caching feature. Refer to :ref:`ug_fact_caching` for more detail.
|
||||
|
||||
|
||||
Integrated Notifications
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; notifications
|
||||
|
||||
AWX allows you to easily keep track of the status of your automation. You can configure stackable notifications for job templates, projects, or entire organizations, and configure different notifications for job start, job success, job failure, and job approval (for workflow nodes). The following notification sources are supported:
|
||||
|
||||
- Email
|
||||
- Grafana
|
||||
- IRC
|
||||
- Mattermost
|
||||
- PagerDuty
|
||||
- Rocket.Chat
|
||||
- Slack
|
||||
- Twilio
|
||||
- Webhook (post to an arbitrary webhook, for integration into other tools)
|
||||
|
||||
Additionally, you can :ref:`customize notification messages <ug_custom_notifications>` for each of the above notification types.
|
||||
|
||||
|
||||
Satellite Integration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; inventory sources, Red Hat Satellite 6
|
||||
|
||||
Dynamic inventory sources for Red Hat Satellite 6 are supported.
|
||||
|
||||
|
||||
Run-time Job Customization
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; run-time job customization
|
||||
|
||||
Bringing the flexibility of the Ansible command line, you can now prompt for any of the following:
|
||||
|
||||
- inventory
|
||||
- credential
|
||||
- job tags
|
||||
- limits
|
||||
|
||||
|
||||
Red Hat Insights Integration
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; playbooks, Red Hat Insights
|
||||
|
||||
AWX supports integration with Red Hat Insights, which allows Insights playbooks to be used as a Project.
|
||||
|
||||
|
||||
Enhanced User Interface
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; UI
|
||||
pair: features; user interface
|
||||
|
||||
The layout of the user interface is organized with intuitive navigational elements. With information displayed at-a-glance, it is intuitive to find and use the automation you need. Compact and expanded viewing modes show and hide information as needed, and various built-in attributes make it easy to sort.
|
||||
|
||||
|
||||
Custom Virtual Environments
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; venv
|
||||
pair: features; custom environment
|
||||
|
||||
Custom Ansible environment support allows you to have different Ansible environments and specify custom paths for different teams and jobs.
|
||||
|
||||
|
||||
Authentication Enhancements
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; authentication
|
||||
pair: features; OAuth 2 token
|
||||
|
||||
AWX supports LDAP, SAML, and token-based authentication. Enhanced LDAP and SAML support allows you to integrate your enterprise account information in a more flexible manner. Token-based authentication allows third-party tools and services to easily authenticate with AWX via integrated OAuth 2 token support.
|
||||
|
||||
Cluster Management
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; clustering
|
||||
pair: features; instance groups
|
||||
|
||||
Run-time management of cluster groups allows for easily configurable scaling.
|
||||
|
||||
|
||||
Container Platform Support
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; container support
|
||||
|
||||
AWX is available as a containerized pod service for Kubernetes environments that can be scaled up and down easily as needed.
|
||||
|
||||
|
||||
Workflow Enhancements
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; workflows, inventory overrides
|
||||
pair: features; workflows, convergence nodes
|
||||
pair: features; workflows, nesting
|
||||
pair: features; workflows, approval
|
||||
pair: features; workflows, pause
|
||||
|
||||
|
||||
In order to better model your complex provisioning, deployment, and orchestration workflows, AWX expanded workflows in a number of ways:
|
||||
|
||||
- **Inventory overrides for Workflows**. You can now override an inventory across a workflow at workflow definition time, or even at launch time. Define your application deployment workflow, and then easily re-use them in multiple environments.
|
||||
|
||||
- **Convergence nodes for Workflows**. When modeling complex processes, you sometimes need to wait for multiple steps to finish before proceeding. Now AWX workflows can easily replicate this; workflow steps can now wait for any number of prior workflow steps to complete properly before proceeding.
|
||||
|
||||
- **Workflow Nesting**. Re-use individual workflows as components of a larger workflow. Examples include combining provisioning and application deployment workflows into a single master workflow.
|
||||
|
||||
- **Workflow Pause and Approval**. You can build workflows containing approval nodes that require user intervention. This makes it possible to pause workflows in between playbooks so that a user can give approval (or denial) for continuing on to the next step in the workflow.
|
||||
|
||||
|
||||
Job Distribution
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; jobs, slicing
|
||||
pair: features; jobs, distribution
|
||||
|
||||
As automation moves enterprise-wide, the need to automate at scale grows. AWX offers the ability to take a fact gathering or configuration job running across thousands of machines and slice it into individual job slices that can be distributed across your AWX cluster for increased reliability, faster job completion, and better cluster utilization. If you need to change a parameter across 15,000 switches at scale, or gather information across your multi-thousand-node RHEL estate, you can now do so easily.
|
||||
|
||||
|
||||
Support for deployment in a FIPS-enabled environment
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; environment, FIPS
|
||||
|
||||
|
||||
If you require running your environment in restricted modes such as FIPS, AWX deploys and runs in such environments.
|
||||
|
||||
|
||||
Limit the number of hosts per organization
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; limiting, hosts
|
||||
|
||||
Many large organizations share AWX instances among multiple organizations and do not want any one organization to be able to use all of the licensed hosts. This feature allows superusers to set a specified upper limit on how many licensed hosts may be allocated to each organization. The AWX algorithm factors in changes to the limit for an organization and the total number of hosts across all organizations. Any inventory updates will fail if an inventory sync brings an organization out of compliance with the policy. Additionally, superusers are able to 'over-allocate' their licenses, with a warning.
|
||||
|
||||
|
||||
Inventory Plugins
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; inventory plugins
|
||||
|
||||
AWX uses the following inventory plugins from upstream collections when inventory updates are run with Ansible 2.9 and later:
|
||||
|
||||
- amazon.aws.aws_ec2
|
||||
- community.vmware.vmware_vm_inventory
|
||||
- azure.azcollection.azure_rm
|
||||
- google.cloud.gcp_compute
|
||||
- theforeman.foreman.foreman
|
||||
- openstack.cloud.openstack
|
||||
- ovirt.ovirt.ovirt
|
||||
- awx.awx.tower
|
||||
|
||||
|
||||
Secret Management System
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: features; secret management system
|
||||
pair: features; credential plugins
|
||||
pair: features; credential management
|
||||
|
||||
With a secret management system, external credentials are stored and supplied for use in AWX so you don't have to provide them directly.
|
||||
|
||||
249
docs/docsite/rst/userguide/project-sign.rst
Normal file
@@ -0,0 +1,249 @@
|
||||
.. _ug_content_signing:
|
||||
|
||||
Project Signing and Verification
|
||||
==================================
|
||||
|
||||
.. index::
|
||||
single: project validation
|
||||
pair: project; validation
|
||||
pair: project; signing
|
||||
|
||||
Project signing and verification provides the ability to sign files in your project directory and then verify whether or not that content has changed in any way, or files have been added or removed from the project unexpectedly. To accomplish this, a private key for signing and a matching public key for verifying are needed.
|
||||
|
||||
For project maintainers, the supported way to perform content signing is to use a utility called ``ansible-sign``, through the command-line interface (CLI) that comes with it.
|
||||
|
||||
The CLI aims to make it easy to use cryptographic technology like GNU Privacy Guard (GPG) to validate that specified files within a project have not been tampered with in any way. Currently, GPG is the only supported means of signing and validation.
|
||||
|
||||
AWX is used to verify the signed content. After a matching public key has been associated with the signed project, AWX will verify that the files included during signing have not changed, and that no files have been added or removed unexpectedly. If the signature is not valid or a file has changed, the project will fail to update, and no jobs making use of the project will be able to launch. Verification status of the project ensures that only secure, untampered content is run in jobs.
|
||||
|
||||
Assuming that the repository has already been configured for signing and verification (see below), the usual workflow for altering the project becomes the following:
|
||||
|
||||
1. User has a project repository set up already and wants to make a change to a file.
|
||||
2. User makes the change, runs ``ansible-sign project gpg-sign /path/to/project``, which updates a checksum manifest and signs it.
|
||||
3. User commits the change and the updated checksum manifest and the signature to the repository.
|
||||
4. When the user syncs the project, AWX (already configured, in this scenario) pulls in the new changes, checks that the public key associated with the project in AWX matches the private key that the checksum manifest was signed with (this prevents tampering with the checksum manifest itself), then re-calculates checksums of each file in the manifest to ensure that the checksum matches (and thus that no file has changed). It also looks to ensure that all files are accounted for: They must have been either included in, or excluded from, the ``MANIFEST.in`` file discussed below; if files have been added or removed unexpectedly, verification will fail.
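Condensed into commands, the maintainer's part of that workflow might look like the following sketch; the playbook path and commit message are hypothetical:

::

    $ ansible-sign project gpg-sign /path/to/project
    $ cd /path/to/project
    $ git add playbooks/site.yml .ansible-sign/
    $ git commit -m "Edit playbook and refresh signed checksum manifest"
    $ git push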
|
||||
|
||||
.. image:: ../common/images/content-sign-diagram.png
|
||||
|
||||
|
||||
Prerequisites
|
||||
--------------
|
||||
|
||||
- A valid GPG public/private keypair is required for signing content. Refer to `How to create GPG keypairs`_ for details.
|
||||
|
||||
.. _`How to create GPG keypairs`: https://www.redhat.com/sysadmin/creating-gpg-keypairs
|
||||
|
||||
Visit the `GnuPG documentation <https://www.gnupg.org/documentation/index.html>`_ for more information regarding GPG keys.
|
||||
|
||||
You can verify that you have a valid GPG keypair in your default GnuPG keyring with the following command:
|
||||
|
||||
::
|
||||
|
||||
$ gpg --list-secret-keys
|
||||
|
||||
|
||||
|
||||
If the above command produces no output, or one line of output that states, ``trustdb was created``, then you do not have a secret key in your default keyring. In this case, refer to `How to create GPG keypairs`_ to learn how to create a new keypair before proceeding. If it produces output other than that, you have a valid secret key and are ready to move on to using ``ansible-sign``.
|
||||
|
||||
|
||||
Add a GPG key to AWX
|
||||
----------------------
|
||||
|
||||
In order to use the GPG key for content signing and validation in AWX, you must add it to AWX. First, export the public key by running the following commands in the CLI:
|
||||
|
||||
::
|
||||
|
||||
    $ gpg --list-keys
    $ gpg --export --armour <key fingerprint> > my_public_key.asc
|
||||
|
||||
|
||||
1. In the AWX user interface, click **Credentials** from the left side navigation menu then click the **Add** button.
|
||||
|
||||
2. Give the new credential a meaningful name (for example, “Infrastructure team public GPG key”).
|
||||
|
||||
3. In the Credential Type field, select **GPG Public Key**.
|
||||
|
||||
4. Click **Browse** to locate and select the public key file (e.g., ``my_public_key.asc``)
|
||||
|
||||
5. Click **Save** when done.
|
||||
|
||||
.. image:: ../common/images/credentials-gpg-details.png
|
||||
|
||||
This credential can now be selected in :ref:`projects <ug_projects_add>`, and content verification will automatically take place on future project syncs.
|
||||
|
||||
.. image:: ../common/images/project-create-with-gpg-creds.png
|
||||
|
||||
.. note::
|
||||
|
||||
Use the project cache SCM timeout to control how often you want AWX to re-validate the signed content. When a project is configured to update on launch (of any job template configured to use that project), you can enable the cache timeout setting, which tells it to update after N seconds have passed since the last update. If validation is running too frequently, you can slow down how often project updates occur by specifying the time in the **Cache Timeout** field of the Option Details pane of the project.
|
||||
|
||||
.. image:: ../common/images/project-update-launch-cache-timeout.png
|
||||
|
||||
|
||||
|
||||
Access the ``ansible-sign`` CLI utility
|
||||
-----------------------------------------
|
||||
|
||||
The ``ansible-sign`` utility provides options for the user to sign a project and to verify whether the project is signed.
|
||||
|
||||
1. Run the following command to install ``ansible-sign``:
|
||||
|
||||
::
|
||||
|
||||
$ dnf install ansible-sign
|
||||
|
||||
2. Verify that ``ansible-sign`` was successfully installed:
|
||||
|
||||
::
|
||||
|
||||
$ ansible-sign --version
|
||||
|
||||
|
||||
Output similar to the following displays (possibly with a different version number):
|
||||
|
||||
::
|
||||
|
||||
ansible-sign 0.1
|
||||
|
||||
|
||||
This indicates you have successfully installed ``ansible-sign``.
|
||||
|
||||
|
||||
Signing your project
|
||||
--------------------
|
||||
|
||||
As the name suggests, signing a project involves an Ansible project directory. Refer to the `Ansible documentation <https://docs.ansible.com/ansible/latest/user_guide/sample_setup.html>`_ for more sophisticated examples of project directory structures.
|
||||
|
||||
The following sample project has a very simple structure. An inventory file, and two small playbooks under a playbooks directory:
|
||||
|
||||
::
|
||||
|
||||
    $ cd sample-project/
    $ tree -a .
    .
    ├── inventory
    └── playbooks
        ├── get_uptime.yml
        └── hello.yml

    1 directory, 3 files
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
The commands used in this section assume that your working directory is the root of your project. As a rule, ``ansible-sign project`` commands always take the project root directory as their last argument, and therefore, we use ``.`` to indicate the current working directory.
|
||||
|
||||
The way that ``ansible-sign`` protects content from tampering is by taking checksums (SHA256) of all of the secured files in the project, compiling those into a checksum manifest file, and then finally signing that manifest file.
|
||||
|
||||
The first step toward signing content is to create a file that tells ``ansible-sign`` which files to protect. This file should be called ``MANIFEST.in`` and reside in the project root directory.
|
||||
|
||||
Internally, ``ansible-sign`` makes use of the ``distlib.manifest`` module of Python’s distlib library, and thus ``MANIFEST.in`` must follow the syntax that this library specifies. See the `Python Packaging User Guide <https://packaging.python.org/en/latest/guides/using-manifest-in/#manifest-in-commands>`_ for an explanation of the ``MANIFEST.in`` file directives.
|
||||
|
||||
The sample project includes two directives, resulting in a ``MANIFEST.in`` file that looks like this:
|
||||
|
||||
::
|
||||
|
||||
    include inventory
    recursive-include playbooks *.yml
|
||||
|
||||
With this file in place, generate your checksum manifest file and sign it. Both of these steps are achieved in a single ``ansible-sign`` command:
|
||||
|
||||
::
|
||||
|
||||
    $ ansible-sign project gpg-sign .
    [OK   ] GPG signing successful!
    [NOTE ] Checksum manifest: ./.ansible-sign/sha256sum.txt
    [NOTE ] GPG summary: signature created
|
||||
|
||||
|
||||
Now the project has been signed.
|
||||
|
||||
Notice that the ``gpg-sign`` subcommand resides under the ``project`` subcommand. For signing project content, every command will start with ``ansible-sign project``. As noted above, as a rule, every ``ansible-sign project`` command takes the project root directory as its final argument.
|
||||
|
||||
As mentioned earlier, ``ansible-sign`` by default makes use of your default keyring and looks for the first available secret key that it can find, to sign your project. You can specify a specific secret key to use with the ``--fingerprint`` option, or even a completely independent GPG home directory with the ``--gnupg-home`` option.
|
||||
|
||||
.. note::
|
||||
|
||||
If you are using a desktop environment, GnuPG will automatically prompt you for your secret key’s passphrase. If this functionality does not work, or you are working without a desktop environment (e.g., via SSH), you can use the ``-p/--prompt-passphrase`` flag after ``gpg-sign`` in the above command, which will cause ``ansible-sign`` to prompt for the password instead.
|
||||
|
||||
Upon viewing the structure of the project directory, notice that a new ``.ansible-sign`` directory was created. This directory contains the checksum manifest and a detached GPG signature for it.
|
||||
|
||||
::
|
||||
|
||||
    $ tree -a .
    .
    ├── .ansible-sign
    │   ├── sha256sum.txt
    │   └── sha256sum.txt.sig
    ├── inventory
    ├── MANIFEST.in
    └── playbooks
        ├── get_uptime.yml
        └── hello.yml
|
||||
|
||||
|
||||
Verifying your project
|
||||
----------------------
|
||||
|
||||
If you want to verify that a signed Ansible project has not been altered, you can use ``ansible-sign`` to check whether the signature is valid and that the checksums of the files match what the checksum manifest says they should be. In particular, the ``ansible-sign project gpg-verify`` command can be used to automatically verify both of these conditions.
|
||||
|
||||
::
|
||||
|
||||
    $ ansible-sign project gpg-verify .
    [OK   ] GPG signature verification succeeded.
    [OK   ] Checksum validation succeeded.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
By default, ``ansible-sign`` makes use of your default GPG keyring to look for a matching public key. You can specify a keyring file with the ``--keyring`` option, or a different GPG home with the ``--gnupg-home`` option.
|
||||
|
||||
If verification fails for any reason, information will be displayed to help you debug the cause. More verbosity can be enabled by passing the global ``--debug`` flag, immediately after ``ansible-sign`` in your commands.
|
||||
|
||||
.. note::
|
||||
|
||||
When a GPG credential is used in a project, content verification will automatically take place on future project syncs.
|
||||
|
||||
|
||||
|
||||
Automate signing
|
||||
-----------------
|
||||
|
||||
In highly-trusted CI environments (e.g., OpenShift, Jenkins, etc.), it is possible to automate the signing process. For example, you could store your GPG private key in a CI platform of choice as a secret, and import that into GnuPG in the CI environment. You could then run through the signing workflow above within the normal CI workflow/container/environment.
|
||||
|
||||
When signing a project using GPG, the environment variable ``ANSIBLE_SIGN_GPG_PASSPHRASE`` can be set to the passphrase of the signing key. This can be injected (and masked/secured) in a CI pipeline.
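As a hedged sketch of what that automation might look like (the secret names and the ``CI_PROJECT_DIR`` variable are assumptions that differ per CI platform):

::

    # Import the private signing key stored as a CI secret, then sign
    # the project non-interactively using the passphrase variable.
    echo "$GPG_SIGNING_KEY" | gpg --batch --import
    export ANSIBLE_SIGN_GPG_PASSPHRASE="$GPG_SIGNING_KEY_PASSPHRASE"
    ansible-sign project gpg-sign "$CI_PROJECT_DIR"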
|
||||
|
||||
Depending on the scenario at hand, ``ansible-sign`` will return with a different exit-code, during both signing and verification. This can also be useful in the context of CI and automation, as a CI environment can act differently based on the failure (for example, sending alerts for some errors but silently failing for others).
|
||||
|
||||
These are the exit codes used in ``ansible-sign`` currently, which can be considered stable:
|
||||
|
||||
.. list-table::
   :widths: 10 25 40
   :header-rows: 1

   * - Exit code
     - Approximate meaning
     - Example scenarios
   * - 0
     - Success
     - * Signing was successful
       * Verification was successful
   * - 1
     - General failure
     - * The checksum manifest file contained a syntax error during verification
       * The signature file did not exist during verification
       * ``MANIFEST.in`` did not exist during signing
   * - 2
     - Checksum verification failure
     - * The checksum hashes calculated during verification differed from what was in the signed checksum manifest (e.g., a project file was changed but the signing process was not re-completed)
   * - 3
     - Signature verification failure
     - * The signer's public key was not in the user's GPG keyring
       * The wrong GnuPG home directory or keyring file was specified
       * The signed checksum manifest file was modified in some way
   * - 4
     - Signing process failure
     - * The signer's private key was not found in the GPG keyring
       * The wrong GnuPG home directory or keyring file was specified
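In a CI pipeline, for example, you might branch on these exit codes. The following shell sketch is illustrative only:

::

    ansible-sign project gpg-verify .
    case "$?" in
        0) echo "Project verified." ;;
        2) echo "Checksum mismatch: project content changed after signing." >&2; exit 1 ;;
        3) echo "Signature verification failed: check the public key and keyring." >&2; exit 1 ;;
        *) echo "Verification could not be completed." >&2; exit 1 ;;
    esac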
|
||||
|
||||
489
docs/docsite/rst/userguide/projects.rst
Normal file
@@ -0,0 +1,489 @@
|
||||
.. _ug_projects:
|
||||
|
||||
Projects
|
||||
===========
|
||||
|
||||
.. index::
|
||||
single: projects
|
||||
|
||||
A :term:`Project` is a logical collection of Ansible playbooks.
|
||||
|
||||
You can manage playbooks and playbook directories by either placing them manually under the Project Base Path on your server, or by placing your playbooks into a source code management (SCM) system supported by AWX, including Git, Subversion, and Red Hat Insights. To create a Red Hat Insights project, refer to :ref:`insights`.
|
||||
|
||||
.. note::
|
||||
|
||||
By default, the Project Base Path is ``/var/lib/awx/projects``, but this may have been modified by the administrator. It is configured in ``/etc/awx/conf.d/custom.py``. Use caution when editing this file, as incorrect settings can disable your installation.
|
||||
|
||||
The Projects page displays the list of the projects that are currently available. The default view is collapsed (**Compact**) with project name and its status, but you can use the arrow next to each entry to expand for more information.
|
||||
|
||||
|Projects - home with example project|
|
||||
|
||||
.. |Projects - home with example project| image:: ../common/images/projects-list-all.png
|
||||
|
||||
.. image:: ../common/images/projects-list-all-expanded.png
|
||||
|
||||
For each project listed, you can get the latest SCM revision (|refresh|), edit the project (|edit|), or copy the project attributes (|copy|), using the respective icons next to each project. Projects are allowed to be updated while a related job is running. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
|
||||
|
||||
.. |edit-icon| image:: ../common/images/edit-button.png
|
||||
|
||||
.. |copy| image:: ../common/images/copy-button.png
|
||||
|
||||
.. |refresh| image:: ../common/images/refresh-gray.png
|
||||
|
||||
.. |edit| image:: ../common/images/edit-button.png
|
||||
|
||||
|
||||
**Status** indicates the state of the project and may be one of the following (note that you can also filter your view by specific status types):
|
||||
|
||||
- **Pending** - The source control update has been created, but not queued or started yet. Any job (not just source control updates) will stay in pending until it is actually ready to be run by the system. Reasons for it not being ready include dependencies that are currently running (so it has to wait until they are done), or not enough capacity to run in the locations it is configured to run in.
|
||||
- **Waiting** - The source control update is in the queue waiting to be executed.
|
||||
- **Running** - The source control update is currently in progress.
|
||||
- **Successful** - The last source control update for this project succeeded.
|
||||
- **Failed** - The last source control update for this project failed.
|
||||
- **Error** - The last source control update job failed to run at all. (To be deprecated.)
|
||||
- **Canceled** - The last source control update for the project was canceled.
|
||||
- **Never updated** - The project is configured for source control, but has never been updated.
|
||||
- **OK** - The project is not configured for source control, and is correctly in place. (To be deprecated.)
|
||||
- **Missing** - Projects are absent from the project base path of ``/var/lib/awx/projects`` (applicable for manual or source control managed projects).
|
||||
|
||||
.. - Cancel: Cancel a scheduled update from source control (only available when scheduling a source control update)
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
Projects of credential type Manual cannot update or schedule source control-based actions without being reconfigured as an SCM type credential.
|
||||
|
||||
|
||||
.. include:: ../common/work_items_deletion_warning.rst
|
||||
|
||||
|
||||
.. _ug_projects_add:
|
||||
|
||||
Add a new project
|
||||
--------------------
|
||||
|
||||
.. index::
|
||||
pair: projects; add new
|
||||
|
||||
To create a new project:
|
||||
|
||||
1. Click the **Add** button, which launches the **Create Project** window.
|
||||
|
||||
|Projects - create new project|
|
||||
|
||||
.. |Projects - create new project| image:: ../common/images/projects-create-new-project.png
|
||||
|
||||
2. Enter the appropriate details into the following required fields:
|
||||
|
||||
- **Name**
|
||||
- **Description** (optional)
|
||||
- **Organization** - A project must have at least one organization. Pick one organization now to create the project, and then after the project is created you can add additional organizations.
|
||||
- **Execution Environment** (optional) - Enter the name of the execution environment or search from a list of existing ones to run this project. See :ref:`upgrade_venv` in the |atumg| for more information.
|
||||
- **Source Control Type** - Select an SCM type associated with this project from the drop-down menu list. The options that become available in the subsequent section depend on the type you choose. Refer to :ref:`ug_manual` or :ref:`ug_projects_scm_types` in the subsequent sections for more detail.
|
||||
- **Content Signature Validation Credential** - Use this optional field to enable content verification. Specify the GPG key to use for validating content signature during project sync. If the content has been tampered with, the job will not run. See :ref:`ug_content_signing` for more detail.
|
||||
|
||||
|
||||
3. Click **Save** when done.
|
||||
|
||||
The following describes ways projects are sourced in AWX:
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
.. _ug_manual:
|
||||
|
||||
Manage playbooks manually
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: projects; playbooks
|
||||
single: playbooks; manage manually
|
||||
|
||||
- Create one or more directories to store playbooks under the Project Base Path (for example, /var/lib/awx/projects/).
|
||||
- Create or copy playbook files into the playbook directory.
|
||||
- Ensure that the playbook directory and files are owned by the same UNIX user and group that the AWX service runs as.
|
||||
- Ensure that the permissions are appropriate for the playbook directories and files.
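On a default installation, these steps might look like the following sketch; the directory name, playbook paths, and the ``awx`` user and group are assumptions that depend on your installation:

::

    $ sudo mkdir /var/lib/awx/projects/my-manual-project
    $ sudo cp ~/playbooks/*.yml /var/lib/awx/projects/my-manual-project/
    $ sudo chown -R awx:awx /var/lib/awx/projects/my-manual-project
    $ sudo chmod -R u=rwX,g=rX,o= /var/lib/awx/projects/my-manual-project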
|
||||
|
||||
If adding a manual project, each project path inside of the project root folder can only be assigned to one project. If you receive the following message, ensure that you have not already assigned the project path to an existing project:
|
||||
|
||||
``All of the project paths have been assigned to existing projects, or there are no directories found in the base path.``
|
||||
``You will need to add a project path before creating a new project.``
|
||||
|
||||
If you have trouble adding a project path, check the permissions and SELinux context settings for the project directory and files.
|
||||
|
||||
.. warning::
|
||||
If you have not added any Ansible playbook directories to the base project path, you will receive the following message:
|
||||
|
||||
|Projects - create new warning|
|
||||
|
||||
Correct this issue by creating the appropriate playbook directories and checking out playbooks from your SCM or otherwise copying playbooks into the appropriate playbook directories.
|
||||
|
||||
.. |Projects - create new warning| image:: ../common/images/projects-create-manual-warning.png
|
||||
|
||||
|
||||
.. _ug_projects_scm_types:
|
||||
|
||||
Manage playbooks using source control
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: projects; playbooks
|
||||
pair: Git; source control
|
||||
pair: SCM; types
|
||||
single: SCM types
|
||||
pair: Subversion; source control
|
||||
single: playbooks; source control
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
SCM Types - Git and Subversion
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To configure playbooks to use source control, in the Project **Details** tab:
|
||||
|
||||
1. Select the appropriate option (Git or Subversion) from the **SCM Type** drop-down menu list.
|
||||
|
||||
|Projects - create SCM project|
|
||||
|
||||
.. |Projects - create SCM project| image:: ../common/images/projects-create-scm-project.png
|
||||
|
||||
2. Enter the appropriate details into the following fields:
|
||||
|
||||
- **SCM URL** - See an example in the tooltip |tooltip|.
|
||||
|
||||
.. |tooltip| image:: ../common/images/tooltips-icon.png
|
||||
|
||||
- **SCM Branch/Tag/Commit** - Optionally enter the SCM branch, tags, commit hashes, arbitrary refs, or revision number (if applicable) from the source control (Git or Subversion) to checkout. Some commit hashes and refs may not be available unless you also provide a custom refspec in the next field. If left blank, the default is HEAD which is the last checked out Branch/Tag/Commit for this project.
|
||||
- **SCM Refspec** - This field is an option specific to Git source control; only advanced users who are familiar and comfortable with Git should use it to specify which references to download from the remote repository. For more detail, see :ref:`job branch overriding <ug_job_branching>`.
|
||||
|
||||
- **Source Control Credential** - If authentication is required, select the appropriate source control credential
|
||||
|
||||
3. In the **SCM Update Options**, optionally select the launch behavior, if applicable.
|
||||
|
||||
- **Clean** - Removes any local modifications prior to performing an update.
|
||||
- **Delete** - Deletes the local repository in its entirety prior to performing an update. Depending on the size of the repository this may significantly increase the amount of time required to complete an update.
|
||||
- **Track submodules** - Tracks the latest commit. See more details in the tooltip |tooltip|.
|
||||
- **Update Revision on Launch** - Updates the revision of the project to the current revision in the remote source control, as well as cache the roles directory from :ref:`Galaxy <ug_galaxy>` or :ref:`Collections <ug_collections>`. AWX ensures that the local revision matches and that the roles and collections are up-to-date with the last update. Also, to avoid job overflows if jobs are spawned faster than the project can sync, selecting this allows you to configure a Cache Timeout to cache prior project syncs for a certain number of seconds.
|
||||
- **Allow Branch Override** - Allows a job template or an inventory source that uses this project to launch with a specified SCM branch or revision other than that of the project's. For more detail, see :ref:`job branch overriding <ug_job_branching>`.
|
||||
|
||||
.. image:: ../common/images/projects-create-scm-project-branch-override-checked.png
|
||||
|
||||
4. Click **Save** to save your project.
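If you prefer to create the project programmatically, the same configuration can typically be posted to the AWX REST API. This is a hedged sketch; the hostname, token, and organization ID are placeholders:

::

    curl -s -X POST https://awx.example.com/api/v2/projects/ \
         -H "Authorization: Bearer $AWX_TOKEN" \
         -H "Content-Type: application/json" \
         -d '{"name": "Example Git Project",
              "organization": 1,
              "scm_type": "git",
              "scm_url": "https://github.com/ansible/tower-example.git"}'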
|
||||
|
||||
.. tip::
|
||||
Using a GitHub link offers an easy way to use a playbook. To help get you started, use the ``helloworld.yml`` file available at: https://github.com/ansible/tower-example.git
|
||||
|
||||
This link offers a playbook very similar to the one created manually in the instructions found in the :ref:`qs_start`. Using it will not alter or harm your system in any way.
|
||||
|
||||
|
||||
SCM Type - Red Hat Insights
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: projects; playbooks
|
||||
pair: Insights; source control
|
||||
single: playbooks; source control
|
||||
|
||||
To configure playbooks to use Red Hat Insights, in the Project **Details** tab:
|
||||
|
||||
1. Select **Red Hat Insights** from the **SCM Type** drop-down menu list.
|
||||
|
||||
2. Red Hat Insights requires a credential for authentication. Select from the **Credential** field the appropriate credential for use with Insights.
|
||||
|
||||
3. In the **SCM Update Options**, optionally select the launch behavior, if applicable.
|
||||
|
||||
- **Clean** - Removes any local modifications prior to performing an update.
|
||||
- **Delete** - Deletes the local repository in its entirety prior to performing an update. Depending on the size of the repository this may significantly increase the amount of time required to complete an update.
|
||||
- **Update Revision on Launch** - Updates the revision of the project to the current revision in the remote source control, as well as cache the roles directory from :ref:`Galaxy <ug_galaxy>` or :ref:`Collections <ug_collections>`. AWX ensures that the local revision matches and that the roles and collections are up-to-date with the last update. Also, to avoid job overflows if jobs are spawned faster than the project can sync, selecting this allows you to configure a Cache Timeout to cache prior project syncs for a certain number of seconds.
|
||||
|
||||
.. image:: ../common/images/projects-create-scm-insights.png
|
||||
|
||||
|
||||
4. Click **Save** to save your project.
|
||||
|
||||
|
||||
SCM Type - Remote Archive
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
.. index::
|
||||
pair: projects; playbooks
|
||||
pair: remote archive; source control
|
||||
single: playbooks; source control
|
||||
|
||||
Playbooks using a remote archive allow projects to be provided based on a build process that produces a versioned artifact, or release, containing all the requirements for that project in a single archive.
|
||||
|
||||
To configure playbooks to use a remote archive, in the Project **Details** tab:
|
||||
|
||||
1. Select **Remote Archive** from the **SCM Type** drop-down menu list.
|
||||
|
||||
2. Enter the appropriate details into the following fields:
|
||||
|
||||
- **SCM URL** - requires a URL to a remote archive, such as a *GitHub Release* or a build artifact stored in *Artifactory* and unpacks it into the project path for use
|
||||
- **SCM Credential** - If authentication is required, select the appropriate SCM credential
|
||||
|
||||
3. In the **SCM Update Options**, optionally select the launch behavior, if applicable.
|
||||
|
||||
- **Clean** - Removes any local modifications prior to performing an update.
|
||||
- **Delete** - Deletes the local repository in its entirety prior to performing an update. Depending on the size of the repository this may significantly increase the amount of time required to complete an update.
|
||||
- **Update Revision on Launch** - Not recommended, as this option updates the revision of the project to the current revision in the remote source control, as well as cache the roles directory from :ref:`Galaxy <ug_galaxy>` or :ref:`Collections <ug_collections>`.
|
||||
- **Allow Branch Override** - Not recommended, as this option allows a job template that uses this project to launch with a specified SCM branch or revision other than that of the project's.
|
||||
|
||||
.. image:: ../common/images/projects-create-scm-rm-archive.png
|
||||
|
||||
.. note::
|
||||
Since this SCM type is intended to support the concept of unchanging artifacts, it is advisable to disable Galaxy integration (for roles, at minimum).
|
||||
|
||||
|
||||
4. Click **Save** to save your project.
|
||||
|
||||
|
||||
Updating projects from source control
|
||||
----------------------------------------
|
||||
|
||||
.. index::
|
||||
pair: projects; source control update
|
||||
|
||||
1. Update an existing SCM-based project by selecting the project and clicking the |refresh| button.
|
||||
|
||||
.. note::
|
||||
|
||||
Please note that immediately after adding a project that is set up to use source control, a "Sync" starts that fetches the project details from the configured source control.
|
||||
|
||||
|projects - list all|
|
||||
|
||||
.. |projects - list all| image:: ../common/images/projects-list-all.png
|
||||
|
||||
2. Click on the project's status under the **Status** column to get further details about the update process.
|
||||
|
||||
.. image:: ../common/images/projects-list-status-more.png
|
||||
|
||||
|
||||
|Project - update status|
|
||||
|
||||
.. |Project - update status| image:: ../common/images/projects-update-status.png
|
||||
|
||||
|
||||
Work with Permissions
|
||||
------------------------
|
||||
.. index::
|
||||
pair: projects; permissions
|
||||
|
||||
Privileges are the set of permissions assigned to this project (role-based access controls) that provide the ability to read, modify, and administer projects, inventories, job templates, and other AWX elements.
|
||||
|
||||
You can access the project permissions via the **Access** tab next to the **Details** tab. This screen displays a list of users that currently have permissions to this project. The list may be sorted and searched by **Username**, **First Name**, or **Last Name**.
|
||||
|
||||
|Projects - permissions list for example project|
|
||||
|
||||
.. |Projects - permissions list for example project| image:: ../common/images/projects-permissions-example.png
|
||||
|
||||
|
||||
Add Permissions
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. include:: ../common/permissions.rst
|
||||
|
||||
|
||||
Work with Notifications
|
||||
------------------------
|
||||
|
||||
Clicking the **Notifications** tab allows you to review any notification integrations you have set up.
|
||||
|
||||
.. image:: ../common/images/projects-notifications-example-list.png
|
||||
|
||||
Use the toggles to enable or disable the notifications to use with your particular project. For more detail, see :ref:`ug_notifications_on_off`.
|
||||
|
||||
If no notifications have been set up, click **Notifications** from the left navigation bar to create a new notification.
|
||||
|
||||
.. image:: ../common/images/project-notifications-empty.png
|
||||
|
||||
Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.
|
||||
|
||||
|
||||
Work with Job Templates
|
||||
---------------------------
|
||||
|
||||
Clicking on **Job Templates** allows you to add and review any job templates or workflow templates associated with this project.
|
||||
|
||||
.. image:: ../common/images/projects-templates-example-list.png
|
||||
|
||||
Click on the recent jobs that ran using that template to see its details and other useful information. You can sort this list by various criteria, and perform a search to filter the templates of interest.
|
||||
|
||||
.. image:: ../common/images/projects-templates-search-dropdown.png
|
||||
|
||||
From this view, you can also launch (|launch|), edit (|edit|), or copy (|copy|) the template configuration.
|
||||
|
||||
.. |launch| image:: ../common/images/launch-button.png
|
||||
|
||||
|
||||
Work with Schedules
|
||||
----------------------
|
||||
|
||||
.. index::
|
||||
pair: projects; scheduling
|
||||
|
||||
|
||||
Clicking on **Schedules** allows you to review any schedules set up for this project.
|
||||
|
||||
.. image:: ../common/images/generic-schedules-list-configured.png
|
||||
|
||||
|
||||
Schedule a Project
|
||||
~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
|
||||
To schedule a project run, click the **Schedules** tab.
|
||||
|
||||
- If schedules are already set up, review, edit, or enable/disable your schedule preferences.
|
||||
- If schedules have not been set up, refer to :ref:`ug_scheduling` for more information.
|
||||
|
||||
|
||||
.. _ug_galaxy:
|
||||
|
||||
Ansible Galaxy Support
|
||||
-----------------------
|
||||
|
||||
.. index::
|
||||
single: Ansible Galaxy
|
||||
single: Galaxy support
|
||||
|
||||
|
||||
At the end of a Project update, AWX searches for a file called ``requirements.yml`` in the ``roles`` directory, located at ``<project-top-level-directory>/roles/requirements.yml``. If this file is found, the following command automatically runs:
|
||||
|
||||
::
|
||||
|
||||
ansible-galaxy role install -r roles/requirements.yml -p <project-specific cache location>/requirements_roles -vvv
|
||||
|
||||
|
||||
This file allows you to reference Galaxy roles or roles within other repositories which can be checked out in conjunction with your own project. The addition of this Ansible Galaxy support eliminates the need to create git submodules for achieving this result. Given that SCM projects (along with roles/collections) are pulled into and executed from a private job environment, a <private job directory> specific to the project within ``/tmp`` is created by default. However, you can specify another **Job Execution Path** based on your environment in the Jobs Settings tab of the Settings window:
|
||||
|
||||
.. image:: ../common/images/configure-awx-jobs-execution-path.png
|
||||
|
||||
The cache directory is a subdirectory inside the global projects folder. The content may be copied from the cache location to ``<job private directory>/requirements_roles``.
|
||||
|
||||
By default, AWX has a system-wide setting that allows roles to be dynamically downloaded from the ``roles/requirements.yml`` file for SCM projects. You may turn off this setting in the **Jobs settings** screen of the Settings menu by switching the **Enable Role Download** toggle button to **OFF**.
|
||||
|
||||
.. image:: ../common/images/configure-awx-jobs-download-roles.png
|
||||
|
||||
|
||||
Whenever a project sync runs, AWX determines if the project source and any roles from Galaxy and/or Collections are out of date with the project. Project updates download the roles as part of the update.
|
||||
|
||||
For instance, say you have two git repositories in source control. The first one is *playbooks*, and the project in AWX points to this URL. The second one is *provision-role*, and it is referenced by the ``roles/requirements.yml`` file inside of the *playbooks* git repo. If jobs need to pick up a change made to the upstream role (that is, a new commit pushed to the *provision-role* source control), updating the project ensures this happens. To make this change take effect in a job, you do not need to push a new commit to the *playbooks* repo, but you **do need** to update the project, which downloads the roles to a local cache.
|
||||
|
||||
.. this section is used in collections support section below
|
||||
|
||||
In short, to have jobs download the most recent roles before every job run, you will need to select **Update Revision on Launch** in the project SCM Update Options, because roles and collections are locally cached for performance reasons; this ensures that the upstream role is re-downloaded before each job run:
|
||||
|
||||
|update-on-launch|
|
||||
|
||||
.. |update-on-launch| image:: ../common/images/projects-scm-update-options-update-on-launch-checked.png
|
||||
|
||||
.. end reused section
|
||||
|
||||
The update happens much earlier in the process than the sync, so this surfaces errors and details faster and in a more logical place.
|
||||
|
||||
For more information and examples on the syntax of the ``requirements.yml`` file, refer to the `role requirements section`_ in the Ansible documentation.
|
||||
|
||||
.. _role requirements section: https://docs.ansible.com/ansible/latest/galaxy/user_guide.html#installing-multiple-roles-from-a-file
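For instance, a minimal ``roles/requirements.yml`` that pulls one role from Galaxy and one role from its own git repository (such as the *provision-role* repo described above) might look like the following sketch; the role names, URL, and versions are illustrative only:

::

   ---
   # A role installed from Ansible Galaxy
   - name: geerlingguy.apache
     version: "3.1.4"

   # A role pulled from its own git repository
   - name: provision-role
     src: https://github.com/example-org/provision-role.git
     scm: git
     version: main
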
|
||||
|
||||
.. this section is used in common/isolation_variables section too
|
||||
|
||||
If there are any directories that should specifically be exposed, you can specify those in the Jobs section of the Settings screen in the **Paths to Expose to Isolated Jobs** or by updating the following entry in the settings file:
|
||||
|
||||
``AWX_ISOLATION_SHOW_PATHS = ['/list/of/', '/paths']``
|
||||
|
||||
.. note::
|
||||
The primary file you may want to add to ``AWX_ISOLATION_SHOW_PATHS`` is ``/var/lib/awx/.ssh``, if your playbooks need to use keys or settings defined there.
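For example, to expose the ``awx`` user's SSH directory mentioned above, the entry in the settings file might look like the following (a minimal sketch; adjust the list to whatever paths your playbooks actually need):

::

   # Expose the awx user's SSH keys and configuration to isolated jobs
   AWX_ISOLATION_SHOW_PATHS = ['/var/lib/awx/.ssh']
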
|
||||
|
||||
|
||||
If you made changes in the settings file, be sure to restart services with the ``awx-service restart`` command after your changes have been saved.
|
||||
|
||||
.. end reused section
|
||||
|
||||
|
||||
In the User Interface, you can configure these settings in the Jobs settings window.
|
||||
|
||||
|
||||
.. image:: ../common/images/configure-awx-jobs-path-to-expose.png
|
||||
|
||||
|
||||
.. _ug_collections:
|
||||
|
||||
Collections Support
|
||||
---------------------
|
||||
|
||||
.. index::
|
||||
single: Ansible collections
|
||||
single: collections support
|
||||
|
||||
AWX supports project-specific `Ansible collections <https://docs.ansible.com/ansible/latest/collections_guide/index.html>`_ in job runs. If you specify a collections requirements file in the SCM at ``collections/requirements.yml``, AWX will install collections in that file in the implicit project sync before a job run.
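For instance, a minimal ``collections/requirements.yml`` might look like the following sketch; the collection names and version constraints are illustrative only:

::

   ---
   collections:
     # Collections resolved from Galaxy or Automation Hub, depending on the configured credential
     - name: community.general
       version: ">=7.0.0"
     - name: ansible.posix
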
|
||||
|
||||
By default, AWX has a system-wide setting that allows collections to be dynamically downloaded from the ``collections/requirements.yml`` file for SCM projects. You may turn off this setting in the **Jobs settings** tab of the Settings menu by switching the **Enable Collections Download** toggle button to **OFF**.
|
||||
|
||||
.. image:: ../common/images/configure-awx-jobs-download-collections.png
|
||||
|
||||
Roles and collections are locally cached for performance reasons, and you will need to select **Update Revision on Launch** in the project SCM Update Options to ensure this:
|
||||
|
||||
|update-on-launch|
|
||||
|
||||
|
||||
.. _ug_collections_usage:
|
||||
|
||||
Using Collections via Hub
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Before AWX can use |ah| as the default source for collections content, you need to create an API token in the |ah| UI so that it can be specified in AWX. You may connect to a private |ah| or a public |ah| collection; the only difference is which URL you specify.
|
||||
|
||||
1. Navigate to https://cloud.redhat.com/ansible/automation-hub/token and click **Load token**.
|
||||
|
||||
2. Click the copy icon to copy the API token to the clipboard.
|
||||
|
||||
.. image:: ../common/images/projects-ah-loaded-token-shown.png
|
||||
|
||||
3. To use the public |ah|, create an |ah| credential using the copied token and pointing to the URLs shown in the **Server URL** and **SSO URL** fields of the token page:
|
||||
|
||||
- **Galaxy Server URL** = ``https://cloud.redhat.com/api/automation-hub/``
|
||||
|
||||
- **AUTH SERVER URL** = ``https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token``
|
||||
|
||||
4. To use a private |ah|, create an |ah| credential using a token retrieved from the Repo Management dashboard of your local |ah| and pointing to the published repo URL as shown:
|
||||
|
||||
.. image:: ../common/images/projects-ah-repo-mgmt-get-token.png
|
||||
.. image:: ../common/images/projects-ah-repo-mgmt-repos-published.png
|
||||
|
||||
You can create different repos with different namespaces/collections in them, but for each repo in |ah| you need to create a different |ah| credential. Copy the **Ansible CLI URL** from the |ah| UI in the format of ``https://$<hub_url>/api/galaxy/content/<repo you want to pull from>`` into the **Galaxy Server URL** field of the *Create Credential* form:
|
||||
|
||||
.. image:: ../common/images/projects-create-ah-credential.png
|
||||
|
||||
5. Navigate to the organization for which you want to be able to sync content from |ah| and add the new |ah| credential to the organization. This step allows you to associate each organization with the |ah| credential (i.e. repo) that you want to be able to use content from.
|
||||
|
||||
.. image:: ../common/images/projects-organizations-add-ah-credential.png
|
||||
|
||||
.. note::
|
||||
|
||||
Suppose you have two repos:
|
||||
|
||||
- *Prod*: ``Namespace 1`` and ``Namespace 2``, each with collections ``A`` and ``B``, so: ``namespace1.collectionA:v2.0.0`` and ``namespace2.collectionB:v2.0.0``

- *Stage*: ``Namespace 1`` with only collection ``A``, so: ``namespace1.collectionA:v1.5.0``

On |ah|, you will have a repo URL for *Prod* and *Stage*. You can create an |ah| credential for each one. Then you can assign different levels of access to different organizations. For example, you can create a Developers organization that has access to both repos, while an Operations organization has access to the |ah| *Prod* repo only.
|
||||
|
||||
6. If the |ah| has self-signed certificates, click the toggle to enable the setting **Ignore Ansible Galaxy SSL Certificate Verification**. For **public Automation Hub**, which uses a signed certificate, click the toggle to disable it instead. Note this is a global setting:
|
||||
|
||||
.. image:: ../common/images/settings-jobs-ignore-galaxy-certs.png
|
||||
|
||||
7. Create a project, where the source repository specifies the necessary collections in a requirements file located in the ``collections/requirements.yml`` file. Refer to the syntax described in the corresponding `Ansible documentation <https://docs.ansible.com/ansible/latest/user_guide/collections_using.html#install-multiple-collections-with-a-requirements-file>`_.
|
||||
|
||||
.. image:: ../common/images/projects-add-ah-source-repo.png
|
||||
|
||||
8. In the Projects list view, click |update| to run an update against this project. AWX fetches the Galaxy collections from the ``collections/requirements.yml`` file and reports it as changed, and the collections will now be installed for any job template that uses this project.
|
||||
|
||||
.. |update| image:: ../common/images/refresh-gray.png
|
||||
|
||||
.. note::
|
||||
|
||||
If updates are needed from Galaxy or Collections, a sync is performed that downloads the required roles, consuming that much more space in your ``/tmp`` directory. In cases where you have a large project (around 10 GB), disk space on ``/tmp`` may be an issue.
|
||||
|
||||
|
||||
For more information on collections, refer to `Using Collections <https://docs.ansible.com/ansible/latest/user_guide/collections_using.html>`_.
|
||||
docs/docsite/rst/userguide/scheduling.rst
|
||||
.. _ug_scheduling:
|
||||
|
||||
Schedules
|
||||
===========
|
||||
|
||||
.. index::
|
||||
pair: projects; scheduling
|
||||
pair: job templates; scheduling
|
||||
pair: workflow templates; scheduling
|
||||
pair: inventory source; scheduling
|
||||
|
||||
|
||||
You can access all your configured schedules by clicking **Schedules** from the left navigation bar. The schedules list may be sorted by any of the attributes from each column using the directional arrows. You can also search by name, date, or the name of the month in which a schedule runs.
|
||||
|
||||
Each schedule has a corresponding **Actions** column that has options to enable/disable that schedule using the **ON/OFF** toggle next to the schedule name and to allow editing (|edit|) of that schedule.
|
||||
|
||||
.. |edit| image:: ../common/images/edit-button.png
|
||||
|
||||
.. image:: ../common/images/schedules-sample-list.png
|
||||
|
||||
If you are setting up a template, a project, or an inventory source, clicking on the **Schedules** tab allows you to configure schedules for these resources. Once schedules are created, they are listed by:
|
||||
|
||||
- **Name**: Clicking the schedule name opens its details
|
||||
- **Type**: Identifies whether the schedule is associated with a source control update or a system-managed job schedule
|
||||
- **Next Run**: The next scheduled run of this task
|
||||
|
||||
.. image:: ../common/images/generic-schedules-example-list.png
|
||||
|
||||
|
||||
Add a new schedule
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Schedules can only be created from a template, project, or inventory source, and not directly on the main **Schedules** screen itself. To create a new schedule:
|
||||
|
||||
1. Click the **Schedules** tab of the resource you are configuring (template, project, or inventory source).
|
||||
|
||||
2. Click the **Add** button, which opens the **Create Schedule** window.
|
||||
|
||||
.. image:: ../common/images/generic-create-schedule.png
|
||||
|
||||
3. Enter the appropriate details into the following fields:
|
||||
|
||||
- **Name** (required)
|
||||
- **Start Date** (required)
|
||||
- **Start Time** (required)
|
||||
- **Local Time Zone** - The entered Start Time should be in this timezone
|
||||
- **Repeat Frequency** - Appropriate scheduling options display depending on the frequency you select
|
||||
|
||||
The **Schedule Details** displays when you establish a schedule, allowing you to review the schedule settings and a list of the scheduled occurrences in the selected Local Time Zone.
|
||||
|
||||
.. image:: ../common/images/generic-create-schedule-details.png
|
||||
|
||||
.. caution::
|
||||
|
||||
Jobs are scheduled in UTC. Repeating jobs that run at a specific time of day may move relative to a local timezone when Daylight Saving Time shifts occur. The system resolves the local time zone-based time to UTC when the schedule is saved. To ensure your schedules are set correctly, you should set your schedules in UTC time.
|
||||
|
||||
|
||||
4. Once done, click **Save**.
|
||||
|
||||
|
||||
You can use the **ON/OFF** toggle button to stop an active schedule or activate a stopped schedule.
|
||||
|
||||
|
||||
.. image:: ../common/images/generic-schedules-list-configured.png
|
||||
|
||||
|
||||
docs/docsite/rst/userguide/search_sort.rst
|
||||
.. _ug_search:
|
||||
|
||||
Search
|
||||
========
|
||||
|
||||
.. include:: ../common/search.rst
|
||||
docs/docsite/rst/userguide/security.rst
|
||||
.. _ug_security:
|
||||
|
||||
Security
|
||||
=========
|
||||
|
||||
.. index::
|
||||
single: security
|
||||
|
||||
The following sections will help you gain an understanding of how AWX handles and lets you control file system security.
|
||||
|
||||
All playbooks are executed via the ``awx`` file system user. For running jobs, AWX offers job isolation through the use of Linux containers. This protection ensures that jobs can only access playbooks, roles, and data from the Project directory for that job template.
|
||||
|
||||
For credential security, users may choose to upload locked SSH keys and set the unlock password to "ask". Users may also choose to have the system prompt them for SSH credentials or sudo passwords rather than having the system store them in the database.
|
||||
|
||||
|
||||
Playbook Access and Information Sharing
|
||||
-----------------------------------------
|
||||
|
||||
.. index::
|
||||
pair: playbooks; sharing access
|
||||
pair: playbooks; sharing content
|
||||
pair: playbooks; process isolation
|
||||
|
||||
|
||||
AWX's use of automation execution environments and Linux containers prevents playbooks from reading files outside of their project directory.
|
||||
|
||||
By default, the only data exposed to the ansible-playbook process inside the container is the current project being used.
|
||||
|
||||
You can customize this in the Job Settings and expose additional directories from the host into the container. Refer to the next section, :ref:`ug_isolation`, for more information.
|
||||
|
||||
.. _ug_isolation:
|
||||
|
||||
Isolation functionality and variables
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: troubleshooting; isolation
|
||||
pair: isolation; functionality
|
||||
pair: isolation; variables
|
||||
|
||||
.. include:: ../common/isolation_variables.rst
|
||||
|
||||
.. _rbac-ug:
|
||||
|
||||
Role-Based Access Controls
|
||||
-----------------------------
|
||||
|
||||
.. index::
|
||||
single: role-based access controls
|
||||
pair: security; RBAC
|
||||
|
||||
Role-Based Access Controls (RBAC) are built into AWX and allow administrators to delegate access to server inventories, organizations, and more. Administrators can also centralize the management of various credentials, allowing end users to leverage a needed secret without ever exposing that secret to the end user. RBAC controls allow AWX to help you increase security and streamline management.
|
||||
|
||||
RBACs are easiest to think of in terms of Roles which define precisely who or what can see, change, or delete an "object" for which a specific capability is being set. RBAC is the practice of granting roles to users or teams.
|
||||
|
||||
There are a few main concepts that you should become familiar with regarding AWX's RBAC design--roles, resources, and users. Users can be members of a role, which gives them certain access to any resources associated with that role, or any resources associated with "descendant" roles.
|
||||
|
||||
A role is essentially a collection of capabilities. Users are granted access to these capabilities and AWX's resources through the roles to which they are assigned or through roles inherited through the role hierarchy.
|
||||
|
||||
Roles associate a group of capabilities with a group of users. All capabilities are derived from membership within a role. Users receive capabilities only through the roles to which they are assigned or through roles they inherit through the role hierarchy. All members of a role have all capabilities granted to that role. Within an organization, roles are relatively stable, while users and capabilities are both numerous and may change rapidly. Users can have many roles.
|
||||
|
||||
|
||||
Role Hierarchy and Access Inheritance
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Imagine that you have an organization named "SomeCompany" and want to allow two people, "Josie" and "Carter", access to manage all the settings associated with that organization. You should make both people members of the organization's ``admin_role``.
|
||||
|
||||
|user-role-relationship|
|
||||
|
||||
.. |user-role-relationship| image:: ../common/images/user-role-relationship.png
|
||||
|
||||
Often, you will have many Roles in a system and you will want some roles to include all of the capabilities of other roles. For example, you may want a System Administrator to have access to everything that an Organization Administrator has access to, who has everything that a Project Administrator has access to, and so on.
|
||||
|
||||
This concept is referred to as the 'Role Hierarchy':
|
||||
|
||||
- Parent roles get all capabilities bestowed on any child roles
|
||||
- Members of roles automatically get all capabilities for the role they are a member of, as well as any child roles.
|
||||
|
||||
The Role Hierarchy is represented by allowing Roles to have "Parent Roles". Any capability that a Role has is implicitly granted to any parent roles (or parents of those parents, and so on).
|
||||
|
||||
|rbac-role-hierarchy|
|
||||
|
||||
.. |rbac-role-hierarchy| image:: ../common/images/rbac-role-hierarchy.png
|
||||
|
||||
Of course, Roles can have more than one parent, and capabilities are implicitly granted to all parents.
|
||||
|
||||
|rbac-heirarchy-morecomplex|
|
||||
|
||||
.. |rbac-heirarchy-morecomplex| image:: ../common/images/rbac-heirarchy-morecomplex.png
|
||||
|
||||
RBAC controls also give you the capability to explicitly permit Users and Teams of Users to run playbooks against certain sets of hosts. Users and teams are restricted to just the sets of playbooks and hosts to which they are granted capabilities. And, with AWX, you can create or import as many Users and Teams as you require--create users and teams manually or import them from LDAP or Active Directory.
|
||||
|
||||
RBACs are easiest to think of in terms of who or what can see, change, or delete an "object" for which a specific capability is being determined.
|
||||
|
||||
Applying RBAC
|
||||
~~~~~~~~~~~~~~~~~
|
||||
|
||||
The following sections cover how to apply AWX's RBAC system in your environment.
|
||||
|
||||
|
||||
Editing Users
|
||||
^^^^^^^^^^^^^^^
|
||||
|
||||
When editing a user, an AWX system administrator may specify the user as being either a *System Administrator* (also referred to as the Superuser) or a *System Auditor*.
|
||||
|
||||
- System administrators implicitly inherit all capabilities for all objects (read/write/execute) within the AWX environment.
|
||||
- System Auditors implicitly inherit the read-only capability for all objects within the AWX environment.
|
||||
|
||||
Editing Organizations
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
When editing an organization, system administrators may specify the following roles:
|
||||
|
||||
- One or more users as organization administrators
|
||||
- One or more users as organization auditors
|
||||
- And one or more users (or teams) as organization members
|
||||
|
||||
|
||||
Users/teams that are members of an organization can view their organization administrator.
|
||||
|
||||
Users who are organization administrators implicitly inherit all capabilities for all objects within that AWX organization.
|
||||
|
||||
Users who are organization auditors implicitly inherit the read-only capability for all objects within that AWX organization.
|
||||
|
||||
|
||||
Editing Projects in an Organization
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
When editing a project in an organization for which they are the administrator, system administrators and organization administrators may specify:
|
||||
|
||||
- One or more users/teams that are project administrators
|
||||
- One or more users/teams that are project members
|
||||
- And one or more users/teams that may update the project from SCM, from among the users/teams that are members of that organization.
|
||||
|
||||
Users who are members of a project can view their project administrators.
|
||||
|
||||
Project administrators implicitly inherit the capability to update the project from SCM.
|
||||
|
||||
Administrators can also specify one or more users/teams (from those that are members of that project) that can use that project in a job template.
|
||||
|
||||
|
||||
Creating Inventories and Credentials within an Organization
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
All access that is granted to use, read, or write credentials is handled through roles, which use AWX's RBAC system to grant ownership, auditor, or usage roles.
|
||||
|
||||
System administrators and organization administrators may create inventories and credentials within organizations under their administrative capabilities.
|
||||
|
||||
Whether editing an inventory or a credential, System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) to be granted the usage capability for that inventory or credential.
|
||||
|
||||
System administrators and organization administrators may specify one or more users/teams (from those that are members of that organization) that have the capabilities to update (dynamic or manually) an inventory. Administrators can also execute ad hoc commands for an inventory.
|
||||
|
||||
|
||||
Editing Job Templates
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
System administrators, organization administrators, and project administrators, within a project under their administrative capabilities, may create and modify new job templates for that project.
|
||||
|
||||
When editing a job template, administrators (AWX, organization, and project) can select among the inventory and credentials in the organization for which they have usage capabilities or they may leave those fields blank so that they will be selected at runtime.
|
||||
|
||||
Additionally, they may specify one or more users/teams (from those that are members of that project) that have execution capabilities for that job template. The execution capability is valid regardless of any explicit capabilities the user/team may have been granted against the inventory or credential specified in the job template.
|
||||
|
||||
User View
|
||||
^^^^^^^^^^^^^
|
||||
|
||||
A user can:
|
||||
|
||||
- See any organization or project for which they are a member
|
||||
- Create their own credential objects which only belong to them
|
||||
- See and execute any job template for which they have been granted execution capabilities
|
||||
|
||||
If a job template that a user has been granted execution capabilities on does not specify an inventory or credential, the user will be prompted at run-time to select among the inventory and credentials in the organization they own or have been granted usage capabilities.
|
||||
|
||||
Users that are job template administrators can make changes to job templates; however, to change the inventory, project, playbook, credentials, or instance groups used in the job template, the user must also have the "Use" role for the project and inventory currently being used or being set.
|
||||
|
||||
.. _rbac-ug-roles:
|
||||
|
||||
Roles
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
All access that is granted to use, read, or write credentials is handled through roles, and roles are defined for a resource.
|
||||
|
||||
|
||||
Built-in roles
|
||||
^^^^^^^^^^^^^^
|
||||
|
||||
The following table lists the RBAC system roles with a brief description of how each role is defined with regard to privileges in AWX.
|
||||
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| System Role | What it can do |
|
||||
+=======================================================================+==========================================================================================+
|
||||
| System Administrator - System wide singleton | Manages all aspects of the system |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| System Auditor - System wide singleton | Views all aspects of the system |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Ad Hoc Role - Inventory | Runs ad hoc commands on an Inventory |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Admin Role - Organizations, Teams, Inventory, Projects, Job Templates | Manages all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Auditor Role - All | Views all aspects of a defined Organization, Project, Inventory, or Job Template |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Execute Role - Job Templates | Runs assigned Job Template |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Member Role - Organization, Team | Manages all of the settings associated with that Organization or Team |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Read Role - Organizations, Teams, Inventory, Projects, Job Templates | Views all aspects of a defined Organization, Team, Inventory, Project, or Job Template |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Update Role - Project | Updates the Project from the configured source control management system |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Update Role - Inventory | Updates the Inventory using the cloud source update system |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Owner Role - Credential | Owns and manages all aspects of this Credential |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
| Use Role - Credential, Inventory, Project, IGs, CGs | Uses the Credential, Inventory, Project, IGs, or CGs in a Job Template |
|
||||
+-----------------------------------------------------------------------+------------------------------------------------------------------------------------------+
|
||||
|
||||
|
||||
A Singleton Role is a special role that grants system-wide permissions. AWX currently provides two built-in Singleton Roles but the ability to create or customize a Singleton Role is not supported at this time.
|
||||
|
||||
Common Team Roles - "Personas"
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Support personnel typically work on ensuring that AWX is available and manage it in a way that balances supportability and ease-of-use for users. Often, support will assign the "Organization Owner/Admin" role to users in order to allow them to create a new Organization and add members from their team with the respective access needed. This minimizes the need to support individual users and focuses more on maintaining uptime of the service and assisting users of AWX.
|
||||
|
||||
Below are some common roles managed by the AWX Organization:
|
||||
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | System Role | | Common User | | Description |
|
||||
| | (for Organizations) | | Roles | |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Owner | | Team Lead - | | This user has the ability to control access for other users in their organization. |
|
||||
| | | Technical Lead | | They can add/remove and grant users specific access to projects, inventories, and job templates. |
|
||||
| | | | This user also has the ability to create/remove/modify any aspect of an organization’s projects, |
|
||||
| | | | templates, inventories, teams, and credentials. |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Auditor | | Security Engineer - | | This account can view all aspects of the organization in read-only mode. |
|
||||
| | | Project Manager | | This may be good for a user who checks in and maintains compliance. |
|
||||
| | | | This might also be a good role for a service account who manages or |
|
||||
| | | | ships job data from AWX to some other data collector. |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Member - | | All other users | | These users by default as an organization member do not receive any access to any aspect |
|
||||
| | Team | | | of the organization. In order to grant them access the respective organization owner needs |
|
||||
| | | | to add them to their respective team and grant them Admin, Execute, Use, Update, Ad-hoc |
|
||||
| | | | permissions to each component of the organization’s projects, inventories, and job templates. |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Member - | | Power users - | | Organization Owners can provide “admin” through the team interface, over any component |
|
||||
| | Team “Owner” | | Lead Developer | | of their organization including projects, inventories, and job templates. These users are able |
|
||||
| | | | to modify and utilize the respective component given access. |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Member - | | Developers - | | This will be the most common and allows the organization member the ability to execute |
|
||||
| | Team “Execute” | | Engineers | | job templates and read permission to the specific components. This permission applies to templates. |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Member - | | Developers - | | This permission applies to an organization’s credentials, inventories, and projects. |
|
||||
| | Team “Use” | | Engineers | | This permission allows the ability for a user to use the respective component within their job template.|
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
| | Member - | | Developers - | | This permission applies to projects. Allows the user to be able to run an SCM update on a project. |
|
||||
| | Team “Update” | | Engineers | |
|
||||
+-----------------------+------------------------+-----------------------------------------------------------------------------------------------------------+
|
||||
|
||||
|
||||
Function of roles: editing and creating
|
||||
------------------------------------------
|
||||
|
||||
Organization “resource roles” are specific to a certain resource type, such as workflows. Being a member of such a role usually provides two types of permissions. In the case of workflows, a user who is given the "workflow admin role" for the organization "Default" receives the following:
|
||||
|
||||
- This user can create new workflows in the organization "Default"
- This user can edit all workflows in the "Default" organization
|
||||
|
||||
One exception is job templates, where having the role is independent of creation permission (more details in its own section).
|
||||
|
||||
Independence of resource roles and organization membership roles
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
Resource-specific organization roles are independent of the organization roles of admin and member. Having the "workflow admin role" for the "Default" organization will not allow a user to view all users in the organization, but having a "member" role in the "Default" organization will. The two types of roles are delegated independently of each other.
|
||||
|
||||
|
||||
Necessary permissions to edit job templates
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Users can edit fields not impacting job runs (non-sensitive fields) with a Job Template admin role alone. However, to edit fields that impact job runs in a job template, a user needs the following:
|
||||
|
||||
- **admin** role to the job template and container groups
|
||||
- **use** role to related project
|
||||
- **use** role to related inventory
|
||||
- **use** role to related instance groups
|
||||
|
||||
An "organization job template admin" role was introduced, but having this role isn't sufficient by itself to edit a job template within the organization if the user does not have use role to the project / inventory / instance group or an admin role to the container group that a job template uses.
|
||||
|
||||
In order to delegate *full* job template control (within an organization) to a user or team, you will need to grant the team or user all three organization-level roles:
|
||||
|
||||
- job template admin
|
||||
- project admin
|
||||
- inventory admin
|
||||
|
||||
This ensures that the user (or all users who are members of the team with these roles) has full access to modify job templates in the organization. If a job template uses an inventory or project from another organization, the user with these organization roles may still not have permission to modify that job template. For clarity of managing permissions, it is best practice not to mix projects and inventories from different organizations.
|
||||
|
||||
RBAC permissions
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Each role should have a content object, for instance, the org admin role has a content object of the org. To delegate a role, you need admin permission to the content object, with some exceptions that would result in you being able to reset a user's password.
|
||||
|
||||
**Parent** is the organization.
|
||||
|
||||
**Allow** is what this new permission will explicitly allow.
|
||||
|
||||
**Scope** is the parent resource that this new role will be created on. Example: ``Organization.project_create_role``.
|
||||
|
||||
An assumption is being made that the creator of the resource should be given the admin role for that resource. If there are any instances where resource creation does not also imply resource administration, they will be explicitly called out.
|
||||
|
||||
Here are the rules associated with each admin type:
|
||||
|
||||
**Project Admin**
|
||||
|
||||
- Allow: Create, read, update, delete any project
|
||||
- Scope: Organization
|
||||
- User Interface: *Project Add Screen - Organizations*
|
||||
|
||||
**Inventory Admin**
|
||||
|
||||
- Parent: Org admin
|
||||
- Allow: Create, read, update, delete any inventory
|
||||
- Scope: Organization
|
||||
- User Interface: *Inventory Add Screen - Organizations*
|
||||
|
||||
.. note::
|
||||
|
||||
As it is with the **Use** role, if you give a user Project Admin and Inventory Admin, it allows them to create Job Templates (not workflows) for your organization.
|
||||
|
||||
**Credential Admin**
|
||||
|
||||
- Parent: Org admin
|
||||
- Allow: Create, read, update, delete shared credentials
|
||||
- Scope: Organization
|
||||
- User Interface: *Credential Add Screen - Organizations*
|
||||
|
||||
**Notification Admin**
|
||||
|
||||
- Parent: Org admin
|
||||
- Allow: Assignment of notifications
|
||||
- Scope: Organization
|
||||
|
||||
**Workflow Admin**
|
||||
|
||||
- Parent: Org admin
|
||||
- Allow: Create a workflow
|
||||
- Scope: Organization
|
||||
|
||||
**Org Execute**
|
||||
|
||||
- Parent: Org admin
|
||||
- Allow: Executing JTs and WFJTs
|
||||
- Scope: Organization
|
||||
|
||||
|
||||
The following is a sample scenario showing an organization with its roles and which resource(s) each have access to:
|
||||
|
||||
.. image:: ../common/images/rbac-multiple-resources-scenario.png
|
||||
docs/docsite/rst/userguide/teams.rst
|
||||
.. _ug_teams:
|
||||
|
||||
Teams
|
||||
======
|
||||
|
||||
.. index::
|
||||
single: teams
|
||||
|
||||
A :term:`Team` is a subdivision of an organization with associated users, projects, credentials, and permissions. Teams provide a means to implement role-based access control schemes and delegate responsibilities across organizations. For instance, permissions may be granted to a whole Team rather than each user on the Team.
|
||||
|
||||
You can create as many Teams of users as make sense for your Organization. Each Team can be assigned permissions, just as with Users. Teams can also scalably assign ownership for Credentials, preventing multiple interface click-throughs to assign the same Credentials to the same user.
|
||||
|
||||
Access the Teams page by clicking **Teams** from the left navigation bar. The team list may be sorted and searched by **Name** or **Organization**.
|
||||
|
||||
|
||||
.. image:: ../common/images/organizations-teams-list.png
|
||||
|
||||
Clicking the Edit (|edit-button|) button next to the list of **Teams** allows you to edit details about the team. You can also review **Users** and **Permissions** associated with this Team.
|
||||
|
||||
.. |edit-button| image:: ../common/images/edit-button.png
|
||||
|
||||
|
||||
.. _ug_team_create:
|
||||
|
||||
Create a Team
|
||||
----------------
|
||||
|
||||
To create a new Team:
|
||||
|
||||
1. Click the **Add** button.
|
||||
|
||||
|
||||
|Teams - create new team|
|
||||
|
||||
.. |Teams - create new team| image:: ../common/images/teams-create-new-team.png
|
||||
|
||||
2. Enter the appropriate details into the following fields:
|
||||
|
||||
- Name
|
||||
- Description (optional)
|
||||
- Organization (Choose from an existing organization)
|
||||
|
||||
3. Click **Save**.
|
||||
|
||||
Once the Team is successfully created, AWX opens the **Details** dialog, which also allows you to review and edit your Team information.
|
||||
|
||||
|Teams - example team successfully created|
|
||||
|
||||
.. |Teams - example team successfully created| image:: ../common/images/teams-example-team-successfully-created.png
|
||||
|
||||
|
||||
Team Access
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: teams; users
|
||||
|
||||
This tab displays the list of Users that are members of this Team. This list may be searched by **Username**, **First Name**, or **Last Name**. For more information, refer to :ref:`ug_users`.
|
||||
|
||||
|Teams - users list|
|
||||
|
||||
.. |Teams - users list| image:: ../common/images/teams-users-list.png
|
||||
|
||||
|
||||
.. _ug_teams_permissions:
|
||||
|
||||
Add a User
|
||||
^^^^^^^^^^^
|
||||
|
||||
In order to add a user to a team, the user must already be created. Refer to :ref:`ug_users_create` to create a user. Adding a user to a team adds them as a member only; specifying a role for the user on different resources can be done in the **Access** tab. To add existing users to the Team:
|
||||
|
||||
1. In the **Access** tab, click the **Add** button.
|
||||
|
||||
2. Follow the prompts to add user(s) and assign them to roles.
|
||||
|
||||
3. Click **Save** when done.
|
||||
|
||||
To remove roles for a particular user, click the disassociate (x) button next to its resource.
|
||||
|
||||
.. image:: ../common/images/permissions-disassociate.png
|
||||
|
||||
This launches a confirmation dialog, asking you to confirm the disassociation.
|
||||
|
||||
.. image:: ../common/images/permissions-disassociate-confirm.png
|
||||
|
||||
|
||||
Team Roles
|
||||
~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: teams; permissions
|
||||
pair: teams; roles
|
||||
|
||||
|
||||
Selecting the **Roles** view displays a list of the permissions that are currently available for this Team. The permissions list may be sorted and searched by **Resource Name**, **Type**, or **Role**.
|
||||
|
||||
|Teams - permissions list|
|
||||
|
||||
.. |Teams - permissions list| image:: ../common/images/teams-permissions-sample-roles.png
|
||||
|
||||
Permissions are the set of privileges assigned to Teams that provide the ability to read, modify, and administer projects, inventories, and other AWX elements. By default, the Team is given the "read" permission (also called a role).
|
||||
|
||||
Permissions must be set explicitly via an Inventory, Project, Job Template, or within the Organization view.
|
||||
|
||||
|
||||
Add Team Permissions
|
||||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
To add permissions to a Team:
|
||||
|
||||
1. Click the **Add** button, which opens the Add Permissions Wizard.
|
||||
|
||||
.. image:: ../common/images/teams-users-add-permissions-form.png
|
||||
:alt: Add Permissions Form
|
||||
|
||||
2. Click to select the object for which the team will have access and click **Next**.
|
||||
|
||||
3. Click to select the resource to assign team roles and click **Next**.
|
||||
|
||||
.. image:: ../common/images/teams-permissions-templates-select.png
|
||||
|
||||
4. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have different options available.
|
||||
|
||||
.. image:: ../common/images/teams-permissions-template-roles.png
|
||||
|
||||
|
||||
5. Click **Save** when done, and the Add Permissions Wizard closes to display the updated profile for the user with the roles assigned for each selected resource.
|
||||
|
||||
.. image:: ../common/images/teams-permissions-sample-roles.png
|
||||
|
||||
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
You can also add teams, or individual or multiple users, and assign them permissions at the object level (projects, inventories, job templates, and workflow templates) as well. This feature reduces the time for an organization to onboard many users at one time.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
docs/docsite/rst/userguide/users.rst
|
||||
.. _ug_users:
|
||||
|
||||
Users
|
||||
-----
|
||||
|
||||
.. index::
|
||||
single: users
|
||||
|
||||
|
||||
A :term:`User` is someone who has access to AWX with associated permissions and credentials. Access the Users page by clicking **Users** from the left navigation bar. The User list may be sorted and searched by **Username**, **First Name**, or **Last Name**; click the column headers to toggle your sorting preference.
|
||||
|
||||
.. image:: ../common/images/users-home-with-example-users.png
|
||||
:alt: Users - home with example users
|
||||
|
||||
You can easily view a user's permissions and user type by looking beside their user name in the User overview screen.
|
||||
|
||||
|
||||
.. _ug_users_create:
|
||||
|
||||
Create a User
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
To create a new user:
|
||||
|
||||
1. Click the **Add** button, which opens the Create User dialog.
|
||||
|
||||
2. Enter the appropriate details about your new user. Fields marked with an asterisk (*) are required.
|
||||
|
||||
.. note::
|
||||
|
||||
When modifying your own password, log out and log back in again in order for it to take effect.
|
||||
|
||||
Three types of Users can be assigned:
|
||||
|
||||
- **Normal User**: Normal Users have read and write access limited to the resources (such as inventory, projects, and job templates) for which that user has been granted the appropriate roles and privileges.
|
||||
|
||||
- **System Auditor**: Auditors implicitly inherit the read-only capability for all objects within the environment.
|
||||
|
||||
- **System Administrator**: A System Administrator (also known as Superuser) has full system administration privileges -- with full read and write privileges over the entire installation. A System Administrator is typically responsible for managing all aspects of AWX and delegating responsibilities for day-to-day work to various Users. Assign with caution!
|
||||
|
||||
.. image:: ../common/images/users-create-user-form-types.png
|
||||
:alt: User Types
|
||||
|
||||
.. note::
|
||||
|
||||
The initial user (usually "admin") created by the installation process is a Superuser. One Superuser must always exist. To delete the "admin" user account, you must first create another Superuser account.
|
||||
|
||||
3. Select **Save** when finished.
|
||||
|
||||
Once the user is successfully created, the **User** dialog opens for that newly created User.
|
||||
|
||||
.. |edit-button| image:: ../common/images/edit-button.png
|
||||
|
||||
.. image:: ../common/images/users-edit-user-form.png
|
||||
:alt: Edit User Form
|
||||
|
||||
You may delete the user from its Details screen by clicking **Delete**, or once you exit the details screen, you can delete users from a list of current users. See :ref:`ug_users_delete` for more detail.
|
||||
|
||||
The same window opens whether you click on the user's name, or the Edit (|edit-button|) button beside the user. Here, the User's **Organizations**, **Teams**, and **Roles**, as well as other user membership details, may be reviewed and modified.
|
||||
|
||||
.. note::
|
||||
|
||||
If the user is not a newly-created user, the user's details screen displays the last login activity of that user.
|
||||
|
||||
.. image:: ../common/images/users-last-login-info.png
|
||||
|
||||
When you log in as yourself, and view the details of your own user profile, you can manage tokens from your user profile. See :ref:`ug_users_tokens` for more detail.
|
||||
|
||||
.. image:: ../common/images/user-with-token-button.png
|
||||
|
||||
.. _ug_users_delete:
|
||||
|
||||
Delete a User
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
Before you can delete a user, you must have user permissions. When you delete a user account, the name and email of the user are permanently removed from AWX.
|
||||
|
||||
1. Expand the **Access** menu from the left navigation bar, and click **Users** to display a list of the current users.
|
||||
|
||||
2. Select the check box(es) for the user(s) that you want to remove and click **Delete**.
|
||||
|
||||
.. image:: ../common/images/users-home-users-checked-delete.png
|
||||
|
||||
3. Click **Delete** in the confirmation warning message to permanently delete the user.
|
||||
|
||||
|
||||
Users - Organizations
|
||||
~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: users; organizations
|
||||
|
||||
This displays the list of organizations of which that user is a member. This list may be searched by Organization Name or Description.
|
||||
Organization membership cannot be modified from this display panel.
|
||||
|
||||
|Users - Organizations list for example user|
|
||||
|
||||
.. |Users - Organizations list for example user| image:: ../common/images/users-organizations-list-for-example-user.png
|
||||
|
||||
Users - Teams
|
||||
~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: users; teams
|
||||
|
||||
This displays the list of teams of which that user is a member. This list may be searched by **Team Name** or **Description**. Team membership cannot be modified from this display panel. For more information, refer to :ref:`ug_teams`.
|
||||
|
||||
Until a Team has been created and the user has been assigned to that team, the assigned Teams Details for the User appears blank.
|
||||
|
||||
|Users - teams list for example user|
|
||||
|
||||
.. |Users - teams list for example user| image:: ../common/images/users-teams-list-for-example-user.png
|
||||
|
||||
|
||||
.. _ug_users_roles:
|
||||
|
||||
Users - Roles
|
||||
~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: users; permissions
|
||||
pair: users; roles
|
||||
|
||||
|
||||
Roles are the set of permissions assigned to this user (role-based access controls) that provide the ability to read, modify, and administer projects, inventories, job templates, and other AWX elements.
|
||||
|
||||
.. note::
|
||||
|
||||
It is important to note that the job template administrator may not have access to other resources (inventory, project, credentials, or instance groups) associated with the template. Without access to these, certain fields in the job template are not editable. System Administrators may grant individual users permissions to certain resources as necessary. See :ref:`ug_users_permissions` for detail.
|
||||
|
||||
This screen displays a list of the roles that are currently assigned to the selected User and can be sorted and searched by **Name**, **Type**, or **Role**.
|
||||
|
||||
|Users - permissions list for example user|
|
||||
|
||||
.. |Users - permissions list for example user| image:: ../common/images/users-permissions-list-for-example-user.png
|
||||
|
||||
.. _ug_users_permissions:
|
||||
|
||||
Add Permissions
|
||||
^^^^^^^^^^^^^^^^
|
||||
|
||||
To add permissions to a particular user:
|
||||
|
||||
1. Click the **Add** button, which opens the Add Permissions Wizard.
|
||||
|
||||
.. image:: ../common/images/users-add-permissions-form.png
|
||||
:alt: Add Permissions Form
|
||||
|
||||
2. Click to select the object for which the user will have access and click **Next**.
|
||||
|
||||
3. Click to select the resource to assign team roles and click **Next**.
|
||||
|
||||
.. image:: ../common/images/users-permissions-IG-select.png
|
||||
|
||||
4. Click the checkbox beside the role to assign that role to your chosen type of resource. Different resources have different options available.
|
||||
|
||||
.. image:: ../common/images/users-permissions-IG-roles.png
|
||||
|
||||
|
||||
5. Click **Save** when done, and the Add Permissions Wizard closes to display the updated profile for the user with the roles assigned for each selected resource.
|
||||
|
||||
.. image:: ../common/images/users-permissions-sample-roles.png
|
||||
|
||||
To remove Permissions for a particular resource, click the disassociate (x) button next to its resource. This launches a confirmation dialog, asking you to confirm the disassociation.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
You can also add teams or individual users (one or many at a time) and assign them permissions at the object level (templates, credentials, inventories, projects, organizations, or instance groups). This feature reduces the time it takes for an organization to onboard many users at one time.
|
||||
|
||||
|
||||
.. _ug_users_tokens:
|
||||
|
||||
Users - Tokens
|
||||
~~~~~~~~~~~~~~~
|
||||
|
||||
The **Tokens** tab will only be present for your user (yourself). Before you add a token for your user, you may want to :ref:`create an application <ug_applications_auth_create>` if you want to associate your token to it. You may also create a personal access token (PAT) without associating it with any application. To create a token for your user:
|
||||
|
||||
1. If not already selected, click on your user from the Users list view to configure your OAuth 2 tokens.
|
||||
|
||||
.. include:: ../common/add-token.rst
|
||||
|
||||
174
docs/docsite/rst/userguide/webhooks.rst
Normal file
@@ -0,0 +1,174 @@
|
||||
.. _ug_webhooks:
|
||||
|
||||
Working with Webhooks
|
||||
======================
|
||||
|
||||
.. index::
|
||||
single: webhooks
|
||||
pair: webhooks; GitHub
|
||||
pair: webhooks; GitLab
|
||||
pair: webhooks; payload
|
||||
|
||||
A :term:`Webhook` provides the ability to execute specified commands between apps over the web. AWX currently provides webhook integration with GitHub and GitLab. This section describes the procedure for setting up a webhook through their respective services.
|
||||
|
||||
.. contents::
|
||||
:local:
|
||||
|
||||
The webhook post-status-back functionality for GitHub and GitLab is designed to work only under certain CI events. Receiving any other kind of event will result in messages like the one below in the service log:
|
||||
|
||||
::
|
||||
|
||||
awx.main.models.mixins Webhook event did not have a status API endpoint associated, skipping.
|
||||
|
||||
|
||||
GitHub webhook setup
|
||||
---------------------
|
||||
|
||||
AWX can run jobs based on incoming webhook events. Job status information (pending, error, success) can be sent back only for pull request events. If you do not want AWX to post job statuses back to the webhook service, skip steps 1-2, and go directly to :ref:`step 3 <ug_webhooks_setup_github>`.
|
||||
|
||||
1. Optionally generate a personal access token (PAT) for use with AWX.
|
||||
|
||||
a. In the profile settings of your GitHub account, click **Settings**.
|
||||
|
||||
b. At the very bottom of the settings, click **<> Developer Settings**.
|
||||
|
||||
c. In the Developer settings, click **Personal access tokens**.
|
||||
d. From the Personal access tokens screen, click **Generate new token**.
|
||||
e. When prompted, enter your GitHub account password to continue.
|
||||
f. In the **Note** field, enter a brief description about what this PAT will be used for.
|
||||
g. In the Scope fields, the automation webhook only needs repo scope access, with the exception of invites. For information about other scopes, click the link right above the table to access the docs.
|
||||
|
||||
.. image:: ../common/images/webhooks-create-webhook-github-scope.png
|
||||
|
||||
h. Click the **Generate Token** button.
|
||||
|
||||
i. Once the token is generated, make sure you copy the PAT, as it will be used in a later step. You will not be able to access this token again in GitHub.
|
||||
|
||||
2. Use the PAT to optionally create a GitHub credential:
|
||||
|
||||
a. Go to your instance, and :ref:`create a new credential for the GitHub PAT <ug_credentials_github>` using the above generated token.
|
||||
b. Make note of the name of this credential, as it will be used in the job template that posts back to GitHub.
|
||||
|
||||
.. image:: ../common/images/webhooks-create-credential-github-PAT-token.png
|
||||
|
||||
c. Go to the job template with which you want to enable webhooks, and select the webhook service and credential you created in the previous step.
|
||||
|
||||
.. image:: ../common/images/webhooks-job-template-gh-webhook-credential.png
|
||||
|
||||
|
|
||||
|
||||
d. Click **Save**. Now your job template is set up to be able to post back to GitHub. An example of one may look like this:
|
||||
|
||||
.. image:: ../common/images/webhooks-awx-to-github-status.png
|
||||
|
||||
.. _ug_webhooks_setup_github:
|
||||
|
||||
3. Go to a specific GitHub repo you want to configure webhooks and click **Settings**.
|
||||
|
||||
.. image:: ../common/images/webhooks-github-repo-settings.png
|
||||
|
||||
4. Under Options, click **Webhooks**.
|
||||
|
||||
.. image:: ../common/images/webhooks-github-repo-settings-options.png
|
||||
|
||||
5. On the Webhooks page, click **Add webhook**.
|
||||
|
||||
6. To complete the Add Webhook page, you need to :ref:`enable webhooks in a job template <ug_jt_enable_webhooks>` (or in a :ref:`workflow job template <ug_wfjt_enable_webhooks>`), which will provide you with the following information:
|
||||
|
||||
a. Copy the contents of the **Webhook URL** from the job template, and paste it in the **Payload URL** field. GitHub uses this address to send event payloads to AWX.
|
||||
b. Set the **Content type** to **application/json**.
|
||||
c. Copy the contents of the **Webhook Key** from the job template above and paste it in the **Secret** field.
|
||||
d. Leave **Enable SSL verification** selected.
|
||||
|
||||
.. image:: ../common/images/webhooks-github-repo-add-webhook.png
|
||||
|
||||
|
|
||||
|
||||
e. Next, you must select the types of events you want to trigger a webhook. Any such event will trigger the Job or Workflow. In order to have job status (pending, error, success) sent back to GitHub, you must select **Pull requests** in the individual events section.
|
||||
|
||||
.. image:: ../common/images/webhooks-github-repo-choose-events.png
|
||||
|
||||
f. Leave **Active** checked and click **Add Webhook**.
|
||||
|
||||
.. image:: ../common/images/webhooks-github-repo-add-webhook-actve.png
|
||||
|
||||
|
||||
7. After your webhook is configured, it displays in the list of webhooks active for your repo, along with the ability to edit or delete it. Click on a webhook, and it brings you to the Manage webhook screen. Scroll to the very bottom of the screen to view all the delivery attempts made to your webhook and whether they succeeded or failed.
|
||||
|
||||
.. image:: ../common/images/webhooks-github-repo-webhooks-deliveries.png
|
||||
|
||||
For more information, refer to the `GitHub Webhooks developer documentation <https://developer.github.com/webhooks/>`_.
|
||||
|
||||
|
||||
|
||||
GitLab webhook setup
|
||||
---------------------
|
||||
|
||||
AWX has the ability to run jobs based on a triggered webhook event coming in. Job status information (pending, error, success) can be sent back only for merge request events. If you determine you do not want AWX to post job statuses back to the webhook service, skip steps 1-2, and go directly to :ref:`step 3 <ug_webhooks_setup_gitlab>`.
|
||||
|
||||
1. Optionally, generate a personal access token (PAT). This token gives AWX the ability to post statuses back when it runs jobs based on incoming webhooks.
|
||||
|
||||
a. In the profile settings of your GitLab account, click **Settings**.
|
||||
|
||||
b. On the sidebar, under User Settings, click **Access Tokens**.
|
||||
|
||||
.. image:: ../common/images/webhooks-create-webhook-gitlab-settings.png
|
||||
|
||||
c. In the **Name** field, enter a brief description about what this PAT will be used for.
|
||||
d. Skip the **Expires at** field unless you want to set an expiration date for your webhook.
|
||||
e. In the Scopes fields, select the ones applicable to your integration. For AWX, API is the only selection necessary.
|
||||
|
||||
.. image:: ../common/images/webhooks-create-webhook-gitlab-scope.png
|
||||
|
||||
f. Click the **Create personal access token** button.
|
||||
|
||||
g. Once the token is generated, make sure you copy the PAT, as it will be used in a later step. You will not be able to access this token again in GitLab.
|
||||
|
||||
2. Use the PAT to optionally create a GitLab credential:
|
||||
|
||||
a. Go to your instance, and :ref:`create a new credential for the GitLab PAT <ug_credentials_gitlab>` using the above generated token.
|
||||
b. Make note of the name of this credential, as it will be used in the job template that posts back to GitLab.
|
||||
|
||||
.. image:: ../common/images/webhooks-create-credential-gitlab-PAT-token.png
|
||||
|
||||
c. Go to the job template with which you want to enable webhooks, and select the webhook service and credential you created in the previous step.
|
||||
|
||||
.. image:: ../common/images/webhooks-job-template-gl-webhook-credential.png
|
||||
|
||||
|
|
||||
|
||||
d. Click **Save**. Now your job template is set up to be able to post back to GitLab. An example of one may look like this:
|
||||
|
||||
.. image:: ../common/images/webhooks-awx-to-gitlab-status.png
|
||||
|
||||
|
||||
.. _ug_webhooks_setup_gitlab:
|
||||
|
||||
3. Go to a specific GitLab repo you want to configure webhooks and click **Settings > Integrations**.
|
||||
|
||||
.. image:: ../common/images/webhooks-gitlab-repo-settings.png
|
||||
|
||||
4. To complete the Integrations page, you need to :ref:`enable webhooks in a job template <ug_jt_enable_webhooks>` (or in a :ref:`workflow job template <ug_wfjt_enable_webhooks>`), which will provide you with the following information:
|
||||
|
||||
a. Copy the contents of the **Webhook URL** from the job template above, and paste it in the **URL** field. GitLab uses this address to send event payloads to AWX.
|
||||
b. Copy the contents of the **Webhook Key** from the job template above and paste it in the **Secret Token** field.
|
||||
c. Next, you must select the types of events you want to trigger a webhook. Any such event will trigger the Job or Workflow. In order to have job status (pending, error, success) sent back to GitLab, you must select **Merge request events** in the Trigger section.
|
||||
d. Leave **Enable SSL verification** selected.
|
||||
e. Click **Add webhook**.
|
||||
|
||||
.. image:: ../common/images/webhooks-gitlab-repo-add-webhook.png
|
||||
|
||||
|
||||
5. After your webhook is configured, it displays in the list of Project Webhooks for your repo, along with the ability to test events, edit or delete the webhook. Testing a webhook event displays the results at the top of the page whether it succeeded or failed.
|
||||
|
||||
For more information, refer to the `GitLab webhooks integrations documentation <https://docs.gitlab.com/ee/user/project/integrations/webhooks.html>`_.
|
||||
|
||||
|
||||
Payload output
|
||||
---------------
|
||||
|
||||
The entire payload is exposed as an extra variable. To view the payload information, go to the Jobs Detail view of the job template that ran with the webhook enabled. In the **Extra Variables** field of the Details pane, view the payload output from the ``awx_webhook_payload`` variable, as shown in the example below.
|
||||
|
||||
.. image:: ../common/images/webhooks-jobs-extra-vars-payload.png
|
||||
|
||||
.. image:: ../common/images/webhooks-jobs-extra-vars-payload-expanded.png
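As a purely illustrative sketch (assuming the webhook came from a GitHub push or pull request event, which includes a ``repository`` object in its payload), a playbook run by the triggered template could consume part of the payload like this:

::

   ---
   - hosts: localhost
     gather_facts: false
     tasks:
       - name: Show which repository triggered this webhook job
         debug:
           msg: "Triggered by {{ awx_webhook_payload.repository.full_name | default('manual launch') }}"

The ``default`` filter keeps the same playbook usable when the template is launched manually and no payload variable is present.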
|
||||
594
docs/docsite/rst/userguide/workflow_templates.rst
Normal file
@@ -0,0 +1,594 @@
|
||||
.. _ug_wf_templates:
|
||||
|
||||
Workflow Job Templates
|
||||
========================
|
||||
|
||||
.. index::
|
||||
single: workflow job templates
|
||||
|
||||
A :term:`workflow job template` links together a sequence of disparate resources that accomplishes the task of tracking the full set of jobs that were part of the release process as a single unit. These resources may include:
|
||||
|
||||
- job templates
|
||||
- workflow templates
|
||||
- project syncs
|
||||
- inventory source syncs
|
||||
|
||||
The **Templates** menu opens a list of the workflow and job templates that are currently available. The default view is collapsed (Compact), showing the template name, template type, and the statuses of the jobs that ran using that template, but you can click **Expanded** to view more information. This list is sorted alphabetically by name, but you can sort by other criteria, or search by various fields and attributes of a template. From this screen, you can launch (|launch|), edit (|edit|), and copy (|copy|) a workflow job template.
|
||||
|
||||
.. |delete| image:: ../common/images/delete-button.png
|
||||
|
||||
|
||||
Only workflow templates have the Workflow Visualizer icon (|wf-viz-icon|) as a shortcut for accessing the workflow editor.
|
||||
|
||||
.. |wf-viz-icon| image:: ../common/images/wf-viz-icon.png
|
||||
|
||||
|
||||
|Wf templates - home with example wf template|
|
||||
|
||||
.. |Wf templates - home with example wf template| image:: ../common/images/wf-templates-home-with-example-wf-template.png
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
Workflow templates can be used as building blocks for another workflow template. Many parameters in a workflow template allow you to enable **Prompt on Launch** that can be modified at the workflow job template level, and do not affect the values assigned at the individual workflow template level. For instructions, see the :ref:`ug_wf_editor` section.
|
||||
|
||||
|
||||
Create a Workflow Template
|
||||
----------------------------
|
||||
|
||||
To create a new workflow job template:
|
||||
|
||||
1. Click the |add options template| button then select **Workflow Template** from the menu list.
|
||||
|
||||
.. |add options template| image:: ../common/images/add-options-template.png
|
||||
|
||||
|Wf templates - create new wf template|
|
||||
|
||||
.. |Wf templates - create new wf template| image:: ../common/images/wf-templates-create-new-wf-template.png
|
||||
|
||||
|
||||
2. Enter the appropriate details into the following fields:
|
||||
|
||||
.. note::
|
||||
|
||||
If a field has the **Prompt on launch** checkbox selected, launching the workflow template, or when the workflow template is used within another workflow template, it will prompt for the value for that field upon launch. Most prompted values will override any values set in the workflow job template; exceptions are noted below.
|
||||
|
||||
.. list-table::
|
||||
:widths: 10 35 30
|
||||
:header-rows: 1
|
||||
|
||||
* - Field
|
||||
- Options
|
||||
- Prompt on Launch
|
||||
* - **Name**
|
||||
- Enter a name for the job.
|
||||
- N/A
|
||||
* - **Description**
|
||||
- Enter an arbitrary description as appropriate (optional).
|
||||
- N/A
|
||||
* - **Organization**
|
||||
- Choose the organization to be used with this template from the organizations available to the currently logged in user.
|
||||
- N/A
|
||||
* - **Inventory**
|
||||
- Optionally choose the inventory to be used with this template from the inventories available to the currently logged in user.
|
||||
- Yes
|
||||
* - **Limit**
|
||||
- A host pattern to further constrain the list of hosts managed or affected by the playbook. Multiple patterns can be separated by colons (``:``). As with core Ansible, ``a:b`` means "in group a or b", ``a:b:&c`` means "in a or b but must be in c", and ``a:!b`` means "in a, and definitely not in b". For more information and examples refer to `Patterns <https://docs.ansible.com/ansible/latest/inventory_guide/intro_patterns.html>`_ in the Ansible documentation.
|
||||
- Yes
|
||||
* - **Source control branch**
|
||||
- Select a branch for the workflow. This branch is applied to all workflow job template nodes that prompt for a branch.
|
||||
- Yes
|
||||
* - **Labels**
|
||||
- * Optionally supply labels that describe this workflow job template, such as "dev" or "test". Labels can be used to group and filter workflow job templates and completed jobs in the display.
|
||||
* Labels are created when they are added to the workflow template. Labels are associated to a single Organization using the Project that is provided in the workflow template. Members of the Organization can create labels on a workflow template if they have edit permissions (such as an admin role).
|
||||
* Once the workflow template is saved, the labels appear in the WFJT's Details view.
|
||||
* Click the (|x|) beside a label to remove it. When a label is removed, it is no longer associated with that particular workflow template, but it will remain associated with any other jobs or job templates that reference it.
|
||||
* Labels are only applied to the workflow templates not the job template nodes that are used in the workflow.
|
||||
- * Yes. If selected, even if a default value is supplied, you will be prompted upon launch to supply additional labels if needed.
|
||||
* You will not be able to delete existing labels - clicking (|x-circle|) only removes the newly added labels, not existing default labels.
|
||||
* - **Variables**
|
||||
- * Pass extra command line variables to the playbook. This is the "-e" or "--extra-vars" command line parameter for ansible-playbook that is documented in the Ansible documentation at `Passing Variables on the Command Line <https://docs.ansible.com/ansible/latest/reference_appendices/general_precedence.html>`_.
|
||||
* Provide key/value pairs using either YAML or JSON. These variables have maximum precedence and override other variables specified elsewhere. An example value might be:
|
||||
|
||||
::
|
||||
|
||||
git_branch: production
|
||||
release_version: 1.5
|
||||
- * Yes. If you want to be able to specify ``extra_vars`` on a schedule, you must select **Prompt on Launch** for **Variables** on the workflow job template, or enable a survey on the workflow job template; the answered survey questions then become ``extra_vars``.
|
||||
* For more information about extra variables, refer to :ref:`ug_wf_templates_extravars`.
|
||||
* - **Job Tags**
|
||||
- Begin typing and selecting the **Create x** drop-down to specify which parts of the playbook should be executed.
|
||||
- Yes
|
||||
* - **Skip Tags**
|
||||
- Begin typing and selecting the **Create x** drop-down to specify certain tasks or parts of the playbook to skip.
|
||||
- Yes
|
||||
|
||||
For more information about **Job Tags** and **Skip Tags**, refer to `Tags <https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_tags.html>`_ in the Ansible documentation.
|
||||
|
||||
.. |x-circle| image:: ../common/images/x-delete-button.png
|
||||
|
||||
.. |x| image:: ../common/images/x-button.png
|
||||
|
||||
|
||||
3. **Options**: Specify options for launching this workflow job template, if necessary.
|
||||
|
||||
- Check **Enable Webhooks** to turn on the ability to interface with a predefined SCM system web service that is used to launch a workflow job template. Currently supported SCM systems are GitHub and GitLab.
|
||||
|
||||
.. _ug_wfjt_enable_webhooks:
|
||||
|
||||
If you enable webhooks, other fields display, prompting for additional information:
|
||||
|
||||
- **Webhook Service**: Select which service to listen for webhooks from
|
||||
- **Webhook Credential**: Optionally, provide a GitHub or GitLab personal access token (PAT) as a credential to use to send status updates back to the webhook service. Before you can select it, the credential must exist. See :ref:`ug_credentials_cred_types` to create one.
|
||||
|
||||
Upon **Save**, additional fields populate and the Workflow Visualizer automatically opens.
|
||||
|
||||
- **Webhook URL**: Automatically populated with the URL for the webhook service to POST requests to.
|
||||
- **Webhook Key**: Generated shared secret to be used by the webhook service to sign payloads sent to AWX. This must be configured in the settings on the webhook service in order for AWX to accept webhooks from this service.
|
||||
|
||||
For additional information on setting up webhooks, see :ref:`ug_webhooks`.
|
||||
|
||||
- Check **Enable Concurrent Jobs** to allow simultaneous runs of this workflow. Refer to :ref:`ug_job_concurrency` for additional information.
|
||||
|
||||
|
||||
4. When you have completed configuring the workflow template, click **Save**.
|
||||
|
||||
Saving the template exits the workflow template page and the Workflow Visualizer opens to allow you to build a workflow. See the :ref:`ug_wf_editor` section for further instructions. Otherwise, you may close the Workflow Visualizer to return to the Details tab of the newly saved template in order to review, edit, add permissions, notifications, schedules, and surveys, or view completed jobs and build a workflow template at a later time. Alternatively, you can click **Launch** to launch the workflow, but you must first save the template prior to launching, otherwise, the **Launch** button remains grayed-out. Also, note the **Notifications** tab is present only after the template has been saved.
|
||||
|
||||
.. image:: ../common/images/wf-templates-wf-template-saved.png
|
||||
|
||||
|
||||
|
||||
|
||||
Work with Permissions
|
||||
-----------------------
|
||||
|
||||
Clicking on **Access** allows you to review, grant, edit, and remove associated permissions for users as well as team members.
|
||||
|
||||
.. image:: ../common/images/wf-template-completed-permissions-view.png
|
||||
|
||||
Click the **Add** button to create new permissions for this workflow template by following the prompts to assign them accordingly.
|
||||
|
||||
|
||||
|
||||
Work with Notifications
|
||||
------------------------
|
||||
|
||||
Clicking on **Notifications** allows you to review any notification integrations you have setup. The **Notifications** tab is present only after the template has been saved.
|
||||
|
||||
.. .. image:: ../common/images/wf-template-completed-notifications-view.png
|
||||
|
||||
Use the toggles to enable or disable the notifications to use with your particular template. For more detail, see :ref:`ug_notifications_on_off`.
|
||||
|
||||
If no notifications have been set up, see :ref:`ug_notifications_create` for detail.
|
||||
|
||||
.. image:: ../common/images/wf-template-no-notifications-blank.png
|
||||
|
||||
|
||||
Refer to :ref:`ug_notifications_types` for additional details on configuring various notification types.
|
||||
|
||||
|
||||
|
||||
View Completed Jobs
|
||||
--------------------
|
||||
|
||||
The **Completed Jobs** tab provides the list of workflow templates that have run. Click **Expanded** to view the various details of each job.
|
||||
|
||||
.. .. image:: ../common/images/wf-template-completed-jobs-list.png
|
||||
|
||||
|
||||
From this view, you can click the job ID or name of the workflow job to see its graphical representation. The example below shows the job details of a workflow job.
|
||||
|
||||
.. image:: ../common/images/wf-template-jobID-detail-example.png
|
||||
|
||||
.. If a workflow template is used in another workflow, the jobs details indicate a parent workflow.
|
||||
|
||||
.. .. image:: ../common/images/wf-template-job-detail-with-parent.png
|
||||
|
||||
.. In the above example, click the parent workflow template, **Overall**, to view its Job Details page and the graphical details of the nodes and statuses of each as they were launched.
|
||||
|
||||
.. .. image:: ../common/images/wf-template-jobs-detail-example.png
|
||||
|
||||
The nodes are marked with labels that help you identify them at a glance. See the legend_ in the :ref:`ug_wf_editor` section for more information.
|
||||
|
||||
|
||||
Work with Schedules
|
||||
----------------------
|
||||
|
||||
.. index::
|
||||
pair: workflow template; scheduling
|
||||
|
||||
Clicking on **Schedules** allows you to review any schedules set up for this template.
|
||||
|
||||
.. .. image:: ../common/images/templates-schedules-example-list.png
|
||||
|
||||
|
||||
|
||||
Schedule a Workflow Template
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
pair: workflow templates; scheduling
|
||||
single: scheduling; add new
|
||||
|
||||
To schedule a workflow job template run, click the **Schedules** tab.
|
||||
|
||||
- If schedules are already set up; review, edit, or enable/disable your schedule preferences.
|
||||
- If schedules have not been set up, refer to :ref:`ug_scheduling` for more information.
|
||||
|
||||
If a workflow template used in a nested workflow has a survey, or the **Prompt on Launch** selected for the inventory option, the **PROMPT** button displays next to the **SAVE** and **CANCEL** buttons on the schedule form. Clicking the **PROMPT** button shows an optional INVENTORY step where you can provide or remove an inventory or skip this step without any changes.
|
||||
|
||||
.. _ug_wf_surveys:
|
||||
|
||||
Surveys
|
||||
---------
|
||||
|
||||
.. index::
|
||||
pair: workflow templates; surveys
|
||||
|
||||
Workflows containing job types of Run or Check provide a way to set up surveys in the Workflow Job Template creation or editing screens. Surveys set extra variables for the playbook similar to the way 'Prompt for Extra Variables' does, but in a user-friendly question and answer way. Surveys also allow for validation of user input. Click the **Survey** tab to create a survey.
|
||||
|
||||
Use cases for surveys are numerous. An example might be if operations wanted to give developers a "push to stage" button they could run without advanced Ansible knowledge. When launched, this task could prompt for answers to questions such as, "What tag should we release?"
|
||||
|
||||
Many types of questions can be asked, including multiple-choice questions.
|
||||
|
||||
.. _ug_wf_surveys_create:
|
||||
|
||||
Create a Survey
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
single: workflow templates; survey creation
|
||||
single: surveys; creation
|
||||
|
||||
|
||||
To create a survey:
|
||||
|
||||
1. Click the **Survey** tab to bring up the **Add Survey** window.
|
||||
|
||||
.. figure:: ../common/images/wf-template-create-survey.png
|
||||
:alt: Workflow Job Template - create survey
|
||||
|
||||
Use the **ON/OFF** toggle button at the top of the screen to quickly activate or deactivate this survey prompt.
|
||||
|
||||
2. A survey can consist of any number of questions. For each question, enter the following information:
|
||||
|
||||
- **Name**: The question to ask the user.
|
||||
- **Description**: (optional) A description of what's being asked of the user.
|
||||
- **Answer Variable Name**: The Ansible variable name to store the user's response in. This is the variable to be used by the playbook. Variable names cannot contain spaces.
|
||||
- **Answer Type**: Choose from the following question types.
|
||||
|
||||
- *Text*: A single line of text. You can set the minimum and maximum length (in characters) for this answer.
|
||||
- *Textarea*: A multi-line text field. You can set the minimum and maximum length (in characters) for this answer.
|
||||
- *Password*: Responses are treated as sensitive information, much like an actual password is treated. You can set the minimum and maximum length (in characters) for this answer.
|
||||
- *Multiple Choice (single select)*: A list of options, of which only one can be selected at a time. Enter the options, one per line, in the **Multiple Choice Options** box.
|
||||
- *Multiple Choice (multiple select)*: A list of options, any number of which can be selected at a time. Enter the options, one per line, in the **Multiple Choice Options** box.
|
||||
- *Integer*: An integer number. You can set the minimum and maximum integer value allowed for this answer.
- *Float*: A decimal number. You can set the minimum and maximum decimal value allowed for this answer.
|
||||
|
||||
- **Default Answer**: Depending on which type chosen, you can supply the default answer to the question. This value is pre-filled in the interface and is used if the answer is not provided by the user.
|
||||
|
||||
- **Required**: Whether or not an answer to this question is required from the user.
|
||||
|
||||
3. Once you have entered the question information, click the **Add** button to add the question.
|
||||
|
||||
A stylized version of the survey is presented in the Preview pane. For any question, you can click on the **Edit** button to edit the question, the **Delete** button to delete the question, and click and drag on the grid icon to rearrange the order of the questions.
|
||||
|
||||
4. Return to the left pane to add additional questions.
|
||||
|
||||
5. When done, click **Save** to save the survey.
|
||||
|
||||
|Workflow-template-completed-survey|
|
||||
|
||||
.. |Workflow-template-completed-survey| image:: ../common/images/wf-template-completed-survey.png
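Surveys can also be managed outside the UI. As a minimal, illustrative sketch (assuming the ``awx.awx`` collection is installed and authenticated against your instance; the template and question names are placeholders), an equivalent survey could be defined in a playbook:

::

   ---
   - hosts: localhost
     gather_facts: false
     tasks:
       - name: Define a workflow job template survey
         awx.awx.workflow_job_template:
           name: "Release pipeline"
           organization: "Default"
           survey_enabled: true
           survey_spec:
             name: "Release survey"
             description: "Questions asked at launch"
             spec:
               - question_name: "What tag should we release?"
                 variable: "release_tag"
                 type: "text"
                 required: true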
|
||||
|
||||
|
||||
Optional Survey Questions
|
||||
~~~~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
.. index::
|
||||
single: workflow templates; survey optional questions
|
||||
single: surveys; optional questions
|
||||
|
||||
The **Required** setting on a survey question determines whether the answer is optional or not for the user interacting with it.
|
||||
|
||||
Behind the scenes, optional survey variables can be passed to the playbook in ``extra_vars``, even when they aren't filled in.
|
||||
|
||||
- If a non-text variable (input type) is marked as optional, and is not filled in, no survey ``extra_var`` is passed to the playbook.
|
||||
|
||||
- If a text input or text area input is marked as optional, is not filled in, and has a minimum length greater than 0, no survey ``extra_var`` is passed to the playbook.
|
||||
|
||||
- If a text input or text area input is marked as optional, is not filled in, and has a minimum length of 0, that survey ``extra_var`` is passed to the playbook, with the value set to an empty string ( "" ).
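As a purely illustrative sketch, assume a survey with an optional text question stored in ``release_notes`` (minimum length 0) and an optional integer question stored in ``batch_size``, both left unanswered at launch. The survey ``extra_vars`` passed to the playbook would then look like this:

::

   ---
   # release_notes is an optional text input with minimum length 0,
   # so it is passed through as an empty string
   release_notes: ""
   # batch_size is an optional non-text (integer) input that was not filled in,
   # so no extra_var is passed for it at all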
|
||||
|
||||
|
||||
.. _ug_wf_editor:
|
||||
|
||||
Workflow Visualizer
|
||||
-----------------------
|
||||
|
||||
.. index::
|
||||
pair: workflow templates; workflow visualizer
|
||||
pair: visualizer; workflow
|
||||
|
||||
|
||||
The Workflow Visualizer provides a graphical way of linking together job templates, workflow templates, project syncs, and inventory syncs to build a workflow template. Before building a workflow template, refer to the :ref:`ug_workflows` section for considerations associated with various scenarios on parent, child, and sibling nodes.
|
||||
|
||||
Build a Workflow
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
You can set up any combination of two or more of the following node types to build a workflow: Template (Job Template or Workflow Job Template), Project Sync, Inventory Sync, or Approval. Each node is represented by a rectangle while the relationships and their associated edge types are represented by a line (or link) that connects them.
|
||||
|
||||
|
||||
1. In the details/edit view of a workflow template, click the **Visualizer** tab or from the Templates list view, click the (|wf-viz-icon|) icon to launch the Workflow Visualizer.
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-new.png
|
||||
|
||||
2. Click the |start| button to display a list of nodes to add to your workflow.
|
||||
|
||||
.. |start| image:: ../common/images/wf-start-button.png
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-new-add-template-list.png
|
||||
|
||||
3. On the right pane, select the type of node you want to add from the drop-down menu:
|
||||
|
||||
.. image:: ../common/images/wf-add-node-selections.png
|
||||
|
||||
If selecting an **Approval** node, see :ref:`ug_wf_approval_nodes` for further detail.
|
||||
|
||||
Selecting a node provides the available valid options associated with it.
|
||||
|
||||
.. note::
|
||||
If you select a job template that does not have a default inventory when populating a workflow graph, the inventory of the parent workflow will be used. Though a credential is not required in a job template, you will not be able to choose a job template for your workflow if it has a credential that requires a password, unless the credential is replaced by a prompted credential.
|
||||
|
||||
4. Once a node is selected, the workflow begins to build, and you must specify the type of action to be taken for the selected node. This action is also referred to as *edge type*.
|
||||
|
||||
5. If the node is a root node, the edge type defaults to **Always** and is non-editable.
|
||||
|
||||
For subsequent nodes, you can select one of the following scenarios (edge type) to apply to each:
|
||||
|
||||
- **Always**: Continue to execute regardless of success or failure.
|
||||
- **On Success**: Upon successful completion, execute the next template.
|
||||
- **On Failure**: Upon failure, execute a different template.
|
||||
|
||||
.. _convergence_node:
|
||||
|
||||
6. Select the behavior of the node if it is a convergent node from the **Convergence** field:
|
||||
|
||||
- **Any** is the default behavior, allowing *any* of the nodes to complete as specified, before triggering the next converging node. As long as the status of one parent meets one of those run conditions, an ANY child node will run. In other words, an ANY node requires **all** nodes to complete, but only one node must complete with the expected outcome.
|
||||
|
||||
- Choose **All** to ensure that *all* nodes complete as specified, before converging and triggering the next node. The purpose of ALL nodes is to make sure that every parent met its expected outcome in order to run the child node. The workflow checks that every parent behaved as expected before running the child node; otherwise, it does not run the child node.
|
||||
|
||||
If selected, the graphical view will label the node as **ALL**.
|
||||
|
||||
.. image:: ../common/images/wf-editor-convergent-node-all.png
|
||||
|
||||
.. note::
|
||||
|
||||
If a node is a root node, or a node that does not have any nodes converging into it, setting the **Convergence** rule does not apply, as its behavior is dictated by the action that triggers it.
|
||||
|
||||
|
||||
7. If a job template used in the workflow has **Prompt on Launch** selected for any of its parameters, a **Prompt** button appears, allowing you to change those values at the node level. Use the wizard to change the value(s) in each of the tabs and click **Confirm** in the Preview tab.
|
||||
|
||||
.. image:: ../common/images/wf-editor-prompt-button-wizard.png
|
||||
|
||||
Likewise, if a workflow template used in the workflow has **Prompt on Launch** selected for the inventory option, use the wizard to supply the inventory at the prompt. If the parent workflow has its own inventory, it will override any inventory that is supplied here.
|
||||
|
||||
.. image:: ../common/images/wf-editor-prompt-button-inventory-wizard.png
|
||||
|
||||
.. note::
|
||||
|
||||
For workflow job templates with promptable fields that are required, but do not have a default, you must provide those values when creating a node before the **Select** button becomes enabled. The two cases that disable the **Select** button until a value is provided via the **Prompt** button: 1) when you select the **Prompt on Launch** checkbox in a workflow job template, but do not provide a default, or 2) when you create a survey question that is required but do not provide a default answer. However, this is **NOT** the case with credentials. Credentials that require a password on launch are **not permitted** when creating a workflow node, since everything needed to launch the node must be provided when the node is created. So, if a workflow job template prompts for credentials, AWX prevents you from being able to select a credential that requires a password.
|
||||
|
||||
You must also click **Select** when the prompt wizard closes in order to apply the changes at that node. Otherwise, any changes you make will revert back to the values set in the actual job template.
|
||||
|
||||
.. image:: ../common/images/wf-editor-wizard-buttons.png
|
||||
|
||||
Once the node is created, it is labeled with its job type. A template that is associated with each workflow node will run based on the selected run scenario as it proceeds. Click the compass (|compass|) icon to display the legend for each run scenario and their job types.
|
||||
|
||||
.. _legend:
|
||||
|
||||
.. |compass| image:: ../common/images/wf-editor-compass-button.png
|
||||
|
||||
.. image:: ../common/images/wf-editor-key-dropdown-list.png
|
||||
|
||||
8. Hovering over a node allows you to add |add node| another node, view info |info node| about the node, edit |edit| the node details, edit an existing link |edit link|, or delete |delete node| the selected node.
|
||||
|
||||
.. |add node| image:: ../common/images/wf-editor-add-button.png
|
||||
.. |edit link| image:: ../common/images/wf-editor-edit-link.png
|
||||
.. |delete node| image:: ../common/images/wf-editor-delete-button.png
|
||||
.. |info node| image:: ../common/images/wf-editor-info-button.png
|
||||
.. |edit| image:: ../common/images/edit-button.png
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-new-add-template.png
|
||||
|
||||
|
||||
9. When done adding/editing a node, click **Select** to save any modifications and render it on the graphical view. For possible ways to build your workflow, see :ref:`ug_wf_building_scenarios`.
|
||||
|
||||
10. When done with building your workflow template, click **Save** to save your entire workflow template and return to the new workflow template details page.
|
||||
|
||||
.. important::
|
||||
|
||||
Clicking **Close** on this pane will not save your work, but instead, closes the entire Workflow Visualizer and you will have to start over.
|
||||
|
||||
|
||||
.. _ug_wf_approval_nodes:
|
||||
|
||||
Approval nodes
|
||||
^^^^^^^^^^^^^^^
|
||||
|
||||
Choosing an **Approval** node requires user intervention in order to advance the workflow. This functions as a means to pause the workflow in between playbooks so that a user can give approval to continue on to the next playbook in the workflow. It gives the user a specified amount of time to intervene, but also allows the user to continue as quickly as possible without having to wait on some other trigger.
|
||||
|
||||
.. image:: ../common/images/wf-node-approval-form.png
|
||||
|
||||
The default for the timeout is none, but you can specify the length of time before the request expires and automatically gets denied. After selecting and supplying the information for the approval node, it displays on the graphical view with a pause (|pause|) icon next to it.
|
||||
|
||||
.. |pause| image:: ../common/images/wf-node-approval-icon.png
|
||||
|
||||
.. image:: ../common/images/wf-node-approval-node.png
|
||||
|
||||
The approver is anyone who can execute the workflow job template containing the approval nodes, has org admin or above privileges (for the org associated with that workflow job template), or any user who has the *Approve* permission explicitly assigned to them within that specific workflow job template.
|
||||
|
||||
.. image:: ../common/images/wf-node-approval-notifications.png
|
||||
|
||||
If pending approval nodes are not approved within the specified time limit (if an expiration was assigned) or they are denied, then they are marked as "timed out" or "failed", respectively, and move on to the next "on fail node" or "always node". If approved, the "on success" path is taken. If you try to POST in the API to a node that has already been approved, denied or timed out, an error message notifies you that this action is redundant, and no further steps will be taken.
|
||||
|
||||
Below shows the various levels of permissions allowed on approval workflows:
|
||||
|
||||
.. image:: ../common/images/wf-node-approval-rbac.png
|
||||
|
||||
.. source file located on google spreadsheet "Workflow approvals chart"
|
||||
|
||||
.. _ug_wf_building_scenarios:
|
||||
|
||||
Node building scenarios
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
You can add a sibling node by clicking the |add node| on the parent node:
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-sibling-node.png
|
||||
|
||||
You can insert another node in between nodes by hovering over the line that connects the two until the |add node| appears. Clicking on the |add node| automatically inserts the node between the two nodes.
|
||||
|
||||
.. image:: ../common/images/wf-editor-insert-node-template.png
|
||||
|
||||
To add a root node to depict a split scenario, click the |start| button again:
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-new-add-template-split.png
|
||||
|
||||
At any node where you want to create a split scenario, hover over the node from which the split scenario begins and click the |add node|. This essentially adds multiple nodes from the same parent node, creating sibling nodes:
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-siblings.png
|
||||
|
||||
.. note::
|
||||
|
||||
When adding a new node, the **PROMPT** button applies to workflow templates as well. Workflow templates will prompt for inventory and surveys.
|
||||
|
||||
If you want to undo the last inserted node, click on another node without making a selection from the right pane. Or, click **Cancel** from the right pane.
|
||||
|
||||
|
||||
Below is an example of a workflow that contains all three types of jobs. It is initiated by a job template that, if it fails to run, proceeds to the project sync job, which, regardless of whether it fails or succeeds, proceeds to the inventory sync job.
|
||||
|
||||
.. image:: ../common/images/wf-editor-create-new-add-template-example.png
|
||||
|
||||
Remember to refer to the Key at the top of the window to identify the meaning of the symbols and colors associated with the graphical depiction.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
In a workflow with a set of sibling nodes having varying edge types, if you remove a node that has a follow-on node attached to it, the attached node automatically joins the set of sibling nodes and retains its edge type:
|
||||
|
||||
|
||||
.. image:: ../common/images/wf-node-delete-scenario.png
|
||||
|
||||
|
||||
You can modify your nodes in the following ways:
|
||||
|
||||
- If you want to edit a node, click on the node you want to edit. The right pane displays the current selections. Make your changes and click **Select** to apply them to the graphical view.
|
||||
|
||||
- To edit the edge type for an existing link (success/failure/always), click on the link. The right pane displays the current selection. Make your changes and click **Save** to apply them to the graphical view.
|
||||
|
||||
.. image:: ../common/images/wf-editor-wizard-edit-link.png
|
||||
|
||||
- To add a new link from one node to another, click the link |edit link| icon that appears on each node. Doing this highlights the nodes that are possible to link to. These feasible options are indicated by the dotted lines. Invalid options are indicated by grayed out boxes (nodes) that would otherwise produce an invalid link. The example below shows the **Demo Project** as a possible option for the **e2e-ec20de52-project** to link to, as indicated by the arrows:
|
||||
|
||||
.. image:: ../common/images/wf-node-link-scenario.png
|
||||
|
||||
- To remove a link, click the link and click the **Unlink** button.
|
||||
|
||||
.. image:: ../common/images/wf-editor-wizard-unlink.png
|
||||
|
||||
This button only appears in the right hand panel if the target or child node has more than one parent. All nodes must be linked to at least one other node at all times so you must create a new link before removing an old one.
|
||||
|
||||
|
||||
Click the Tools icon (|tools|) to zoom, pan, or reposition the view. Alternatively, you can drag the workflow diagram to reposition it on the screen or use the scroll on your mouse to zoom.
|
||||
|
||||
.. |tools| image:: ../common/images/tools.png
|
||||
|
||||
|
||||
|
||||
Launch a Workflow Template
|
||||
-------------------------------
|
||||
|
||||
.. index::
|
||||
pair: workflow templates; jobs, launching
|
||||
|
||||
Launch a workflow template by any of the following ways:
|
||||
|
||||
- Access the workflow templates list from the **Templates** menu on the left navigation bar or while in the workflow template Details view, scroll to the bottom to access the |launch| button from the list of templates.
|
||||
|
||||
.. image:: ../common/images/wf-templates-wf-template-launch.png
|
||||
|
||||
- While in the Workflow Job Template Details view of the job you want to launch, click **Launch**.
|
||||
|
||||
.. |launch| image:: ../common/images/launch-button.png
|
||||
|
||||
Along with any extra variables set in the workflow job template and survey, AWX automatically adds the same variables upon launch as those added for a job template. Additionally, AWX automatically redirects the web browser to the Jobs Details page for this job, displaying the progress and the results.
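As a hedged illustration only (the exact set of automatically added ``awx_*`` variables and their values depend on your AWX version and how the job was launched), the added variables look similar to the following:

::

   ---
   # Illustrative values; names are assumptions based on typical AWX job variables
   awx_job_id: 123
   awx_job_launch_type: workflow
   awx_workflow_job_id: 120
   awx_user_name: admin
   awx_inventory_name: Demo Inventory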
|
||||
|
||||
Events related to approvals on workflows display in the Activity Stream (|activity-stream|) with detailed information about the approval requests, if any.
|
||||
|
||||
.. |activity-stream| image:: ../common/images/activitystream.png
|
||||
|
||||
.. .. image:: ../common/images/wf-activity-stream-events.png
|
||||
|
||||
|
||||
Copy a Workflow Template
|
||||
-------------------------------
|
||||
|
||||
AWX allows you to copy a workflow template. If you choose to copy a workflow template, it **does not** copy any associated schedule, notifications, or permissions. Schedules and notifications must be recreated by the user or admin creating the copy of the workflow template. The user copying the workflow template will be granted the admin permission, but no permissions are assigned (copied) to the workflow template.
|
||||
|
||||
1. Access the workflow template that you want to copy from the **Templates** menu on the left navigation bar or while in the Workflow Job Template Details view, scroll to the bottom to access it from a list of templates.
|
||||
|
||||
2. Click the |copy| button.
|
||||
|
||||
.. |copy| image:: ../common/images/copy-button.png
|
||||
|
||||
A new template opens with the name of the template from which you copied and a timestamp.
|
||||
|
||||
.. image:: ../common/images/wf-list-view-copy-example.png
|
||||
|
||||
Select the copied template and replace the contents of the **Name** field with a new name, and provide or modify the entries in the other fields to complete this template.
|
||||
|
||||
3. Click **Save** when done.
|
||||
|
||||
|
||||
.. note::
|
||||
|
||||
If a resource has a related resource that you don't have the right level of permission to, you cannot copy the resource, such as in the case where a project uses a credential that the current user only has *Read* access to. However, for a workflow template, if any of its nodes uses an unauthorized job template, inventory, or credential, the workflow template can still be copied. But in the copied workflow template, the corresponding fields in the workflow template node will be absent.
|
||||
|
||||
|
||||
.. _ug_wf_templates_extravars:
|
||||
|
||||
Extra Variables
|
||||
----------------
|
||||
|
||||
.. index::
|
||||
pair: workflow templates; survey extra variables
|
||||
pair: surveys; extra variables
|
||||
|
||||
.. note::
|
||||
|
||||
``extra_vars`` passed to the job launch API are only honored if one of the following is true:
|
||||
|
||||
- They correspond to variables in an enabled survey
|
||||
- ``ask_variables_on_launch`` is set to True
|
||||
|
||||
When you pass survey variables, they are passed as extra variables (``extra_vars``). This can be tricky, as passing extra variables to a workflow template (as you would do with a survey) can override other variables being passed from the inventory and project.
|
||||
|
||||
For example, say that you have a defined variable for an inventory for ``debug = true``. It is entirely possible that this variable, ``debug = true``, can be overridden in a workflow template survey.
|
||||
|
||||
To ensure that the variables you need to pass are not overridden, ensure they are included by redefining them in the survey. Keep in mind that extra variables can be defined at the inventory, group, and host levels.
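As a purely illustrative sketch of the override described above:

::

   # Defined on the inventory
   ---
   debug: true

   # Answered in the workflow survey and passed as an extra variable at launch
   ---
   debug: false

Because survey answers arrive as ``extra_vars``, the spawned jobs run with ``debug: false`` unless the ``debug`` variable is deliberately kept out of the survey.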
|
||||
|
||||
.. index::
|
||||
pair: workflow templates; job variables
|
||||
single: workflow templates, overview
|
||||
single: workflow templates, hierarchy
|
||||
single: variable precedence
|
||||
single: extra_vars
|
||||
|
||||
The following table notes the behavior (hierarchy) of variable precedence in AWX as it compares to variable precedence in Ansible.
|
||||
|
||||
**Variable Precedence Hierarchy (last listed wins)**
|
||||
|
||||
.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy-Workflows.png
|
||||
179
docs/docsite/rst/userguide/workflows.rst
Normal file
@@ -0,0 +1,179 @@
|
||||
.. _ug_workflows:
|
||||
|
||||
|
||||
Workflows
|
||||
============
|
||||
|
||||
.. index::
|
||||
single: workflows
|
||||
|
||||
Workflows allow you to configure a sequence of disparate job templates (or workflow templates) that may or may not share inventory, playbooks, or permissions. However, workflows have ‘admin’ and ‘execute’ permissions, similar to job templates. A workflow accomplishes the task of tracking the full set of jobs that were part of the release process as a single unit.
|
||||
|
||||
|
||||
Job or workflow templates are linked together using a graph-like structure called nodes. These nodes can be jobs, project syncs, or inventory syncs. A template can be part of different workflows or used multiple times in the same workflow. A copy of the graph structure is saved to a workflow job when you launch the workflow.
|
||||
|
||||
The example below shows a workflow that contains all three, as well as a workflow job template:
|
||||
|
||||
.. image:: ../common/images/wf-node-all-scenarios-wf-in-wf.png
|
||||
|
||||
|
||||
As the workflow runs, jobs are spawned from the node's linked template. Nodes linking to a job template which has prompt-driven fields (``job_type``, ``job_tags``, ``skip_tags``, ``limit``) can contain those fields, and will not be prompted on launch. Job templates with promptable credential and/or inventory, WITHOUT defaults, will not be available for inclusion in a workflow.
|
||||
|
||||
|
||||
Workflow scenarios and considerations
|
||||
----------------------------------------
|
||||
|
||||
Consider the following scenarios for building workflows:
|
||||
|
||||
- A root node is set to ALWAYS by default and is not editable.
|
||||
|
||||
.. image:: ../common/images/wf-root-node-always.png
|
||||
|
||||
- A node can have multiple parents, and children may be linked to any of the states of success, failure, or always. If always, then the state is neither success nor failure. States apply at the node level, not at the workflow job template level. A workflow job will be marked as successful unless it is canceled or encounters an error.
|
||||
|
||||
.. image:: ../common/images/wf-sibling-nodes-all-edge-types.png
|
||||
|
||||
- If you remove a job or workflow template within the workflow, the nodes previously connected to the deleted node are automatically connected upstream and retain their edge type, as in the example below:
|
||||
|
||||
.. image:: ../common/images/wf-node-delete-scenario.png
|
||||
|
||||
- You could have a convergent workflow, where multiple jobs converge into one. In this scenario, any of the jobs or all of them must complete before the next one runs, as shown in the example below:
|
||||
|
||||
.. image:: ../common/images/wf-node-convergence.png
|
||||
|
||||
In the example provided, AWX runs the first two job templates in parallel. When they both finish and succeed as specified, the third, downstream :ref:`convergence node <convergence_node>` will trigger.
|
||||
|
||||
- Prompts for inventory and surveys will apply to workflow nodes in workflow job templates.
|
||||
|
||||
- If you launch from the API, running a ``get`` command displays a list of warnings and highlights missing components. The basic workflow for a workflow job template is illustrated below.
|
||||
|
||||
.. image:: ../common/images/workflow-diagram.png
|
||||
|
||||
- It is possible to launch several workflows simultaneously, and set a schedule for when to launch them. You can set notifications on workflows, such as when a job completes, similar to that of job templates.
|
||||
|
||||
.. note::
|
||||
|
||||
.. include:: ../common/job-slicing-rule.rst
|
||||
|
||||
|
||||
- You can build a recursive workflow, but if AWX detects an error, it will stop at the time the nested workflow attempts to run.
|
||||
|
||||
- Artifacts gathered in jobs in the sub-workflow will be passed to downstream nodes.
|
||||
|
||||
- An inventory can be set at the workflow level, or prompt for inventory on launch.
|
||||
|
||||
- When launched, all job templates in the workflow that have ``ask_inventory_on_launch=true`` will use the workflow level inventory.
|
||||
|
||||
- Job templates that do not prompt for inventory will ignore the workflow inventory and run against their own inventory.
|
||||
|
||||
- If a workflow prompts for inventory, schedules and other workflow nodes may provide the inventory.
|
||||
|
||||
- In a workflow convergence scenario, ``set_stats`` data will be merged in an undefined way, so it is recommended that you set unique keys.
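For example (a hedged sketch, not taken from the example workflow above), two parent nodes that both publish artifacts can avoid colliding keys by namespacing them:

::

   # Task in the playbook run by the first parent node
   - name: Publish the build artifact URL
     set_stats:
       data:
         build_results_url: "https://example.com/artifacts/build/123"

   # Task in the playbook run by the second parent node
   - name: Publish the test artifact URL
     set_stats:
       data:
         test_results_url: "https://example.com/artifacts/test/456"

Because the keys are unique, the convergence node receives both values regardless of the merge order.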
|
||||
|
||||
|
||||
Extra Variables
|
||||
----------------
|
||||
|
||||
.. index::
|
||||
pair: workflows; survey extra variables
|
||||
pair: surveys; extra variables
|
||||
|
||||
Also similar to job templates, workflows use surveys to specify variables to be used in the playbooks in the workflow, called ``extra_vars``. Survey variables are combined with ``extra_vars`` defined on the workflow job template, and saved to the workflow job ``extra_vars``. ``extra_vars`` in the workflow job are combined with job template variables when spawning jobs within the workflow.
|
||||
|
||||
Workflows utilize the same behavior (hierarchy) of variable precedence as Job Templates with the exception of three additional variables. Refer to the Variable Precedence Hierarchy in the :ref:`ug_jobtemplates_extravars` section of the Job Templates chapter of this guide. The three additional variables include:
|
||||
|
||||
.. image:: ../common/images/Architecture-AWX_Variable_Precedence_Hierarchy-Workflows.png
|
||||
|
||||
Workflows included in a workflow will follow the same variable precedence - they will only inherit variables if they are specifically prompted for, or defined as part of a survey.
|
||||
|
||||
In addition to the workflow ``extra_vars``, jobs and workflows run as part of a workflow can inherit variables in the artifacts dictionary of a parent job in the workflow (also combining with ancestors further upstream in its branch). These can be defined by the ``set_stats`` `Ansible module`_.
|
||||
|
||||
.. _`Ansible module`: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/set_stats_module.html
|
||||
|
||||
If you use the ``set_stats`` module in your playbook, you can produce results that can be consumed downstream by another job, for example, notify users as to the success or failure of an integration run. In this example, there are two playbooks that can be combined in a workflow to exercise artifact passing:
|
||||
|
||||
- **invoke_set_stats.yml**: first playbook in the workflow:
|
||||
|
||||
::
|
||||
|
||||
---
|
||||
- hosts: localhost
|
||||
tasks:
|
||||
- name: "Artifact integration test results to the web"
|
||||
local_action: 'shell curl -F "file=@integration_results.txt" https://file.io'
|
||||
register: result
|
||||
|
||||
- name: "Artifact URL of test results to Workflows"
|
||||
set_stats:
|
||||
data:
|
||||
integration_results_url: "{{ (result.stdout|from_json).link }}"
|
||||
|
||||
|
||||
- **use_set_stats.yml**: second playbook in the workflow
|
||||
|
||||
::
|
||||
|
||||
---
|
||||
- hosts: localhost
|
||||
tasks:
|
||||
- name: "Get test results from the web"
|
||||
uri:
|
||||
url: "{{ integration_results_url }}"
|
||||
return_content: true
|
||||
register: results
|
||||
|
||||
- name: "Output test results"
|
||||
debug:
|
||||
msg: "{{ results.content }}"
|
||||
|
||||
|
||||
The ``set_stats`` module processes this workflow as follows:
|
||||
|
||||
1. The contents of an integration results file (example: integration_results.txt below) are first uploaded to the web.
|
||||
|
||||
::
|
||||
|
||||
the tests are passing!
|
||||
|
||||
2. Through the **invoke_set_stats** playbook, ``set_stats`` is then invoked to artifact the URL of the uploaded integration_results.txt into the Ansible variable "integration_results_url".
|
||||
3. The second playbook in the workflow consumes the Ansible extra variable "integration_results_url". It calls out to the web using the ``uri`` module to get the contents of the file uploaded by the previous Job Template Job. Then, it simply prints out the contents of the gotten file.
|
||||
|
||||
.. note::
|
||||
|
||||
For artifacts to work, keep the default setting, ``per_host = False`` in the ``set_stats`` module.
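For instance, stating that default explicitly (an illustrative variant of the ``set_stats`` task above, not a required change) would look like this:

::

   - name: "Artifact URL of test results to Workflows"
     set_stats:
       data:
         integration_results_url: "{{ (result.stdout | from_json).link }}"
       # per_host defaults to false; artifacts are only passed downstream
       # when stats are aggregated rather than kept per host
       per_host: false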
|
||||
|
||||
|
||||
Workflow States
|
||||
----------------
|
||||
|
||||
The workflow job can have the following states (no Failed state):
|
||||
|
||||
- Waiting
|
||||
|
||||
- Running
|
||||
|
||||
- Success (finished)
|
||||
|
||||
- Cancel
|
||||
|
||||
- Error
|
||||
|
||||
- Failed
|
||||
|
||||
In the workflow scheme, canceling a job cancels the branch, while canceling the workflow job cancels the entire workflow.
|
||||
|
||||
|
||||
Role-Based Access Controls
|
||||
-----------------------------
|
||||
|
||||
.. can you verify this info is still valid for 3.4?
|
||||
|
||||
To edit and delete a workflow job template, you must have the admin role. To create a workflow job template, you must be an organization admin or a system admin. However, you can run a workflow job template that contains job templates you don't have permissions for. Similar to projects, organization admins can create a blank workflow and then grant an 'admin_role' to a low-level user, after which they can go about delegating more access and building the graph. You must have execute access to a job template to add it to a workflow job template.
|
||||
|
||||
Other tasks such as the ability to make a duplicate copy and re-launch a workflow can also be performed, depending on what kinds of permissions are granted to a particular user. Generally, you should have permissions to all the resources used in a workflow (like job templates) before relaunching or making a copy.
|
||||
|
||||
.. ^^
|
||||
|
||||
For more information on performing the tasks described in this section, refer to the :ref:`Administration Guide <ag_start>`.
|
||||
|
||||
|
||||