Removed UI-focused user docs from AWX. (#15641)

* Replaced with larger graphic.

* Revert "Replaced with larger graphic."

This reverts commit 1214b00052b43c46c5ee9b2833e61c779884ec1c.

* Removed UI-focused user docs from AWX.

* Fixed indentation for release notes

* Removed/updated image files no longer needed.
Committed by TVo on 2024-11-22 07:43:43 -07:00 via GitHub
parent d2cd4e08c5
commit 790875ceef
746 changed files with 77 additions and 13606 deletions


@@ -8,10 +8,10 @@ from importlib import import_module
sys.path.insert(0, os.path.abspath('./rst/rest_api/_swagger'))
project = u'Ansible AWX'
-copyright = u'2023, Red Hat'
+copyright = u'2024, Red Hat'
author = u'Red Hat'
-pubdateshort = '2023-08-04'
+pubdateshort = '2024-11-22'
pubdate = datetime.strptime(pubdateshort, '%Y-%m-%d').strftime('%B %d, %Y')
# The name for this set of Sphinx documents. If None, it defaults to
@@ -58,19 +58,11 @@ locale_dirs = ['locale/'] # path is example but recommended.
gettext_compact = False # optional.
rst_epilog = """
.. |atqi| replace:: *AWX Quick Installation Guide*
.. |atqs| replace:: *AWX Quick Setup Guide*
.. |atir| replace:: *AWX Installation and Reference Guide*
.. |ata| replace:: *AWX Administration Guide*
.. |atu| replace:: *AWX User Guide*
.. |atumg| replace:: *AWX Upgrade and Migration Guide*
.. |atapi| replace:: *AWX API Guide*
.. |atrn| replace:: *AWX Release Notes*
.. |aa| replace:: Ansible Automation
.. |AA| replace:: Automation Analytics
.. |aap| replace:: Ansible Automation Platform
.. |ab| replace:: ansible-builder
.. |ap| replace:: Automation Platform
.. |at| replace:: AWX
.. |At| replace:: AWX
.. |ah| replace:: Automation Hub


@@ -1,32 +0,0 @@
Changing the Default Timeout for Authentication
=================================================
.. index::
pair: troubleshooting; authentication timeout
pair: authentication timeout; changing the default
single: authentication token
single: authentication expiring
single: log
single: login timeout
single: timeout login
pair: timeout; session
The default length of time, in seconds, that your supplied token is valid can be changed in the System Settings screen of the AWX user interface:
1. Click **Settings** from the left navigation bar.
2. Click **Miscellaneous Authentication settings** under the System settings.
3. Click **Edit**.
4. Enter the timeout period in seconds in the **Idle Time Force Log Out** text field.
.. image:: ../common/images/configure-awx-system-timeout.png
:alt: Miscellaneous Authentication settings showing the Idle Time Force Logout field where you can adjust the token validity period.
5. Click **Save** to apply your changes.
.. note::
If you are accessing AWX directly and your session does not persist (that is, you have to keep logging in over and over), try clearing your web browser's cache. In situations like this, the authentication token has often been cached in the browser session and must be cleared.
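If you prefer to change the same setting without the UI, the **Idle Time Force Log Out** field maps to the ``SESSION_COOKIE_AGE`` setting, which can also be updated through the settings API. The following is only a minimal sketch; the hostname, credentials, and timeout value are placeholders for the example.

.. code-block:: bash

   # Set the idle session timeout to 30 minutes (1800 seconds).
   # Hostname and credentials are placeholders; adjust for your environment.
   curl -sk -u admin:password \
        -H "Content-Type: application/json" \
        -X PATCH https://awx.example.com/api/v2/settings/authentication/ \
        -d '{"SESSION_COOKIE_AGE": 1800}'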


@@ -1,220 +0,0 @@
.. _ag_clustering:
Clustering
============
.. index::
pair: redundancy; instance groups
pair: redundancy; clustering
Clustering shares load between hosts. Each instance should be able to act as an entry point for UI and API access. This enables AWX administrators to use load balancers in front of as many instances as they wish and maintain good data visibility.
.. note::
Load balancing is optional, and it is entirely possible to have ingress on one or all instances as needed. The ``CSRF_TRUSTED_ORIGIN`` setting may be required if you are using AWX behind a load balancer. See :ref:`ki_csrf_trusted_origin_setting` for more detail.
Each instance should be able to join the AWX cluster and expand its ability to execute jobs. This is a simple system where jobs can and will run anywhere rather than being directed where to run. Clustered instances can also be grouped into different pools/queues, called :ref:`ag_instance_groups`.
Setup Considerations
---------------------
.. index::
single: clustering; setup considerations
pair: clustering; PostgreSQL
This section covers initial setup of clusters only. For upgrading an existing cluster, refer to the |atumg|.
Important considerations to note in the new clustering environment:
- PostgreSQL is still a standalone instance and is not clustered. AWX does not manage replica configuration or database failover (if the user configures standby replicas).
- When spinning up a cluster, the database node should be a standalone server, and PostgreSQL should not be installed on one of the AWX nodes.
- PgBouncer is not recommended for connection pooling with AWX. Currently, AWX relies heavily on ``pg_notify`` for sending messages across various components, and therefore, PgBouncer cannot readily be used in transaction pooling mode.
- The maximum supported instances in a cluster is 20.
- All instances should be reachable from all other instances and they should be able to reach the database. It is also important for the hosts to have a stable address and/or hostname (depending on how the AWX host is configured).
- All instances must be geographically collocated, with reliable low-latency connections between instances.
- For purposes of upgrading to a clustered environment, your primary instance must be part of the ``default`` group in the inventory *AND* it needs to be the first host listed in the ``default`` group.
- Manual projects must be manually synced to all instances by the customer, and updated on all instances at once.
- The ``inventory`` file for platform deployments should be saved/persisted. If new instances are to be provisioned, the passwords and configuration options, as well as host names, must be made available to the installer.
Scaling the Web and Task pods independently
--------------------------------------------
You can scale replicas up or down for each deployment by using the ``web_replicas`` or ``task_replicas`` keys, respectively. You can also scale all pods across both deployments by using ``replicas``. The logic behind these CRD keys is as follows, with a short example after the list:
- If you specify the ``replicas`` field, the key passed will scale both the ``web`` and ``task`` replicas to the same number.
- If ``web_replicas`` or ``task_replicas`` is ever passed, it will override the existing ``replicas`` field on the specific deployment with the new key value.
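As a quick illustration, the replica keys can be set by patching the AWX custom resource directly. This is only a sketch; the resource name ``awx-demo`` and the ``awx`` namespace are assumptions for the example.

.. code-block:: bash

   # Scale the web and task deployments independently by patching the AWX CR.
   # Resource name and namespace are examples; substitute your own.
   kubectl patch awx awx-demo -n awx --type merge \
     -p '{"spec": {"web_replicas": 3, "task_replicas": 2}}'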
These new replicas can be constrained in a similar manner to previous single deployments by appending the particular deployment name in front of the constraint used. More about those new constraints can be found below in the :ref:`ag_assign_pods_to_nodes` section.
.. _ag_assign_pods_to_nodes:
Assigning AWX pods to specific nodes
-------------------------------------
You can constrain the AWX pods created by the operator to run on a certain subset of nodes. ``node_selector`` and ``postgres_selector`` constrain the AWX pods to run only on the nodes that match all the specified key/value pairs. ``tolerations`` and ``postgres_tolerations`` allow the AWX pods to be scheduled onto nodes with matching taints. The ability to specify ``topologySpreadConstraints`` is also available through ``topology_spread_constraints``. If you want to use affinity rules for your AWX pod, you can use the ``affinity`` option.
If you want to constrain the web and task pods individually, you can do so by specifying the deployment type before the specific setting. For example, specifying ``task_tolerations`` will allow the AWX task pod to be scheduled onto nodes with matching taints.
+----------------------------------+------------------------------------------+----------+
| Name | Description | Default |
+----------------------------------+------------------------------------------+----------+
| postgres_image | Path of the image to pull | postgres |
+----------------------------------+------------------------------------------+----------+
| postgres_image_version | Image version to pull | 13 |
+----------------------------------+------------------------------------------+----------+
| node_selector | AWX pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| web_node_selector | AWX web pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| task_node_selector | AWX task pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| topology_spread_constraints | AWX pods' topologySpreadConstraints | '' |
+----------------------------------+------------------------------------------+----------+
| web_topology_spread_constraints | AWX web pods' topologySpreadConstraints | '' |
+----------------------------------+------------------------------------------+----------+
| task_topology_spread_constraints | AWX task pods' topologySpreadConstraints | '' |
+----------------------------------+------------------------------------------+----------+
| affinity | AWX pods' affinity rules | '' |
+----------------------------------+------------------------------------------+----------+
| web_affinity | AWX web pods' affinity rules | '' |
+----------------------------------+------------------------------------------+----------+
| task_affinity | AWX task pods' affinity rules | '' |
+----------------------------------+------------------------------------------+----------+
| tolerations | AWX pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
| web_tolerations | AWX web pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
| task_tolerations | AWX task pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
| annotations | AWX pods' annotations | '' |
+----------------------------------+------------------------------------------+----------+
| postgres_selector | Postgres pods' nodeSelector | '' |
+----------------------------------+------------------------------------------+----------+
| postgres_tolerations | Postgres pods' tolerations | '' |
+----------------------------------+------------------------------------------+----------+
An example customization could be:
::
   ---
   spec:
     ...
     node_selector: |
       disktype: ssd
       kubernetes.io/arch: amd64
       kubernetes.io/os: linux
     topology_spread_constraints: |
       - maxSkew: 100
         topologyKey: "topology.kubernetes.io/zone"
         whenUnsatisfiable: "ScheduleAnyway"
         labelSelector:
           matchLabels:
             app.kubernetes.io/name: "<resourcename>"
     tolerations: |
       - key: "dedicated"
         operator: "Equal"
         value: "AWX"
         effect: "NoSchedule"
     task_tolerations: |
       - key: "dedicated"
         operator: "Equal"
         value: "AWX_task"
         effect: "NoSchedule"
     postgres_selector: |
       disktype: ssd
       kubernetes.io/arch: amd64
       kubernetes.io/os: linux
     postgres_tolerations: |
       - key: "dedicated"
         operator: "Equal"
         value: "AWX"
         effect: "NoSchedule"
     affinity:
       nodeAffinity:
         preferredDuringSchedulingIgnoredDuringExecution:
         - weight: 1
           preference:
             matchExpressions:
             - key: another-node-label-key
               operator: In
               values:
               - another-node-label-value
               - another-node-label-value
       podAntiAffinity:
         preferredDuringSchedulingIgnoredDuringExecution:
         - weight: 100
           podAffinityTerm:
             labelSelector:
               matchExpressions:
               - key: security
                 operator: In
                 values:
                 - S2
             topologyKey: topology.kubernetes.io/zone
Status and Monitoring via Browser API
--------------------------------------
AWX itself reports as much status as it can via the Browsable API at ``/api/v2/ping`` in order to provide validation of the health of the cluster, including:
- The instance servicing the HTTP request
- The timestamps of the last heartbeat of all other instances in the cluster
- Instance Groups and Instance membership in those groups
View more details about Instances and Instance Groups, including running jobs and membership information at ``/api/v2/instances/`` and ``/api/v2/instance_groups/``.
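For example, a quick check of the ping endpoint from the command line (the hostname and credentials below are placeholders):

.. code-block:: bash

   # Cluster health summary; the ping endpoint does not require authentication.
   curl -sk https://awx.example.com/api/v2/ping/ | python3 -m json.tool

   # Instance and instance group details require an authenticated user.
   curl -sk -u admin:password https://awx.example.com/api/v2/instances/ | python3 -m json.tool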
Instance Services and Failure Behavior
----------------------------------------
Each AWX instance is made up of several different services working collaboratively:
- HTTP Services - This includes the AWX application itself as well as external web services.
- Callback Receiver - Receives job events from running Ansible jobs.
- Dispatcher - The worker queue that processes and runs all jobs.
- Redis - This key-value store is used as a queue for event data propagated from ansible-playbook to the application.
- Rsyslog - The log processing service used to deliver logs to various external logging services.
AWX is configured in such a way that if any of these services or their components fail, then all services are restarted. If these fail sufficiently often in a short span of time, then the entire instance will be placed offline in an automated fashion in order to allow remediation without causing unexpected behavior.
Job Runtime Behavior
---------------------
The way jobs are run and reported to a 'normal' user of AWX does not change. On the system side, some differences are worth noting:
- When a job is submitted from the API interface it gets pushed into the dispatcher queue. Each AWX instance will connect to and receive jobs from that queue using a particular scheduling algorithm. Any instance in the cluster is just as likely to receive the work and execute the task. If an instance fails while executing jobs, then the work is marked as permanently failed.
.. image:: ../common/images/clustering-visual.png
:alt: An illustration depicting job distribution in an AWX cluster.
- Project updates run successfully on any instance that could potentially run a job. Projects will sync themselves to the correct version on the instance immediately prior to running the job. If the needed revision is already locally checked out and Galaxy or Collections updates are not needed, then a sync may not be performed.
- When the sync happens, it is recorded in the database as a project update with a ``launch_type = sync`` and ``job_type = run``. Project syncs will not change the status or version of the project; instead, they will update the source tree *only* on the instance where they run.
- If updates are needed from Galaxy or Collections, a sync is performed that downloads the required roles, consuming additional space in ``/tmp``. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
Job Runs
^^^^^^^^^^^
By default, when a job is submitted to the AWX queue, it can be picked up by any of the workers. However, you can control where a particular job runs, such as restricting the instances on which a job runs.
To support temporarily taking an instance offline, each instance has an ``enabled`` property. When this property is disabled, no new jobs will be assigned to that instance. Existing jobs will finish, but no new work will be assigned.
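The ``enabled`` property can also be toggled through the API; a minimal sketch (the instance id, hostname, and credentials are placeholders):

.. code-block:: bash

   # Take instance 7 out of rotation; running jobs finish, no new jobs are assigned.
   curl -sk -u admin:password -H "Content-Type: application/json" \
        -X PATCH https://awx.example.com/api/v2/instances/7/ \
        -d '{"enabled": false}'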


@@ -1,89 +0,0 @@
.. _ag_configure_awx:
AWX Configuration
~~~~~~~~~~~~~~~~~~~
.. index::
single: configure AWX
.. _configure_awx_overview:
You can configure various AWX settings within the Settings screen in the following tabs:
Each tab contains fields with a **Reset** button, allowing you to revert any value entered back to the default value. **Reset All** allows you to revert all the values to their factory default values.
**Save** applies changes you make, but it does not exit the edit dialog. To return to the Settings screen, click **Settings** from the left navigation bar or use the breadcrumbs at the top of the current view.
.. _configure_awx_jobs:
Jobs
=========
.. index::
single: jobs
pair: configuration; jobs
The Jobs tab allows you to configure the types of modules that are allowed to be used by AWX's Ad Hoc Commands feature, set limits on the number of jobs that can be scheduled, define their output size, and other details pertaining to working with Jobs in AWX.
1. Click **Settings** from the left navigation bar and select **Jobs settings** from the Settings screen.
2. Set the configurable options from the fields provided. Click the tooltip |help| icon next to a field for additional information or details about it. Refer to the :ref:`ug_galaxy` section for details about configuring Galaxy settings.
.. note::
The values for all the timeouts are in seconds.
.. image:: ../common/images/configure-awx-jobs.png
:alt: Screenshot of the AWX job configuration settings.
3. Click **Save** to apply the settings or **Cancel** to abandon the changes.
.. _configure_awx_system:
System
======
.. index::
pair: configuration; system
The System tab allows you to define the base URL for the AWX host, configure alerts, enable activity capturing, control visibility of users, enable certain AWX features and functionality through a license file, and configure logging aggregation options.
1. From the left navigation bar, click **Settings**.
2. The right side of the Settings window is a set of configurable System settings. Select from the following options:
- **Miscellaneous System settings**: enable activity streams, specify the default execution environment, define the base URL for the AWX host, enable AWX administration alerts, set user visibility, define analytics, specify usernames and passwords, and configure proxies.
- **Miscellaneous Authentication settings**: configure options associated with authentication methods and sessions (timeout, number of sessions logged in, tokens).
- **Logging settings**: configure logging options based on the type you choose:
.. image:: ../common/images/configure-awx-system-logging-types.png
:alt: Logging settings shown with the list of options for Logging Aggregator Types.
For more information about each of the logging aggregation types, refer to the :ref:`ag_logging` section of the |ata|.
3. Set the configurable options from the fields provided. Click the tooltip |help| icon next to a field for additional information or details about it. Below is an example of the System settings window.
.. |help| image:: ../common/images/tooltips-icon.png
.. image:: ../common/images/configure-awx-system.png
:alt: Miscellaneous System settings window showing all possible configurable options.
.. note::
The **Allow External Users to Create Oauth2 Tokens** setting is disabled by default. This ensures external users cannot *create* their own tokens. If you enable and then disable it, any tokens created by external users in the meantime will still exist and are not automatically revoked.
4. Click **Save** to apply the settings or **Cancel** to abandon the changes.
.. _configure_awx_ui:
User Interface
================
.. index::
pair: configuration; UI
pair: configuration; data collection
pair: configuration; custom logo
pair: configuration; custom login message
pair: logo; custom
pair: login message; custom
.. include:: ../common/logos_branding.rst


@@ -1,465 +0,0 @@
.. _ag_ext_exe_env:
Container and Instance Groups
==================================
.. index::
pair: container; groups
pair: instance; groups
AWX allows you to execute jobs via Ansible playbook runs directly on a member of the cluster, or in a namespace of an OpenShift cluster with the necessary service account provisioned, called a Container Group. You can execute jobs in a container group only as-needed per playbook. For more information, see :ref:`ag_container_groups` towards the end of this section.
For |ees|, see :ref:`ug_execution_environments` in the |atu|.
.. _ag_instance_groups:
Instance Groups
------------------
Instances can be grouped into one or more Instance Groups. Instance groups can be assigned to one or more of the resources listed below.
- Organizations
- Inventories
- Job Templates
When a job associated with one of the resources executes, it will be assigned to the instance group associated with the resource. During the execution process, instance groups associated with Job Templates are checked before those associated with Inventories. Similarly, instance groups associated with Inventories are checked before those associated with Organizations. Thus, Instance Group assignments for the three resources form a hierarchy: Job Template **>** Inventory **>** Organization.
Here are some of the things to consider when working with instance groups:
- You may optionally define other groups and group instances in those groups. These groups should be prefixed with ``instance_group_``. Instances are required to be in the ``awx`` or ``execution_nodes`` group alongside other ``instance_group_`` groups. In a clustered setup, at least one instance **must** be present in the ``awx`` group, which will appear as ``controlplane`` in the API instance groups. See :ref:`ag_awx_group_policies` for example scenarios.
- A ``default`` API instance group is automatically created with all nodes capable of running jobs. Technically, it is like any other instance group but if a specific instance group is not associated with a specific resource, then job execution will always fall back to the ``default`` instance group. The ``default`` instance group always exists (it cannot be deleted nor renamed).
- Do not create a group named ``instance_group_default``.
- Do not name any instance the same as a group name.
.. _ag_awx_group_policies:
``awx`` group policies
^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: policies; awx groups
Use the following criteria when defining nodes:
- nodes in the ``awx`` group can define ``node_type`` hostvar to be ``hybrid`` (default) or ``control``
- nodes in the ``execution_nodes`` group can define ``node_type`` hostvar to be ``execution`` (default) or ``hop``
You can define custom groups in the inventory file by naming groups with ``instance_group_*`` where ``*`` becomes the name of the group in the API. Or, you can create custom instance groups in the API after the install has finished.
The current behavior expects a member of an ``instance_group_*`` group to be part of the ``awx`` or ``execution_nodes`` group. Consider this example scenario:
::
[awx]
126-addr.tatu.home ansible_host=192.168.111.126 node_type=control
[awx:vars]
peers=execution_nodes
[execution_nodes]
[instance_group_test]
110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928
As a result of running the installer, you will get the error below:
.. code-block:: bash
TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] ***
fatal: [126-addr.tatu.home -> localhost]: FAILED! => {"msg": "The host '110-addr.tatu.home' is not present in either [awx] or [execution_nodes]"}
To fix this, you could move the box ``110-addr.tatu.home`` to the ``execution_nodes`` group.
::
[awx]
126-addr.tatu.home ansible_host=192.168.111.126 node_type=control
[awx:vars]
peers=execution_nodes
[execution_nodes]
110-addr.tatu.home ansible_host=192.168.111.110 receptor_listener_port=8928
[instance_group_test]
110-addr.tatu.home
This results in:
.. code-block:: bash
TASK [ansible.automation_platform_installer.check_config_static : Validate mesh topology] ***
ok: [126-addr.tatu.home -> localhost] => {"changed": false, "mesh": {"110-addr.tatu.home": {"node_type": "execution", "peers": [], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": true, "receptor_listener_port": 8928, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}, "126-addr.tatu.home": {"node_type": "control", "peers": ["110-addr.tatu.home"], "receptor_control_filename": "receptor.sock", "receptor_control_service_name": "control", "receptor_listener": false, "receptor_listener_port": 27199, "receptor_listener_protocol": "tcp", "receptor_log_level": "info"}}}
Upon upgrading from older versions of AWX, the legacy ``instance_group_`` member will most likely have the AWX code installed, which would cause that node to be placed in the ``awx`` group.
Configuring Instance Groups from the API
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: instance group; configure
pair: instance group; API
Instance groups can be created by POSTing to ``/api/v2/instance_groups`` as a system administrator.
Once created, instances can be associated with an instance group with:
.. code-block:: bash
HTTP POST /api/v2/instance_groups/x/instances/ {'id': y}
An instance that is added to an instance group will automatically reconfigure itself to listen on the group's work queue. See the following section, :ref:`ag_instance_group_policies`, for more details.
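Expressed as concrete commands, this could look like the following sketch; the hostname, credentials, group name, and ids are placeholders.

.. code-block:: bash

   # Create an instance group, then associate instance id 3 with it
   # (assuming the new group was created with id 4).
   curl -sk -u admin:password -H "Content-Type: application/json" \
        -X POST https://awx.example.com/api/v2/instance_groups/ \
        -d '{"name": "branch-offices"}'

   curl -sk -u admin:password -H "Content-Type: application/json" \
        -X POST https://awx.example.com/api/v2/instance_groups/4/instances/ \
        -d '{"id": 3}'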
.. _ag_instance_group_policies:
Instance group policies
^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: policies; instance groups
pair: clustering; instance group policies
You can configure AWX instances to automatically join Instance Groups when they come online by defining a :term:`policy`. These policies are evaluated for every new instance that comes online.
Instance Group Policies are controlled by three optional fields on an ``Instance Group``:
- ``policy_instance_percentage``: This is a number between 0 - 100. It guarantees that this percentage of active AWX instances will be added to this Instance Group. As new instances come online, if the number of Instances in this group relative to the total number of instances is less than the given percentage, then new ones will be added until the percentage condition is satisfied.
- ``policy_instance_minimum``: This policy attempts to keep at least this many instances in the Instance Group. If the number of available instances is lower than this minimum, then all instances will be placed in this Instance Group.
- ``policy_instance_list``: This is a fixed list of instance names to always include in this Instance Group.
The Instance Groups list view from the |at| User Interface provides a summary of the capacity levels for each instance group according to instance group policies:
|Instance Group policy example|
.. |Instance Group policy example| image:: ../common/images/instance-groups_list_view.png
:alt: Instance Group list view with example instance group and container groups.
See :ref:`ug_instance_groups_create` for further detail.
Notable policy considerations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
- ``policy_instance_percentage`` and ``policy_instance_minimum`` both set minimum allocations. The rule that results in more instances assigned to the group will take effect. For example, if you have a ``policy_instance_percentage`` of 50% and a ``policy_instance_minimum`` of 2 and you start 6 instances, 3 of them would be assigned to the Instance Group. If you reduce the number of total instances in the cluster to 2, then both of them would be assigned to the Instance Group to satisfy ``policy_instance_minimum``. This way, you can set a lower bound on the amount of available resources.
- Policies do not actively prevent instances from being associated with multiple Instance Groups, but this can effectively be achieved by making the percentages add up to 100. If you have 4 instance groups, assign each a percentage value of 25 and the instances will be distributed among them with no overlap.
Manually pinning instances to specific groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: pinning; instance groups
pair: clustering; pinning
If you have a special instance which needs to be exclusively assigned to a specific Instance Group but don't want it to automatically join other groups via "percentage" or "minimum" policies:
1. Add the instance to one or more Instance Groups' ``policy_instance_list``
2. Update the instance's ``managed_by_policy`` property to be ``False``.
This will prevent the Instance from being automatically added to other groups based on percentage and minimum policy; it will only belong to the groups you've manually assigned it to:
.. code-block:: bash
HTTP PATCH /api/v2/instance_groups/N/
{
"policy_instance_list": ["special-instance"]
}
HTTP PATCH /api/v2/instances/X/
{
"managed_by_policy": False
}
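As concrete API calls, the same pinning could be done like this; the ids, hostname, and credentials are placeholders:

.. code-block:: bash

   # Pin "special-instance" to instance group 5 and stop policies from moving it.
   curl -sk -u admin:password -H "Content-Type: application/json" \
        -X PATCH https://awx.example.com/api/v2/instance_groups/5/ \
        -d '{"policy_instance_list": ["special-instance"]}'

   curl -sk -u admin:password -H "Content-Type: application/json" \
        -X PATCH https://awx.example.com/api/v2/instances/10/ \
        -d '{"managed_by_policy": false}'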
.. _ag_instance_groups_job_runtime_behavior:
Job Runtime Behavior
^^^^^^^^^^^^^^^^^^^^^^
When you run a job associated with an instance group, some behaviors worth noting are:
- If a cluster is divided into separate instance groups, then the behavior is similar to the cluster as a whole. If two instances are assigned to a group then either one is just as likely to receive a job as any other in the same group.
- As AWX instances are brought online, it effectively expands the work capacity of the system. If those instances are also placed into instance groups, then they also expand that group's capacity. If an instance is performing work and it is a member of multiple groups, then capacity will be reduced from all groups for which it is a member. De-provisioning an instance will remove capacity from the cluster wherever that instance was assigned.
.. note::
Not all instances are required to be provisioned with an equal capacity.
.. _ag_instance_groups_control_where_job_runs:
Control Where a Job Runs
^^^^^^^^^^^^^^^^^^^^^^^^^
If any of the job template, inventory, or organization has instance groups associated with them, a job run from that job template will not be eligible for the default behavior. That means that if all of the instances inside of the instance groups associated with these 3 resources are out of capacity, the job will remain in the pending state until capacity becomes available.
The order of preference in determining which instance group to submit the job to is as follows:
1. job template
2. inventory
3. organization (by way of project)
If instance groups are associated with the job template, and all of these are at capacity, then the job will be submitted to instance groups specified on inventory, and then organization. Jobs should execute in those groups in preferential order as resources are available.
The global ``default`` group can still be associated with a resource, just like any of the custom instance groups defined in the playbook. This can be used to specify a preferred instance group on the job template or inventory, but still allow the job to be submitted to any instance if those are out of capacity.
As an example, by associating ``group_a`` with a Job Template and also associating the ``default`` group with its inventory, you allow the ``default`` group to be used as a fallback in case ``group_a`` gets out of capacity.
In addition, it is possible to not associate an instance group with one resource but designate another resource as the fallback. For example, you can omit the instance group on a job template and have it fall back to the inventory's and/or the organization's instance group.
This presents two other great use cases:
1. Associating instance groups with an inventory (omitting assigning the job template to an instance group) will allow the user to ensure that any playbook run against a specific inventory will run only on the group associated with it. This can be especially useful in the situation where only those instances have a direct link to the managed nodes.
2. An administrator can assign instance groups to organizations. This effectively allows the administrator to segment out the entire infrastructure and guarantee that each organization has capacity to run jobs without interfering with any other organization's ability to run jobs.
Likewise, an administrator could assign multiple groups to each organization as desired, as in the following scenario:
- There are three instance groups: A, B, and C. There are two organizations: Org1 and Org2.
- The administrator assigns group A to Org1, group B to Org2, and then assigns group C to both Org1 and Org2 as an overflow for any extra capacity that may be needed.
- The organization administrators are then free to assign inventory or job templates to whichever group they want (or just let them inherit the default order from the organization).
|Instance Group example|
.. |Instance Group example| image:: ../common/images/instance-groups-scenarios.png
:alt: Illustration showing grouping scenarios.
Arranging resources in this way offers a lot of flexibility. Also, you can create instance groups with only one instance, thus allowing you to direct work towards a very specific Host in the AWX cluster.
.. _ag_instancegrp_cpacity:
Instance group capacity limits
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: instance groups; capacity
pair: instance groups; limits
pair: instance groups; forks
pair: instance groups; jobs
Sometimes there is external business logic which may drive the desire to limit the concurrency of jobs sent to an instance group, or the maximum number of forks to be consumed.
For traditional instances and instance groups, there could be a desire to allow two organizations to run jobs on the same underlying instances, but limit each organization's total number of concurrent jobs. This can be achieved by creating an instance group for each organization and assigning the value for ``max_concurrent_jobs``.
For container groups, AWX is generally not aware of the resource limits of the OpenShift cluster. There may be limits set on the number of pods on a namespace, or only resources available to schedule a certain number of pods at a time if no auto-scaling is in place. Again, in this case, we can adjust the value for ``max_concurrent_jobs``.
Another parameter available is ``max_forks``. This provides additional flexibility for capping the capacity consumed on an instance group or container group. This may be used if jobs with a wide variety of inventory sizes and "forks" values are being run. This way, you can limit an organization to run up to 10 jobs concurrently, but consume no more than 50 forks at a time.
::
max_concurrent_jobs: 10
max_forks: 50
If 10 jobs that use 5 forks each are run, an 11th job will wait until one of these finishes to run on that group (or be scheduled on a different group with capacity).
If 2 jobs are running with 20 forks each, then a 3rd job with a ``task_impact`` of 11 or more will wait until one of these finishes to run on that group (or be scheduled on a different group with capacity).
For container groups, using the ``max_forks`` value is useful given that all jobs are submitted using the same ``pod_spec`` with the same resource requests, irrespective of the "forks" value of the job. The default ``pod_spec`` sets requests and not limits, so the pods can "burst" above their requested value without being throttled or reaped. By setting the ``max_forks`` value, you can help prevent a scenario where too many jobs with large forks values get scheduled concurrently and cause the OpenShift nodes to be oversubscribed with multiple pods using more resources than their requested value.
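A minimal sketch of applying both caps to an existing instance group through the API (the group id, hostname, and credentials are placeholders):

.. code-block:: bash

   # Cap an instance group at 10 concurrent jobs and 50 forks in total.
   curl -sk -u admin:password -H "Content-Type: application/json" \
        -X PATCH https://awx.example.com/api/v2/instance_groups/4/ \
        -d '{"max_concurrent_jobs": 10, "max_forks": 50}'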
To set the maximum values for the concurrent jobs and forks in an instance group, see :ref:`ug_instance_groups_create` in the |atu|.
.. _ag_instancegrp_deprovision:
Deprovision Instance Groups
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: groups; deprovisioning
Re-running the setup playbook does not automatically deprovision instances since clusters do not currently distinguish between an instance that was taken offline intentionally or due to failure. Instead, shut down all services on the AWX instance and then run the deprovisioning tool from any other instance:
#. Shut down the instance or stop the service with the command, ``automation-awx-service stop``.
#. Run the deprovision command ``$ awx-manage deprovision_instance --hostname=<name used in inventory file>`` from another instance to remove it from the AWX cluster registry.
Example: ``awx-manage deprovision_instance --hostname=hostB``
Similarly, deprovisioning instances in AWX does not automatically deprovision or remove instance groups, even though re-provisioning will often cause these to be unused. They may still show up in API endpoints and stats monitoring. These groups can be removed with the following command:
Example: ``awx-manage unregister_queue --queuename=<name>``
Removing an instance's membership from an instance group in the inventory file and re-running the setup playbook does not ensure the instance won't be added back to a group. To be sure that an instance will not be added back to a group, remove it via the API and also remove it from your inventory file, or you can stop defining instance groups in the inventory file altogether. You can also manage instance group topology through the |at| User Interface. For more information on managing instance groups in the UI, refer to :ref:`Instance Groups <ug_instance_groups>` in the |atu|.
.. _ag_container_groups:
Container Groups
-----------------
.. index::
single: container groups
pair: containers; instance groups
AWX supports :term:`Container Groups`, which allow you to execute jobs in pods on Kubernetes (k8s) or OpenShift clusters. Container groups act as a pool of resources within a virtual environment. These pods are created on-demand and only exist for the duration of the playbook run. This is known as the ephemeral execution model and ensures a clean environment for every job run.
In some cases, it is desirable to have container groups be "always-on", which is configured through the creation of an instance.
Container groups are different from |ees| in that |ees| are container images and do not use a virtual environment. See :ref:`ug_execution_environments` in the |atu| for further detail.
Create a container group
^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. include:: ../common/get-creds-from-service-account.rst
To create a container group:
1. Use the AWX user interface to create an :ref:`ug_credentials_ocp_k8s` credential that will be used with your container group. See :ref:`ug_credentials_add` in the |atu| for detail.
2. Create a new container group by navigating to the Instance Groups configuration window by clicking **Instance Groups** from the left navigation bar.
3. Click the **Add** button and select **Create Container Group**.
|IG - create new CG|
.. |IG - create new CG| image:: ../common/images/instance-group-create-new-cg.png
:alt: Create new container group form.
4. Enter a name for your new container group and select the credential previously created to associate it to the container group.
.. _ag_customize_pod_spec:
Customize the pod spec
^^^^^^^^^^^^^^^^^^^^^^^^
AWX provides a simple default pod specification; however, you can provide a custom YAML (or JSON) document that overrides it. This field uses any custom fields (for example, ``ImagePullSecrets``) that can be "serialized" as valid pod JSON or YAML. A full list of options can be found in the `OpenShift documentation <https://docs.openshift.com/online/pro/architecture/core_concepts/pods_and_services.html>`_.
To customize the pod spec, check the **Customize pod specification** option to enable and expand the **Custom pod spec** field where you specify the namespace and provide additional customizations as needed.
|IG - CG customize pod|
.. |IG - CG customize pod| image:: ../common/images/instance-group-customize-cg-pod.png
:alt: Create new container group form with the option to customize the pod spec.
Click **Expand** to view the entire customization window.
.. image:: ../common/images/instance-group-customize-cg-pod-expanded.png
:alt: The expanded view for customizing the pod spec.
.. note::
The image used at job launch time is determined by which |ee| is associated with the job. If a Container Registry credential is associated with the |ee|, then AWX will attempt to make an ``ImagePullSecret`` to pull the image. If you prefer not to give the service account permission to manage secrets, you must pre-create the ``ImagePullSecret`` and specify it on the pod spec, and omit any credential from the |ee| used.
.. tip::
In order to override DNS/host entries, use the ``hostAliases`` attribute on the pod spec. When the pod is created, these entries will be added to ``/etc/hosts`` in the container running the job.
::
   spec:
     hostAliases:
     - ip: "127.0.0.1"
       hostnames:
       - "foo.local"
For more information, refer to Kubernetes' documentation on `Adding additional entries with hostAliases <https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/#adding-additional-entries-with-hostaliases>`_.
Once the container group is successfully created, the **Details** tab of the newly created container group remains, which allows you to review and edit your container group information. This is the same menu that is opened if the Edit (|edit-button|) button is clicked from the **Instance Group** link. You can also edit **Instances** and review **Jobs** associated with this instance group.
.. |edit-button| image:: ../common/images/edit-button.png
:alt: Edit button.
|IG - example CG successfully created|
.. |IG - example CG successfully created| image:: ../common/images/instance-group-example-cg-successfully-created.png
:alt: Example of the successfully created instance group as shown in the Jobs tab of the Instance groups window.
Container groups and instance groups are labeled accordingly.
.. note::
Using a custom pod spec may cause issues on upgrades if the default ``pod_spec`` changes. Since any manifest can be applied to any namespace, with the namespace specified separately, most likely you will only need to override the namespace. Similarly, pinning a default image for different releases of the platform to different versions of the default job runner container is tricky. If the default image is specified in the pod spec, then upgrades do not pick up the new default changes that are made to the default pod spec.
Verify container group functions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To verify the deployment and termination of your container:
1. Create a mock inventory and associate the container group to it by populating the name of the container group in the **Instance Group** field. See :ref:`ug_inventories_add` in the |atu| for detail.
|Dummy inventory|
.. |Dummy inventory| image:: ../common/images/inventories-create-new-cg-test-inventory.png
:alt: Example of creating a new container group test inventory.
2. Create "localhost" host in inventory with variables:
::
{'ansible_host': '127.0.0.1', 'ansible_connection': 'local'}
|Inventory with localhost|
.. |Inventory with localhost| image:: ../common/images/inventories-create-new-cg-test-localhost.png
:alt: The new container group test inventory showing the populated variables.
3. Launch an ad hoc job against the localhost using the *ping* or *setup* module. Even though the **Machine Credential** field is required, it does not matter which one is selected for this simple test.
|Launch inventory with localhost|
.. |Launch inventory with localhost| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost.png
.. image:: ../common/images/inventories-launch-adhoc-cg-test-localhost2.png
:alt: Launching a Ping adhoc command on the newly created inventory with localhost.
You can see in the job details view that the container was reached successfully using the ad hoc job.
|Inventory with localhost ping success|
.. |Inventory with localhost ping success| image:: ../common/images/inventories-launch-adhoc-cg-test-localhost-success.png
:alt: Jobs output view showing a successfully run ad hoc job.
If you have an OpenShift UI, you can see pods appear and disappear as they deploy and terminate. Alternatively, you can use the CLI to perform a ``get pod`` operation on your namespace to watch these same events occurring in real-time.
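For example, from the command line (the namespace is a placeholder):

.. code-block:: bash

   # Watch container group pods being created and reaped as jobs run.
   kubectl get pods -n awx-jobs --watch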
View container group jobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When you run a job associated with a container group, you can see the details of that job in its **Details** view, along with its associated container group and the execution environment that spun up.
|IG - instances jobs|
.. |IG - instances jobs| image:: ../common/images/instance-group-job-details-with-cgs.png
:alt: Example Job details window showing the associated execution environment and container group.
Kubernetes API failure conditions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When running a container group and the Kubernetes API responds that the resource quota has been exceeded, AWX keeps the job in the pending state. Other failures result in a traceback in the **Error Details** field showing the failure reason, similar to the following example:
::
Error creating pod: pods is forbidden: User "system: serviceaccount: aap:example" cannot create resource "pods" in API group "" in the namespace "aap"
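To confirm whether a namespace quota or permission issue is the cause, you can inspect the namespace directly; the namespace ``aap`` matches the error above and is otherwise a placeholder:

.. code-block:: bash

   # Inspect quotas and recent events in the namespace used by the container group.
   kubectl describe resourcequota -n aap
   kubectl get events -n aap --sort-by=.lastTimestamp | tail -n 20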
.. _ag_container_capacity:
Container capacity limits
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: container groups; capacity
pair: container groups; limits
Capacity limits and quotas for containers are defined via objects in the Kubernetes API:
- To set limits on all pods within a given namespace, use the ``LimitRange`` object. Refer to the OpenShift documentation for `Quotas and Limit Ranges <https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#overview>`_.
- To set limits directly on the pod definition launched by AWX, see :ref:`ag_customize_pod_spec` and refer to the OpenShift documentation to set the options to `compute resources <https://docs.openshift.com/online/pro/dev_guide/compute_resources.html#dev-compute-resources>`_.
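As an illustration of the first option, a minimal ``LimitRange`` could be applied to the job namespace; the namespace name and resource values below are placeholders:

.. code-block:: bash

   # Give every container in the namespace default requests/limits unless it sets its own.
   kubectl apply -n awx-jobs -f - <<'EOF'
   apiVersion: v1
   kind: LimitRange
   metadata:
     name: awx-job-limits
   spec:
     limits:
       - type: Container
         default:
           cpu: "1"
           memory: 2Gi
         defaultRequest:
           cpu: 500m
           memory: 1Gi
   EOF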
.. Note::
Container groups do not use the capacity algorithm that normal nodes use. You would need to explicitly set the number of forks at the job template level, for instance. If forks are configured in AWX, that setting will be passed along to the container.


@@ -1,21 +0,0 @@
.. _ag_custom_inventory_script:
Custom Inventory Scripts
--------------------------
.. index::
single: custom inventory scripts
single: inventory scripts; custom
Inventory scripts have been discontinued. For more information, see :ref:`ug_customscripts` in the |atu|.
If you use custom inventory scripts, migrate to sourcing these scripts from a project. See :ref:`ag_inv_import` in the subsequent chapter, and also refer to :ref:`ug_inventory_sources` in the |atu| for more detail.
If you are migrating to |ees|, see:
- :ref:`upgrade_venv`
- :ref:`mesh_topology_ee` in the |atumg| to validate your topology
If you already have a mesh topology set up and want to view node type, node health, and specific details about each node, see :ref:`ag_topology_viewer` later in this guide.


@@ -1,13 +0,0 @@
.. _ag_custom_rebranding:
***************************
Using Custom Logos in AWX
***************************
.. index::
single: custom logo
single: rebranding
pair: logo; custom
.. include:: ../common/logos_branding.rst


@@ -1,47 +0,0 @@
.. _ag_start:
=============================
Administering AWX Deployments
=============================
Learn how to administer AWX deployments through custom scripts, management jobs, and DevOps workflows.
This guide assumes at least basic understanding of the systems that you manage and maintain with AWX.
This guide applies to the latest version of AWX only.
The content in this guide is updated frequently and might contain functionality that is not available in previous versions.
Likewise content in this guide can be removed or replaced if it applies to functionality that is no longer available in the latest version.
**Join us online**
Need help or want to discuss AWX including the documentation? See the :ref:`Communication guide<communication>` to learn how to join the conversation!
.. toctree::
:maxdepth: 2
:numbered:
self
init_script
custom_inventory_script
scm-inv-source
multi-creds-assignment
management_jobs
clustering
containers_instance_groups
instances
topology_viewer
logfiles
logging
metrics
performance
secret_handling
security_best_practices
awx-manage
configure_awx
isolation_variables
oauth2_token_auth
authentication_timeout
session_limits
custom_rebranding
troubleshooting
tipsandtricks
.. monitoring


@@ -1,29 +0,0 @@
.. _ag_restart_awx:
Starting, Stopping, and Restarting AWX
----------------------------------------
To install AWX: https://github.com/ansible/awx-operator/tree/devel/docs/installation
.. these instructions will be ported over to here in the near future (TBD)
To migrate from an old AWX to a new AWX instance: https://github.com/ansible/awx-operator/blob/devel/docs/migration/migration.md
.. these instructions will be ported over to here in the near future (TBD)
To upgrade your AWX instance: https://github.com/ansible/awx-operator/blob/devel/docs/upgrade/upgrading.md
.. these instructions will be ported over to here in the near future (TBD)
To restart an AWX instance, you must first kill the container and restart it. Access the web-task container in the Operator to invoke the supervisord restart.
.. these instructions will need to be fleshed out (TBD)
To uninstall your AWX instance: https://github.com/ansible/awx-operator/blob/devel/docs/uninstall/uninstall.md
.. these instructions will be ported over to here in the near future (TBD)


@@ -1,377 +0,0 @@
.. _ag_instances:
Managing Capacity With Instances
=================================
.. index::
pair: topology;capacity
pair: mesh;capacity
pair: remove;capacity
pair: add;capacity
Scaling your mesh is only available on OpenShift and Kubernetes (K8s) deployments of AWX. You can add or remove nodes from your cluster dynamically through the **Instances** resource of the AWX User Interface, without running the installation script.
Instances serve as nodes in your mesh topology. Automation mesh allows you to extend the footprint of your automation. Where you launch a job and where the ``ansible-playbook`` runs can be in different locations.
.. image:: ../common/images/instances_mesh_concept.drawio.png
:alt: Site A pointing to Site B and dotted arrows to two hosts from Site B
Automation mesh is useful for:
- traversing difficult network topologies
- bringing execution capabilities (the machine running ``ansible-playbook``) closer to your target hosts
The nodes (control, hop, and execution instances) are interconnected via receptor, forming a virtual mesh.
.. image:: ../common/images/instances_mesh_concept_with_nodes.drawio.png
:alt: Control node pointing to hop node, which is pointing to two execution nodes.
Prerequisites
--------------
- |rhel| (RHEL) or Debian operating system. Bring a machine online with a compatible Red Hat family OS (e.g. RHEL 8 and 9) or Debian 11. This machine requires a static IP, or a resolvable DNS hostname that the AWX cluster can access. If the ``listener_port`` is defined, the machine will also need an available open port on which to establish inbound TCP connections (e.g. 27199).
In general, the more CPU cores and memory the machine has, the more jobs that can be scheduled to run on that machine at once. See :ref:`ug_job_concurrency` for more information on capacity.
- The system that is going to run the install bundle to set up the remote node requires the collection ``ansible.receptor`` to be installed:
- If the machine has access to the internet:
::
ansible-galaxy install -r requirements.yml
Installing the receptor collection dependency from the ``requirements.yml`` file will consistently retrieve the receptor version specified there, as well as any other collection dependencies that may be needed in the future.
- If the machine does not have access to the internet, refer to `Downloading a collection for offline use <https://docs.ansible.com/ansible/latest/collections_guide/collections_installing.html#downloading-a-collection-for-offline-use>`_.
- To manage instances from the AWX user interface, you must have System Administrator or System Auditor permissions.
Common topologies
------------------
Instances make up the network of devices that communicate with one another. They are the building blocks of an automation mesh. These building blocks serve as nodes in a mesh topology. There are several kinds of instances:
+-----------+-----------------------------------------------------------------------------------------------------------------+
| Node Type | Description |
+===========+=================================================================================================================+
| Control | Nodes that run persistent Ansible Automation Platform services, and delegate jobs to hybrid and execution nodes |
+-----------+-----------------------------------------------------------------------------------------------------------------+
| Hybrid | Nodes that run persistent Ansible Automation Platform services and execute jobs |
| | (not applicable to operator-based installations) |
+-----------+-----------------------------------------------------------------------------------------------------------------+
| Hop | Used for relaying across the mesh only |
+-----------+-----------------------------------------------------------------------------------------------------------------+
| Execution | Nodes that run jobs delivered from control nodes (jobs submitted from the user's Ansible automation) |
+-----------+-----------------------------------------------------------------------------------------------------------------+
Simple topology
~~~~~~~~~~~~~~~~
One of the ways to expand job capacity is to create a standalone execution node that can be added to run alongside the Kubernetes deployment of AWX. These machines will not be a part of the AWX Kubernetes cluster. The control nodes running in the cluster will connect and submit work to these machines via Receptor. The machines are registered in AWX as type "execution" instances, meaning they will only be used to run AWX jobs, not dispatch work or handle web requests as control nodes do.
Hop nodes can be added to sit between the control plane of AWX and standalone execution nodes. These machines will not be a part of the AWX Kubernetes cluster and they will be registered in AWX as node type "hop", meaning they will only handle inbound and outbound traffic for otherwise unreachable nodes in a different or more strict network.
Below is an example of an AWX task pod with two execution nodes. Traffic to execution node 2 flows through a hop node that is set up between it and the control plane.
.. image:: ../common/images/instances_awx_task_pods_hopnode.drawio.png
:alt: AWX task pod with a hop node between the control plane of AWX and standalone execution nodes.
Below are sample values used to configure each node in a simple topology:
.. list-table::
:widths: 20 30 10 20 15
:header-rows: 1
* - Instance type
- Hostname
- Listener port
- Peers from control nodes
- Peers
* - Control plane
- awx-task-65d6d96987-mgn9j
- n/a
- n/a
- [hop node]
* - Hop node
- awx-hop-node
- 27199
- True
- []
* - Execution node
- awx-example.com
- n/a
- False
- [hop node]
Mesh topology
~~~~~~~~~~~~~~
Mesh ingress is a feature that allows remote nodes to connect inbound to the control plane. This is especially useful when creating remote nodes in restricted networking environments that disallow inbound traffic.
.. image:: ../common/images/instances_mesh_ingress_topology.drawio.png
:alt: Mesh ingress architecture showing the peering relationship between nodes.
Below are sample values used to configure each node in a mesh ingress topology:
.. list-table::
:widths: 20 30 10 20 15
:header-rows: 1
* - Instance type
- Hostname
- Listener port
- Peers from control nodes
- Peers
* - Control plane
- awx-task-65d6d96987-mgn9j
- n/a
- n/a
- [hop node]
* - Hop node
- awx-mesh-ingress-1
- 27199
- True
- []
* - Execution node
- awx-example.com
- n/a
- False
- [hop node]
To create a mesh ingress for AWX, see the `Mesh Ingress <https://ansible.readthedocs.io/projects/awx-operator/en/latest/user-guide/advanced-configuration/mesh-ingress.html>`_ chapter of the AWX Operator Documentation for information on setting up this type of topology. The last step is to create a remote execution node and add it to an instance group so it can be used in your job execution. Whatever execution environment image is used to run a playbook needs to be accessible from your remote execution node, as does everything else you use in your playbook.
.. image:: ../common/images/instances-job-template-using-remote-execution-ig.png
:alt: Job template using the instance group with the execution node to run jobs.
:width: 1400px
.. _ag_instances_add:
Add an instance
----------------
To create an instance in AWX:
1. Click **Instances** from the left side navigation menu of the AWX UI.
2. In the Instances list view, click the **Add** button and the Create new Instance window opens.
.. image:: ../common/images/instances_create_new.png
:alt: Create a new instance form.
:width: 1400px
An instance has several attributes that may be configured:
- Enter a fully qualified domain name (ping-able DNS) or IP address for your instance in the **Host Name** field (required). This field is equivalent to ``hostname`` in the API.
- Optionally enter a **Description** for the instance
- The **Instance State** field is auto-populated, indicating that it is being installed, and cannot be modified
- Optionally specify the **Listener Port** for the receptor to listen on for incoming connections. This is an open port on the remote machine used to establish inbound TCP connections. This field is equivalent to ``listener_port`` in the API.
- Select from the options in **Instance Type** field to specify the type you want to create. Only execution and hop nodes can be created as operator-based installations do not support hybrid nodes. This field is equivalent to ``node_type`` in the API.
- In the **Peers** field, select the instance hostnames you want your new instance to connect outbound to.
- In the **Options** fields:
- Check the **Enable Instance** box to make it available for jobs to run on an execution node.
- Check the **Managed by Policy** box to allow policy to dictate how the instance is assigned.
- Check the **Peers from control nodes** box to allow control nodes to peer to this instance automatically. Listener port needs to be set if this is enabled or the instance is a peer.
3. Once the attributes are configured, click **Save** to proceed.
Upon successful creation, the Details page of the newly created instance opens.
.. image:: ../common/images/instances_create_details.png
:alt: Details of the newly created instance.
:width: 1400px
.. note::
The following steps (4-8) are intended to be run from any computer that has SSH access to the newly created instance.
4. Click the download button next to the **Install Bundle** field to download the tarball that contains the files needed for AWX to make proper TCP connections to the remote machine.
.. image:: ../common/images/instances_install_bundle.png
:alt: Instance details showing the Download button in the Install Bundle field of the Details tab.
:width: 1400px
5. Extract the downloaded ``tar.gz`` file in the location where you downloaded it. The install bundle contains TLS certificates and keys, a certificate authority, and a proper Receptor configuration file. To put these files in the right locations on the remote machine, the install bundle includes an ``install_receptor.yml`` playbook. The playbook requires the Receptor collection, which can be obtained via:
::
ansible-galaxy collection install -r requirements.yml
6. Before running the ``ansible-playbook`` command, edit the following fields in the ``inventory.yml`` file:
- ``ansible_user`` with the username running the installation
- ``ansible_ssh_private_key_file`` to contain the filename of the private key used to connect to the instance
::
---
all:
hosts:
remote-execution:
ansible_host: <hostname>
ansible_user: <username> # user provided
ansible_ssh_private_key_file: ~/.ssh/id_rsa
The content of the ``inventory.yml`` file serves as a template and contains variables for roles that are applied during the installation and configuration of a receptor node in a mesh topology. You may modify some of the other fields, or replace the file in its entirety for advanced scenarios. Refer to `Role Variables <https://github.com/ansible/receptor-collection/blob/main/README.md>`_ for more information on each variable.
7. Save the file to continue.
8. Run the following command on the machine you want to update your mesh:
::
ansible-playbook -i inventory.yml install_receptor.yml
Wait a few minutes for the periodic AWX task to run a health check against the new instance. You may run a health check at any time by selecting the node and clicking the **Run health check** button on its Details page. Once the instances endpoint or page reports a "Ready" status for the instance, jobs are ready to run on this machine.
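A health check can also be triggered through the API; a minimal sketch (the instance ID, hostname, and credentials are illustrative placeholders):

::

    # Trigger a health check for instance id 5
    curl -u admin:password -X POST https://<awx_host>/api/v2/instances/5/health_check/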
9. To view other instances within the same topology or associate peers, click the **Peers** tab.
.. image:: ../common/images/instances_peers_tab.png
:alt: "Peers" tab showing two peers.
:width: 1400px
To associate peers with your node, click the **Associate** button to open a dialog box of instances eligible for peering.
.. image:: ../common/images/instances_associate_peer.png
:alt: Instances available to peer with the example hop node.
:width: 1400px
Execution nodes can peer with either hop nodes or other execution nodes. Hop nodes can only peer with execution nodes unless you check the **Peers from control nodes** check box from the **Options** field.
.. note::
If you associate or disassociate a peer, a notification will inform you to re-run the install bundle from the Peer Detail view (the :ref:`ag_topology_viewer` has the download link).
.. image:: ../common/images/instances_associate_peer_reinstallmsg.png
:alt: Notification to re-run the installation bundle due to change in the peering.
You can remove an instance by clicking **Remove** in the Instances page, or by setting the instance ``node_state = deprovisioning`` via the API. Upon deletion, a pop-up message appears to notify you that you may need to re-run the install bundle to ensure that removed nodes are no longer connected.
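A minimal sketch of the API-based removal mentioned above (the instance ID, hostname, and credentials are illustrative placeholders):

::

    # Mark instance id 5 for deprovisioning
    curl -u admin:password -H "Content-Type: application/json" \
         -X PATCH -d '{"node_state": "deprovisioning"}' \
         https://<awx_host>/api/v2/instances/5/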
10. To view a graphical representation of your updated topology, refer to the :ref:`ag_topology_viewer` section of this guide.
Manage instances
-----------------
Click **Instances** from the left side navigation menu to access the Instances list.
.. image:: ../common/images/instances_list_view.png
:alt: List view of instances in AWX
:width: 1400px
The Instances list displays all the current nodes in your topology, along with relevant details:
- **Host Name**
.. _node_statuses:
- **Status** indicates the state of the node:
- **Installed**: a node that has successfully installed and configured, but has not yet passed the periodic health check
- **Ready**: a node that is available to run jobs or route traffic between nodes on the mesh. This replaces the “Healthy” node state previously used in the mesh topology
- **Provisioning**: a node that is in the process of being added to a current mesh, but is awaiting the job to install all of the packages (currently not yet supported and is subject to change in a future release)
- **Deprovisioning**: a node that is in the process of being removed from a current mesh and is finishing up jobs currently running on it
- **Unavailable**: a node that did not pass the most recent health check, indicating connectivity or receptor problems
- **Provisioning Failure**: a node that failed during provisioning (currently not yet supported and is subject to change in a future release)
- **De-provisioning Failure**: a node that failed during deprovisioning (currently not yet supported and is subject to change in a future release)
- **Node Type** specifies whether the node is a control, hop, execution node, or hybrid (not applicable to operator-based installations). See :term:`node` for further detail.
- **Capacity Adjustment** allows you to adjust the number of forks on your nodes (see the API sketch after this list)
- **Used Capacity** indicates how much capacity has been used
- **Actions** allow you to enable or disable the instance to control whether jobs can be assigned to it
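The **Capacity Adjustment** and enable/disable actions correspond to fields on the instance resource in the API. A minimal sketch, assuming instance ID 5 and illustrative hostname and credentials:

::

    # Set the capacity adjustment halfway between the minimum and maximum number
    # of forks, and keep the instance enabled
    curl -u admin:password -H "Content-Type: application/json" \
         -X PATCH -d '{"capacity_adjustment": "0.5", "enabled": true}' \
         https://<awx_host>/api/v2/instances/5/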
From this page, you can add, remove, or run health checks on your nodes. Use the check boxes next to an instance to select it for removal or for a health check. When a button is grayed out, you do not have permission for that particular action; contact your Administrator to grant you the required level of access. If you are able to remove an instance, you will receive a prompt for confirmation, like the one below:
.. image:: ../common/images/instances_delete_prompt.png
:alt: Prompt for deleting instances in AWX
:width: 1400px
.. note::
You can still remove an instance even if it is active and jobs are running on it. AWX will attempt to wait for any jobs running on this node to complete before actually removing it.
Click **Remove** to confirm.
.. _health_check:
When running a health check on an instance, a message displays at the top of its Details page indicating that the health check is in progress.
.. image:: ../common/images/instances_health_check.png
:alt: Health check for instances in AWX
:width: 1400px
Click **Reload** to refresh the instance status.
.. note::
Health checks run asynchronously, and it may take up to a minute for the instance status to update, even with a refresh. The status may or may not change after the health check. At the bottom of the Details page, a timer/clock icon displays next to the last known health check date and time stamp if the health check task is currently running.
.. image:: ../common/images/instances_health_check_pending.png
:alt: Health check for instance still in pending state.
The example below shows a health check that updated the status with an error on node 'one':
.. image:: ../common/images/topology-viewer-instance-with-errors.png
:alt: Health check showing an error in one of the instances.
:width: 1400px
Using a custom Receptor CA
---------------------------
Refer to the AWX Operator Documentation, `Custom Receptor CA <https://ansible.readthedocs.io/projects/awx-operator/en/latest/user-guide/advanced-configuration/custom-receptor-certs.html>`_ for detail.
Using a private image for the default EE
------------------------------------------
Refer to the AWX Operator Documentation on `Default execution environments from private registries <https://ansible.readthedocs.io/projects/awx-operator/en/latest/user-guide/advanced-configuration/default-execution-environments-from-private-registries.html>`_ for detail.
Troubleshooting
----------------
If you encounter issues while setting up instances, refer to these troubleshooting tips.
Fact cache not working
~~~~~~~~~~~~~~~~~~~~~~~
Make sure the system timezone on the execution node matches ``settings.TIME_ZONE`` (default is 'UTC') on AWX. Fact caching relies on comparing modified times of artifact files, and these modified times are not timezone-aware. Therefore, it is critical that the timezones of the execution nodes match AWX's timezone setting.
To set the system timezone to UTC:
::
ln -s /usr/share/zoneinfo/Etc/UTC /etc/localtime
Permission denied errors
~~~~~~~~~~~~~~~~~~~~~~~~~~
Jobs may fail with the following error, or similar:
::
"msg":"exec container process `/usr/local/bin/entrypoint`: Permission denied"
For RHEL-based machines, this could be due to SELinux being enabled on the system. You can pass these container run options via ``extra_settings`` to override the SELinux protections:
::
DEFAULT_CONTAINER_RUN_OPTIONS = ['--network', 'slirp4netns:enable_ipv6=true', '--security-opt', 'label=disable']
View File
@ -1,12 +0,0 @@
.. _ag_isolation_variables:
Isolation functionality and variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: troubleshooting; isolation
pair: isolation; functionality
pair: isolation; variables
.. include:: ../common/isolation_variables.rst
View File
@ -1,9 +0,0 @@
**************
AWX Logfiles
**************
.. index::
single: logfiles
The AWX logfiles are streamed real-time on the console.
View File
@ -1,365 +0,0 @@
.. _ag_logging:
************************
Logging and Aggregation
************************
.. index::
single: logging
pair: logging; schema
pair: logging; logstash
pair: logging; splunk
pair: logging; loggly
pair: logging; sumologic
pair: logging; ELK stack
pair: logging; Elastic stack
pair: logging; rsyslog
Logging is a feature that provides the capability to send detailed logs to several kinds of 3rd party external log aggregation services. Services connected to this data feed serve as a useful means of gaining insight into AWX usage or technical trends. The data can be used to analyze events in the infrastructure, monitor for anomalies, and correlate events from one service with events in another. The types of data that are most useful to AWX are job fact data, job events/job runs, activity stream data, and log messages. The data is sent in JSON format over an HTTP connection using minimal service-specific tweaks engineered in a custom handler or via an imported library.
Installing AWX will install a newer version of rsyslog, which will replace the version that comes with the RHEL base. The version of rsyslog that is installed by AWX does not include the following rsyslog modules:
- rsyslog-udpspoof.x86_64
- rsyslog-libdbi.x86_64
After installing AWX, use only the AWX-provided rsyslog package for any logging outside of AWX that may previously have been done with the RHEL-provided rsyslog package. If you already use rsyslog for logging system logs on AWX instances, you can continue to use rsyslog to handle logs from outside of AWX by running a separate rsyslog process (using the same version of rsyslog that AWX uses), and pointing it to a separate /etc/rsyslog.conf.
.. note::
For systems that use rsyslog outside of AWX, consider any conflict that may arise from also using the new version of rsyslog that comes with AWX.
From the ``/api/v2/settings/logging/`` endpoint, you can configure how the AWX rsyslog process handles messages that have not yet been sent in the event that your external logger goes offline:
- ``LOG_AGGREGATOR_MAX_DISK_USAGE_GB``: specifies the amount of data to store (in gigabytes) during an outage of the external log aggregator (defaults to 1). Equivalent to the ``rsyslogd queue.maxdiskspace`` setting.
- ``LOG_AGGREGATOR_MAX_DISK_USAGE_PATH``: specifies the location to persist logs that should be retried after an outage of the external log aggregator (defaults to ``/var/lib/awx``). Equivalent to the ``rsyslogd queue.spoolDirectory`` setting.
For example, if Splunk goes offline, rsyslogd stores a queue on the disk until Splunk comes back online. By default, it will store up to 1GB of events (while Splunk is offline) but you can make that more than 1GB if necessary, or change the path where you save the queue.
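Both settings can be adjusted with a single ``PATCH`` to the logging settings endpoint; a minimal sketch with illustrative values, hostname, and credentials:

::

    # Allow up to 5 GB of queued events and keep the default spool directory
    curl -u admin:password -H "Content-Type: application/json" \
         -X PATCH -d '{"LOG_AGGREGATOR_MAX_DISK_USAGE_GB": 5, "LOG_AGGREGATOR_MAX_DISK_USAGE_PATH": "/var/lib/awx"}' \
         https://<awx_host>/api/v2/settings/logging/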
Loggers
----------
Below are the special loggers (except for ``awx``, which constitutes generic server logs) that provide a large amount of information in a predictable, structured or semi-structured format, following the same structure as one would expect if obtaining the data from the API:
- ``job_events``: Provides data returned from the Ansible callback module
- ``activity_stream``: Displays the record of changes to the objects within the AWX application
- ``system_tracking``: Provides fact data gathered by the Ansible ``setup`` module (i.e. ``gather_facts: True``) when job templates are run with **Enable Fact Cache** selected
- ``awx``: Provides generic server logs, which include logs that would normally be written to a file. It contains the standard metadata that all logs have, except it only has the message from the log statement.
These loggers only use the INFO log level, except for the ``awx`` logger, which may be at any given level.
Additionally, the standard AWX logs are deliverable through this same mechanism. You can enable or disable each of these five sources of data, and adjust the log level consumed from the standard AWX logs, without manipulating a complex dictionary in your local settings file.
To configure various logging components in AWX, click **Settings** from the left navigation bar then select **Logging settings** from the list of System options.
Log message schema
~~~~~~~~~~~~~~~~~~~~
Common schema for all loggers:
- ``cluster_host_id``: Unique identifier of the host within the AWX cluster
- ``level``: Standard Python log level, roughly reflecting the significance of the event. All of the data loggers that are part of this feature use the INFO level, but the other AWX logs will use different levels as appropriate
- ``logger_name``: Name of the logger we use in the settings, for example, "activity_stream"
- ``@timestamp``: Time of log
- ``path``: File path in code where the log was generated
Activity stream schema
~~~~~~~~~~~~~~~~~~~~~~~~~
- (common): This uses all the fields common to all loggers listed above
- ``actor``: Username of the user who took the action documented in the log
- ``changes``: JSON summary of what fields changed, and their old/new values.
- ``operation``: The basic category of the change logged in the activity stream, for instance, "associate".
- ``object1``: Information about the primary object being operated on, consistent with what we show in the activity stream
- ``object2``: If applicable, the second object involved in the action
Job event schema
~~~~~~~~~~~~~~~~~~~~
This logger reflects the data being saved into job events, except when they would otherwise conflict with expected standard fields from the logger, in which case the fields are nested. Notably, the field ``host`` on the ``job_event`` model is given as ``event_host``. There is also a sub-dictionary field, ``event_data``, within the payload, which contains different fields depending on the specifics of the Ansible event.
This logger also includes the common fields.
Scan / fact / system tracking data schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These contain detailed dictionary-type fields that are either services, packages, or files.
- (common): This uses all the fields common to all loggers listed above
- ``services``: For services scans, this field is included and has keys based on the name of the service. **NOTE**: Periods are disallowed by Elasticsearch in names, and are replaced with "_" by our log formatter
- ``package``: Included for log messages from package scans
- ``files``: Included for log messages from file scans
- ``host``: Name of host scan applies to
- ``inventory_id``: Id of the inventory the host is inside of
Job status changes
~~~~~~~~~~~~~~~~~~~~~
This is intended to be a lower-volume source of information about changes in job states compared to job events, and is also intended to capture changes to types of unified jobs other than job template-based jobs.
In addition to common fields, these logs include fields present on the job model.
AWX logs
~~~~~~~~~~~~~~~~
In addition to the common fields, this contains a ``msg`` field with the log message. Errors contain a separate ``traceback`` field. These logs can be enabled or disabled with the ``ENABLE EXTERNAL LOGGING`` option from the Logging settings page.
Logging Aggregator Services
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The logging aggregator service works with the following monitoring and data analysis systems:
.. contents:: :local:
Logstash
^^^^^^^^^
These instructions describe how to use the logstash container.
1. Uncomment the following lines in the ``docker-compose.yml`` file:
::
#- logstash
...
#logstash:
# build:
# context: ./docker-compose
# dockerfile: Dockerfile-logstash
2. POST the following content to ``/api/v2/settings/logging/`` (this uses authentication set up inside of the logstash configuration file).
::
{
"LOG_AGGREGATOR_HOST": "http://logstash",
"LOG_AGGREGATOR_PORT": 8085,
"LOG_AGGREGATOR_TYPE": "logstash",
"LOG_AGGREGATOR_USERNAME": "awx_logger",
"LOG_AGGREGATOR_PASSWORD": "workflows",
"LOG_AGGREGATOR_LOGGERS": [
"awx",
"activity_stream",
"job_events",
"system_tracking"
],
"LOG_AGGREGATOR_INDIVIDUAL_FACTS": false,
"LOG_AGGREGATOR_TOWER_UUID": "991ac7e9-6d68-48c8-bbde-7ca1096653c6",
"LOG_AGGREGATOR_ENABLED": true
}
.. note:: HTTP must be specified in the ``LOG_AGGREGATOR_HOST`` if you are using the docker development environment.
3. To view the most recent logs from the container:
::
docker exec -i -t $(docker ps -aqf "name=tools_logstash_1") tail -n 50 /logstash.log
4. To add logstash plugins, you can add any plugins you need in ``tools/elastic/logstash/Dockerfile`` before running the container.
Splunk
^^^^^^^^
AWX's Splunk logging integration uses the Splunk HTTP Event Collector (HEC). When configuring a Splunk logging aggregator, add the full URL to the HTTP Event Collector host, as in the following example:
.. code-block:: text
https://example.com/api/v2/settings/logging
{
"LOG_AGGREGATOR_HOST": "https://splunk_host:8088/services/collector/event",
"LOG_AGGREGATOR_PORT": null,
"LOG_AGGREGATOR_TYPE": "splunk",
"LOG_AGGREGATOR_USERNAME": "",
"LOG_AGGREGATOR_PASSWORD": "$encrypted$",
"LOG_AGGREGATOR_LOGGERS": [
"awx",
"activity_stream",
"job_events",
"system_tracking"
],
"LOG_AGGREGATOR_INDIVIDUAL_FACTS": false,
"LOG_AGGREGATOR_ENABLED": true,
"LOG_AGGREGATOR_TOWER_UUID": ""
}
Splunk HTTP Event Collector listens on port 8088 by default, so it is necessary to provide the full HEC event URL (with port) in order for incoming requests to be processed successfully. These values are entered in the example below:
.. image:: ../common/images/logging-splunk-awx-example.png
For further instructions on configuring the HTTP Event Collector, refer to the `Splunk documentation`_.
.. _`Splunk documentation`: http://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
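Before pointing AWX at the collector, you may want to confirm that the HEC endpoint is reachable and that your token works. A minimal sketch, assuming a HEC token you have already created in Splunk:

::

    # Send a test event directly to the Splunk HTTP Event Collector
    # (the token value is a placeholder)
    curl -k "https://splunk_host:8088/services/collector/event" \
         -H "Authorization: Splunk <hec_token>" \
         -d '{"event": "AWX HEC connectivity test"}'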
Loggly
^^^^^^^
To set up the sending of logs through Loggly's HTTP endpoint, refer to https://www.loggly.com/docs/http-endpoint/. Loggly uses the URL convention described at http://logs-01.loggly.com/inputs/TOKEN/tag/http/, which is shown entered in the **Logging Aggregator** field in the example below:
.. image:: ../common/images/logging-loggly-awx-example.png
Sumologic
^^^^^^^^^^^^
In Sumologic, create search criteria containing the JSON files that provide the parameters used to collect the data you need.
.. image:: ../common/images/logging_sumologic_main.png
Elastic stack (formerly ELK stack)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you are standing up your own version of the Elastic Stack from scratch, the only change required is to add the following lines to the logstash ``logstash.conf`` file:
::
filter {
json {
source => "message"
}
}
.. note::
Backward-incompatible changes were introduced with Elastic 5.0.0, and different configurations may be required depending on what versions you are using.
.. _ag_ctit_logging:
Set Up Logging
---------------
Log Aggregation
~~~~~~~~~~~~~~~~~~~~
To set up logging to any of the aggregator types:
1. Click **Settings** from the left navigation bar.
2. Under the list of System options, click to select **Logging settings**.
3. At the bottom of the Logging settings screen, click **Edit**.
4. Set the configurable options from the fields provided:
- **Enable External Logging**: Click the toggle button to **ON** if you want to send logs to an external log aggregator.
- **Logging Aggregator**: Enter the hostname or IP address to which you want to send logs.
- **Logging Aggregator Port**: Specify the port for the aggregator if it requires one.
.. note::
When the connection type is HTTPS, you can enter the hostname as a URL with a port number, in which case you are not required to enter the port again. However, TCP and UDP connections are determined by the hostname and port number combination, rather than a URL, so in the case of a TCP/UDP connection, supply the port in the specified field. If a URL is entered in the host field (the **Logging Aggregator** field) instead, its hostname portion will be extracted as the actual hostname.
- **Logging Aggregator Type**: Click to select the aggregator service from the drop-down menu:
.. image:: ../common/images/configure-awx-system-logging-types.png
- **Logging Aggregator Username**: Enter the username of the logging aggregator if it requires it.
- **Logging Aggregator Password/Token**: Enter the password of the logging aggregator if it requires it.
- **Log System Tracking Facts Individually**: Click the tooltip |help| icon for additional information on whether you should turn it on, or leave it off by default.
- **Logging Aggregator Protocol**: Click to select a connection type (protocol) to communicate with the log aggregator. Subsequent options vary depending on the selected protocol.
- **Logging Aggregator Level Threshold**: Select the level of severity you want the log handler to report.
- **TCP Connection Timeout**: Specify the connection timeout in seconds. This option is only applicable to HTTPS and TCP log aggregator protocols.
- **Enable/disable HTTPS certificate verification**: Certificate verification is enabled by default for HTTPS log protocol. Click the toggle button to **OFF** if you do not want the log handler to verify the HTTPS certificate sent by the external log aggregator before establishing a connection.
- **Loggers to Send Data to the Log Aggregator Form**: All four types of data are pre-populated by default. Click the tooltip |help| icon next to the field for additional information on each data type. Delete the data types you do not want.
- **Log Format For API 4XX Errors**: Configure a specific error message. See :ref:`logging-api-400-error-config` for further detail.
.. |help| image:: ../common/images/tooltips-icon.png
5. Review your entries for your chosen logging aggregation. Below is an example of one set up for Splunk:
.. image:: ../common/images/configure-awx-system-logging-splunk-example.png
6. When done, click **Save** to apply the settings or **Cancel** to abandon the changes.
7. To verify that your configuration is set up correctly, click **Save** first, then click **Test**. This sends a test log message to the log aggregator using the current logging configuration in AWX. You should check to make sure this test message was received by your external log aggregator.
.. note::
If the **Test** button is disabled, it is an indication that the fields are different from their initial values, so save your changes first and make sure the **Enable External Logging** toggle is set to ON.
.. _logging-api-400-error-config:
API 4XX Error Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When the API encounters an issue with a request, it will typically return an HTTP error code in the 400 range along with an error message. When this happens, an error message will be generated in the log following this pattern:
::

    status {status_code} received by user {user_name} attempting to access {url_path} from {remote_addr}
These messages can be configured as required. To modify the default API 4XX errors log message format, do the following:
1. Click **Settings** from the left navigation bar.
2. Under the list of System options, click to select **Logging settings**.
3. At the bottom of the Logging settings screen, click **Edit**.
4. Modify the field **Log Format For API 4XX Errors**.
Items surrounded by ``{}`` will be substituted when the log error is generated. The following variables can be used:
- **status_code**: The HTTP status code the API is returning
- **user_name**: The name of the user that was authenticated when making the API request
- **url_path**: The path portion of the URL being called (aka the API endpoint)
- **remote_addr**: The remote address received by AWX
- **error**: The error message returned by the API or, if no error is specified, the HTTP status as text
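For example, to append the returned error message to the default pattern, the field could be set to something like the following (any combination of the variables above is valid):

::

    status {status_code} received by user {user_name} attempting to access {url_path} from {remote_addr}: {error}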
.. _logging-api-otel:
OTel configuration with AWX
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can integrate OTel with AWX by configuring logging manually to point to your OTel collector. To do this, add the following codeblock in your `settings file <https://github.com/ansible/awx/blob/devel/tools/docker-compose/ansible/roles/sources/templates/local_settings.py.j2#L50>`_ (``local_settings.py.j2``):
.. code-block:: python
LOGGING['handlers']['otel'] |= {
'class': 'awx.main.utils.handlers.OTLPHandler',
'endpoint': 'http://otel:4317',
}
# Add otel log handler to all log handlers where propagate is False
for name in LOGGING['loggers'].keys():
if not LOGGING['loggers'][name].get('propagate', True):
handler = LOGGING['loggers'][name].get('handlers', [])
if 'otel' not in handler:
LOGGING['loggers'][name].get('handlers', []).append('otel')
# Everything without explicit propagate=False ends up logging to 'awx' so add it
handler = LOGGING['loggers']['awx'].get('handlers', [])
if 'otel' not in handler:
LOGGING['loggers']['awx'].get('handlers', []).append('otel')
Edit ``'endpoint': 'http://otel:4317',`` to point to your OTel collector.
To see it working in the dev environment, set the following:
::
OTEL=true GRAFANA=true LOKI=true PROMETHEUS=true make docker-compose
Then go to `http://localhost:3001 <http://localhost:3001>`_ to access Grafana and see the logs.
Troubleshoot Logging
---------------------
API 4XX Errors
~~~~~~~~~~~~~~~~~~~~
You can include the API error message for 4XX errors by modifying the log format for those messages. Refer to the :ref:`logging-api-400-error-config` section for more detail.
View File
@ -1,144 +0,0 @@
.. _ag_management_jobs:
Management Jobs
------------------
.. index::
single: management jobs
single: cleaning old data
single: removing old data
**Management Jobs** assist in the cleaning of old data from AWX, including system tracking information, tokens, job histories, and activity streams. You can use this if you have specific retention policies or need to decrease the storage used by your AWX database. Click **Management Jobs** from the left navigation bar.
|management jobs|
.. |management jobs| image:: ../common/images/ug-management-jobs.png
Several job types are available for you to schedule and launch:
- **Cleanup Activity Stream**: Remove activity stream history older than a specified number of days
- **Cleanup Expired OAuth 2 Tokens**: Remove expired OAuth 2 access tokens and refresh tokens
- **Cleanup Expired Sessions**: Remove expired browser sessions from the database
- **Cleanup Job Details**: Remove job history older than a specified number of days
Removing Old Activity Stream Data
============================================
.. index::
pair: management jobs; cleanup activity stream
single: activity stream cleanup management job
To remove older activity stream data, click on the launch (|launch|) button beside **Cleanup Activity Stream**.
|activity stream launch - remove activity stream launch|
.. |activity stream launch - remove activity stream launch| image:: ../common/images/ug-management-jobs-remove-activity-stream-launch.png
Enter the number of days of data you would like to save and click **Launch**.
.. _ag_mgmt_job_schedule:
Scheduling
~~~~~~~~~~~~
To review or set a schedule for purging data marked for deletion:
1. For a particular cleanup job, click the **Schedules** tab.
.. image:: ../common/images/ug-management-jobs-remove-activity-stream-schedule.png
Note that you can turn this scheduled management job on and off easily using the **ON/OFF** toggle button.
2. Click on the name of the job, in this example "Cleanup Activity Schedule", to review the schedule settings and click **Edit** to modify them. You can also use the **Add** button to create a new schedule for this management job.
.. image:: ../common/images/ug-management-jobs-remove-activity-stream-schedule-details.png
3. Enter the appropriate details into the following fields and click **Save**:
- Name (required)
- Start Date (required)
- Start Time (required)
- Local Time Zone (the entered Start Time should be in this timezone)
- Repeat Frequency (the appropriate options display as the update frequency is modified, including exceptions for data you do not want to include)
- Days of Data to Keep (required) - specify how much data you want to retain
The **Details** tab displays a description of the schedule and a list of the scheduled occurrences in the selected Local Time Zone.
.. note::
Jobs are scheduled in UTC. Repeating jobs that run at a specific time of day may move relative to a local timezone when Daylight Saving Time shifts occur.
.. _ag_mgmt_job_notify:
Notifications
~~~~~~~~~~~~~~~
To review or set notifications associated with a management job:
1. For a particular cleanup job, click the **Notifications** tab.
.. image:: ../common/images/management-job-notifications.png
If none exist, see :ref:`ug_notifications` for more information.
.. image:: ../common/images/management-job-notifications-empty.png
An example of a notification with details specified:
.. image:: ../common/images/management-job-add-notification-details.png
Cleanup Expired OAuth2 Tokens
====================================
.. index::
pair: management jobs; cleanup expired OAuth2 tokens
single: expired OAuth2 tokens cleanup management job
To remove expired OAuth2 tokens, click on the launch (|launch|) button beside **Cleanup Expired OAuth2 Tokens**.
You can review or set a schedule for cleaning up expired OAuth2 tokens by performing the same procedure described for activity stream management jobs. See :ref:`ag_mgmt_job_schedule` for detail.
You can also set or review notifications associated with this management job the same way as described in :ref:`ag_mgmt_job_notify` for activity stream management jobs, and refer to :ref:`ug_notifications` for more detail.
Cleanup Expired Sessions
====================================
.. index::
pair: management jobs; cleanup expired sessions
single: expired sessions cleanup management job
To remove expired sessions, click on the launch (|launch|) button beside **Cleanup Expired Sessions**.
You can review or set a schedule for cleaning up expired sessions by performing the same procedure described for activity stream management jobs. See :ref:`ag_mgmt_job_schedule` for detail.
You can also set or review notifications associated with this management job the same way as described in :ref:`ag_mgmt_job_notify` for activity stream management jobs, and refer to :ref:`ug_notifications` for more detail.
Removing Old Job History
====================================
.. index::
pair: management jobs; cleanup job history
single: job history cleanup management job
To remove job history older than a specified number of days, click on the launch (|launch|) button beside **Cleanup Job Details**.
.. |launch| image:: ../common/images/launch-button.png
|management jobs - cleanup job launch|
.. |management jobs - cleanup job launch| image:: ../common/images/ug-management-jobs-cleanup-job-launch.png
Enter the number of days of data you would like to save and click **Launch**.
.. note::
The initial job run for an AWX resource (e.g. Projects, Job Templates) is excluded from **Cleanup Job Details**, regardless of retention value.
You can review or set a schedule for cleaning up old job history by performing the same procedure described for activity stream management jobs. See :ref:`ag_mgmt_job_schedule` for detail.
You can also set or review notifications associated with this management job the same way as described in :ref:`ag_mgmt_job_notify` for activity stream management jobs, and refer to :ref:`ug_notifications` for more detail.
View File
@ -1,54 +0,0 @@
.. _ag_metrics:
Metrics
============
.. index::
pair: metrics; prometheus
A metrics endpoint is available in the API: ``/api/v2/metrics/`` that surfaces instantaneous metrics about AWX, which can be consumed by system monitoring software like the open source project Prometheus.
The data shown at the ``metrics/`` endpoint is served as ``Content-type: text/plain`` and as ``application/json`` as well. This endpoint contains useful information, such as counts of how many active user sessions there are, or how many jobs are actively running on each AWX node. Prometheus can be configured to scrape these metrics from AWX by hitting the AWX metrics endpoint and storing this data in a time-series database. Clients can later use Prometheus in conjunction with other software like Grafana or Metricbeat to visualize that data and set up alerts.
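You can inspect the raw metrics yourself before wiring up Prometheus; a minimal sketch using curl (the hostname and credentials are illustrative):

::

    # Prometheus text format; request "application/json" in the Accept header for JSON instead
    curl -s -u admin:password -H "Accept: text/plain" https://<awx_host>/api/v2/metrics/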
Set up Prometheus
-------------------
To set up and use Prometheus, you will need to install Prometheus on a virtual machine or container. Refer to the `Prometheus documentation`_ for further detail.
.. _`Prometheus documentation`: https://prometheus.io/docs/introduction/first_steps/
1. In the Prometheus config file (typically ``prometheus.yml``), specify a ``<token_value>``, a valid user/password for an AWX user you have created, and a ``<awx_host>``.
.. note:: Alternatively, you can provide an OAuth2 token (which can be generated at ``/api/v2/users/N/personal_tokens/``). By default, the config assumes a user with username=admin and password=password.
Using an OAuth2 token created at the ``/api/v2/tokens`` endpoint to authenticate Prometheus with AWX, the following example provides a valid scrape config if the URL for your AWX metrics endpoint is ``https://awx_host:443/metrics``.
::
scrape_configs:
- job_name: 'awx'
tls_config:
insecure_skip_verify: True
metrics_path: /api/v2/metrics
scrape_interval: 5s
scheme: https
bearer_token: <token_value>
# basic_auth:
# username: admin
# password: password
static_configs:
- targets:
- <awx_host>
For help configuring other aspects of Prometheus, such as alerts and service discovery configurations, refer to the `Prometheus configuration docs`_.
.. _`Prometheus configuration docs`: https://prometheus.io/docs/prometheus/latest/configuration/configuration/
If Prometheus is already running, you must restart it in order to apply the configuration changes, either by making a **POST** to the reload endpoint or by killing the Prometheus process or service.
2. Use a browser to navigate to your graph in the Prometheus UI at ``http://your_prometheus:9090/graph`` and test out some queries. For example, you can query the current number of active AWX user sessions by executing: ``awx_sessions_total{type="user"}``.
.. image:: ../common/images/metrics-prometheus-ui-query-example.png
Refer to the metrics endpoint in the AWX API for your instance (``/api/v2/metrics/``) for more ways to query.
View File
@ -1,107 +0,0 @@
.. _ag_multicred_assgn:
Multi-Credential Assignment
=============================
.. index::
single: credentials
pair: credentials; multi
pair: credentials; assignment
AWX provides support for assigning zero or more credentials to a job template.
Important Changes
--------------------
Job templates now have a single interface for credential assignment. From the API endpoint:
``GET /api/v2/job_templates/N/credentials/``
You can associate and disassociate credentials using ``POST`` requests, similar to the behavior in the deprecated ``extra_credentials`` endpoint:
.. code-block:: text
POST /api/v2/job_templates/N/credentials/ {'associate': true, 'id': 'X'}
POST /api/v2/job_templates/N/credentials/ {'disassociate': true, 'id': 'Y'}
Under this model, a job template is considered valid even when there are *no* credentials assigned to it. This model also provides users the ability to assign multiple Vault credentials to a job template.
Launch Time Considerations
------------------------------
Job templates have a configurable attribute, ``ask_credential_on_launch``. When set to ``True``, it signifies that, if desired, you may specify a list of credentials at launch time to override those defined on the job template. For example:
.. code-block:: text
POST /api/v2/job_templates/N/launch/ {'credentials': [A, B, C]}
If ``ask_credential_on_launch`` is ``False``, it signifies that custom credentials provided in the ``POST /api/v2/job_templates/N/launch/`` will be ignored.
Under this model, the only purpose for ``ask_credential_on_launch`` is to signal API clients to prompt the user for (optional) changes at launch time.
.. _ag_multi_vault:
Multi-Vault Credentials
-------------------------
As it is possible to assign multiple credentials to a job, you can specify multiple Vault credentials to decrypt when your job template runs. This functionality mirrors the support for `multiple vault passwords for a playbook run <http://docs.ansible.com/ansible/latest/vault.html#vault-ids-and-multiple-vault-passwords>`_ in Ansible 2.4 and later.
Vault credentials now have an optional field, ``vault_id``, which is analogous to the ``--vault-id`` argument to ``ansible-playbook``. To run a playbook which makes use of multiple vault passwords:
1. Create a Vault credential in AWX for each vault password; specify the Vault ID as a field on the credential and input the password (which will be encrypted and stored).
2. Assign multiple vault credentials to the job template via the new credentials endpoint:
.. code-block:: text
POST /api/v2/job_templates/N/credentials/
{
'associate': true,
'id': X
}
Alternatively, you can perform the same assignment in the AWX user interface on the *Create Credential* page:
.. image:: ../common/images/credentials-create-multivault-credential.png
In the above example, the credential created specifies the secret to be used by its Vault Identifier ("first") and password pair. When this credential is used in a Job Template, as in the example below, it will only decrypt the secret associated with the "first" Vault ID:
.. image:: ../common/images/job-template-include-multi-vault-credential.png
If you have a playbook that is set up the traditional way, with all the secrets in one big file without distinction, then leave the **Vault Identifier** field blank when setting up the Vault credential.
Prompted Vault Credentials
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For Vault credentials that are marked with "Prompt on launch", the launch endpoint of any related Job Templates will communicate the necessary Vault passwords via the ``passwords_needed_to_start`` key:
.. code-block:: text
GET /api/v2/job_templates/N/launch/
{
'passwords_needed_to_start': [
'vault_password.X',
'vault_password.Y',
]
}
``X`` and ``Y`` in the above example are primary keys of the associated Vault credentials.
.. code-block:: text
POST /api/v2/job_templates/N/launch/
{
'credential_passwords': {
'vault_password.X': 'first-vault-password'
'vault_password.Y': 'second-vault-password'
}
}
View File
@ -1,460 +0,0 @@
.. _ag_oauth2_token_auth:
Token-Based Authentication
==================================================
.. index::
single: token-based authentication
single: authentication
OAuth 2 is used for token-based authentication. You can manage OAuth tokens as well as applications, a server-side representation of API clients used to generate tokens. By including an OAuth token as part of the HTTP authentication header, you can authenticate yourself and adjust the degree of restrictive permissions in addition to the base RBAC permissions. Refer to `RFC 6749`_ for more details of OAuth 2 specification.
.. _`RFC 6749`: https://tools.ietf.org/html/rfc6749
For details on using the ``manage`` utility to create tokens, refer to the :ref:`ag_token_utility` section.
Managing OAuth 2 Applications and Tokens
------------------------------------------
Applications and tokens can be managed as a top-level resource at ``/api/<version>/applications`` and ``/api/<version>/tokens``. These resources can also be accessed respective to the user at ``/api/<version>/users/N/<resource>``. Applications can be created by making a **POST** to either ``api/<version>/applications`` or ``/api/<version>/users/N/applications``.
Each OAuth 2 application represents a specific API client on the server side. For an API client to use the API via an application token, it must first have an application and issue an access token. Individual applications are accessible via their primary keys: ``/api/<version>/applications/<pk>/``. Here is a typical application:
::
{
"id": 1,
"type": "o_auth2_application",
"url": "/api/v2/applications/2/",
"related": {
"tokens": "/api/v2/applications/2/tokens/"
},
"summary_fields": {
"organization": {
"id": 1,
"name": "Default",
"description": ""
},
"user_capabilities": {
"edit": true,
"delete": true
},
"tokens": {
"count": 0,
"results": []
}
},
"created": "2018-07-02T21:16:45.824400Z",
"modified": "2018-07-02T21:16:45.824514Z",
"name": "My Application",
"description": "",
"client_id": "Ecmc6RjjhKUOWJzDYEP8TZ35P3dvsKt0AKdIjgHV",
"client_secret": "7Ft7ym8MpE54yWGUNvxxg6KqGwPFsyhYn9QQfYHlgBxai74Qp1GE4zsvJduOfSFkTfWFnPzYpxqcRsy1KacD0HH0vOAQUDJDCidByMiUIH4YQKtGFM1zE1dACYbpN44E",
"client_type": "confidential",
"redirect_uris": "",
"authorization_grant_type": "password",
"skip_authorization": false,
"organization": 1
}
As shown in the example above, ``name`` is the human-readable identifier of the application. The rest of the fields, like ``client_id`` and ``redirect_uris``, are mainly used for OAuth2 authorization, which is covered later in :ref:`ag_use_oauth_pat`.
The values for the ``client_id`` and ``client_secret`` fields are generated during creation and are non-editable identifiers of applications, while ``organization`` and ``authorization_grant_type`` are required upon creation and become non-editable.
Access Rules for Applications
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Access rules for applications are as follows:
- System administrators can view and manipulate all applications in the system
- Organization administrators can view and manipulate all applications belonging to Organization members
- Other users can only view, update, and delete their own applications, but cannot create any new applications
Tokens, on the other hand, are resources used to actually authenticate incoming requests and mask the permissions of the underlying user. There are two ways to create a token:
- POST to the ``/api/v2/tokens/`` endpoint with ``application`` and ``scope`` fields to point to the related application and specify token scope
- POST to the ``/api/v2/applications/<pk>/tokens/`` endpoint with the ``scope`` field (the parent application will be automatically linked)
Individual tokens are accessible via their primary keys: ``/api/<version>/tokens/<pk>/``. Here is an example of a typical token:
.. code-block:: text
{
"id": 4,
"type": "o_auth2_access_token",
"url": "/api/v2/tokens/4/",
"related": {
"user": "/api/v2/users/1/",
"application": "/api/v2/applications/1/",
"activity_stream": "/api/v2/tokens/4/activity_stream/"
},
"summary_fields": {
"application": {
"id": 1,
"name": "Default application for root",
"client_id": "mcU5J5uGQcEQMgAZyr5JUnM3BqBJpgbgL9fLOVch"
},
"user": {
"id": 1,
"username": "root",
"first_name": "",
"last_name": ""
}
},
"created": "2018-02-23T14:39:32.618932Z",
"modified": "2018-02-23T14:39:32.643626Z",
"description": "App Token Test",
"user": 1,
"token": "*************",
"refresh_token": "*************",
"application": 1,
"expires": "2018-02-24T00:39:32.618279Z",
"scope": "read"
},
For an OAuth 2 token, the only fully editable fields are ``scope`` and ``description``. The ``application`` field is non-editable on update, and all other fields are entirely non-editable, and are auto-populated during creation, as follows:
- ``user`` field corresponds to the user the token is created for, and in this case, is also the user creating the token
- ``expires`` is generated according to AWX configuration setting ``OAUTH2_PROVIDER``
- ``token`` and ``refresh_token`` are auto-generated to be non-clashing random strings
Both application tokens and personal access tokens are shown at the ``/api/v2/tokens/`` endpoint. The ``application`` field in the personal access tokens is always **null**. This is a good way to differentiate the two types of tokens.
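For example, to list only your personal access tokens, you could filter the endpoint output on that field with ``jq``; a sketch with illustrative credentials:

::

    curl -s -u admin:password https://<awx>/api/v2/tokens/ \
      | jq '.results[] | select(.application == null) | {id, description, scope}'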
Access rules for tokens
^^^^^^^^^^^^^^^^^^^^^^^^^
Access rules for tokens are as follows:
- Users can create a token if they are able to view the related application; and are also able to create a personal token for themselves
- System administrators are able to view and manipulate every token in the system
- Organization administrators are able to view and manipulate all tokens belonging to Organization members
- System Auditors can view all tokens and applications
- Other normal users are only able to view and manipulate their own tokens
.. note::
Users can only view the token and refresh token values at the time of creation.
.. _ag_use_oauth_pat:
Using OAuth 2 Token System for Personal Access Tokens (PAT)
---------------------------------------------------------------
The easiest and most common way to obtain an OAuth 2 token is to create a personal access token at the ``/api/v2/users/<userid>/personal_tokens/`` endpoint, as shown in this example below:
::
curl -H "Content-type: application/json" -d '{"description":"Personal AWX CLI token", "application":null, "scope":"write"}' https://<USERNAME>:<PASSWORD>@<AWX_SERVER>/api/v2/users/<USER_ID>/personal_tokens/ | python -m json.tool
You could also pipe the JSON output through ``jq``, if installed.
Following is an example of using the personal token to access an API endpoint using curl:
::
curl -H "Authorization: Bearer <token>" -H "Content-Type: application/json" -d '{}' https://awx/api/v2/job_templates/5/launch/
In AWX, the OAuth 2 system is built on top of the `Django Oauth Toolkit`_, which provides dedicated endpoints for authorizing, revoking, and refreshing tokens. These endpoints can be found under the ``/api/v2/users/<USER_ID>/personal_tokens/`` endpoint, which also provides detailed examples on some typical usage of those endpoints. These special OAuth 2 endpoints only support using the ``x-www-form-urlencoded`` **Content-type**, so none of the ``api/o/*`` endpoints accept ``application/json``.
.. _`Django Oauth Toolkit`: https://django-oauth-toolkit.readthedocs.io/en/latest/
.. note::
You can also request tokens using the ``/api/o/token`` endpoint by specifying ``null`` for the application type.
Alternatively, you can :ref:`add tokens <ug_tokens_auth_create>` for users through the AWX user interface, as well as configure the expiration of an access token and its associated refresh token (if applicable).
.. image:: ../common/images/configure-awx-system-misc-sys-token-expire.png
Token scope mask over RBAC system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The scope of an OAuth 2 token is a space-separated string composed of valid scope keywords, 'read' and 'write'. These keywords are configurable and used to specify permission level of the authenticated API client. Read and write scopes provide a mask layer over the Role-Based Access Control (RBAC) permission system of AWX. Specifically, a 'write' scope gives the authenticated user the full permissions the RBAC system provides, while a 'read' scope gives the authenticated user only read permissions the RBAC system provides. Note that 'write' implies 'read' as well.
For example, if you have administrative permissions to a job template, you can view, modify, launch, and delete the job template if authenticated via session or basic authentication. In contrast, if you are authenticated using OAuth 2 token, and the related token scope is 'read', you can only view, but not manipulate or launch the job template, despite being an administrator. If the token scope is 'write' or 'read write', you can take full advantage of the job template as its administrator.
To acquire and use a token, first create an application token:
1. Make an application with ``authorization_grant_type`` set to ``password``. HTTP POST the following to the ``/api/v2/applications/`` endpoint (supplying your own organization ID):
::
{
"name": "Admin Internal Application",
"description": "For use by secure services & clients. ",
"client_type": "confidential",
"redirect_uris": "",
"authorization_grant_type": "password",
"skip_authorization": false,
"organization": <organization-id>
}
2. Make a token and POST to the ``/api/v2/tokens/`` endpoint:
::
{
"description": "My Access Token",
"application": <application-id>,
"scope": "write"
}
This returns a <token-value> that you can use to authenticate with for future requests (this will not be shown again).
3. Use the token to access a resource. The following uses curl as an example:
::
curl -H "Authorization: Bearer <token-value>" -H "Content-Type: application/json" https://<awx>/api/v2/users/
The ``-k`` flag may be needed if you have not set up a CA yet and are using SSL.
To revoke a token, you can make a DELETE on the detail page for that token, using that token's ID. For example:
::
curl -u <user>:<password> -X DELETE https://<awx>/api/v2/tokens/<pk>/
Similarly, using a token:
::
curl -H "Authorization: Bearer <token-value>" -X DELETE https://<awx>/api/v2/tokens/<pk>/
.. _ag_oauth2_token_auth_grant_types:
Application Functions
-----------------------
This section lists the OAuth 2 utility endpoints used for authorization, token refresh, and revocation. The ``/api/o/`` endpoints are not meant to be used in browsers and do not support HTTP GET. The endpoints prescribed here strictly follow RFC specifications for OAuth 2, so refer to those specifications for detailed reference. The following is an example of the typical usage of these endpoints in AWX, in particular, when creating an application using various grant types:
- Authorization Code
- Password
.. note::
You can perform any of the application functions described here using the AWX user interface. Refer to the :ref:`ug_applications_auth` section for more detail.
Application using ``authorization code`` grant type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The application ``authorization code`` grant type should be used when access tokens need to be issued directly to an external application or service.
.. note::
You can only use the ``authorization code`` type to acquire an access token when using an application. When integrating an external webapp with AWX, that webapp may need to create OAuth2 Tokens on behalf of users in that other webapp. Creating an application in AWX with the ``authorization code`` grant type is the preferred way to do this because:
- this allows an external application to obtain a token from AWX for a user, using their credentials.
- compartmentalized tokens issued for a particular application allow those tokens to be easily managed (for example, revoking all tokens associated with that application without having to revoke *all* tokens in the system)
To create an application named *AuthCodeApp* with the ``authorization-code`` grant type, perform a POST to the ``/api/v2/applications/`` endpoint:
::
{
"name": "AuthCodeApp",
"user": 1,
"client_type": "confidential",
"redirect_uris": "http://<awx>/api/v2",
"authorization_grant_type": "authorization-code",
"skip_authorization": false
}
.. _`Django-oauth-toolkit simple test application`: http://django-oauth-toolkit.herokuapp.com/consumer/
The workflow that occurs when you issue a **GET** to the ``authorize`` endpoint from the client application with the ``response_type``, ``client_id``, ``redirect_uris``, and ``scope``:
1. AWX responds with the authorization code and status to the ``redirect_uri`` specified in the application.
2. The client application then makes a **POST** to the ``api/o/token/`` endpoint on AWX with the ``code``, ``client_id``, ``client_secret``, ``grant_type``, and ``redirect_uri``.
3. AWX responds with the ``access_token``, ``token_type``, ``refresh_token``, and ``expires_in``.
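For illustration, the initial authorization request issued by the client application might look like the following sketch (the ``client_id`` and ``redirect_uri`` values are placeholders corresponding to the application created above):

::

    GET https://<awx>/api/o/authorize/?response_type=code&client_id=<client_id>&redirect_uri=http://<awx>/api/v2&scope=read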
Refer to `Django's Test Your Authorization Server`_ toolkit to test this flow.
.. _`Django's Test Your Authorization Server`: http://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_01.html#test-your-authorization-server
You may specify the number of seconds an authorization code remains valid in the **System settings** screen:
.. image:: ../common/images/configure-awx-system-misc-sys-authcode-expire.png
Requesting an access token after this duration will fail. The duration defaults to 600 seconds (10 minutes), based on the `RFC6749 <https://tools.ietf.org/html/rfc6749>`_ recommendation.
The best way to set up app integrations with AWX using the Authorization Code grant type is to whitelist the origins for those cross-site requests. More generally, you need to whitelist the service or application you are integrating with AWX, for which you want to provide access tokens. To do this, have your Administrator add this whitelist to their local AWX settings:
::
CORS_ALLOWED_ORIGIN_REGEXES = [
r"http://django-oauth-toolkit.herokuapp.com*",
r"http://www.example.com*"
]
Where ``http://django-oauth-toolkit.herokuapp.com`` and ``http://www.example.com`` are applications needing tokens with which to access AWX.
Application using ``password`` grant type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``password`` grant type or ``Resource owner password-based`` grant type is ideal for users who have native access to the web app and should be used when the client is the Resource owner. The following supposes an application, 'Default Application' with grant type ``password``:
::
{
"id": 6,
"type": "application",
...
"name": "Default Application",
"user": 1,
"client_id": "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l",
"client_secret": "fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo",
"client_type": "confidential",
"redirect_uris": "",
"authorization_grant_type": "password",
"skip_authorization": false
}
Logging in is not required for ``password`` grant type, so you can simply use curl to acquire a personal access token through the ``/api/v2/tokens/`` endpoint:
.. code-block:: text
curl --user <user>:<password> -H "Content-type: application/json" \
--data '{
"description": "Token for Nagios Monitoring app",
"application": 1,
"scope": "write"
}' \
https://<awx>/api/v2/tokens/
.. note::
The special OAuth 2 endpoints only support using the ``x-www-form-urlencoded`` **Content-type**, so as a result, none of the ``api/o/*`` endpoints accept ``application/json``.
Upon success, a response displays in JSON format containing the access token, refresh token and other information:
::
HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Tue, 05 Dec 2017 16:48:09 GMT
Content-Type: application/json
Content-Length: 163
Connection: keep-alive
Content-Language: en
Vary: Accept-Language, Cookie
Pragma: no-cache
Cache-Control: no-store
Strict-Transport-Security: max-age=15768000
{"access_token": "9epHOqHhnXUcgYK8QanOmUQPSgX92g", "token_type": "Bearer", "expires_in": 315360000000, "refresh_token": "jMRX6QvzOTf046KHee3TU5mT3nyXsz", "scope": "read"}
Application Token Functions
------------------------------
This section describes the refresh and revoke functions associated with tokens. Everything that follows (Refreshing and revoking tokens at the ``/api/o/`` endpoints) can currently only be done with application tokens.
Refresh an existing access token
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The following example shows an existing access token with a refresh token provided:
::
{
"id": 35,
"type": "access_token",
...
"user": 1,
"token": "omMFLk7UKpB36WN2Qma9H3gbwEBSOc",
"refresh_token": "AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z",
"application": 6,
"expires": "2017-12-06T03:46:17.087022Z",
"scope": "read write"
}
The ``/api/o/token/`` endpoint is used for refreshing the access token:
::
curl \
-d "grant_type=refresh_token&refresh_token=AL0NK9TTpv0qp54dGbC4VUZtsZ9r8z" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://<awx>/api/o/token/ -i
In the above POST request, the ``refresh_token`` value comes from the ``refresh_token`` field of the access token shown above. The authentication information is in the format ``<client_id>:<client_secret>``, where ``client_id`` and ``client_secret`` are the corresponding fields of the application associated with the access token.
.. note::
The special OAuth 2 endpoints only support using the ``x-www-form-urlencoded`` **Content-type**, so as a result, none of the ``api/o/*`` endpoints accept ``application/json``.
Upon success, a response displays in JSON format containing the new (refreshed) access token with the same scope information as the previous one:
::
HTTP/1.1 200 OK
Server: nginx/1.12.2
Date: Tue, 05 Dec 2017 17:54:06 GMT
Content-Type: application/json
Content-Length: 169
Connection: keep-alive
Content-Language: en
Vary: Accept-Language, Cookie
Pragma: no-cache
Cache-Control: no-store
Strict-Transport-Security: max-age=15768000
{"access_token": "NDInWxGJI4iZgqpsreujjbvzCfJqgR", "token_type": "Bearer", "expires_in": 315360000000, "refresh_token": "DqOrmz8bx3srlHkZNKmDpqA86bnQkT", "scope": "read write"}
Essentially, the refresh operation replaces the existing token by deleting the original and then immediately creating a new token with the same scope and related application as the original one. Verify that the new token is present and the old one is deleted at the ``/api/v2/tokens/`` endpoint.
.. _ag_oauth2_token_revoke:
Revoke an access token
^^^^^^^^^^^^^^^^^^^^^^^^^
Similarly, you can revoke an access token by using the ``/api/o/revoke_token/`` endpoint.
Revoking an access token by this method is the same as deleting the token resource object, but it allows you to delete a token by providing its token value, and the associated ``client_id`` (and ``client_secret`` if the application is ``confidential``). For example:
::
curl -d "token=rQONsve372fQwuc2pn76k3IHDCYpi7" \
-u "gwSPoasWSdNkMDtBN3Hu2WYQpPWCO9SwUEsKK22l:fI6ZpfocHYBGfm1tP92r0yIgCyfRdDQt0Tos9L8a4fNsJjQQMwp9569eIaUBsaVDgt2eiwOGe0bg5m5vCSstClZmtdy359RVx2rQK5YlIWyPlrolpt2LEpVeKXWaiybo" \
http://<awx>/api/o/revoke_token/ -i
.. note::
The special OAuth 2 endpoints only support using the ``x-www-form-urlencoded`` **Content-type**, so as a result, none of the ``api/o/*`` endpoints accept ``application/json``.
Alternatively, you can use the ``manage`` utility, :ref:`ag_manage_utility_revoke_tokens`, to revoke tokens as described in the :ref:`ag_token_utility` section.
This setting can be configured at the system-level in the AWX User Interface:
.. image:: ../common/images/configure-awx-system-oauth2-tokens-toggle.png
Upon success, a response of ``200 OK`` displays. Verify the deletion by checking whether the token is present in the ``/api/v2/tokens/`` endpoint.
View File
@ -1,376 +0,0 @@
.. _ag_performance:
Improving AWX Performance
==================================
.. index::
pair: performance; AWX
This section provides guidelines for tuning AWX for performance and scalability.
.. _ag_performance_improvements:
Performance improvements
-------------------------
.. index::
pair: improvements; process
AWX includes multiple improvements that support large-scale deployments. Major gains have been made to support workloads with many more concurrent jobs. In the past, there were problems with excessive database connections, with job scheduling when there were thousands of pending and running jobs, and with successfully starting jobs when operating near 100% of control node capacity.
Additionally, changes have been made by default to take advantage of CPU capacity available on larger control nodes. This means customers who provision larger control nodes and want to run thousands of concurrent jobs have multiple improvements to look forward to in this release:
.. contents::
:local:
Vertical scaling improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: improvements; scaling
Control nodes are responsible for processing the output of jobs and writing it to the database. The process that does this is called the callback receiver. The callback receiver has a configurable number of workers, controlled by the setting ``JOB_EVENT_WORKERS``. In the past, the default for this setting was always 4, regardless of the CPU or memory capacity of the node. Now, on traditional virtual machines, ``JOB_EVENT_WORKERS`` is set to the number of CPUs if that is greater than 4. This means administrators who provision larger control nodes will see a greater ability for those nodes to keep up with the job output created by jobs, without having to manually adjust ``JOB_EVENT_WORKERS``.
Job scheduling improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: improvements; scheduling
When jobs are created via a schedule, a workflow, the UI, or the API, they are first created in the Pending state. To determine when and where to run a job, a background task called the Task Manager collects all pending and running jobs and determines where capacity is available to run each job. In previous versions of AWX, scheduling slowed as the number of pending and running jobs increased, and the Task Manager was vulnerable to timing out without having made any progress. This scenario exhibits symptoms of thousands of pending jobs and available capacity, but no jobs starting.
Optimizations in the job scheduler have made scheduling faster, and safeguards have been added to better ensure the scheduler commits its progress even if it is nearing its timeout. Additionally, work that previously occurred in the Task Manager and blocked its progress has been decoupled into separate, non-blocking work units executed by the Dispatcher.
Database resource usage improvements
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: improvements; database usage
The use of database connections by running jobs has decreased dramatically, which removes a previous limit on the number of concurrent running jobs and reduces memory pressure on PostgreSQL.
Each job in AWX has a worker process, called the dispatch worker, on the control node that started the job. The dispatch worker submits the work to the execution node via Receptor, consumes the output of the job, and puts it in the Redis queue for the callback receiver to serialize and write to the database as job events.
The dispatch worker is also responsible for noticing if the job has been canceled by the user in order to then cancel the receptor work unit. In the past, the worker maintained multiple open database connections per job. This caused two main problems:
- The application would begin to experience errors attempting to open new database connections (for API calls or other essential processes) when there were more than 350 jobs running concurrently, unless users increased the maximum number of connections.
- Even idle connections consume memory. For example, in experiments done by AWS, idle connections to PostgreSQL were shown to consume at least 1.5 MB of memory. So if an AWX administrator wanted to support running 2,000 concurrent jobs, this could result in 9GB of memory consumed on PostgreSQL from just idle connections alone.
The dispatch process closes database connections once the job has started. This means now the number of concurrent running jobs is no longer limited by the maximum number of database connections, and the risk of over-consuming memory on PostgreSQL is greatly reduced.
Stability improvements
~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: improvements; stability
Notable stability improvements in this release:
- **Improvements to job reaping** - Fixed the root cause of jobs in waiting status getting reaped before they ever started, which often occurred when running near 100% capacity on control and hybrid nodes.
- **Improvements in stability for Operator-based deployments** - Resolved issues with multiple control pod deployments erroneously marking each other as offline. Now scaling operator-based deployments horizontally is more stable.
Metrics enhancements
~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: improvements; metrics
Metrics added in this release to track:
- **awx_database_connections_total** - Tracks the current number of open database connections. When included in monitoring, this can help identify when errors have occurred due to a lack of available database connections.
- **callback_receiver_event_processing_avg_seconds** - Proxy for “how far behind the callback receiver workers are in processing output”. If this number stays large, consider horizontally scaling the control plane and reducing the ``capacity_adjustment`` value on the node.
Capacity Planning
------------------
.. index::
pair: planning; capacity
Example capacity planning exercise
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: exercise; capacity planning
Determining the number and size of instances to support the desired workload must take into account the following:
- Managed hosts
- Tasks/hour per host
- Maximum number of concurrent jobs you want to support
- Maximum number of forks set on jobs
- Node size you prefer to deploy (CPU/Memory/Disk)
With this data, you can calculate the number of tasks per hour that the cluster needs control capacity to process, as well as the number of “forks”, or capacity, needed to run your peak load, which the cluster needs execution capacity for.
For example, to plan for a cluster with:
- 300 managed hosts
- 1,000 tasks/hour per host, or 16 tasks per minute per host
- 10 concurrent jobs
- Forks set to 5 on playbooks
- Average event size 1 Mb
- Preferred node size of 4 CPUs and 16 GB RAM with disks rated at 3000 IOPs
Known factors:
- To run the 10 concurrent jobs, you need at least (10 jobs * 5 forks) + (10 jobs * 1 base task impact of a job) = 60 execution capacity
- To control 10 concurrent jobs, you need at least 10 control capacity.
- Running 1000 tasks * 300 managed hosts/hour will produce at least 300,000 events/hour. You would need to run the job to see exactly how many events it produces, because this is dependent on the specific task and verbosity. For example, a debug task printing “Hello World” produces 6 job events with a verbosity of 1 on one host. With a verbosity of 3, it produces 34 job events on one host. Therefore, estimate that each task produces at least 6 events. That means closer to 3,000,000 events/hour, or approximately 833 events/second.
To determine how many execution and control nodes you will need, reference the experiment results in the following table, which show the observed event processing rate of a single control node with 5 execution nodes of equal size (API Capacity column). The default “forks” setting of job templates is 5, so with this default, the maximum number of jobs a control node can dispatch to execution nodes will make 5 execution nodes of equal CPU/RAM use 100% of their capacity, arriving at the previously mentioned 1:5 ratio of control to execution capacity.
.. list-table::
:widths: 15 10 5 5 10 10 10
:header-rows: 1
* - Node
- API Capacity
- Default Execution Capacity
- Default Control Capacity
- Mean Event Processing Rate at 100% capacity usage
- Mean Events Processing Rate at 50% capacity usage
- Mean Events Processing Rate at 40% capacity usage
* - 4 CPU @ 2.5Ghz, 16 GB RAM Control Node, max 3000 IOPs disk
- 100 - 300 requests/second
- n/a
- 137 jobs
- 1100/second
- 1400/second
- 1630/second
* - 4 CPU @ 2.5Ghz, 16 GB RAM Execution Node, max 3000 IOPs disk
- n/a
- 137
- 0
- n/a
- n/a
- n/a
* - 4 CPU @ 2.5Ghz, 16 GB RAM DB Node, max 3000 IOPs disk
- n/a
- n/a
- n/a
- n/a
- n/a
- n/a
This table shows that controlling jobs competes with job event processing on the control node. Therefore, over-provisioning control capacity can have a positive impact on reducing processing times. When processing times are high, users can experience a delay between when the job runs and when they can view the output in the API or UI.
For the example workload on 300 managed hosts, executing 1000 tasks/hour per host, 10 concurrent jobs with forks set to 5 on playbooks, and an average event size 1 Mb, do the following:
- Deploy 1 execution node, 1 control node, 1 DB node of 4 CPU @ 2.5Ghz, 16 GB RAM with disk having ~3000 IOPs
- Keep default fork setting of 5 on job templates
- Use the capacity adjustment feature on the control node to reduce the capacity down to 16 (the lowest value) to reserve more of the control node's capacity for processing events
.. image:: ../common/images/perf-capacity-adj-instances.png
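The capacity adjustment shown above can also be set through the API. The following is a minimal sketch that assumes an instance ID of ``1`` and valid administrator credentials; ``capacity_adjustment`` accepts a value between 0 (the lower capacity bound) and 1 (the upper bound):

::

    curl -f -k -H 'Content-Type: application/json' -XPATCH \
        -d '{"capacity_adjustment": 0}' \
        --user admin:awxsecret \
        https://<awx>/api/v2/instances/1/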
Factors influencing node size choice
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: factors; node size
single: node size choice
The previous exercise was done given that the cluster administrator already had a preferred node size, which happened to be the minimum recommended node size for AWX. Increasing the RAM and CPU on nodes increases the calculated capacity of the instances. For each instance type, there are different considerations as to why you may want to vertically scale the node.
Control nodes
^^^^^^^^^^^^^^
Vertically scaling a control node increases the number of jobs it can perform control tasks for, which requires both more CPU and memory. In general, scaling CPU alongside memory in the same proportion is recommended (e.g. 1 CPU: 4GB RAM). Even in the case where memory consumption is observed to be high, increasing the CPU of an instance can often relieve pressure, as most memory consumption of control nodes is usually from unprocessed events.
As mentioned in the :ref:`ag_performance_improvements` section, increasing the number of CPU can also increase the job event processing rate of a control node. At this time, vertically scaling a control node does not increase the number of workers that handle web requests, so horizontally scaling is more effective, if the desire is to increase the API availability.
Execution Nodes
^^^^^^^^^^^^^^^^
Vertically scaling an execution node provides more forks for job execution. As mentioned in the example, a host with 16 GB of memory will, by default, be assigned the capacity to run 137 “forks”, which at the default setting of 5 forks/job means it can run around 22 jobs concurrently. In general, scaling CPU alongside memory in the same proportion is recommended. Like control and hybrid nodes, there is a “capacity adjustment” on each execution instance that can be used to align actual utilization with the estimation of capacity consumption AWX makes. By default, all nodes are set to the top of the range of capacity AWX estimates the node to have. If actual monitoring data reveals the node to be over-utilized, decreasing the capacity adjustment can help bring this in line with actual usage.
Vertically scaling execution will do exactly what the user expects and increase the number of concurrent jobs an instance can run. One downside is that concurrently running jobs on the same execution node, while isolated from each other in the sense that they cannot access each other's data, can impact each other's performance if a particular job is very resource-consumptive and overwhelms the node to the extent that it degrades performance of the entire node. Horizontally scaling the execution plane (e.g., deploying more execution nodes) can provide some additional isolation of workloads, as well as allow administrators to assign different instances to different instance groups, which can then be assigned to Organizations, Inventories, or Job Templates. This can enable something like an instance group that can only be used for running jobs against a “production” Inventory; this way, jobs for development do not end up consuming capacity and causing higher-priority jobs to queue while waiting for capacity.
Hop Nodes
^^^^^^^^^^
Hop nodes have very low memory and CPU utilization, so there is no significant motivation for vertically scaling them. A hop node that serves as the sole connection of many execution nodes to the control plane should be monitored for network bandwidth utilization; if this is seen to be saturated, changes to the network may be worth considering.
Hybrid nodes
^^^^^^^^^^^^^
Hybrid nodes perform both execution and control tasks, so vertically scaling these nodes increases both the number of jobs they can run and, as of 4.3.0, the number of events they can process.
Capacity planning for Operator based Deployments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: Operator; deployment
For Operator based deployments, refer to `Ansible AWX Operator documentation <https://ansible.readthedocs.io/projects/awx-operator>`_.
Monitoring AWX
----------------------
.. index::
pair: monitoring; AWX
It is a best practice to monitor your AWX hosts both at the system level and at the application level. System-level monitoring includes information about disk I/O, RAM utilization, CPU utilization, and network traffic.

For application-level monitoring, AWX provides Prometheus-style metrics on the API endpoint ``/api/v2/metrics``. This can be used to monitor aggregate data about job status as well as subsystem performance, such as job output processing or job scheduling.
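For example, the metrics endpoint can be scraped with ``curl`` using valid credentials (placeholders below). AWX typically serves the Prometheus text exposition format to non-browser clients, though content negotiation can vary by version:

::

    curl -f -k --user admin:awxsecret https://<awx>/api/v2/metrics/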
Monitoring the actual CPU and memory utilization of your hosts is important because capacity management for instances does not dynamically introspect into the actual resource usage of hosts. The resource impact of automation will vary based on what exactly the playbooks are doing. For example, many cloud or networking modules do most of the actual processing on the node running the Ansible playbook (the execution node), which can have a significantly different impact on AWX than running ``yum update`` across many hosts, where the execution node spends much of the time during this task waiting on results.
If CPU or memory usage is very high, consider lowering the capacity adjustment on affected instances in AWX. This will limit how many jobs are run on or controlled by this instance.
Using this in combination with application-level metrics can help identify what was happening in the application when and if any service degradation occurred. Having information about AWX's performance over time can be very useful in diagnosing problems or doing capacity planning for future growth.
Database Settings
------------------
.. index::
pair: settings; database
The following are configurable settings in the database that may help improve performance:
- **Autovacuuming**. Setting this PostgreSQL setting to true is a good practice. However, autovacuuming will not occur if there is never any idle time on the database. If it is observed that autovacuuming is not sufficiently cleaning up space on the database disk, then scheduling specific vacuum tasks during specific maintenance windows can be a solution.
- **GUC** parameters. Following are certain GUC (Grand Unified Configuration) parameters recommended for memory management in PostgreSQL, which is helpful for improving the performance of the database server. Recommended settings for each parameter are also provided.
- ``shared_buffers`` (integer)
- ``work_mem`` (integer)
- ``maintenance_work_mem`` (integer)
All of these parameters reside in the ``postgresql.conf`` file (inside the ``$PGDATA`` directory), which manages the configuration of the database server.
The **shared_buffers** parameter determines how much memory is dedicated to the server for caching data. Set in ``postgresql.conf``, the default value for this parameter is::
    #shared_buffers = 128MB
The value should be set at 15%-25% of the machine's total RAM. For example: if your machine's RAM size is 32 GB, then the recommended value for ``shared_buffers`` is 8 GB. Please note that the database server needs to be restarted after this change.
The **work_mem** parameter specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. Sort operations are used for order by, distinct, and merge join operations. Hash tables are used in hash joins and hash-based aggregation. Set in ``postgresql.conf``, the default value for this parameter is::
#work_mem = 4MB
Setting the correct value of ``work_mem`` parameter can result in less disk-swapping, and therefore far quicker queries.
We can use the formula below to calculate the optimal ``work_mem`` value for the database server::
Total RAM * 0.25 / max_connections
The ``max_connections`` parameter is one of the GUC parameters that specifies the maximum number of concurrent connections to the database server. Please note that setting a large ``work_mem`` can cause issues like the PostgreSQL server going out of memory (OOM) if there are too many open connections to the database.
The **maintenance_work_mem** parameter basically provides the maximum amount of memory to be used by maintenance operations like vacuum, create index, and alter table add foreign key operations. Set in ``postgresql.conf``, the default value for this parameter is::
#maintenance_work_mem = 64MB
It is recommended to set this value higher than ``work_mem``; this can improve performance for vacuuming. In general, it should be calculated as::
Total RAM * 0.05
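Putting these recommendations together, a ``postgresql.conf`` fragment for a dedicated 32 GB database host might look like the following sketch. The values are examples only, assume roughly 100 connections, and require a restart of the database server to take effect:

::

    # Example values for a dedicated 32 GB database host
    shared_buffers = 8GB              # ~25% of total RAM
    work_mem = 80MB                   # 32 GB * 0.25 / 100 connections
    maintenance_work_mem = 1600MB     # ~ Total RAM * 0.05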
Max Connections
~~~~~~~~~~~~~~~~~~~~~
For a realistic method of determining a value of ``max_connections``, a ballpark formula for AWX is outlined here.
Database connections will scale with the number of control and hybrid nodes.
Per-node connection needs are listed here.
* Callback Receiver workers: 4 connections per node or the number of CPUs per node, whichever is larger
* Dispatcher Workers: instance (forks) capacity plus 7
* uWSGI workers: 16 connections per node
* Listeners and auxiliary services: 4 connections per node
* Reserve for installer and other actions: 5 connections in total
Each of these points represents the maximum expected connection use in high-load circumstances.
To apply this, consider a cluster with 3 hybrid nodes, each with 8 CPUs and 16 GB of RAM.
The capacity formula determines a capacity of 132 forks per node based on the available memory::

    (3 nodes) x (
      (8 CPUs / node) x (1 connection / CPU) +
      (132 forks / node) x (1 connection / fork) + (7 connections / node) +
      (16 connections / node) +
      (4 connections / node)
    ) + (5 connections)
Adding up all the components comes out to 506 for this example cluster.
Practically, this means that ``max_connections`` should be set to something higher than this.
Additional connections should be added to account for other platform components.
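For example, for the cluster above you might round the calculated 506 connections up and leave headroom for other components. The value below is purely illustrative:

::

    # postgresql.conf
    max_connections = 1024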
This calculation is most sensitive to the number of forks per node. Database connections are briefly opened at the start of and end of jobs. Environments where bursts of many jobs start at once will be most likely to reach the theoretical max number of open database connections.
The maximum number of jobs that would be started concurrently can be adjusted by modifying the effective capacity of the instances. This can be done with the ``SYSTEM_TASK_ABS_MEM`` setting, the capacity adjustment on instances, or with the instance group max jobs or max forks settings.
AWX Settings
~~~~~~~~~~~~~~~~~~~~~
.. index::
pair: settings; AWX
pair: settings; performance
Many AWX settings are available to set via AWX UI or API. There are additional settings that are only available as file-based settings. Refer to product documentation about where each of these settings can be set. This section will focus on why administrators may want to adjust these values.
Live events in the AWX UI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: settings; live events
Events are broadcast to all nodes so that the events can be served over websocket to any client that connects to a control node's web service. This task is expensive, and it becomes more expensive as both the number of events the cluster produces and the number of control nodes increase, because all events are broadcast to all nodes regardless of how many clients are subscribed to particular jobs.
There are a few settings that allow you to influence behavior of how job events are displayed in the UI and served over websockets.
For large clusters with large job event loads, an easy way to avoid the additional overhead is to disable live streaming of events (events are then only loaded on a hard refresh of a job's output detail page). This is possible by setting ``UI_LIVE_UPDATES_ENABLED`` to False or setting the **Enable Activity Stream** toggle to **Off** from the AWX UI Miscellaneous System Settings window.
.. image:: ../common/images/perf-enable-activity-stream.png
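The equivalent file-based setting is shown below as a sketch; the same behavior can be controlled from the settings UI or API as described above:

::

    # Disable live streaming of job events to the UI
    UI_LIVE_UPDATES_ENABLED = False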
If disabling live streaming of events is not possible, then for very verbose jobs with many events, administrators can consider reducing the number of events shown per second, or the number of events displayed before truncating or hiding events in the UI. The following settings all address the rate or size of events.
::
# Returned in the header on event api lists as a recommendation to the UI
# on how many events to display before truncating/hiding
MAX_UI_JOB_EVENTS = 4000
# The maximum size of the ansible callback event's "res" data structure,
# (the "res" is the full "result" of the module)
# beyond this limit and the value will be removed (e.g. truncated)
MAX_EVENT_RES_DATA = 700000
# Note: These settings may be overridden by database settings.
EVENT_STDOUT_MAX_BYTES_DISPLAY = 1024
MAX_WEBSOCKET_EVENT_RATE = 30
# The amount of time before a stdout file is expired and removed locally
# Note that this can be recreated if the stdout is downloaded
LOCAL_STDOUT_EXPIRE_TIME = 2592000
Job Event Processing (Callback Receiver) Settings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: settings; job events
pair: settings; callback receiver
The callback receiver is a process with multiple workers. The number of workers spawned is determined by the setting ``JOB_EVENT_WORKERS``. These workers pull events off of a queue in Redis, where unprocessed events are placed by each job's dispatch worker as results become available. As mentioned in the :ref:`ag_performance_improvements` section, the number of workers now increases based on the number of CPUs detected on the control instance. Previously, this setting was hardcoded to 4 workers, and administrators had to set this file-based setting via a custom settings file on each control node.
This setting is still available for administrators to modify, with the knowledge that values above 1 worker per CPU or below 4 workers are not recommended. Greater values will have more workers available to clear the Redis queue as events stream to AWX, but may compete with other processes for CPU seconds. Lower values of workers may compete less for CPU on a node that also has had its number of uWSGI workers increased significantly, to prioritize serving web requests.
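As a sketch, an administrator who wants to pin the callback receiver to 8 workers on an 8-CPU control node could add the following to a custom file-based settings file (the exact path depends on how AWX is deployed, so treat it as an example):

::

    # e.g. a custom settings file such as /etc/awx/conf.d/custom.py
    JOB_EVENT_WORKERS = 8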
Task Manager (Job Scheduling) Settings
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. index::
pair: settings; task manager
pair: settings; job scheduling
The task manager is a periodic task that collects tasks that need to be scheduled and determines which instances have capacity and are eligible for running them. Its job is to find and assign the control and execution instances, update the job's status to waiting, and send the message to the control node via ``pg_notify`` for the dispatcher to pick up the task and start running it.
As mentioned in the :ref:`ag_performance_improvements` section, a number of optimizations and refactors of this process were implemented in version 4.3. One such refactor fixed a defect where, when the task manager reached its timeout, it was terminated in such a way that it made no progress. Multiple changes were implemented to fix this, so that as the task manager approaches its timeout, it makes an effort to exit and commit any progress made on that run. These issues generally arise when there are thousands of pending jobs, so they may not be applicable to your use case.
The first “short-circuit” available to limit how much work the task manager attempts to do in one run is ``START_TASK_LIMIT``. The default is 100 jobs, which is a safe default. If there are remaining jobs to schedule, a new run of the task manager will be scheduled to run immediately after the current run. Users who are willing to risk potentially longer individual runs of the task manager in order to start more jobs in an individual run may consider increasing ``START_TASK_LIMIT``. The Prometheus metric ``task_manager__schedule_seconds``, available in ``/api/v2/metrics``, observes how long individual runs of the task manager take.
As a safeguard against excessively long runs of the task manager, there is a timeout determined by the setting ``TASK_MANAGER_TIMEOUT``. This is when the task manager will begin to exit any loops and attempt to commit any progress it made. The task is not actually killed until ``TASK_MANAGER_TIMEOUT`` + ``TASK_MANAGER_TIMEOUT_GRACE_PERIOD`` seconds have passed.
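The following sketch shows how these settings fit together in a file-based settings file. The values shown are assumptions for illustration; verify the actual defaults for your version before changing them:

::

    # Example file-based settings (illustrative values)
    START_TASK_LIMIT = 100                    # jobs the task manager starts per run
    TASK_MANAGER_TIMEOUT = 300                # seconds before the task manager begins to exit
    TASK_MANAGER_TIMEOUT_GRACE_PERIOD = 60    # additional seconds before the task is killed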
Additional Resources
---------------------
For workloads with high levels of API interaction, best practices include:
- Use a load balancer
- Limit the rate
- Set max connections per node to 100
- Use dynamic inventory sources instead of individually creating inventory hosts via the API
- Use webhook notifications instead of polling for job status
In addition to the above, observations from the field regarding authentication methods show that for automation clients that make many requests in rapid succession, using tokens is a best practice, because depending on the type of user, there may be additional overhead when using basic authentication. Refer to :ref:`ag_oauth2_token_auth` for detail on how to generate and use tokens.
View File
@ -1,65 +0,0 @@
.. _ag_inv_import:
Inventory File Importing
=========================
.. index::
single: inventory file importing
single: inventory scripts; custom
AWX allows you to choose an inventory file from source control, rather than creating one from scratch. This function is the same as for custom inventory scripts, except that the contents are obtained from source control instead of being edited in a browser. This means the files are non-editable, and as inventories are updated at the source, the inventories within the projects are also updated accordingly, including the ``group_vars`` and ``host_vars`` files or directories associated with them. SCM inventory sources can consume both inventory files and inventory scripts, overlapping with custom inventory scripts in that both can run scripts.
Any imported hosts will have a description of "imported" by default. This can be overridden by setting the ``_awx_description`` variable on a given host. For example, if importing from a sourced .ini file, you could add the following host variables:
::
[main]
127.0.0.1 _awx_description="my host 1"
127.0.0.2 _awx_description="my host 2"
Similarly, group descriptions also default to "imported", but can be overridden by the ``_awx_description`` as well.
In order to use old inventory scripts in source control, see :ref:`ug_customscripts` in the |atu| for detail.
Custom Dynamic Inventory Scripts
---------------------------------
A custom dynamic inventory script stored in version control can be imported and run. This makes it much easier to make changes to an inventory script — rather than having to copy and paste one into AWX, it is pulled directly from source control and then executed. The script must be written to handle any credentials needed for doing its work, and you are responsible for installing any Python libraries needed by the script (which is the same requirement as for custom dynamic inventory scripts). This applies to both user-defined inventory source scripts and SCM sources, as both are exposed to Ansible *virtualenv* requirements related to playbooks.
You can specify environment variables when you edit the SCM inventory source itself. For some scripts, this will be sufficient, however, this is not a secure way to store secret information that gives access to cloud providers or inventory.
The better way is to create a new credential type for the inventory script you are going to use. The credential type will need to specify all the necessary types of inputs. Then, when you create a credential of this type, the secrets will be stored in an encrypted form. If you apply that credential to the inventory source, the script will have access to those inputs like environment variables or files.
For more detail, refer to :ref:`Credential types <ug_credential_types>`.
SCM Inventory Source Fields
-----------------------------
The source fields used are:
- ``source_project``: project to use
- ``source_path``: relative path inside the project indicating a directory or a file. If left blank, "" is still a relative path indicating the root directory of the project
- ``source_vars``: if set on a "file" type inventory source then they will be passed to the environment vars when running
An update of the project automatically triggers an inventory update where it is used. An update of the project is scheduled immediately after creation of the inventory source. Neither inventory nor project updates are blocked while a related job is running. In cases where you have a big project (around 10 GB), disk space on ``/tmp`` may be an issue.
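As a minimal, hypothetical sketch, an inventory source using these fields could be created through the API as follows; the inventory ID, project ID, path, and credentials are placeholders:

::

    curl -f -k -H 'Content-Type: application/json' -XPOST \
        --user admin:awxsecret \
        -d '{"name": "SCM inventory source", "inventory": 1, "source": "scm",
             "source_project": 5, "source_path": "inventories/production"}' \
        https://<awx>/api/v2/inventory_sources/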
You can specify a location manually in the AWX User Interface from the Create Inventory Source page. Refer to the :ref:`ug_inventories` section of the |atu| for instructions on creating an inventory source.
This listing should be refreshed to the latest SCM information on a project update. If no inventory sources use a project as an SCM inventory source, then the inventory listing may not be refreshed on update.
For inventories with SCM sources, the Job Details page for inventory updates show a status indicator for the project update as well as the name of the project. The status indicator links to the project update job. The project name links to the project.
.. image:: ../common/images/jobs-details-scm-sourced-inventories.png
An inventory update can be performed while a related job is running.
Supported File Syntax
^^^^^^^^^^^^^^^^^^^^^^
AWX uses the ``ansible-inventory`` command from Ansible to process inventory files, and supports any valid inventory syntax that ``ansible-inventory`` supports.
View File
@ -1,130 +0,0 @@
.. _ag_secret_handling:
Secret handling and connection security
=======================================
This document describes how AWX handles secrets and connections in a secure fashion.
Secret Handling
---------------
AWX manages three sets of secrets:
- user passwords for local AWX users
- secrets for AWX operational use (database password, message
bus password, etc.)
- secrets for automation use (SSH keys, cloud credentials, external
password vault credentials, etc.)
User passwords for local users
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AWX hashes local AWX user passwords with the PBKDF2 algorithm using a SHA256 hash.
Secret handling for operational use
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. index::
single: keys
pair: secret key; handling
pair: secret key; regenerate
AWX contains the following secrets used operationally:
- ``/etc/awx/SECRET_KEY``
- A secret key used for encrypting automation secrets in the
database (see below). If the ``SECRET_KEY`` changes or is unknown,
no encrypted fields in the database will be accessible.
- ``/etc/awx/awx.{cert,key}``
- SSL certificate and key for the AWX web service. A
self-signed cert/key is installed by default; the customer can
provide a locally appropriate certificate and key.
- Database password in ``/etc/awx/conf.d/postgres.py`` and message bus
password in ``/etc/awx/conf.d/channels.py``
- Passwords for connecting to AWX component services
These secrets are all stored unencrypted on the AWX server, as they all need to be read by the AWX service at startup in an automated fashion. All secrets are protected by Unix permissions, and restricted to root and the AWX service user ``awx``. If hiding these secrets is required, the files that these secrets are read from are interpreted Python files; they can be adjusted to retrieve these secrets via some other mechanism any time a service restarts.
.. note::
If the secrets system is down, AWX will be unable to get the information and may fail in a way that would be recoverable once the service is restored. Using some redundancy on that system is highly recommended.
If, for any reason you believe the ``SECRET_KEY`` AWX generated for you has been compromised and needs to be regenerated, you can run a tool from the installer that behaves much like AWX backup and restore tool.
To generate a new secret key, run ``setup.sh -k`` using the inventory from your install.
A backup copy of the prior key is saved in ``/etc/awx/``.
Secret handling for automation use
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
AWX stores a variety of secrets in the database that are
either used for automation or are a result of automation. These secrets
include:
- all secret fields of all credential types (passwords, secret keys,
authentication tokens, secret cloud credentials)
- secret tokens and passwords for external services defined in AWX settings
- “password” type survey fields entries
To encrypt secret fields, AWX uses AES in CBC mode with a 256-bit key
for encryption, PKCS7 padding, and HMAC using SHA256 for authentication.
The encryption/decryption process derives the AES-256 bit encryption key
from the ``SECRET_KEY`` (described above), the field name of the model field
and the database assigned auto-incremented record ID. Thus, if any
attribute used in the key generation process changes, AWX fails to
correctly decrypt the secret. AWX is designed such that the
``SECRET_KEY`` is never readable in playbooks AWX launches, that
these secrets are never readable by AWX users, and no secret field values
are ever made available via the AWX REST API. If a secret value is
used in a playbook, we recommend using ``no_log`` on the task so that
it is not accidentally logged.
Connection Security
-------------------
Internal Services
~~~~~~~~~~~~~~~~~
AWX connects to the following services as part of internal
operation:
- PostgreSQL database
- A Redis key/value store
The connection to redis is over a local unix socket, restricted to the awx service user.
The connection to the PostgreSQL database is done via password authentication over TCP, either via localhost or remotely (external
database). This connection can use PostgreSQL's built-in support for SSL/TLS, as natively configured by the installer.
SSL/TLS protocols are configured by the default OpenSSL configuration.
External Access
~~~~~~~~~~~~~~~
AWX is accessed via standard HTTP/HTTPS on standard ports, provided by nginx. A self-signed cert/key is installed by default; the
customer can provide a locally appropriate certificate and key. SSL/TLS algorithm support is configured in the ``/etc/nginx/nginx.conf`` file. An “intermediate” profile is used by default, and can be configured. Changes must be reapplied on each update.
Managed Nodes
~~~~~~~~~~~~~
AWX also connects to managed machines and services as part of automation. All connections to managed machines are done via standard
secure mechanisms such as SSH, WinRM, and SSL/TLS; each of these inherits configuration from the system configuration for the feature in question (such as the system OpenSSL configuration).
View File
@ -1,112 +0,0 @@
.. _ag_security_best_practices:
Security Best Practices
=========================
AWX is deployed in a secure fashion for use in automating typical environments. However, managing certain operating system environments, automation, and automation platforms may require some additional best practices to ensure security. This document describes best practices for automation in a secure manner.
Understand the architecture of Ansible and AWX
----------------------------------------------------------
Ansible and AWX comprise a general purpose, declarative, automation platform. That means that once an Ansible playbook is launched (via AWX, or directly on the command line), the playbook, inventory, and credentials provided to Ansible are considered to be the source of truth. If policies are desired around external verification of specific playbook content, job definition, or inventory contents, these processes must be undertaken before the automation is launched (whether via the AWX web UI, or the AWX API).
These can take many forms. The use of source control, branching, and mandatory code review is best practice for Ansible automation. There are many tools that can help create process flow around using source control in this manner.
At a higher level, many tools exist that allow for creation of approvals and policy-based actions around arbitrary workflows, including automation; these tools can then use Ansible via AWX's API to perform automation.
We recommend all customers of AWX select a secure default administrator password at time of installation. See :ref:`tips_change_password` for more information.
AWX exposes services on certain well-known ports, such as port 80 for HTTP traffic and port 443 for HTTPS traffic. We recommend that you do not expose AWX on the open internet, which significantly reduces the threat surface of your installation.
Granting access
-----------------
Granting access to certain parts of the system exposes security risks. Apply the following practices to help secure access:
.. contents::
:local:
Minimize administrative accounts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Minimizing access to system administrative accounts is crucial for maintaining a secure system. A system administrator/root user can access, edit, and disrupt any system application. Keep the number of people and accounts with root access as small as possible. Do not give `sudo` access to `root` or `awx` (the awx user) to untrusted users. Be aware that when restricting administrative access via mechanisms like `sudo`, restricting to a certain set of commands may still give a wide range of access. Any command that allows for execution of a shell or arbitrary shell commands, or any command that can change files on the system, is fundamentally equivalent to full root access.
In an AWX context, any AWX system administrator or superuser account can edit, change, and update any inventory or automation definition in AWX. Restrict this to the minimum set of users possible for low-level AWX configuration and disaster recovery only.
Minimize local system access
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AWX, when used with best practices, should not require local user access except for administrative purposes. Non-administrator users should not have access to the AWX system.
Remove access to credentials from users
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If an automation credential is only stored in AWX, it can be further secured. Services such as OpenSSH can be configured to only allow credentials on connections from specific addresses. Credentials used by automation can be different than credentials used by system administrators for disaster-recovery or other ad-hoc management, allowing for easier auditing.
Enforce separation of duties
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Different pieces of automation may need to access a system at different levels. For example, you may have low-level system automation that applies patches and performs security baseline checking, while a higher-level piece of automation deploys applications. By using different keys or credentials for each piece of automation, the effect of any one key vulnerability is minimized, while also allowing for easy baseline auditing.
Available resources
--------------------
Several resources exist in AWX and elsewhere to ensure a secure platform. Consider utilizing the following functionality:
.. contents::
:local:
Audit and logging functionality
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
For any administrative access, it is key to audit and watch for actions.
For AWX, this is done via the built-in Activity Stream support that logs all changes within AWX, as well as via the automation logs.
Best practices dictate collecting logging and auditing centrally, rather than reviewing it on the local system. It is recommended that AWX be configured to use whatever IDS and/or logging/auditing (Splunk) is standard in your environment. AWX includes built-in logging integrations for Elastic Stack, Splunk, Sumologic, Loggly, and more. See :ref:`ag_logging` for more information.
Existing security functionality
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Do not disable SELinux, and do not disable AWX's existing multi-tenant containment. Use AWX's role-based access control (RBAC) to delegate the minimum level of privileges required to run automation. Use Teams in AWX to assign permissions to groups of users rather than to users individually. See :ref:`rbac-ug` in the |atu|.
.. _ag_security_django_password:
Django password policies
^^^^^^^^^^^^^^^^^^^^^^^^^^
AWX admins can leverage Django to set password policies at creation time via ``AUTH_PASSWORD_VALIDATORS`` to validate AWX user passwords. In the ``custom.py`` file located at ``/etc/awx/conf.d`` of your AWX instance, add the following code block example:
.. code-block:: text
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
'OPTIONS': {
'min_length': 9,
}
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
For more information, see `Password management in Django <https://docs.djangoproject.com/en/3.2/topics/auth/passwords/#module-django.contrib.auth.password_validation>`_ in addition to the example posted above.
Be sure to restart your AWX instance for the change to take effect. See :ref:`ag_restart_awx` for detail.
View File
@ -1,26 +0,0 @@
.. _ag_session_limits:
Working with Session Limits
=================================
.. index::
single: session limits
single: session.py
pair: SESSIONS_PER_USER; session limits
pair: AUTH_BASIC_ENABLED; session limits
Setting a session limit allows administrators to limit the number of simultaneous sessions per user or per IP address.
A session is created for each browser that a user uses to log in, which forces the user to log out any extra sessions after they exceed the administrator-defined maximum.
Session limits may be important, depending on your particular setup. For example, perhaps you only want a single user on your system with a single login per device (where the user could log in on their work laptop, phone, or home computer). In such a case, you would want to create a session limit equal to 1 (one). If the user logs in on their laptop, for example, then logs in using their phone, the laptop session expires (times out) and only the login on the phone persists. Proactive session limits will kick the user out when the session is idle. The default value is **-1**, which disables the maximum sessions allowed altogether, meaning there is no imposed limit on the number of sessions.
While session counts can be very limited, they can also be expanded to cover as many session logins as are needed by your organization.
When a user logs in and their login results in other users being logged out, the session limit has been reached and those users who are logged out are notified as to why the logout occurred.
To make changes to your session limits, navigate to the **Miscellaneous System settings** of the Settings menu and edit the **Maximum Number Of Simultaneous Logged In Sessions** setting or use the :ref:`api_browsable_api` if you are comfortable with making REST requests.
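If you manage configuration as code, the same limit (and the related basic authentication toggle mentioned in the note below) can be set in a file-based settings file. A minimal sketch with illustrative values:

::

    SESSIONS_PER_USER = 1        # -1 (the default) disables the limit
    AUTH_BASIC_ENABLED = False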
.. note::
To make the best use of session limits, disable ``AUTH_BASIC_ENABLED`` by changing the value to ``False``, as it falls outside of the scope of session limit enforcement. Alternatively, in the System Settings of the AWX UI, toggle the **Enable HTTP Basic Auth** to off.
.. image:: ../common/images/configure-awx-session-limits.png
View File
@ -1,379 +0,0 @@
.. _tips_and_tricks:
************************
AWX Tips and Tricks
************************
.. index::
single: tips
single: best practices
single: help
.. contents::
:local:
Using the AWX CLI Tool
==============================
.. index::
pair: AWX CLI; command line interface
pair: tips; AWX CLI
pair: tips; command line interface
AWX has a full-featured command line interface. Refer to `AWX Command Line Interface`_ documentation for configuration and usage instructions.
.. _`AWX Command Line Interface`: https://docs.ansible.com/automation-controller/latest/html/controllercli/usage.html
.. _tips_change_password:
Changing the AWX Admin Password
=======================================
.. index::
pair: admin password; changing password
pair: tips; admin password change
pair: awx-manage; change password
During the installation process, you are prompted to enter an administrator password which is used for the ``admin`` superuser/first user created in AWX. If you log into the instance via SSH, it will tell you the default admin password in the prompt. If you need to change this password at any point, run the following command as root on the AWX server:
::
awx-manage changepassword admin
Next, enter a new password. After that, the password you have entered will work as the admin password in the web UI.
To set policies at creation time for password validation using Django, see :ref:`ag_security_django_password` for detail.
Creating an AWX Admin from the commandline
==================================================
.. index::
pair: admin creation; commandline
pair: super user creation; awx-manage
pair: tips; admin creation
Once in a while you may find it helpful to create an admin (superuser) account from the commandline. To create an admin, run the following command as root on the AWX server and enter in the admin information as prompted:
::
awx-manage createsuperuser
Setting up a jump host to use with AWX
========================================
.. index::
pair: jump host; ProxyCommand
pair: tips; jump host
pair: tips; ProxyCommand
Credentials supplied by AWX will not flow to the jump host via ProxyCommand. They are only used for the end-node once the tunneled connection is set up.
To make this work, configure a fixed user/keyfile in the AWX user's SSH config in the ProxyCommand definition that sets up the connection through the jump host. For example:
::

    Host tampa
        Hostname 10.100.100.11
        IdentityFile [privatekeyfile]

    Host 10.100.*
        Proxycommand ssh -W [jumphostuser]@%h:%p tampa
You can also add a jump host to your AWX instance through Inventory variables. These variables can be set at either the inventory, group, or host level. To add this, navigate to your inventory and in the ``variables`` field of whichever level you choose, add the following variables:
::
ansible_user: <user_name>
ansible_connection: ssh
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q <user_name>@<jump_server_name>"'
View Ansible outputs for JSON commands when using AWX
==================================================================
.. index::
single: Ansible output for JSON commands
single: JSON commands, Ansible output
When working with AWX, you can use the API to obtain the Ansible outputs for commands in JSON format.
To view the Ansible outputs, browse to:
::
https://<awx server name>/api/v2/jobs/<job_id>/job_events/
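For example, the same data can be fetched with ``curl``; the credentials and job ID are placeholders, and the ``format=json`` query parameter requests plain JSON rather than the browsable API:

::

    curl -f -k --user admin:awxsecret \
        "https://<awx server name>/api/v2/jobs/<job_id>/job_events/?format=json"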
Locate and configure the Ansible configuration file
=====================================================
.. index::
pair: tips; configuration file location
pair: tips; configuration file configuration
single: Ansible configuration file
single: ansible.cfg
pair: tips; ansible.cfg
While Ansible does not require a configuration file, OS packages often include a default one in ``/etc/ansible/ansible.cfg`` for possible customization. In order to use a custom ``ansible.cfg`` file, place it at the root of your project. AWX runs ``ansible-playbook`` from the root of the project directory, where it will then find the custom ``ansible.cfg`` file. An ``ansible.cfg`` anywhere else in the project will be ignored.
To learn which values you can use in this file, refer to the `configuration file on github`_.
.. _`configuration file on github`: https://github.com/ansible/ansible/blob/devel/examples/ansible.cfg
Using the defaults is acceptable for starting out, but know that you can configure the default module path or connection type here, as well as other things.
AWX overrides some ansible.cfg options. For example, AWX stores the SSH ControlMaster sockets, the SSH agent socket, and any other per-job run items in a per-job temporary directory that is passed to the container used for job execution.
View a listing of all ansible\_ variables
===========================================
.. index::
pair: tips; ansible_variables, viewing all
Ansible by default gathers “facts” about the machines under its management, accessible in Playbooks and in templates. To view all facts available about a machine, run the ``setup`` module as an ad hoc action:
::
ansible -m setup hostname
This prints out a dictionary of all facts available for that particular host. For more information, refer to: https://docs.ansible.com/ansible/playbooks_variables.html#information-discovered-from-systems-facts
.. _ag_tips_jinja_extravars:
The ALLOW_JINJA_IN_EXTRA_VARS variable
========================================
Setting ``ALLOW_JINJA_IN_EXTRA_VARS = template`` only works for saved job template extra variables. Prompted variables and survey variables are excluded from the 'template'. This parameter has three values: ``template`` to allow usage of Jinja saved directly on a job template definition (the default), ``never`` to disable all Jinja usage (recommended), and ``always`` to always allow Jinja (strongly discouraged, but an option for prior compatibility).
This parameter is configurable in the Jobs Settings screen of the AWX UI:
.. image:: ../common/images/settings-jobs-jinja.png
Using execution environments
============================
.. index::
single: execution environment
pair: add; execution environment
pair: jobs; add execution environment
See :ref:`ug_execution_environments` in the |atu|.
Configuring the ``awxhost`` hostname for notifications
===============================================================
.. index::
pair: notifications; hostname configuration
In the :ref:`System Settings <configure_awx_system>`, you can replace ``https://awxhost`` in the **Base URL of the service** field with your preferred hostname to change the notification hostname.
.. image:: ../common/images/configure-awx-system-misc-baseurl.png
New installations of AWX should not have to set the hostname for notifications.
.. _launch_jobs_curl:
Launching Jobs with curl
===========================
.. index::
pair: tips; curl
Launching jobs with the AWX API is simple. Here are some easy-to-follow examples using the ``curl`` tool.
Assuming that your Job Template ID is '1', your AWX IP is 192.168.42.100, and that ``admin`` and ``awxsecret`` are valid login credentials, you can create a new job this way:
::
curl -f -k -H 'Content-Type: application/json' -XPOST \
--user admin:awxsecret \
http://192.168.42.100/api/v2/job_templates/1/launch/
This returns a JSON object that you can parse and use to extract the 'id' field, which is the ID of the newly created job.
You can also pass extra variables to the Job Template call, such as is shown in the following example:
.. code-block:: text
curl -f -k -H 'Content-Type: application/json' -XPOST \
-d '{"extra_vars": "{\"foo\": \"bar\"}"}' \
--user admin:awxsecret http://192.168.42.100/api/v2/job_templates/1/launch/
You can view the live API documentation by logging into http://192.168.42.100/api/ and browsing around to the various objects available.
.. note::
The ``extra_vars`` parameter needs to be a string which contains JSON, not just a JSON dictionary, as you might expect. Use caution when escaping the quotes, etc.
Dynamic Inventory and private IP addresses
===========================================
.. index::
pair: tips; EC2 VPC instances
pair: tips; private IPs with dynamic inventory
pair: tips; dynamic inventory and private IPs
By default, AWX only shows instances in a VPC that have an Elastic IP (EIP) address associated with them. To view all of your VPC instances, perform the following steps:
- In the AWX interface, select your inventory.
- Click on the group that has the Source set to AWS, and click on the Source tab.
- In the "Source Variables" box, enter: ``vpc_destination_variable: private_ip_address``
Save and trigger an update of the group. You should now be able to see all of your VPC instances.
.. note::
AWX must be running inside the VPC with access to those instances in order to usefully configure them.
Filtering instances returned by the dynamic inventory sources in AWX
======================================================================
.. index::
pair: tips; filtering instances
pair: tips; dynamic inventory and instance filtering
pair: tips; instance filtering
By default, the dynamic inventory sources in AWX (AWS, Google, etc) return all instances available to the cloud credentials being used. They are automatically joined into groups based on various attributes. For example, AWS instances are grouped by region, by tag name and value, by security groups, etc. To target specific instances in your environment, write your playbooks so that they target the generated group names. For example:
::
---
- hosts: tag_Name_webserver
tasks:
...
You can also use the ``Limit`` field in the Job Template settings to limit a playbook run to a certain group, groups, hosts, or a combination thereof. The syntax is the same as the ``--limit`` parameter on the ``ansible-playbook`` command line.
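For example, a Limit value can combine group intersections and host exclusions (the group and host names below are illustrative):

::

    tag_Name_webserver:&tag_Environment_production:!badhost.example.com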
You may also create your own groups by copying the auto-generated groups into your custom groups. Make sure that the ``Overwrite`` option is disabled on your dynamic inventory source; otherwise, subsequent synchronization operations will delete and replace your custom groups.
Using an unreleased module from Ansible source with AWX
==========================================================
.. index::
pair: tips; Ansible modules, unreleased
pair: tips; unreleased modules
pair: tips; modules, using unreleased
If there is a feature that is available in the latest Ansible core branch that you would like to leverage with your AWX system, making use of it in AWX is fairly simple.
First, determine which updated module you want to use from the available Ansible Core Modules or Ansible Extra Modules GitHub repositories.
Next, create a new directory named ``library`` at the same level as your Ansible source playbooks.
Once this is created, copy the module you want to use into the ``library`` directory--it will be picked up ahead of your system modules and can be removed once you have updated the stable version via your normal package manager.
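For example, a project carrying a newer copy of a module might be laid out as follows (the module name is hypothetical):

::

    my_project/
    |-- library/
    |   `-- shiny_new_module.py
    |-- roles/
    `-- site.yml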
Using callback plugins with AWX
================================
.. index::
pair: tips; callback plugins
pair: tips; plugins, callback
Ansible has a flexible method of handling actions during playbook runs, called callback plugins. You can use these plugins with AWX to do things like notify services upon playbook runs or failures, send emails after every playbook run, etc. For official documentation on the callback plugin architecture, refer to: http://docs.ansible.com/developing_plugins.html#callbacks
.. note::
AWX does not support the ``stdout`` callback plugin because Ansible only allows one, and it is already being used by AWX for streaming event data.
You may also want to review some example plugins, which should be modified for site-specific purposes, such as those available at:
https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/callback
To use these plugins, put the callback plugin ``.py`` file into a directory called ``callback_plugins`` alongside your playbook in your AWX Project. Then, specify their paths (one path per line) in the **Ansible Callback Plugins** field of the Job settings, located towards the bottom of the screen:
.. image:: ../common/images/configure-awx-jobs-callback.png
.. note::
To have most callbacks shipped with Ansible applied globally, you must add them to the ``callback_whitelist`` setting in your ``ansible.cfg``. If you have custom callbacks, refer to the Ansible documentation for `Enabling callback plugins <https://docs.ansible.com/ansible/latest/plugins/callback.html#enabling-callback-plugins>`_.
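For example, a project-level ``ansible.cfg`` that globally enables two of the callbacks shipped with Ansible might contain the following (the callback names are only examples):

::

    [defaults]
    callback_whitelist = timer, profile_tasks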
Connecting to Windows with winrm
====================================
.. index::
pair: tips; Windows connection
pair: tips; winrm
By default, AWX attempts to connect to hosts with ``ssh``. You must add the ``winrm`` connection info to the group variables to which the Windows hosts belong. To get started, edit the Windows group in which the hosts reside and place the variables in the source/edit screen for the group.
To add ``winrm`` connection info:
Edit the properties for the selected group by clicking on the |edit| button to the right of the group name that contains the Windows servers. In the "variables" section, add your connection information as such: ``ansible_connection: winrm``
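A fuller set of WinRM connection variables for the group might look like the following sketch; the port, transport, and certificate validation settings are assumptions that you should adjust to match your WinRM listener:

::

    ansible_connection: winrm
    ansible_port: 5986
    ansible_winrm_transport: ntlm
    ansible_winrm_server_cert_validation: ignore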
Once done, save your edits. If Ansible was previously attempting an SSH connection and failed, you should re-run the job template.
.. |edit| image:: ../common/images/edit-button.png
Importing existing inventory files and host/group vars into AWX
================================================================
.. index::
pair: tips; inventory import
pair: importing inventory; importing host/group vars
pair: tips; host/group vars import
To import an existing static inventory and the accompanying host and group vars into AWX, your inventory should be in a structure that looks similar to the following:
::
inventory/
|-- group_vars
| `-- mygroup
|-- host_vars
| `-- myhost
`-- hosts
To import these hosts and vars, run the ``awx-manage`` command:
::
awx-manage inventory_import --source=inventory/ \
--inventory-name="My AWX Inventory"
If you only have a single flat inventory file, for example one called ``ansible-hosts``, import it as follows:
::
awx-manage inventory_import --source=./ansible-hosts \
--inventory-name="My AWX Inventory"
In case of conflicts or to overwrite an inventory named "My AWX Inventory", run:
::
awx-manage inventory_import --source=inventory/ \
--inventory-name="My AWX Inventory" \
--overwrite --overwrite-vars
If you receive an error, such as:
::
ValueError: need more than 1 value to unpack
Create a directory to hold the hosts file, as well as the group_vars:
::
mkdir -p inventory-directory/group_vars
Then, for each group that has a ``[<groupname>:vars]`` section, create a file called ``inventory-directory/group_vars/<groupname>`` and express the variables in YAML format.
Once broken out, the importer will handle the conversion correctly.
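For example, a hypothetical ``inventory-directory/group_vars/webservers`` file might contain:

::

    ntp_server: ntp.example.com
    http_port: 8080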
.. _ag_topology_viewer:
Topology Viewer
================
.. index::
pair: topology;viewer
The topology viewer allows you to view node type, node health, and specific details about each node if you already have a mesh topology deployed. In order to access the topology viewer from the AWX user interface, you must have System Administrator or System Auditor permissions.
To access the topology viewer from the AWX user interface:
1. In the Administration menu on the left navigation bar, click **Topology View**.
The Topology View opens and displays a graphic representation of how each receptor node links together.
.. image:: ../common/images/topology-viewer-initial-view.png
2. To adjust the zoom levels, refresh the current view, or manipulate the graphic views, use the control buttons on the upper right-hand corner of the window.
.. image:: ../common/images/topology-viewer-view-controls.png
You can also click and drag to pan around, and use the scroll wheel on your mouse or trackpad to zoom. The fit-to-screen feature automatically scales the graphic to fit the screen and repositions it in the center. It is particularly useful when you want to see a large mesh in its entirety.
.. image:: ../common/images/topology-viewer-zoomed-view.png
To reset the view to its default view, click **Reset zoom**.
3. Refer to the Legend to the left of the graphic to identify the type of nodes that are represented.
.. note::
If the Legend is not present, use the toggle on the upper right corner of the window to enable it.
The Legend shows the :ref:`node status <node_statuses>` by color, which is indicative of the health of the node. The status of **Error** in the legend encompasses the **Unavailable** state (as displayed in the Instances list view) plus any future error conditions encountered in later versions of AWX. Also depicted in the legend are the link statuses:
- **Established**: this is a link state that indicates a peer connection between nodes that are either ready, unavailable, or disabled
- **Adding**: this is a link state indicating a peer connection between nodes that was selected to be added to the mesh topology
- **Removing**: this is a link state indicating a peer connection between nodes that was selected to be removed from the topology
4. Hover over a node to highlight the connectors to its immediately connected nodes (peers), or click on a node to retrieve details about it, such as its hostname, node type, and status.
.. image:: ../common/images/topology-viewer-node-hover-click.png
5. Click the instance hostname link in the displayed details to be redirected to its Details page, which provides more information about that node, most notably about an ``Error`` status, as in the example below.
.. image:: ../common/images/topology-viewer-node-view.png
.. image:: ../common/images/topology-viewer-instance-details.png
At the bottom of the Details view, you can remove the instance, run a health check on the instance on an as-needed basis, or unassign jobs from the instance. By default, jobs can be assigned to each node; however, you can disable this to exclude the node from running any jobs.
For more information on creating new nodes and scaling the mesh, refer to :ref:`ag_instances` in this guide.
.. _admin_troubleshooting:
***********************
Troubleshooting AWX
***********************
.. index::
single: troubleshooting
single: help
Some troubleshooting tools are built in the AWX user interface that may help you address some issues you might encounter. To access these tools, navigate to **Settings** and select **Troubleshooting**.
.. image:: ../common/images/settings_troubleshooting_highlighted.png
The options available are:
- **Enable or Disable tmp dir cleanup**: choose whether you want to clean up the ``tmp`` directory.
- **Debug Web Requests**: choose whether you want web requests to log messages for debugging purposes.
- **Release Receptor Work**: disables cleaning up job pods. If you disable this, the job pods will remain in your cluster indefinitely, allowing you to examine them post-run. If you are missing data there, run ``kubectl logs <job-pod-name>`` and provide the logs in an issue report.
.. image:: ../common/images/troubleshooting_options.png
Click **Edit** to modify the settings. Use the toggle to enable and disable the appropriate settings.
.. _admin_troubleshooting_extra_settings:
Error logging and extra settings
=================================
.. index::
pair: troubleshooting; general help
pair: troubleshooting; error logs
AWX server errors are streamed and not logged; however, you may be able to capture them by passing extra settings in on the AWX spec file.
With ``extra_settings``, you can pass multiple custom settings via the ``awx-operator``. The parameter ``extra_settings`` will be appended to the ``/etc/tower/settings.py`` file and can be an alternative to the ``extra_volumes`` parameter.
+----------------+----------------+---------+
| Name | Description | Default |
+----------------+----------------+---------+
| extra_settings | Extra settings | '' |
+----------------+----------------+---------+
Parameters configured in ``extra_settings`` are set as read-only settings in AWX. As a result, they cannot be changed in the UI after deployment. If you need to change the setting after the initial deployment, you need to change it on the AWX CR spec.
Example configuration of ``extra_settings`` parameter:
::
spec:
extra_settings:
- setting: MAX_PAGE_SIZE
value: "500"
- setting: LOG_AGGREGATOR_LEVEL
value: "'DEBUG'"
For some settings, such as ``LOG_AGGREGATOR_LEVEL``, the value may need double quotes as shown in the above example.
.. taken from https://github.com/ansible/awx-operator/blob/devel/docs/user-guide/advanced-configuration/extra-settings.md
.. _admin_troubleshooting_sosreport:
sosreport
==========
.. index::
pair: troubleshooting; sosreport
The ``sosreport`` is a utility that collects diagnostic information for root cause analysis.
Problems connecting to your host
===================================
.. index::
pair: troubleshooting; host connections
If you are unable to run the ``helloworld.yml`` example playbook from the Quick Start Guide or other playbooks due to host connection errors, try the following:
- Can you ``ssh`` to your host? Ansible depends on SSH access to the servers you are managing.
- Are your hostnames and IPs correctly added in your inventory file? (Check for typos.)
Unable to login to AWX via HTTP
==================================
Access to AWX is intentionally restricted through a secure protocol (HTTPS). In cases where your configuration is set up to run an AWX node behind a load balancer or proxy as "HTTP only", and you only want to access it without SSL (for troubleshooting, for example), you can change the settings in the ``/etc/tower/conf.d`` directory of your AWX instance. The operator provides ``extra_settings``, which allows you to change file-based settings in OCP. See :ref:`admin_troubleshooting_extra_settings` for detail.
Once in the spec, set the following accordingly:
::
SESSION_COOKIE_SECURE = False
CSRF_COOKIE_SECURE = False
Changing these settings to ``False`` will allow AWX to manage cookies and login sessions when using the HTTP protocol. This must be done on every node of a cluster installation to properly take effect.
To apply the changes, run:
::
awx-service restart
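If you are instead applying these settings through the operator's ``extra_settings`` parameter (see :ref:`admin_troubleshooting_extra_settings`), a sketch of the spec might look like the following:

::

    spec:
      extra_settings:
        - setting: SESSION_COOKIE_SECURE
          value: "False"
        - setting: CSRF_COOKIE_SECURE
          value: "False"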
WebSockets port for live events not working
===================================================
.. index::
pair: live events; port changes
pair: troubleshooting; live events
pair: troubleshooting; websockets
AWX uses ports 80/443 on the AWX server to stream live updates of playbook activity and other events to the client browser. These ports are configured for 80/443 by default. If they are blocked by firewalls, remove any firewall rules that were opened or added for the previous websocket ports, and ensure your firewall allows traffic through ports 80/443.
Problems running a playbook
==============================
.. index::
pair: troubleshooting; host connections
If you are unable to run the ``helloworld.yml`` example playbook from the Quick Start Guide or other playbooks due to playbook errors, try the following:
- Are you authenticating with the user currently running the commands? If not, check how the username has been set up or pass the ``--user=username`` or ``-u username`` options to specify a user.
- Is your YAML file correctly indented? You may need to line up your whitespace correctly. Indentation level is significant in YAML. You can use ``yamllint`` to check your playbook (see the example after this list). For more information, refer to the YAML primer at: http://docs.ansible.com/YAMLSyntax.html
- Items beginning with a ``-`` are considered list items or plays. Items with the format of ``key: value`` operate as hashes or dictionaries. Ensure you don't have extra or missing ``-`` plays.
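For example, assuming ``yamllint`` is installed locally, you can lint a playbook before importing it:

::

    yamllint helloworld.yml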
Problems when running a job
==============================
.. index::
pair: troubleshooting; job does not run
If you are having trouble running a job from a playbook, you should review the playbook YAML file. When importing a playbook, either manually or via a source control mechanism, keep in mind that the host definition is controlled by AWX and should be set to ``hosts: all``.
Playbooks aren't showing up in the "Job Template" drop-down
=============================================================
.. index::
pair: playbooks are not viewable; Job Template drop-down list
pair: troubleshooting; playbooks not appearing
If your playbooks are not showing up in the Job Template drop-down list, here are a few things you can check:
- Make sure that the playbook is valid YML and can be parsed by Ansible.
- Make sure the permissions and ownership of the project path (``/var/lib/awx/projects``) are set up so that the "awx" system user can view the files. You can run the following command to change the ownership:
::
chown awx -R /var/lib/awx/projects/
Playbook stays in pending
===========================
.. index::
pair: troubleshooting; pending playbook
If you are attempting to run a playbook Job and it stays in the "Pending" state indefinitely, try the following:
- Ensure all supervisor services are running via ``supervisorctl status``.
- Check to ensure that the ``/var/`` partition has more than 1 GB of space available. Jobs will not complete with insufficient space on the ``/var/`` partition.
- Run ``awx-service restart`` on the AWX server.
If you continue to have problems, run ``sosreport`` as root on the AWX server, then file a `support request`_ with the result.
.. _`support request`: http://support.ansible.com/
Cancel an AWX job
=========================
.. index::
pair: troubleshooting; job cancellation
When issuing a ``cancel`` request on a currently running AWX job, AWX issues a ``SIGINT`` to the ``ansible-playbook`` process. While this causes Ansible to stop dispatching new tasks and exit, in many cases, module tasks that were already dispatched to remote hosts will run to completion. This behavior is similar to pressing ``Ctrl-C`` during a command-line Ansible run.
With respect to software dependencies, if a running job is canceled, the job is essentially removed but the dependencies will remain.
Reusing an external database causes installations to fail
=============================================================
.. index::
pair: installation failure; external database
Instances have been reported where reusing the external DB during subsequent installation of nodes causes installation failures.
For example, suppose you performed a clustered installation, then needed to do it again and performed a second clustered installation reusing the same external database, only to have this subsequent installation fail.
When setting up an external database which has been used in a prior installation, the database used for the clustered node must be manually cleared before any additional installations can succeed.
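The following is a sketch of clearing a reused external PostgreSQL database before reinstalling; the database name, owner, and connection details are assumptions, and you should back up anything you need before dropping the database:

::

    # assumes the external database is named "awx" and owned by the "awx" role
    dropdb -h <db-host> -U postgres awx
    createdb -h <db-host> -U postgres -O awx awx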
Private EC2 VPC Instances in the AWX Inventory
=======================================================
.. index::
pair: EC2; VPC instances
pair: troubleshooting; EC2 VPC instances
By default, AWX only shows instances in a VPC that have an Elastic IP (EIP) associated with them. To see all of your VPC instances, perform the following steps:
1. In the AWX interface, select your inventory.
2. Click on the group that has the Source set to AWS, and click on the Source tab.
3. In the ``Source Variables`` box, enter:
::
vpc_destination_variable: private_ip_address
Next, save and then trigger an update of the group. Once this is done, you should be able to see all of your VPC instances.
.. note::
AWX must be running inside the VPC with access to those instances if you want to configure them.
Troubleshooting "Error: provided hosts list is empty"
======================================================
.. index::
pair: troubleshooting; hosts list
single: hosts lists (empty)
If you receive the message "Skipping: No Hosts Matched" when you are trying to run a playbook through AWX, here are a few things to check:
- Make sure that your hosts declaration line in your playbook matches the name of your group/host in inventory exactly (these are case sensitive).
- If it does match and you are using Ansible Core 2.0 or later, check your group names for spaces and modify them to use underscores or no spaces to ensure that the groups can be recognized.
- Make sure that if you have specified a Limit in the Job Template that it is a valid limit value and still matches something in your inventory. The Limit field takes a pattern argument, described here: http://docs.ansible.com/intro_patterns.html
Please file a support ticket if you still run into issues after checking these options.
2. Click the **Tokens** tab from your user's profile.
When no tokens are present, the Tokens screen prompts you to add them:
.. image:: ../common/images/users-tokens-empty.png
3. Click the **Add** button, which opens the Create Token window.
4. Enter the following details in Create Token window:
- **Application**: enter the name of the application with which you want to associate your token. Alternatively, you can search for it by clicking the |search| button. This opens a separate window that allows you to choose from the available options. Use the Search bar to filter by name if the list is extensive. Leave this field blank if you want to create a Personal Access Token (PAT) that is not linked to any application.
- **Description**: optionally provide a short description for your token.
- **Scope** (required): specify the level of access you want this token to have.
.. |search| image:: ../common/images/search-button.png
5. When done, click **Save**, or click **Cancel** to abandon your changes.
After the token is saved, the newly created token for the user displays, along with its token information and expiration.
.. image:: ../common/images/users-token-information-example.png
.. note:: This is the only time the token value and associated refresh token value will ever be shown.
In the user's profile, the application to which the token is assigned and its expiration display in the token list view.
.. image:: ../common/images/users-token-assignment-example.png
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: containergroup-service-account
namespace: containergroup-namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: role-containergroup-service-account
namespace: containergroup-namespace
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["pods/attach"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: role-containergroup-service-account-binding
namespace: containergroup-namespace
subjects:
- kind: ServiceAccount
name: containergroup-service-account
namespace: containergroup-namespace
roleRef:
kind: Role
name: role-containergroup-service-account
apiGroup: rbac.authorization.k8s.io
Copyright © Red Hat, Inc.
===============================
Ansible, |aap|, Red Hat, and |rhel| are trademarks of Red Hat, Inc., registered in the United States and other countries.
If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original version.
**Third Party Rights**
Ubuntu and Canonical are registered trademarks of Canonical Ltd.
The CentOS Project is copyright protected. The CentOS Marks are trademarks of Red Hat, Inc. (“Red Hat”).
Microsoft, Windows, Windows Azure, and Internet Explorer are trademarks of Microsoft, Inc.
VMware is a registered trademark or trademark of VMware, Inc.
Amazon Web Services", "AWS", "Amazon EC2", and "EC2”, are trademarks of Amazon Web Services, Inc. or its affiliates.
OpenStack™ and OpenStack logo are trademarks of OpenStack, LLC.
Chrome™ and the Google Compute Engine™ service are registered trademarks of Google Inc.
Safari® is a registered trademark of Apple, Inc.
Firefox® is a registered trademark of the Mozilla Foundation.
All other trademarks are the property of their respective owners.
The ability to build and deploy Python virtual environments for automation has been replaced by Ansible execution environments. Unlike legacy virtual environments, execution environments are container images that make it possible to incorporate system-level dependencies and collection-based content. Each execution environment allows you to have a customized image to run jobs, and each of them contains only what you need when running the job, nothing more.
A ``ContainerGroup`` is a type of ``InstanceGroup`` that has an associated Credential that allows for connecting to an OpenShift cluster. To set up a container group, you must first have the following:
- A namespace you can launch into (every cluster has a “default” namespace, but you may want to use a specific namespace)
- A service account that has the roles that allow it to launch and manage Pods in this namespace
- If you will be using |ees| in a private registry, and have a Container Registry credential associated with them in AWX, the service account also needs the roles to get, create, and delete secrets in the namespace. If you do not want to give these roles to the service account, you can pre-create the ``ImagePullSecrets`` and specify them on the pod spec for the ContainerGroup. In this case, the |ee| should NOT have a Container Registry credential associated, or AWX will attempt to create the secret for you in the namespace.
- A token associated with that service account (OpenShift or Kubernetes Bearer Token)
- A CA certificate associated with the cluster
This section describes creating a Service Account in an OpenShift (or Kubernetes) cluster to be used to run jobs in a container group via AWX. After the Service Account is created, its credentials are provided to AWX in the form of an OpenShift or Kubernetes API bearer token credential. The following describes how to create a service account and collect the information needed to configure AWX.
To configure AWX:
1. To create a service account, you may download and use this sample service account, :download:`containergroup sa <../common/containergroup-sa.yml>` and modify it as needed to obtain the above credentials.
2. Apply the configuration from ``containergroup-sa.yml``::
oc apply -f containergroup-sa.yml
3. Get the secret name associated with the service account::
export SA_SECRET=$(oc get sa containergroup-service-account -o json | jq '.secrets[0].name' | tr -d '"')
4. Get the token from the secret::
oc get secret $(echo ${SA_SECRET}) -o json | jq '.data.token' | xargs | base64 --decode > containergroup-sa.token
5. Get the CA cert::
oc get secret $SA_SECRET -o json | jq '.data["ca.crt"]' | xargs | base64 --decode > containergroup-ca.crt
6. Use the contents of ``containergroup-sa.token`` and ``containergroup-ca.crt`` to provide the information for the :ref:`ug_credentials_ocp_k8s` required for the container group.