* tower/release_3.2.3:
fix unicode bugs with log statements
use --export option for ansible-inventory
add support for new "BECOME" prompt in Ansible 2.5+ for adhoc commands
enforce strings for secret password inputs on Credentials
fix a bug for "users should be able to change type of unused credential"
fix XSS vulnerabilities on the host recent jobs popover and on the schedule name tooltip
fix a bug when testing UDP-based logging configuration
bump templates form credential_types page limit
Wait for Slack RTM API websocket connection to be established
don't process artifacts from custom `set_stat` calls asynchronously
don't overwrite env['ANSIBLE_LIBRARY'] when fact caching is enabled
only allow facts to cache in the proper file system location
replace our memcached-based fact cache implementation with local files
add support for new "BECOME" prompt in Ansible 2.5+
fix a bug in inventory generation for isolated nodes
properly handle unicode for isolated job buffers
Removal is defined as providing a `credentials` list on launch
that lacks a credential type that the job template has.
This ensures that every category of credential the job template
has will also exist on jobs run from that job template.
This restriction already existed, but this makes the endpoint
fail instead of re-adding the credentials.
This change makes manual launch congruent with saved launch
configurations.
WFJT nodes & schedules (launch configs) will accept POST/PATCH/PUT
with variables in extra_data that have $encrypted$ for their value
if a valid survey default exists.
In this case, the variable is simply removed from the extra_data.
This is done so that it does not affect pre-existing value
substitution for $encrypted$ values from the config itself.
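A minimal sketch of that behavior (the helper name and the survey_spec shape here are illustrative assumptions, not AWX's actual implementation):

```python
ENCRYPTED = '$encrypted$'

def scrub_encrypted_placeholders(extra_data, survey_spec):
    """Drop $encrypted$ placeholders from extra_data when the survey
    defines a default for that variable, so the launch config falls
    back to the (encrypted) survey default instead of a bogus value."""
    has_default = {
        q['variable'] for q in survey_spec.get('spec', [])
        if 'default' in q
    }
    return {
        key: value for key, value in extra_data.items()
        if not (value == ENCRYPTED and key in has_default)
    }
```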
The extra vars file created lives in the playbook private runtime
directory, and will be reaped along with the rest of the directory.
Adjust assorted unit tests as necessary.
* Fix bug where capacity_adjustment sets to "1.00" when instance is toggled
* Hookup websockets for instance group jobs and instance jobs
* Add Wait spinner to Capacity_Adjuster, Instance association modal, and Instance group delete
* Add updateDataset event listener to update instance and instanceGroups list after smartSearch query
Currently Cloudforms can return a mix of IPv4 and IPv6 addresses in the
ipaddresses field and this mix comes in a "random" order (that is the
first entry may be IPv4 sometimes but IPv6 other times). If you wish to
always use IPv4 for the ansible_ssh_host value then this is problematic.
This change adds a new prefer_ipv4 flag which will look for the first
IPv4 address in the ipaddresses list and use that instead of just the
first entry.
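A rough sketch of the selection logic (the function name is hypothetical; the real change lives in the Cloudforms inventory script):

```python
import ipaddress

def choose_ssh_host(ipaddresses, prefer_ipv4=False):
    """Pick the ansible_ssh_host value: with prefer_ipv4 set, scan for
    the first IPv4 entry; otherwise (or if none is found) fall back to
    the first address in the list."""
    if prefer_ipv4:
        for addr in ipaddresses:
            try:
                if ipaddress.ip_address(addr).version == 4:
                    return addr
            except ValueError:
                continue  # skip entries that aren't valid IP literals
    return ipaddresses[0] if ipaddresses else None
```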
modal implementation
* Remove Instance-List-Policy controller
* Replace let with const when values aren't being reassigned
* Update CapacityAdjuster directive to use replace:true
* Assign fewer values that are specific to the element
* Add more error handling
* This also adds fields to the instance view for tracking cpu and
memory usage as well as information on what the capacity ranges are
* Also adds a flag for enabling/disabling instances which removes them
from all queues and has them stop processing new work
* The capacity is now based almost exclusively on some value relative
to forks
* capacity_adjustment allows you to commit an instance to a certain
number of forks, CPU-focused or memory-focused
* Each job run adds a single fork overhead (that's the reasoning
behind the +1)
* Switch policy router queue to not be "tower" so that we don't
fall into a chicken/egg scenario
* Show fixed policy list in serializer so a user can determine if
an instance is manually managed
* Change IG membership mixin to not directly handle applying topology
changes. Instead it just makes sure the policy instance list is
accurate
* Add create/delete hooks for instances and groups to trigger policy
re-evaluation
* Update policy algorithm for fairer distribution
* Fix an issue where CELERY_ROUTES wasn't renamed after celery/django
upgrade
* Update unit tests to be more explicit
* Update count calculations used by algorithm to only consider
non-manual instances
* Adding unit tests and fixture
* Don't propagate logging messages from awx.main.tasks and
awx.main.scheduler
* Use advisory lock to prevent policy eval conflicts
* Allow updating instance groups from view
* use embedded beat rather than standalone
* dynamically set celeryd hostname at runtime
* add embedded beat flag to celery startup
* Embedded beat mode routes will piggyback off of celery worker setup
signal
* Associating/Disassociating an instance with a group
* Triggering a topology rebuild on that change
* Force rabbitmq cleanup of offline nodes
* Automatically check for dependent service startup
* Fetch and set hostname for celery so it doesn't clobber other
celeries
* Rely on celery init signals to dynamically set listen queues
* Removing old total_capacity instance manager property
* Based on the tower topology (Instance and InstanceGroup
relationships), have celery dynamically listen to queues on boot
* Add celery task capable of "refreshing" what queues each celeryd
worker listens to. This will be used to support changes in the topology.
* Cleaned up some celery task definitions.
* Converged wrongly targeted job launch/finish messages to 'tower'
queue, rather than a 1-off queue.
* Dynamically route celery tasks destined for the local node
* separate beat process
add support for separate beat process
If the "DTSTART" property is specified as a date with UTC time or a date with
local time and time zone reference, then the UNTIL rule part MUST be specified
as a date with UTC time.
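For example, with python-dateutil (assuming a version recent enough to parse TZID), a compliant rule looks like this; expressing UNTIL in local time instead would be rejected:

```python
from dateutil.rrule import rrulestr

# DTSTART carries a time zone reference (TZID), so per RFC 5545 the
# UNTIL part must be expressed in UTC -- note the trailing 'Z'.
rule = rrulestr(
    "DTSTART;TZID=America/New_York:20180601T120000\n"
    "RRULE:FREQ=DAILY;UNTIL=20180610T160000Z"
)
for occurrence in rule:
    print(occurrence)  # tz-aware datetimes in the DTSTART time zone
```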
this is for the development environment only; when uwsgi notices a code
change, it automatically reloads the uwsgi workers; this patch includes
a hook that sends `SIGHUP` to the celery process, causing it to spawn
a new set of workers as well
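The general shape of such a hook, as a sketch (the pidfile location is an assumption; the real patch wires this into uwsgi's reload cycle):

```python
import os
import signal

def reload_celery_workers(pidfile='/tmp/celery.pid'):
    """Ask the celery master process to respawn its workers after a
    code change by sending it SIGHUP."""
    with open(pidfile) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGHUP)
```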
redbaron is a library we use to facilitate parsing local settings files;
at _import_ time it generates a parse tree and caches it to disk at
`/tmp`; this process is _really_ time-consuming, and only necessary if
we're actually *using* the library
right now, we're importing this library and paying the penalty
_every_ time we load the awx application
previously, we persisted custom artifacts to the database on
`Job.artifacts` via the callback receiver. when the callback receiver
is backed up processing events, this can result in race conditions for
workflows where a playbook calls `set_stat()`, but the artifact data is
not persisted in the database before the next job in the workflow starts
see: https://github.com/ansible/ansible-tower/issues/7831
this commit allows schedule `rrule` strings to include local timezone
information via TZID=NNNNN; occurrences are _generated_ in the local
time specified by the user (or UTC, if e.g., DTSTART:YYYYMMDDTHHMMSSZ)
while Schedule.next_run, Schedule.dtstart, and Schedule.dtend will be
stored in the UTC equivalent (i.e., the scheduler will still do math on
"what to run next" based on UTC datetimes).
in addition to this change, there is now a new API endpoint,
`/api/v2/schedules/preview/`, which takes an rrule and shows the next
10 occurrences in local and UTC time.
see: https://github.com/ansible/ansible-tower/issues/823
related: https://github.com/dateutil/dateutil/issues/614
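A quick way to exercise the preview endpoint (the host, credentials, and exact response keys below are assumptions):

```python
import requests

resp = requests.post(
    'https://awx.example.com/api/v2/schedules/preview/',
    json={'rrule': ('DTSTART;TZID=America/New_York:20180601T120000 '
                    'RRULE:FREQ=DAILY;INTERVAL=1')},
    auth=('admin', 'password'),
)
# expected: the next 10 occurrences, in local and UTC time
print(resp.json())
```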
this cleans up a _lot_ of code duplication that we have for builtin
credential types. it will allow customers to set up custom inventory
sources that utilize builtin credential types (e.g., a custom inventory
script that could use an AzureRM credential)
see: https://github.com/ansible/ansible-tower/issues/7852
update our event data search algorithm to be a bit lazier in event data
discovery; this drastically improves processing speeds for stdout >5MB
see: https://github.com/ansible/awx/issues/417
We were having issues where an older tag was being output from `git describe`.
From the man page:
Follow only the first parent commit upon seeing a merge commit. This is useful when you wish to not match tags on branches merged in the history of the target commit.
related to https://github.com/ansible/awx/issues/217
* Adds a Configure Tower in Tower setting for users to configure a SAML
attribute that tower will use to put users into teams and orgs.
* Celery + JSON pickling do not handle custom Exceptions (and may never
do so). Worth mentioning: if custom Exceptions were handled, the code
would be susceptible to the same arbitrary code execution that python
pickle is vulnerable to.
* So don't use custom Exceptions.
* The celery error callback signature isn't well defined. Thus, our error
callback signature is made to handle just about any call signature and
to depend only on a single attribute, `id`, existing.
See https://github.com/celery/celery/issues/3709
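A sketch of what such a defensive callback can look like (the name and the handling side are illustrative):

```python
def handle_task_error(*args, **kwargs):
    """Error callback that survives any signature celery uses: accept
    everything, and depend only on finding an object that carries an
    `id` attribute among the positional arguments."""
    for arg in args:
        task_id = getattr(arg, 'id', None)
        if task_id is not None:
            print('task %s failed' % task_id)  # stand-in for real handling
            return
```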
The original test set no longer works after Django 1.11 due to more
strict rules against dynamic model definition. The refactored test set
targets each existing model that the named URL rules apply to, instead
of abstract general use cases, thus significantly improving
maintainability and readability.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
this approach totally removes the process of reading and writing stdout
files on the local file system at settings.JOBOUTPUT_ROOT when jobs are
run; now stdout content is only written on-demand as it's fetched for
the deprecated `stdout` endpoint
see: https://github.com/ansible/awx/issues/200
* introduces three new models: `ProjectUpdateEvent`,
`InventoryUpdateEvent`, and `SystemJobEvent`
* simplifies the stdout callback management in `tasks.py` - now _all_
job run types capture and emit events to the callback receiver
* supports stdout reconstruction from events for stdout downloads for
_all_ job types
* configures `ProjectUpdate` runs to configure the awx display callback
(so we can capture real playbook events for `project_update.yml`)
* ProjectUpdate, InventoryUpdate, and SystemJob runs no longer write
text blobs to the deprecated `main_unifiedjob.result_stdout_text` column
see: https://github.com/ansible/awx/issues/200
display_extra_vars was not taking a copy of the data before
acting on it - this caused a bug where the activity stream would
modify the existing object on the model, which led to new data
not being accepted.
Also moved the processing of extra_data to before the accept
or ignore kwargs logic so that we pass the right (post-encryption)
form of the variables.
This data often (in the case of inventory updates) represents large data
blobs (5+MB per job run). Storing it on the polymorphic base class
table, `main_unifiedjob`, causes it to be automatically fetched on every
query (and every polymorphic join) against that table, which can result
in _very_ poor performance for awx across the board. Django offers
`defer()`, but it's quite complicated to sprinkle this everywhere (and
easy to get wrong/introduce side effects related to our RBAC and usage
of polymorphism).
This change moves the field definition to a separate unmanaged model
(which references the same underlying `main_unifiedjob` table) and adds
a proxy for fetching the data as needed.
see https://github.com/ansible/awx/issues/200
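A minimal sketch of the unmanaged-model pattern in Django (the class name is illustrative; AWX's actual model may differ):

```python
from django.db import models

class UnifiedJobStdout(models.Model):
    """Maps the large blob column back onto the existing
    main_unifiedjob table; the column is only read when this model
    is queried explicitly, so polymorphic UnifiedJob queries never
    drag in multi-megabyte stdout blobs."""
    result_stdout_text = models.TextField(null=True, editable=False)

    class Meta:
        managed = False               # no table creation or migrations
        db_table = 'main_unifiedjob'  # reuse the existing table
```

Because `managed = False`, Django never emits DDL for this model; it simply provides a second, narrow window onto the same rows.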
Consolidate prompts accept/reject logic in unified models
Break out accept/reject logic for variables
Surface new promptable fields on WFJT nodes, schedules
Make schedules and workflows accurately reject variables
that are not allowed by the prompting
rules or the survey rules on the template
Validate against disallowed extra_data in system job schedules
Prevent schedule or WFJT node POST/PATCH with unprompted data
Move system job days validation to new mechanism
Add new pseudo-field for WFJT node credential
Add validation for node related credentials
Add related config model to unified job
Use JobLaunchConfig model for launch RBAC check
Support credential overwrite behavior with multi-creds
change modern manual launch to use merge behavior
Refactor JobLaunchSerializer, self.instance=None
Modularize job launch view to create "modern" data
Auto-create config object with every job
Add create schedule endpoint for jobs
* Jupyter starts alongside the other awx services and is available on
0.0.0.0:8888
* make target: make jupyter
* default settings in settings/development.py
* Added jupyter, matplotlib, numpy to dev dependencies
We `pip download` this file for offline installs. Automat lists this package as a setup_requires, but `pip download` doesn’t resolve these dependencies (distutils will attempt to install them via easy_install when setup.py is invoked).
Allows for use of a suffix that will be appended to host names returned
from Cloudforms API if that suffix is not present.
For example with a suffix of 'example.org', the following results would
be shown for a particular Cloudforms host name:
someexample -> someexample.example.org
someexample.example.org -> someexample.example.org
The main use case for this is when one inventory source returns
names that are FQDNs whilst others return short names; the suffix
ensures that the hosts in an inventory aren't effectively duplicated.
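The suffix handling reduces to something like this (the helper name and default are illustrative):

```python
def ensure_suffix(name, suffix='example.org'):
    """Append the configured suffix only when it's missing:
    'someexample'             -> 'someexample.example.org'
    'someexample.example.org' -> 'someexample.example.org'
    """
    suffix = suffix.lstrip('.')
    if name == suffix or name.endswith('.' + suffix):
        return name
    return '%s.%s' % (name, suffix)
```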
On Cloudforms (Version 2.0 at least), the dictionary that gets passed to
the inventory_import has a top-level 'cloudforms' dictionary element
that contains the 'id' and 'power_state' rather than those elements
being at the top-level of the dictionary.
This change adds in the 'cloudforms' into the expected name.
the nature of this latest bug is that the WorkflowJob has a *different*
implementation of _accept_or_ignore_job_kwargs, and it wasn't performing
encryption for extra vars provided at launch time; this change places the
encryption mechanism in UJT.create_unified_job so that it works the same
for _all_ UJTs
see: https://github.com/ansible/ansible-tower/issues/7798
see: https://github.com/ansible/ansible-tower/issues/7046
* Check the reason for a dependent project update failure. If it's
because of a cancel, then let the normal cancel mechanisms update the
job's status and explanation. Do not update the dependent job's status
in the run code for a project update that was canceled.
* python-social-auth has SOCIAL_AUTH_SAML_SECURITY_CONFIG, which is
forwarded to python-saml settings configuration. This commit exposes
SOCIAL_AUTH_SAML_SECURITY_CONFIG to configure tower in tower to allow
users to set requestedAuthnContext, which will disable the requesting of
password type auth from the idp. Thus, it's up to the idp to choose
which auth to use (i.e. 2-factor).
Openshift was throwing an error here, though I'm not sure why it makes
a whole lot of difference to call fdopen() vs open(). This was
introduced when this method was unified under the new
ansible-inventory system. This fixes it for all cases. mkstemp(),
while not necessary, is a useful addition to keep from leaking
inventory details unnecessarily.
There's a bug in celery 4.X when using bound tasks as error
handlers. We don't actually need it to be bound especially since the
request object is now available in the function signature
* Implicit project updates (launch_type='sync') get "associated" with a
job via project_update. When a job is canceled, this implicit
project update should be canceled too. This change enforces that logic.
This keeps the instance from re-using a pool that might have already
expired and is unusable for the inspector that needs to run as part of
the task manager
survey_spec is a nested dict, so if we don't `deepcopy()` it, updates
to the individual fields could corrupt the original data structure;
this was causing a bug whereby activity stream updates converted
encrypted survey password defaults -> `$encrypted$`, but inadvertently
modified the originating model due to shared references
see: https://github.com/ansible/ansible-tower/issues/7769
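The pitfall in miniature (the survey_spec shape here is just illustrative):

```python
import copy

survey_spec = {'spec': [{'variable': 'secret', 'default': 'hunter2'}]}

shallow = dict(survey_spec)        # copies only the top-level mapping
shallow['spec'][0]['default'] = '$encrypted$'
print(survey_spec['spec'][0]['default'])  # '$encrypted$' -- corrupted!

survey_spec = {'spec': [{'variable': 'secret', 'default': 'hunter2'}]}
safe = copy.deepcopy(survey_spec)  # nested lists/dicts are duplicated
safe['spec'][0]['default'] = '$encrypted$'
print(survey_spec['spec'][0]['default'])  # 'hunter2' -- intact
```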
if database connectivity is lost, callback workers currently raise an
uncaught exception and hang; this can cause the entire process to stop
handling callback events
see: https://github.com/ansible/ansible-tower/issues/7660
and also patching angular-scheduler to point to angular 1.4.14
and also patching angular-codemirror to point to angular 1.4.14,
and adding fsevents:"*" to the package.json, and regenerating
npm-shrinkwrap.json for the new dependencies and their branches.
This change causes all SCM inventory updates to run a local
project sync unless they were specifically marked as a
dependency of an already-existing project update, as
opposed to just doing so on manual launch types.
This should be a more robust criterion.
Relates #7712 of ansible-tower.
The UI uses the `related_search_fields` list to populate help text for resource
search. `ansible_facts` is searchable via the UI, but the general pickup
logic would ignore it, so make it a corner case.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
* For Tower the license must match between the source and destination
* For AWX the check is disabled
* Hosts imported from another Tower don't count against your license
in the local Tower
* Fix up some issues with enablement
* Prevent slashes from being used in the instance filter
* Add &all=1 filter to make sure we pick up all hosts
Use BaseAccess class to enforce the superuser and system
auditor conditions, as well as the optimizations.
Declare optimizations on access class as tuple.
Limit role of access class method narrowly to RBAC filtering.
This adds a Job Templates tab onto the Project form that gives
the user the ability to see all the job templates using a project.
Clicking the add button on this list will take the user to the job
template form with the project field auto-filled with the project.
This should make the settings and configuration logic less implicit and
a little easier to follow. Some familiarity with the configuration behavior
of nightwatch is still necessary in places - specifically, one should know
that all test_settings defined for non-default environments are treated as
overrides to the values defined for the default environment.
* release_3.2.1:
fallback to empty dict when processing extra_data
fix migration problem from 3.1.1
move 0005a migration to 0005b
feedback on ad hoc prohibited vars error msg
Fix the way we include i18n files in sdist
Fix migrations to support 3.1.2 -> 3.2.1+ upgrade path
fix missing parameter to update_capacity method
fix WARNING log when launching ad hoc command
Validate against ansible variables on ad hoc launch
do not allow ansible connection type of local for ad_hoc
work around an ansible 2.4 inventory caching bug
fix scan job migration unicode issue
Assert isolated nodes have capacity set to 0 and restored based on version
Set capacity to zero if the isolated node has an old version
When Jobs and Adhoc Commands are launched, awx uses a job-scoped auth
token to dynamically fetch inventory via the awx REST API; this process
is complicated, hard to debug, and likely won't work going forward with
oauth2-based tokens in awx
see: https://github.com/ansible/awx/issues/21
with process isolation enabled (which is the awx default), cloudforms
caches inventory script results on disk; awx should direct cloudforms to
store these cache files in a location that's exposed to the isolated
environment
see: ansible/ansible#31760
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin
(<awx_display_callback.module.AWXDefaultCallbackModule object at
0x47f6090>):
'module' object has no attribute 'dumps'
The above error is thrown by ansible if callback plugins don't respect
the same import precedence configuration as Ansible. ansible callback/*
dir includes a json.py file. This is imported by ansible
callback/__init__.py when a callback plugin implementation imports from
Ansible callback base without setting the correct import precedence.
This allows the port from the request header to be used
rather than having the request redirected to the port
being used inside the container which may not be
accessible
Fixes #420
related #420
Signed-off-by: Nick Carboni <ncarboni@redhat.com>
The logic that sets awx_web_docker_actual_image and awx_task_docker_actual_image creates and pushes images to the private registry tagged with the awx version, which is appropriate, but then tries to pull with no tag. (so docker defaults to "latest", which does not exist)
Relates #264.
This PR proposed and implemented a way of defining workflow failure
state:
A workflow job fails if one of the conditions below is satisfied.
* At least one node runs into state `canceled` or `error`.
* At least one leaf node runs into state `failed`, but no child node is
spawned to run (no error handler).
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
* Remove attempted support of key=value pattern, because
it is not actually allowed in practice
* Have variables validator defer to the utils variables parser
* Prune serializers of a handful of cases that previous
attempts at cleanup have missed
Relates #391.
Upstream `python-ldap` (surprisingly) does not support UTF-8 DNs, so
explicit encoding is needed.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
* release_3.2.0: (66 commits)
fix workflow maker lookup issues
adding extra logic check for ansible_facts in smart search
adding "admin_role" as a default query param for insights cred lookup
changing insights cred lookup to not use hard coded cred type
fix rounding of capacity percentage
Catch potential unicode errors when looking up addrinfo
fixing typo with adding query params for instance groups modal
move percentage capacity to variable
Add unit test for inventory_sources_already_updated
Check for inventory sources already updated from start args
Fixed inventory completed jobs pagination bug by setting default page size
Remove the logic blocking dependent inventory updates on callbacks
fix instance group percentage
Remove host-filter-modal import
Fix partial hover highlight of host filter modal row
Removed leading slash on basePath
Fixed host nested groups pagination
Added trailing slash to basePath
Fixed nested groups pagination
Fixed host_filter searching related fields
...
Relates #7656 in ansible-tower.
We have been using comma `,` and space ` ` to separate search terms in
query string `<field_name>__search=<search terms>`, however in general
we can always use `&` to achieve separation like
`<field_name>__search=<search term 1>&<field_name>__search=<search term
2>&...`. Using specific delimiters makes it impossible for search terms
to contain those delimiters, so they are better off being removed.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
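With the delimiters gone, repeated parameters do the job; for example, `requests` encodes a list value as repeated keys (the host and credentials below are placeholders):

```python
import requests

# Produces: /api/v2/hosts/?name__search=web&name__search=prod
resp = requests.get(
    'https://awx.example.com/api/v2/hosts/',
    params={'name__search': ['web', 'prod']},
    auth=('admin', 'password'),
)
print(resp.status_code)
```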
Relates to #7586 of ansible-tower as a follow-up of fix #420 of tower.
The original fix works for Django version 1.9 and above; this PR
expands the solution to Django version 1.8 and below.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
Relates #7386 of ansible-tower.
Due to the uniqueness of the Tower configuration datastore model, it is not
fully compatible with the activity stream workflow. This PR introduces a
setting field for the activity stream model along with other changes to make
Tower configuration a special case for activity streams.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
instead of writing individual migrations for new built-in credential
types, this change makes the "setup_tower_managed_defaults" function
idempotent so that it only adds the credential types you're missing
Relates #7684 of ansible-tower.
Slugifying usernames in python-social-auth disallows
any non-alphanumeric characters, which is overkill
for awx/tower, thus it is disabled here.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
Followed instructions to the letter from a new install and found that there was no mention that the Docker service must be started. Tried to add this mention where it would have benefited me, but I am new to GitHub and not sure if I'm doing this right.
* Recreating user issues increasingly depends on the chosen install
path. Thus, we need this information when recreating the issue
locally. Ask the user to include this information in the issue template.
Manual initialization allows for some asynchronous work to
finish ahead of Angular's startup. The initial motivation is
to be able to guarantee translation files have been fetched
before rendering content that needs translation. If a locale
isn't supported or if the request to get a json file fails,
the i18n service falls back to en.
Signed-off-by: gconsidine <gconsidi@redhat.com>
Connect #7666 of ansible-tower and follow up the original fix, tower #455.
The original fix solves the problem of duplicated db keys, but breaks a
rule for enterprise users: 'Enterprise users cannot be
created/authenticated if non-enterprise users with the same name have
already been created in Tower.' This fix restores that rule.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
This matters if overwrite=True for an inventory source import:
creating a new inventory source through the v1 API will not include
deprecated_group inside of the InventorySource groups m2m related
connection, but migrations from 3.1 will.
In those migrated cases, this code will leave the deprecated_group
untouched, so as not to trigger its cascade delete.
* Remove extraneous quotes when the authorization token is retrieved
from a cookie
* Fix thrown error when invalid creds are supplied on login due to
attempt to access property of an undefined value
The UI has a handful of dependencies attached to window (angular,
jquery, lodash, etc). In the case of schedules, jquery was included
and extended as expected, but then clobbered by another module. This
prevented the Select2 dependency from working as expected.
Rather than relying on webpack to export particular dependencies as
global, that work is done in the vendor entry point now instead.
- prevent dual-entry for first item in running jobs due to
setdefault syntax
- fix issue where queues (celery tasks) only returned the last
item in the inspector output due to a looping problem;
this caused reaper bugs in production
Use license_info.features.ldap & license_info.features.enterprise_auth from /api/v2/config for
auth providers button enabling instead of using info from /api/v2/settings/all
Signed-off-by: Julen Landa Alustiza <julen@zokormazo.info>
This also adds an option to *not* use the local container for building
the software distribution, which is required for a local minishift
environment based install.
django-polymorphic itself generates queries for polymorphic object
lookups, and these queries for UnifiedJob are *not* properly deferring the
`result_stdout_text` column, resulting in more very slow queries. This
solution is _very_ hacky, and very specific to this specific
version of Django and django-polymorphic, but it works until we can
solve this problem the proper way in 3.3 (by removing large stdout blobs
from the database).
see: https://github.com/ansible/ansible-tower/issues/7568
For some reason, some docker deployments seem not to be able to resolve
the awxweb host from the awx task host, at least when started from the
playbook. This hopefully provides a resolution for that.
result_stdout_text can be _very_ large - some customers have 5MB+ per
job; querying for this in list contexts results in _very_ large datasets
being read from the database which is very slow. It's very uncommon to
actually need this column outside of the context of job details, so
defer it.
see: https://github.com/ansible/ansible-tower/issues/7568
When considering previous / current Project Updates we weren't
properly sorting the previous runs.
We also make sure we filter down to just "check" style project updates
and don't consider 'run' style standalone project updates when
deciding which project updates are potentially related.
This attempts to detect if there are migrations in-progress and will
force display an interstitial page in the process that attempts to
load the index page every 10s until it succeeds.
This is only attached in production settings so the development
environment can proceed even if the migrations haven't been applied yet
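A sketch of how such a check can work in Django middleware (the class and template names are illustrative, and a real implementation would cache the result rather than re-check on every request):

```python
from django.db import connection
from django.db.migrations.executor import MigrationExecutor
from django.shortcuts import render

class MigrationsPendingMiddleware(object):
    """If unapplied migrations are detected, render an interstitial
    page (which can retry the index every 10 seconds) instead of the
    requested view."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        executor = MigrationExecutor(connection)
        plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
        if plan:  # non-empty plan => migrations still need to run
            return render(request, 'installing.html', status=503)
        return self.get_response(request)
```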
A lot of people have experienced issues with the system-level dependencies that are required in order to build the source distribution that is handed off to the image builds. This makes it unnecessary to install any additional software on the host machine aside from Ansible and Docker.
* Use git+https prefix in git-based npm dependencies
* Use ejs template for index to fix extraneous slash in path
* Remove outdated documentation
* Remove unused service
* Regenerate shrinkwrap
Without this patch, SAML backend will only use the first letter of the NameID as attribute value.
Signed-off-by: Patrick Uiterwijk <patrick@puiterwijk.org>
This avoids re-loading objects from the database in our
chain of permission checking, wherever possible.
access.py is equipped to handle object references instead
of pk ints, and permissions.py is changed to pass those refs.
* awx_installer:
Adds docker installation steps (#15)
Call out eval for setting up the minishift environment
Support official image builds with awx logos
Add support for standalone docker install
First iteration on INSTALL
Adds edge terminated route
Ignore Pycharm droppings
Force reauth docker registry login in installer
Reduce the size of the production container image
Initial awx installer
* release_3.2.0: (138 commits)
Pull Dutch and Spanish translations
Increase verbosity of CTiT Logging test error handling
fix to console error of conditional toggle showing
Fix error message when calling remove on undefined DOM element
fix ctit logging toggle from being shown for log types other than https
Remove delete and edit buttons from smart inventory host list. Only option should be view.
feedback from PR
Enhance query string in ad hoc command event save to consider smart inventory
Fixed host filter clearall
fuller validation for host_filter
On JT form, Show credential tags from summary_fields if user doesn't have view permission on the credential
Align key toggle button to role dropdown in user team permissions modal
Removed rogue console.logs
Removed extra refresh call
Enhance query string in job event save to consider smart inventory
Fix typo in scan_packages plugin
Switch running_jobs and capacity table columns
Disable insights cred when user doesn't have edit permissions
Disallow changing credential_type of an existing credential
fix bug with host_filter RBAC check
...
An instance has been lost, and jobs are still either running
or waiting inside of its instance group
RBAC - user does not have permission to see some of the
groups that would be used in the capacity calculation
For some cases, a naive capacity dictionary is returned; the
main goal is to not throw errors and avoid unpredicted behavior.
Detailed capacity tests are moved into a new unit test file.
The task manager was doing work to compute currently consumed
capacity; this is moved into the manager and applied in the
same form to the instance group list.
* The django middleware call stack behavior is changed by DRF. As a
result, during process_request in sso/middleware.py, request.user
is not set as you would expect it to be set by the middleware
django.contrib.auth.middleware.AuthenticationMiddleware.
cache.set() and cache.get() arguments are logged when the log level is
DEBUG; this _may_ include plaintext secrets; strip sensitive values
before logging them
see: https://github.com/ansible/ansible-tower/issues/7476
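The scrubbing amounts to a filter applied before interpolation, along these lines (the marker list and function name are assumptions):

```python
SENSITIVE_MARKERS = ('password', 'secret', 'token')

def cleanse(key, value):
    """Mask values for sensitive-looking cache keys before they are
    interpolated into a DEBUG-level log message."""
    if any(marker in str(key).lower() for marker in SENSITIVE_MARKERS):
        return '*' * 8
    return value
```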
This saves the id value of the setting into the cache
if the setting is encrypted. That can then be combined
with the secret_key in order to decrypt the setting,
without having to make an additional query to the database.
* Reap running tasks on non-netsplit nodes
* Reap running tasks on known to be offline nodes
* Reap waiting tasks with no celery id anywhere if waiting >= 60 seconds
* Workflow jobs are virtual jobs that don't actually run. Thus they
won't have a celery id and aren't candidates for the generic reaping.
* Better error logging when Instance not found in reaping code.
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us at `#ansible-awx` on irc.freenode.net, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
## Table of contents
* [Things to know prior to submitting code](#things-to-know-prior-to-submitting-code)
* [Setting up your development environment](#setting-up-your-development-environment)
  * [Prerequisites](#prerequisites)
    * [Docker](#docker)
    * [Docker compose](#docker-compose)
    * [Node and npm](#node-and-npm)
  * [Build the environment](#build-the-environment)
    * [Fork and clone the AWX repo](#fork-and-clone-the-awx-repo)
    * [Create local settings](#create-local-settings)
    * [Build the base image](#build-the-base-image)
    * [Build the user interface](#build-the-user-interface)
  * [Running the environment](#running-the-environment)
    * [Start the containers](#start-the-containers)
    * [Start from the container shell](#start-from-the-container-shell)
  * [Post Build Steps](#post-build-steps)
    * [Start a shell](#start-a-shell)
    * [Create an admin user](#create-an-admin-user)
    * [Load demo data](#load-demo-data)
    * [Building API Documentation](#building-api-documentation)
  * [Accessing the AWX web interface](#accessing-the-awx-web-interface)
  * [Purging containers and images](#purging-containers-and-images)
* [What should I work on?](#what-should-i-work-on)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Reporting Issues](#reporting-issues)
* [How issues are resolved](#how-issues-are-resolved)
* [Ansible Issue Bot](#ansible-issue-bot)
## Things to know prior to submitting code

- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md), reproduced below. Any contribution that does not have such a signoff will not be merged.
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).
```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```

## Setting up your development environment

The AWX development environment workflow and toolchain is based on Docker, and the docker-compose tool, to provide dependencies, services, and databases necessary to run all of the components. It also binds the local source tree into the development container, making it possible to observe and test changes in real time.

### Prerequisites

#### Docker

Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker themselves maintain a separate repo that tracks more closely to the latest releases.

For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows) respectively.

For Linux platforms, refer to the following from Docker:

**Fedora**

#### Docker compose

If you're not using Docker for Mac, or Docker for Windows, you may need, or choose to, install the Docker compose Python module separately, in which case you'll need to run the following:

```bash
(host)$ pip install docker-compose
```
#### Node and npm

The AWX UI requires the following:

- Node 6.x LTS version
- NPM 3.x LTS

### Build the environment

#### Fork and clone the AWX repo

If you have not done so already, you'll need to fork the AWX repo on GitHub. For more on how to do this, see [Fork a Repo](https://help.github.com/articles/fork-a-repo/).
#### Create local settings

AWX will import the file `awx/settings/local_settings.py` and combine it with defaults in `awx/settings/defaults.py`. This file is required for starting the development environment and startup will fail if it's not provided.

#### Build the base image

The AWX base container image (defined in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and symbolic links into the development environment that make running the services easy. You'll first need to build the image:

```bash
(host)$ make docker-compose-build
```

The image will only need to be rebuilt if the requirements or OS dependencies change. A core concept about this image is that it relies on having your local development environment mapped in.

#### Build the user interface

In order for the AWX user interface to load from the development environment, it must be built:

```bash
(host)$ make ui-devel
```

When developing features and fixes for the user interface, you can find more detail in the [UI Developer README](awx/ui/README.md).

### Running the environment

#### Start the containers

Start the development containers by running the following:

```bash
(host)$ make docker-compose
```

The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django, celery, and the front end build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:

```bash
# List running containers
(host)$ docker ps
CONTAINER ID   IMAGE                                              COMMAND                  CREATED          STATUS          PORTS                                                                                                                                       NAMES
aa4a75d6d77b   gcr.io/ansible-tower-engineering/awx_devel:devel   "/tini -- /bin/sh ..."   23 seconds ago   Up 15 seconds   0.0.0.0:5555->5555/tcp, 0.0.0.0:6899-6999->6899-6999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 22/tcp, 0.0.0.0:8080->8080/tcp   tools_awx_1
e4c0afeb548c   postgres:9.6                                       "docker-entrypoint..."   26 seconds ago   Up 23 seconds   5432/tcp                                                                                                                                    tools_postgres_1
0089699d5afd   tools_logstash                                     "/docker-entrypoin..."   26 seconds ago   Up 25 seconds                                                                                                                                               tools_logstash_1
4d4ff0ced266   memcached:alpine                                   "docker-entrypoint..."   26 seconds ago   Up 25 seconds   0.0.0.0:11211->11211/tcp                                                                                                                    tools_memcached_1
92842acd64cd   rabbitmq:3-management                              "docker-entrypoint..."   26 seconds ago   Up 24 seconds   4369/tcp, 5671-5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp                                                                     tools_rabbitmq_1
```

**NOTE**

> The Makefile assumes that the image you built is tagged with your current branch. This allows you to build images for different contexts or branches. When starting the containers, you can choose a specific branch by setting `COMPOSE_TAG=<branch name>` in your environment.

> For example, you might be working in a feature branch, but you want to run the containers using the `devel` image you built previously. To do that, start the containers using the following command: `$ COMPOSE_TAG=devel make docker-compose`
##### Wait for migrations to complete
The first time you start the environment, database migrations need to run in order to build the PostgreSQL database. It will take a few moments, but eventually you will see output in your terminal session that looks like the following:

```
awx_1 | Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
awx_1 | Synchronizing apps without migrations:
awx_1 | Creating tables...
awx_1 | Running deferred SQL...
awx_1 | Installing custom SQL...
awx_1 | Running migrations:
awx_1 | Rendering model states... DONE
awx_1 | Applying contenttypes.0001_initial... OK
awx_1 | Applying contenttypes.0002_remove_content_type_name... OK
awx_1 | Applying auth.0001_initial... OK
awx_1 | Applying auth.0002_alter_permission_name_max_length... OK
awx_1 | Applying auth.0003_alter_user_email_max_length... OK
awx_1 | Applying auth.0004_alter_user_username_opts... OK
awx_1 | Applying auth.0005_alter_user_last_login_null... OK
awx_1 | Applying auth.0006_require_contenttypes_0002... OK
awx_1 | Applying taggit.0001_initial... OK
awx_1 | Applying taggit.0002_auto_20150616_2121... OK
awx_1 | Applying main.0001_initial... OK
awx_1 | Applying main.0002_squashed_v300_release... OK
awx_1 | Applying main.0003_squashed_v300_v303_updates... OK
awx_1 | Applying main.0004_squashed_v310_release... OK
awx_1 | Applying conf.0001_initial... OK
awx_1 | Applying conf.0002_v310_copy_tower_settings... OK
...
```
Once migrations are completed, you can begin using AWX.
#### Start from the container shell
Often you'll want to start the development environment without immediately starting all of the services in the *awx* container, and instead be taken directly to a shell. You can do this with the following:
```bash
(host)$ make docker-compose-test
```
Using `docker exec`, this will create a session in the running *awx* container, and place you at a command prompt, where you can run shell commands inside the container.
If you want to start and use the development environment, you'll first need to bootstrap it by running the following command:
```bash
(container)# /bootstrap_development.sh
```
The above will do all the setup tasks, including running database migrations, so it may take a couple of minutes.

Now you can start each service individually, or start all services in a pre-configured tmux session like so:
```bash
(container)# cd /awx_devel
(container)# make server
```
### Post Build Steps
Before you can log in and use the system, you will need to create an admin user. Optionally, you may also want to load some demo data.

##### Start a shell

To create the admin user, and load demo data, you first need to start a shell session on the *awx* container. In a new terminal session, use the `docker exec` command as follows to start the shell session:
```bash
(host)$ docker exec -it tools_awx_1 bash
```
This creates a session in the *awx* container, just as if you were using `ssh`, and allows you to execute commands within the running container.
##### Create an admin user
Before you can log into AWX, you need to create an admin user. With this user you will be able to create more users, and begin configuring the server. From within the container shell, run the following command:
```bash
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
(container)# awx-manage create_preload_data
```
**NOTE**

> This information will persist in the database running in the `tools_postgres_1` container, until the container is removed. You may periodically need to recreate
this container, and thus the database, if the database schema changes in an upstream commit.

##### Building API Documentation

AWX includes support for building [Swagger/OpenAPI
documentation](https://swagger.io). To build the documentation locally, run:

```bash
(container)/awx_devel$ make swagger
```

This will write a file named `swagger.json` that contains the API specification
in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
### Accessing the AWX web interface

You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).

To log in, use the admin user and password you created above in [Create an admin user](#create-an-admin-user).

When working on the source code for AWX, the code will auto-reload when changes are made, with the exception of any background tasks that run in celery.

### Purging containers and images

When necessary, remove any AWX containers and images by running the following:

```bash
(host)$ make docker-clean
```

There are a host of other shortcuts, tools, and container configurations in the Makefile designed for various purposes. Feel free to explore.
## What should I work on?

We list our specs in `/docs`: `/docs/current` covers things that we are actively working on, and `/docs/future` holds ideas for future work and the direction we want that work to take.

For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).

If an enhancement has someone assigned to it, then that person is responsible for working it. If you feel like you could contribute, then reach out to that person.
> Add UI detail to these
Fixing bugs, adding translations, and updating the documentation are always appreciated, so reviewing the backlog of issues is always a good place to start. For extra information on debugging tools, see [Debugging](https://github.com/ansible/awx/blob/devel/docs/debugging.md).
**NOTE**
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the `#ansible-awx` channel on irc.freenode.net, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
**NOTE**
> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](./awx/ui/README.md).
## Submitting Pull Requests
Fixes and Features for AWX will go through the Github pull request process. Submit your pull request (PR) against the `devel` branch.
Here are a few things you can do to help the visibility of your change, and increase the likelihood that it will be accepted:
* No issues when running linters/code checkers
  * Python: flake8: `(container)/awx_devel$ make flake8`
  * JavaScript: Jasmine: `(container)/awx_devel$ make ui-test-ci`
* Write tests for new functionality, update/add tests for bug fixes
* Make the smallest change possible
* Write good commit messages. See [How to write a Git commit message](https://chris.beams.io/posts/git-commit/).
It's generally a good idea to discuss features with us first by engaging us in the `#ansible-awx` channel on irc.freenode.net, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
We like to keep our commit history clean, and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase`, rather than
`git pull`, and `git rebase`, rather than `git merge`.
Sometimes it might take us a while to fully review your PR. We try to keep the `devel` branch in good working order, and so we review requests carefully. Please be patient.
All submitted PRs will have the linter and unit tests run against them, and the status reported in the PR.
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. Before opening a new issue, we ask that you please view our [Issues guide](./ISSUES.md).

Use the Github issue tracker for filing bugs. In order to save time and help us respond to issues quickly, make sure to fill out as much of the issue template
as possible. Version information and an accurate reproducing scenario are critical to helping us identify the problem.

When reporting issues for the UI, we also appreciate having screenshots and any error messages from the web browser's console. It's not unusual for browser extensions
and plugins to cause problems. Reporting those will also help speed up analyzing and resolving UI bugs.

For the API and backend services, please capture all of the logs that you can from the time the problem was occurring.

Don't use the issue tracker to get help on how to do something - please use the mailing list and IRC for that.
## How issues are resolved
We triage our issues into high, medium, and low priority and tag them with the relevant component (api, ui, installer, etc.). We will typically focus on high priority
issues. There aren't hard and fast rules for determining the severity of an issue, but generally high priority issues have an increased likelihood of breaking
existing functionality and/or negatively impacting a large number of users.

If your issue isn't considered `high` priority, then please be patient as it may take some time to get to your report.

Before opening a new issue, please use the issue search feature to see if it's already been reported. If you have any extra detail to provide, then please comment.
Rather than posting a "me too" comment, you might consider giving it a "thumbs up" on GitHub.
## Ansible Issue Bot
> Fill in
ANSIBLE TOWER BY RED HAT END USER LICENSE AGREEMENT
This end user license agreement (“EULA”) governs the use of the Ansible Tower software and any related updates, upgrades, versions, appearance, structure and organization (the “Ansible Tower Software”), regardless of the delivery mechanism.
1. License Grant. Subject to the terms of this EULA, Red Hat, Inc. and its affiliates (“Red Hat”) grant to you (“You”) a non-transferable, non-exclusive, worldwide, non-sublicensable, limited, revocable license to use the Ansible Tower Software for the term of the associated Red Hat Software Subscription(s) and in a quantity equal to the number of Red Hat Software Subscriptions purchased from Red Hat for the Ansible Tower Software (“License”), each as set forth on the applicable Red Hat ordering document. You acquire only the right to use the Ansible Tower Software and do not acquire any rights of ownership. Red Hat reserves all rights to the Ansible Tower Software not expressly granted to You. This License grant pertains solely to Your use of the Ansible Tower Software and is not intended to limit Your rights under, or grant You rights that supersede, the license terms of any software packages which may be made available with the Ansible Tower Software that are subject to an open source software license.
2. Intellectual Property Rights. Title to the Ansible Tower Software and each component, copy and modification, including all derivative works whether made by Red Hat, You or on Red Hat's behalf, including those made at Your suggestion and all associated intellectual property rights, are and shall remain the sole and exclusive property of Red Hat and/or its licensors. The License does not authorize You (nor may You allow any third party, specifically non-employees of Yours) to: (a) copy, distribute, reproduce, use or allow third party access to the Ansible Tower Software except as expressly authorized hereunder; (b) decompile, disassemble, reverse engineer, translate, modify, convert or apply any procedure or process to the Ansible Tower Software in order to ascertain, derive, and/or appropriate for any reason or purpose, including the Ansible Tower Software source code or source listings or any trade secret information or process contained in the Ansible Tower Software (except as permitted under applicable law); (c) execute or incorporate other software (except for approved software as appears in the Ansible Tower Software documentation or specifically approved by Red Hat in writing) into Ansible Tower Software, or create a derivative work of any part of the Ansible Tower Software; (d) remove any trademarks, trade names or titles, copyright legends or any other proprietary marking on the Ansible Tower Software; (e) disclose the results of any benchmarking of the Ansible Tower Software (whether or not obtained with Red Hat’s assistance) to any third party; (f) attempt to circumvent any user limits or other license, timing or use restrictions that are built into, defined or agreed upon, regarding the Ansible Tower Software. You are hereby notified that the Ansible Tower Software may contain time-out devices, counter devices, and/or other devices intended to ensure the limits of the License will not be exceeded (“Limiting Devices”). If the Ansible Tower Software contains Limiting Devices, Red Hat will provide You materials necessary to use the Ansible Tower Software to the extent permitted. You may not tamper with or otherwise take any action to defeat or circumvent a Limiting Device or other control measure, including but not limited to, resetting the unit amount or using a false host identification number for the purpose of extending any term of the License.
3. Evaluation Licenses. Unless You have purchased Ansible Tower Software Subscriptions from Red Hat or an authorized reseller under the terms of a commercial agreement with Red Hat, all use of the Ansible Tower Software shall be limited to testing purposes and not for production use (“Evaluation”). Unless otherwise agreed by Red Hat, Evaluation of the Ansible Tower Software shall be limited to an evaluation environment and the Ansible Tower Software shall not be used to manage any systems or virtual machines on networks being used in the operation of Your business or any other non-evaluation purpose. Unless otherwise agreed by Red Hat, You shall limit all Evaluation use to a single 30 day evaluation period and shall not download or otherwise obtain additional copies of the Ansible Tower Software or license keys for Evaluation.
4. Limited Warranty. Except as specifically stated in this Section 4, to the maximum extent permitted under applicable law, the Ansible Tower Software and the components are provided and licensed “as is” without warranty of any kind, expressed or implied, including the implied warranties of merchantability, non-infringement or fitness for a particular purpose. Red Hat warrants solely to You that the media on which the Ansible Tower Software may be furnished will be free from defects in materials and manufacture under normal use for a period of thirty (30) days from the date of delivery to You. Red Hat does not warrant that the functions contained in the Ansible Tower Software will meet Your requirements or that the operation of the Ansible Tower Software will be entirely error free, appear precisely as described in the accompanying documentation, or comply with regulatory requirements.
5. Limitation of Remedies and Liability. To the maximum extent permitted by applicable law, Your exclusive remedy under this EULA is to return any defective media within thirty (30) days of delivery along with a copy of Your payment receipt and Red Hat, at its option, will replace it or refund the money paid by You for the media. To the maximum extent permitted under applicable law, neither Red Hat nor any Red Hat authorized distributor will be liable to You for any incidental or consequential damages, including lost profits or lost savings arising out of the use or inability to use the Ansible Tower Software or any component, even if Red Hat or the authorized distributor has been advised of the possibility of such damages. In no event shall Red Hat's liability or an authorized distributor’s liability exceed the amount that You paid to Red Hat for the Ansible Tower Software during the twelve months preceding the first event giving rise to liability.
6. Export Control. In accordance with the laws of the United States and other countries, You represent and warrant that You: (a) understand that the Ansible Tower Software and its components may be subject to export controls under the U.S. Commerce Department’s Export Administration Regulations (“EAR”); (b) are not located in any country listed in Country Group E:1 in Supplement No. 1 to part 740 of the EAR; (c) will not export, re-export, or transfer the Ansible Tower Software to any prohibited destination or to any end user who has been prohibited from participating in US export transactions by any federal agency of the US government; (d) will not use or transfer the Ansible Tower Software for use in connection with the design, development or production of nuclear, chemical or biological weapons, or rocket systems, space launch vehicles, or sounding rockets or unmanned air vehicle systems; (e) understand and agree that if you are in the United States and you export or transfer the Ansible Tower Software to eligible end users, you will, to the extent required by EAR Section 740.17 obtain a license for such export or transfer and will submit semi-annual reports to the Commerce Department’s Bureau of Industry and Security, which include the name and address (including country) of each transferee; and (f) understand that countries including the United States may restrict the import, use, or export of encryption products (which may include the Ansible Tower Software) and agree that you shall be solely responsible for compliance with any such import, use, or export restrictions.
7. General. If any provision of this EULA is held to be unenforceable, that shall not affect the enforceability of the remaining provisions. This agreement shall be governed by the laws of the State of New York and of the United States, without regard to any conflict of laws provisions. The rights and obligations of the parties to this EULA shall not be governed by the United Nations Convention on the International Sale of Goods.
This document provides a guide for installing AWX.
## Table of contents
- [Getting started](#getting-started)
- [Clone the repo](#clone-the-repo)
- [AWX branding](#awx-branding)
- [Prerequisites](#prerequisites)
- [System Requirements](#system-requirements)
- [AWX Tunables](#awx-tunables)
- [Choose a deployment platform](#choose-a-deployment-platform)
- [Official vs Building Images](#official-vs-building-images)
- [OpenShift](#openshift)
- [Prerequisites](#prerequisites-1)
- [Deploying to Minishift](#deploying-to-minishift)
- [Pre-build steps](#pre-build-steps)
- [PostgreSQL](#postgresql)
- [Start the build](#start-the-build)
- [Post build](#post-build)
- [Accessing AWX](#accessing-awx)
- [Kubernetes](#kubernetes)
- [Prerequisites](#prerequisites-2)
- [Pre-build steps](#pre-build-steps-1)
- [Start the build](#start-the-build-1)
- [Accessing AWX](#accessing-awx-1)
- [SSL Termination](#ssl-termination)
- [Docker or Docker Compose](#docker-or-docker-compose)
- [Prerequisites](#prerequisites-3)
- [Pre-build steps](#pre-build-steps-2)
- [Deploying to a remote host](#deploying-to-a-remote-host)
- [Inventory variables](#inventory-variables)
- [Docker registry](#docker-registry)
- [PostgreSQL](#postgresql-1)
- [Proxy settings](#proxy-settings)
- [Start the build](#start-the-build-2)
- [Post build](#post-build-1)
- [Accessing AWX](#accessing-awx-2)
## Getting started
### Clone the repo
If you have not already done so, you will need to clone (create a local copy of) the [AWX repo](https://github.com/ansible/awx). For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).
Once you have a local copy, run commands within the root of the project tree.
### AWX branding
You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).
To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.
### Prerequisites
Before you can run a deployment, you'll need the following installed in your local environment:
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html), version 2.4 or later
- [Git](https://git-scm.com/), version 1.8.4 or later
### System Requirements
The system that runs the AWX service will need to satisfy the following requirements:
- At least 4GB of memory
- At least 2 CPU cores
- At least 20GB of disk space
- A running Docker, OpenShift, or Kubernetes environment
### AWX Tunables
**TODO** add tunable bits
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to an OpenShift cluster, a Kubernetes cluster, docker-compose, or a standalone Docker daemon. The remainder of this document will walk you through the process of building the images, and deploying them to your chosen platform.
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
In the sections below, you'll find deployment details and instructions for each platform:
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
- [Docker or Docker Compose](#docker-or-docker-compose)
### Official vs Building Images
When installing AWX you have the option of building your own images, or using the images provided on DockerHub (see [awx_web](https://hub.docker.com/r/ansible/awx_web/) and [awx_task](https://hub.docker.com/r/ansible/awx_task/)).
This is controlled by the following variables in the `inventory` file:
```
dockerhub_base=ansible
dockerhub_version=latest
```
If these variables are present then all deployments will use these hosted images. If the variables are not present then the images will be built during the install.
*dockerhub_base*
> The base location on DockerHub where the images are hosted (by default this pulls container images named `ansible/awx_web:tag` and `ansible/awx_task:tag`)
*dockerhub_version*
> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123
## OpenShift
### Prerequisites
To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on, the cluster.
#### Deploying to Minishift
Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).
The Minishift VM contains a Docker daemon, which you can use to build the AWX images. This is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to set up your environment:
```bash
# Set DOCKER environment variable to point to the Minishift VM
$ eval $(minishift docker-env)
```
**Note**
> If you choose not to use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry, and granting the cluster access to it, or exposing the internal registry, and pushing the images into it.
### Pre-build steps
Before starting the build process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*openshift_host*
> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.
*awx_openshift_project*
> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.
*openshift_user*
> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.
*docker_registry*
> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.
*docker_registry_username*
> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.
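Putting it together, a Minishift-based configuration might look like the following sketch (the `openshift_host` value is illustrative; use the address reported by `minishift ip` in your environment):

```
openshift_host=192.168.64.2
awx_openshift_project=awx
openshift_user=developer
docker_registry=172.30.1.1:5000
docker_registry_repository=awx
docker_registry_username=developer
```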
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a pod. The database is configured for persistence and will create a persistent volume claim named `postgresql`. By default it will claim 5GB from the available persistent volume pool. This can be tuned by setting a variable in the inventory file or on the command line during the `ansible-playbook` run.
    ansible-playbook ... -e pg_volume_capacity=n
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_database`, and `pg_port` with the connection information. When `pg_hostname` is set, the installer assumes you have configured the database at that location and will not launch the postgresql pod.
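For example, an external database configuration might look like this (the hostname and credentials below are placeholders):

```
pg_hostname=postgres.example.com
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432
```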
### Start the build
To start the build, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.
If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc login`.
To start the build and deployment, run the following (docker_registry_password is optional if using official images):
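The exact invocation depends on your environment; a typical run might look like the following sketch, with the `openshift_password` value replaced by your own password, and the registry token generated by `oc whoami -t` as described above:

```bash
# Launch the build and deployment from the installer directory
$ ansible-playbook -i inventory install.yml \
    -e openshift_password=developer \
    -e docker_registry_password=$(oc whoami -t)
```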
After the playbook run completes, check the status of the deployment by running `oc get pods`:
```bash
# View the running pods
$ oc get pods
NAME READY STATUS RESTARTS AGE
awx-3886581826-5mv0l 4/4 Running 0 8s
postgresql-1-l85fh 1/1 Running 0 20m
```
In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:
```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
```
You will see output indicating that database migrations are running.
The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:
```bash
# View available routes
$ oc get routes
NAME          HOST/PORT                              PATH      SERVICES      PORT      TERMINATION   WILDCARD
awx-web-svc   awx-web-svc-awx.192.168.64.2.nip.io              awx-web-svc   http      edge          None
```
The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).
Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
## Kubernetes
### Prerequisites
A Kubernetes deployment will require you to have access to a Kubernetes cluster as well as the following tools:

- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [helm](https://helm.sh/docs/intro/install/)

The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.
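If you manage multiple clusters, you can verify and select the context before running the installer (the context name below is hypothetical):

```bash
# List the contexts kubectl knows about, then select the target cluster
$ kubectl config get-contexts
$ kubectl config use-context my-cluster
```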
### Pre-build steps
Before starting the build process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section. Make sure the OpenShift and standalone Docker sections are commented out:
*kubernetes_context*
> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use.
*awx_kubernetes_namespace*
> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist.
*docker_registry_*
> These settings should be used if building your own base images. You'll need access to an external registry, and are responsible for making sure your Kubernetes cluster can reach and use it. If these are undefined, and the `dockerhub_` configuration settings are uncommented, then the images will be pulled from DockerHub instead (see the example below).
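Putting it together, a minimal Kubernetes configuration that pulls the official images might look like this sketch (the context name is a placeholder):

```
kubernetes_context=my-cluster
awx_kubernetes_namespace=awx
dockerhub_base=ansible
dockerhub_version=latest
```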
### Start the build
After making changes to the `inventory` file, use `ansible-playbook` to begin the install:
```bash
$ ansible-playbook -i inventory install.yml
```
### Post build
After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace awx with the namespace you used):
```bash
# View the running pods; it may take a few minutes for everything to reach the Running state
$ kubectl get pods --namespace awx
NAME READY STATUS RESTARTS AGE
awx-2558692395-2r8ss 4/4 Running 0 29s
awx-postgresql-355348841-kltkn 1/1 Running 0 1m
```
### Accessing AWX
The AWX web interface is running in the AWX pod, behind the `awx-web-svc` service.
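As a quick check, you can inspect the service with `kubectl` (service and namespace names as used in this guide):

```bash
# Confirm the service exposing the AWX web interface exists
$ kubectl get svc awx-web-svc --namespace awx
```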
The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it explicitly. You can see what Kubernetes knows about the Ingress with:
```bash
$ kubectl get ing --namespace awx
NAME HOSTS ADDRESS PORTS AGE
awx-web-svc * 35.227.x.y 80 3m
```
If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to that address and access the AWX interface. For some providers it can take a few minutes to allocate and expose the address; other providers may require you to intervene manually.
### SSL Termination
Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX over HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).
## Docker or Docker Compose
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group, refer to the documentation for details)
If you're installing using Docker Compose, you'll need [Docker Compose](https://docs.docker.com/compose/install/).
### Pre-build steps
#### Deploying to a remote host
By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file, by commenting out `localhost`, and adding the remote host.
For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:
```
# localhost ansible_connection=local
awx-server
[all:vars]
...
```
In the above example, image build tasks will be delegated to `localhost`, which is typically where the clone of the AWX project exists. Built images will be archived, copied to the remote host, and imported into the remote Docker image cache. Tasks to start the AWX containers will then execute on the remote host.
If you choose to use the official images then the remote host will be the one to pull those images.
**Note**
> You may also want to set additional variables to control how Ansible connects to the host. For more information about this, view [Behavioral Inventory Parameters](http://docs.ansible.com/ansible/latest/intro_inventory.html#id12).
> As mentioned above, in [Prerequisites](#prerequisites-1), the prerequisites are required on the remote host.
> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
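For example, connection details can be supplied inline on the host line (the address and username below are placeholders):

```
# localhost ansible_connection=local
awx-server ansible_host=192.168.1.10 ansible_user=deploy
```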
#### Inventory variables
Before starting the build process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*postgres_data_dir*
> If you're using the default PostgreSQL container (see [PostgreSQL](#postgresql-1) below), provide a path that can be mounted to the container, and where the database can be persisted.
*host_port*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. Defaults to *80*.
*use_docker_compose*
> Switch to ``true`` to use Docker Compose instead of the standalone Docker install.
*docker_compose_dir*
> When using docker-compose, the `docker-compose.yml` file will be created there (default `/var/lib/awx`).
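Putting it together, a docker-compose based configuration might look like this sketch (the data directory path is a placeholder; choose any path that suits your host):

```
postgres_data_dir=/var/lib/awx/pgdocker
host_port=80
use_docker_compose=true
docker_compose_dir=/var/lib/awx
```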
#### Docker registry
If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:
*docker_registry*
> IP address and port, or URL, for accessing a registry.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Defaults to *awx*.
*docker_registry_username*
> Username of the user that will push images to the registry. Defaults to *developer*.
*docker_remove_local_images*
> Due to the way that the docker_image module behaves, images will not be pushed to a remote repository if they are present locally. Set this to `true` to delete local versions of the images that will be pushed to the remote. This will fail if containers are currently running from those images.
**Note**
> These settings are ignored if using official images.
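For illustration, a registry configuration might look like this (the registry address and username are placeholders):

```
docker_registry=registry.example.com:5000
docker_registry_repository=awx
docker_registry_username=developer
docker_remove_local_images=true
```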
#### Proxy settings
*http_proxy*
> IP address and port, or URL, for using an http_proxy.
*https_proxy*
> IP address and port, or URL, for using an https_proxy.
*no_proxy*
> Exclude IP address or URL from the proxy.
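For example (the proxy addresses below are placeholders):

```
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=internal.example.com
```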
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_database`, and `pg_port` with the connection information.
### Start the build
If you are not pushing images to a Docker registry, start the build by running the following:
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory install.yml
```
If you're pushing built images to a repository, then use the `-e` option to pass the registry password as follows, replacing *password* with the password of the username assigned to `docker_registry_username` (note that you will also need to remove `dockerhub_base` and `dockerhub_version` from the inventory file):
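A typical invocation might look like the following sketch (replace *password* as described above):

```bash
# Run the install, supplying the registry password on the command line
$ ansible-playbook -i inventory -e docker_registry_password=password install.yml
```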
After the playbook run completes, Docker will report up to 5 running containers. If you chose to use an existing PostgreSQL database, then it will report 4. You can view the running containers using the `docker ps` command, as follows:
```bash
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                NAMES
e240ed8209cd        awx_task:1.0.0.8    "/tini -- /bin/sh ..."   2 minutes ago       Up About a minute   8052/tcp                             awx_task
1cfd02601690        awx_web:1.0.0.8     "/tini -- /bin/sh ..."   2 minutes ago       Up About a minute   0.0.0.0:80->8052/tcp                 awx_web
55a552142bcd        memcached:alpine    "docker-entrypoint..."   2 minutes ago       Up 2 minutes        11211/tcp                            memcached
84011c072aad        rabbitmq:3          "docker-entrypoint..."   2 minutes ago       Up 2 minutes        4369/tcp, 5671-5672/tcp, 25672/tcp   rabbitmq
97e196120ab3        postgres:9.6        "docker-entrypoint..."   2 minutes ago       Up 2 minutes        5432/tcp                             postgres
```
If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the docker-compose.yml file is created (by default, `awx`).
Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT by running the following:
```bash
$ docker logs -f awx_task
Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying taggit.0001_initial... OK
Applying taggit.0002_auto_20150616_2121... OK
Applying main.0001_initial... OK
...
```
Once migrations complete, you will see log output similar to the following:
```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623(Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license"for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx
(changed: True)
Creating instance group tower
Added instance awx to tower
(changed: True)
...
```
### Accessing AWX
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
### Maintenance using docker-compose
After the installation, maintenance operations with docker-compose can be done by using the `docker-compose.yml` file created in the location pointed to by `docker_compose_dir`.
Among the possible operations, you may:
- Stop AWX: `docker-compose stop`
- Upgrade AWX: `docker-compose pull && docker-compose up --force-recreate`
See the [docker-compose documentation](https://docs.docker.com/compose/) for details.
Use the GitHub [issue tracker](https://github.com/ansible/awx/issues) for filing bugs. In order to save time, and help us respond to issues quickly, make sure to fill out as much of the issue template
as possible. Version information, and an accurate reproducing scenario are critical to helping us identify the problem.
Please don't use the issue tracker as a way to ask how to do something. Instead, use the [mailing list](https://groups.google.com/forum/#!forum/awx-project) and the `#ansible-awx` channel on irc.freenode.net to get help.
Before opening a new issue, please use the issue search feature to see if what you're experiencing has already been reported. If you have any extra detail to provide, please comment. Otherwise, rather than posting a "me too" comment, please consider giving it a ["thumbs up"](https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comment) to give us an indication of the severity of the problem.
### UI Issues
When reporting issues for the UI, we also appreciate having screenshots and any error messages from the web browser's console. It's not unusual for browser extensions
and plugins to cause problems. Reporting those will also help speed up analyzing and resolving UI bugs.
### API and backend issues
For the API and backend services, please capture all of the logs that you can from the time the problem occurred.
## How issues are resolved
We triage our issues into high, medium, and low, and tag them with the relevant component (api, ui, installer, etc.). We typically focus on higher priority issues first. There aren't hard and fast rules for determining the severity of an issue, but generally high priority issues have an increased likelihood of breaking existing functionality, and negatively impacting a large number of users.
If your issue isn't considered high priority, then please be patient as it may take some time to get to it.
### Issue states
`state:needs_triage` The issue has not yet been looked at by a person and still needs to be triaged. This is the initial state for all new issues and pull requests.
`state:needs_info` The issue needs more information, such as debug output or version details; some missing detail is currently preventing the issue from moving forward. This should be considered a blocked state.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer, or when a person is less familiar with the area of the code base the issue is for.
`state:needs_revision` More commonly used on pull requests, this state indicates that changes are being waited on.
`state:in_progress` The issue is actively being worked on; if you are working on, or plan to work on, a similar issue, you should be in contact with whoever is assigned.
`state:in_testing` The issue or pull request is currently being tested.
### AWX Issue Bot (awxbot)
We use an issue bot to help us label and organize incoming issues. This bot, awxbot, is a version of [ansible/ansibullbot](https://github.com/ansible/ansibullbot).
#### Overview
AWXbot performs many functions:
* Responds quickly to issues and pull requests.
* Identifies the maintainers responsible for reviewing pull requests.
* Identifies issue and pull request types and components (e.g. `type:bug`, `component:api`).
#### For issue submitters
The bot requires a minimal subset of information from the issue template:
* issue type
* component
* summary
If any of those items are missing, your issue will still get the `state:needs_triage` label, but it may be responded to more slowly than issues that have the complete set of information.
So please use the template whenever possible.
Currently you can expect the bot to add common labels such as `state:needs_triage`, `type:bug`, `type:enhancement`, `component:ui`, etc.
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will remain on your issue until a person has looked at it.
#### For pull request submitters
The bot requires a minimal subset of information from the pull request template:
* issue type
* component
* summary
If any of those items are missing, your pull request will still get the `state:needs_triage` label, but it may be responded to more slowly than pull requests that have a complete set of information.
Currently you can expect awxbot to add common labels such as `state:needs_triage`, `type:bug`, `component:docs`, etc.
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will remain on your pull request until a person has looked at it.
You can also expect the bot to CC maintainers of specific areas of the code; this notifies them that there is a pull request by placing a comment on it.
The comment will look something like `CC @matburt @wwitzel3 ...`.
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is the upstream project for [Tower](https://www.ansible.com/tower), a commercial derivative of AWX.
Resources
---------
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX and Tower, view the [Tower docs site](http://docs.ansible.com/ansible-tower/index.html).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
Contributing
------------
- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
- All code submissions are done through pull requests against the `devel` branch.
- All contributors must use `git commit --signoff` for any commit to be merged, and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` instead of `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort if the community decides some changes are needed.
Reporting Issues
----------------
If you're experiencing a problem, we encourage you to open an issue, and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us:
- Join the `#ansible-awx` channel on irc.freenode.net
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- [Open an Issue](https://github.com/ansible/awx/issues)
License
-------
[Apache v2](./LICENSE.md)
Refer to `LOCALIZATION.md` for translation and localization help.
Make a POST request to this resource to launch a job. If any passwords or variables are required, they should be passed in via POST data. To determine what values are required to launch a job based on this job template, you may make a GET request to this endpoint.
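For illustration, here is what that exchange might look like using the v2 API with `curl` (the host, credentials, and job template ID below are placeholders):

```bash
# Inspect which passwords and variables the launch requires
$ curl -sk -u admin:password \
    https://awx.example.com/api/v2/job_templates/1/launch/

# Launch the job, passing extra variables via POST data
$ curl -sk -u admin:password -H "Content-Type: application/json" \
    -X POST -d '{"extra_vars": {"greeting": "hello"}}' \
    https://awx.example.com/api/v2/job_templates/1/launch/
```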