This also relaxes some of the task manager rules on Instance Groups
throughout the full stack, such that workflow jobs tend to shortcut the
processing or omit it altogether.
This lets the workflow job spawning logic exist outside of the
instance group queues, which it doesn't need to participate in in the
first place.
Currently, updating policy settings doesn't trigger a re-evaluation of
instance group policies; this makes sure we re-evaluate in the event
that anything changes.
* The policy fields were previously not required, until a min/max validation
was enforced. This caused the fields to, unintentionally, become required.
* This fix makes the policy fields not required and provides sane defaults.
This enables the parent controller to re-instantiate the CodeMirror instance
on the fly when necessary, which the NetworkUI needs in order to update the
CodeMirror instance on the Host Detail panel.
This commit adds a new component to be used for showing CodeMirror
instances, along with an expandable capability to view more variables.
It also removes the previous directive for the Network UI that used
to include this functionality.
Isolated and Control groups are managed strictly from the standalone
setup playbook installer and should not be directly manageable from the
API. This is especially true since you can't assign or create isolated
groups from within the API itself.
In the future this may change, but allowing this in the API could leave
the system in a bad state.
* Before, we had a special group, tower, that ran any async work that
tower needed done. This gave users fine-grained control over which
nodes did background work. However, this granularity was too complicated
for users. So now, all tower system work goes to a special non-user
exposed celery queue. Tower remains the fallback instance group to
execute jobs on. The tower group will be created upon install and
protected from deletion.
I’m going to be reusing this code on the Tower side, and I’m trying to refactor some of the AWX-specific bits out. There will probably be more to come, but this is a good start.
The extra vars file created lives in the playbook private runtime
directory, and will be reaped along with the rest of the directory.
Adjust assorted unit tests as necessary.
Previously, if the main unit test module, test_common.py, was
run before this test, this test would fail.
By clearing the cache at the start of the test, we
make its behavior consistent and predictable no
matter which other tests are also being run,
and the assertion is adjusted to match.
* The fancy URL-finding regex can result in an infinite loop for malformed ipv6
URLs, so replace it with a more naive regex that can over-match.
* URIs found by the regex, including malformed ipv6 URLs, are passed to urlparse.
This can result in a parsing ValueError. For these cases we redact the entire
found URI.
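A minimal sketch of the over-match-then-redact idea (py3 urllib shown; the pattern and names are illustrative, not the actual AWX code):

```python
import re
from urllib.parse import urlsplit

# Naive pattern: may over-match, but cannot loop indefinitely.
URI_RE = re.compile(r'\w+://\S+')

def redact_uris(text):
    def _redact(match):
        uri = match.group(0)
        try:
            parts = urlsplit(uri)
        except ValueError:
            # Malformed URI (e.g. a bad ipv6 host): redact the entire match.
            return 'REDACTED'
        if parts.password:
            return uri.replace(parts.password, 'REDACTED')
        return uri
    return URI_RE.sub(_redact, text)
```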
* LDAP params is a new field. It contains the kwargs that will be passed
to the python class specified by group type. The default for group type
is MemberDNGroupType. The required params are now those in the defaults.
Only allow administrative action for a user
who is a system admin or auditor if the
requesting-user is a system admin.
Previously a user could be edited if the
requesting-user was an admin of ANY of the
orgs the user was a member of.
This is changed to require admin permission
to ALL orgs the user is a member of.
As a special-case, allow org admins to add
a user as a member to their organization if
the following conditions are met:
- the user is not member of any other orgs
- the org admin has permissions to all of
the roles the user has
* It's problematic to delete an instance that is referenced by a foreign
key where the referencing model has a Polymorphic parent.
* Specifically, when Django goes to nullify the relationship it relies
on the related instances[0] class type to issue a query to decide what
to nullify. So if the foreign key references multiple different types
(i.e. ProjectUpdate, Job) then only one of those class types will get
nullified. The end result is an IntegrityError when delete() is called.
* This changeset ensures that the parent Polymorphic class is queried so
that all the foreign key entries are nullified
* Also remove old Django "hack" that doesn't work with Django 1.11
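A minimal sketch of the approach, with illustrative model and field names rather than the actual AWX code:

```python
from awx.main.models import UnifiedJob  # illustrative: polymorphic parent of Job, ProjectUpdate, ...

def nullify_unified_job_fks(instance_group):
    # Query via the polymorphic parent so ProjectUpdate, Job, etc. are all
    # captured, not just the class type of related instances[0]; this
    # avoids the IntegrityError on delete().
    UnifiedJob.objects.filter(instance_group=instance_group).update(instance_group=None)
```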
- Handle out of order events by batching lines until all lines
are present
- In static mode, fetch pages of results until container is full
and scroll bar appears (for scroll events related to pagination)
* Was previously considering as isolated any instance that has at least one
group with no controller. This is technically correct, since an iso node
cannot be a part of a non-iso group.
* The query is now more robust and considers a node an iso node only if ALL
of the groups the node belongs to have a controller.
* Also added better debugging for the special tower instance group
* Added a check for the existence of the special tower group so that
logs are less "messy" during the install process.
* The Instance Group list of instances was getting over-written with
every call to the register_instance management command.
* This changeset appends --hostnames to the Instance Group policy list.
* It's possible to have an exception raised in BaseTask.run() before the
stdout handler gets defined. This is problematic when the exception
handler tries to access that undefined var .. causing another exception.
Note that the second exception is caught also but it's not desirable to
lose the first exception.
* This fix checks to see if the stdout handler var is defined before
calling its methods. Thus, we retain the original error message.
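A minimal sketch of the guard, with hypothetical helper names:

```python
class BaseTask:
    def run(self, pk, **kwargs):
        stdout_handle = None  # defined before anything that can raise
        try:
            stdout_handle = self.open_stdout_handle(pk)  # hypothetical helper
            ...
        except Exception:
            # Only touch the handle if it was actually created, so the
            # cleanup cannot raise and swallow the original traceback.
            if stdout_handle is not None:
                stdout_handle.close()
            raise
```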
`oc new-app --template=postgresql-persistent` has been kind of a pain. It would attempt to create a Persistent Volume, but does not allow you to specify the storageClass.
This code assumes that a Persistent Volume is already available and will fail with a helpful error message if it is not.
Signed-off-by: Shane McDonald <me@shanemcd.com>
* Endpoint exposes all jobs associated with an Instance. This is what we
want. Align the endpoint description with this behavior by removing the
word running.
In verbose unified job models (inventory updates, system jobs,
etc.), do not delay dispatch just because the encoded
event data is not part of the data written to the buffer.
This allows output from these commands to be submitted
to the callback queue as they are produced, instead
of waiting until the buffer is closed.
* Nodes are marked offline and then, given enough time, deleted. Nodes can
come back for various reasons (i.e. netsplit). When they come back,
have them recreate the node Instance if AWX_AUTO_DEPROVISION_INSTANCES
is True. Otherwise, do nothing. The do-nothing case will show up in the
logs as celery job tracebacks as the nodes fail to be self-aware.
Add new input for the tower type credential.
Elsewhere, tests are being added for verify_ssl in modules;
tower-cli is also updating to use the original tower.py var.
* Moves topology_data to views
* Changes id to cid
* Changes pk to id
* Changes host_id and inventory_id to ForeignKeys
* Resets migrations for network_ui
* Cleans up old files
* Fixes bug where new devices on the canvas weren't added to the search dropdown
* Fixes bug with closing the details panel
* Changes the fill color to white for remote-selected devices
* Fixes read-only mode by swapping out move controller for move read-only
* Updates range on the zoom widget
* Removes the toolbox if user doesn't have permission to edit
* Fixes the extra click that was identified with the context menu
* Adds new readonly version of the move FSM
* Adds an enhancement to debug directive to align the text better
* Disables the toolbox FSM if user doesn't have permission to edit
- Removes stale commented-out lines
- Makes "unknown" type devices smaller on canvas
- Moves "unknown" type device title underneath icon
- Removes collapsed inventory toolbox
- Changes "Delete" to "Remove"
- Replaces the "Close" button with "Cancel" on details panel
- Changes the Remove color to red
This removes features that were not selected for 3.3.
* Removes breadcrumb
* Removes "Jump To" panel and some of the hotkey panel items
* Removes Buttons in favor of Action Dropdown
* Removes chevrons
* Removes ActionIcon model
* Removes the Rename button on the context menu
* Makes details panel readonly
* Adds expand modal for extra vars
* Adds inventory copy function back to inventory list
* Sets cursor to visible
* Adds hide_menus
* Adds fix for mice that return large mousewheel deltas
* Separates test messages from application messages
* Removes test runner and groups, processes, and streams from network_ui
* Adds network_ui_test
* Fixes routing for network_ui_test
* Removes coverage_report tool from network_ui
* Fixes network_ui_test test workflow
* Sets width and height of the page during tests
* Adds group_id to Group table
* Adds inventory_group_id to Group table
* Adds creation of inventory hosts and groups from the network UI
* Changes network UI variables to be under awx key
* Fixes variables initial value
* Adds group membership association/disassociation
* Removes items from the inventory toolbar when loaded by a snapshot
* Adds nunjucks dependency to package.json
* Adds templating to hosts
* Adds templating for racks
* Adds site templating
* Adds group associations for sites
* Squashes migrations for network_ui
* Flake8 migrations
* Changes reserved field type to device_type, group_type, and process_type
* Allows blank values for all CharFields in network_ui models
* Updates pipeline and FSM design for 3.4 features:
group and read/write design features.
* Adds tool to copy layout from existing design
* Adds pipeline design
Adds a search field in the network UI and a jump-to level menu. This
allows users to quickly find a device on the canvas or jump to a
certain mode/zoom-level.
Adds animation to smooth out the transition from the current viewport
to a viewport centered on the searched-for device or zoom-level.
* Adds animation FSM and changes the 0 hot key to use it
* Adds jump to animation
* Adds search bar type ahead
* Adds jump animation to search and jump-to menus
* Adds keybinding FSM
* Updates the dropdown when devices are added/edit/removed
* Highlights the searched-for host
* Adds a simple DRF API for network-ui
* Moves network_ui api to v1_api
* Uses BaseSerializer for networking v1 api
* Adds v2 of the network API
* Uses standard AWX base classes for the network UI API
* Adds canvas prefix to network UI api URL names
* Adds ansible action plugins for automating network UI workflows
* Adds python client for the networking visualization API
* Adds context menu for a rack, and adds more error handling for
items that don't exist in Tower
* Adds context menu for sites
* Adds handler for showing details for links and interfaces
* Fixes the removed "watchCollection" in order to update details panel
* Removes the context menu when changing the scale of the canvas
* Adds delete context menu button, as well as refactoring the delete
functionality to the network.ui.controller.js
* Updates delete functionality to delete nested groups/devices
if the current_scale is set to site or rack icons
* Adds context menu to a group
* Hides rack/site title in top left of group, as well as centering
labels on all icons
* Moves the context menu off screen when disabling it
* Adds unique name to hosts, routers, switches, and groups
* Makes the names of host/switch/router/group into SVG elements so they update when
the user updates the name of the element
* Removes SVG buttons and adds new HTML toolbar
* Adds panel for Jump To feature, along with basic functionality
* Adds Key dropdown for hotkeys and adds a browser refresh hotkey
* Adds breadcrumb bar and makes adjustments after feedback from UX
* Rearranges panels and adds some resize logic
* Fixes z-index of key-panel and jump-to panel
* Adds white background to text underneath icons
* Makes all icons blue
* Changes sizes and colors of icons. Also made icon text background white
* Adjusts sizes of rack and site icons within group boundary
This adds a test framework to drive UI tests from the client
instead of injecting events from the websocket. Tests consist
of a pair of snapshots (before and after the test) and
a list of UI events to process. Tests are run using a FSM
in the client that controls the resetting of state to the snapshot,
injecting the events into the UI, recording test coverage,
and reporting tests to the server.
* Adds design for event trace table
* Adds design for a coverage tracking table
* Adds models for EventTrace and Coverage
* Adds trace_id to recording messages
* Adds design for TopologySnapshot table
* Adds order to TopologySnapshot table
* Adds TopologySnapshot table
* Adds Snapshot message when recordings are started and stopped
* Adds models for tracking test cases and test results
* Adds designs for a test runner FSM
* Updates test management commands with new schema
* Adds download recording button
* Adds models to track tests
* Adds ui test runner
* Adds id and client to TestResult design
* Adds id and client to TestResult
* Update message types
* Stores test results and code coverage from the test runner
* Adds tool to generate a test coverage report
* Adds APIs for tests and code coverage
* Adds per-test-case coverage reports
* Breaks out coverage for loading the modules from the tests
* Re-raises server-side errors
* Captures errors during tests
* Adds defaults for host name and host type
* Disables test FSM trace storage
* Adds support for sending server error message to the client
* Resets the UI flags, history, and toolbox contents between tests
* Adds istanbul instrumentation to network-ui
* Hooks up the first two context menu buttons
* Makes the rename and details menus show up
at the user's cursor location
* Adds TopologyInventory and DeviceHost tables
* Adds design for host_id on the Device table
* Adds migrations for TopologyInventory
* Adds host_id to Device table
* Adds inventory_id and host_id tracking
* Auto-closes the right hand panel if focus is directed to the canvas.
* Retrieves the host details on inventory load.
* Adds back support for inventory and host_id tracking
* Adds host icon
* Changes rack icon to new icon
* Site icon replacement
* Fixes host icon "hitbox", and adds debug and construction lines
* Adds construction and debug lines for switch, router, rack, and site
* Adds some error handling for REST calls, as well as alert on
host detail panel.
* Adds channels between FSMs
* Adds FSMTrace model
* Adds FSMTrace storage and download
Channels between FSMs make the processing pipeline delegation explicit
and allow for better instrumentation to trace the state of the entire
pipeline including FSM state transitions and message flow through
the pipeline. This feature is not turned on by default and is
only necessary for debugging or certain kinds of testing.
* Changes the Layers panel's default setting to collapsed
* Adds OffScreen2 state to handle the case where a toolbox is both offscreen and disabled
* Adds a collapsed view of the toolbox, as well as a model for ActionIcons
which is a model whose purpose is to connect the button FSM with the
chevron icons that are used on the toolbox.
* Adds action-icon directive
* Enables/disables the icons if they're not shown
* Fixes initial state of the toolboxes
* Creates context menu and context menu buttons in the network UI
* Adds extra vars to details panel on left hand side
* Adds SVG intro to CONTRIBUTING.md
* Add FSM intro
* Add rendered images of the FSM designs
* Adding example
* Adding links
* Adds details about the FSM design workflows
* Adds FSM state docs
* Adds event handler docs
* Adds details about FSMController
* Adds example of making an FSMController
* Adds details about messages, models, and message passing
* Adds models and messages to CONTRIBUTING.md
* Adds example to widget development
* Adds detail to the widget development example
* Add message type definitions
* Adds support for webpack devserver
* Enable istanbul on network UI
* Enable capture and replay tests on the network ui
* Normalize mouse wheel events
* Fix missing trailing slash on hosts API
* Add Export YAML button
* Adds networking icons, state, and shell
* Adds network UI to the network UI shell.
* Removes jquery as a dependency of network-ui
* Fills the entire viewport with the network canvas and
makes header panel and the right panel overlay on
top of it
* Show task status on device for now
This shows the status of the last few tasks run on a device as a
green/red circle on the device icon. This data live-updates from
events emitted over the websocket.
* Adds logic for consuming ansible_net_neighbors facts
This consumes facts emitted from Ansible over a websocket to
Tower. This allows consumers to process the facts and
emit messages to the network UI. This requires a special callback
plugin to run in Tower to emit the messages into the websocket using
the python websocket-client library.
* Calls API to get inventory
* Adds CopySite message
* Adds Toolbox and ToolboxItem model design
* Add Toolbox and ToolboxItem tables
* Sends toolbox items to client from server on connect
Adds application-level streams and process widgets to
model applications that run on networking devices or hosts.
* Changes Application to Process
* Adds StreamCreate and ProcessCreate messages
* Adds process id sequence to device
* Add serializers for streams and processes
Adds multiple view modes based on zoom-level. This allows for easy
drilling into a device for more detail or zooming out for an overview.
* Adds support for multi-site and device modes
* Adds icons to remote device in device detail
* Adds site widget
* Adds link between sites
* Adds toolboxes for inventory, site, and applications
* Adds rack mode
* Adds UI for adding processes to devices
* Adds copy and paste support
* Adds streams
The traditional network engineer workflow includes a diagram, a
spreadsheet, and the CLI. This adds an experimental view of the
network topology data in a spreadsheet-like table view.
* Adds angular-xeditable dependency for tables view.
* Add data binding models
* Add message transformations from table to topology formats
* Adding dependencies for tables view
* Moves network ui into a directive
* Adds awxNet prefix to network ui directives
* Adds a module to integrate the stand alone network UI with
Tower UI.
* Adds reconnectingwebsocket to webpack bundle
* Adds configuration for webpack
* Moves ngTouch and hamsterjs to webpack vendor bundle
* Moves angular to network UI vendor bundle
* Adds ui-router dependency
* Changes CSS to BEM style
* Adds unique id sequences for devices and links on Topology and interfaces on Device
* Adds group widget with move, resize, delete, and edit label support
The ansible-network-ui prototype project builds a standalone Network UI
outside of Tower as its own Django application. The original prototype
code is located here:
https://github.com/benthomasson/ansible-network-ui.
The prototype provides a virtual canvas that supports placing
networking devices onto a 2D plane and connecting those devices together
with connections called links. The point where the link connects
to the network device is called an interface. The devices, interfaces,
and links may all have their respective names. This models physical
networking devices in a simple fashion.
The prototype implements a pannable and zoomable 2D canvas using SVG
elements and AngularJS directives. This is done by adding event
listeners for mouse and keyboard events to an SVG element that fills the
entire browser window.
Mouse and keyboard events are handled in a processing pipeline where
the processing units are implemented as finite state machines that
provide deterministic behavior to the UI.
The finite state machines are built in a visual way that makes
the states and transitions clearly evident. The visual tool for
building FSMs is located here:
https://github.com/benthomasson/fsm-designer-svg. This tool
is a fork of this project that shares the same canvas. The elements
on the page are FSM states and the directional connections are called
transitions. The bootstrapping of the FSM designer tool and
network-ui happened in parallel. It was useful to try experimental
code in the FSM designer and then import it into network-ui.
The FSM designer tool provides a YAML description of the design
which can be used to generate skeleton code and check the implementation
against the design for discrepancies.
Events supported:
* Mouse click
* Mouse scroll-wheel
* Keyboard events
* Touch events
Interactions supported:
* Pan canvas by clicking-and-dragging on the background
* Zooming canvas by scrolling mousewheel
* Adding devices and links by using hotkeys
* Selecting devices, interfaces, and links by clicking on their icon
* Editing labels on devices, interfaces, and links by double-clicking on
their icon
* Moving devices around the canvas by clicking-and-dragging on their
icon
Device types supported:
* router
* switch
* host
* rack
The database schema for the prototype is also developed with a visual
tool that makes the relationships in the snowflake schema for the models
quickly evident. This tool makes it very easy to build queries across
multiple tables using Django's query builder.
See: https://github.com/benthomasson/db-designer-svg
The client and the server communicate asynchronously over a websocket.
This allows the UI to be very responsive to user interaction since
the full request/response cycle is not needed for every user
interaction.
The server provides persistence of the UI state in the database
using event handlers for events generated in the UI. The UI
processes mouse and keyboard events, updates the UI, and
generates new types of events that are then sent to the server
to be persisted in the database.
UI elements are tracked by unique ids generated on the client
when an element is first created. This allows the elements to
be correctly tracked before they are stored in the database.
The history of the UI is stored in the TopologyHistory model
which is useful for tracking which client made which change
and is useful for implementing undo/redo.
Each message is given a unique id per client and has
a known message type. Message types are pre-populated
in the MessageType model using a database migration.
A History message containing all the change messages for a topology is
sent when the websocket is connected. This allows undo/redo to work
across sessions.
This prototype provides a server-side test runner for driving
tests in the user interface. Events are emitted on the server
to drive the UI. Test code coverage is measured using the
istanbul library which produces instrumented client code.
Code coverage for the server is measured by the coverage library.
The test code coverage for the Python code is 100%.
* I had thought that setting the settings was required, but carefully
selected defaults for the settings are the correct way to deal with
errors I was seeing early in developing this feature.
* Use unicode InstanceGroup and queue names up until the point we
actually create the queue
* kombu add_consumers returns a dict with a value that contains the
passed-in queue name. Trouble is, the returned dict value is a string
and not a unicode string, and this results in an error.
* Adds pattern to easily add django-auth-ldap group type classes and to
pass parameters via AUTH_LDAP_GROUP_TYPE_PARAMS
* Adds new group type PosixUIDGroupType that accepts the attribute,
ldap_group_user_attr, on which to search for the user(s) in the group.
As lstrip_blocks: True was added, this broke the formatting when adding alternate DNS servers within the template. Removing the extra whitespace removals within the if and endif statements fixed the resulting YAML formatting.
* related to https://github.com/ansible/ansible-tower/issues/7931
* The Tower Instance group is special. It should always exist, so
prevent any delete to it.
* Only allow super users to associate/disassociate instances with the
'tower' instance group.
* Do not allow fields of tower instance group to be changed.
Related to https://github.com/ansible/ansible-tower/issues/7957
* Problem presented itself as Instances falling out of Instance Groups.
This was due to the cluster membership policy decider erroring out on a
logger message with unicode.
* Fixed up other potential unicode logger issues in tasks.py
* Allow overriding all container resource requests by setting defaults/
* Fix an issue where template vars were reversed in the deployment config
* Remove `limit` usage to allow for resource ballooning if it's available
* Fix type error when using templated values in the config map for resources
* Added two settings values for declaring absolute cpu and memory
capacity that will be picked up by the capacity utility methods
* installer inventory variables for controlling the amount of cpu and
memory container requests/limits for the awx task containers
* Added fixed values for cpu and memory container requests for other
containers
* configmap uses the declared inventory variables to define the
capacity inputs that will be used by AWX to correspond to the same
inputs for requests/limits on the deployment.
from PR review, also adding tests to assert that the
value is passed from the stdout_handle to the UnifiedJob
object on finalization of job run in tasks.py
* Adds email, first name, last name as extra vars to job launches
* Remove old ad-hoc command extra vars population... use our
base-class method instead
Fix templating for dns and dns_search entries for both `awx_web` and `awx_task` images.
Multiple entries were templated in a one-liner style while docker-compose wanted them in a list style.
Last round of dependency updates showed that AWX
depended on packages which came implicitly from shade
decorator is added as an explicit dependency
and all of the rest of shade requirements are
added back in here.
Minor dependency version upgrades
Inventory scripts were upgraded in separate commit
Major exclusions from this update
- celery was already downgraded for other reasons
- Django / DRF major update already done, minor bumps here
- asgi-amqp has fixes coming independently, not touched
- TACACS plus added features not needed
Removals of note
- remove shade from AWX requirements
- remove kombu from Ansible requirements
Other notes
Add note about pinning setuptools and pip,
done but not mentioned previously
Stop pinning gevent-websocket and twisted
upgrade Azure to Ansible core requirements
more detailed notes
https://gist.github.com/AlanCoding/9442a512ab6977940bc7b5b346d4f70b
upgrade version of Django for Exception
* Invoke the first heartbeat as early as possible. This results in a much
better user experience: when a user scales up an awx node, the node
appears with capacity earlier.
* The awx task container uses the postgres port to wait for postgres to
become available before the container init continues. The `()` are
problematic and are removed.
* The `()` was originally added to fix an OpenShift issue. That error does
NOT occur with this fix.
Add copy fields corresponding to new server-side copying
Refactor the way user_capabilities are delivered
- move the prefetch definition from views to serializer
- store temporary mapping in serializer context
- use serializer backlinks to denote polymorphic prefetch model exclusions
This fixes a small race condition that sometimes occurs when running
locally by ensuring that the delayed paged scrolling that happens
from using search doesn't put the password reset button out of view
when the test runner is trying to find and click it.
* tower/release_3.2.3:
fix unicode bugs with log statements
use --export option for ansible-inventory
add support for new "BECOME" prompt in Ansible 2.5+ for adhoc commands
enforce strings for secret password inputs on Credentials
fix a bug for "users should be able to change type of unused credential"
fix xss vulnerabilities - on host recent jobs popover - on schedule name tooltip
fix a bug when testing UDP-based logging configuration
bump templates form credential_types page limit
Wait for Slack RTM API websocket connection to be established
don't process artifacts from custom `set_stat` calls asynchronously
don't overwrite env['ANSIBLE_LIBRARY'] when fact caching is enabled
only allow facts to cache in the proper file system location
replace our memcached-based fact cache implementation with local files
add support for new "BECOME" prompt in Ansible 2.5+
fix a bug in inventory generation for isolated nodes
properly handle unicode for isolated job buffers
Definition of removal is providing a `credentials` list on launch
that lacks a type of credential that the job template has.
This assures that every category of credential the job template
has will also exist on jobs run from that job template.
This restriction already existed, but this makes the endpoint
fail instead of re-adding the credentials.
This change makes manual launch congruent with saved launch
configurations.
WFJT nodes & schedules (launch configs) will accept POST/PATCH/PUT
with variables in extra_data that have $encrypted$ for their value
if a valid survey default exists.
In this case, the variable is simply removed from the extra_data.
This is done so that it does not affect pre-existing value
substitution for $encrypted$ values from the config itself
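A minimal sketch of the removal rule, as a hypothetical helper rather than the actual serializer code:

```python
def prune_encrypted_defaults(extra_data, survey_defaults):
    # Variables POSTed as $encrypted$ that have a valid survey default are
    # simply removed from extra_data, so the pre-existing default
    # substitution logic still applies unchanged.
    return {
        key: value for key, value in extra_data.items()
        if not (value == '$encrypted$' and key in survey_defaults)
    }
```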
* Fix bug where capacity_adjustment sets to "1.00" when instance is toggled
* Hookup websockets for instance group jobs and instance jobs
* Add Wait spinner to Capacity_Adjuster, Instance association modal, and Instance group delete
* Add updateDataset event listener to update instance and instanceGroups list after smartSearch query
Currently Cloudforms can return a mix of IPv4 and IPv6 addresses in the
ipaddresses field, and this mix comes in a "random" order (that is, the
first entry may be IPv4 sometimes but IPv6 other times). If you wish to
always use IPv4 for the ansible_ssh_host value, then this is problematic.
This change adds a new prefer_ipv4 flag which will look for the first
IPv4 address in the ipaddresses list and uses that instead of just the
first entry.
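A minimal sketch of the selection logic (illustrative; not the inventory script's actual code):

```python
import ipaddress

def pick_ansible_ssh_host(ipaddresses, prefer_ipv4=False):
    # With prefer_ipv4, return the first IPv4 entry if one exists.
    if prefer_ipv4:
        for addr in ipaddresses:
            if ipaddress.ip_address(addr).version == 4:
                return addr
    # Otherwise keep the old behavior: just take the first entry.
    return ipaddresses[0] if ipaddresses else None

# e.g. pick_ansible_ssh_host(['fe80::1', '10.0.0.5'], prefer_ipv4=True) -> '10.0.0.5'
```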
modal implementation
* Remove Instance-List-Policy controller
* Replace let with const when values aren't being reassigned
* Update CapacityAdjuster directive to use replace:true
* Assign fewer values that are specific to the element
* Add more error handling
* This also adds fields to the instance view for tracking cpu and
memory usage as well as information on what the capacity ranges are
* Also adds a flag for enabling/disabling instances which removes them
from all queues and has them stop processing new work
* The capacity is now based almost exclusively on some value relative
to forks
* capacity_adjustment allows you to commit an instance to a certain
amount of forks, cpu focused or memory focused
* Each job run adds a single fork overhead (that's the reasoning
behind the +1)
* Switch policy router queue to not be "tower" so that we don't
fall into a chicken/egg scenario
* Show fixed policy list in serializer so a user can determine if
an instance is manually managed
* Change IG membership mixin to not directly handle applying topology
changes. Instead it just makes sure the policy instance list is
accurate
* Add create/delete hooks for instances and groups to trigger policy
re-evaluation
* Update policy algorithm for fairer distribution
* Fix an issue where CELERY_ROUTES wasn't renamed after celery/django
upgrade
* Update unit tests to be more explicit
* Update count calculations used by algorithm to only consider
non-manual instances
* Adding unit tests and fixture
* Don't propagate logging messages from awx.main.tasks and
awx.main.scheduler
* Use advisory lock to prevent policy eval conflicts
* Allow updating instance groups from view
* use embedded beat rather than standalone
* dynamically set celeryd hostname at runtime
* add embedded beat flag to celery startup
* Embedded beat mode routes will piggyback off of celery worker setup
signal
* Associating/Disassociating an instance with a group
* Triggering a topology rebuild on that change
* Force rabbitmq cleanup of offline nodes
* Automatically check for dependent service startup
* Fetch and set hostname for celery so it doesn't clobber other
celeries
* Rely on celery init signals to dynamically set listen queues
* Removing old total_capacity instance manager property
* Based on the tower topology (Instance and InstanceGroup
relationships), have celery dynamically listen to queues on boot
* Add celery task capable of "refreshing" what queues each celeryd
worker listens to. This will be used to support changes in the topology.
* Cleaned up some celery task definitions.
* Converged wrongly targeted job launch/finish messages to 'tower'
queue, rather than a 1-off queue.
* Dynamically route celery tasks destined for the local node
* separate beat process
add support for separate beat process
If the "DTSTART" property is specified as a date with UTC time or a date with
local time and time zone reference, then the UNTIL rule part MUST be specified
as a date with UTC time.
this is for the development environment only; when uwsgi notices a code
change, it automatically reloads the uwsgi workers; this patch includes
a hook that sends `SIGHUP` to the celery process, causing it to spawn
a new set of workers as well
redbaron is a library we use to facilitate parsing local settings files;
at _import_ time it generates a parse tree and caches it to disk at
`/tmp`; this process is _really time consuming, and only necessary if
we're actually *using* the library
right now, we're importing this library and paying the penalty
_every_ time we load the awx application
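A minimal sketch of deferring the import to use time (illustrative function name):

```python
def parse_local_settings(path):
    # Importing redbaron here, instead of at module scope, means the
    # expensive parse-tree generation and /tmp caching only happen when a
    # local settings file is actually being parsed, not on every
    # application load.
    import redbaron
    with open(path) as f:
        return redbaron.RedBaron(f.read())
```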
previously, we persisted custom artifacts to the database on
`Job.artifacts` via the callback receiver. when the callback receiver
is backed up processing events, this can result in race conditions for
workflows where a playbook calls `set_stat()`, but the artifact data is
not persisted in the database before the next job in the workflow starts
see: https://github.com/ansible/ansible-tower/issues/7831
this commit allows schedule `rrule` strings to include local timezone
information via TZID=NNNNN; occurrences are _generated_ in the local
time specified by the user (or UTC, if e.g., DTSTART:YYYYMMDDTHHMMSSZ)
while Schedule.next_run, Schedule.dtstart, and Schedule.dtend will be
stored in the UTC equivalent (i.e., the scheduler will still do math on
"what to run next" based on UTC datetimes).
in addition to this change, there is now a new API endpoint,
`/api/v2/schedules/preview/`, which takes an rrule and shows the next
10 occurrences in local and UTC time.
see: https://github.com/ansible/ansible-tower/issues/823
related: https://github.com/dateutil/dateutil/issues/614
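For example, a sketch assuming a dateutil version that understands TZID:

```python
from dateutil import rrule

# Occurrences are generated at 09:00 local (America/New_York); the
# scheduler still does its "what to run next" math on the UTC equivalents.
rule = rrule.rrulestr(
    'DTSTART;TZID=America/New_York:20180601T090000\n'
    'RRULE:FREQ=DAILY;COUNT=3'
)
for occurrence in rule:
    print(occurrence)  # tz-aware datetimes
```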
this cleans up a _lot_ of code duplication that we have for builtin
credential types. it will allow customers to setup custom inventory
sources that utilize builtin credential types (e.g., a custom inventory
script that could use an AzureRM credential)
see: https://github.com/ansible/ansible-tower/issues/7852
update our event data search algorithm to be a bit lazier in event data
discovery; this drastically improves processing speeds for stdout >5MB
see: https://github.com/ansible/awx/issues/417
We were having issues where an older tag was being output from `git describe`.
From the man page:
Follow only the first parent commit upon seeing a merge commit. This is useful when you wish to not match tags on branches merged in the history of the target commit.
related to https://github.com/ansible/awx/issues/217
* Adds a Configure Tower in Tower setting for users to configure a SAML
attribute that tower will use to put users into teams and orgs.
* Celery + json pickling do not handle custom Exceptions (and may never
do so). Worth mentioning: if we handled custom Exceptions, the code
would be susceptible to the same arbitrary code execution that python
pickle is vulnerable to.
* So don't use custom Exceptions.
* celery error callback signature isn't well defined. Thus, our error
callback signature is made to handle just about any call signature and
depend on only 1 attribute, id, existing.
See https://github.com/celery/celery/issues/3709
The original tests set no longer works after Django 1.11 due to more
strict rules against dynamic model definition. The refactored tests set
aims at each existing model that apply named URL rules, instead of
abstract general use cases, thus significantly improves maintainability
and readability.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
this approach totally removes the process of reading and writing stdout
files on the local file system at settings.JOBOUTPUT_ROOT when jobs are
run; now stdout content is only written on-demand as it's fetched for
the deprecated `stdout` endpoint
see: https://github.com/ansible/awx/issues/200
* introduces three new models: `ProjectUpdateEvent`,
`InventoryUpdateEvent`, and `SystemJobEvent`
* simplifies the stdout callback management in `tasks.py` - now _all_
job run types capture and emit events to the callback receiver
* supports stdout reconstruction from events for stdout downloads for
_all_ job types
* configures `ProjectUpdate` runs to configure the awx display callback
(so we can capture real playbook events for `project_update.yml`)
* ProjectUpdate, InventoryUpdate, and SystemJob runs no longer write
text blobs to the deprecated `main_unifiedjob.result_stdout_text` column
see: https://github.com/ansible/awx/issues/200
display_extra_vars was not taking a copy of the data before
acting on it - this caused a bug where the activity stream would
modify the existing object on the model. That leads to new data
not being accepted.
Also moved the processing of extra_data to prior to the accept
or ignore kwargs logic so that we pass the right (post-encryption)
form of the variables.
This data often (in the case of inventory updates) represents large data
blobs (5+MB per job run). Storing it on the polymorphic base class
table, `main_unifiedjob`, causes it to be automatically fetched on every
query (and every polymorphic join) against that table, which can result
in _very_ poor performance for awx across the board. Django offers
`defer()`, but it's quite complicated to sprinkle this everywhere (and
easy to get wrong/introduce side effects related to our RBAC and usage
of polymorphism).
This change moves the field definition to a separate unmanaged model
(which references the same underlying `main_unifiedjob` table) and adds
a proxy for fetching the data as needed
see https://github.com/ansible/awx/issues/200
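A minimal sketch of the unmanaged-model trick, with illustrative names:

```python
from django.db import models

class UnifiedJobStdout(models.Model):  # illustrative name
    class Meta:
        managed = False               # no migrations; the table already exists
        db_table = 'main_unifiedjob'  # same underlying table as UnifiedJob

    result_stdout_text = models.TextField(null=True, editable=False)

# The big blob is now only read when explicitly requested, e.g.:
# UnifiedJobStdout.objects.only('result_stdout_text').get(pk=job.pk)
```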
Consolidate prompts accept/reject logic in unified models
Break out accept/reject logic for variables
Surface new promptable fields on WFJT nodes, schedules
Make schedules and workflows accurately reject variables
that are not allowed by the prompting
rules or the survey rules on the template
Validate against unallowed extra_data in system job schedules
Prevent schedule or WFJT node POST/PATCH with unprompted data
Move system job days validation to new mechanism
Add new pseudo-field for WFJT node credential
Add validation for node related credentials
Add related config model to unified job
Use JobLaunchConfig model for launch RBAC check
Support credential overwrite behavior with multi-creds
change modern manual launch to use merge behavior
Refactor JobLaunchSerializer, self.instance=None
Modularize job launch view to create "modern" data
Auto-create config object with every job
Add create schedule endpoint for jobs
* Jupyter starts alongside the other awx services and is available on
0.0.0.0:8888
* make target: make jupyter
* default settings in settings/development.py
* Added jupyter, matplotlib, numpy to dev dependencies
We `pip download` this file for offline installs. Automat lists this package as a setup_requires, but `pip download` doesn’t resolve these dependencies (distutils will attempt to install them via easy_install when setup.py is invoked).
Allows for use of a suffix that will be appended to host names returned
from Cloudforms API if that suffix is not present.
For example with a suffix of 'example.org', the following results would
be shown for a particular Cloudforms host name:
someexample -> someexample.example.org
someexample.example.org -> someexample.example.org
The main use-case for this is, when one Inventory Source is returning
names that have a FQDN name whilst others are returning a shortname, to
ensure that the hosts in an inventory aren't effectively duplicated.
On Cloudforms (Version 2.0 at least), the dictionary that gets passed to
the inventory_import has a top-level 'cloudforms' dictionary element
that contains the 'id' and 'power_state' rather than those elements
being at the top-level of the dictionary.
This change adds in the 'cloudforms' into the expected name.
the nature of this latest bug is that the WorkflowJob has a *different*
implementation of _accept_or_ignore_job_kwargs, and it wasn't performing
encryption for extra vars provided at launch time; this change places the
encryption mechanism in UJT.create_unified_job so that it works the same
for _all_ UJTs
see: https://github.com/ansible/ansible-tower/issues/7798
see: https://github.com/ansible/ansible-tower/issues/7046
* Check the reason for a dependent project update failure. If it's
because of a cancel, then let the normal cancel mechanisms update the
job's status and explanation. Do not update the dependent job's status
for a project update that was canceled, in the run code.
* python-social-auth has SOCIAL_AUTH_SAML_SECURITY_CONFIG, which is
forwarded to python-saml settings configuration. This commit exposes
SOCIAL_AUTH_SAML_SECURITY_CONFIG to configure tower in tower to allow
users to set requestedAuthnContext, which will disable the requesting of
password type auth from the idp. Thus, it's up to the idp to choose
which auth to use (i.e. 2-factor).
Openshift was throwing an error here, though I'm not sure why it makes
a whole lot of difference to call fdopen() vs open(). This was
introduced when this method was unified under the new
ansible-inventory system. This fixes it for all cases. mkstemp(),
while not necessary, is a useful addition to keep from leaking
inventory details unnecessarily.
There's a bug in celery 4.X when using bound tasks as error
handlers. We don't actually need it to be bound, especially since the
request object is now available in the function signature
* Implicit project updates, launch_type='sync', get "associated" with a
job via project_update. When a job is canceled, this implicit project
update should be canceled as well. This change enforces that logic.
This keeps the instance from re-using a pool that might have already
expired and is unusable for the inspector that needs to run as part of
the task manager
survey_spec is a nested dict, so if we don't `deepcopy()` it, updates
to the individual fields could corrupt the original data structure;
this was causing a bug whereby activity stream updates converted
encrypted survey password defaults -> `$encrypted$`, but inadvertently
modified the originating model due to shared references
see: https://github.com/ansible/ansible-tower/issues/7769
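A minimal illustration of the shared-reference problem:

```python
import copy

survey_spec = {'spec': [{'variable': 'password', 'default': 'hunter2'}]}

shallow = copy.copy(survey_spec)
shallow['spec'][0]['default'] = '$encrypted$'
print(survey_spec['spec'][0]['default'])  # '$encrypted$' -- original corrupted

survey_spec = {'spec': [{'variable': 'password', 'default': 'hunter2'}]}
deep = copy.deepcopy(survey_spec)
deep['spec'][0]['default'] = '$encrypted$'
print(survey_spec['spec'][0]['default'])  # 'hunter2' -- original intact
```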
if database connectivity is lost, callback workers currently raise an
uncaught exception and hang; this can cause the entire process to stop
handling callback events
see: https://github.com/ansible/ansible-tower/issues/7660
and also patching angular-scheduler to point to angular 1.4.14
and also patching angular-codemirror to point to angular 1.4.14,
and adding fsevents:"*" to the package.json, and regenerating
npm-shrinkwrap.json for the new dependencies and their branches.
This change causes all SCM inventory updates to run a local
project sync unless they were specifically marked as a
dependency of an already-existing project update, as
opposed to just doing so on manual launch types.
This should be a more robust criterion.
Relates #7712 of ansible-tower.
UI uses the `related_search_fields` list to populate help text for resource
search. `ansible_facts` is searchable via the UI, but the general pickup
logic would ignore it, so make it a corner case.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
* For Tower the license must match between the source and destination
* For AWX the check is disabled
* Hosts imported from another Tower don't count against your license
in the local Tower
* Fix up some issues with enablement
* Prevent slashes from being used in the instance filter
* Add &all=1 filter to make sure we pick up all hosts
Use BaseAccess class to enforce the superuser and system
auditor conditions, as well as the optimizations.
Declare optimizations on access class as tuple.
Limit role of access class method narrowly to RBAC filtering.
This adds a Job Templates tab onto the Project form that gives
the user the ability to see all the job templates using a project.
Clicking the add button on this list will take the user to the job
template form with the project field auto-filled with the project.
This should make the settings and configuration logic less implicit and
a little easier to follow. Some familiarity with the configuration behavior
of nightwatch is still necessary in places - specifically, one should know
that all test_settings defined for non-default environments are treated as
overrides to the values defined for the default environment.
* release_3.2.1:
fallback to empty dict when processing extra_data
fix migration problem from 3.1.1
move 0005a migration to 0005b
feedback on ad hoc prohibited vars error msg
Fix the way we include i18n files in sdist
Fix migrations to support 3.1.2 -> 3.2.1+ upgrade path
fix missing parameter to update_capacity method
fix WARNING log when launching ad hoc command
Validate against ansible variables on ad hoc launch
do not allow ansible connection type of local for ad_hoc
work around an ansible 2.4 inventory caching bug
fix scan job migration unicode issue
Assert isolated nodes have capacity set to 0 and restored based on version
Set capacity to zero if the isolated node has an old version
When Jobs and Adhoc Commands are launched, awx uses a job-scoped auth
token to dynamically fetch inventory via the awx REST API; this process
is complicated, hard to debug, and likely won't work going forward with
oauth2-based tokens in awx
see: https://github.com/ansible/awx/issues/21
with process isolation enabled (which is the awx default), cloudforms
caches inventory script results on disk; awx should direct cloudforms to
store these cache files in a location that's exposed to the isolated
environment
see: ansible/ansible#31760
[WARNING]: Failure using method (v2_runner_on_ok) in callback plugin
(<awx_display_callback.module.AWXDefaultCallbackModule object at
0x47f6090>):
'module' object has no attribute 'dumps'
The above error is thrown by ansible if callback plugins don't respect
the same import precedence configuration as Ansible. ansible callback/*
dir includes a json.py file. This is imported by ansible
callback/__init__.py when a callback plugin implementation imports from
Ansible callback base without setting the correct import precedence.
This allows the port from the request header to be used
rather than having the request redirected to the port
being used inside the container, which may not be
accessible
Fixes #420
related #420
Signed-off-by: Nick Carboni <ncarboni@redhat.com>
The logic that sets awx_web_docker_actual_image and awx_task_docker_actual_image creates and pushes images to the private registry tagged with the awx version, which is appropriate, but then tries to pull with no tag (so docker defaults to "latest", which does not exist).
Relates #264.
This PR proposed and implemented a way of defining workflow failure
state:
A workflow job fails if any one of the conditions below is satisfied.
* At least one node runs into states `canceled` or `error`.
* At least one leaf node runs into state `failed`, but no child node is
spawned to run (no error handler).
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
* Remove attempted support of key=value pattern, because
it is not actually allowed in practice
* Have variables validator defer to the utils variables parser
* Prune serializers of a handful of cases that previous
attempts at cleanup have missed
Relates #391.
Upstream `python-ldap` (surprisingly) does not support utf-8 DN. So
explicit encoding is needed.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
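A minimal sketch of the explicit encoding:

```python
# python-ldap (py2-era) expects an encoded byte string for the DN, so a
# unicode DN must be encoded explicitly before being handed to the library.
dn = u'uid=jörg,ou=users,dc=example,dc=com'
dn_bytes = dn.encode('utf-8')
```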
* release_3.2.0: (66 commits)
fix workflow maker lookup issues
adding extra logic check for ansible_facts in smart search
adding "admin_role" as a default query param for insights cred lookup
changing insights cred lookup to not use hard coded cred type
fix rounding of capacity percentage
Catch potential unicode errors when looking up addrinfo
fixing typo with adding query params for instance groups modal
move percentage capacitty to variable
Add unit test for inventory_sources_already_updated
Check for inventory sources already updated from start args
Fixed inventory completed jobs pagination bug by setting default page size
Remove the logic blocking dependent inventory updates on callbacks
fix instance group percentage
Remove host-filter-modal import
Fix partial hover highlight of host filter modal row
Removed leading slash on basePath
Fixed host nested groups pagination
Added trailing slash to basePath
Fixed nested groups pagination
Fixed host_filter searching related fields
...
Relates #7656 in ansible-tower.
We have been using comma `,` and space ` ` to separate search terms in
query string `<field_name>__search=<search terms>`, however in general
we can always use `&` to achieve separation like
`<field_name>__search=<search term 1>&<field_name>__search=<search term
2>&...`. Using specific delimiters makes it impossible for search terms
to contain those delimiters, so they are better off being removed.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
Relates to #7586 of ansible-tower as a follow-up of fix#420 of tower.
The original fix works for Django version 1.9 and above; this PR
expanded the solution to Django version 1.8 and below.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
Relates #7386 of ansible-tower.
Due to the uniqueness of Tower configuration datastore model, it is not
fully compatible with activity stream workflow. This PR introduced
setting field for activitystream model along with other changes to make
Tower configuration a special case for activity streams.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
instead of writing individual migrations for new built-in credential
types, this change makes the "setup_tower_managed_defaults" function
idempotent so that it only adds the credential types you're missing
Relates #7684 of ansible-tower.
Slugify username in python-social-auth means disallowing
any non-alphanumeric characters, which is overkill
for awx/tower, thus disabling it.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
Followed instructions to the letter from a new install and found that there was no mention that Docker service must be started. Tried to add this mention where it would have benefited me, but I am new to github and not sure if I'm doing this right.
* Recreating user issues increasingly depends on the chosen install
path used. Thus, we need this information when recreating the issue
locally. Ask the user to include this information in the issue template.
Manual initialization allows for some asynchronous work to
finish ahead of Angular's startup. The initial motivation is
to be able to guarantee translation files have been fetched
before rendering content that needs translation. If a locale
isn't supported or if the request to get a json file fails,
the i18n service falls back to en.
Signed-off-by: gconsidine <gconsidi@redhat.com>
Connect #7666 of ansible-tower and follow up original fix tower #455.
The original fix solves the problem of duplicated db keys, but breaks a
rule for enterprise users: 'Enterprise users cannot be
created/authenticated if non-enterprise users with the same name have
already been created in Tower.' This fix restores that rule.
Signed-off-by: Aaron Tan <jangsutsr@gmail.com>
If overwrite=True for an inventory source import, then this matters:
creating a new inventory source through the v1 API will not include
deprecated_group inside of the InventorySource groups m2m related
connection, but migrations from 3.1 will.
In those migrated cases, this code will leave the deprecated_group
untouched, so as to not trigger its cascade delete.
* Remove extraneous quotes when the authorization token is retrieved
from a cookie
* Fix thrown error when invalid creds are supplied on login due to
attempt to access property of an undefined value
The UI has a handful of dependencies attached to window (angular,
jquery, lodash, etc). In the case of schedules, jquery was included
and extended as expected, but then clobbered by another module. This
prevented the Select2 dependency from working as expected.
Rather than relying on webpack to export particular dependencies as
global, that work is done in the vendor entry point now instead.
- prevent dual-entry for first item in running jobs due to
setdefault syntax
- fix issue where queues (celery tasks) only returned the last
item in the inspector output due to looping problem
this caused reaper bugs in production
Use license_info.features.ldap & license_info.feature.enterprise_auth from /api/v2/config for
auth providers button enabling instead of using info from /api/v2/settings/all
Signed-off-by: Julen Landa Alustiza <julen@zokormazo.info>
This also adds an option to *not* use the local container for building
the software distribution which is required for a local minishift
environment based install
django-polymorphic itself generates queries for polymorphic object
lookups, and these queries for UnifiedJob are *not* properly deferring the
`result_stdout_text` column, resulting in more very slow queries. This
solution is _very_ hacky, and tied to this specific
version of Django and django-polymorphic, but it works until we can
solve this problem the proper way in 3.3 (by removing large stdout blobs
from the database).
see: https://github.com/ansible/ansible-tower/issues/7568
For some reason, some docker deployments seem not to be able to resolve
the awxweb host from the awx task host, at least when started from the
playbook. This hopefully provides a resolution for that.
result_stdout_text can be _very_ large - some customers have 5MB+ per
job; querying for this in list contexts results in _very_ large datasets
being read from the database which is very slow. It's very uncommon to
actually need this column outside of the context of job details, so
defer it.
see: https://github.com/ansible/ansible-tower/issues/7568
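A minimal sketch of the deferral (Django ORM; import path illustrative):

```python
from awx.main.models import UnifiedJob  # illustrative import path

# List contexts skip the potentially multi-megabyte column entirely;
# accessing job.result_stdout_text later triggers a separate query.
jobs = UnifiedJob.objects.defer('result_stdout_text')
for job in jobs:
    print(job.pk, job.status)
```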
When considering previous / current Project Updates we weren't
properly sorting the previous runs.
We also make sure we filter down to just "check" style project updates
and don't consider 'run' style standalone project updates when
deciding what are potentially related project updates
This attempts to detect if there are migrations in-progress and will
force display an interstitial page in the process that attempts to
load the index page every 10s until it succeeds.
This is only attached in production settings so the development
environment can proceed even if the migrations haven't been applied yet
A lot of people have experienced issues with the system-level dependencies that are required in order to build the source distribution that is handed off to the image builds. This makes it unnecessary to install any additional software on the host machine aside from Ansible and Docker.
* Use git+https prefix in git-based npm dependencies
* Use ejs template for index to fix extraneous slash in path
* Remove outdated documentation
* Remove unused service
* Regenerate shrinkwrap
Without this patch, SAML backend will only use the first letter of the NameID as attribute value.
Signed-off-by: Patrick Uiterwijk <patrick@puiterwijk.org>
This avoids re-loading objects from the database in our
chain of permission checking, wherever possible.
access.py is equipped to handle object references instead
of pk ints, and permissions.py is changed to pass those refs.
* awx_installer:
Adds docker installation steps (#15)
Call out eval for setting up the minishift environment
Support official image builds with awx logos
Add support for standalone docker install
First iteration on INSTALL
Adds edge terminated route
Ignore Pycharm droppings
Force reauth docker registry login in installer
Reduce the size of the production container image
Initial awx installer
* release_3.2.0: (138 commits)
Pull Dutch and Spanish translations
Increase verbosity of CTiT Logging test error handling
fix to console error of conditional toggle showing
Fix error message when calling remove on undefined DOM element
fix ctit logging toggle from being showed for log types other than https
Remove delete and edit buttons from smart inventory host list. Only option should be view.
feedback from PR
Enhance query string in ad hoc command event save to consider smart inventory
Fixed host filter clearall
fuller validation for host_filter
On JT form, Show credential tags from summary_fields if user doesn't have view permission on the credential
Align key toggle button to role dropdown in user team permissions modal
Removed rogue console.logs
Removed extra refresh call
Enhace query string in job event save to consider smart inventory
Fix typo in scan_packages plugin
Switch running_jobs and capacity table columns
Disable insights cred when user doesn't have edit permissions
Disallow changing credential_type of an existing credential
fix bug with host_filter RBAC check
...
An instance has been lost, and jobs are still either running
or waiting inside of its instance group
RBAC - user does not have permission to see some of the
groups that would be used in the capacity calculation
For some cases, a naive capacity dictionary is returned; the
main goal is to not throw errors and to avoid unpredictable behavior.
Detailed capacity tests are moved into new unit test file.
The task manager was doing work to compute currently consumed
capacity; this is moved into the manager and applied in the
same form to the instance group list.
* The django middleware call stack behavior is changed by DRF. As a
result, during the process_request in sso/middleware.py request.user
is not set as you would expect it to be set from the middleware
django.contrib.auth.middleware.AuthenticationMiddleware
cache.set() and cache.get() arguments are logged when the log level is
DEBUG; this _may_ include plaintext secrets; strip sensitive values
before logging them
see: https://github.com/ansible/ansible-tower/issues/7476
This saves the id value of the setting into the cache
if the setting is encrypted. That can then be combined
with the secret_key in order to decrypt the setting,
without having to make an additional query to the database.
* Reap running tasks on non-netsplit nodes
* Reap running tasks on known to be offline nodes
* Reap waiting tasks with no celery id anywhere if waiting >= 60 seconds
* Workflow jobs are virtual jobs that don't actually run. Thus they
won't have a celery id and aren't candidates for the generic reaping.
* Better error logging when Instance not found in reaping code.
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us at `#ansible-awx` on irc.freenode.net, or submit your question to the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
## Table of contents
* [Things to know prior to submitting code](#things-to-know-prior-to-submitting-code)
* [Setting up your development environment](#setting-up-your-development-environment)
  * [Prerequisites](#prerequisites)
    * [Docker](#docker)
    * [Docker compose](#docker-compose)
    * [Node and npm](#node-and-npm)
  * [Build the environment](#build-the-environment)
    * [Fork and clone the AWX repo](#fork-and-clone-the-awx-repo)
    * [Create local settings](#create-local-settings)
    * [Build the base image](#build-the-base-image)
    * [Build the user interface](#build-the-user-interface)
  * [Running the environment](#running-the-environment)
    * [Start the containers](#start-the-containers)
    * [Start from the container shell](#start-from-the-container-shell)
  * [Post Build Steps](#post-build-steps)
    * [Start a shell](#start-a-shell)
    * [Create an admin user](#create-an-admin-user)
    * [Load demo data](#load-demo-data)
    * [Building API Documentation](#building-api-documentation)
  * [Accessing the AWX web interface](#accessing-the-awx-web-interface)
  * [Purging containers and images](#purging-containers-and-images)
* [What should I work on?](#what-should-i-work-on)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Reporting Issues](#reporting-issues)
## Things to know prior to submitting code

- All code submissions are done through pull requests against the `devel` branch.
- You must use `git commit --signoff` for any commit to be merged, and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission, and use `git rebase` rather than `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
- We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).
For reference, the full text of the DCO:

```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.

Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```

## Setting up your development environment

The AWX development environment workflow and toolchain is based on Docker, and the docker-compose tool, to provide dependencies, services, and databases necessary to run all of the components. It also binds the local source tree into the development container, making it possible to observe and test changes in real time.

### Prerequisites

#### Docker

Prior to starting the development services, you'll need `docker` and `docker-compose`. On Linux, you can generally find these in your distro's packaging, but you may find that Docker themselves maintain a separate repo that tracks more closely to the latest releases.

For macOS and Windows, we recommend [Docker for Mac](https://www.docker.com/docker-mac) and [Docker for Windows](https://www.docker.com/docker-windows) respectively.

For Linux platforms, refer to the installation instructions from Docker for your distribution (for example, **Fedora**).

#### Docker compose

If you're not using Docker for Mac, or Docker for Windows, you may need, or choose, to install the Docker Compose Python module separately, in which case you'll need to run the following:

```bash
(host)$ pip install docker-compose
```
#### Node and npm

The AWX UI requires the following:

- Node 6.x LTS version
- NPM 3.x LTS
### Build the environment

#### Fork and clone the AWX repo

If you have not done so already, you'll need to fork the AWX repo on GitHub. For more on how to do this, see [Fork a Repo](https://help.github.com/articles/fork-a-repo/).

#### Create local settings
AWX will import the file `awx/settings/local_settings.py` and combine it with defaults in `awx/settings/defaults.py`. This file is required for starting the development environment and startup will fail if it's not provided.
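An example settings file may ship with the repo; assuming it does (the filename here is an assumption, so check `awx/settings/` for what's actually provided), copy it into place and edit as needed:

```bash
# Filename is an assumption; verify which example settings file the repo provides.
(host)$ cp awx/settings/local_settings.py.docker_compose awx/settings/local_settings.py
```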
#### Build the base image

The AWX base container image (defined in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and symbolic links into the development environment that make running the services easy. You'll first need to build the image:

```bash
(host)$ make docker-compose-build
```
The image will only need to be rebuilt if the requirements or OS dependencies change. A core concept about this image is that it relies
on having your local development environment mapped in.
#### Build the user interface

In order for the AWX user interface to load from the development environment, it must be built:

```bash
(host)$ make ui-devel
```
When developing features and fixes for the user interface, you can find more detail in the [UI Developer README](awx/ui/README.md).

### Running the environment

#### Start the containers

Start the development containers by running the following:

```bash
(host)$ make docker-compose
```

The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django, celery, and the front end build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:
```bash
# List running containers
(host)$ docker ps
CONTAINER ID   IMAGE                                              COMMAND                  CREATED          STATUS          PORTS                                                                                                                                       NAMES
aa4a75d6d77b   gcr.io/ansible-tower-engineering/awx_devel:devel   "/tini -- /bin/sh ..."   23 seconds ago   Up 15 seconds   0.0.0.0:5555->5555/tcp, 0.0.0.0:6899-6999->6899-6999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 22/tcp, 0.0.0.0:8080->8080/tcp   tools_awx_1
e4c0afeb548c   postgres:9.6                                       "docker-entrypoint..."   26 seconds ago   Up 23 seconds   5432/tcp                                                                                                                                    tools_postgres_1
0089699d5afd   tools_logstash                                     "/docker-entrypoin..."   26 seconds ago   Up 25 seconds                                                                                                                                               tools_logstash_1
4d4ff0ced266   memcached:alpine                                   "docker-entrypoint..."   26 seconds ago   Up 25 seconds   0.0.0.0:11211->11211/tcp                                                                                                                    tools_memcached_1
92842acd64cd   rabbitmq:3-management                              "docker-entrypoint..."   26 seconds ago   Up 24 seconds   4369/tcp, 5671-5672/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:15672->15672/tcp                                                                     tools_rabbitmq_1
```
**NOTE**
> The Makefile assumes that the image you built is tagged with your current branch. This allows you to build images for different contexts or branches. When starting the containers, you can choose a specific branch by setting `COMPOSE_TAG=<branch name>` in your environment.
> For example, you might be working in a feature branch, but you want to run the containers using the `devel` image you built previously. To do that, start the containers using the following command: `$ COMPOSE_TAG=devel make docker-compose`
##### Wait for migrations to complete
The first time you start the environment, database migrations need to run in order to build the PostgreSQL database. It will take a few moments, but eventually you will see output in your terminal session that looks like the following:

```bash
awx_1 | Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
awx_1 | Synchronizing apps without migrations:
awx_1 | Creating tables...
awx_1 | Running deferred SQL...
awx_1 | Installing custom SQL...
awx_1 | Running migrations:
awx_1 | Rendering model states... DONE
awx_1 | Applying contenttypes.0001_initial... OK
awx_1 | Applying contenttypes.0002_remove_content_type_name... OK
awx_1 | Applying auth.0001_initial... OK
awx_1 | Applying auth.0002_alter_permission_name_max_length... OK
awx_1 | Applying auth.0003_alter_user_email_max_length... OK
awx_1 | Applying auth.0004_alter_user_username_opts... OK
awx_1 | Applying auth.0005_alter_user_last_login_null... OK
awx_1 | Applying auth.0006_require_contenttypes_0002... OK
awx_1 | Applying taggit.0001_initial... OK
awx_1 | Applying taggit.0002_auto_20150616_2121... OK
awx_1 | Applying main.0001_initial... OK
awx_1 | Applying main.0002_squashed_v300_release... OK
awx_1 | Applying main.0003_squashed_v300_v303_updates... OK
awx_1 | Applying main.0004_squashed_v310_release... OK
awx_1 | Applying conf.0001_initial... OK
awx_1 | Applying conf.0002_v310_copy_tower_settings... OK
...
```
Once migrations are completed, you can begin using AWX.
#### Start from the container shell
Oftentimes you'll want to start the development environment without immediately starting all of the services in the *awx* container, and instead be taken directly to a shell. You can do this with the following:
```bash
(host)$ make docker-compose-test
```
Using `docker exec`, this will create a session in the running *awx* container, and place you at a command prompt, where you can run shell commands inside the container.
If you want to start and use the development environment, you'll first need to bootstrap it by running the following command:
```bash
(container)# /bootstrap_development.sh
```
The above will do all the setup tasks, including running database migrations, so it may take a couple minutes.
Now you can start each service individually, or start all services in a pre-configured tmux session like so:
```bash
(container)# cd /awx_devel
(container)# make server
```
### Post Build Steps
Before you can log in and use the system, you will need to create an admin user. Optionally, you may also want to load some demo data.
##### Start a shell
To create the admin user, and load demo data, you first need to start a shell session on the *awx* container. In a new terminal session, use the `docker exec` command as follows to start the shell session:
```bash
(host)$ docker exec -it tools_awx_1 bash
```
This creates a session in the *awx* container, just as if you were using `ssh`, and allows you to execute commands within the running container.
##### Create an admin user
Before you can log into AWX, you need to create an admin user. With this user you will be able to create more users, and begin configuring the server. From within the container shell, run the following command:
```bash
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
(container)# awx-manage create_preload_data
```
**NOTE**
> This information will persist in the database running in the `tools_postgres_1` container, until the container is removed. You may periodically need to recreate
> this container, and thus the database, if the database schema changes in an upstream commit.
##### Building API Documentation
AWX includes support for building [Swagger/OpenAPI
documentation](https://swagger.io). To build the documentation locally, run:
```bash
(container)/awx_devel$ make swagger
```
This will write a file named `swagger.json` that contains the API specification
in OpenAPI format. A variety of online tools are available for translating
this data into more consumable formats (such as HTML). http://editor.swagger.io
is an example of one such service.
### Accessing the AWX web interface
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).
To log in, use the admin user and password you created above in [Create an admin user](#create-an-admin-user).
### Purging containers and images
When necessary, remove any AWX containers and images by running the following:
```bash
(host)$ make docker-clean
```

There are a host of other shortcuts, tools, and container configurations in the Makefile, designed for various purposes. Feel free to explore.
## What should I work on?
For feature work, take a look at the current [Enhancements](https://github.com/ansible/awx/issues?q=is%3Aissue+is%3Aopen+label%3Atype%3Aenhancement).
If an enhancement has someone assigned to it, then that person is responsible for working on it. If you feel like you could contribute, reach out to that person.
Fixing bugs, adding translations, and updating the documentation are always appreciated, so reviewing the backlog of issues is always a good place to start. We also list our specs in `/docs`: `/docs/current` covers things we are actively working on, and `/docs/future` captures ideas for future work and the direction we want that work to take. For extra information on debugging tools, see [Debugging](https://github.com/ansible/awx/blob/devel/docs/debugging.md).
**NOTE**
> If you work in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to `rebase`. A good idea before starting work is to have a discussion with us in the `#ansible-awx` channel on irc.freenode.net, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
**NOTE**
> If you're planning to develop features or fixes for the UI, please review the [UI Developer doc](./awx/ui/README.md).
## Submitting Pull Requests
Fixes and Features for AWX will go through the GitHub pull request process. Submit your pull request (PR) against the `devel` branch.
Here are a few things you can do to help the visibility of your change, and increase the likelihood that it will be accepted:
* No issues when running linters/code checkers
  * Python: flake8: `(container)/awx_devel$ make flake8`
  * JavaScript: Jasmine: `(container)/awx_devel$ make ui-test-ci`
* Write tests for new functionality, update/add tests for bug fixes
* Make the smallest change possible
* Write good commit messages. See [How to write a Git commit message](https://chris.beams.io/posts/git-commit/).
It's generally a good idea to discuss features with us first by engaging us in the `#ansible-awx` channel on irc.freenode.net, or on the [mailing list](https://groups.google.com/forum/#!forum/awx-project).
We like to keep our commit history clean, and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase`, rather than
`git pull`, and `git rebase`, rather than `git merge`.
Sometimes it might take us a while to fully review your PR. We try to keep the `devel` branch in good working order, and so we review requests carefully. Please be patient.
All submitted PRs will have the linter and unit tests run against them, and the status reported in the PR.
## Reporting Issues
We welcome your feedback, and encourage you to file an issue when you run into a problem. But before opening a new issue, we ask that you please view our [Issues guide](./ISSUES.md).
ANSIBLE TOWER BY RED HAT END USER LICENSE AGREEMENT
This end user license agreement (“EULA”) governs the use of the Ansible Tower software and any related updates, upgrades, versions, appearance, structure and organization (the “Ansible Tower Software”), regardless of the delivery mechanism.
1. License Grant. Subject to the terms of this EULA, Red Hat, Inc. and its affiliates (“Red Hat”) grant to you (“You”) a non-transferable, non-exclusive, worldwide, non-sublicensable, limited, revocable license to use the Ansible Tower Software for the term of the associated Red Hat Software Subscription(s) and in a quantity equal to the number of Red Hat Software Subscriptions purchased from Red Hat for the Ansible Tower Software (“License”), each as set forth on the applicable Red Hat ordering document. You acquire only the right to use the Ansible Tower Software and do not acquire any rights of ownership. Red Hat reserves all rights to the Ansible Tower Software not expressly granted to You. This License grant pertains solely to Your use of the Ansible Tower Software and is not intended to limit Your rights under, or grant You rights that supersede, the license terms of any software packages which may be made available with the Ansible Tower Software that are subject to an open source software license.
2. Intellectual Property Rights. Title to the Ansible Tower Software and each component, copy and modification, including all derivative works whether made by Red Hat, You or on Red Hat's behalf, including those made at Your suggestion and all associated intellectual property rights, are and shall remain the sole and exclusive property of Red Hat and/or it licensors. The License does not authorize You (nor may You allow any third party, specifically non-employees of Yours) to: (a) copy, distribute, reproduce, use or allow third party access to the Ansible Tower Software except as expressly authorized hereunder; (b) decompile, disassemble, reverse engineer, translate, modify, convert or apply any procedure or process to the Ansible Tower Software in order to ascertain, derive, and/or appropriate for any reason or purpose, including the Ansible Tower Software source code or source listings or any trade secret information or process contained in the Ansible Tower Software (except as permitted under applicable law); (c) execute or incorporate other software (except for approved software as appears in the Ansible Tower Software documentation or specifically approved by Red Hat in writing) into Ansible Tower Software, or create a derivative work of any part of the Ansible Tower Software; (d) remove any trademarks, trade names or titles, copyrights legends or any other proprietary marking on the Ansible Tower Software; (e) disclose the results of any benchmarking of the Ansible Tower Software (whether or not obtained with Red Hat’s assistance) to any third party; (f) attempt to circumvent any user limits or other license, timing or use restrictions that are built into, defined or agreed upon, regarding the Ansible Tower Software. You are hereby notified that the Ansible Tower Software may contain time-out devices, counter devices, and/or other devices intended to ensure the limits of the License will not be exceeded (“Limiting Devices”). If the Ansible Tower Software contains Limiting Devices, Red Hat will provide You materials necessary to use the Ansible Tower Software to the extent permitted. You may not tamper with or otherwise take any action to defeat or circumvent a Limiting Device or other control measure, including but not limited to, resetting the unit amount or using false host identification number for the purpose of extending any term of the License.
3. Evaluation Licenses. Unless You have purchased Ansible Tower Software Subscriptions from Red Hat or an authorized reseller under the terms of a commercial agreement with Red Hat, all use of the Ansible Tower Software shall be limited to testing purposes and not for production use (“Evaluation”). Unless otherwise agreed by Red Hat, Evaluation of the Ansible Tower Software shall be limited to an evaluation environment and the Ansible Tower Software shall not be used to manage any systems or virtual machines on networks being used in the operation of Your business or any other non-evaluation purpose. Unless otherwise agreed by Red Hat, You shall limit all Evaluation use to a single 30 day evaluation period and shall not download or otherwise obtain additional copies of the Ansible Tower Software or license keys for Evaluation.
4. Limited Warranty. Except as specifically stated in this Section 4, to the maximum extent permitted under applicable law, the Ansible Tower Software and the components are provided and licensed “as is” without warranty of any kind, expressed or implied, including the implied warranties of merchantability, non-infringement or fitness for a particular purpose. Red Hat warrants solely to You that the media on which the Ansible Tower Software may be furnished will be free from defects in materials and manufacture under normal use for a period of thirty (30) days from the date of delivery to You. Red Hat does not warrant that the functions contained in the Ansible Tower Software will meet Your requirements or that the operation of the Ansible Tower Software will be entirely error free, appear precisely as described in the accompanying documentation, or comply with regulatory requirements.
5. Limitation of Remedies and Liability. To the maximum extent permitted by applicable law, Your exclusive remedy under this EULA is to return any defective media within thirty (30) days of delivery along with a copy of Your payment receipt and Red Hat, at its option, will replace it or refund the money paid by You for the media. To the maximum extent permitted under applicable law, neither Red Hat nor any Red Hat authorized distributor will be liable to You for any incidental or consequential damages, including lost profits or lost savings arising out of the use or inability to use the Ansible Tower Software or any component, even if Red Hat or the authorized distributor has been advised of the possibility of such damages. In no event shall Red Hat's liability or an authorized distributor’s liability exceed the amount that You paid to Red Hat for the Ansible Tower Software during the twelve months preceding the first event giving rise to liability.
6. Export Control. In accordance with the laws of the United States and other countries, You represent and warrant that You: (a) understand that the Ansible Tower Software and its components may be subject to export controls under the U.S. Commerce Department’s Export Administration Regulations (“EAR”); (b) are not located in any country listed in Country Group E:1 in Supplement No. 1 to part 740 of the EAR; (c) will not export, re-export, or transfer the Ansible Tower Software to any prohibited destination or to any end user who has been prohibited from participating in US export transactions by any federal agency of the US government; (d) will not use or transfer the Ansible Tower Software for use in connection with the design, development or production of nuclear, chemical or biological weapons, or rocket systems, space launch vehicles, or sounding rockets or unmanned air vehicle systems; (e) understand and agree that if you are in the United States and you export or transfer the Ansible Tower Software to eligible end users, you will, to the extent required by EAR Section 740.17 obtain a license for such export or transfer and will submit semi-annual reports to the Commerce Department’s Bureau of Industry and Security, which include the name and address (including country) of each transferee; and (f) understand that countries including the United States may restrict the import, use, or export of encryption products (which may include the Ansible Tower Software) and agree that you shall be solely responsible for compliance with any such import, use, or export restrictions.
7. General. If any provision of this EULA is held to be unenforceable, that shall not affect the enforceability of the remaining provisions. This agreement shall be governed by the laws of the State of New York and of the United States, without regard to any conflict of laws provisions. The rights and obligations of the parties to this EULA shall not be governed by the United Nations Convention on the International Sale of Goods.
This document provides a guide for installing AWX.
## Table of contents
- [Getting started](#getting-started)
  - [Clone the repo](#clone-the-repo)
  - [AWX branding](#awx-branding)
  - [Prerequisites](#prerequisites)
  - [System Requirements](#system-requirements)
  - [AWX Tunables](#awx-tunables)
  - [Choose a deployment platform](#choose-a-deployment-platform)
  - [Official vs Building Images](#official-vs-building-images)
- [OpenShift](#openshift)
  - [Prerequisites](#prerequisites-1)
  - [Deploying to Minishift](#deploying-to-minishift)
  - [Pre-build steps](#pre-build-steps)
  - [PostgreSQL](#postgresql)
  - [Start the build](#start-the-build)
  - [Post build](#post-build)
  - [Accessing AWX](#accessing-awx)
- [Kubernetes](#kubernetes)
  - [Prerequisites](#prerequisites-2)
  - [Pre-build steps](#pre-build-steps-1)
  - [Configuring Helm](#configuring-helm)
  - [Start the build](#start-the-build-1)
  - [Accessing AWX](#accessing-awx-1)
  - [SSL Termination](#ssl-termination)
- [Docker or Docker Compose](#docker-or-docker-compose)
  - [Prerequisites](#prerequisites-3)
  - [Pre-build steps](#pre-build-steps-2)
    - [Deploying to a remote host](#deploying-to-a-remote-host)
    - [Inventory variables](#inventory-variables)
    - [Docker registry](#docker-registry)
    - [Proxy settings](#proxy-settings)
    - [PostgreSQL](#postgresql-1)
  - [Start the build](#start-the-build-2)
  - [Post build](#post-build-1)
  - [Accessing AWX](#accessing-awx-2)
  - [Maintenance using docker-compose](#maintenance-using-docker-compose)
## Getting started
### Clone the repo
If you have not already done so, you will need to clone, or create a local copy, of the [AWX repo](https://github.com/ansible/awx). For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone).
Once you have a local copy, run commands within the root of the project tree.
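For example:

```bash
$ git clone https://github.com/ansible/awx.git
$ cd awx
```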
### AWX branding
You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).
To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.
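For example, cloning from the parent directory that contains your `awx` clone, so the two repos end up side by side:

```bash
$ git clone https://github.com/ansible/awx-logos.git
```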
### Prerequisites
Before you can run a deployment, you'll need the following installed in your local environment:
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html), version 2.4+
### System Requirements

The system that runs the AWX service will need to satisfy the following requirements:

- At least 4 GB of memory
- At least 2 CPU cores
- At least 20 GB of disk space
- Running Docker, OpenShift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 9.4.
### AWX Tunables
**TODO** add tunable bits
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to an OpenShift cluster, a Kubernetes cluster, docker-compose, or a standalone Docker daemon. The remainder of this document will walk you through the process of building the images, and deploying them to your chosen platform.
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
In the sections below, you'll find deployment details and instructions for each platform:
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
- [Docker or Docker Compose](#docker-or-docker-compose)
### Official vs Building Images
When installing AWX you have the option of building your own images, or using the images provided on DockerHub (see [awx_web](https://hub.docker.com/r/ansible/awx_web/) and [awx_task](https://hub.docker.com/r/ansible/awx_task/)).
This is controlled by the following variables in the `inventory` file:
```
dockerhub_base=ansible
dockerhub_version=latest
```
If these variables are present, then all deployments will use these hosted images. If the variables are not present, then the images will be built during the install.
*dockerhub_base*
> The base location on DockerHub where the images are hosted (by default this pulls container images named `ansible/awx_web:tag` and `ansible/awx_task:tag`)
*dockerhub_version*
> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123
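For example, to build your own images rather than pull the hosted ones, you would comment these variables out in the inventory (a sketch):

```
# dockerhub_base=ansible
# dockerhub_version=latest
```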
## OpenShift
### Prerequisites
To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into the cluster and creating objects on it.
The default resource requests per pod require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/openshift/defaults/main.yml](/installer/openshift/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)
#### Deploying to Minishift
Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).
The Minishift VM contains a Docker daemon, which you can use to build the AWX images. This is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to setup your environment:
```bash
# Set DOCKER environment variable to point to the Minishift VM
$ eval $(minishift docker-env)
```
**Note**
> If you choose to not use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry, and granting the cluster access to it, or exposing the internal registry, and pushing the images into it.
### Pre-build steps
Before starting the build process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*openshift_host*
> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.
*awx_openshift_project*
> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.
*openshift_user*
> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.
*docker_registry*
> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.
*docker_registry_username*
> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a pod. The database is configured for persistence and will create a persistent volume claim named `postgresql`. By default it will claim 5GB from the available persistent volume pool. This can be tuned by setting a variable in the inventory file or on the command line during the `ansible-playbook` run.
```bash
ansible-playbook ... -e pg_volume_capacity=n
```
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_database`, and `pg_port` with the connection information. When setting `pg_hostname` the installer will assume you have configured the database in that location and will not launch the postgresql pod.
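A sketch of what those external-database settings might look like in the inventory file (the host and credentials shown are placeholders):

```
pg_hostname=postgresql.example.com
pg_port=5432
pg_database=awx
pg_username=awx
pg_password=awxpass
```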
### Start the build
To start the build, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.
If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc cluster login`.
To start the build and deployment, run the following (docker_registry_password is optional if using official images):
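The invocation will look something like the following sketch; the password values are placeholders, `docker_registry_password` can be omitted when using official images, and the token form shown uses `oc whoami -t` as described above:

```bash
# Run from the directory containing the inventory file and install.yml
$ cd installer
$ ansible-playbook -i inventory install.yml \
    -e openshift_password=developer \
    -e docker_registry_password=$(oc whoami -t)
```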
### Post build

After the playbook run completes, check the status of the deployment by running `oc get pods`:
```bash
# View the running pods
$ oc get pods
NAME READY STATUS RESTARTS AGE
awx-3886581826-5mv0l 4/4 Running 0 8s
postgresql-1-l85fh 1/1 Running 0 20m
```
In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:
```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
```
You will see output indicating that database migrations are running.
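The log lines will look similar to the migration output shown later, in the [Docker or Docker Compose](#docker-or-docker-compose) section; for example:

```
Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Running migrations:
  Applying contenttypes.0001_initial... OK
  ...
```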
### Accessing AWX

The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment, and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:
```bash
# View available routes
$ oc get routes
NAME          HOST/PORT                             PATH      SERVICES      PORT      TERMINATION   WILDCARD
awx-web-svc   awx-web-svc-awx.192.168.64.2.nip.io             awx-web-svc   http      edge/Allow    None
```
The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).
Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
## Kubernetes
### Prerequisites
A Kubernetes deployment will require you to have access to a Kubernetes cluster, as well as the following tools:

- `kubectl`
- `helm`
The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.
The default resource requests per pod require:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/kubernetes/defaults/main.yml](/installer/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
### Pre-build steps
Before starting the build process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section. Make sure the OpenShift and standalone Docker sections are commented out:
*kubernetes_context*
> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use.
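For example, you can list the contexts `kubectl` knows about, and check which one is currently active, with:

```bash
$ kubectl config get-contexts
$ kubectl config current-context
```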
*awx_kubernetes_namespace*
> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist.
*docker_registry_*
> These settings should be used if building your own base images. You'll need access to an external registry, and you are responsible for making sure your Kubernetes cluster can talk to it and use it. If these are undefined and the `dockerhub_` configuration settings are uncommented, then the images will be pulled from DockerHub instead.
### Configuring Helm
If you want the AWX installer to manage creating the database pod (rather than installing and configuring PostgreSQL on your own), then you will need a working `helm` installation; you can find details here: [https://docs.helm.sh/using_helm/#quickstart-guide](https://docs.helm.sh/using_helm/#quickstart-guide).

Newer Kubernetes clusters with RBAC enabled will need a service account created for Helm; follow the instructions here: [https://docs.helm.sh/using_helm/#role-based-access-control](https://docs.helm.sh/using_helm/#role-based-access-control)
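A minimal sketch following the Helm RBAC guide; the `tiller` service account name is the conventional choice from that guide, not something this installer requires:

```bash
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
```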
### Start the build
After making changes to the `inventory` file, use `ansible-playbook` to begin the install:
```bash
$ ansible-playbook -i inventory install.yml
```
### Post build
After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace awx with the namespace you used):
```bash
# View the running pods, it may take a few minutes for everything to be marked in the Running state
$ kubectl get pods --namespace awx
NAME READY STATUS RESTARTS AGE
awx-2558692395-2r8ss 4/4 Running 0 29s
awx-postgresql-355348841-kltkn 1/1 Running 0 1m
```
### Accessing AWX
The AWX web interface is running in the AWX pod behind the `awx-web-svc` service:
The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it more explicitly. You can see what Kubernetes knows about the Ingress with:
```bash
kubectl get ing --namespace awx
NAME HOSTS ADDRESS PORTS AGE
awx-web-svc * 35.227.x.y 80 3m
```
If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to the address and access the AWX interface. For some providers it can take a few minutes to allocate and make this accessible; for other providers, it may require manual intervention.
### SSL Termination
Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX through HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).
## Docker or Docker Compose
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group, refer to the documentation for details)
If you're installing using Docker Compose, you'll need [Docker Compose](https://docs.docker.com/compose/install/).
### Pre-build steps
#### Deploying to a remote host
By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file by commenting out `localhost`, and adding the remote host.
For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:
```yaml
# localhost ansible_connection=local
awx-server
[all:vars]
...
```
In the above example, image build tasks will be delegated to `localhost`, which is typically where the clone of the AWX project exists. Built images will be archived, copied to remote host, and imported into the remote Docker image cache. Tasks to start the AWX containers will then execute on the remote host.
If you choose to use the official images then the remote host will be the one to pull those images.
**Note**
> You may also want to set additional variables to control how Ansible connects to the host. For more information about this, view [Behavioral Inventory Parameters](http://docs.ansible.com/ansible/latest/intro_inventory.html#id12).
> As mentioned above, in [Prerequisites](#prerequisites-1), the prerequisites are required on the remote host.
> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
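For example, on most Linux distributions you could add the connecting user to the `docker` group like so (the username is a placeholder; log out and back in for the change to take effect):

```bash
$ sudo usermod -aG docker awx-deploy-user
```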
#### Inventory variables
Before starting the build process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*postgres_data_dir*
> If you're using the default PostgreSQL container (see [PostgreSQL](#postgresql-1) below), provide a path that can be mounted to the container, and where the database can be persisted.
*host_port*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. Defaults to *80*.
*use_docker_compose*
> Switch to ``true`` to use Docker Compose instead of the standalone Docker install.
*docker_compose_dir*
> When using docker-compose, the `docker-compose.yml` file will be created there (default `/var/lib/awx`).
#### Docker registry
If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:
*docker_registry*
> IP address and port, or URL, for accessing a registry.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Defaults to *awx*.
*docker_registry_username*
> Username of the user that will push images to the registry. Defaults to *developer*.
*docker_remove_local_images*
> Due to the way that the docker_image module behaves, images will not be pushed to a remote repository if they are present locally. Set this to delete local versions of the images that will be pushed to the remote. This will fail if containers are currently running from those images.
**Note**
> These settings are ignored if using official images.
#### Proxy settings
*http_proxy*
> IP address and port, or URL, for using an http_proxy.
*https_proxy*
> IP address and port, or URL, for using an https_proxy.
*no_proxy*
> Exclude IP address or URL from the proxy.
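A sketch of what these proxy settings might look like in the inventory file (the hosts and ports shown are placeholders):

```
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=internal.example.com
```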
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_database`, and `pg_port` with the connection information.
### Start the build
If you are not pushing images to a Docker registry, start the build by running the following:
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory install.yml
```
If you're pushing built images to a repository, then use the `-e` option to pass the registry password, replacing *password* with the password of the username assigned to `docker_registry_username` (note that you will also need to remove `dockerhub_base` and `dockerhub_version` from the inventory file), as shown in the sketch below.
After the playbook run completes, Docker will report up to 5 running containers. If you chose to use an existing PostgreSQL database, then it will report 4. You can view the running containers using the `docker ps` command, as follows:
```bash
CONTAINER ID   IMAGE              COMMAND                  CREATED         STATUS              PORTS                                NAMES
e240ed8209cd   awx_task:1.0.0.8   "/tini -- /bin/sh ..."   2 minutes ago   Up About a minute   8052/tcp                             awx_task
1cfd02601690   awx_web:1.0.0.8    "/tini -- /bin/sh ..."   2 minutes ago   Up About a minute   0.0.0.0:80->8052/tcp                 awx_web
55a552142bcd   memcached:alpine   "docker-entrypoint..."   2 minutes ago   Up 2 minutes        11211/tcp                            memcached
84011c072aad   rabbitmq:3         "docker-entrypoint..."   2 minutes ago   Up 2 minutes        4369/tcp, 5671-5672/tcp, 25672/tcp   rabbitmq
97e196120ab3   postgres:9.6       "docker-entrypoint..."   2 minutes ago   Up 2 minutes        5432/tcp                             postgres
```
If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the docker-compose.yml file is created (by default, `awx`).
Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT, as sketched below.
```bash
Apply all migrations: sso, taggit, sessions, djcelery, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying taggit.0001_initial... OK
Applying taggit.0002_auto_20150616_2121... OK
Applying main.0001_initial... OK
...
```
Once migrations complete, you will see log output similar to the following:
```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx
(changed: True)
Creating instance group tower
Added instance awx to tower
(changed: True)
...
```
### Accessing AWX
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
### Maintenance using docker-compose
After the installation, maintenance operations with docker-compose can be done by using the `docker-compose.yml` file created at the location specified by `docker_compose_dir`.
Among the possible operations, you may:
- Stop AWX: `docker-compose stop`
- Upgrade AWX: `docker-compose pull && docker-compose up --force-recreate`
See the [docker-compose documentation](https://docs.docker.com/compose/) for details.
Use the GitHub [issue tracker](https://github.com/ansible/awx/issues) for filing bugs. In order to save time, and help us respond to issues quickly, make sure to fill out as much of the issue template as possible. Version information and an accurate reproduction scenario are critical to helping us identify the problem.
Please don't use the issue tracker as a way to ask how to do something. Instead, use the [mailing list](https://groups.google.com/forum/#!forum/awx-project), and the `#ansible-awx` channel on irc.freenode.net to get help.
Before opening a new issue, please use the issue search feature to see if what you're experiencing has already been reported. If you have any extra detail to provide, please comment. Otherwise, rather than posting a "me too" comment, please consider giving it a ["thumbs up"](https://github.com/blog/2119-add-reactions-to-pull-requests-issues-and-comment) to give us an indication of the severity of the problem.
### UI Issues
When reporting issues for the UI, we also appreciate screen shots and any error messages from the web browser's console. It's not unusual for browser extensions and plugins to cause problems; reporting those will also help speed up analyzing and resolving UI bugs.
### API and backend issues
For the API and backend services, please capture all of the logs that you can from the time the problem occurred.
## How issues are resolved
We triage our issues into high, medium, and low, and tag them with the relevant component (e.g. api, ui, installer, etc.). We typically focus on higher priority issues first. There aren't hard and fast rules for determining the severity of an issue, but generally high priority issues have an increased likelihood of breaking existing functionality, and negatively impacting a large number of users.
If your issue isn't considered high priority, then please be patient as it may take some time to get to it.
### Issue states
`state:needs_triage` This issue has not been looked at by a person yet and still needs to be triaged. This is the initial state for all new issues/pull requests.
`state:needs_info` The issue needs more information. This could be more debug output or more specifics about the system, such as version information: any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer, or when a person is less familiar with the area of the code base the issue is for.
`state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.
`state:in_progress` The issue is actively being worked on, and you should be in contact with whoever is assigned if you are also working on, or plan to work on, a similar issue.
`state:in_testing` The issue or pull request is currently being tested.
### AWX Issue Bot (awxbot)
We use an issue bot to help us label and organize incoming issues. This bot, awxbot, is a version of [ansible/ansibullbot](https://github.com/ansible/ansibullbot).
#### Overview
AWXbot performs many functions:
* Respond quickly to issues and pull requests.
* Identify the maintainers responsible for reviewing pull requests.
* Identify issue and pull request types and components (e.g. type:bug, component:api).
#### For issue submitters
The bot requires a minimal subset of information from the issue template:
* issue type
* component
* summary
If any of those items are missing, your issue will still get the `needs_triage` label, but it may be responded to more slowly than issues that include the complete set of information. So please use the template whenever possible.
Currently you can expect the bot to add common labels such as `state:needs_triage`, `type:bug`, `type:enhancement`, `component:ui`, etc...
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will remain on your issue until a person has looked at it.
#### For pull request submitters
The bot requires a minimal subset of information from the pull request template:
* issue type
* component
* summary
If any of those items are missing, your pull request will still get the `needs_triage` label, but it may be responded to more slowly than pull requests that include a complete set of information.
Currently you can expect awxbot to add common labels such as `state:needs_triage`, `type:bug`, `component:docs`, etc...
These labels are determined by the template data. Please use the template and fill it out as accurately as possible.
The `state:needs_triage` label will remain on your pull request until a person has looked at it.
You can also expect the bot to CC maintainers of specific areas of the code; this notifies them that there is a pull request by placing a comment on the pull request.
The comment will look something like `CC @matburt @wwitzel3 ...`.
AWX provides a web-based user interface, REST API, and task engine built on top of [Ansible](https://github.com/ansible/ansible). It is the upstream project for [Tower](https://www.ansible.com/tower), a commercial derivative of AWX.
Resources
---------
To install AWX, please view the [Install guide](./INSTALL.md).
To learn more about using AWX and Tower, view the [Tower docs site](http://docs.ansible.com/ansible-tower/index.html).
The AWX Project Frequently Asked Questions can be found [here](https://www.ansible.com/awx-project-faq).
Contributing
------------
- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
- All code submissions are done through pull requests against the `devel` branch.
- All contributors must use `git commit --signoff` for any commit to be merged, and agree that usage of `--signoff` constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md).
- Take care to make sure no merge commits are in the submission; use `git rebase` rather than `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net, and talk about what you would like to do or add first. This not only helps everyone know what's going on, it also helps save time and effort, if the community decides some changes are needed.
Reporting Issues
----------------
If you're experiencing a problem, we encourage you to open an issue, and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions, or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com).
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us:
- Join the `#ansible-awx` channel on irc.freenode.net
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
- [Open an Issue](https://github.com/ansible/awx/issues)
License
-------
[Apache v2](./LICENSE.md)
Refer to `LOCALIZATION.md` for translation and localization help.
Make a POST request to this resource to launch a job. If any passwords or variables are required, they should be passed in via POST data. To determine what values are required to launch a job based on this job template, make a GET request to this endpoint.
This page lists OAuth 2 utility endpoints used for authorization, token refresh, and revocation. Note that endpoints other than `/api/o/authorize/` are not meant to be used in browsers and do not support HTTP GET. The endpoints here strictly follow the [RFC specs for OAuth2](https://tools.ietf.org/html/rfc6749), so please refer to those for details. Note that the AWX network location defaults to `http://localhost:8013` in the examples below.
## Create Token for an Application using Authorization code grant type
Given an application "AuthCodeApp" of grant type `authorization-code`,
from the client app, the user makes a GET to the Authorize endpoint with:
* `response_type`
* `client_id`
* `redirect_uris`
* `scope`
AWX will respond with the authorization `code` and `state`
to the `redirect_uri` specified in the application. The client application will then make a POST to the
`/api/o/token/` endpoint on AWX with:
* `code`
* `client_id`
* `client_secret`
* `grant_type`
* `redirect_uri`
AWX will respond with the `access_token`, `token_type`, `refresh_token`, and `expires_in`. For more
information on testing this flow, refer to [django-oauth-toolkit](http://django-oauth-toolkit.readthedocs.io/en/latest/tutorial/tutorial_01.html#test-your-authorization-server).
## Create Token for an Application using Implicit grant type
Suppose we have an application "admin's app" of grant type `implicit`.
In the API browser, first make sure the user is logged in via session auth, then visit the authorization endpoint.
ANSIBLE TOWER BY RED HAT END USER LICENSE AGREEMENT
This end user license agreement (“EULA”) governs the use of the Ansible Tower software and any related updates, upgrades, versions, appearance, structure and organization (the “Ansible Tower Software”), regardless of the delivery mechanism.
1. License Grant. Subject to the terms of this EULA, Red Hat, Inc. and its affiliates (“Red Hat”) grant to you (“You”) a non-transferable, non-exclusive, worldwide, non-sublicensable, limited, revocable license to use the Ansible Tower Software for the term of the associated Red Hat Software Subscription(s) and in a quantity equal to the number of Red Hat Software Subscriptions purchased from Red Hat for the Ansible Tower Software (“License”), each as set forth on the applicable Red Hat ordering document. You acquire only the right to use the Ansible Tower Software and do not acquire any rights of ownership. Red Hat reserves all rights to the Ansible Tower Software not expressly granted to You. This License grant pertains solely to Your use of the Ansible Tower Software and is not intended to limit Your rights under, or grant You rights that supersede, the license terms of any software packages which may be made available with the Ansible Tower Software that are subject to an open source software license.
2. Intellectual Property Rights. Title to the Ansible Tower Software and each component, copy and modification, including all derivative works whether made by Red Hat, You or on Red Hat's behalf, including those made at Your suggestion and all associated intellectual property rights, are and shall remain the sole and exclusive property of Red Hat and/or its licensors. The License does not authorize You (nor may You allow any third party, specifically non-employees of Yours) to: (a) copy, distribute, reproduce, use or allow third party access to the Ansible Tower Software except as expressly authorized hereunder; (b) decompile, disassemble, reverse engineer, translate, modify, convert or apply any procedure or process to the Ansible Tower Software in order to ascertain, derive, and/or appropriate for any reason or purpose, including the Ansible Tower Software source code or source listings or any trade secret information or process contained in the Ansible Tower Software (except as permitted under applicable law); (c) execute or incorporate other software (except for approved software as appears in the Ansible Tower Software documentation or specifically approved by Red Hat in writing) into Ansible Tower Software, or create a derivative work of any part of the Ansible Tower Software; (d) remove any trademarks, trade names or titles, copyright legends or any other proprietary marking on the Ansible Tower Software; (e) disclose the results of any benchmarking of the Ansible Tower Software (whether or not obtained with Red Hat’s assistance) to any third party; (f) attempt to circumvent any user limits or other license, timing or use restrictions that are built into, defined or agreed upon, regarding the Ansible Tower Software. You are hereby notified that the Ansible Tower Software may contain time-out devices, counter devices, and/or other devices intended to ensure the limits of the License will not be exceeded (“Limiting Devices”). If the Ansible Tower Software contains Limiting Devices, Red Hat will provide You materials necessary to use the Ansible Tower Software to the extent permitted. You may not tamper with or otherwise take any action to defeat or circumvent a Limiting Device or other control measure, including but not limited to, resetting the unit amount or using a false host identification number for the purpose of extending any term of the License.
3. Evaluation Licenses. Unless You have purchased Ansible Tower Software Subscriptions from Red Hat or an authorized reseller under the terms of a commercial agreement with Red Hat, all use of the Ansible Tower Software shall be limited to testing purposes and not for production use (“Evaluation”). Unless otherwise agreed by Red Hat, Evaluation of the Ansible Tower Software shall be limited to an evaluation environment and the Ansible Tower Software shall not be used to manage any systems or virtual machines on networks being used in the operation of Your business or any other non-evaluation purpose. Unless otherwise agreed by Red Hat, You shall limit all Evaluation use to a single 30 day evaluation period and shall not download or otherwise obtain additional copies of the Ansible Tower Software or license keys for Evaluation.
4. Limited Warranty. Except as specifically stated in this Section 4, to the maximum extent permitted under applicable law, the Ansible Tower Software and the components are provided and licensed “as is” without warranty of any kind, expressed or implied, including the implied warranties of merchantability, non-infringement or fitness for a particular purpose. Red Hat warrants solely to You that the media on which the Ansible Tower Software may be furnished will be free from defects in materials and manufacture under normal use for a period of thirty (30) days from the date of delivery to You. Red Hat does not warrant that the functions contained in the Ansible Tower Software will meet Your requirements or that the operation of the Ansible Tower Software will be entirely error free, appear precisely as described in the accompanying documentation, or comply with regulatory requirements.
5. Limitation of Remedies and Liability. To the maximum extent permitted by applicable law, Your exclusive remedy under this EULA is to return any defective media within thirty (30) days of delivery along with a copy of Your payment receipt and Red Hat, at its option, will replace it or refund the money paid by You for the media. To the maximum extent permitted under applicable law, neither Red Hat nor any Red Hat authorized distributor will be liable to You for any incidental or consequential damages, including lost profits or lost savings arising out of the use or inability to use the Ansible Tower Software or any component, even if Red Hat or the authorized distributor has been advised of the possibility of such damages. In no event shall Red Hat's liability or an authorized distributor’s liability exceed the amount that You paid to Red Hat for the Ansible Tower Software during the twelve months preceding the first event giving rise to liability.
6. Export Control. In accordance with the laws of the United States and other countries, You represent and warrant that You: (a) understand that the Ansible Tower Software and its components may be subject to export controls under the U.S. Commerce Department’s Export Administration Regulations (“EAR”); (b) are not located in any country listed in Country Group E:1 in Supplement No. 1 to part 740 of the EAR; (c) will not export, re-export, or transfer the Ansible Tower Software to any prohibited destination or to any end user who has been prohibited from participating in US export transactions by any federal agency of the US government; (d) will not use or transfer the Ansible Tower Software for use in connection with the design, development or production of nuclear, chemical or biological weapons, or rocket systems, space launch vehicles, or sounding rockets or unmanned air vehicle systems; (e) understand and agree that if you are in the United States and you export or transfer the Ansible Tower Software to eligible end users, you will, to the extent required by EAR Section 740.17 obtain a license for such export or transfer and will submit semi-annual reports to the Commerce Department’s Bureau of Industry and Security, which include the name and address (including country) of each transferee; and (f) understand that countries including the United States may restrict the import, use, or export of encryption products (which may include the Ansible Tower Software) and agree that you shall be solely responsible for compliance with any such import, use, or export restrictions.
7. General. If any provision of this EULA is held to be unenforceable, that shall not affect the enforceability of the remaining provisions. This agreement shall be governed by the laws of the State of New York and of the United States, without regard to any conflict of laws provisions. The rights and obligations of the parties to this EULA shall not be governed by the United Nations Convention on the International Sale of Goods.