Update some links and notes in the changelog
SUMMARY
Fix a typo, remove a duplicate change note, fix a wrong link, and add a link to the UI virtualenv removal
Reviewed-by: Ryan Petrello <None>
Allow one to select non-global execution environments for organizations
Allow one to select non-global EE when editing an Organization.
See: #9592
All such EEs should be present as choices when editing the Default organization.
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
fix port conflict in dev cluster
SUMMARY
problem: the loop adds 100 to ports 7899 and 7999, so the next iteration yields 7999 and 8099, and the 7999 conflicts with the previous iteration's port
fix: add 1000 per iteration instead (see the sketch below)
Also, haproxy was being defined twice; now it renders once.
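A quick sketch of the arithmetic (illustrative only; the real loop lives in the dev-cluster compose templates):

```python
# Illustrative reproduction of the port collision: with a +100 stride,
# one node's second port equals the next node's first port.
base_ports = (7899, 7999)

for stride in (100, 1000):
    allocated = []
    for node in range(3):  # three cluster nodes, for illustration
        allocated.extend(base + node * stride for base in base_ports)
    conflicts = sorted({p for p in allocated if allocated.count(p) > 1})
    print(f"stride={stride}: {sorted(allocated)} conflicts={conflicts}")

# stride=100  -> 7999 and 8099 are each allocated twice
# stride=1000 -> all six ports are distinct
```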
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
API
AWX VERSION
awx: 17.0.1
Reviewed-by: Seth Foster <None>
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Instruct git to ignore the .vscode/ directory
SUMMARY
Instruct git to ignore the .vscode/ directory
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
API
AWX VERSION
awx: 18.0.0
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Christian Adams <rooftopcellist@gmail.com>
add ouiaID to select and cancel buttons on modals
SUMMARY
Add ouiaId prop to select and cancel button within modals
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Fixes silent error on SCM subform
SUMMARY
This addresses #9373. It prevents the user from selecting both Update on launch and Update on project update. It also adds a bit of info to the tooltip, including a link to the project in question, so the user can edit the project to allow updating on launch and on project update.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
UI
AWX VERSION
ADDITIONAL INFORMATION
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Create a wrapper directory for the private data dir
Reviewed-by: None <None>
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Elijah DeLee <kdelee@redhat.com>
Fixes Several Bugs
SUMMARY
This addresses #9485 (Job Template project field validation), #9319 (the Job Details view would only show job type run, even if it was job type check), and #7516 (changes the Completed Jobs tab for a JT or WFJT to show Jobs, since it shows completed and pending/running jobs).
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
UI
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: John Mitchell <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Prevent users from selecting credentials that prompt for passwords on workflow nodes and schedules
SUMMARY
link #8921
If a user selects a job template with default credentials that prompt for passwords (but does not prompt for credentials), then the user should not be allowed to create the node and a different JT must be selected.
If a user selects a credential that prompts for passwords when creating/editing a workflow node or schedule, then we show an error.
If a user removes a credential that exists in the default collection of credentials on the JT, then it must be replaced, and we show an error.
If a user attempts to create a schedule for a job template with default credentials that prompt (but does not prompt for credentials), then the API responds with an error.
I believe this UX is consistent with the old UI but I am double checking that now.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
UI
Reviewed-by: Kersom <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Fixes crashing wizard, and adds error handle on adding role
SUMMARY
This addresses #8769. It also adds error handling if there is some sort of request error during the submit request.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
UI
AWX VERSION
ADDITIONAL INFORMATION
Reviewed-by: John Mitchell <None>
Adds ouiaId's to various buttons
SUMMARY
@tiagodread @unlikelyzero @one-t @akus062381 this will likely break something because I changed some existing ouia-id's so that they are a consistent structure.
^^ Let's let one of them merge this
I also removed an unused component
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: John Hill <johill@redhat.com>
Adds support for html in custom login text
SUMMARY
link #7603
I couldn't come up with a way to do this without breaking up the component and discontinuing use of the LoginPage PF component. This is because LoginPage expects the textContent component (what we use to display the custom login text) to be a string. By using the underlying LoginPage components I reconstructed the login page and got more control over that prop.
The custom message in the old UI supported both strings and HTML:
So we need to support rendering HTML but we need to do it in a safe way. Our solution to that was https://docs.angularjs.org/api/ngSanitize. React doesn't seem to have anything like this built in so I went looking for outside help. html-entities is already included in our project but as best as I can tell that lib is mainly focused on swapping special characters out for html entities. I wanted something that was going to strip the HTML of bits that could be exploited by a malicious actor.
I settled on https://www.npmjs.com/package/sanitize-html because it was a) small and b) actively maintained. The API was simple and let me sanitize the HTML before setting it using dangerouslySetInnerHTML. If we need to tweak the configuration away from the default values then we can certainly do that.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
UI
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Only attempt to fetch event options on non workflow jobs
SUMMARY
link #9640
This was fallout from output search filtering. We need this request for non workflow jobs so that we can build the search options.
ISSUE TYPE
Bugfix Pull Request
COMPONENT NAME
UI
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Add support for replace/revert on secret credential fields
SUMMARY
link #7256
Note that this only applies to editing an existing credential. You should not see this button on fields when adding a new credential.
When editing an existing credential the replace button should show up on fields where secret is true and the field has an existing value that is not an external credential. Examples:
Fields with external credentials should look the same:
Initially the button tooltip should say Replace. Clicking Replace will clear out the previously saved value and enable the form field:
The tooltip will change to Revert. Clicking Revert will take the field back to its original state.
I also noticed a race condition which would result in the input fields (subform) not being populated due to the form rendering before the request(s) were completed. I fixed this.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
UI
Reviewed-by: Kersom <None>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
Add support for filtering and pagination on job output
SUMMARY
link #6612
link #5906
This PR adds the ability to filter job events and also includes logic to handle fetching filtered job events across different pages.
Note that the verbosity dropdown included in #5906 is not included in this work. I don't think that's possible without api changes.
As part of this work, I converted JobOutput.jsx from a class based component to a functional component. I've tried my best to make sure that all existing functionality has remained the same by comparing the experience of this branch to devel.
Like the old UI, the output filter is disabled while the job is running.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
UI
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Marliana Lara <marliana.lara@gmail.com>
Enable Ansible version to be collected from EEs
SUMMARY
Connecting issue #9473
This PR, along with this Ansible-Runner PR, enables us to obtain the Ansible (core) version for each execution environment that is utilized. This info can be gathered from the new ansible_version column on the main_unifiedjobs table.
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
API
AWX VERSION
awx: 17.0.1
ADDITIONAL INFORMATION
Screenshot/example of the DB output:
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Bianca Henderson <beeankha@gmail.com>
Reviewed-by: Ladislav Smola <lsmola@redhat.com>
Reviewed-by: Shane McDonald <me@shanemcd.com>
Do not allow user to modify EE managed by tower
Do not allow a user to attempt to modify EEs managed by Tower.
See: #9250
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Tiago Góes <tiago.goes2009@gmail.com>
refactor payload construction for awxkit
This fixes container_group creation to allow passing
"is_container_group" and "credential" to the "create" method
on instance groups, and refactors other page objects
to use a common utility function to eliminate copy-pasted code
This will help us set is_container_group correctly, as is now needed since de52ade
Reviewed-by: Ryan Petrello <None>
Remove custom virtual env
Remove custom virtualenvs from the UI.
Also, surface missing-resource warnings on list items for UJTs that were using
custom virtualenvs, as well as on the related details pages.
See: #9190
Also: #9207
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Mat Wilson <mawilson@redhat.com>
Add support for Centrify Vault as a credential plugin
replaces #8952
cc @surbhijain1502 @Asharma-bhavna @badrogh
Reviewed-by: Ryan Petrello <None>
Reviewed-by: Jake McDermott <yo@jakemcdermott.me>
Reviewed-by: Chris Meyers <None>
Don't require is_container_group in the payload when creating InstanceGroups
Reviewed-by: Elijah DeLee <kdelee@redhat.com>
Reviewed-by: Ryan Petrello <None>
Hashicorp Vault Credential Plugin : Support for namespace
SUMMARY
Added the support for Vault Namespace (Enterprise feature)
ISSUE TYPE
Feature Pull Request
COMPONENT NAME
credential_plugins/hashivault.py
AWX VERSION
1.7.0
ADDITIONAL INFORMATION
Adding specific X-Vault-Namespace header when Namespace option is set.
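A minimal sketch of the behavior, assuming a plain HTTP call against the Vault API (the actual plugin code lives in credential_plugins/hashivault.py; names here are illustrative):

```python
import requests

def read_vault_secret(vault_url, token, secret_path, namespace=None):
    """Read a secret from HashiCorp Vault, optionally scoped to a namespace."""
    headers = {'X-Vault-Token': token}
    if namespace:
        # Namespaces are a Vault Enterprise feature; the header scopes the
        # request to the given namespace.
        headers['X-Vault-Namespace'] = namespace
    response = requests.get(f'{vault_url}/v1/{secret_path}', headers=headers, timeout=30)
    response.raise_for_status()
    return response.json()['data']
```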
Reviewed-by: Ryan Petrello <None>
* fix typing space character
* hide cursor when editor doesn't have user focus
* show help text any time editor is in focus
* fix content shifting when help text appears/disappears
* remove 80 character "print limit" line
- remove requirements_ansible logic from the update script
- remove the need for py2-specific system dependencies
- update to the latest pip-tools and move to the new long format
(https://github.com/jazzband/pip-tools/pull/1237)
- fix a few busted references to receptorctl @ devel
- Organization.default_environment
- Project.default_environment
- JobTemplate.execution_environment
- WorkflowJobTemplate.execution_environment
System jobs are not editable by anyone other than a system admin, so
we don't need to check. It appears that unified job templates can't
be created or edited outside of the endpoints for the specific types.
They've made breaking changes that are going to take
some deeper investigation to update awxkit to use
This is only used for development purposes, and should
have no impact on the "awx" cli entry point
Add EE to the following screens:
* Job Template
* Organization
* Project
* Workflow Job Template
Also, add a new lookup component - ExecutionEnvironmentLookup.
See: https://github.com/ansible/awx/issues/9189
moved AWXKit pull additions to a separate PR, fixed some changes that were causing linting errors in tests, and added copy to show_capabilities for the EE serializer
can_add: gets an explicit role to check against, `'execution_environment_admin_role'`
can_change: leverages `self.check_related()` for the case where the Org is not changing, but also adds an explicit check for the EE Admin Role when the Org is changing to a different Org.
* add ee option on factories for organizations
* add new lines
* remove inventory from the options
* remove line
* remove line from projects
* fix the tuple
* fix lint problems
- In K8S-based installs, only container groups are intended to be used
for playbook execution (JTs, adhoc, inventory updates), so in this
scenario, other job types have a task impact of zero.
- In K8S-based installs, traditional instances have *zero* capacity
(because they're only members of the control plane where services
- http/s, local control plane execution - run)
- This commit also includes some changes that allow for the task manager
to launch tasks with task_impact=0 on instances that have capacity=0
(previously, an instance with zero capacity would never be selected
as the "execution node"
This means that when IS_K8S=True, any Job Template associated with an
Instance Group will never actually go from pending -> running (because
there's no capacity - all playbooks must run through Container Groups).
For an improved ux, our intention is to introduce logic into the
operator install process such that the *default* group that's created at
install time is a *Container Group* that's configured to point at the
K8S cluster where awx itself is deployed.
The part where we pass in the runner params to the processor phase is
legit. Need to investigate why the fact_cache directory is no longer nested
under job.id.
- a new unique name field to EE
- a new configure-Tower-in-Tower setting DEFAULT_EXECUTION_ENVIRONMENT
- an Org-level execution_environment_admin_role
- a default_environment field on Project
- a new Container Registry credential type
- order EEs by reverse of the created timestamp
- a method to resolve which EE to use on jobs (see the sketch below)
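A hedged sketch of the resolution order these fields imply, built from the fields listed above (the exact precedence logic in AWX may differ):

```python
DEFAULT_EXECUTION_ENVIRONMENT = None  # stand-in for the new global setting

def resolve_execution_environment(job_template):
    """Pick the most specific EE available, falling back level by level."""
    for candidate in (
        job_template.execution_environment,             # JT-level override
        job_template.project.default_environment,       # project default
        job_template.organization.default_environment,  # org default
    ):
        if candidate is not None:
            return candidate
    return DEFAULT_EXECUTION_ENVIRONMENT                # global fallback
```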
Add organization as part of creating/editing an execution environment
If one is a `system admin` the Organization is an optional field. Not
providing an Organization makes the execution environment globally
available.
If one is an `org admin` the Organization is a required field.
See: https://github.com/ansible/awx/issues/7887
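A minimal sketch of that rule (names are illustrative, not the actual serializer code):

```python
def validate_ee_organization(user, organization):
    # System admins may omit the organization, which makes the execution
    # environment globally available; everyone else must provide one.
    if organization is None and not user.is_superuser:
        raise ValueError('organization is required unless you are a system administrator')
    return organization
```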
Update to AWX execution environment
use the special 2.9 container image
revert setting back for merge
Fix another permission error by mapping 2 folders
also create folders before running
* Migrate management jobs list to tables
* Use cancel link variant for consistency with other prompts
* Add basic test coverage for sysjobs
* Remove select-all from mgmt jobs
* Remove unneeded component variables
* Fix missing schedule breadcrumb
* Optimize data fetching with useCallback
- removes local_docker installer and points community users to our development environment (make docker-compose)
- provides a migration path from Local Docker Compose installations --> the dev environment
- the dev env can now be configured to use an external database
- consolidated the Local Docker and dev env docker-compose.yml files into one template file, used by the dockerfile role
- added a 'sources' role to template out config files
- the postgres data dir is no longer a bind-mount, it is a docker volume
- the redis socket is no longer a bind-mount, it is a docker volume
- the local_settings.py.docker-compose file no longer needs to be copied over in the dev env
- Create tmp rsyslog.conf in rsyslog volume to avoid cross-linking. Previously, the tmp code-generated rsyslog.conf was being written to /tmp (by default). As a result, we were attempting to shutil.move() across volumes.
- move k8s image build and push roles under tools/ansible
- See tools/docker-compose/README.md for usage of these changes
this middleware already existed, and we were trying to log this
data, but it was not working.
The hope is that these logs can be shipped via external logging
so we could use kibana to track the response time of different endpoints
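A sketch of what such timing middleware can look like in the new-style Django shape; the logger name and log fields are assumptions:

```python
import logging
import time

logger = logging.getLogger('awx.api.performance')  # logger name assumed

class RequestTimingMiddleware:
    """Log the wall-clock time spent handling each request."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.perf_counter()
        response = self.get_response(request)
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.info('%s %s took %.1fms', request.method, request.path, elapsed_ms)
        return response
```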
Fixed the customized notification returning incorrect values for host_status_counts
Update notifications.py
Removed if condition
Added exception handling
A nitpick
This fixes the issue addressed in #9273
The symlinks are created on the build container as opposed to the final
container, causing the awx-manage command to break.
Signed-off-by: Siva Renganathan <siva.rg@protonmail.com>
Various points (e.g. created, running, processing events) are
structured into json format and output to /var/log/tower/job_lifecycle.log
As part of this work, the DependencyGraph is reworked to return
which job object is doing the blocking, rather than a boolean.
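A minimal sketch of emitting one structured lifecycle record (the logger name and field names are assumptions):

```python
import json
import logging

lifecycle_logger = logging.getLogger('awx.analytics.job_lifecycle')  # name assumed

def record_lifecycle_event(job_id, state, blocked_by=None):
    event = {'job_id': job_id, 'state': state}
    if blocked_by is not None:
        # With DependencyGraph returning the blocking job, we can log it too.
        event['blocked_by'] = blocked_by
    lifecycle_logger.info(json.dumps(event))
```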
* The cron-run logrotate will now rotate our log files instead of python
* If no error log file is specified in the config, then do not include
it as a parameter to the rsyslog omhttp module. This is useful for
containers.
related: https://github.com/ansible/awx/issues/6010
as noted in the comment removed from this diff, it's probably time
to stop calling this function on every dispatcher service restart
team roles title crumb missing
various inventory crumbs missing
make it so inventories and templates don't get rid of data needed to generate the crumb config
Fix Inventory/Project rbac broken on JT form.
Also, update ProjectLookup to filter using `role_level: 'use_role'` as
per old UI implementation.
Also, update InventoryLookup to filter using `role_level: 'use_role'` as
per old UI implementation.
See: https://github.com/ansible/awx/issues/8194
Credential access tab should be shown when cred doesn't belong to an organization.
Also, update unit-tests to reflect change.
See: https://github.com/ansible/awx/issues/7708
a new version of pip-tools changed the format of dependency annotations
in generated requirements.txt files
we should probably change to the new format at some point, but maybe
*after* we merge a few of our long-running branches that touch these
files (otherwise, managing conflicts could be pretty hellish)
This does a few things:
- Removes need for awx_sdist_builder image
- Reorders Dockerfile steps to optimize image cache between prod and dev builds
- Unifies VENV_BASE and COLLECTION_BASE in prod and dev builds
* Fix repeated api calls from useEffect hook by wrapping the breadcrumb
setter with useCallback
* Rework the top-level routes to remove some old patterns and bring it more
into alignment with how it's done on the projects screen
Intercept all http(s) responses and store expiration time from headers
in local storage. Drive expiration timers in app container across all
tabs with browser storage events and accompanying react hooks
integration. Show a warning with logout countdown and continue button
when session is nearly expired.
* The namespace for isolated logging was not enabled. Add a handler and
logger so that it's enabled. This is particularly useful when the
logging level is switched to DEBUG
* Knowing how long check_isolated.yml ran can be helpful in debugging the
isolated execution path, especially if you suspect the connection speed
or reliability of the control node -> execution node
* It's hard/impossible to know what job a check_isolated.yml playbook
runs for by just looking at the logs.
* Forward the job id for which an iso management playbook is running
and output that job id so it can be found in the logs.
* We batch logging isolated management playbook output. This results in
the timestamp of the log being useless when trying to determine when
each task in the playbook ran.
* To fix this, we enable timestamp logging at the playbook level via
ansible `profile_tasks` callback plugin.
This restores some of the original files and routes from the migration
view of the classic ui with the eventual goal of fully reintegrating this
system with the new ui.
See: b39db745d4
* The exported field shows total quantity exported to a manifest for a given sub. We want to sum the quantities of each sub allocation in a manifest instead.
We _always_ want INLINE_RUNTIME_CHUNK to be false when building the ui,
even if someone happens to unexpectedly make a production build without
using the top-level make targets for some reason.
Hide instance group for Inventory Details if the data is not available.
This is the same approach used in other details screens.
See: https://github.com/ansible/awx/issues/8620
This fixes an issue where the timestamps in the stdout for
inventory updates gave the time since the start of the dispatcher
instead of the time since the start of the update.
This commit also moves the handler into the utils module where
other custom AWX handlers live, instead of tasks.py
this is to keep tasks.py relatively clean, as best as possible
* Change handling of error cases to global post_run_hook
* handle license errors correctly again
* Fix some issues with line ordering from the custom logger thing
* Remove debug log statement
* Use PermissionDenied for license errors
* More elegant handling of line initialization
Update tests to new exception type
Catch all save errors, fix timing offset bug
Fix license error handling inside import command
* proot now enabled at task-level
since tasks are no longer calling
awx-manage (which would set up its own proot)
* dropping proot env var since it's not
relevant to the test
* noting that the inv update task only uses the
inventory update management command to
save the inv to the database
(it doesn't do the work of fetching hosts / groups)
- in the past, inv. update jobs called `awx-manage inventory_update`
which took care of setting up process isolation
- at this point, though, inv. update jobs call runner / ansible-inventory
directly, so we need another way to put process isolation in place
- thankfully, there was already support for providing process isolation
for other types of jobs (namely JT Jobs, Project Updates and Ad Hoc
commands)
- so, we do what those other jobs do and override the stub for should_use_proot
(which by default returns false) so that it keys off of the
`AWX_PROOT_ENABLED` setting
* perform_update can be called from either awx-manage
or the RunInventoryUpdate task
* need to make sure that the inventory updates
that happen with perform_update are atomic (see the sketch below)
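A hedged sketch of owning the transaction inside perform_update itself, so both entry points get the same guarantee (the update call is illustrative):

```python
from django.db import transaction

def perform_update(inventory_source, **options):
    # Whether invoked from awx-manage or from the RunInventoryUpdate task,
    # all writes for the update happen inside one transaction.
    with transaction.atomic():
        return inventory_source.update(**options)  # illustrative update call
```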
This commit makes the needed changes to inventory update
post_save_hook logic so that the historic log lines that
inventory updates write will be written to stdout,
but this hack bypasses the ansible-runner verbose event
logic and dispatches verbose events directly.
Fix the venv application with the ansible-inventory system
(note: much of this is undone in a later commit)
Deal with some minor test updates for
the ansible-inventory interface changes
Add updates related to smart inventories.
* Add popover for `Smart host filter`.
* Add popover for `Instance Groups` on Smart Inventory screen.
* Rename `Host filter` to `Smart host filter` per mockup.
* Add inventory as part of dynamic host filter.
See: https://github.com/ansible/awx/issues/8581
Also: https://github.com/ansible/awx/issues/8548
we've seen evidence of a race condition on fork for awx.conf.Setting
access; in the past, we've attempted to solve this by explicitly closing
connections pre-fork, but we've seen evidence that this isn't always
good enough
this patch is an attempt to close connections post-fork so that sockets
aren't inherited post fork, leading to bizarre race conditions in
setting access
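A minimal sketch of the post-fork approach (worker_loop is a hypothetical entry point):

```python
import os
from django.db import connections

def fork_worker(worker_loop):
    pid = os.fork()
    if pid == 0:
        # Child process: drop every DB connection inherited from the parent
        # so the child opens fresh sockets instead of sharing them.
        connections.close_all()
        worker_loop()
        os._exit(0)
    return pid
```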
Add feature to associate teams to users. For the time being, when
associating Users to a team, the User will be associated with `member_role` only. And when disassociating the User from a team, all related roles - member, read, and admin - will be removed.
Also, fix a bug related to search not being cleared after closing/canceling
the `AssociateModal`.
Hide max hosts field on org form.
Also, simplify the usage of the context API to read the value of the me
parameter.
See: https://github.com/ansible/awx/issues/4950
Remove related resources groups/hosts when deleting inventory sources.
The current UI deletes `groups` and `hosts` once the inventory source is
deleted. Add this behavior to the new UI.
See: https://github.com/ansible/awx/issues/8098
- Write a deepmerge() implementation (sketched below), keeping only the test suite of
https://stackoverflow.com/a/20666342/435004
- Use it to deep-merge pod['metadata'] with user input,
instead of replacing fields in it
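A minimal sketch of such a deepmerge(), plus how it behaves on pod metadata:

```python
def deep_merge(base, overrides):
    """Recursively merge `overrides` into `base`, returning a new dict.
    Nested dicts merge key-by-key; any other value simply replaces."""
    merged = dict(base)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Deep-merging user input into pod['metadata'] preserves default keys that a
# plain dict.update() would clobber:
defaults = {'labels': {'app': 'awx-job'}, 'namespace': 'awx'}
user_input = {'labels': {'team': 'ops'}}
print(deep_merge(defaults, user_input))
# {'labels': {'app': 'awx-job', 'team': 'ops'}, 'namespace': 'awx'}
```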
Fix username as a required field. `UserForm` is used for adding and
editing a user. When adding a user, the initial user value is `{}`;
update the logic to cover this case.
Also, add unit-tests to cover this particular case.
See: https://github.com/ansible/awx/issues/8453
cast create_preload_data to boolean with `create_preload_data | bool` in launch_awx_task.sh.j2
Reviewed-by: Ryan Petrello <https://github.com/ryanpetrello>
* Task manager fit_ optimization code caused problems with container
group code.
* Note that we don't actually get the benefit of the optimization for
container groups. We just make it so that the code doesn't blow up. It
will take another pass to apply optimizations to the container group
task manager path.
- output a profiling disabled message when appropriate
- specify that we are doing SQL profiling in the enabled case
- treat negative thresholds the same as zero, disabling profiling
This was problematic because it was overwriting the original values that had been defined in the other serializers. Additionally, there are no other dunders for other capabilities prefetch.
This was likely added because UnifiedJobTemplateSerializer does not have its own capabilities, but rather derives them from JTSerializer and WFJTSerializer; it worked better without the dunder once I removed the data that was overwriting the data from the WFJT and JT serializers.
* Tried to fill in application_name in awx/__init__.py but I think that
is too late
* Fill in database application_name with enough information to easily
trace the connection from postgres back to the node and pid that
initiated the connection.
* Set application_name in django settings so that application_name is
set _before_ the first postgres connection is established.
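A hedged sketch of what setting application_name up front can look like in Django settings (database name and naming scheme are illustrative):

```python
import os
import socket

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'awx',
        'OPTIONS': {
            # Surfaces in pg_stat_activity.application_name, so a connection
            # can be traced back to the node and pid that opened it.
            'application_name': f'awx-{socket.gethostname()}-{os.getpid()}',
        },
    },
}
```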
Annotations were previously only supported for ingress and service accounts.
This PR now allows you to specify annotations for Kubernetes Deployment
resources by defining the `kubernetes_deployment_annotations` var list
* Do not query the database for the set of Instance that belong to the
group for which we are trying to fit a job on, for each job.
* Instead, cache the set of instances per-instance group (see the sketch after this list).
* We update the parent unified job template to point at new jobs
created. We also update a similar foreign key when the job finishes
running. This causes lock contention when the job template is
allow_simultaneous and there are a lot of jobs from that job template
running in parallel. I've seen as bad as 5 minutes waiting for the lock
when a job finishes.
* This change moves the parent->child update to OUTSIDE of the
transaction if the job is allow_simultaneous (inherited from the parent
unified job). We sacrifice a bit of correctness for performance. The
logic is, if you are launching 1,000 parallel jobs do you really care
that the job template contains a pointer to the last one you launched?
Probably not. If you do, you can always query jobs related to the job
template sorted by created time.
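A minimal sketch of the per-run cache (names illustrative):

```python
class InstanceGroupCache:
    """Resolve each instance group's instances once per task-manager run,
    instead of re-querying the database for every pending job."""

    def __init__(self):
        self._instances = {}

    def instances_for(self, group):
        if group.id not in self._instances:
            self._instances[group.id] = list(group.instances.all())
        return self._instances[group.id]
```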
* fixes #8347
* Rename inventory_source to name in the tower_inventory_source_update
* Allow specifying either name or id for the `name` and `inventory` params
* Add type of login used as part of UserListItem.
* Add type of login used as part of UserDetail.
* Hide password field, UserForm, in case login method is LDAP or Social.
* Make username field, UserForm, not required in case login is LDAP or
Social.
See: https://github.com/ansible/awx/issues/5685
Add missing words for translation.
`...more`, and `Show Less` were already marked for translation in a
previous PR, since this code is shared as part of the `ChipGroup` code.
See: https://github.com/ansible/awx/issues/6857
This changeset introduces two changes:
1. Update the API representation of Workflow Job Templates to use the
natural key of the Inventory type instead of its id;
2. Override the related property of the CLI's WorkflowJobTemplate page
type to patch the related references during the export process,
allowing the resource to be serialised using the natural key of the
Inventory type instead of the id.
Change n.2 is a workaround that is used when exporting resources from
AWX/Tower instances that don't have change n.1. It can be removed in the
future.
In order to create a container group, it is necessary to provide a
credential.
See: https://github.com/ansible/awx/issues/8184
This change makes the code that displays the credential as part of
the container group details a bit more robust, avoiding an attempt
to show a non-existent credential.
Closes: https://github.com/ansible/awx/issues/8199
the bigint migration removed the foreign key constraints for:
- host_id
- job_id (and projectupdate_id, etc...)
because of this, we don't really need to check explicitly for a host_id
IntegrityError anymore (because it won't occur)
additionally, while it's possible to insert an event with a mismatched
job_id now (for example, you can totally start a long-running job, and
delete the job record in the background using the ORM or psql), doing
so results in DoesNotExist errors in the code that handles the
playbook_on_stats events
instead, just have each worker connect directly to redis (sketched below)
this has a few benefits:
- it's simpler to explain and debug
- back pressure on the queue keeps messages around in redis (which is
observable, and survives the restart of Python processes)
- it's likely notably more performant at high loads
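A hedged sketch of a worker consuming directly from redis (the queue name and handler are assumptions, not the actual callback receiver code):

```python
import json
import redis

def process_event(event):
    # Stand-in handler; the real callback receiver persists job events.
    print(event)

def worker_loop(queue='callback_events'):  # queue name is an assumption
    conn = redis.Redis()  # each worker process owns its own connection
    while True:
        _, raw = conn.brpop(queue)  # blocks until a message arrives
        process_event(json.loads(raw))
```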
make the --status flag work by fetching a periodically recorded snapshot
of internal process state; additionally, update the callback receiver to
*also* record these statistics so we can gain more insight into any
performance issues
The analytics change PR adjusted the logging for awx.analytics,
which solved the issue, but should have used the targeted awx.main.analytics.
Also flip a couple of loggers to use the regular awx.analytics (awx analytics)
logger instead of awx.main.analytics (the automation analytics task system).
I'm not sure that this function is actually in use anywhere anymore, but
it shouldn't be a top-level import because it represents an optional
dependency.
Add quotes around the volume value for postgres data. I installed via docker without changing any values, and the UI was stuck in upgrading for a long time. I browsed around and figured out that the issue was due to the postgres volume, as a query was returning an error. I inspected the template and found that there were no quotes around the volume, unlike the volumes for the others.
I added the quotes and docker compose worked.
Errors/warnings when gathering analytics are about 50/50 split between
the gathering code in analytics and the task code that calls it, so
they should be in the same place for debugging sanity.
Collect expensive collectors separately, and in a loop
where we make smaller intermediate dumps.
Don't return a table dump if there are no records, and
don't put that CSV in the manifest.
Fix up unit tests.
doing this in the migration *before* any Organizations actually exist
is stirring up RBAC dragons that I don't have time to fight
this commit means that *new* installs will pre-create the default
Galaxy (public) credential in create_preload_data, while
*upgraded/migrated* installs will do so via the migration
* Do not write out inventory source_vars to a file on disk as they _may_
contain sensitive information. This also removes support for backwards
migrations. This is fine, backwards migration is really only useful
during development.
* Before, we were re-writing `plugin:` when users updated the
InventorySource via the API. Now, we just override at run-time. This
makes for a more sane API interaction
Options which are not in the API POST and are marked in the module as deprecated are ignored
If an option is not in the API POST but is marked as a list, we assume it's a relation (see the sketch below)
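A minimal sketch of those two rules (names illustrative):

```python
def build_post_payload(module_params, post_options, deprecated_names):
    """Split module params into a POST body and assumed relations."""
    payload, relations = {}, {}
    for name, value in module_params.items():
        if value is None:
            continue
        if name in post_options:
            payload[name] = value        # directly POSTable
        elif name in deprecated_names:
            continue                     # deprecated and not POSTable: ignore
        elif isinstance(value, list):
            relations[name] = value      # list-typed: assume it's a relation
    return payload, relations
```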
The auth_path is used with the approle auth method
It's not linked to the secret we are reading but to the auth method,
so this parameter has to be moved to inputs
Changed library structure
Original TowerModule becomes TowerLegacyModule
TowerModule from tower_api becomes TowerAPIModule
A real base TowerModule is created in tower_module.py
A new TowerAWXKitModule is created in tower_awxkit
TowerAWXKitModule and TowerAPIModule are child classes of TowerModule
This changeset allows the import of YAML formatted resources. The CLI
user can indicate which format to use with the `-f, --format` option.
The CLI help text has been amended to reflect the new feature.
The AWX CLI `export` subcommand offers the option of formatting the output
as YAML or JSON, so it makes sense that the `import` subcommand reflects
this.
A simple test is also provided. In order to ease the task of testing
commands that import resources by reading the stdin, the CLI has been
extended to allow specifying an alternative file descriptor for stdin,
similarly to stdout and stderr.
This change adds related Labels to the Workflow Job Template document that is
exported by the AWX CLI.
Previously, exporting and then importing Workflow Job Templates would
not retain their related Labels.
This change fixes an erroneous early return in a private method that was
preventing more than one type of related object from being correctly
assigned to the parent object, and therefore imported.
Also, a minor spelling mistake was corrected.
Remove showExpandCollapse prop from the DataListToolbar calls. This is
not an expected prop to be passed to this component.
Inside DataListToolbar.
```
const showExpandCollapse = onCompact && onExpand;
```
In order to use this feature, `onCompact` and `onExpand` props should
be passed.
...
* upgrade `chromedriver` for ARM support
* upgrade `pynacl` to fix `libsodium` build issue on ARM
* remove unnecessary i686-specific `libstdc++.so.6` package
* install `kubectl` and `tini` from upstream binaries for ARM support
* use upstream `postgres` and `alpine` docker images for `postgresql` helm chart
Fixes #7051
Populate the cache the first time the job is run for a revision
that needs them, and for future runs for that revision just
copy it into the private directory.
Delete the cache on project deletion.
Invalidate the cache on a new project revision
Also download roles/collections during the sync job
Since we're writing into a per-revision cache, we can do this easily now.
Don't try and install content if there aren't any requirements expecting it
Adjust pathing to the proper location.
Force install if doing a manual sync.
Requirements may be unversioned.
Remove the cache when delete-on-update is set
Integrate content caching with existing task logic
Revert the --force flags
use the update id as metric for role caching
Shift the movement of cache to job folder from rsync task to python
Only install roles and collections if needed
Deal with roles and collections for jobs without sync
Skip local copy if roles or collections turned off
update docs for content caching
Design pivot - use empty cache dir to indicate lack of content
Do not cache content if we did not install content
Test changes to allay concerns about reliability of local_path
Do not blow away cache for SCM inventory updates
Remove project update vars no longer used
Remove job pre-creation of content folders
code style edit, always use cache_id as property in tasks
Fix log message
Situations have come up where the 5+ minute kill signal for
run_task_manager is emitted to the worker process running it, but
since the worker improperly inherited the AWXConsumerBase().stop()
handler, a deadlock was ultimately triggered on the database
connection.
The docker_registry_password var isn't interpolated by the shell, so
it shouldn't be quoted
Fixes: #7695
Signed-off-by: Philip DOUGLASS <philip.douglass@amadeus.com>
* Use more selective route matching when determining if a nav item is
active
* Don't automatically collapse nav groups when user navigates to a
different group
this resolves an issue that causes an endless hang with CyberArk AIM
lookups when a certificate *and* key are specified
the underlying issue here is that we can't rely on the underlying Python
ssl implementation to read from the fifo that stores the pem data
*only once*; in reality, we need to just use *actual* tempfiles for
stability purposes
see: https://github.com/ansible/awx/issues/6986
see: https://github.com/urllib3/urllib3/issues/1880
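A minimal sketch of swapping the fifo for a real temp file (the function name is illustrative):

```python
import os
import tempfile

def create_temporary_pem(pem_data):
    # A regular named file can be opened and read any number of times by the
    # TLS stack, unlike a fifo whose contents are gone after the first read.
    fd, path = tempfile.mkstemp(suffix='.pem')
    with os.fdopen(fd, 'w') as handle:
        handle.write(pem_data)
    return path
```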
When we used ints and passed this data into another call like:
- name: Create a job template with a looked up credential from a folded lookup
tower_job_template:
name: "{{ job_template_name }}"
credentials: >-
{{ lookup(
'awx.awx.tower_api',
'credentials',
query_params={ 'name' : credential_name },
return_ids=True,
expect_one=True,
wantlist=True
) }}
project: "{{ project_name }}"
inventory: Demo Inventory
playbook: hello_world.yml
job_type: run
state: present
register: create_jt
Ansible would raise this warning:
[WARNING]: The value 30 (type int) in a string field was converted to '30' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.
Returning a list of strings prevents that.
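A one-liner sketch of the fix (names illustrative):

```python
def return_id_list(records):
    # Cast ids to str so a lookup folded into a string field does not trip
    # Ansible's int-to-string conversion warning shown above.
    return [str(record['id']) for record in records]
```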
Custom credentials can have input fields named 'name', 'organization',
'description', etc. Underscore these variables to make collisions
less likely to occur.
this tool looks at the most recent jobs for a specific job template and
attempts to discover the _slowest_ tasks and hosts
$ awx-manage bottleneck --template N
$ awx-manage bottleneck --template N --threshold 1 --ignore yum
$ awx-manage bottleneck --template N --ignore pause --ignore yum
this change fixes a bug introduced in the optimization at https://github.com/ansible/awx/pull/7352
1. Create inventory with multiple hosts
2. Run a playbook with a limit to match only one host
3. Run job, verify that it only acts on the one host
4. Go to inventory host list and see that all the hosts have last_job updated to point to the job that only acted on one host.
There is some history here.
https://github.com/ansible/awx/pull/7190 <- This PR was an attempt at fixing a
bug notting ran into where some jobs on k8s installs would get stuck in Waiting
forever.
The PR mentioned above introduced a bug where there are no instance groups on a
fresh k8s-based install. This is because this process currently happens in the
launch scripts, before the database is up.
With this patch, queue / instance group registration happens in the heartbeat,
right after auto-registering the instance.
When targeting the ../workflow_job_templates/id#/workflow_nodes/ endpoint,
a user could not set all_parents_must_converge to true.
3.7.1 backport for awx issue #7063
* broadcast websockets have stats tracked (i.e. connection status,
number of messages total, messages per minute, etc). Previous to this
change, stats were tracked by ip address, if it was defined on the
instance, XOR hostname. This changeset tracks stats by hostname.
* The websocket backplane interconnect is done via ip address for
Kubernetes and OpenShift. On init run_wsbroadcast reads all Instances
from the DB and decides whether to use the ip address or the hostname,
with preference given to the ip address if defined. For
Kubernetes and OpenShift the nodes can load the Instance before the
ip_address is set. This would cause the connection to be tried by
hostname rather than ip address. This changeset ensures that an ip
address set after an Instance record is created will be detected and
used.
This was causing our EL8 Brew builds to break, because it wasn't being
vendored. This is in fact required for python3. It was being resolved as a
dependency of other things (see list at end of line), so it was being downloaded
on-the-fly since our normal builds have internet access. It only broke when it
wasn't vendored for offline builds.
This is a list of high-level changes for each release of AWX. A full list of commits can be found at `https://github.com/ansible/awx/releases/tag/<version>`.
## 18.0.0 (March 23, 2021)
**IMPORTANT INSTALL AND UPGRADE NOTES**
Starting in version 18.0, the [AWX Operator](https://github.com/ansible/awx-operator) is the preferred way to install AWX: https://github.com/ansible/awx/blob/devel/INSTALL.md#installing-awx
If you have a pre-existing installation of AWX that utilizes the Docker-based installation method, this install method has **notably changed** from 17.x to 18.x. For details, please see:
After a herculean effort from a number of contributors, we're excited to announce that AWX 18.0.0 introduces a new concept called Execution Environments.
Execution Environments are container images which consist of everything necessary to run a playbook within AWX, and which drive the entire management and lifecycle of playbook execution runtime in AWX: https://github.com/ansible/awx/issues/5157. This means that going forward, AWX no longer utilizes the [bubblewrap](https://github.com/containers/bubblewrap) project for playbook isolation, but instead utilizes a container per playbook run.
Much like custom virtualenvs, custom Execution Environments can be crafted to specify additional Python or system-level dependencies. Ansible Builder outputs images you can upload to your registry which can *then* be defined in AWX and utilized for playbook runs.
To learn more about Ansible Builder and Execution Environments, see: https://www.ansible.com/blog/introduction-to-ansible-builder
### Other Notable Changes
- Removed `installer` directory.
- The Kubernetes installer has been removed in favor of [AWX Operator](https://github.com/ansible/awx-operator).
- The "Local Docker" install method has been removed in favor of the development environment. Details can be found at: https://github.com/ansible/awx/blob/devel/tools/docker-compose/README.md
- Removal of custom virtual environments https://github.com/ansible/awx/pull/9498
- Custom virtual environments have been replaced by Execution Environments https://github.com/ansible/awx/pull/9570
- The default Container Group Pod definition has changed. All custom Pod specs have been reset. https://github.com/ansible/awx/commit/05ef51f710dad8f8036bc5acee4097db4adc0d71
- Added user interface for the activity stream: https://github.com/ansible/awx/pull/9083
- Converted many of the top-level list views (Jobs, Teams, Hosts, Inventories, Projects, and more) to a new, permanent table component for substantially increased responsiveness, usability, maintainability, and other 'ility's: https://github.com/ansible/awx/pull/8970, https://github.com/ansible/awx/pull/9182 and many others!
- Added click-to-expand details for job tables
- Added search filtering to job output https://github.com/ansible/awx/pull/9208
- Added the new migration, update, and "installation in progress" page https://github.com/ansible/awx/pull/9123
- Added the user interface for job settings https://github.com/ansible/awx/pull/8661
- Runtime errors from jobs are now displayed, along with an explanation for what went wrong, on the output page https://github.com/ansible/awx/pull/8726
- You can now cancel a running job from its output and details panel https://github.com/ansible/awx/pull/9199
- Fixed a bug where launch prompt inputs were unexpectedly deposited in the url: https://github.com/ansible/awx/pull/9231
- Playbook, credential type, and inventory file inputs now support type-ahead and manual type-in! https://github.com/ansible/awx/pull/9120
- Added ability to relaunch against failed hosts: https://github.com/ansible/awx/pull/9225
- Added pending workflow approval count to the application header https://github.com/ansible/awx/pull/9334
- Added user interface for management jobs: https://github.com/ansible/awx/pull/9224
- Added toast message to show notification template test result to notification templates list https://github.com/ansible/awx/pull/9318
- Replaced CodeMirror with AceEditor for editing template variables and notification templates https://github.com/ansible/awx/pull/9281
- Added support for filtering and pagination on job output https://github.com/ansible/awx/pull/9208
- Added support for html in custom login text https://github.com/ansible/awx/pull/9519
## 17.1.0 (March 9, 2021)
- Addressed a security issue in AWX (CVE-2021-20253)
- Fixed a permissions error related to redis in K8S-based deployments: https://github.com/ansible/awx/issues/9401
## 17.0.1 (January 26, 2021)
- Fixed pgdocker directory permissions issue with Local Docker installer: https://github.com/ansible/awx/pull/9152
- Fixed a bug in the UI which caused toggle settings to not be changed when clicked: https://github.com/ansible/awx/pull/9093
## 17.0.0 (January 22, 2021)
- AWX now requires PostgreSQL 12 by default: https://github.com/ansible/awx/pull/8943
**Note:** users who encounter permissions errors at upgrade time should `chown -R ~/.awx/pgdocker` to ensure it's owned by the user running the install playbook
- Added support for region name for OpenStack inventory: https://github.com/ansible/awx/issues/5080
- Added the ability to chain undefined attributes in custom notification templates: https://github.com/ansible/awx/issues/8677
- Dramatically simplified the `image_build` role: https://github.com/ansible/awx/pull/8980
- Fixed a bug which can cause schema migrations to fail at install time: https://github.com/ansible/awx/issues/9077
- Fixed a bug which caused the `is_superuser` user property to be out of date in certain circumstances: https://github.com/ansible/awx/pull/8833
- Fixed a bug which sometimes results in race conditions on setting access: https://github.com/ansible/awx/pull/8580
- Fixed a bug which sometimes causes an unexpected delay in stdout for some playbooks: https://github.com/ansible/awx/issues/9085
- (UI) Added support for credential password prompting on job launch: https://github.com/ansible/awx/pull/9028
- (UI) Added the ability to configure LDAP settings in the UI: https://github.com/ansible/awx/issues/8291
- (UI) Added a sync button to the Project detail view: https://github.com/ansible/awx/issues/8847
- (UI) Added a form for configuring Google OAuth 2.0 settings: https://github.com/ansible/awx/pull/8762
- (UI) Added searchable keys and related keys to the Credentials list: https://github.com/ansible/awx/issues/8603
- (UI) Added support for advanced search and copying to Notification Templates: https://github.com/ansible/awx/issues/7879
- (UI) Added support for prompting on workflow nodes: https://github.com/ansible/awx/issues/5913
- (UI) Added support for session timeouts: https://github.com/ansible/awx/pull/8250
- (UI) Fixed a bug that broke websocket streaming for the insecure ws:// protocol: https://github.com/ansible/awx/pull/8877
- (UI) Fixed a bug in the user interface when a translation for the browser's preferred locale isn't available: https://github.com/ansible/awx/issues/8884
- (UI) Fixed bug where navigating from one survey question form directly to another wasn't reloading the form: https://github.com/ansible/awx/issues/7522
- (UI) Fixed a bug which can cause an uncaught error while launching a Job Template: https://github.com/ansible/awx/issues/8936
- Updated autobahn to address CVE-2020-35678
## 16.0.0 (December 10, 2020)
- AWX now ships with a reimagined user interface. **Please read this before upgrading:** https://groups.google.com/g/awx-project/c/KuT5Ao92HWo
- Removed support for syncing inventory from Red Hat CloudForms - https://github.com/ansible/awx/commit/0b701b3b2
- Removed support for Mercurial-based project updates - https://github.com/ansible/awx/issues/7932
- Upgraded NodeJS to actively maintained LTS 14.15.1 - https://github.com/ansible/awx/pull/8766
- Added Git-LFS to the default image build - https://github.com/ansible/awx/pull/8700
- Added the ability to specify `metadata.labels` in the podspec for container groups - https://github.com/ansible/awx/issues/8486
- Added support for Kubernetes pod annotations - https://github.com/ansible/awx/pull/8434
- Added the ability to label the web container in local Docker installs - https://github.com/ansible/awx/pull/8449
- Added additional metadata (as an extra var) to playbook runs to report the SCM branch name - https://github.com/ansible/awx/pull/8433
- Fixed a bug that caused k8s installations to fail due to an incorrect Helm repo - https://github.com/ansible/awx/issues/8715
- Fixed a bug that prevented certain Workflow Approval resources from being deleted - https://github.com/ansible/awx/pull/8612
- Fixed a bug that prevented the deletion of inventories stuck in "pending deletion" state - https://github.com/ansible/awx/issues/8525
- Fixed a display bug in webhook notifications with certain unicode characters - https://github.com/ansible/awx/issues/7400
- Improved support for exporting dependent objects (Inventory Hosts and Groups) in the `awx export` CLI tool - https://github.com/ansible/awx/commit/607bc0788
## 15.0.1 (October 20, 2020)
- Added several optimizations to improve performance for a variety of high-load simultaneous job launch use cases https://github.com/ansible/awx/pull/8403
- Added the ability to source roles and collections from requirements.yaml files (not just requirements.yml) - https://github.com/ansible/awx/issues/4540
- awx.awx collection modules now provide a clearer error message for incompatible versions of awxkit - https://github.com/ansible/awx/issues/8127
- Fixed a bug in notification messages that contain certain unicode characters - https://github.com/ansible/awx/issues/7400
- Fixed a bug that prevents the deletion of Workflow Approval records - https://github.com/ansible/awx/issues/8305
- Fixed a bug that broke the selection of webhook credentials - https://github.com/ansible/awx/issues/7892
- Fixed a bug which can cause confusing behavior for social auth logins across distinct browser tabs - https://github.com/ansible/awx/issues/8154
- Fixed several bugs in the output of Workflow Job Templates using the `awx export` tool - https://github.com/ansible/awx/issues/7798 https://github.com/ansible/awx/pull/7847
- Fixed a race condition that can lead to missing hosts when running parallel inventory syncs - https://github.com/ansible/awx/issues/5571
- Fixed an HTTP 500 error when certain LDAP group parameters aren't properly set - https://github.com/ansible/awx/issues/7622
- Updated a few dependencies in response to several CVEs:
* CVE-2020-7720
* CVE-2020-7743
* CVE-2020-7676
## 15.0.0 (September 30, 2020)
- Added improved support for fetching Ansible collections from private Galaxy content sources (such as https://github.com/ansible/galaxy_ng) - https://github.com/ansible/awx/issues/7813
**Note:** as part of this change, new Organizations created in the AWX API will _no longer_ automatically synchronize roles and collections from galaxy.ansible.com by default. More details on this change can be found at: https://github.com/ansible/awx/issues/8341#issuecomment-707310633
- AWX now utilizes a version of certifi that auto-discovers certificates in the system certificate store - https://github.com/ansible/awx/pull/8242
- Added support for arbitrary custom inventory plugin configuration: https://github.com/ansible/awx/issues/5150
- Added an optional setting to disable the auto-creation of organizations and teams on successful SAML login. - https://github.com/ansible/awx/pull/8069
- Added a number of optimizations to AWX's callback receiver to improve the speed of stdout processing for simultaneous playbooks runs - https://github.com/ansible/awx/pull/8193 https://github.com/ansible/awx/pull/8191
- Added the ability to use `!include` and `!import` constructors when constructing YAML for use with the AWX CLI - https://github.com/ansible/awx/issues/8135
- Fixed a bug that prevented certain users from being able to edit approval nodes in Workflows - https://github.com/ansible/awx/pull/8253
- Fixed a bug that broke password prompting for credentials in certain cases - https://github.com/ansible/awx/issues/8202
- Fixed a bug which can cause PostgreSQL deadlocks when running many parallel playbooks against large shared inventories - https://github.com/ansible/awx/issues/8145
- Fixed a bug which can cause delays in AWX's task manager when large numbers of simultaneous jobs are scheduled - https://github.com/ansible/awx/issues/7655
- Fixed a bug which can cause certain scheduled jobs - those that run every X minute(s) or hour(s) - to fail to run at the proper time - https://github.com/ansible/awx/issues/8071
- Fixed a performance issue for playbooks that store large amounts of data using the `set_stats` module - https://github.com/ansible/awx/issues/8006
- Fixed a bug related to AWX's handling of the auth_path argument for the HashiVault KeyValue credential plugin - https://github.com/ansible/awx/pull/7991
- Fixed a bug that broke support for Remote Archive SCM Type project syncs on platforms that utilize Python2 - https://github.com/ansible/awx/pull/8057
- Updated to the latest version of Django Rest Framework to address CVE-2020-25626
- Updated to the latest version of Django to address CVE-2020-24583 and CVE-2020-24584
- Updated to the latest version of channels_redis to address a bug that slowly causes Daphne processes to leak memory over time - https://github.com/django/channels_redis/issues/212
## 14.1.0 (Aug 25, 2020)
- AWX images can now be built on ARM64 - https://github.com/ansible/awx/pull/7607
- Added the Remote Archive SCM Type to support using immutable artifacts and releases (such as tarballs and zip files) as projects - https://github.com/ansible/awx/issues/7954
- Deprecated official support for Mercurial-based project updates - https://github.com/ansible/awx/issues/7932
- Added resource import/export support to the official AWX collection - https://github.com/ansible/awx/issues/7329
- Added the ability to import YAML-based resources (instead of just JSON) when using the AWX CLI - https://github.com/ansible/awx/pull/7808
- Users upgrading from older versions of AWX may encounter an issue that causes their postgres container to restart in a loop (https://github.com/ansible/awx/issues/7854) - if you encounter this, bring your containers down and then back up (e.g., `docker-compose down && docker-compose up -d`) after upgrading to 14.1.0.
- Updated the AWX CLI to export labels associated with Workflow Job Templates - https://github.com/ansible/awx/pull/7847
- Updated to the latest python-ldap to address a bug - https://github.com/ansible/awx/issues/7868
- Upgraded git-python to fix a bug that caused workflows to sometimes fail - https://github.com/ansible/awx/issues/6119
- Worked around a bug in the channels_redis library that slowly causes Daphne processes to leak memory over time - https://github.com/django/channels_redis/issues/212
- Fixed a bug in the AWX CLI that prevented Workflow nodes from importing properly - https://github.com/ansible/awx/issues/7793
- Fixed a bug in the awx.awx collection release process that templated the wrong version - https://github.com/ansible/awx/issues/7870
- Fixed a bug that caused errors rendering stdout that contained UTF-16 surrogate pairs - https://github.com/ansible/awx/pull/7918
## 14.0.0 (Aug 6, 2020)
- As part of our commitment to inclusivity in open source, we recently took some time to audit AWX's source code and user interface and replace certain terminology with more inclusive language. Strictly speaking, this isn't a bug or a feature, but we think it's important and worth calling attention to.
- Installing roles and collections via requirements.yml as part of Project Updates now requires at least Ansible 2.9 - https://github.com/ansible/awx/issues/7769
- Deprecated the use of the `PRIMARY_GALAXY_USERNAME` and `PRIMARY_GALAXY_PASSWORD` settings. We recommend using tokens to access Galaxy or Automation Hub.
- Added local caching for downloaded roles and collections so they are not re-downloaded on nodes where they are up to date with the project - https://github.com/ansible/awx/issues/5518
- Added the ability to associate K8S/OpenShift credentials to Job Templates for playbook interaction with the `community.kubernetes` collection - https://github.com/ansible/awx/issues/5735
- Added the ability to include HTML in the Custom Login Info presented on the login page - https://github.com/ansible/awx/issues/7600
- Fixed https://access.redhat.com/security/cve/cve-2020-14327 - Server-side request forgery on credentials
- Fixed https://access.redhat.com/security/cve/cve-2020-14328 - Server-side request forgery on webhooks
- Fixed https://access.redhat.com/security/cve/cve-2020-14329 - Sensitive data exposure on labels
- Fixed https://access.redhat.com/security/cve/cve-2020-14337 - Named URLs allow for testing the presence or absence of objects
- Fixed a number of bugs in the user interface related to an upgrade of jQuery:
* https://github.com/ansible/awx/issues/7530
* https://github.com/ansible/awx/issues/7546
* https://github.com/ansible/awx/issues/7534
* https://github.com/ansible/awx/issues/7606
- Fixed a bug that caused the `-f yaml` flag of the AWX CLI to not print properly formatted YAML - https://github.com/ansible/awx/issues/7795
- Fixed a bug in the installer that caused errors when `docker_registry_password` was set - https://github.com/ansible/awx/issues/7695
- Fixed a permissions error that prevented certain users from starting AWX services - https://github.com/ansible/awx/issues/7545
- Fixed a bug that allowed superusers to run unsafe Jinja code when defining custom Credential Types - https://github.com/ansible/awx/pull/7584/
- Fixed a bug that prevented users from creating (or editing) custom Credential Types containing boolean fields - https://github.com/ansible/awx/issues/7483
- Fixed a bug that prevented users with postgres usernames containing uppercase letters from restoring backups successfully - https://github.com/ansible/awx/pull/7519
- Fixed a bug which allowed the creation (in the Tower API) of Groups and Hosts with the same name - https://github.com/ansible/awx/issues/4680
## 13.0.0 (Jun 23, 2020)
- Added import and export commands to the official AWX CLI, replacing send and receive from the old tower-cli (https://github.com/ansible/awx/pull/6125).
- Removed scripts as a means of running inventory updates of built-in types (https://github.com/ansible/awx/pull/6911)
- Ansible 2.8 is now partially unsupported; some inventory source types are known to no longer work.
- Fixed an issue where the vmware inventory source ssl_verify source variable was not recognized (https://github.com/ansible/awx/pull/7360)
- Fixed a bug that caused rsyslogd's configuration file to have world-readable file permissions, potentially leaking secrets (CVE-2020-10782)
## 12.0.0 (Jun 9, 2020)
- Removed memcached as a dependency of AWX (https://github.com/ansible/awx/pull/7240)
- Moved to a single container image build instead of separate awx_web and awx_task images. The container image is just `awx` (https://github.com/ansible/awx/pull/7228)
- Official AWX container image builds now use a two-stage container build process that notably reduces the size of our published images (https://github.com/ansible/awx/pull/7017)
- Removed support for HipChat notifications ([EoL announcement](https://www.atlassian.com/partnerships/slack/faq#faq-98b17ca3-247f-423b-9a78-70a91681eff0)); all previously-created HipChat notification templates will be deleted due to this removal.
- Fixed a bug which broke AWX installations with oc version 4.3 (https://github.com/ansible/awx/pull/6948/)
- Fixed a performance issue that caused notable delay of stdout processing for playbooks run against large numbers of hosts (https://github.com/ansible/awx/issues/6991)
- Fixed a bug that caused the CyberArk AIM credential plugin to hang forever in some environments (https://github.com/ansible/awx/issues/6986)
- Fixed a bug that caused ANY/ALL convergence settings not to properly save when editing approval nodes in the UI (https://github.com/ansible/awx/issues/6998)
## Setting up your development environment
The AWX development environment workflow and toolchain uses Docker and the docker-compose tool to provide dependencies, services, and databases necessary to run all of the components. It also bind-mounts the local source tree into the development container, making it possible to observe and test changes in real time.
### Prerequisites
For Linux platforms, refer to the following from Docker:
If you're not using Docker for Mac or Docker for Windows, you may need, or choose to, install the `docker-compose` Python module separately, in which case you'll need to run the following:
```bash
(host)$ pip3 install docker-compose
```
#### Frontend Development
See [the ui development documentation](awx/ui/README.md).
### Build the environment
#### Fork and clone the AWX repo
If you have not done so already, you'll need to fork the AWX repo on GitHub. For more on how to do this, see [Fork a Repo](https://help.github.com/articles/fork-a-repo/).
#### Create local settings
AWX will import the file `awx/settings/local_settings.py` and combine it with defaults in `awx/settings/defaults.py`. This file is required for starting the development environment, and startup will fail if it's not provided.
An example is provided. Make a copy of it, and edit as needed (the defaults are usually fine):
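A sketch of that step; the example filename here is an assumption, so check `awx/settings/` for the template actually shipped in your checkout:
```bash
# Copy the provided example settings into place (filename assumed)
(host)$ cp awx/settings/local_settings.py.docker_compose awx/settings/local_settings.py
```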
#### Build the base image
The AWX base container image (defined in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and symbolic links into the development environment that make running the services easy.
Run the following to build the image:
```bash
(host)$ make docker-compose-build
```
**NOTE**
> The image will need to be rebuilt if the Python requirements or OS dependencies change.
Once the build completes, you will have an `ansible/awx_devel` image in your local image cache. Use the `docker images` command to view it, as follows:
```bash
(host)$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ansible/awx_devel latest ba9ec3e8df74 26 minutes ago 1.42GB
```
#### Build the user interface
Run the following to build the AWX UI:
```bash
(host)$ make ui-devel
```
See [the ui development documentation](awx/ui/README.md) for more information on using the frontend development, build, and test tooling.
### Running the environment
#### Start the containers
Start the development containers by running the following:
```bash
(host)$ make docker-compose
```
The above utilizes the image built in the previous step, and will automatically start all required services and dependent containers. Once the containers launch, your session will be attached to the *awx* container, and you'll be able to watch log messages and events in real time. You will see messages from Django and the frontend build process.
If you start a second terminal session, you can take a look at the running containers using the `docker ps` command. For example:
```bash
# List running containers
(host)$ docker ps
CONTAINER ID   IMAGE                                              COMMAND                  CREATED          STATUS          PORTS                                                                                                                                                                 NAMES
44251b476f98   gcr.io/ansible-tower-engineering/awx_devel:devel   "/entrypoint.sh /bin…"   27 seconds ago   Up 23 seconds   0.0.0.0:6899->6899/tcp, 0.0.0.0:7899-7999->7899-7999/tcp, 0.0.0.0:8013->8013/tcp, 0.0.0.0:8043->8043/tcp, 0.0.0.0:8080->8080/tcp, 22/tcp, 0.0.0.0:8888->8888/tcp   tools_awx_run_9e820694d57e
40de380e3c2e   redis:latest                                       "docker-entrypoint.s…"   28 seconds ago   Up 26 seconds
b66a506d3007   postgres:10                                        "docker-entrypoint.s…"   28 seconds ago   Up 26 seconds   0.0.0.0:5432->5432/tcp                                                                                                                                                tools_postgres_1
```
**NOTE**
> The Makefile assumes that the image you built is tagged with your current branch. This allows you to build images for different contexts or branches. When starting the containers, you can choose a specific branch by setting `COMPOSE_TAG=<branch name>` in your environment.
> For example, you might be working in a feature branch, but you want to run the containers using the `devel` image you built previously. To do that, start the containers using the following command: `$ COMPOSE_TAG=devel make docker-compose`
##### Wait for migrations to complete
The first time you start the environment, database migrations need to run in order to build the PostgreSQL database. It will take a few moments, but eventually you will see output in your terminal session that looks like the following:
```bash
awx_1 | Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
awx_1 | Synchronizing apps without migrations:
awx_1 | Creating tables...
awx_1 | Running deferred SQL...
awx_1 | Installing custom SQL...
awx_1 | Running migrations:
awx_1 | Rendering model states... DONE
awx_1 | Applying contenttypes.0001_initial... OK
awx_1 | Applying contenttypes.0002_remove_content_type_name... OK
awx_1 | Applying auth.0001_initial... OK
awx_1 | Applying auth.0002_alter_permission_name_max_length... OK
awx_1 | Applying auth.0003_alter_user_email_max_length... OK
awx_1 | Applying auth.0004_alter_user_username_opts... OK
awx_1 | Applying auth.0005_alter_user_last_login_null... OK
awx_1 | Applying auth.0006_require_contenttypes_0002... OK
awx_1 | Applying taggit.0001_initial... OK
awx_1 | Applying taggit.0002_auto_20150616_2121... OK
awx_1 | Applying main.0001_initial... OK
awx_1 | Applying main.0002_squashed_v300_release... OK
awx_1 | Applying main.0003_squashed_v300_v303_updates... OK
awx_1 | Applying main.0004_squashed_v310_release... OK
awx_1 | Applying conf.0001_initial... OK
awx_1 | Applying conf.0002_v310_copy_tower_settings... OK
...
```
Once migrations are completed, you can begin using AWX.
#### Start from the container shell
Oftentimes you'll want to start the development environment without immediately starting all of the services in the *awx* container, and instead be taken directly to a shell. You can do this with the following:
```bash
(host)$ make docker-compose-test
```
Using `docker exec`, this will create a session in the running *awx* container, and place you at a command prompt, where you can run shell commands inside the container.
If you want to start and use the development environment, you'll first need to bootstrap it by running the following command:
```bash
(container)# /usr/bin/bootstrap_development.sh
```
The above will do all the setup tasks, including running database migrations, so it may take a couple of minutes. Once it's done, it will drop you back to the shell.
To launch all developer services:
```bash
(container)# /usr/bin/launch_awx.sh
```
`launch_awx.sh` also calls `bootstrap_development.sh`, so if all you are doing is launching the supervisor to start all services, you don't need to call `bootstrap_development.sh` first.
### Post Build Steps
Before you can log in and use the system, you will need to create an admin user. Optionally, you may also want to load some demo data.
##### Start a shell
To create the admin user, and load demo data, you first need to start a shell session on the *awx* container. In a new terminal session, use the `docker exec` command as follows to start the shell session:
```bash
(host)$ docker exec -it tools_awx_1 bash
```
This creates a session in the *awx* container, just as if you were using `ssh`, and allows you to execute commands within the running container.
##### Create an admin user
Before you can log into AWX, you need to create an admin user. With this user you will be able to create more users, and begin configuring the server. From within the container shell, run the following command:
```bash
(container)# awx-manage createsuperuser
```
You will be prompted for a username, an email address, and a password, and you will be asked to confirm the password. The email address is not important, so just enter something that looks like an email address. Remember the username and password, as you will use them to log into the web interface for the first time.
##### Load demo data
You can optionally load some demo data. This will create a demo project, inventory, and job template. From within the container shell, run the following to load the data:
```bash
(container)# awx-manage create_preload_data
```
**NOTE**
> This information will persist in the database running in the `tools_postgres_1` container, until the container is removed. You may periodically need to recreate
this container, and thus the database, if the database schema changes in an upstream commit.
### Building API Documentation
AWX includes support for building [Swagger/OpenAPI
documentation](https://swagger.io). To build the documentation locally, run:
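The exact command isn't shown here; assuming the Makefile's swagger target (compare `docker-compose-build-swagger` later in this document), a sketch from inside the container would be:
```bash
(container)# make swagger
```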
You can now log into the AWX web interface at [https://localhost:8043](https://localhost:8043), and access the API directly at [https://localhost:8043/api/](https://localhost:8043/api/).
To log in use the admin user and password you created above in [Create an admin user](#create-an-admin-user).
### Purging containers and images
All submitted PRs will have the linter and unit tests run against them via Zuul, and the status reported in the PR.
## PR Checks run by Zuul
Zuul jobs for awx are defined in the [zuul-jobs](https://github.com/ansible/zuul-jobs) repo.
# Installing AWX
This document provides a guide for installing AWX.
:warning: NOTE |
--- |
If you're installing an older release of AWX (prior to 18.0), these instructions have changed. Take a look at your version specific instructions, e.g., for AWX 17.0.1, see: [https://github.com/ansible/awx/blob/17.0.1/INSTALL.md](https://github.com/ansible/awx/blob/17.0.1/INSTALL.md) |
If you're attempting to migrate an older Docker-based AWX installation, see: [Migrating Data from Local Docker](https://github.com/ansible/awx/blob/devel/tools/docker-compose/docs/data_migration.md) |
## Table of contents
- [Installing AWX](#installing-awx)
  * [The AWX Operator](#the-awx-operator)
    + [Quickstart with minikube](#quickstart-with-minikube)
    + [Starting minikube](#starting-minikube)
    + [Deploying the AWX Operator](#deploying-the-awx-operator)
* [Getting started](#getting-started)
+ [Clone the repo](#clone-the-repo)
+ [AWX branding](#awx-branding)
+ [Prerequisites](#prerequisites)
+ [System Requirements](#system-requirements)
+ [Choose a deployment platform](#choose-a-deployment-platform)
+ [Official vs Building Images](#official-vs-building-images)
* [Upgrading from previous versions](#upgrading-from-previous-versions)
* [OpenShift](#openshift)
+ [Prerequisites](#prerequisites-1)
+ [Pre-install steps](#pre-install-steps)
- [Deploying to Minishift](#deploying-to-minishift)
- [PostgreSQL](#postgresql)
+ [Run the installer](#run-the-installer)
+ [Post-install](#post-install)
+ [Accessing AWX](#accessing-awx)
* [Kubernetes](#kubernetes)
+ [Prerequisites](#prerequisites-2)
+ [Pre-install steps](#pre-install-steps-1)
+ [Configuring Helm](#configuring-helm)
+ [Run the installer](#run-the-installer-1)
+ [Post-install](#post-install-1)
+ [Accessing AWX](#accessing-awx-1)
+ [SSL Termination](#ssl-termination)
* [Docker-Compose](#docker-compose)
+ [Prerequisites](#prerequisites-3)
+ [Pre-install steps](#pre-install-steps-2)
- [Deploying to a remote host](#deploying-to-a-remote-host)
- [Inventory variables](#inventory-variables)
- [Docker registry](#docker-registry)
- [Proxy settings](#proxy-settings)
- [PostgreSQL](#postgresql-1)
+ [Run the installer](#run-the-installer-2)
+ [Post-install](#post-install-2)
+ [Accessing AWX](#accessing-awx-2)
- [Installing the AWX CLI](#installing-the-awx-cli)
* [Building the CLI Documentation](#building-the-cli-documentation)
## The AWX Operator
Starting in version 18.0, the [AWX Operator](https://github.com/ansible/awx-operator) is the preferred way to install AWX.
## Getting started
AWX can alternatively be installed and [run in Docker](./tools/docker-compose/README.md), but this install path is only recommended for development/test-oriented deployments, and has no official published release.
### Clone the repo
If you have not done so already, you will need to clone, or create a local copy of, the [AWX repo](https://github.com/ansible/awx). For more on how to clone the repo, view [git clone help](https://git-scm.com/docs/git-clone). We generally recommend that you view the releases page:
https://github.com/ansible/awx/releases
Please note that deploying from `HEAD` (or the latest commit) is **not** stable, and that if you want to do this, you should proceed at your own risk (also, see [Official vs Building Images](#official-vs-building-images) for building your own image).
Once you have a local copy, run commands within the root of the project tree.
### Quickstart with minikube
If you don't have an existing OpenShift or Kubernetes cluster, minikube is a fast and easy way to get up and running. To install minikube, follow the steps in their [documentation](https://minikube.sigs.k8s.io/docs/start/); see [Deploying the AWX Operator](#deploying-the-awx-operator) below for starting minikube and deploying the Operator.
### AWX branding
You can optionally install the AWX branding assets from the [awx-logos repo](https://github.com/ansible/awx-logos). Prior to installing, please review and agree to the [trademark guidelines](https://github.com/ansible/awx-logos/blob/master/TRADEMARKS.md).
To install the assets, clone the `awx-logos` repo so that it is next to your `awx` clone. As you progress through the installation steps, you'll be setting variables in the [inventory](./installer/inventory) file. To include the assets in the build, set `awx_official=true`.
### Prerequisites
Before you can run a deployment, you'll need the following installed in your local environment:
- [Ansible](http://docs.ansible.com/ansible/latest/intro_installation.html) version 2.8+
+ This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
- [NPM 6.x LTS](https://docs.npmjs.com/)
+ This is only required if you're [building your own container images](#official-vs-building-images) with `use_container_for_build=false`
### System Requirements
The system that runs the AWX service will need to satisfy the following requirements:
- At least 4GB of memory
- At least 2 CPU cores
- At least 20GB of space
- Running Docker, OpenShift, or Kubernetes
- If you choose to use an external PostgreSQL database, please note that the minimum version is 10.
### Choose a deployment platform
We currently support running AWX as a containerized application using Docker images deployed to either an OpenShift cluster, a Kubernetes cluster, or docker-compose. The remainder of this document will walk you through the process of building the images, and deploying them to your chosen platform.
The [installer](./installer) directory contains an [inventory](./installer/inventory) file, and a playbook, [install.yml](./installer/install.yml). You'll begin by setting variables in the inventory file according to the platform you wish to use, and then you'll start the image build and deployment process by running the playbook.
In the sections below, you'll find deployment details and instructions for each platform:
- [OpenShift](#openshift)
- [Kubernetes](#kubernetes)
- [Docker Compose](#docker-compose).
### Official vs Building Images
When installing AWX you have the option of building your own image or using the image provided on DockerHub (see [awx](https://hub.docker.com/r/ansible/awx/))
This is controlled by the following variables in the `inventory` file
If these variables are present then all deployments will use these hosted images. If the variables are not present then the images will be built during the install.
*dockerhub_base*
> The base location on DockerHub where the images are hosted (by default this pulls a container image named `ansible/awx:tag`)
*dockerhub_version*
> Multiple versions are provided. `latest` always pulls the most recent. You may also select version numbers at different granularities: 1, 1.0, 1.0.1, 1.0.0.123
*use_container_for_build*
> Use a local distribution build container image for building the AWX package. This is helpful if you don't want to bother installing the build-time dependencies as it is taken care of already.
## Upgrading from previous versions
Upgrading AWX involves rerunning the install playbook. Download a newer release from [https://github.com/ansible/awx/releases](https://github.com/ansible/awx/releases) and re-populate the inventory file with your customized variables.
For convenience, you can create a file called `vars.yml`:
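A hypothetical `vars.yml` with illustrative values; carry over whatever you customized in your inventory, then pass the file with `-e @vars.yml` when running the playbook:
```yaml
# vars.yml (values are examples only)
admin_password: 'mypassword'
pg_password: 'mypgpassword'
```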
#### Starting minikube
Once you have installed minikube, run the following command to start it. You may wish to customize these options.
#### Deploying the AWX Operator
For a comprehensive overview of features, see [README.md](https://github.com/ansible/awx-operator/blob/devel/README.md) in the awx-operator repo. The following steps are the bare minimum to get AWX up and running; a sketch of both the minikube start command and the Operator deployment follows.
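A minimal sketch of both steps; the resource sizes are illustrative, and the Operator manifest URL is an assumption, so check the awx-operator README for the current deployment instructions:
```bash
# Start minikube (sizes are illustrative; adjust to your machine)
$ minikube start --cpus=4 --memory=6g --addons=ingress
# Deploy the AWX Operator (manifest path assumed; see the awx-operator README)
$ kubectl apply -f https://raw.githubusercontent.com/ansible/awx-operator/devel/deploy/awx-operator.yaml
```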
## OpenShift
### Prerequisites
To complete a deployment to OpenShift, you will need access to an OpenShift cluster. For demo and testing purposes, you can use [Minishift](https://github.com/minishift/minishift) to create a single node cluster running inside a virtual machine.
When using OpenShift for deploying AWX, make sure you have the correct privileges to add the security context 'privileged'; otherwise the installation will fail. The privileged context is needed because of the use of [the bubblewrap tool](https://github.com/containers/bubblewrap) to add an additional layer of security when using containers.
You will also need to have the `oc` command in your PATH. The `install.yml` playbook will call out to `oc` when logging into, and creating objects on the cluster.
The default resource requests per deployment are:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources](https://docs.openshift.com/container-platform/latest/dev_guide/compute_resources.html#dev-compute-resources)
### Pre-install steps
Before starting the install, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*openshift_host*
> IP address or hostname of the OpenShift cluster. If you're using Minishift, this will be the value returned by `minishift ip`.
*openshift_skip_tls_verify*
> Boolean. Set to True if using self-signed certs.
*openshift_project*
> Name of the OpenShift project that will be created, and used as the namespace for the AWX app. Defaults to *awx*.
*openshift_user*
> Username of the OpenShift user that will create the project, and deploy the application. Defaults to *developer*.
*openshift_pg_emptydir*
> Boolean. Set to True to use an emptyDir volume when deploying the PostgreSQL pod. Note: This should only be used for demo and testing purposes.
*docker_registry*
> IP address and port, or URL, for accessing a registry that the OpenShift cluster can access. Defaults to *172.30.1.1:5000*, the internal registry delivered with Minishift. This is not needed if you are using official hosted images.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Generally this will match the project name. It defaults to *awx*. This is not needed if you are using official hosted images.
*docker_registry_username*
> Username of the user that will push images to the registry. Will generally match the *openshift_user* value. Defaults to *developer*. This is not needed if you are using official hosted images.
#### Deploying to Minishift
Install Minishift by following the [installation guide](https://docs.openshift.org/latest/minishift/getting-started/installing.html).
The recommended minimum resources for your Minishift VM:
```bash
$ minishift start --cpus=4 --memory=8GB
```
The Minishift VM contains a Docker daemon, which you can use to build the AWX images. This is generally the approach you should take, and we recommend doing so. To use this instance, run the following command to setup your environment:
```bash
# Set DOCKER environment variable to point to the Minishift VM
$ eval $(minishift docker-env)
```
**Note**
> If you choose to not use the Docker instance running inside the VM, and build the images externally, you will have to enable the OpenShift cluster to access the images. This involves pushing the images to an external Docker registry, and granting the cluster access to it, or exposing the internal registry, and pushing the images into it.
#### PostgreSQL
By default, AWX will deploy a PostgreSQL pod inside of your cluster. You will need to create a [Persistent Volume Claim](https://docs.openshift.org/latest/dev_guide/persistent_volumes.html) which is named `postgresql` by default, and can be overridden by setting the `openshift_pg_pvc_name` variable. For testing and demo purposes, you may set `openshift_pg_emptydir=yes`.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information. When setting `pg_hostname` the installer will assume you have configured the database in that location and will not launch the postgresql pod.
### Run the installer
To start the install, you will pass two *extra* variables on the command line. The first is *openshift_password*, which is the password for the *openshift_user*, and the second is *docker_registry_password*, which is the password associated with *docker_registry_username*.
If you're using the OpenShift internal registry, then you'll pass an access token for the *docker_registry_password* value, rather than a password. The `oc whoami -t` command will generate the required token, as long as you're logged into the cluster via `oc cluster login`.
Run the following command (docker_registry_password is optional if using official images):
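A sketch of the invocation, assuming the Minishift defaults used elsewhere in this section (substitute your own user and password values):
```bash
$ ansible-playbook -i inventory install.yml -e openshift_password=developer -e docker_registry_password=$(oc whoami -t)
```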
After the playbook run completes, check the status of the deployment by running `oc get pods`:
```bash
# View the running pods
$ oc get pods
NAME READY STATUS RESTARTS AGE
awx-3886581826-5mv0l 4/4 Running 0 8s
postgresql-1-l85fh 1/1 Running 0 20m
```
In the above example, the name of the AWX pod is `awx-3886581826-5mv0l`. Before accessing the AWX web interface, setup tasks and database migrations need to complete. These tasks are running in the `awx_task` container inside the AWX pod. To monitor their status, tail the container's STDOUT by running the following command, replacing the AWX pod name with the pod name from your environment:
```bash
# Follow the awx_task log output
$ oc logs -f awx-3886581826-5mv0l -c awx-celery
```
You will see output indicating that database migrations are running. Once database migrations complete, the web interface will be accessible.
If you are deploying with the AWX Operator instead: once the Operator is running, you can deploy AWX by creating a simple YAML file:
```bash
$ cat myawx.yml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
metadata:
  name: awx
spec:
  tower_ingress_type: Ingress
```
And then creating the AWX object in the Kubernetes API, as sketched below:
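A minimal sketch of that step, assuming the manifest above was saved as `myawx.yml` and that the Operator is watching the namespace you apply it to:
```bash
$ kubectl apply -f myawx.yml
```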
After a few seconds, you will see the database and application pods show up. On a fresh system, it may take a few minutes for the container images to download.
### Accessing AWX
The AWX web interface is running in the AWX pod, behind the `awx-web-svc` service. The deployment process creates a route, `awx-web-svc`, to expose the service. How the ingress is actually created will vary depending on your environment and how the cluster is configured. You can view the route, and the external IP address and hostname assigned to it, by running the following command:
```bash
# View available routes
$ oc get routes
NAME      HOST/PORT                                PATH      SERVICES      PORT      TERMINATION   WILDCARD
```
The above example is taken from a Minishift instance. From a web browser, use `https` to access the `HOST/PORT` value from your environment. Using the above example, the URL to access the server would be [https://awx-web-svc-awx.192.168.64.2.nip.io](https://awx-web-svc-awx.192.168.64.2.nip.io).
Once you access the AWX server, you will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`.
If you deployed with the AWX Operator instead, you'll need to grab the service URL from minikube. On fresh installs, you will see the "AWX is currently upgrading." page until database migrations finish. Once you are redirected to the login screen, you can log in by obtaining the generated admin password (note: do not copy the trailing `%`):
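A sketch of those two lookups on minikube; the service and secret names here assume the AWX resource was named `awx` as in the example manifest above, so adjust them to your resource name:
```bash
# URL for the AWX web service (name assumed; confirm with `kubectl get svc`)
$ minikube service awx-service --url
# The generated admin password lives in a secret named <resource-name>-admin-password
$ kubectl get secret awx-admin-password -o jsonpath="{.data.password}" | base64 --decode
```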
## Kubernetes
### Prerequisites
A Kubernetes deployment will require you to have access to a Kubernetes cluster, as well as the following tools:
- `kubectl`
- `helm`
The installation program will reference `kubectl` directly. `helm` is only necessary if you are letting the installer configure PostgreSQL for you.
The default resource requests per pod are:
> Memory: 6GB
> CPU: 3 cores
This can be tuned by overriding the variables found in [/installer/roles/kubernetes/defaults/main.yml](/installer/roles/kubernetes/defaults/main.yml). Special care should be taken when doing this as undersized instances will experience crashes and resource exhaustion.
For more detail on how resource requests are formed see: [https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/](https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/)
### Pre-install steps
Before starting the install process, review the [inventory](./installer/inventory) file and provide values for the following variables found in the `[all:vars]` section, uncommenting when necessary. Make sure the OpenShift and standalone Docker sections are commented out:
*kubernetes_context*
> Prior to running the installer, make sure you've configured the context for the cluster you'll be installing to. This is how the installer knows which cluster to connect to and what authentication to use
*kubernetes_namespace*
> Name of the Kubernetes namespace where the AWX resources will be installed. This will be created if it doesn't exist
*docker_registry_*
> These settings should be used if building your own base images. You'll need access to an external registry and are responsible for making sure your kube cluster can talk to it and use it. If these are undefined and the dockerhub_ configuration settings are uncommented then the images will be pulled from dockerhub instead
### Configuring Helm
If you want the AWX installer to manage creating the database pod (rather than installing and configuring PostgreSQL on your own), then you will need a working `helm` installation; you can find details here: [https://docs.helm.sh/using_helm/#quickstart-guide](https://docs.helm.sh/using_helm/#quickstart-guide).
Newer Kubernetes clusters with RBAC enabled will need a service account created; follow the instructions here: [https://docs.helm.sh/using_helm/#role-based-access-control](https://docs.helm.sh/using_helm/#role-based-access-control)
### Run the installer
After making changes to the `inventory` file, use `ansible-playbook` to begin the install:
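A minimal sketch, assuming you run it from the `installer` directory with the default inventory and playbook names used throughout this document:
```bash
$ ansible-playbook -i inventory install.yml
```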
After the playbook run completes, check the status of the deployment by running `kubectl get pods --namespace awx` (replace awx with the namespace you used):
```bash
# View the running pods, it may take a few minutes for everything to be marked in the Running state
$ kubectl get pods --namespace awx
NAME READY STATUS RESTARTS AGE
awx-2558692395-2r8ss 4/4 Running 0 29s
awx-postgresql-355348841-kltkn 1/1 Running 0 1m
```
### Accessing AWX
The AWX web interface is running in the AWX pod behind the `awx-web-svc` service:
The deployment process also creates an `Ingress` named `awx-web-svc`. Some Kubernetes cloud providers will automatically handle routing configuration when an Ingress is created; others may require that you configure it more explicitly. You can see what Kubernetes knows about the Ingress with:
```bash
kubectl get ing --namespace awx
NAME HOSTS ADDRESS PORTS AGE
awx-web-svc * 35.227.x.y 80 3m
```
If your provider is able to allocate an IP address from the Ingress controller, then you can navigate to the address and access the AWX interface. For some providers it can take a few minutes to allocate and make this accessible; for others it may require you to intervene manually.
### SSL Termination
Unlike OpenShift's `Route`, the Kubernetes `Ingress` doesn't yet handle SSL termination. As such, the default configuration will only expose AWX over HTTP on port 80. You are responsible for configuring SSL support until support is added (either to Kubernetes or AWX itself).
## Docker-Compose
### Prerequisites
- [Docker](https://docs.docker.com/engine/installation/) on the host where AWX will be deployed. After installing Docker, the Docker service must be started (depending on your OS, you may have to add the local user that uses Docker to the ``docker`` group, refer to the documentation for details)
+ This also installs the `docker` Python module, which is incompatible with `docker-py`. If you have previously installed `docker-py`, please uninstall it.
By default, the delivered [installer/inventory](./installer/inventory) file will deploy AWX to the local host. It is possible, however, to deploy to a remote host. The [installer/install.yml](./installer/install.yml) playbook can be used to build images on the local host, and ship the built images to, and run deployment tasks on, a remote host. To do this, modify the [installer/inventory](./installer/inventory) file, by commenting out `localhost`, and adding the remote host.
For example, suppose you wish to build images locally on your CI/CD host, and deploy them to a remote host named *awx-server*. To do this, add *awx-server* to the [installer/inventory](./installer/inventory) file, and comment out or remove `localhost`, as demonstrated by the following:
```yaml
# localhost ansible_connection=local
awx-server
[all:vars]
...
```
In the above example, image build tasks will be delegated to `localhost`, which is typically where the clone of the AWX project exists. Built images will be archived, copied to the remote host, and imported into the remote Docker image cache. Tasks to start the AWX containers will then execute on the remote host.
If you choose to use the official images then the remote host will be the one to pull those images.
**Note**
> You may also want to set additional variables to control how Ansible connects to the host. For more information about this, view [Behavioral Inventory Parameters](http://docs.ansible.com/ansible/latest/intro_inventory.html#id12).
> As mentioned above, in [Prerequisites](#prerequisites-1), the prerequisites are required on the remote host.
> When deploying to a remote host, the playbook does not execute tasks with the `become` option. For this reason, make sure the user that connects to the remote host has privileges to run the `docker` command. This typically means that non-privileged users need to be part of the `docker` group.
#### Inventory variables
Before starting the install process, review the [inventory](./installer/inventory) file, and uncomment and provide values for the following variables found in the `[all:vars]` section:
*postgres_data_dir*
> If you're using the default PostgreSQL container (see [PostgreSQL](#postgresql-1) below), provide a path that can be mounted to the container, and where the database can be persisted.
*host_port*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container. If undefined no port will be exposed. Defaults to *80*.
*host_port_ssl*
> Provide a port number that can be mapped from the Docker daemon host to the web server running inside the AWX container for SSL support. If undefined no port will be exposed. Defaults to *443*, only works if you also set `ssl_certificate` (see below).
*ssl_certificate*
> Optionally, provide the path to a file that contains a certificate and its private key. This needs to be a .pem file.
*docker_compose_dir*
> When using docker-compose, the `docker-compose.yml` file will be created there (default `/tmp/awxcompose`).
*custom_venv_dir*
> Adds the custom venv environments from the local host to be passed into the containers at install.
*ca_trust_dir*
> If you're using a non-trusted CA, provide a path where the untrusted certificates are stored on your host.
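For illustration, a hypothetical `[all:vars]` snippet combining several of these settings (all values are examples only):
```ini
[all:vars]
postgres_data_dir=/var/lib/awx/pgdocker
host_port=80
host_port_ssl=443
ssl_certificate=/etc/ssl/private/awx.pem
docker_compose_dir=/tmp/awxcompose
```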
#### Docker registry
If you wish to tag and push built images to a Docker registry, set the following variables in the inventory file:
*docker_registry*
> IP address and port, or URL, for accessing a registry.
*docker_registry_repository*
> Namespace to use when pushing and pulling images to and from the registry. Defaults to *awx*.
*docker_registry_username*
> Username of the user that will push images to the registry. Defaults to *developer*.
**Note**
> These settings are ignored if using official images
#### Proxy settings
*http_proxy*
> IP address and port, or URL, for using an http_proxy.
*https_proxy*
> IP address and port, or URL, for using an https_proxy.
*no_proxy*
> Exclude IP address or URL from the proxy.
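A hypothetical example of these proxy settings in the inventory (the hostnames are illustrative):
```ini
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=mycloud.example.com
```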
#### PostgreSQL
AWX requires access to a PostgreSQL database, and by default, one will be created and deployed in a container, and data will be persisted to a host volume. In this scenario, you must set the value of `postgres_data_dir` to a path that can be mounted to the container. When the container is stopped, the database files will still exist in the specified path.
If you wish to use an external database, in the inventory file, set the value of `pg_hostname`, and update `pg_username`, `pg_password`, `pg_admin_password`, `pg_database`, and `pg_port` with the connection information.
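A sketch of the external-database settings in the inventory, with placeholder connection values:
```ini
pg_hostname=postgresql.example.com
pg_username=awx
pg_password=awxpass
pg_database=awx
pg_port=5432
```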
### Run the installer
If you are not pushing images to a Docker registry, start the install by running the following:
```bash
# Set the working directory to installer
$ cd installer
# Run the Ansible playbook
$ ansible-playbook -i inventory install.yml
```
If you're pushing built images to a repository, then use the `-e` option to pass the registry password as follows, replacing *password* with the password of the username assigned to `docker_registry_username` (note that you will also need to remove `dockerhub_base` and `dockerhub_version` from the inventory file):
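A sketch of that invocation, with *password* as the placeholder to replace:
```bash
$ ansible-playbook -i inventory -e docker_registry_password=password install.yml
```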
After the playbook run completes, Docker starts a series of containers that provide the services that make up AWX. You can view the running containers using the `docker ps` command.
If you're deploying using Docker Compose, container names will be prefixed by the name of the folder where the docker-compose.yml file is created (by default, `awx`).
Immediately after the containers start, the *awx_task* container will perform required setup tasks, including database migrations. These tasks need to complete before the web interface can be accessed. To monitor the progress, you can follow the container's STDOUT by running the following:
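Assuming the default container naming, the task container is named `awx_task`, so:
```bash
# Follow the awx_task log output
$ docker logs -f awx_task
```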
```bash
Apply all migrations: sso, taggit, sessions, sites, kombu_transport_django, social_auth, contenttypes, auth, conf, main
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying contenttypes.0001_initial... OK
Applying contenttypes.0002_remove_content_type_name... OK
Applying auth.0001_initial... OK
Applying auth.0002_alter_permission_name_max_length... OK
Applying auth.0003_alter_user_email_max_length... OK
Applying auth.0004_alter_user_username_opts... OK
Applying auth.0005_alter_user_last_login_null... OK
Applying auth.0006_require_contenttypes_0002... OK
Applying taggit.0001_initial... OK
Applying taggit.0002_auto_20150616_2121... OK
Applying main.0001_initial... OK
...
```
Once migrations complete, you will see log output like the following:
```bash
Python 2.7.5 (default, Nov 6 2016, 00:28:07)
[GCC 4.8.5 20150623(Red Hat 4.8.5-11)] on linux2
Type "help", "copyright", "credits" or "license"for more information.
(InteractiveConsole)
>>> <User: admin>
>>> Default organization added.
Demo Credential, Inventory, and Job Template added.
Successfully registered instance awx
(changed: True)
Creating instance group tower
Added instance awx to tower
(changed: True)
...
```
### Accessing AWX
The AWX web server is accessible on the deployment host, using the *host_port* value set in the *inventory* file. The default URL is [http://localhost](http://localhost).
You will be prompted with a login dialog. The default administrator username is `admin`, and the password is `password`. Happy Automating!
# Installing the AWX CLI
Potential uses include:
* Checking on the status and output of job runs
* Managing objects like organizations, users, teams, etc...
The preferred way to install the AWX CLI is through pip directly from PyPI:
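The CLI is distributed as part of the `awxkit` package; a sketch of the install:
```bash
$ pip3 install awxkit
```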
`state:needs_info` The issue needs more information. This could be more debug output or more specifics about the system, such as version information; any detail that is currently preventing this issue from moving forward. This should be considered a blocked state.
`state:needs_review` The issue/pull request needs to be reviewed by other maintainers and contributors. This is usually used when there is a question out to another maintainer or when a person is less familiar with an area of the code base the issue is for.
`state:needs_revision` More commonly used on pull requests, this state represents that there are changes that are being waited on.
CHROMIUM_BIN=$(CHROMIUM_BIN) PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=1 $(NPM_BIN) --unsafe-perm --prefix awx/ui ci --no-save awx/ui
CHROMIUM_BIN=$(CHROMIUM_BIN) $(NPM_BIN) run --prefix awx/ui jshint
CHROMIUM_BIN=$(CHROMIUM_BIN) $(NPM_BIN) run --prefix awx/ui lint
CHROME_BIN=$(CHROMIUM_BIN) $(NPM_BIN) --prefix awx/ui run test:ci
CHROME_BIN=$(CHROMIUM_BIN) $(NPM_BIN) --prefix awx/ui run unit
# END UI TASKS
# --------------------------------------
# UI NEXT TASKS
# --------------------------------------
ui-next-lint:
	$(NPM_BIN) --prefix awx/ui_next install
	$(NPM_BIN) run --prefix awx/ui_next lint
	$(NPM_BIN) run --prefix awx/ui_next prettier-check
ui-next-test:
	$(NPM_BIN) --prefix awx/ui_next install
	$(NPM_BIN) run --prefix awx/ui_next test
ui-next-zuul-lint-and-test:
	$(NPM_BIN) --prefix awx/ui_next install
	$(NPM_BIN) run --prefix awx/ui_next lint
	$(NPM_BIN) run --prefix awx/ui_next prettier-check
	$(NPM_BIN) run --prefix awx/ui_next test -- --coverage --watchAll=false
# END UI NEXT TASKS
# --------------------------------------
# Build a pip-installable package into dist/ with a timestamped version number.
dev_build:
docker-auth:
awx/projects:
	@mkdir -p $@
# Docker isolated rampart
docker-compose-isolated: awx/projects
	CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
COMPOSE_UP_OPTS?=
CLUSTER_NODE_COUNT?=1
# Docker Compose Development environment
docker-compose: docker-auth awx/projects
	CURRENT_UID=$(shell id -u) OS="$(shell docker info | grep 'Operating System')" TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml up --no-recreate awx
	echo -e "\033[0;31mTo generate a CyberArk Conjur API key: docker exec -it tools_conjur_1 conjurctl account create quick-start\033[0m"
	docker-compose -f tools/docker-compose/_sources/docker-compose.yml -f tools/docker-credential-plugins-override.yml up --no-recreate awx
docker-compose-test: docker-auth awx/projects
	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /bin/bash
docker-compose-runtest: awx/projects
	docker-compose -f tools/docker-compose/_sources/docker-compose.yml run --rm --service-ports awx_1 /start_tests.sh
docker-compose-build-swagger: awx/projects
	cd tools && CURRENT_UID=$(shell id -u) TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports --no-deps awx /start_tests.sh swagger
- Refer to the [Contributing guide](./CONTRIBUTING.md) to get started developing, testing, and building AWX.
- All code submissions are made through pull requests against the `devel` branch.
- All contributors must use git commit --signoff for any commit to be merged and agree that usage of --signoff constitutes agreement with the terms of [DCO 1.1](./DCO_1_1.md)
- Take care to make sure no merge commits are in the submission, and use `git rebase` vs. `git merge` for this reason.
- If submitting a large code change, it's a good idea to join the `#ansible-awx` channel on irc.freenode.net and talk about what you would like to do or add first. This not only helps everyone know what's going on, but it also helps save time and effort if the community decides some changes are needed.
Reporting Issues
----------------
If you're experiencing a problem that you feel is a bug in AWX or have ideas for improving AWX, we encourage you to open an issue and share your feedback. But before opening a new issue, we ask that you please take a look at our [Issues guide](./ISSUES.md).
Code of Conduct
---------------
We ask all of our community members and contributors to adhere to the [Ansible code of conduct](http://docs.ansible.com/ansible/latest/community/code_of_conduct.html). If you have questions or need assistance, please reach out to our community team at [codeofconduct@ansible.com](mailto:codeofconduct@ansible.com)
Get Involved
------------
We welcome your feedback and ideas. Here's how to reach us with feedback and questions:
- Join the `#ansible-awx` channel on irc.freenode.net
- Join the [mailing list](https://groups.google.com/forum/#!forum/awx-project)
msgstr "Les domaines OpenStack définissent les limites administratives. Ils sont nécessaires uniquement pour les URL d’authentification Keystone v3. Voir la documentation Ansible Tower pour les scénarios courants."
#: awx/main/models/credential/__init__.py:824
msgid "Region Name"
msgstr "Nom de la région"
#: awx/main/models/credential/__init__.py:826
msgid ""
"For some cloud providers, like OVH, region must be specified."
msgstr ""
"Chez certains fournisseurs, comme OVH, vous devez spécifier le nom de la région"
field=models.CharField(blank=True,choices=[('','Manual'),('git','Git'),('hg','Mercurial'),('svn','Subversion'),('insights','Red Hat Insights'),('archive','Remote Archive')],default='',help_text='Specifies the source control system used to store the project.',max_length=8,verbose_name='SCM Type'),
),
migrations.AlterField(
model_name='projectupdate',
name='scm_type',
field=models.CharField(blank=True,choices=[('','Manual'),('git','Git'),('hg','Mercurial'),('svn','Subversion'),('insights','Red Hat Insights'),('archive','Remote Archive')],default='',help_text='Specifies the source control system used to store the project.',max_length=8,verbose_name='SCM Type'),
field=models.TextField(blank=True, default='', help_text='Only used when enabled_var is set. Value when the host is considered enabled. For example if enabled_var="status.power_state" and enabled_value="powered_on" with host variables: {"status": {"power_state": "powered_on", "created": "2020-08-04T18:13:04+00:00", "healthy": true}, "name": "foobar", "ip_address": "192.168.2.1"} the host would be marked enabled. If power_state were any value other than powered_on then the host would be disabled when imported into Tower. If the key is not found then the host will be enabled'),
),
migrations.AddField(
model_name='inventorysource',
name='enabled_var',
field=models.TextField(blank=True,default='',help_text='Retrieve the enabled state from the given dict of host variables. The enabled variable may be specified as "foo.bar", in which case the lookup will traverse into nested dicts, equivalent to: from_dict.get("foo", {}).get("bar", default)'),
),
migrations.AddField(
model_name='inventorysource',
name='host_filter',
field=models.TextField(blank=True,default='',help_text='Regex where only matching hosts will be imported into Tower.'),
),
migrations.AddField(
model_name='inventoryupdate',
name='enabled_value',
field=models.TextField(blank=True, default='', help_text='Only used when enabled_var is set. Value when the host is considered enabled. For example if enabled_var="status.power_state" and enabled_value="powered_on" with host variables: {"status": {"power_state": "powered_on", "created": "2020-08-04T18:13:04+00:00", "healthy": true}, "name": "foobar", "ip_address": "192.168.2.1"} the host would be marked enabled. If power_state were any value other than powered_on then the host would be disabled when imported into Tower. If the key is not found then the host will be enabled'),
),
migrations.AddField(
model_name='inventoryupdate',
name='enabled_var',
field=models.TextField(blank=True,default='',help_text='Retrieve the enabled state from the given dict of host variables. The enabled variable may be specified as "foo.bar", in which case the lookup will traverse into nested dicts, equivalent to: from_dict.get("foo", {}).get("bar", default)'),
),
migrations.AddField(
model_name='inventoryupdate',
name='host_filter',
field=models.TextField(blank=True,default='',help_text='Regex where only matching hosts will be imported into Tower.'),
field=models.CharField(blank=True,choices=[('','Manual'),('git','Git'),('svn','Subversion'),('insights','Red Hat Insights'),('archive','Remote Archive')],default='',help_text='Specifies the source control system used to store the project.',max_length=8,verbose_name='SCM Type'),
),
migrations.AlterField(
model_name='projectupdate',
name='scm_type',
field=models.CharField(blank=True,choices=[('','Manual'),('git','Git'),('svn','Subversion'),('insights','Red Hat Insights'),('archive','Remote Archive')],default='',help_text='Specifies the source control system used to store the project.',max_length=8,verbose_name='SCM Type'),
('organization',models.ForeignKey(blank=True,default=None,help_text='The organization used to determine access to this execution environment.',null=True,on_delete=django.db.models.deletion.CASCADE,related_name='executionenvironments',to='main.Organization')),
('tags',taggit.managers.TaggableManager(blank=True,help_text='A comma-separated list of tags.',through='taggit.TaggedItem',to='taggit.Tag',verbose_name='Tags')),
field=models.ForeignKey(blank=True,default=None,help_text='The default execution environment for jobs run by this organization.',null=True,on_delete=django.db.models.deletion.SET_NULL,related_name='+',to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='unifiedjob',
name='execution_environment',
field=models.ForeignKey(blank=True,default=None,help_text='The container image to be used for execution.',null=True,on_delete=django.db.models.deletion.SET_NULL,related_name='unifiedjobs',to='main.ExecutionEnvironment'),
),
migrations.AddField(
model_name='unifiedjobtemplate',
name='execution_environment',
field=models.ForeignKey(blank=True,default=None,help_text='The container image to be used for execution.',null=True,on_delete=django.db.models.deletion.SET_NULL,related_name='unifiedjobtemplates',to='main.ExecutionEnvironment'),
field=models.ForeignKey(blank=True,default=None,help_text='The default execution environment for jobs run using this project.',null=True,on_delete=django.db.models.deletion.SET_NULL,related_name='+',to='main.ExecutionEnvironment'),
field=models.CharField(choices=[('always','Always pull container before running.'),('missing','No pull option has been selected.'),('never','Never pull container before running.')],blank=True,default='',help_text='Pull image before running?',max_length=16),
field=awx.main.fields.JSONBField(blank=True,default=dict,editable=False,help_text='The Collections names and versions installed in the execution environment.'),
field=models.ForeignKey(blank=True,default=None,help_text='The default execution environment for jobs run by this organization.',null=True,on_delete=awx.main.utils.polymorphic.SET_NULL,related_name='+',to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='project',
name='default_environment',
field=models.ForeignKey(blank=True,default=None,help_text='The default execution environment for jobs run using this project.',null=True,on_delete=awx.main.utils.polymorphic.SET_NULL,related_name='+',to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='unifiedjob',
name='execution_environment',
field=models.ForeignKey(blank=True,default=None,help_text='The container image to be used for execution.',null=True,on_delete=awx.main.utils.polymorphic.SET_NULL,related_name='unifiedjobs',to='main.ExecutionEnvironment'),
),
migrations.AlterField(
model_name='unifiedjobtemplate',
name='execution_environment',
field=models.ForeignKey(blank=True,default=None,help_text='The container image to be used for execution.',null=True,on_delete=awx.main.utils.polymorphic.SET_NULL,related_name='unifiedjobtemplates',to='main.ExecutionEnvironment'),
field=models.ForeignKey(blank=True,default=None,help_text='The default execution environment for jobs run by this organization.',null=True,on_delete=django.db.models.deletion.SET_NULL,related_name='+',to='main.ExecutionEnvironment'),