Merge pull request #236 from matburt/contributing_and_other_docs

CONTRIBUTING.md and other docs
Matthew Jones 2017-08-11 10:30:55 -04:00 committed by GitHub
commit dd44f7a640
16 changed files with 362 additions and 426 deletions

CONTRIBUTING.md (new file, 278 lines added)
View File

@ -0,0 +1,278 @@
Ansible AWX
===========
Hi there! We're excited to have you as a contributor.
Have questions about this document or anything not covered here? Come chat with us on IRC (#ansible-awx on freenode) or the mailing list.
Table of contents
-----------------
* [Contributing Agreement](#dco)
* [Code of Conduct](#code-of-conduct)
* [Setting up the development environment](#setting-up-the-development-environment)
* [Prerequisites](#prerequisites)
* [Local Settings](#local-settings)
* [Building the base image](#building-the-base-image)
* [Building the user interface](#building-the-user-interface)
* [Starting up the development environment](#starting-up-the-development-environment)
* [Starting the development environment at the container shell](#starting-the-development-environment-at-the-container-shell)
* [Using the development environment](#using-the-development-environment)
* [What should I work on?](#what-should-i-work-on)
* [Submitting Pull Requests](#submitting-pull-requests)
* [Reporting Issues](#reporting-issues)
* [How issues are resolved](#how-issues-are-resolved)
* [Ansible Issue Bot](#ansible-issue-bot)
DCO
===
All contributors must use "git commit --signoff" for any
commit to be merged, and agree that usage of --signoff constitutes
agreement with the terms of DCO 1.1. Any contribution that does not
have such a signoff will not be merged.
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
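For reference, signing off is just a flag on `git commit`; here is a minimal sketch (the name and email below are placeholders for whatever is in your own git configuration):
```bash
# One-time setup: the identity used in the Signed-off-by trailer
git config user.name "Jane Developer"          # placeholder
git config user.email "jane@example.com"       # placeholder

# Create a signed-off commit; -s is shorthand for --signoff
git commit -s -m "Fix typo in CONTRIBUTING.md"

# The resulting commit message ends with a trailer like:
#   Signed-off-by: Jane Developer <jane@example.com>

# Forgot to sign off the last commit? Amend it in place:
git commit --amend -s --no-edit
```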
Code of Conduct
===============
Setting up the development environment
======================================
The AWX development environment workflow and toolchain are based on Docker and the docker-compose tool, which contain
the dependencies, services, and databases necessary to run everything. The local source tree is bound into the container,
making it possible to observe changes while developing.
Prerequisites
-------------
`docker` and `docker-compose` are required for starting the development services. On Linux you can generally find these in your
distro's packaging, but you may find that Docker itself maintains a separate repository that tracks the latest releases more closely.
For macOS and Windows, we recommend Docker for Mac (https://www.docker.com/docker-mac) and Docker for Windows (https://www.docker.com/docker-windows)
respectively. Docker for Mac/Windows automatically comes with `docker-compose`.
Installation instructions for common Linux distributions:
* Fedora: https://docs.docker.com/engine/installation/linux/docker-ce/fedora/
* CentOS: https://docs.docker.com/engine/installation/linux/docker-ce/centos/
* Ubuntu: https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/
* Debian: https://docs.docker.com/engine/installation/linux/docker-ce/debian/
* Arch: https://wiki.archlinux.org/index.php/Docker
For `docker-compose`, you may need or choose to install it separately:
(host)$ pip install docker-compose
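Once both tools are installed, it's worth confirming they are on your PATH before continuing (the versions shown in the comments are only illustrative):
```bash
docker --version            # e.g. "Docker version 17.x"
docker-compose --version    # e.g. "docker-compose version 1.x"
```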
Local Settings
--------------
In development mode (i.e., when running from a source checkout), Ansible AWX
will import the file `awx/settings/local_settings.py` and combine it with the defaults in `awx/settings/defaults.py`. This file
is required for starting the development environment, and startup will fail if it is not provided.
An example file that works for the `docker-compose` tool is provided. Make a copy of it and edit as needed (the defaults are usually fine):
(host)$ cp awx/settings/local_settings.py.docker_compose awx/settings/local_settings.py
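Settings placed in `local_settings.py` override the defaults from `awx/settings/defaults.py`. As a purely illustrative sketch (the `DEBUG` flag is a standard Django setting used here only as an example, not something AWX requires you to change), an override could be appended like this:
```bash
# Hypothetical example: append a Django-level override to the local settings
cat >> awx/settings/local_settings.py <<'EOF'

# Run Django in debug mode while developing (illustrative only)
DEBUG = True
EOF
```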
Building the base image
-----------------------
The AWX base container image (found in `tools/docker-compose/Dockerfile`) contains basic OS dependencies and
symbolic links into the development environment that make running the services easy. You'll first need to build the image:
(host)$ make docker-compose-build
The image will only need to be rebuilt if the requirements or OS dependencies change. A core concept of this image is that it relies
on having your local development source tree mapped into the container.
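For example, after changing OS-level requirements you would rebuild the image and can confirm the result; a minimal sketch:
```bash
# Rebuild the development base image
make docker-compose-build

# The image is tagged based on your current branch; list what was produced
docker images | grep awx_devel
```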
Building the user interface
---------------------------
> AWX requires the 6.x LTS version of Node and the 3.x version of NPM
In order for the AWX user interface to load from the development environment, it must first be built:
(host)$ make ui-devel
When developing features and fixes for the user interface you can find more detail here: [UI Developer README](awx/ui/README.md)
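A quick sanity check of the toolchain before building (exact patch levels don't matter, only the major versions):
```bash
node --version    # should report a 6.x release
npm --version     # should report a 3.x release
make ui-devel
```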
Starting up the development environment
----------------------------------------------
There are several ways of starting the development environment depending on your desired workflow. The easiest and most common way is with:
(host)$ make docker-compose
This utilizes the image you built in the previous step and will automatically start all required services and dependent containers. You'll
be able to watch log messages and events as they come through.
The Makefile assumes that the image you built is tagged with your current branch. This allows you to pre-build images for different contexts,
but you may want to use a particular branch's image (for instance, if you are developing a PR from a branch based on the integration branch):
(host)$ COMPOSE_TAG=devel make docker-compose
Starting the development environment at the container shell
-----------------------------------------------------------
Oftentimes you'll want to start the development environment without immediately starting all of the services and instead be taken directly to a shell:
(host)$ make docker-compose-test
From here you'll need to bootstrap the development environment before it will be usable (the `docker-compose` make target
does this automatically):
(container)$ /bootstrap_development.sh
From here you can start each service individually, or choose to start all services in a pre-configured tmux session:
(container)# cd /awx_devel
(container)# make server
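Putting the interactive workflow together, a typical session looks roughly like this:
```bash
# On the host: build the base image (if needed) and open a shell in the container
make docker-compose-build
make docker-compose-test

# Inside the container: bootstrap, then start everything
/bootstrap_development.sh
cd /awx_devel
make server    # starts all services in a pre-configured tmux session
```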
Using the development environment
---------------------------------
With the development environment running there are a few optional steps to pre-populate the environment with data. If you are using the `docker-compose`
method above you'll first need a shell in the container:
(host)$ docker exec -it tools_awx_1 bash
Create a superuser account:
(container)# awx-manage createsuperuser
Preload AWX with demo data:
(container)# awx-manage create_preload_data
This information will persist in the database running in the `tools_postgres_1` container until it is removed. You may periodically need to recreate
this container and database if the database schema changes in an upstream commit.
You should now be able to visit and log in to the AWX user interface at https://localhost:8043 or http://localhost:8013 if you have built the UI.
If not, you can visit the API directly in your browser at https://localhost:8043/api/ or http://localhost:8013/api/
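A quick sanity check from the host, assuming the default ports shown above, is to hit the API root with curl (`-k` skips verification of the self-signed development certificate):
```bash
# HTTPS endpoint served by nginx inside the container
curl -k https://localhost:8043/api/

# Plain HTTP endpoint
curl http://localhost:8013/api/
```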
When working on the AWX source code, your changes will auto-reload as they are made, with the exception of any background tasks that run in
Celery.
Occasionally it may be necessary to purge any containers and images that may have collected:
(host)$ make docker-clean
There are a host of other shortcuts, tools, and container configurations in the Makefile designed for various purposes. Feel free to explore.
What should I work on?
======================
We list our specs in `/docs`. `/docs/current` contains things that we are actively working on; `/docs/future` contains ideas for future work and the direction we
want that work to take. Fixing bugs, translations, and updates to documentation are also appreciated.
Be aware that if you are working in a part of the codebase that is going through active development, your changes may be rejected, or you may be asked to
rebase them. A good idea before starting work is to have a discussion with us on IRC or the mailing list.
Submitting Pull Requests
========================
Fixes and features for AWX will go through the GitHub pull request interface. There are a few things you can do to improve the visibility of your change
and increase the likelihood that it will be accepted (a combined example of running the lint and test targets follows this list):
> Add UI detail to these
* No issues when running linters/code checkers
* Python: flake8: `(container)/awx_devel$ make flake8`
* JavaScript: JSHint: `(container)/awx_devel$ make jshint`
* No issues from unit tests
* Python: py.test: `(container)/awx_devel$ make test`
* JavaScript: Jasmine: `(container)/awx_devel$ make ui-test-ci`
* Write tests for new functionality, update/add tests for bug fixes
* Make the smallest change possible
* Write good commit messages: https://chris.beams.io/posts/git-commit/
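For convenience, the lint and test targets named above can be run back to back from the top of the source tree inside the development container:
```bash
cd /awx_devel

# Lint checks
make flake8
make jshint

# Unit tests
make test
make ui-test-ci
```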
It's generally a good idea to discuss features with us first by engaging us in IRC or on the mailing list, especially if you are unsure if it's a good
fit.
We like to keep our commit history clean and will require resubmission of pull requests that contain merge commits. Use `git pull --rebase` rather than
`git pull` and `git rebase` rather than `git merge`.
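For example, assuming you have the main repository configured as a remote named `upstream` (the remote and branch names here are just placeholders), keeping a feature branch free of merge commits looks like:
```bash
# Refresh your local devel branch without creating a merge commit
git checkout devel
git pull --rebase upstream devel

# Replay your feature branch on top of the refreshed devel branch
git checkout my-feature        # hypothetical branch name
git rebase devel
```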
Sometimes it might take us a while to fully review your PR. We try to keep the `devel` branch in good working order, so we review requests carefully.
Please be patient.
All submitted PRs will have the linter and unit tests run against them and the status reported in the PR.
Reporting Issues
================
Use the GitHub issue tracker for filing bugs. In order to save time and help us respond to issues quickly, make sure to fill out as much of the issue template
as possible. Version information and an accurate reproduction scenario are critical to helping us identify the problem.
When reporting issues for the UI, we also appreciate having screenshots and any error messages from the web browser's console. It's not unusual for browser extensions
and plugins to cause problems; reporting those will also help speed up analyzing and resolving UI bugs.
For the API and backend services, please capture all of the logs that you can from the time the problem was occurring.
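If you reproduced the problem in the development environment described above, one simple way to capture those logs is directly from the container (the container name matches the docker-compose development setup):
```bash
# Tail the most recent output from the main development container
docker logs --tail 200 tools_awx_1

# Or save everything the container has logged so far to a file
docker logs tools_awx_1 > awx-dev.log 2>&1
```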
Don't use the issue tracker to get help on how to do something - please use the mailing list and IRC for that.
How issues are resolved
-----------------------
We triage our issues into high, medium, and low priority and tag them with the relevant component (api, ui, installer, etc.). We will typically focus on high-priority
issues. There aren't hard and fast rules for determining the severity of an issue, but generally high-priority issues have an increased likelihood of breaking
existing functionality and/or negatively impacting a large number of users.
If your issue isn't considered `high` priority, please be patient, as it may take some time to get to your report.
Before opening a new issue, please use the issue search feature to see if it's already been reported. If you have any extra detail to provide, please comment.
Rather than posting a "me too" comment, consider giving it a "thumbs up" on GitHub.
Ansible Issue Bot
-----------------
> Fill in

INSTALL.md (new file, 2 lines added)
View File

@ -0,0 +1,2 @@
Installing AWX
==============

View File

@ -5,7 +5,7 @@
### Environment
<!--
* Tower version: X.Y.Z
* AWX version: X.Y.Z
* Ansible version: X.Y.Z
* Operating System:
* Web Browser:

View File

@ -9,7 +9,8 @@ NPM_BIN ?= npm
DEPS_SCRIPT ?= packaging/bundle/deps.py
GIT_BRANCH ?= $(shell git rev-parse --abbrev-ref HEAD)
MANAGEMENT_COMMAND ?= awx-manage
GCLOUD_AUTH ?=
IMAGE_REPOSITORY_AUTH ?=
IMAGE_REPOSITORY_BASE ?= https://gcr.io
VERSION=$(shell git describe --long)
VERSION3=$(shell git describe --long | sed 's/\-g.*//')
@ -24,7 +25,7 @@ VENV_BASE ?= /venv
SCL_PREFIX ?=
CELERY_SCHEDULE_FILE ?= /celerybeat-schedule
DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering/
DEV_DOCKER_TAG_BASE ?= gcr.io/ansible-tower-engineering
# Python packages to install only from source (not from binary wheels)
# Comma separated list
SRC_ONLY_PKGS ?= cffi,pycparser,psycopg2,twilio
@ -537,13 +538,13 @@ setup-bundle-build:
mkdir -p $@
docker-auth:
if [ "$(GCLOUD_AUTH)" ]; then \
docker login -u oauth2accesstoken -p "$(GCLOUD_AUTH)" https://gcr.io; \
if [ "$(IMAGE_REPOSITORY_AUTH)" ]; then \
docker login -u oauth2accesstoken -p "$(IMAGE_REPOSITORY_AUTH)" $(IMAGE_REPOSITORY_BASE); \
fi;
# Docker isolated rampart
docker-isolated:
TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml create
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml create
docker start tools_awx_1
docker start tools_isolated_1
if [ "`docker exec -i -t tools_isolated_1 cat /root/.ssh/authorized_keys`" == "`docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub`" ]; then \
@ -551,29 +552,31 @@ docker-isolated:
else \
docker exec "tools_isolated_1" bash -c "mkdir -p /root/.ssh && rm -f /root/.ssh/authorized_keys && echo $$(docker exec -t tools_awx_1 cat /root/.ssh/id_rsa.pub) >> /root/.ssh/authorized_keys"; \
fi
TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/docker-isolated-override.yml up
# Docker Compose Development environment
docker-compose: docker-auth
TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml up --no-recreate awx
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml up --no-recreate awx
docker-compose-cluster: docker-auth
TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose-cluster.yml up
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml up
docker-compose-test: docker-auth
cd tools && TAG=$(COMPOSE_TAG) docker-compose run --rm --service-ports awx /bin/bash
cd tools && TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose run --rm --service-ports awx /bin/bash
docker-compose-build: awx-devel-build awx-isolated-build
docker-compose-build: awx-devel-build
# Base development image build
awx-devel-build:
docker build -t ansible/awx_devel -f tools/docker-compose/Dockerfile .
docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)awx_devel:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)awx_devel:$(COMPOSE_TAG)
docker tag ansible/awx_devel $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)/awx_devel:$(COMPOSE_TAG)
# For use when developing on "isolated" AWX deployments
awx-isolated-build:
docker build -t ansible/awx_isolated -f tools/docker-isolated/Dockerfile .
docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)awx_isolated:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)awx_isolated:$(COMPOSE_TAG)
docker tag ansible/awx_isolated $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
#docker push $(DEV_DOCKER_TAG_BASE)/awx_isolated:$(COMPOSE_TAG)
MACHINE?=default
docker-clean:
@ -585,10 +588,10 @@ docker-refresh: docker-clean docker-compose
# Docker Development Environment with Elastic Stack Connected
docker-compose-elk: docker-auth
TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose.yml -f tools/elastic/docker-compose.logstash-link.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
docker-compose-cluster-elk: docker-auth
TAG=$(COMPOSE_TAG) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
TAG=$(COMPOSE_TAG) DEV_DOCKER_TAG_BASE=$(DEV_DOCKER_TAG_BASE) docker-compose -f tools/docker-compose-cluster.yml -f tools/elastic/docker-compose.logstash-link-cluster.yml -f tools/elastic/docker-compose.elastic-override.yml up --no-recreate
clean-elk:
docker stop tools_kibana_1

View File

@ -1,21 +1,16 @@
[![Build Status](http://jenkins.testing.ansible.com/buildStatus/icon?job=Test_Tower_Unittest)](http://jenkins.testing.ansible.com/job/Test_Tower_Unittest)
[![Requirements Status](https://requires.io/github/ansible/ansible-tower/requirements.svg?branch=devel)](https://requires.io/github/ansible/ansible-tower/requirements/?branch=devel)
[![Devel Requirements Status](https://requires.io/github/ansible/awx/requirements.svg?branch=devel)](https://requires.io/github/ansible/awx/requirements/?branch=devel)
Ansible Tower
AWX
=============
Tower provides a web-based user interface, REST API and task engine built on top of
AWX provides a web-based user interface, REST API and task engine built on top of
Ansible.
Resources
---------
Refer to `CONTRIBUTING.md` to get started developing, testing and building Tower.
Refer to `CONTRIBUTING.md` to get started developing, testing and building AWX.
Refer to `setup/README.md` to get started deploying Tower.
Refer to `INSTALL.md` to get started deploying AWX.
Refer to `docs/build_system.md` for more about Jenkins and installing nightly builds (as opposed to running from source).
Refer to `docs/release_process.md` for information on the steps involved in creating a release.
Refer to http://docs.ansible.com/ansible-tower/index.html for information on installing/upgrading, setup, troubleshooting, and much more.
Refer to `LOCALIZATION.md` for translation and localization help.

View File

@ -159,7 +159,7 @@ class ApiRootView(APIView):
v1 = reverse('api:api_v1_root_view', kwargs={'version': 'v1'})
v2 = reverse('api:api_v2_root_view', kwargs={'version': 'v2'})
data = dict(
description = _('Ansible Tower REST API'),
description = _('AWX REST API'),
current_version = v2,
available_versions = dict(v1 = v1, v2 = v2),
)

View File

@ -1,10 +1,10 @@
# Ansible Tower UI
# AWX UI
## Requirements
### Node / NPM
Tower currently requires the 6.x LTS version of Node and NPM.
AWX currently requires the 6.x LTS version of Node and NPM.
macOS installer: [https://nodejs.org/dist/latest-v6.x/node-v6.9.4.pkg](https://nodejs.org/dist/latest-v6.x/node-v6.9.4.pkg)
@ -33,7 +33,7 @@ $ yum install bzip2 gcc-c++ git make
### Starting the UI
First, the Tower API will need to be running. See [CONTRIBUTING.md](../../CONTRIBUTING.md).
First, the AWX API will need to be running. See [CONTRIBUTING.md](../../CONTRIBUTING.md).
When using Docker for Mac or native Docker on Linux:
@ -84,7 +84,7 @@ Some javascript libraries do not export their contents as a module, or depend on
1. Shim implicit dependencies using Webpack's [ProvidePlugin](https://github.com/webpack/webpack/blob/006d59500de0493c4096d5d4cecd64eb12db2b95/lib/ProvidePlugin.js). Example:
```js
// Tower source code depends on the lodash library being available as _
// AWX source code depends on the lodash library being available as _
_.uniq([1,2,3,1]) // will throw error undefined
```
@ -128,7 +128,7 @@ Environment variables can be accessed in JavaScript via `process.env`.
Example usage in `npm run build-docker-machine`:
```bash
$ docker-machine ssh $DOCKER_MACHINE_NAME -f -N -L ${npm_package_config_websocket_port}:localhost:${npm_package_config_websocket_port}; ip=$(docker-machine ip $DOCKER_MACHINE_NAME); echo npm set ansible-tower:django_host ${ip}; $ grunt dev
$ docker-machine ssh $DOCKER_MACHINE_NAME -f -N -L ${npm_package_config_websocket_port}:localhost:${npm_package_config_websocket_port}; ip=$(docker-machine ip $DOCKER_MACHINE_NAME); echo npm set awx:django_host ${ip}; $ grunt dev
```
Example usage in an `npm test` script target:
@ -139,7 +139,7 @@ npm_package_config_websocket_port=mock_websocket_port npm_package_config_django_
You'll usually want to pipe and set vars prior to running a script target:
```
$ npm set ansible-tower:websocket_host ${mock_host}; npm run script-name
$ npm set awx:websocket_host ${mock_host}; npm run script-name
```
### NPM Scripts

View File

@ -1,278 +0,0 @@
Build and Release Process
=========================
This document describes the Ansible Tower software build and release process.
This process includes the automation of the packaging for Debian/Ubuntu and
Fedora/EL (Enterprise Linux), as well as the creation of various software
repositories which are used by the default playbook setup.
Packaging Details
-----------------------------
### Version and Release Determination ###
The `VERSION` and `RELEASE` variables used by the build process are configured
in the Makefile. The `VERSION` is taken from the `__version__` field contained
within the `awx/__init__.py` file. This string should always be of the format:
[0-9\.]+
There should *not* be any "-" characters contained in the string (a "-" would
indicate a build/rev-type release number). Example:
1.2.2
The RELEASE value is generated by the Makefile during build time. While the
value can be supplied by setting the environment variable `RELEASE`, it is
recommended to let the `Makefile` determine the appropriate value.
### OFFICIAL vs. Non-OFFICIAL Builds ###
An "official" build is one that does not include a development timestamp in the
release field. This is controlled by setting the environment variable
`OFFICIAL=yes` prior to running the make command.
Non-official builds will replace the `RELEASE` variable with the following
string:
-0.gitYYYYmmDDHHMM
Non-official builds should only be used for development purposes, and are
copied into the nightly repos. Official builds will be copied out to the
production servers via the automated Jenkins build process (described below).
### Python sdist Process ###
The sdist build is the first step in the packaging process. This step is
responsible for assembling the files to be packaged into a .tar.gz, which
can then be installed itself via pip or used later for the RPM/DEB builds.
The resulting tar.gz file will be named:
ansible-tower-${VERSION}-${RELEASE}.tar.gz
### RPM Build Process ###
The first step of the RPM build process is to remove the `$RELEASE` from the
tar.gz, since the spec file does not like to include the release. This is
handled by the `rpmtar` Makefile target, which first unpacks the file, renames
the contained directory to simply be `ansible-tower-${VERSION}`, and finally
re-packages the file as `ansible-tower-${VERSION}.tar.gz`.
The main Makefile target for the rpm build is (unsurprisingly) `rpm`. This copies
the re-formed sdist .tar.gz file into the rpm-build directory and then calls
the rpmbuild command to create the RPM.
The spec file for this command is `packaging/rpm/ansible-tower.spec`. This file is currently
maintained by hand, so any changelog entries must be added to it manually. All
other aspects of the file (source, version, release, etc.) are picked up via
variables that are set by the Makefile and do not need to be updated during
packaging.
### DEB Build Process ###
The process to build a .deb is somewhat more involved, and I will not get too
involved in the specifics of how the debian packaging works. The main files used
in this packaging are (all found in `packaging/deb/`):
- ansible-tower.dirs
- ansible-tower.install
- control
- rules
- {pre,post}{inst,rm}
The `ansible-tower.dirs` file contains the directories (listed as paths relative to the
build root) that will be created during the packaging.
The `ansible-tower.install` file contains a list of files that will be installed directly
by the build process rather than via the `make install` command or other steps. This
is of the format "source destination" (where the destination is also a path
relative to the build root).
The `control` file is functionally similar to the header of a spec file, and
contains things like the package name, requirements, etc.
The `rules` file is really a Makefile, and contains the rules for the build
process. These rules are based on the type of build you're executing (binary
vs. source, for instance). Since we are building a binary-only .deb package,
the only target we use is the `binary` target.
The pre/post scripts are analogous to the %pre/%post macros in the RPM spec,
and are executed at the various stages of the installation/removal. For
Debian/Ubuntu, these scripts do quite a bit more than the corresponding RPM
stages, since RPM packaging guidelines are generally more strict about
starting/stopping services, etc. during the RPM installation.
In the main `Makefile`, just as with the RPM target, the target for building
the .deb's is `deb`. This target begins similarly to the rpm target, in that
it copies the sdist file into the deb-build directory. It then unpacks that
file there and calls the `dh_make` helper function. This creates several new
directories that are used by the `dpkg-buildpackage` command, most importantly
the `debian` and `DEBIAN` directories (used for the source and binary builds,
respectively). The generated `debian` directory is removed and replaced with
the files that are in `packaging/deb/` and the target package name is inserted
into a file that will be used as a command-line argument to `dpkg-buildpackage`.
This is required; otherwise the build process will try to figure out the
name automatically (and not always successfully).
Finally, `dpkg-buildpackage` is called to build the .deb.
Jenkins
-----------------------------
### Server Information ###
The Ansible Jenkins server can be found at http://jenkins.testing.ansible.com
This is a standard Jenkins installation, with the following additional
plugins installed:
- Build Authorization Token Root Plugin:
This plugin allows build and related REST build triggers be accessed even
when anonymous users cannot see Jenkins.
- Git Client Plugin:
The standard git client plugin.
- Git Parameter Plug-In:
This plugin adds the ability to choose from git repository revisions or tags
- GitHub API Plugin:
This plugin provides GitHub API for other plugins.
- GitHub Plugin:
This plugin integrates GitHub to Jenkins.
- Workspace Cleanup Plugin:
This plugin ensures that the root of the workspace is cleaned out between
builds to prevent files from previous builds leaking or breaking future builds.
### Server Installation and Configuration ###
The base Jenkins server was installed via apt:
$ apt-get install jenkins
Since the server OS for the Jenkins server is Ubuntu Raring (13.04), mock was installed
from source in order to execute RPM builds on this server, as follows:
$ apt-get install \
automake \
git \
libpython2.7 \
libsqlite0 \
libuser1 \
make \
python-decoratortools \
python-libxml2 \
python-peak.util.decorators \
python-pycurl \
python-rpm \
python-sqlite \
python-sqlitecachec \
python-support \
python-urlgrabber \
usermode \
yum \
yum-utils
$ git clone git://git.fedorahosted.org/git/mock.git mock
$ cd mock
$ ./autogen.sh
$ automake
$ ./configure \
--bindir=/usr/bin \
--sbindir=/usr/sbin \
--sysconfdir=/etc \
--localstatedir=/var/lib \
--libdir=/usr/lib \
--includedir=/usr/include \
--mandir=/usr/man
$ make install
$ ln -s /usr/bin/consolehelper /usr/bin/mock
In order to create apt repositories, the reprepro package was also installed.
$ apt-get install reprepro
### Configured Jobs ###
There are currently three classes of jobs configured in Jenkins:
- `Build_Ansible_{RPM,DEB,TAR,Docs}` builds for Ansible
- `Build_Tower_{RPM,DEB,TAR,Docs}` builds for Ansible Tower
- Automated scans which kick off the prior two jobs
The automated scans work by checking for new tags in the git repository for
the given project, and when a new one is found, starting the appropriate jobs.
For RPMs, a job is started for each of the supported distributions while for
DEBs only one job is started. All of these jobs are started with `OFFICIAL=yes`
so that an official package is produced, which will be copied out to the production
repositories (documented below).
> NOTE: The nightly jobs are currently triggered by a cron job in the exact same
> manner as the above jobs, the only difference being that they set OFFICIAL=no
> and use HEAD as the target tag for the job, so they are always built off of
> the most recent commit at that time. Likewise, the resultant packages are only
> copied to the relevant nightlies repo (also documented below).
### Manual Builds ###
Manual builds can be triggered via the Jenkins GUI. Simply log in and select the
appropriate job, and then click on the "Build with Parameters" link to the left
(or select it from the drop-down that is available from the main jobs list).
You will be presented with a form to enter parameters. The `TARGET_TAG` and `OFFICIAL`
parameters are the same for both RPM and DEB builds, the function of which is
described above. For RPM builds, there is an addition parameter named `TARGET_DIST`,
which controls the mock environment for the build.
> WARNING: Take extra care when manually triggering an `OFFICIAL` build at this
> time, as the resultant package will automatically be copied to the production
> server and made available for customers to download.
> NOTE: As of this writing, using the combination of `TARGET_TAG=HEAD` and `OFFICIAL=yes`
> is allowed, however this will not be the case in the future. This will either be
> disallowed by failing the job, or the resultant package will be copied to a third
> repository to be used for user-acceptance testing (UAT).
Repositories
-----------------------------
### Nightlies ###
The nightly repositories are hosted on the AnsibleWorks Jenkins server, and can
be found at the following location:
http://jenkins.testing.ansible.com/ansible-tower_nightlies_f8b8c5588b2505970227a7b0900ef69040ad5a00/devel
There are several sub-folders, including `deb/`, `rpm/`, `docs/` and `setup/`
The `rpm/` folder itself contains sub-folders for each distribution/architecture
that we support, for example:
- epel-6-{i386,x86_64}
- epel-7-{i386,x86_64}
The `deb/` folder contains several subfolders, which correspond to the normal
apt repository structure. All .deb files are located under `pool/`, while the `dists/`
directory contains the distribution-specific information.
The `docs/` folder contains the PDF documentation.
The `setup/` folder contains the Ansible Tower setup playbook tar.gz file.
These nightly repositories can be used by the Ansible Tower setup playbook by
running the `setup.sh` shell script with the following option:
./setup.sh -e "aw_repo_url=http://jenkins.testing.ansible.com/ansible-tower_nightlies_f8b8c5588b2505970227a7b0900ef69040ad5a00/devel gpgcheck=0"
### Official Releases ###
As noted above, `OFFICIAL` builds are copied out to the production server, and can be
found at the following location:
http://releases.ansible.com/ansible-tower/
The Ansible Tower setup playbook will use this repo location by default.
### NOTES ###
* When implementing new features, please keep an eye on the contents of `/packaging/remove_tower_source.py`,
which determines which python source files should be deleted and which should be
preserved. There are some source files that we want to keep in production for special
purposes, like source files of migrations. Ignoring those might lead to bugs.

View File

@ -1,69 +0,0 @@
Release Process
===============
This document describes the process of creating and publishing an Ansible Tower release.
Time for a release
------------------
When the time comes for a release, the following steps will ensure a smooth and
successful release.
1. Verify that the `__version__` variable has been updated in `awx/__init__.py`.
```
__version__ = 'X.Y.Z'
```
2. Update the "Release History" in the file `README.md`.
3. Update the rpm package changelog by adding a new entry to the file `packaging/rpm/ansible-tower.spec`.
4. Update the debian package changelog by adding a new entry to the file `packaging/debian/changelog`.
5. Tag and push the release to git.
```
git tag <X.Y.Z>
git push --tags
```
6. Create and push a release branch to git.
```
git branch release_<X.Y.Z>
git checkout release_<X.Y.Z>
git push origin release_<X.Y.Z>
```
Monitor Jenkins
---------------
Once tagged, one must launch the [Release_Tower](http://jenkins.testing.ansible.com/view/Tower/job/Release_Tower/) job with the following parameters:
* `GIT_BRANCH=origin/tags/<X.Y.Z>`
* `OFFICIAL=yes`
The following jobs will be triggered:
* [Build_Tower_TAR](http://jenkins.testing.ansible.com/view/Tower/)
* [Build_Tower_DEB](http://jenkins.testing.ansible.com/view/Tower/)
* [Build_Tower_AMI](http://jenkins.testing.ansible.com/view/Tower/)
* [Build_Tower_RPM](http://jenkins.testing.ansible.com/view/Tower/)
* [Build_Tower_Docs](http://jenkins.testing.ansible.com/view/Tower/)
Should any build step fail, Jenkins will emit a message in IRC and set the build status to failed.
Publishing Builds
-----------------
Upon successful completion, Jenkins will publish build artifacts to the following locations:
* http://releases.ansible.com/ansible-tower/rpm
* http://releases.ansible.com/ansible-tower/deb
* http://releases.ansible.com/ansible-tower/setup
* http://releases.ansible.com/ansible-tower/docs
Publishing AMIs
----------------------
While OFFICIAL Tower AMIs are created by Jenkins, the process for blessing AMIs is manual. Please contact <dave@ansible.com> to initiate the process.
Publishing Documentation
------------------------
Tower documentation is available in the [product-docs](https://github.com/ansible/product-docs) repository. The [Build_Tower_Docs](http://jenkins.testing.ansible.com/view/Tower/) job builds and publishes the PDF and HTML documentation.

View File

@ -18,4 +18,4 @@ exclude=.tox,venv,awx/lib/site-packages,awx/plugins/inventory/ec2.py,awx/plugins
[flake8]
ignore=E201,E203,E221,E225,E231,E241,E251,E261,E265,E303,E501,W291,W391,W293,E731,F405
exclude=.tox,venv,awx/lib/site-packages,awx/plugins/inventory,awx/ui,awx/api/urls.py,awx/main/migrations,awx/main/south_migrations,awx/main/tests/data,node_modules/,awx/projects/,tools/docker,awx/settings/local_*.py,installer/openshift/settings.py
exclude=.tox,venv,awx/lib/site-packages,awx/plugins/inventory,awx/ui,awx/api/urls.py,awx/main/migrations,awx/main/south_migrations,awx/main/tests/data,node_modules/,awx/projects/,tools/docker,awx/settings/local_*.py,installer/openshift/settings.py,build/

View File

@ -16,7 +16,7 @@ services:
- "15672:15672"
awx_1:
privileged: true
image: gcr.io/ansible-tower-engineering/awx_devel:${TAG}
image: ${DEV_DOCKER_TAG_BASE}/awx_devel:${TAG}
hostname: awx_1
environment:
RABBITMQ_HOST: rabbitmq_1
@ -30,7 +30,7 @@ services:
awx_2:
privileged: true
image: gcr.io/ansible-tower-engineering/awx_devel:${TAG}
image: ${DEV_DOCKER_TAG_BASE}/awx_devel:${TAG}
hostname: awx_2
environment:
RABBITMQ_HOST: rabbitmq_2
@ -43,7 +43,7 @@ services:
- "../:/awx_devel"
awx_3:
privileged: true
image: gcr.io/ansible-tower-engineering/awx_devel:${TAG}
image: ${DEV_DOCKER_TAG_BASE}/awx_devel:${TAG}
hostname: awx_3
environment:
RABBITMQ_HOST: rabbitmq_3
@ -55,16 +55,16 @@ services:
volumes:
- "../:/awx_devel"
rabbitmq_1:
image: gcr.io/ansible-tower-engineering/rabbit_cluster_node:latest
image: ${DEV_DOCKER_TAG_BASE}/rabbit_cluster_node:latest
hostname: rabbitmq_1
rabbitmq_2:
image: gcr.io/ansible-tower-engineering/rabbit_cluster_node:latest
image: ${DEV_DOCKER_TAG_BASE}/rabbit_cluster_node:latest
hostname: rabbitmq_2
environment:
- CLUSTERED=true
- CLUSTER_WITH=rabbitmq_1
rabbitmq_3:
image: gcr.io/ansible-tower-engineering/rabbit_cluster_node:latest
image: ${DEV_DOCKER_TAG_BASE}/rabbit_cluster_node:latest
hostname: rabbitmq_3
environment:
- CLUSTERED=true

View File

@ -2,7 +2,7 @@ version: '3'
services:
# Primary AWX Development Container
awx:
image: gcr.io/ansible-tower-engineering/awx_devel:${TAG}
image: ${DEV_DOCKER_TAG_BASE}/awx_devel:${TAG}
hostname: awx
environment:
RABBITMQ_HOST: rabbitmq

View File

@ -26,6 +26,7 @@ ADD tools/docker-compose/awx.egg-info /tmp/awx.egg-info
RUN ln -Ffs /awx_devel/tools/docker-compose/nginx.conf /etc/nginx/nginx.conf
RUN ln -Ffs /awx_devel/tools/docker-compose/nginx.vh.default.conf /etc/nginx/conf.d/nginx.vh.default.conf
RUN ln -s /awx_devel/tools/docker-compose/start_development.sh /start_development.sh
RUN ln -s /awx_devel/tools/docker-compose/bootstrap_development.sh /bootstrap_development.sh
RUN openssl req -nodes -newkey rsa:2048 -keyout /etc/nginx/nginx.key -out /etc/nginx/nginx.csr -subj "/C=US/ST=North Carolina/L=Durham/O=Ansible/OU=AWX Development/CN=awx.localhost"
RUN openssl x509 -req -days 365 -in /etc/nginx/nginx.csr -signkey /etc/nginx/nginx.key -out /etc/nginx/nginx.crt
WORKDIR /tmp

View File

@ -0,0 +1,36 @@
#!/bin/bash
set +x
# Wait for the databases to come up
ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=postgres port=5432" all
ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=memcached port=11211" all
ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=${RABBITMQ_HOST} port=5672" all
# In case AWX in the container wants to connect to itself, use "docker exec" to attach to the container otherwise
# TODO: FIX
#/etc/init.d/ssh start
ansible -i "127.0.0.1," -c local -v -m postgresql_user -U postgres -a "name=awx-dev password=AWXsome1 login_user=postgres login_host=postgres" all
ansible -i "127.0.0.1," -c local -v -m postgresql_db -U postgres -a "name=awx-dev owner=awx-dev login_user=postgres login_host=postgres" all
# Move to the source directory so we can bootstrap
if [ -f "/awx_devel/manage.py" ]; then
cd /awx_devel
else
echo "Failed to find awx source tree, map your development tree volume"
fi
cp -R /tmp/awx.egg-info /awx_devel/ || true
sed -i "s/placeholder/$(git describe --long | sed 's/\./\\./g')/" /awx_devel/awx.egg-info/PKG-INFO
cp /tmp/awx.egg-link /venv/awx/lib/python2.7/site-packages/awx.egg-link
ln -s /awx_devel/tools/rdb.py /venv/awx/lib/python2.7/site-packages/rdb.py || true
yes | cp -rf /awx_devel/tools/docker-compose/supervisor.conf /supervisor.conf
# AWX bootstrapping
make version_file
make migrate
make init
mkdir -p /awx_devel/awx/public/static
mkdir -p /awx_devel/awx/ui/static

View File

@ -1,42 +1,10 @@
#!/bin/bash
set +x
# Wait for the databases to come up
ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=postgres port=5432" all
ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=memcached port=11211" all
ansible -i "127.0.0.1," -c local -v -m wait_for -a "host=${RABBITMQ_HOST} port=5672" all
# In case AWX in the container wants to connect to itself, use "docker exec" to attach to the container otherwise
# TODO: FIX
#/etc/init.d/ssh start
ansible -i "127.0.0.1," -c local -v -m postgresql_user -U postgres -a "name=awx-dev password=AWXsome1 login_user=postgres login_host=postgres" all
ansible -i "127.0.0.1," -c local -v -m postgresql_db -U postgres -a "name=awx-dev owner=awx-dev login_user=postgres login_host=postgres" all
# Move to the source directory so we can bootstrap
if [ -f "/awx_devel/manage.py" ]; then
cd /awx_devel
else
echo "Failed to find awx source tree, map your development tree volume"
fi
cp -R /tmp/awx.egg-info /awx_devel/ || true
sed -i "s/placeholder/$(git describe --long | sed 's/\./\\./g')/" /awx_devel/awx.egg-info/PKG-INFO
cp /tmp/awx.egg-link /venv/awx/lib/python2.7/site-packages/awx.egg-link
ln -s /awx_devel/tools/rdb.py /venv/awx/lib/python2.7/site-packages/rdb.py || true
yes | cp -rf /awx_devel/tools/docker-compose/supervisor.conf /supervisor.conf
# AWX bootstrapping
make version_file
make migrate
make init
mkdir -p /awx_devel/awx/public/static
mkdir -p /awx_devel/awx/ui/static
# Start the service
/bootstrap_development.sh
cd /awx_devel
# Start the services
if [ -f "/awx_devel/tools/docker-compose/use_dev_supervisor.txt" ]; then
make supervisor
else

View File

@ -8,7 +8,7 @@ services:
- isolated
# Isolated Rampart Container
isolated:
image: gcr.io/ansible-tower-engineering/awx_isolated:${TAG}
image: ${DEV_DOCKER_TAG_BASE}/awx_isolated:${TAG}
hostname: isolated
volumes:
- "../awx/main/isolated:/awx_devel"