Dependency Updates

* Dynamic Inventory Source
Re-template against the Ansible 2.3 dynamic inventory sources.
The major change is the removal of `rax.py`. Most upstream scripts
other than `foreman.py` have only trivial coding-style changes or minor
functional extensions that do not affect Tower inventory update runs.
`foreman.py`, on the other hand, went through a major refactoring, but
its functionality stays the same.

Major Python dependency updates include apache-libcloud (1.3.0 -->
2.0.0), boto (2.45.0 --> 2.46.1) and shade (1.19.0 --> 1.20.0). Minor
Python dependency updates are indirect updates pulled in via
`pip-compile`, which are determined by the base dependencies.
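As a sketch of how this works: `pip-compile` reads the hand-maintained primary pins in `requirements.in` and emits a fully pinned `requirements.txt`, so bumping a base dependency there transitively updates the indirect pins. An illustrative `requirements.in` fragment (the exact file layout is an assumption; the named versions come from this commit):

```
# requirements.in -- primary dependencies only; pip-compile resolves the rest
requests==2.11.1   # pinned explicitly in response to an upstream issue
jsonschema         # promoted from requirements.txt (used by /main/fields.py)
boto==2.46.1
```

Running `pip-compile requirements.in` then regenerates `requirements.txt` with every transitive pin resolved.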

Some minor `task.py` extensions:
 - The `.ini` file for ec2 gains one more field, `stack_filters=False`,
   reflecting changes in `ec2.py`.
 - The `.ini` file for cloudforms will pick up these four options from
   the `source_vars_dict` of the inventory update: `'version',
   'purge_actions', 'clean_group_keys', 'nest_tags'`. These four options
   have always been available in `cloudforms.py`, but
   `cloudforms.ini.example` did not mention them until the latest
   version. For consistency with the upstream docs, these fields are now
   available for Tower users to customize.
 - The openstack YAML file will pick up the Ansible options
   `use_hostnames`, `expand_hostvars` and `fail_on_errors` from the
   `source_vars_dict` of the inventory update, in response to issue
   #6075.
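The pass-through pattern used for the openstack options can be sketched as follows (a simplified illustration mirroring the `tasks.py` diff below, not the exact Tower code):

```python
# Defaults match the upstream openstack inventory script behavior.
OPENSTACK_DEFAULTS = {
    'use_hostnames': True,
    'expand_hostvars': False,
    'fail_on_errors': True,
}

def merge_ansible_options(openstack_data, source_vars_dict):
    """Copy recognized Ansible options from the inventory update's
    source_vars into the generated openstack YAML data. The 'ansible'
    section is only emitted when the user actually provided a value."""
    provided = {k: source_vars_dict[k] for k in OPENSTACK_DEFAULTS
                if k in source_vars_dict}
    if provided:
        options = dict(OPENSTACK_DEFAULTS)
        options.update(provided)
        openstack_data['ansible'] = options
    return openstack_data

# One user-supplied option is enough to emit the section, with the
# remaining keys filled from the defaults.
data = merge_ansible_options({'clouds': {}}, {'expand_hostvars': True})
```

The same "defaults plus user overrides from `source_vars_dict`" shape also covers the ec2 and cloudforms `.ini` handling.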

* Remove Rackspace support
Support for Rackspace as both a dynamic inventory source and a cloud
credential has been fully removed. Data migrations have been added to
convert existing Rackspace credentials via the arbitrary credential
types feature and to delete Rackspace inventory sources.

Note also that the `jsonschema` requirement has been moved from
`requirements.txt` to `requirements.in` as a primary dependency, to
reflect its usage in `/main/fields.py`.

Connected issue: #6080.

* `pexpect` major update
`pexpect` sits at the very core of our task system and underwent a
major update, from 3.1 to 4.2.1. Although verified during devel, please
remain mindful of any suspicious issues on the Celery side even after
this PR is merged.
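The spirit of the API change is visible in the new spawn call (a minimal sketch with a stand-in command, not the exact Tower invocation): pexpect 4 folds unicode handling into `pexpect.spawn()` via an `encoding` argument, replacing the `pexpect.spawnu()` call used under 3.x.

```python
import pexpect

# pexpect 3.x: child = pexpect.spawnu(cmd, args, ...)
# pexpect 4.x: unicode mode is selected with encoding= on spawn().
child = pexpect.spawn(
    '/bin/echo', ['hello tower'],
    encoding='utf-8',    # decode child output to unicode, like spawnu()
    echo=False,          # do not echo written input back into the log
    ignore_sighup=True,  # keep running if the controlling tty goes away
)
child.expect(pexpect.EOF)
output = child.before  # everything the child printed before EOF
```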

* Miscellaneous
 - `requests` is now explicitly declared in `requirements.in` at version
   2.11.1, in response to an upstream issue
 - celery: 3.1.17 -> 3.1.25
 - django-extensions: 1.7.4 -> 1.7.8
 - django-polymorphic: 0.7.2 -> 1.2
 - django-split-settings: 0.2.2 -> 0.2.5
 - django-taggit: 0.21.3 -> 0.22.1
 - irc: 15.0.4 -> 15.1.1
 - pygerduty: 0.35.1 -> 0.35.2
 - pyOpenSSL: 16.2.0 -> 17.0.0
 - python-saml: 2.2.0 -> 2.2.1
 - redbaron: 0.6.2 -> 0.6.3
 - slackclient: 1.0.2 -> 1.0.5
 - tacacs_plus: 0.1 -> 0.2
 - xmltodict: 0.10.2 -> 0.11.0
 - pip: 8.1.2 -> 9.0.1
 - setuptools: 23.0.0 -> 35.0.2
 - (requirements_ansible.in only) kombu: 3.0.35 -> 3.0.37
This commit is contained in:
Aaron Tan 2017-04-20 16:47:53 -04:00
parent 3ed9ebed89
commit cfb633e8a6
35 changed files with 666 additions and 1022 deletions

View File

@ -267,8 +267,8 @@ virtualenv_ansible:
fi; \
if [ ! -d "$(VENV_BASE)/ansible" ]; then \
virtualenv --system-site-packages --setuptools $(VENV_BASE)/ansible && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==23.0.0 && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==8.1.2; \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
fi; \
fi
@ -279,8 +279,8 @@ virtualenv_tower:
fi; \
if [ ! -d "$(VENV_BASE)/tower" ]; then \
virtualenv --system-site-packages --setuptools $(VENV_BASE)/tower && \
$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==23.0.0 && \
$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==8.1.2; \
$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
fi; \
fi

View File

@ -89,7 +89,7 @@ class Metadata(metadata.SimpleMetadata):
# Special handling of inventory source_region choices that vary based on
# selected inventory source.
if field.field_name == 'source_regions':
for cp in ('azure', 'ec2', 'gce', 'rax'):
for cp in ('azure', 'ec2', 'gce'):
get_regions = getattr(InventorySource, 'get_%s_region_choices' % cp)
field_info['%s_region_choices' % cp] = get_regions()

View File

@ -35,7 +35,7 @@ from rest_framework import validators
from rest_framework.utils.serializer_helpers import ReturnList
# Django-Polymorphic
from polymorphic import PolymorphicModel
from polymorphic.models import PolymorphicModel
# AWX
from awx.main.constants import SCHEDULEABLE_PROVIDERS

View File

@ -361,16 +361,9 @@ class DashboardView(APIView):
'job_failed': inventory_with_failed_hosts.count(),
'inventory_failed': failed_inventory}
user_inventory_sources = get_user_queryset(request.user, InventorySource)
rax_inventory_sources = user_inventory_sources.filter(source='rax')
rax_inventory_failed = rax_inventory_sources.filter(status='failed')
ec2_inventory_sources = user_inventory_sources.filter(source='ec2')
ec2_inventory_failed = ec2_inventory_sources.filter(status='failed')
data['inventory_sources'] = {}
data['inventory_sources']['rax'] = {'url': reverse('api:inventory_source_list', request=request) + "?source=rax",
'label': 'Rackspace',
'failures_url': reverse('api:inventory_source_list', request=request) + "?source=rax&status=failed",
'total': rax_inventory_sources.count(),
'failed': rax_inventory_failed.count()}
data['inventory_sources']['ec2'] = {'url': reverse('api:inventory_source_list', request=request) + "?source=ec2",
'failures_url': reverse('api:inventory_source_list', request=request) + "?source=ec2&status=failed",
'label': 'Amazon EC2',

View File

@ -19,6 +19,7 @@ class Migration(migrations.Migration):
operations = [
# Inventory Refresh
migrations.RunPython(migration_utils.set_current_apps_for_migrations),
migrations.RunPython(invsrc.remove_rax_inventory_sources),
migrations.RunPython(invsrc.remove_inventory_source_with_no_inventory_link),
migrations.RunPython(invsrc.rename_inventory_sources),
]

View File

@ -3,8 +3,54 @@ from awx.main.models import CredentialType
from awx.main.utils.common import encrypt_field, decrypt_field
DEPRECATED_CRED_KIND = {
'rax': {
'kind': 'cloud',
'name': 'Rackspace',
'inputs': {
'fields': [{
'id': 'username',
'label': 'Username',
'type': 'string'
}, {
'id': 'password',
'label': 'Password',
'type': 'string',
'secret': True,
}],
'required': ['username', 'password']
},
'injectors': {
'env': {
'RAX_USERNAME': '{{ username }}',
'RAX_API_KEY': '{{ password }}',
'CLOUD_VERIFY_SSL': 'False',
},
},
},
}
def _generate_deprecated_cred_types():
ret = {}
for deprecated_kind in DEPRECATED_CRED_KIND:
ret[deprecated_kind] = None
return ret
def _populate_deprecated_cred_types(cred, kind):
if kind not in cred:
return None
if cred[kind] is None:
new_obj = CredentialType(**DEPRECATED_CRED_KIND[kind])
new_obj.save()
cred[kind] = new_obj
return cred[kind]
def migrate_to_v2_credentials(apps, schema_editor):
CredentialType.setup_tower_managed_defaults()
deprecated_cred = _generate_deprecated_cred_types()
# this monkey-patch is necessary to make the implicit role generation save
# signal use the correct Role model (the version active at this point in
@ -18,7 +64,7 @@ def migrate_to_v2_credentials(apps, schema_editor):
data = {}
if getattr(cred, 'vault_password', None):
data['vault_password'] = cred.vault_password
credential_type = CredentialType.from_v1_kind(cred.kind, data)
credential_type = _populate_deprecated_cred_types(deprecated_cred, cred.kind) or CredentialType.from_v1_kind(cred.kind, data)
defined_fields = credential_type.defined_fields
cred.credential_type = apps.get_model('main', 'CredentialType').objects.get(pk=credential_type.pk)

View File

@ -17,6 +17,14 @@ def remove_manual_inventory_sources(apps, schema_editor):
InventorySource.objects.filter(source='').delete()
def remove_rax_inventory_sources(apps, schema_editor):
'''Rackspace inventory sources are not supported since 3.2, remove them.
'''
InventorySource = apps.get_model('main', 'InventorySource')
logger.debug("Removing all Rackspace InventorySource from database.")
InventorySource.objects.filter(source='rax').delete()
def rename_inventory_sources(apps, schema_editor):
'''Rename existing InventorySource entries using the following format.
{{ inventory_source.name }} - {{ inventory.module }} - {{ number }}

View File

@ -51,7 +51,6 @@ class V1Credential(object):
('net', 'Network'),
('scm', 'Source Control'),
('aws', 'Amazon Web Services'),
('rax', 'Rackspace'),
('vmware', 'VMware vCenter'),
('satellite6', 'Red Hat Satellite 6'),
('cloudforms', 'Red Hat CloudForms'),
@ -794,28 +793,6 @@ def openstack(cls):
)
@CredentialType.default
def rackspace(cls):
return cls(
kind='cloud',
name='Rackspace',
managed_by_tower=True,
inputs={
'fields': [{
'id': 'username',
'label': 'Username',
'type': 'string'
}, {
'id': 'password',
'label': 'Password',
'type': 'string',
'secret': True,
}],
'required': ['username', 'password']
}
)
@CredentialType.default
def vmware(cls):
return cls(

View File

@ -745,7 +745,6 @@ class InventorySourceOptions(BaseModel):
('', _('Manual')),
('file', _('File, Directory or Script')),
('scm', _('Sourced from a project in Tower')),
('rax', _('Rackspace Cloud Servers')),
('ec2', _('Amazon EC2')),
('gce', _('Google Compute Engine')),
('azure', _('Microsoft Azure Classic (deprecated)')),
@ -953,14 +952,6 @@ class InventorySourceOptions(BaseModel):
('tag_none', _('Tag None')),
]
@classmethod
def get_rax_region_choices(cls):
# Not possible to get rax regions without first authenticating, so use
# list from settings.
regions = list(getattr(settings, 'RAX_REGION_CHOICES', []))
regions.insert(0, ('ALL', 'All'))
return regions
@classmethod
def get_gce_region_choices(self):
"""Return a complete list of regions in GCE, as a list of
@ -1037,10 +1028,7 @@ class InventorySourceOptions(BaseModel):
if self.source in CLOUD_PROVIDERS:
get_regions = getattr(self, 'get_%s_region_choices' % self.source)
valid_regions = [x[0] for x in get_regions()]
if self.source == 'rax':
region_transform = lambda x: x.strip().upper()
else:
region_transform = lambda x: x.strip().lower()
region_transform = lambda x: x.strip().lower()
else:
return ''
all_region = region_transform('all')

View File

@ -23,7 +23,7 @@ from django.apps import apps
from django.contrib.contenttypes.models import ContentType
# Django-Polymorphic
from polymorphic import PolymorphicModel
from polymorphic.models import PolymorphicModel
# Django-Celery
from djcelery.models import TaskMeta

View File

@ -3,7 +3,7 @@
import logging
from twilio.rest import TwilioRestClient
from twilio.rest import Client
from django.utils.encoding import smart_text
from django.utils.translation import ugettext_lazy as _
@ -29,7 +29,7 @@ class TwilioBackend(TowerBaseEmailBackend):
def send_messages(self, messages):
sent_messages = 0
try:
connection = TwilioRestClient(self.account_sid, self.account_token)
connection = Client(self.account_sid, self.account_token)
except Exception as e:
if not self.fail_silently:
raise

View File

@ -600,7 +600,10 @@ class BaseTask(Task):
job_timeout = 0 if local_timeout < 0 else job_timeout
else:
job_timeout = 0
child = pexpect.spawnu(args[0], args[1:], cwd=cwd, env=env)
child = pexpect.spawn(
args[0], args[1:], cwd=cwd, env=env, ignore_sighup=True,
encoding='utf-8', echo=False,
)
child.logfile_read = logfile
canceled = False
timed_out = False
@ -924,10 +927,6 @@ class RunJob(BaseTask):
if len(cloud_cred.security_token) > 0:
env['AWS_SECURITY_TOKEN'] = decrypt_field(cloud_cred, 'security_token')
# FIXME: Add EC2_URL, maybe EC2_REGION!
elif cloud_cred and cloud_cred.kind == 'rax':
env['RAX_USERNAME'] = cloud_cred.username
env['RAX_API_KEY'] = decrypt_field(cloud_cred, 'password')
env['CLOUD_VERIFY_SSL'] = str(False)
elif cloud_cred and cloud_cred.kind == 'gce':
env['GCE_EMAIL'] = cloud_cred.username
env['GCE_PROJECT'] = cloud_cred.project
@ -1148,7 +1147,7 @@ class RunJob(BaseTask):
job = self.update_model(job.pk, scm_revision=job.project.scm_revision)
except Exception:
job = self.update_model(job.pk, status='failed',
job_explanation=('Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' %
job_explanation=('Previous Task Failed: {"job_type": "%s", "job_name": "%s", "job_id": "%s"}' %
('project_update', local_project_sync.name, local_project_sync.id)))
raise
@ -1422,7 +1421,7 @@ class RunProjectUpdate(BaseTask):
os.close(self.lock_fd)
logger.error("I/O error({0}) while trying to aquire lock on file [{1}]: {2}".format(e.errno, lock_path, e.strerror))
raise
def pre_run_hook(self, instance, **kwargs):
if instance.launch_type == 'sync':
self.acquire_lock(instance)
@ -1506,6 +1505,18 @@ class RunInventoryUpdate(BaseTask):
},
'cache': cache,
}
ansible_variables = {
'use_hostnames': True,
'expand_hostvars': False,
'fail_on_errors': True,
}
provided_count = 0
for var_name in ansible_variables:
if var_name in inventory_update.source_vars_dict:
ansible_variables[var_name] = inventory_update.source_vars_dict[var_name]
provided_count += 1
if provided_count:
openstack_data['ansible'] = ansible_variables
private_data['credentials'][credential] = yaml.safe_dump(
openstack_data, default_flow_style=False, allow_unicode=True
)
@ -1527,9 +1538,12 @@ class RunInventoryUpdate(BaseTask):
ec2_opts.setdefault('route53', 'False')
ec2_opts.setdefault('all_instances', 'True')
ec2_opts.setdefault('all_rds_instances', 'False')
# TODO: Include this option when boto3 support comes.
#ec2_opts.setdefault('include_rds_clusters', 'False')
ec2_opts.setdefault('rds', 'False')
ec2_opts.setdefault('nested_groups', 'True')
ec2_opts.setdefault('elasticache', 'False')
ec2_opts.setdefault('stack_filters', 'False')
if inventory_update.instance_filters:
ec2_opts.setdefault('instance_filters', inventory_update.instance_filters)
group_by = [x.strip().lower() for x in inventory_update.group_by.split(',') if x.strip()]
@ -1542,15 +1556,6 @@ class RunInventoryUpdate(BaseTask):
ec2_opts.setdefault('cache_max_age', '300')
for k,v in ec2_opts.items():
cp.set(section, k, unicode(v))
# Build pyrax creds INI for rax inventory script.
elif inventory_update.source == 'rax':
section = 'rackspace_cloud'
cp.add_section(section)
credential = inventory_update.credential
if credential:
cp.set(section, 'username', credential.username)
cp.set(section, 'api_key', decrypt_field(credential,
'password'))
# Allow custom options to vmware inventory script.
elif inventory_update.source == 'vmware':
credential = inventory_update.credential
@ -1609,6 +1614,11 @@ class RunInventoryUpdate(BaseTask):
cp.set(section, 'password', decrypt_field(credential, 'password'))
cp.set(section, 'ssl_verify', "false")
cloudforms_opts = dict(inventory_update.source_vars_dict.items())
for opt in ['version', 'purge_actions', 'clean_group_keys', 'nest_tags']:
if opt in cloudforms_opts:
cp.set(section, opt, cloudforms_opts[opt])
section = 'cache'
cp.add_section(section)
cp.set(section, 'max_age', "0")
@ -1681,14 +1691,6 @@ class RunInventoryUpdate(BaseTask):
if len(passwords['source_security_token']) > 0:
env['AWS_SECURITY_TOKEN'] = passwords['source_security_token']
env['EC2_INI_PATH'] = cloud_credential
elif inventory_update.source == 'rax':
env['RAX_CREDS_FILE'] = cloud_credential
env['RAX_REGION'] = inventory_update.source_regions or 'all'
env['RAX_CACHE_MAX_AGE'] = "0"
env['CLOUD_VERIFY_SSL'] = str(False)
# Set this environment variable so the vendored package won't
# complain about not being able to determine its version number.
env['PBR_VERSION'] = '0.5.21'
elif inventory_update.source == 'vmware':
env['VMWARE_INI_PATH'] = cloud_credential
elif inventory_update.source == 'azure':

View File

@ -1036,71 +1036,6 @@ def test_aws_create_fail_required_fields(post, organization, admin, version, par
assert 'password' in json.dumps(response.data)
#
# Rackspace Credentials
#
@pytest.mark.django_db
@pytest.mark.parametrize('version, params', [
['v1', {
'kind': 'rax',
'name': 'Best credential ever',
'username': 'some_username',
'password': 'some_password',
}],
['v2', {
'credential_type': 1,
'name': 'Best credential ever',
'inputs': {
'username': 'some_username',
'password': 'some_password',
}
}]
])
def test_rax_create_ok(post, organization, admin, version, params):
rax = CredentialType.defaults['rackspace']()
rax.save()
params['organization'] = organization.id
response = post(
reverse('api:credential_list', kwargs={'version': version}),
params,
admin
)
assert response.status_code == 201
assert Credential.objects.count() == 1
cred = Credential.objects.all()[:1].get()
assert cred.inputs['username'] == 'some_username'
assert decrypt_field(cred, 'password') == 'some_password'
@pytest.mark.django_db
@pytest.mark.parametrize('version, params', [
['v1', {
'kind': 'rax',
'name': 'Best credential ever'
}],
['v2', {
'credential_type': 1,
'name': 'Best credential ever',
'inputs': {}
}]
])
def test_rax_create_fail_required_field(post, organization, admin, version, params):
rax = CredentialType.defaults['rackspace']()
rax.save()
params['organization'] = organization.id
response = post(
reverse('api:credential_list', kwargs={'version': version}),
params,
admin
)
assert response.status_code == 400
assert Credential.objects.count() == 0
assert 'username' in json.dumps(response.data)
assert 'password' in json.dumps(response.data)
#
# VMware vCenter Credentials
#

View File

@ -18,7 +18,6 @@ def test_default_cred_types():
'gce',
'net',
'openstack',
'rackspace',
'satellite6',
'scm',
'ssh',

View File

@ -196,11 +196,6 @@ def test_openstack_migration():
assert Credential.objects.count() == 1
@pytest.mark.skip(reason="TODO: rackspace should be a custom type (we're removing official support)")
def test_rackspace():
pass
@pytest.mark.django_db
def test_vmware_migration():
cred = Credential(name='My Credential')

View File

@ -16,6 +16,16 @@ def test_inv_src_manual_removal(inventory_source):
assert not InventorySource.objects.filter(pk=inventory_source.pk).exists()
@pytest.mark.django_db
def test_rax_inv_src_removal(inventory_source):
inventory_source.source = 'rax'
inventory_source.save()
assert InventorySource.objects.filter(pk=inventory_source.pk).exists()
invsrc.remove_rax_inventory_sources(apps, None)
assert not InventorySource.objects.filter(pk=inventory_source.pk).exists()
@pytest.mark.django_db
def test_inv_src_rename(inventory_source_factory):
inv_src01 = inventory_source_factory('t1')

View File

@ -381,25 +381,6 @@ class TestJobCredentials(TestJobExecution):
assert env['AWS_SECRET_KEY'] == 'secret'
assert env['AWS_SECURITY_TOKEN'] == 'token'
def test_rax_credential(self):
rax = CredentialType.defaults['rackspace']()
credential = Credential(
pk=1,
credential_type=rax,
inputs = {'username': 'bob', 'password': 'secret'}
)
credential.inputs['password'] = encrypt_field(credential, 'password')
self.instance.extra_credentials.add(credential)
self.task.run(self.pk)
assert self.task.run_pexpect.call_count == 1
call_args, _ = self.task.run_pexpect.call_args_list[0]
job, args, cwd, env, passwords, stdout = call_args
assert env['RAX_USERNAME'] == 'bob'
assert env['RAX_API_KEY'] == 'secret'
assert env['CLOUD_VERIFY_SSL'] == 'False'
def test_gce_credentials(self):
gce = CredentialType.defaults['gce']()
credential = Credential(
@ -1170,7 +1151,7 @@ def test_aquire_lock_open_fail_logged(logging_getLogger, os_open):
logger = mock.Mock()
logging_getLogger.return_value = logger
ProjectUpdate = tasks.RunProjectUpdate()
with pytest.raises(OSError, errno=3, strerror='dummy message'):
@ -1196,7 +1177,7 @@ def test_aquire_lock_acquisition_fail_logged(fcntl_flock, logging_getLogger, os_
logging_getLogger.return_value = logger
fcntl_flock.side_effect = err
ProjectUpdate = tasks.RunProjectUpdate()
with pytest.raises(IOError, errno=3, strerror='dummy message'):

View File

@ -1,6 +1,6 @@
#
# Configuration file for azure_rm.py
#
#
[azure]
# Control which resource groups are included. By default all resources groups are included.
# Set resource_groups to a comma separated list of resource groups names.

View File

@ -23,7 +23,7 @@
Azure External Inventory Script
===============================
Generates dynamic inventory by making API requests to the Azure Resource
Manager using the AAzure Python SDK. For instruction on installing the
Manager using the Azure Python SDK. For instruction on installing the
Azure Python SDK see http://azure-sdk-for-python.readthedocs.org/
Authentication
@ -32,7 +32,7 @@ The order of precedence is command line arguments, environment variables,
and finally the [default] profile found in ~/.azure/credentials.
If using a credentials file, it should be an ini formatted file with one or
more sections, which we refer to as profiles. The script looks for a
more sections, which we refer to as profiles. The script looks for a
[default] section, if a profile is not specified either on the command line
or with an environment variable. The keys in a profile will match the
list of command line arguments below.
@ -42,7 +42,7 @@ in your ~/.azure/credentials file, or a service principal or Active Directory
user.
Command line arguments:
- profile
- profile
- client_id
- secret
- subscription_id
@ -61,7 +61,7 @@ Environment variables:
Run for Specific Host
-----------------------
When run for a specific host using the --host option, a resource group is
When run for a specific host using the --host option, a resource group is
required. For a specific host, this script returns the following variables:
{
@ -191,7 +191,7 @@ import os
import re
import sys
from distutils.version import LooseVersion
from packaging.version import Version
from os.path import expanduser
@ -309,7 +309,7 @@ class AzureRM(object):
def _get_env_credentials(self):
env_credentials = dict()
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.iteritems():
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
env_credentials[attribute] = os.environ.get(env_variable, None)
if env_credentials['profile'] is not None:
@ -328,7 +328,7 @@ class AzureRM(object):
self.log('Getting credentials')
arg_credentials = dict()
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.iteritems():
for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
arg_credentials[attribute] = getattr(params, attribute)
# try module params
@ -362,7 +362,11 @@ class AzureRM(object):
resource_client = self.rm_client
resource_client.providers.register(key)
except Exception as exc:
self.fail("One-time registration of {0} failed - {1}".format(key, str(exc)))
self.log("One-time registration of {0} failed - {1}".format(key, str(exc)))
self.log("You might need to register {0} using an admin account".format(key))
self.log(("To register a provider using the Python CLI: "
"https://docs.microsoft.com/azure/azure-resource-manager/"
"resource-manager-common-deployment-errors#noregisteredproviderfound"))
@property
def network_client(self):
@ -442,7 +446,7 @@ class AzureInventory(object):
def _parse_cli_args(self):
# Parse command line arguments
parser = argparse.ArgumentParser(
description='Produce an Ansible Inventory file for an Azure subscription')
description='Produce an Ansible Inventory file for an Azure subscription')
parser.add_argument('--list', action='store_true', default=True,
help='List instances (default: True)')
parser.add_argument('--debug', action='store_true', default=False,
@ -664,7 +668,7 @@ class AzureInventory(object):
self._inventory['azure'].append(host_name)
if self.group_by_tag and vars.get('tags'):
for key, value in vars['tags'].iteritems():
for key, value in vars['tags'].items():
safe_key = self._to_safe(key)
safe_value = safe_key + '_' + self._to_safe(value)
if not self._inventory.get(safe_key):
@ -724,7 +728,7 @@ class AzureInventory(object):
def _get_env_settings(self):
env_settings = dict()
for attribute, env_variable in AZURE_CONFIG_SETTINGS.iteritems():
for attribute, env_variable in AZURE_CONFIG_SETTINGS.items():
env_settings[attribute] = os.environ.get(env_variable, None)
return env_settings
@ -786,11 +790,11 @@ class AzureInventory(object):
def main():
if not HAS_AZURE:
sys.exit("The Azure python sdk is not installed (try 'pip install azure>=2.0.0rc5') - {0}".format(HAS_AZURE_EXC))
sys.exit("The Azure python sdk is not installed (try `pip install 'azure>=2.0.0rc5' --upgrade`) - {0}".format(HAS_AZURE_EXC))
if LooseVersion(azure_compute_version) < LooseVersion(AZURE_MIN_VERSION):
if Version(azure_compute_version) < Version(AZURE_MIN_VERSION):
sys.exit("Expecting azure.mgmt.compute.__version__ to be {0}. Found version {1} "
"Do you have Azure >= 2.0.0rc5 installed?".format(AZURE_MIN_VERSION, azure_compute_version))
"Do you have Azure >= 2.0.0rc5 installed? (try `pip install 'azure>=2.0.0rc5' --upgrade`)".format(AZURE_MIN_VERSION, azure_compute_version))
AzureInventory()

View File

@ -1,16 +1,33 @@
# Ansible CloudForms external inventory script settings
#
[cloudforms]
# The version of CloudForms (this is not used yet)
version = 3.1
# the version of CloudForms ; currently not used, but tested with
version = 4.1
# The hostname of the CloudForms server
hostname = #insert your hostname here
# This should be the hostname of the CloudForms server
url = https://cfme.example.com
# Username for CloudForms
username = #insert your cloudforms user here
# This will more than likely need to be a local CloudForms username
username = <set your username here>
# Password for CloudForms user
password = #password
# The password for said username
password = <set your password here>
# True = verify SSL certificate / False = trust anything
ssl_verify = True
# limit the number of vms returned per request
limit = 100
# purge the CloudForms actions from hosts
purge_actions = True
# Clean up group names (from tags and other groupings so Ansible doesn't complain)
clean_group_keys = True
# Explode tags into nested groups / subgroups
nest_tags = False
[cache]
# Maximum time to trust the cache in seconds
max_age = 600

View File

@ -10,9 +10,10 @@
# AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2'
# separated list of regions. E.g. 'us-east-1, us-west-1, us-west-2'
# 'auto' is AWS_REGION or AWS_DEFAULT_REGION environment variable.
regions = all
regions_exclude = us-gov-west-1,cn-north-1
regions_exclude = us-gov-west-1, cn-north-1
# When generating inventory, Ansible needs to know how to address a server.
# Each EC2 instance has a lot of variables associated with it. Here is the list:
@ -56,14 +57,19 @@ vpc_destination_variable = ip_address
#destination_format_tags = Name,environment
# To tag instances on EC2 with the resource records that point to them from
# Route53, uncomment and set 'route53' to True.
# Route53, set 'route53' to True.
route53 = False
# To use Route53 records as the inventory hostnames, uncomment and set
# to equal the domain name you wish to use. You must also have 'route53' (above)
# set to True.
# route53_hostnames = .example.com
# To exclude RDS instances from the inventory, uncomment and set to False.
rds = False
#rds = False
# To exclude ElastiCache instances from the inventory, uncomment and set to False.
elasticache = False
#elasticache = False
# Additionally, you can specify the list of zones to exclude looking up in
# 'route53_excluded_zones' as a comma-separated list.
@ -75,7 +81,7 @@ all_instances = False
# By default, only EC2 instances in the 'running' state are returned. Specify
# EC2 instance states to return as a comma-separated list. This
# option is overriden when 'all_instances' is True.
# option is overridden when 'all_instances' is True.
# instance_states = pending, running, shutting-down, terminated, stopping, stopped
# By default, only RDS instances in the 'available' state are returned. Set
@ -107,7 +113,7 @@ cache_path = ~/.ansible/tmp
# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
# To disable the cache, set this value to 0
cache_max_age = 0
cache_max_age = 300
# Organize groups into a nested/hierarchy instead of a flat namespace.
nested_groups = False
@ -117,15 +123,17 @@ replace_dash_in_groups = True
# If set to true, any tag of the form "a,b,c" is expanded into a list
# and the results are used to create additional tag_* inventory groups.
expand_csv_tags = True
expand_csv_tags = False
# The EC2 inventory output can become very large. To manage its size,
# configure which groups should be created.
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_aws_account = False
group_by_ami_id = True
group_by_instance_type = True
group_by_instance_state = False
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
@ -151,6 +159,12 @@ group_by_elasticache_replication_group = True
# Filters are key/value pairs separated by '=', to list multiple filters use
# a list separated by commas. See examples below.
# If you want to apply multiple filters simultaneously, set stack_filters to
# True. Default behaviour is to combine the results of all filters. Stacking
# allows the use of multiple conditions to filter down, for example by
# environment and type of host.
stack_filters = False
# Retrieve only instances with (key=value) env=staging tag
# instance_filters = tag:env=staging

View File

@ -12,6 +12,8 @@ variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
optional region environement variable if region is 'auto'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
@ -162,6 +164,8 @@ class Ec2Inventory(object):
# and availability zones
self.inventory = self._empty_inventory()
self.aws_account_id = None
# Index of hostname (address) to instance ID
self.index = {}
@ -216,12 +220,22 @@ class Ec2Inventory(object):
def read_settings(self):
''' Reads the settings from the ec2.ini file '''
scriptbasename = __file__
scriptbasename = os.path.basename(scriptbasename)
scriptbasename = scriptbasename.replace('.py', '')
defaults = {'ec2': {
'ini_path': os.path.join(os.path.dirname(__file__), '%s.ini' % scriptbasename)
}
}
if six.PY3:
config = configparser.ConfigParser()
else:
config = configparser.SafeConfigParser()
ec2_default_ini_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'ec2.ini')
ec2_ini_path = os.path.expanduser(os.path.expandvars(os.environ.get('EC2_INI_PATH', ec2_default_ini_path)))
ec2_ini_path = os.environ.get('EC2_INI_PATH', defaults['ec2']['ini_path'])
ec2_ini_path = os.path.expanduser(os.path.expandvars(ec2_ini_path))
config.read(ec2_ini_path)
# is eucalyptus?
@ -245,6 +259,11 @@ class Ec2Inventory(object):
self.regions.append(regionInfo.name)
else:
self.regions = configRegions.split(",")
if 'auto' in self.regions:
env_region = os.environ.get('AWS_REGION')
if env_region is None:
env_region = os.environ.get('AWS_DEFAULT_REGION')
self.regions = [ env_region ]
# Destination addresses
self.destination_variable = config.get('ec2', 'destination_variable')
@ -265,6 +284,10 @@ class Ec2Inventory(object):
# Route53
self.route53_enabled = config.getboolean('ec2', 'route53')
if config.has_option('ec2', 'route53_hostnames'):
self.route53_hostnames = config.get('ec2', 'route53_hostnames')
else:
self.route53_hostnames = None
self.route53_excluded_zones = []
if config.has_option('ec2', 'route53_excluded_zones'):
self.route53_excluded_zones.extend(
@ -306,13 +329,13 @@ class Ec2Inventory(object):
if self.all_instances:
self.ec2_instance_states = ec2_valid_instance_states
elif config.has_option('ec2', 'instance_states'):
for instance_state in config.get('ec2', 'instance_states').split(','):
instance_state = instance_state.strip()
if instance_state not in ec2_valid_instance_states:
continue
self.ec2_instance_states.append(instance_state)
for instance_state in config.get('ec2', 'instance_states').split(','):
instance_state = instance_state.strip()
if instance_state not in ec2_valid_instance_states:
continue
self.ec2_instance_states.append(instance_state)
else:
self.ec2_instance_states = ['running']
self.ec2_instance_states = ['running']
# Return all RDS instances? (if RDS is enabled)
if config.has_option('ec2', 'all_rds_instances') and self.rds_enabled:
@ -338,8 +361,8 @@ class Ec2Inventory(object):
else:
self.all_elasticache_nodes = False
# boto configuration profile (prefer CLI argument)
self.boto_profile = self.args.boto_profile
# boto configuration profile (prefer CLI argument then environment variables then config file)
self.boto_profile = self.args.boto_profile or os.environ.get('AWS_PROFILE')
if config.has_option('ec2', 'boto_profile') and not self.boto_profile:
self.boto_profile = config.get('ec2', 'boto_profile')
@ -374,14 +397,11 @@ class Ec2Inventory(object):
os.makedirs(cache_dir)
cache_name = 'ansible-ec2'
aws_profile = lambda: (self.boto_profile or
os.environ.get('AWS_PROFILE') or
os.environ.get('AWS_ACCESS_KEY_ID') or
self.credentials.get('aws_access_key_id', None))
if aws_profile():
cache_name = '%s-%s' % (cache_name, aws_profile())
self.cache_path_cache = cache_dir + "/%s.cache" % cache_name
self.cache_path_index = cache_dir + "/%s.index" % cache_name
cache_id = self.boto_profile or os.environ.get('AWS_ACCESS_KEY_ID', self.credentials.get('aws_access_key_id'))
if cache_id:
cache_name = '%s-%s' % (cache_name, cache_id)
self.cache_path_cache = os.path.join(cache_dir, "%s.cache" % cache_name)
self.cache_path_index = os.path.join(cache_dir, "%s.index" % cache_name)
self.cache_max_age = config.getint('ec2', 'cache_max_age')
if config.has_option('ec2', 'expand_csv_tags'):
@ -408,6 +428,7 @@ class Ec2Inventory(object):
'group_by_availability_zone',
'group_by_ami_id',
'group_by_instance_type',
'group_by_instance_state',
'group_by_key_pair',
'group_by_vpc_id',
'group_by_security_group',
@ -420,6 +441,7 @@ class Ec2Inventory(object):
'group_by_elasticache_cluster',
'group_by_elasticache_parameter_group',
'group_by_elasticache_replication_group',
'group_by_aws_account',
]
for option in group_by_options:
if config.has_option('ec2', option):
@ -439,7 +461,7 @@ class Ec2Inventory(object):
# Do we need to exclude hosts that match a pattern?
try:
pattern_exclude = config.get('ec2', 'pattern_exclude');
pattern_exclude = config.get('ec2', 'pattern_exclude')
if pattern_exclude and len(pattern_exclude) > 0:
self.pattern_exclude = re.compile(pattern_exclude)
else:
@ -447,6 +469,12 @@ class Ec2Inventory(object):
except configparser.NoOptionError:
self.pattern_exclude = None
# Do we want to stack multiple filters?
if config.has_option('ec2', 'stack_filters'):
self.stack_filters = config.getboolean('ec2', 'stack_filters')
else:
self.stack_filters = False
# Instance filters (see boto and EC2 API docs). Ignore invalid filters.
self.ec2_instance_filters = defaultdict(list)
if config.has_option('ec2', 'instance_filters'):
@ -534,8 +562,14 @@ class Ec2Inventory(object):
conn = self.connect(region)
reservations = []
if self.ec2_instance_filters:
for filter_key, filter_values in self.ec2_instance_filters.items():
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
if self.stack_filters:
filters_dict = {}
for filter_key, filter_values in self.ec2_instance_filters.items():
filters_dict[filter_key] = filter_values
reservations.extend(conn.get_all_instances(filters = filters_dict))
else:
for filter_key, filter_values in self.ec2_instance_filters.items():
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
else:
reservations = conn.get_all_instances()
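The new `stack_filters` branch above changes filter semantics: when stacking is on, all filters are sent to EC2 in a single query, so an instance must match every filter (logical AND); the original behavior issues one query per filter and unions the results (logical OR). A minimal sketch of the two modes, using in-memory dicts in place of boto calls (instance data and the `matches` helper are illustrative):

```python
# Toy instance records standing in for EC2 API results
instances = [
    {'id': 'i-1', 'tag:env': 'prod', 'instance-type': 't2.micro'},
    {'id': 'i-2', 'tag:env': 'prod', 'instance-type': 'm4.large'},
    {'id': 'i-3', 'tag:env': 'dev',  'instance-type': 't2.micro'},
]
filters = {'tag:env': ['prod'], 'instance-type': ['t2.micro']}

def matches(inst, key, values):
    return inst.get(key) in values

# stack_filters = True: one combined query, every filter must match (AND)
stacked = [i['id'] for i in instances
           if all(matches(i, k, v) for k, v in filters.items())]

# stack_filters = False: one query per filter, results unioned (OR)
unstacked = sorted({i['id'] for k, v in filters.items()
                    for i in instances if matches(i, k, v)})
```

With the data above, `stacked` keeps only the instance matching both filters, while `unstacked` keeps any instance matching either one.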
@ -555,6 +589,9 @@ class Ec2Inventory(object):
for tag in tags:
tags_by_instance_id[tag.res_id][tag.name] = tag.value
if (not self.aws_account_id) and reservations:
self.aws_account_id = reservations[0].owner_id
for reservation in reservations:
for instance in reservation.instances:
instance.tags = tags_by_instance_id[instance.id]
@ -676,7 +713,7 @@ class Ec2Inventory(object):
try:
# Boto also doesn't provide wrapper classes to CacheClusters or
# CacheNodes. Because of that wo can't make use of the get_list
# CacheNodes. Because of that we can't make use of the get_list
# method in the AWSQueryConnection. Let's do the work manually
clusters = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters']
@ -710,7 +747,7 @@ class Ec2Inventory(object):
try:
# Boto also doesn't provide wrapper classes to ReplicationGroups
# Because of that wo can't make use of the get_list method in the
# Because of that we can't make use of the get_list method in the
# AWSQueryConnection. Let's do the work manually
replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups']
@ -786,9 +823,19 @@ class Ec2Inventory(object):
else:
hostname = getattr(instance, self.hostname_variable)
# set the hostname from route53
if self.route53_enabled and self.route53_hostnames:
route53_names = self.get_instance_route53_names(instance)
for name in route53_names:
if name.endswith(self.route53_hostnames):
hostname = name
# If we can't get a nice hostname, use the destination address
if not hostname:
hostname = dest
# to_safe strips hostname characters like dots, so don't strip route53 hostnames
elif self.route53_enabled and self.route53_hostnames and hostname.endswith(self.route53_hostnames):
hostname = hostname.lower()
else:
hostname = self.to_safe(hostname).lower()
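The hostname-selection rules added in this hunk can be condensed into a small sketch: prefer a Route53 name matching the configured suffix, fall back to the destination address when no hostname is found, and skip dot-stripping for Route53 names. The helper name and the `to_safe` stand-in below are illustrative assumptions, not Tower's API:

```python
def pick_hostname(hostname, dest, route53_names, route53_suffix,
                  route53_enabled=True):
    # Prefer a Route53 record ending with the configured suffix
    if route53_enabled and route53_suffix:
        for name in route53_names:
            if name.endswith(route53_suffix):
                hostname = name
    # No nice hostname at all: use the destination address
    if not hostname:
        return dest
    # Route53 names keep their dots; everything else gets sanitized
    if route53_enabled and route53_suffix and hostname.endswith(route53_suffix):
        return hostname.lower()
    return hostname.replace('.', '_').lower()  # rough stand-in for to_safe()
```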
@ -837,6 +884,13 @@ class Ec2Inventory(object):
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by instance state
if self.group_by_instance_state:
state_name = self.to_safe('instance_state_' + instance.state)
self.push(self.inventory, state_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'instance_states', state_name)
# Inventory: Group by key pair
if self.group_by_key_pair and instance.key_name:
key_name = self.to_safe('key_' + instance.key_name)
@ -863,6 +917,12 @@ class Ec2Inventory(object):
self.fail_with_error('\n'.join(['Package boto seems a bit older.',
'Please upgrade boto >= 2.3.0.']))
# Inventory: Group by AWS account ID
if self.group_by_aws_account:
self.push(self.inventory, self.aws_account_id, dest)
if self.nested_groups:
self.push_group(self.inventory, 'accounts', self.aws_account_id)
# Inventory: Group by tag keys
if self.group_by_tag_keys:
for k, v in instance.tags.items():
@ -1194,13 +1254,14 @@ class Ec2Inventory(object):
if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available':
return
# Skip clusters we cannot address (e.g. private VPC subnet or clustered redis)
if replication_group['NodeGroups'][0]['PrimaryEndpoint'] is None or \
replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address'] is None:
return
# Select the best destination address (PrimaryEndpoint)
dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address']
if not dest:
# Skip clusters we cannot address (e.g. private VPC subnet)
return
# Add to index
self.index[dest] = [region, replication_group['ReplicationGroupId']]
@ -1243,7 +1304,10 @@ class Ec2Inventory(object):
''' Get and store the map of resource records to domain names that
point to them. '''
r53_conn = route53.Route53Connection()
if self.boto_profile:
r53_conn = route53.Route53Connection(profile_name=self.boto_profile)
else:
r53_conn = route53.Route53Connection()
all_zones = r53_conn.get_zones()
route53_zones = [ zone for zone in all_zones if zone.name[:-1]
@ -1304,7 +1368,7 @@ class Ec2Inventory(object):
instance_vars[key] = value
elif isinstance(value, six.string_types):
instance_vars[key] = value.strip()
elif type(value) == type(None):
elif value is None:
instance_vars[key] = ''
elif key == 'ec2_region':
instance_vars[key] = value.name
@ -1335,6 +1399,8 @@ class Ec2Inventory(object):
#print type(value)
#print value
instance_vars[self.to_safe('ec2_account_id')] = self.aws_account_id
return instance_vars
def get_host_info_dict_from_describe_dict(self, describe_dict):
@ -1413,7 +1479,7 @@ class Ec2Inventory(object):
# Target: Everything
# Replace None by an empty string
elif type(value) == type(None):
elif value is None:
host_info[key] = ''
else:
@ -1464,26 +1530,22 @@ class Ec2Inventory(object):
''' Reads the inventory from the cache file and returns it as a JSON
object '''
cache = open(self.cache_path_cache, 'r')
json_inventory = cache.read()
return json_inventory
with open(self.cache_path_cache, 'r') as f:
json_inventory = f.read()
return json_inventory
def load_index_from_cache(self):
''' Reads the index from the cache file sets self.index '''
cache = open(self.cache_path_index, 'r')
json_index = cache.read()
self.index = json.loads(json_index)
with open(self.cache_path_index, 'rb') as f:
self.index = json.load(f)
def write_to_cache(self, data, filename):
''' Writes data in JSON format to a file '''
json_data = self.json_format_dict(data, True)
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
with open(filename, 'w') as f:
f.write(json_data)
def uncammelize(self, key):
temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key)
@ -1506,5 +1568,6 @@ class Ec2Inventory(object):
return json.dumps(data)
# Run the script
Ec2Inventory()
if __name__ == '__main__':
# Run the script
Ec2Inventory()


@ -1,3 +1,107 @@
# Foreman inventory (https://github.com/theforeman/foreman_ansible_inventory)
#
# This script can be used as an Ansible dynamic inventory.
# The connection parameters are set up via *foreman.ini*
# This is how the script finds the configuration file, in
# order of discovery.
#
# * `/etc/ansible/foreman.ini`
# * Current directory of your inventory script.
# * `FOREMAN_INI_PATH` environment variable.
#
# ## Variables and Parameters
#
# The data returned from Foreman for each host is stored in a foreman
# hash so they're available as *host_vars* along with the parameters
# of the host and its hostgroups:
#
# "foo.example.com": {
# "foreman": {
# "architecture_id": 1,
# "architecture_name": "x86_64",
# "build": false,
# "build_status": 0,
# "build_status_label": "Installed",
# "capabilities": [
# "build",
# "image"
# ],
# "compute_profile_id": 4,
# "hostgroup_name": "webtier/myapp",
# "id": 70,
# "image_name": "debian8.1",
# ...
# "uuid": "50197c10-5ebb-b5cf-b384-a1e203e19e77"
# },
# "foreman_params": {
# "testparam1": "foobar",
# "testparam2": "small",
# ...
# }
#
# and could therefore be used in Ansible like:
#
# - debug: msg="From Foreman host {{ foreman['uuid'] }}"
#
# Which yields
#
# TASK [test_foreman : debug] ****************************************************
# ok: [foo.example.com] => {
# "msg": "From Foreman host 50190bd1-052a-a34a-3c9c-df37a39550bf"
# }
#
# ## Automatic Ansible groups
#
# The inventory will provide a set of groups, by default prefixed by
# 'foreman_'. If you want to customize this prefix, change the
# group_prefix option in /etc/ansible/foreman.ini. The rest of this
# guide will assume the default prefix of 'foreman'
#
# The hostgroup, location, organization, content view, and lifecycle
# environment of each host are created as Ansible groups with a
# foreman_<grouptype> prefix, all lowercase and problematic parameters
# removed. So e.g. the foreman hostgroup
#
# myapp / webtier / datacenter1
#
# would turn into the Ansible group:
#
# foreman_hostgroup_myapp_webtier_datacenter1
#
# Furthermore Ansible groups can be created on the fly using the
# *group_patterns* variable in *foreman.ini* so that you can build up
# hierarchies using parameters on the hostgroup and host variables.
#
# Let's assume you have a host that is built using this nested hostgroup:
#
# myapp / webtier / datacenter1
#
# and each of the hostgroups defines a parameter, respectively:
#
# myapp: app_param = myapp
# webtier: tier_param = webtier
# datacenter1: dc_param = datacenter1
#
# If the host is also in a subnet called "mysubnet" and provisioned via an
# image, then *group_patterns* like:
#
# [ansible]
# group_patterns = ["{app_param}-{tier_param}-{dc_param}",
# "{app_param}-{tier_param}",
# "{app_param}",
# "{subnet_name}-{provision_method}"]
#
# would put the host into the additional Ansible groups:
#
# - myapp-webtier-datacenter1
# - myapp-webtier
# - myapp
# - mysubnet-image
#
# by recursively resolving the hostgroups, getting the parameter keys
# and values and doing a Python *string.format()* like replacement on
# it.
#
[foreman]
url = http://localhost:3000/
user = foreman
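The *group_patterns* mechanism described in the header comments above boils down to a `str.format` call over the merged parameter dict, skipping any pattern whose keys the host lacks. Note the refactor also parses the option with `json.loads` instead of `eval`, so the ini value must be valid JSON (double-quoted strings). A self-contained sketch reproducing the documented example:

```python
import json

# group_patterns exactly as it could appear in foreman.ini (valid JSON)
raw = '''["{app_param}-{tier_param}-{dc_param}",
          "{app_param}-{tier_param}",
          "{app_param}",
          "{subnet_name}-{provision_method}"]'''
group_patterns = json.loads(raw)

# Parameters resolved from the host and its hostgroup chain (illustrative)
groupby = {'app_param': 'myapp', 'tier_param': 'webtier',
           'dc_param': 'datacenter1', 'subnet_name': 'mysubnet',
           'provision_method': 'image'}

groups = []
for pattern in group_patterns:
    try:
        groups.append(pattern.format(**groupby))
    except KeyError:
        pass  # host is not part of this group: a referenced key is missing
```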


@ -1,7 +1,8 @@
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
#
# Copyright (C) 2016 Guido Günther <agx@sigxcpu.org>
# Copyright (C) 2016 Guido Günther <agx@sigxcpu.org>,
# Daniel Lobato Garcia <dlobatog@redhat.com>
#
# This script is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@ -18,106 +19,62 @@
#
# This is somewhat based on cobbler inventory
# Stdlib imports
# __future__ imports must occur at the beginning of the file
from __future__ import print_function
try:
# Python 2 version
import ConfigParser
except ImportError:
# Python 3 version
import configparser as ConfigParser
import json
import argparse
import copy
import os
import re
import requests
from requests.auth import HTTPBasicAuth
import sys
from time import time
from collections import defaultdict
from distutils.version import LooseVersion, StrictVersion
try:
import ConfigParser
except ImportError:
import configparser as ConfigParser
# 3rd party imports
import requests
if LooseVersion(requests.__version__) < LooseVersion('1.1.0'):
print('This script requires python-requests 1.1 as a minimum version')
sys.exit(1)
from requests.auth import HTTPBasicAuth
try:
import json
except ImportError:
import simplejson as json
def json_format_dict(data, pretty=False):
"""Converts a dict to a JSON object and dumps it as a formatted string"""
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else:
return json.dumps(data)
class ForemanInventory(object):
config_paths = [
"/etc/ansible/foreman.ini",
os.path.dirname(os.path.realpath(__file__)) + '/foreman.ini',
]
def __init__(self):
self.inventory = dict() # A list of groups and the hosts in that group
self.inventory = defaultdict(list) # A list of groups and the hosts in that group
self.cache = dict() # Details about hosts in the inventory
self.params = dict() # Params of each host
self.facts = dict() # Facts of each host
self.hostgroups = dict() # host groups
self.session = None # Requests session
def run(self):
if not self._read_settings():
return False
self._get_inventory()
self._print_data()
return True
def _read_settings(self):
# Read settings and parse CLI arguments
if not self.read_settings():
return False
self.parse_cli_args()
return True
def _get_inventory(self):
if self.args.refresh_cache:
self.update_cache()
elif not self.is_cache_valid():
self.update_cache()
else:
self.load_inventory_from_cache()
self.load_params_from_cache()
self.load_facts_from_cache()
self.load_cache_from_cache()
def _print_data(self):
data_to_print = ""
if self.args.host:
data_to_print += self.get_host_info()
else:
self.inventory['_meta'] = {'hostvars': {}}
for hostname in self.cache:
self.inventory['_meta']['hostvars'][hostname] = {
'foreman': self.cache[hostname],
'foreman_params': self.params[hostname],
}
if self.want_facts:
self.inventory['_meta']['hostvars'][hostname]['foreman_facts'] = self.facts[hostname]
data_to_print += self.json_format_dict(self.inventory, True)
print(data_to_print)
def is_cache_valid(self):
"""Determines if the cache is still valid"""
if os.path.isfile(self.cache_path_cache):
mod_time = os.path.getmtime(self.cache_path_cache)
current_time = time()
if (mod_time + self.cache_max_age) > current_time:
if (os.path.isfile(self.cache_path_inventory) and
os.path.isfile(self.cache_path_params) and
os.path.isfile(self.cache_path_facts)):
return True
return False
self.config_paths = [
"/etc/ansible/foreman.ini",
os.path.dirname(os.path.realpath(__file__)) + '/foreman.ini',
]
env_value = os.environ.get('FOREMAN_INI_PATH')
if env_value is not None:
self.config_paths.append(os.path.expanduser(os.path.expandvars(env_value)))
def read_settings(self):
"""Reads the settings from the foreman.ini file"""
config = ConfigParser.SafeConfigParser()
env_value = os.environ.get('FOREMAN_INI_PATH')
if env_value is not None:
self.config_paths.append(os.path.expanduser(os.path.expandvars(env_value)))
config.read(self.config_paths)
# Foreman API related
@ -136,7 +93,7 @@ class ForemanInventory(object):
except (ConfigParser.NoOptionError, ConfigParser.NoSectionError):
group_patterns = "[]"
self.group_patterns = eval(group_patterns)
self.group_patterns = json.loads(group_patterns)
try:
self.group_prefix = config.get('ansible', 'group_prefix')
@ -212,12 +169,6 @@ class ForemanInventory(object):
def _get_hosts(self):
return self._get_json("%s/api/v2/hosts" % self.foreman_url)
def _get_hostgroup_by_id(self, hid):
if hid not in self.hostgroups:
url = "%s/api/v2/hostgroups/%s" % (self.foreman_url, hid)
self.hostgroups[hid] = self._get_json(url)
return self.hostgroups[hid]
def _get_all_params_by_id(self, hid):
url = "%s/api/v2/hosts/%s" % (self.foreman_url, hid)
ret = self._get_json(url, [404])
@ -225,10 +176,6 @@ class ForemanInventory(object):
ret = {}
return ret.get('all_parameters', {})
def _get_facts_by_id(self, hid):
url = "%s/api/v2/hosts/%s/facts" % (self.foreman_url, hid)
return self._get_json(url)
def _resolve_params(self, host):
"""Fetch host params and convert to dict"""
params = {}
@ -239,6 +186,10 @@ class ForemanInventory(object):
return params
def _get_facts_by_id(self, hid):
url = "%s/api/v2/hosts/%s/facts" % (self.foreman_url, hid)
return self._get_json(url)
def _get_facts(self, host):
"""Fetch all host facts of the host"""
if not self.want_facts:
@ -253,6 +204,29 @@ class ForemanInventory(object):
raise ValueError("More than one set of facts returned for '%s'" % host)
return facts
def write_to_cache(self, data, filename):
"""Write data in JSON format to a file"""
json_data = json_format_dict(data, True)
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
def _write_cache(self):
self.write_to_cache(self.cache, self.cache_path_cache)
self.write_to_cache(self.inventory, self.cache_path_inventory)
self.write_to_cache(self.params, self.cache_path_params)
self.write_to_cache(self.facts, self.cache_path_facts)
def to_safe(self, word):
'''Converts 'bad' characters in a string to underscores
so they can be used as Ansible groups
>>> ForemanInventory.to_safe("foo-bar baz")
'foo_barbaz'
'''
regex = "[^A-Za-z0-9\_]"
return re.sub(regex, "_", word.replace(" ", ""))
def update_cache(self):
"""Make calls to foreman and save the output in a cache"""
@ -267,20 +241,20 @@ class ForemanInventory(object):
val = host.get('%s_title' % group) or host.get('%s_name' % group)
if val:
safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower()))
self.push(self.inventory, safe_key, dns_name)
self.inventory[safe_key].append(dns_name)
# Create ansible groups for environment, location and organization
for group in ['environment', 'location', 'organization']:
val = host.get('%s_name' % group)
if val:
safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower()))
self.push(self.inventory, safe_key, dns_name)
self.inventory[safe_key].append(dns_name)
for group in ['lifecycle_environment', 'content_view']:
val = host.get('content_facet_attributes', {}).get('%s_name' % group)
if val:
safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower()))
self.push(self.inventory, safe_key, dns_name)
self.inventory[safe_key].append(dns_name)
params = self._resolve_params(host)
@ -297,44 +271,27 @@ class ForemanInventory(object):
for pattern in self.group_patterns:
try:
key = pattern.format(**groupby)
self.push(self.inventory, key, dns_name)
self.inventory[key].append(dns_name)
except KeyError:
pass # Host not part of this group
self.cache[dns_name] = host
self.params[dns_name] = params
self.facts[dns_name] = self._get_facts(host)
self.push(self.inventory, 'all', dns_name)
self.inventory['all'].append(dns_name)
self._write_cache()
def _write_cache(self):
self.write_to_cache(self.cache, self.cache_path_cache)
self.write_to_cache(self.inventory, self.cache_path_inventory)
self.write_to_cache(self.params, self.cache_path_params)
self.write_to_cache(self.facts, self.cache_path_facts)
def get_host_info(self):
"""Get variables about a specific host"""
if not self.cache or len(self.cache) == 0:
# Need to load index from cache
self.load_cache_from_cache()
if self.args.host not in self.cache:
# try updating the cache
self.update_cache()
if self.args.host not in self.cache:
# host might not exist anymore
return self.json_format_dict({}, True)
return self.json_format_dict(self.cache[self.args.host], True)
def push(self, d, k, v):
if k in d:
d[k].append(v)
else:
d[k] = [v]
def is_cache_valid(self):
"""Determines if the cache is still valid"""
if os.path.isfile(self.cache_path_cache):
mod_time = os.path.getmtime(self.cache_path_cache)
current_time = time()
if (mod_time + self.cache_max_age) > current_time:
if (os.path.isfile(self.cache_path_inventory) and
os.path.isfile(self.cache_path_params) and
os.path.isfile(self.cache_path_facts)):
return True
return False
def load_inventory_from_cache(self):
"""Read the index from the cache file sets self.index"""
@ -365,33 +322,58 @@ class ForemanInventory(object):
json_cache = cache.read()
self.cache = json.loads(json_cache)
def write_to_cache(self, data, filename):
"""Write data in JSON format to a file"""
json_data = self.json_format_dict(data, True)
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
@staticmethod
def to_safe(word):
'''Converts 'bad' characters in a string to underscores
so they can be used as Ansible groups
>>> ForemanInventory.to_safe("foo-bar baz")
'foo_barbaz'
'''
regex = "[^A-Za-z0-9\_]"
return re.sub(regex, "_", word.replace(" ", ""))
def json_format_dict(self, data, pretty=False):
"""Converts a dict to a JSON object and dumps it as a formatted string"""
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
def get_inventory(self):
if self.args.refresh_cache or not self.is_cache_valid():
self.update_cache()
else:
return json.dumps(data)
self.load_inventory_from_cache()
self.load_params_from_cache()
self.load_facts_from_cache()
self.load_cache_from_cache()
def get_host_info(self):
"""Get variables about a specific host"""
if not self.cache or len(self.cache) == 0:
# Need to load index from cache
self.load_cache_from_cache()
if self.args.host not in self.cache:
# try updating the cache
self.update_cache()
if self.args.host not in self.cache:
# host might not exist anymore
return json_format_dict({}, True)
return json_format_dict(self.cache[self.args.host], True)
def _print_data(self):
data_to_print = ""
if self.args.host:
data_to_print += self.get_host_info()
else:
self.inventory['_meta'] = {'hostvars': {}}
for hostname in self.cache:
self.inventory['_meta']['hostvars'][hostname] = {
'foreman': self.cache[hostname],
'foreman_params': self.params[hostname],
}
if self.want_facts:
self.inventory['_meta']['hostvars'][hostname]['foreman_facts'] = self.facts[hostname]
data_to_print += json_format_dict(self.inventory, True)
print(data_to_print)
def run(self):
# Read settings and parse CLI arguments
if not self.read_settings():
return False
self.parse_cli_args()
self.get_inventory()
self._print_data()
return True
if __name__ == '__main__':
inv = ForemanInventory()
sys.exit(not inv.run())
sys.exit(not ForemanInventory().run())
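The refactored `_print_data` emits the standard Ansible dynamic-inventory JSON for `--list`: group names mapping to host lists, plus a `_meta.hostvars` section so Ansible needs no per-host `--host` calls. A hand-built sketch of that output shape (all values illustrative):

```python
import json
from collections import defaultdict

inventory = defaultdict(list)
inventory['foreman_environment_production'].append('foo.example.com')
inventory['all'].append('foo.example.com')
inventory['_meta'] = {'hostvars': {'foo.example.com': {
    'foreman': {'id': 70, 'hostgroup_name': 'webtier/myapp'},
    'foreman_params': {'testparam1': 'foobar'},
}}}

# What the script would print for --list
data = json.dumps(inventory, sort_keys=True, indent=2)
```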


@ -312,7 +312,7 @@ class GceInventory(object):
return gce
def parse_env_zones(self):
'''returns a list of comma seperated zones parsed from the GCE_ZONE environment variable.
'''returns a list of comma separated zones parsed from the GCE_ZONE environment variable.
If provided, this will be used to filter the results of the grouped_instances call'''
import csv
reader = csv.reader([os.environ.get('GCE_ZONE',"")], skipinitialspace=True)
@ -323,7 +323,7 @@ class GceInventory(object):
''' Command line argument processing '''
parser = argparse.ArgumentParser(
description='Produce an Ansible Inventory file based on GCE')
description='Produce an Ansible Inventory file based on GCE')
parser.add_argument('--list', action='store_true', default=True,
help='List instances (default: True)')
parser.add_argument('--host', action='store',
@ -428,8 +428,10 @@ class GceInventory(object):
if zones and zone not in zones:
continue
if zone in groups: groups[zone].append(name)
else: groups[zone] = [name]
if zone in groups:
groups[zone].append(name)
else:
groups[zone] = [name]
tags = node.extra['tags']
for t in tags:
@ -437,26 +439,36 @@ class GceInventory(object):
tag = t[6:]
else:
tag = 'tag_%s' % t
if tag in groups: groups[tag].append(name)
else: groups[tag] = [name]
if tag in groups:
groups[tag].append(name)
else:
groups[tag] = [name]
net = node.extra['networkInterfaces'][0]['network'].split('/')[-1]
net = 'network_%s' % net
if net in groups: groups[net].append(name)
else: groups[net] = [name]
if net in groups:
groups[net].append(name)
else:
groups[net] = [name]
machine_type = node.size
if machine_type in groups: groups[machine_type].append(name)
else: groups[machine_type] = [name]
if machine_type in groups:
groups[machine_type].append(name)
else:
groups[machine_type] = [name]
image = node.image and node.image or 'persistent_disk'
if image in groups: groups[image].append(name)
else: groups[image] = [name]
if image in groups:
groups[image].append(name)
else:
groups[image] = [name]
status = node.extra['status']
stat = 'status_%s' % status.lower()
if stat in groups: groups[stat].append(name)
else: groups[stat] = [name]
if stat in groups:
groups[stat].append(name)
else:
groups[stat] = [name]
groups["_meta"] = meta


@ -27,6 +27,6 @@ clouds:
password: stack
project_name: stack
ansible:
use_hostnames: False
expand_hostvars: True
use_hostnames: True
expand_hostvars: False
fail_on_errors: True


@ -1,471 +0,0 @@
#!/usr/bin/env python
# (c) 2013, Jesse Keating <jesse.keating@rackspace.com,
# Paul Durivage <paul.durivage@rackspace.com>,
# Matt Martz <matt@sivel.net>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
"""
Rackspace Cloud Inventory
Authors:
Jesse Keating <jesse.keating@rackspace.com,
Paul Durivage <paul.durivage@rackspace.com>,
Matt Martz <matt@sivel.net>
Description:
Generates inventory that Ansible can understand by making API request to
Rackspace Public Cloud API
When run against a specific host, this script returns variables similar to:
rax_os-ext-sts_task_state
rax_addresses
rax_links
rax_image
rax_os-ext-sts_vm_state
rax_flavor
rax_id
rax_rax-bandwidth_bandwidth
rax_user_id
rax_os-dcf_diskconfig
rax_accessipv4
rax_accessipv6
rax_progress
rax_os-ext-sts_power_state
rax_metadata
rax_status
rax_updated
rax_hostid
rax_name
rax_created
rax_tenant_id
rax_loaded
Configuration:
rax.py can be configured using a rax.ini file or via environment
variables. The rax.ini file should live in the same directory along side
this script.
The section header for configuration values related to this
inventory plugin is [rax]
[rax]
creds_file = ~/.rackspace_cloud_credentials
regions = IAD,ORD,DFW
env = prod
meta_prefix = meta
access_network = public
access_ip_version = 4
Each of these configurations also has a corresponding environment variable.
An environment variable will override a configuration file value.
creds_file:
Environment Variable: RAX_CREDS_FILE
An optional configuration that points to a pyrax-compatible credentials
file.
If not supplied, rax.py will look for a credentials file
at ~/.rackspace_cloud_credentials. It uses the Rackspace Python SDK,
and therefore requires a file formatted per the SDK's specifications.
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md
regions:
Environment Variable: RAX_REGION
An optional environment variable to narrow inventory search
scope. If used, needs a value like ORD, DFW, SYD (a Rackspace
datacenter) and optionally accepts a comma-separated list.
environment:
Environment Variable: RAX_ENV
A configuration that will use an environment as configured in
~/.pyrax.cfg, see
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md
meta_prefix:
Environment Variable: RAX_META_PREFIX
Default: meta
A configuration that changes the prefix used for meta key/value groups.
For compatibility with ec2.py set to "tag"
access_network:
Environment Variable: RAX_ACCESS_NETWORK
Default: public
A configuration that will tell the inventory script to use a specific
server network to determine the ansible_ssh_host value. If no address
is found, ansible_ssh_host will not be set. Accepts a comma-separated
list of network names, the first found wins.
access_ip_version:
Environment Variable: RAX_ACCESS_IP_VERSION
Default: 4
A configuration related to "access_network" that will attempt to
determine the ansible_ssh_host value for either IPv4 or IPv6. If no
address is found, ansible_ssh_host will not be set.
Acceptable values are: 4 or 6. Values other than 4 or 6
will be ignored, and 4 will be used. Accepts a comma-separated list,
the first found wins.
Examples:
List server instances
$ RAX_CREDS_FILE=~/.raxpub rax.py --list
List servers in ORD datacenter only
$ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD rax.py --list
List servers in ORD and DFW datacenters
$ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,DFW rax.py --list
Get server details for server named "server.example.com"
$ RAX_CREDS_FILE=~/.raxpub rax.py --host server.example.com
Use the instance private IP to connect (instead of public IP)
$ RAX_CREDS_FILE=~/.raxpub RAX_ACCESS_NETWORK=private rax.py --list
"""
import os
import re
import sys
import argparse
import warnings
import collections
import ConfigParser

from six import iteritems

try:
    import json
except ImportError:
    import simplejson as json

try:
    import pyrax
    from pyrax.utils import slugify
except ImportError:
    sys.exit('pyrax is required for this module')

from time import time

from ansible.constants import get_config, mk_boolean

NON_CALLABLES = (basestring, bool, dict, int, list, type(None))


def load_config_file():
    p = ConfigParser.ConfigParser()
    config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                               'rax.ini')
    try:
        p.read(config_file)
    except ConfigParser.Error:
        return None
    else:
        return p

p = load_config_file()


def rax_slugify(value):
    return 'rax_%s' % (re.sub('[^\w-]', '_', value).lower().lstrip('_'))


def to_dict(obj):
    instance = {}
    for key in dir(obj):
        value = getattr(obj, key)
        if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
            key = rax_slugify(key)
            instance[key] = value
    return instance


def host(regions, hostname):
    hostvars = {}
    for region in regions:
        # Connect to the region
        cs = pyrax.connect_to_cloudservers(region=region)
        for server in cs.servers.list():
            if server.name == hostname:
                for key, value in to_dict(server).items():
                    hostvars[key] = value
                # And finally, add an IP address
                hostvars['ansible_ssh_host'] = server.accessIPv4
    print(json.dumps(hostvars, sort_keys=True, indent=4))


def _list_into_cache(regions):
    groups = collections.defaultdict(list)
    hostvars = collections.defaultdict(dict)
    images = {}
    cbs_attachments = collections.defaultdict(dict)

    prefix = get_config(p, 'rax', 'meta_prefix', 'RAX_META_PREFIX', 'meta')

    try:
        # Ansible 2.3+
        networks = get_config(p, 'rax', 'access_network',
                              'RAX_ACCESS_NETWORK', 'public', value_type='list')
    except TypeError:
        # Ansible 2.2.x and below
        networks = get_config(p, 'rax', 'access_network',
                              'RAX_ACCESS_NETWORK', 'public', islist=True)
    try:
        try:
            ip_versions = map(int, get_config(p, 'rax', 'access_ip_version',
                                              'RAX_ACCESS_IP_VERSION', 4,
                                              value_type='list'))
        except TypeError:
            ip_versions = map(int, get_config(p, 'rax', 'access_ip_version',
                                              'RAX_ACCESS_IP_VERSION', 4,
                                              islist=True))
    except:
        ip_versions = [4]
    else:
        ip_versions = [v for v in ip_versions if v in [4, 6]]
        if not ip_versions:
            ip_versions = [4]

    # Go through all the regions looking for servers
    for region in regions:
        # Connect to the region
        cs = pyrax.connect_to_cloudservers(region=region)
        if cs is None:
            warnings.warn(
                'Connecting to Rackspace region "%s" has caused Pyrax to '
                'return None. Is this a valid region?' % region,
                RuntimeWarning)
            continue
        for server in cs.servers.list():
            # Create a group on region
            groups[region].append(server.name)

            # Check if group metadata key in servers' metadata
            group = server.metadata.get('group')
            if group:
                groups[group].append(server.name)

            for extra_group in server.metadata.get('groups', '').split(','):
                if extra_group:
                    groups[extra_group].append(server.name)

            # Add host metadata
            for key, value in to_dict(server).items():
                hostvars[server.name][key] = value

            hostvars[server.name]['rax_region'] = region

            for key, value in iteritems(server.metadata):
                groups['%s_%s_%s' % (prefix, key, value)].append(server.name)

            groups['instance-%s' % server.id].append(server.name)
            groups['flavor-%s' % server.flavor['id']].append(server.name)

            # Handle boot from volume
            if not server.image:
                if not cbs_attachments[region]:
                    cbs = pyrax.connect_to_cloud_blockstorage(region)
                    for vol in cbs.list():
                        if mk_boolean(vol.bootable):
                            for attachment in vol.attachments:
                                metadata = vol.volume_image_metadata
                                server_id = attachment['server_id']
                                cbs_attachments[region][server_id] = {
                                    'id': metadata['image_id'],
                                    'name': slugify(metadata['image_name'])
                                }
                image = cbs_attachments[region].get(server.id)
                if image:
                    server.image = {'id': image['id']}
                    hostvars[server.name]['rax_image'] = server.image
                    hostvars[server.name]['rax_boot_source'] = 'volume'
                    images[image['id']] = image['name']
            else:
                hostvars[server.name]['rax_boot_source'] = 'local'

            try:
                imagegroup = 'image-%s' % images[server.image['id']]
                groups[imagegroup].append(server.name)
                groups['image-%s' % server.image['id']].append(server.name)
            except KeyError:
                try:
                    image = cs.images.get(server.image['id'])
                except cs.exceptions.NotFound:
                    groups['image-%s' % server.image['id']].append(server.name)
                else:
                    images[image.id] = image.human_id
                    groups['image-%s' % image.human_id].append(server.name)
                    groups['image-%s' % server.image['id']].append(server.name)

            # And finally, add an IP address
            ansible_ssh_host = None
            # use accessIPv[46] instead of looping address for 'public'
            for network_name in networks:
                if ansible_ssh_host:
                    break
                if network_name == 'public':
                    for version_name in ip_versions:
                        if ansible_ssh_host:
                            break
                        if version_name == 6 and server.accessIPv6:
                            ansible_ssh_host = server.accessIPv6
                        elif server.accessIPv4:
                            ansible_ssh_host = server.accessIPv4
                if not ansible_ssh_host:
                    addresses = server.addresses.get(network_name, [])
                    for address in addresses:
                        for version_name in ip_versions:
                            if ansible_ssh_host:
                                break
                            if address.get('version') == version_name:
                                ansible_ssh_host = address.get('addr')
                                break
            if ansible_ssh_host:
                hostvars[server.name]['ansible_ssh_host'] = ansible_ssh_host

    if hostvars:
        groups['_meta'] = {'hostvars': hostvars}

    with open(get_cache_file_path(regions), 'w') as cache_file:
        json.dump(groups, cache_file)


def get_cache_file_path(regions):
    regions_str = '.'.join([reg.strip().lower() for reg in regions])
    ansible_tmp_path = os.path.join(os.path.expanduser("~"), '.ansible', 'tmp')
    if not os.path.exists(ansible_tmp_path):
        os.makedirs(ansible_tmp_path)
    return os.path.join(ansible_tmp_path,
                        'ansible-rax-%s-%s.cache' % (
                            pyrax.identity.username, regions_str))


def _list(regions, refresh_cache=True):
    cache_max_age = int(get_config(p, 'rax', 'cache_max_age',
                                   'RAX_CACHE_MAX_AGE', 600))

    if (not os.path.exists(get_cache_file_path(regions)) or
            refresh_cache or
            (time() - os.stat(get_cache_file_path(regions))[-1]) > cache_max_age):
        # Cache file doesn't exist or older than 10m or refresh cache requested
        _list_into_cache(regions)

    with open(get_cache_file_path(regions), 'r') as cache_file:
        groups = json.load(cache_file)
        print(json.dumps(groups, sort_keys=True, indent=4))


def parse_args():
    parser = argparse.ArgumentParser(description='Ansible Rackspace Cloud '
                                                 'inventory module')
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--list', action='store_true',
                       help='List active servers')
    group.add_argument('--host', help='List details about the specific host')
    parser.add_argument('--refresh-cache', action='store_true', default=False,
                        help=('Force refresh of cache, making API requests to'
                              'RackSpace (default: False - use cache files)'))
    return parser.parse_args()


def setup():
    default_creds_file = os.path.expanduser('~/.rackspace_cloud_credentials')

    # pyrax does not honor the environment variable CLOUD_VERIFY_SSL=False,
    # so let's help pyrax
    if 'CLOUD_VERIFY_SSL' in os.environ:
        pyrax.set_setting('verify_ssl',
                          os.environ['CLOUD_VERIFY_SSL'] in [1, 'true', 'True'])

    env = get_config(p, 'rax', 'environment', 'RAX_ENV', None)
    if env:
        pyrax.set_environment(env)

    keyring_username = pyrax.get_setting('keyring_username')

    # Attempt to grab credentials from environment first
    creds_file = get_config(p, 'rax', 'creds_file',
                            'RAX_CREDS_FILE', None)
    if creds_file is not None:
        creds_file = os.path.expanduser(creds_file)
    else:
        # But if that fails, use the default location of
        # ~/.rackspace_cloud_credentials
        if os.path.isfile(default_creds_file):
            creds_file = default_creds_file
        elif not keyring_username:
            sys.exit('No value in environment variable %s and/or no '
                     'credentials file at %s'
                     % ('RAX_CREDS_FILE', default_creds_file))

    identity_type = pyrax.get_setting('identity_type')
    pyrax.set_setting('identity_type', identity_type or 'rackspace')

    region = pyrax.get_setting('region')

    try:
        if keyring_username:
            pyrax.keyring_auth(keyring_username, region=region)
        else:
            pyrax.set_credential_file(creds_file, region=region)
    except Exception as e:
        sys.exit("%s: %s" % (e, e.message))

    regions = []
    if region:
        regions.append(region)
    else:
        try:
            # Ansible 2.3+
            region_list = get_config(p, 'rax', 'regions', 'RAX_REGION', 'all',
                                     value_type='list')
        except TypeError:
            # Ansible 2.2.x and below
            region_list = get_config(p, 'rax', 'regions', 'RAX_REGION', 'all',
                                     islist=True)

        for region in region_list:
            region = region.strip().upper()
            if region == 'ALL':
                regions = pyrax.regions
                break
            elif region not in pyrax.regions:
                sys.exit('Unsupported region %s' % region)
            elif region not in regions:
                regions.append(region)

    return regions


def main():
    args = parse_args()
    regions = setup()
    if args.list:
        _list(regions, refresh_cache=args.refresh_cache)
    elif args.host:
        host(regions, args.host)
    sys.exit(0)

if __name__ == '__main__':
    main()
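The cached `--list` output the script writes and prints follows Ansible's dynamic-inventory JSON contract: group names mapped to host lists, plus a `_meta.hostvars` map of per-host variables. A minimal illustrative sketch; the host name, instance id, and address below are hypothetical examples, not output from the script:

```python
import json

# Minimal sketch of the JSON shape rax.py's _list() prints: groups map
# to host lists, and `_meta.hostvars` carries per-host variables.
# The host name, instance id, and address are hypothetical.
groups = {
    "ORD": ["web01"],                 # one group per region
    "instance-abc123": ["web01"],     # group-of-one per instance id
    "_meta": {
        "hostvars": {
            "web01": {
                "rax_region": "ORD",
                "ansible_ssh_host": "10.0.0.5",
            }
        }
    },
}

print(json.dumps(groups, sort_keys=True, indent=4))
```

Because `_meta.hostvars` is embedded in the `--list` payload, Ansible does not need to call the script once per host with `--host`, which is why the script caches this single JSON document.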


@@ -374,7 +374,7 @@ class VMWareInventory(object):
        if cfm is not None and cfm.field:
            for f in cfm.field:
                if f.managedObjectType == vim.VirtualMachine:
-                    self.custom_fields[f.key] = f.name;
+                    self.custom_fields[f.key] = f.name
            self.debugl('%d custom fieds collected' % len(self.custom_fields))
        return instance_tuples
@@ -628,7 +628,10 @@ class VMWareInventory(object):
        elif type(vobj) in self.vimTable:
            rdata = {}
            for key in self.vimTable[type(vobj)]:
-                rdata[key] = getattr(vobj, key)
+                try:
+                    rdata[key] = getattr(vobj, key)
+                except Exception as e:
+                    self.debugl(e)
        elif issubclass(type(vobj), str) or isinstance(vobj, str):
            if vobj.isalnum():
@@ -685,12 +688,15 @@ class VMWareInventory(object):
            if self.lowerkeys:
                method = method.lower()
            if level + 1 <= self.maxlevel:
-                rdata[method] = self._process_object_types(
-                    methodToCall,
-                    thisvm=thisvm,
-                    inkey=inkey + '.' + method,
-                    level=(level + 1)
-                )
+                try:
+                    rdata[method] = self._process_object_types(
+                        methodToCall,
+                        thisvm=thisvm,
+                        inkey=inkey + '.' + method,
+                        level=(level + 1)
+                    )
+                except vim.fault.NoPermission:
+                    self.debugl("Skipping method %s (NoPermission)" % method)
        else:
            pass
@@ -719,5 +725,3 @@ class VMWareInventory(object):
if __name__ == "__main__":
    # Run the script
    print(VMWareInventory().show())
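The pattern these VMWare hunks introduce, catching exceptions around each individual attribute read so that one restricted property cannot abort the whole collection, can be sketched generically. The class and keys below are hypothetical stand-ins, not part of the vSphere API:

```python
# Generic sketch of the defensive-getattr pattern from the hunks above:
# collect whatever attributes are readable, and record (the hunk logs
# via self.debugl) the ones that raise. `Restricted` is a hypothetical
# object with one permission-denied property.
class Restricted(object):
    @property
    def name(self):
        return "vm-01"

    @property
    def secret(self):
        raise PermissionError("NoPermission")


def collect(obj, keys):
    rdata, errors = {}, []
    for key in keys:
        try:
            rdata[key] = getattr(obj, key)
        except Exception as e:
            errors.append(str(e))
    return rdata, errors


rdata, errors = collect(Restricted(), ["name", "secret"])
```

Without the per-key `try`/`except`, a single `NoPermission` fault would discard every attribute already gathered for that object, which is exactly the failure mode the upstream refactor avoids.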


@@ -246,7 +246,7 @@ class AzureInventory(object):
    def push(self, my_dict, key, element):
        """Pushed an element onto an array that may not have been defined in the dict."""
        if key in my_dict:
-            my_dict[key].append(element);
+            my_dict[key].append(element)
        else:
            my_dict[key] = [element]


@@ -632,32 +632,6 @@ AD_HOC_COMMANDS = [
    'win_user',
]
-# Not possible to get list of regions without authenticating, so use this list
-# instead (based on docs from:
-# http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Service_Access_Endpoints-d1e517.html)
-RAX_REGION_CHOICES = [
-    ('ORD', _('Chicago')),
-    ('DFW', _('Dallas/Ft. Worth')),
-    ('IAD', _('Northern Virginia')),
-    ('LON', _('London')),
-    ('SYD', _('Sydney')),
-    ('HKG', _('Hong Kong')),
-]
-# Inventory variable name/values for determining if host is active/enabled.
-RAX_ENABLED_VAR = 'rax_status'
-RAX_ENABLED_VALUE = 'ACTIVE'
-# Inventory variable name containing unique instance ID.
-RAX_INSTANCE_ID_VAR = 'rax_id'
-# Filter for allowed group/host names when importing inventory from Rackspace.
-# By default, filter group of one created for each instance and exclude all
-# groups without children, hosts and variables.
-RAX_GROUP_FILTER = r'^(?!instance-.+).+$'
-RAX_HOST_FILTER = r'^.+$'
-RAX_EXCLUDE_EMPTY_GROUPS = True
INV_ENV_VARIABLE_BLACKLIST = ("HOME", "USER", "_", "TERM")
# ----------------


@@ -1,5 +1,7 @@
The requirements.txt and requirements_ansible.txt files are generated from requirements.in and requirements_ansible.in, respectively, using `pip-tools` `pip-compile`. The following commands should do this if ran inside the tower_tools container.
+NOTE: before running `pip-compile`, copy the contents of `requirements/requirements_git.txt` to the top of `requirements/requirements.in` and prepend each copied line with `-e `. Later, after `requirements.txt` is generated, don't forget to remove all `git+https://github.com...`-like lines from both `requirements.txt` and `requirements.in`.
```
virtualenv /buildit
source /buildit/bin/activate
@@ -17,3 +19,5 @@ pip-compile requirements/requirements_ansible.in > requirements/requirements_ans
* As of `pip-tools` `1.8.1` `pip-compile` does not resolve packages specified using a git url. Thus, dependencies for things like `dm.xmlsec.binding` do not get resolved and output to `requirements.txt`. This means that:
* can't use `pip install --no-deps` because other deps WILL be sucked in
* all dependencies are NOT captured in our `.txt` files. This means you can't rely on the `.txt` when gathering licenses.
+* Packages `gevent-websocket` and `twisted` are in `requirements.in` *not* because they are primary dependencies of Tower, but because their versions need to be frozen as dependencies of Django Channels. Please be mindful of this when doing dependency updates.
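The copy-and-prepend step described in the NOTE above can be sketched as follows; the git URL is a placeholder, not an actual Tower dependency:

```python
# Sketch of the NOTE's manual step: each line copied from
# requirements_git.txt is prefixed with '-e ' so pip-compile treats the
# git URL as an editable primary requirement. The URL is a placeholder.
git_lines = ["git+https://github.com/example/project.git#egg=project"]
editable = ["-e " + line for line in git_lines]

print(editable[0])
```

After compiling, the reverse cleanup removes any `git+https://...` lines from the generated `requirements.txt` and from `requirements.in`, since `pip-compile` cannot resolve their transitive dependencies anyway.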


@@ -1,53 +1,55 @@
-apache-libcloud==1.3.0
+apache-libcloud==2.0.0
appdirs==1.4.2
asgi-amqp==0.4.1
asgiref==1.0.1
azure==2.0.0rc6
backports.ssl-match-hostname==3.5.0.1
-boto==2.45.0
+boto==2.46.1
channels==0.17.3
-celery==3.1.17
+celery==3.1.25
daphne>=0.15.0,<1.0.0
Django==1.8.16
django-auth-ldap==1.2.8
django-celery==3.1.17
django-crum==0.7.1
-django-extensions==1.7.4
+django-extensions==1.7.8
django-jsonfield==1.0.1
-django-polymorphic==0.7.2
+django-polymorphic==1.2
django-radius==1.1.0
django-solo==1.1.2
-django-split-settings==0.2.2
-django-taggit==0.21.3
+django-split-settings==0.2.5
+django-taggit==0.22.1
django-transaction-hooks==0.2
djangorestframework==3.3.3
djangorestframework-yaml==1.0.3
gevent-websocket==0.9.5
-irc==15.0.4
+irc==15.1.1
+jsonschema==2.6.0
M2Crypto==0.25.1
Markdown==2.6.7
ordereddict==1.1
-pexpect==3.1
+pexpect==4.2.1
psphere==0.5.2
-psutil==5.0.0
-pygerduty==0.35.1
-pyOpenSSL==16.2.0
+psutil==5.2.2
+pygerduty==0.35.2
+pyOpenSSL==17.0.0
pyparsing==2.2.0
python-logstash==0.4.6
python-memcached==1.58
python-radius==1.0
-python-saml==2.2.0
+python-saml==2.2.1
python-social-auth==0.2.21
pyvmomi==6.5
-redbaron==0.6.2
+redbaron==0.6.3
requests==2.11.1
requests-futures==0.9.7
service-identity==16.0.0
-shade==1.19.0
-slackclient==1.0.2
-tacacs_plus==0.1
-twilio==5.6.0
+shade==1.20.0
+slackclient==1.0.5
+tacacs_plus==0.2
+twilio==6.1.0
twisted==16.6.0
uWSGI==2.0.14
-xmltodict==0.10.2
-pip==8.1.2
-setuptools==23.0.0
+xmltodict==0.11.0
+pip==9.0.1
+setuptools==35.0.2


@@ -7,13 +7,13 @@
adal==0.4.5 # via msrestazure
amqp==1.4.9 # via kombu
anyjson==0.3.3 # via kombu
-apache-libcloud==1.3.0
+apache-libcloud==2.0.0
appdirs==1.4.2
asgi-amqp==0.4.1
asgiref==1.0.1
asn1crypto==0.22.0 # via cryptography
attrs==16.3.0 # via service-identity
-autobahn==0.18.2 # via daphne
+autobahn==17.5.1 # via daphne
azure-batch==1.0.0 # via azure
azure-common[autorest]==1.1.4 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage, azure-servicebus, azure-servicemanagement-legacy, azure-storage
azure-mgmt-batch==1.0.0 # via azure-mgmt
@@ -32,35 +32,35 @@ azure-servicebus==0.20.3 # via azure
azure-servicemanagement-legacy==0.20.4 # via azure
azure-storage==0.33.0 # via azure
azure==2.0.0rc6
-babel==2.4.0 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
+babel==2.3.4 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
backports.functools-lru-cache==1.3 # via jaraco.functools
backports.ssl-match-hostname==3.5.0.1
baron==0.6.5 # via redbaron
billiard==3.3.0.23 # via celery
-boto==2.45.0
-celery==3.1.17
+boto==2.46.1
+celery==3.1.25
#certifi==2017.4.17 # via msrest
cffi==1.10.0 # via cryptography
channels==0.17.3
-cliff==2.5.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
+cliff==2.6.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
cmd2==0.7.0 # via cliff
constantly==15.1.0 # via twisted
-cryptography==1.8.1 # via adal, azure-storage, pyopenssl, secretstorage
+cryptography==1.8.1 # via adal, azure-storage, pyopenssl, secretstorage, twilio
daphne==0.15.0
debtcollector==1.13.0 # via oslo.config, oslo.utils, python-designateclient, python-keystoneclient, python-neutronclient
decorator==4.0.11 # via shade
defusedxml==0.4.1 # via python-saml
-deprecation==1.0 # via openstacksdk
+deprecation==1.0.1 # via openstacksdk
django-auth-ldap==1.2.8
django-celery==3.1.17
django-crum==0.7.1
-django-extensions==1.7.4
+django-extensions==1.7.8
django-jsonfield==1.0.1
-django-polymorphic==0.7.2
+django-polymorphic==1.2
django-radius==1.1.0
django-solo==1.1.2
-django-split-settings==0.2.2
-django-taggit==0.21.3
+django-split-settings==0.2.5
+django-taggit==0.22.1
django-transaction-hooks==0.2
django==1.8.16 # via channels, django-auth-ldap, django-crum, django-split-settings, django-transaction-hooks
djangorestframework-yaml==1.0.3
@@ -73,17 +73,16 @@ futures==3.1.1 # via azure-storage, requests-futures, shade
gevent-websocket==0.9.5
gevent==1.2.1 # via gevent-websocket
greenlet==0.4.12 # via gevent
-httplib2==0.10.3 # via twilio
-idna==2.5 # via cryptography
+idna==2.5 # via cryptography, twilio
incremental==16.10.1 # via twisted
inflect==0.2.5 # via jaraco.itertools
ipaddress==1.0.18 # via cryptography, shade
-irc==15.0.4
+irc==15.1.1
iso8601==0.1.11 # via keystoneauth1, oslo.utils, python-neutronclient, python-novaclient
isodate==0.5.4 # via msrest, python-saml
jaraco.classes==1.4.1 # via jaraco.collections
jaraco.collections==1.5.1 # via irc, jaraco.text
-jaraco.functools==1.15.2 # via irc, jaraco.text
+jaraco.functools==1.16 # via irc, jaraco.text
jaraco.itertools==2.0.1 # via irc
jaraco.logging==1.5 # via irc
jaraco.stream==1.1.2 # via irc
@@ -92,9 +91,9 @@ jmespath==0.9.2 # via shade
jsonpatch==1.15 # via shade, warlock
jsonpickle==0.9.4 # via asgi-amqp
jsonpointer==1.10 # via jsonpatch
-jsonschema==2.6.0 # via python-designateclient, python-ironicclient, warlock
+jsonschema==2.6.0
keyring==10.3.2 # via msrestazure
-keystoneauth1==2.19.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
+keystoneauth1==2.20.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
kombu==3.0.37 # via asgi-amqp, celery
lxml==3.7.3
m2crypto==0.25.1
@@ -108,28 +107,29 @@ munch==2.1.1 # via shade
netaddr==0.7.19 # via oslo.config, oslo.utils, pyrad, python-neutronclient
netifaces==0.10.5 # via oslo.utils, shade
oauthlib==2.0.2 # via python-social-auth, requests-oauthlib
-openstacksdk==0.9.15 # via python-openstackclient
+openstacksdk==0.9.16 # via python-openstackclient
ordereddict==1.1
-os-client-config==1.26.0 # via openstacksdk, osc-lib, python-neutronclient, shade
-osc-lib==1.3.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
-oslo.config==3.24.0 # via python-keystoneclient
+os-client-config==1.27.0 # via openstacksdk, osc-lib, python-neutronclient, shade
+osc-lib==1.6.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
+oslo.config==4.1.0 # via python-keystoneclient
oslo.i18n==3.15.0 # via osc-lib, oslo.config, oslo.utils, python-cinderclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
oslo.serialization==2.18.0 # via python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient
oslo.utils==3.25.0 # via osc-lib, oslo.serialization, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
-packaging==16.8 # via cryptography
-pbr==2.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
-pexpect==3.1
+packaging==16.8 # via cryptography, setuptools
+pbr==3.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
+pexpect==4.2.1
positional==1.1.1 # via keystoneauth1, python-keystoneclient
prettytable==0.7.2 # via cliff, python-cinderclient, python-glanceclient, python-ironicclient, python-novaclient
psphere==0.5.2
-psutil==5.0.0
+psutil==5.2.2
psycopg2==2.7.1
ptyprocess==0.5.1 # via pexpect
pyasn1-modules==0.0.8 # via service-identity
pyasn1==0.2.3 # via pyasn1-modules, service-identity
pycparser==2.17 # via cffi
-pygerduty==0.35.1
-pyjwt==1.5.0 # via adal, python-social-auth
-pyopenssl==16.2.0 # via service-identity
+pygerduty==0.35.2
+pyjwt==1.5.0 # via adal, python-social-auth, twilio
+pyopenssl==17.0.0 # via service-identity, twilio
pyparsing==2.2.0
pyrad==2.1 # via django-radius
python-cinderclient==2.0.1 # via python-openstackclient, shade
@@ -138,48 +138,48 @@ python-designateclient==2.6.0 # via shade
python-glanceclient==2.6.0 # via python-openstackclient
python-ironicclient==1.12.0 # via shade
python-keystoneclient==3.10.0 # via python-neutronclient, python-openstackclient, shade
-python-ldap==2.4.32 # via django-auth-ldap
+python-ldap==2.4.38 # via django-auth-ldap
python-logstash==0.4.6
python-memcached==1.58
python-neutronclient==6.2.0 # via shade
python-novaclient==8.0.0 # via python-openstackclient, shade
python-openid==2.2.5 # via python-social-auth
-python-openstackclient==3.9.0 # via python-ironicclient
+python-openstackclient==3.11.0 # via python-ironicclient
python-radius==1.0
-python-saml==2.2.0
+python-saml==2.2.1
python-social-auth==0.2.21
pytz==2017.2 # via babel, celery, irc, oslo.serialization, oslo.utils, tempora, twilio
pyvmomi==6.5
pyyaml==3.12 # via cliff, djangorestframework-yaml, os-client-config, psphere, python-ironicclient
-redbaron==0.6.2
+redbaron==0.6.3
requests-futures==0.9.7
requests-oauthlib==0.8.0 # via msrest, python-social-auth
-requests==2.12.5 # via adal, azure-servicebus, azure-servicemanagement-legacy, azure-storage, keystoneauth1, msrest, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-social-auth, pyvmomi, requests-futures, requests-oauthlib, slackclient
+requests==2.11.1
requestsexceptions==1.2.0 # via os-client-config, shade
rfc3986==0.4.1 # via oslo.config
rply==0.7.4 # via baron
secretstorage==2.3.1 # via keyring
service-identity==16.0.0
-shade==1.19.0
+shade==1.20.0
simplejson==3.10.0 # via osc-lib, python-cinderclient, python-neutronclient, python-novaclient
-six==1.10.0 # via asgi-amqp, asgiref, autobahn, cliff, cmd2, cryptography, debtcollector, django-extensions, irc, jaraco.classes, jaraco.collections, jaraco.itertools, jaraco.logging, jaraco.stream, keystoneauth1, more-itertools, munch, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, pygerduty, pyopenssl, pyrad, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, python-social-auth, pyvmomi, shade, slackclient, stevedore, tacacs-plus, tempora, twilio, txaio, warlock, websocket-client
-slackclient==1.0.2
+six==1.10.0 # via asgi-amqp, asgiref, autobahn, cliff, cmd2, cryptography, debtcollector, django-extensions, irc, jaraco.classes, jaraco.collections, jaraco.itertools, jaraco.logging, jaraco.stream, keystoneauth1, more-itertools, munch, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, pygerduty, pyopenssl, pyrad, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, python-social-auth, pyvmomi, setuptools, shade, slackclient, stevedore, tacacs-plus, tempora, twilio, txaio, warlock, websocket-client
+slackclient==1.0.5
stevedore==1.21.0 # via cliff, keystoneauth1, openstacksdk, osc-lib, oslo.config, python-designateclient, python-keystoneclient
suds==0.4 # via psphere
-tacacs_plus==0.1
+tacacs_plus==0.2
tempora==1.6.1 # via irc, jaraco.logging
-twilio==5.6.0
+twilio==6.1.0
twisted==16.6.0
-txaio==2.7.0 # via autobahn
+txaio==2.7.1 # via autobahn
typing==3.6.1 # via m2crypto
unicodecsv==0.14.1 # via cliff
uwsgi==2.0.14
warlock==1.2.0 # via python-glanceclient
websocket-client==0.40.0 # via slackclient
wrapt==1.10.10 # via debtcollector, positional, python-glanceclient
-xmltodict==0.10.2
-zope.interface==4.3.3 # via twisted
+xmltodict==0.11.0
+zope.interface==4.4.0 # via twisted
# The following packages are considered to be unsafe in a requirements file:
-pip==8.1.2
-setuptools==23.0.0
+pip==9.0.1
+setuptools==35.0.2


@@ -1,14 +1,15 @@
-apache-libcloud==1.3.0
+apache-libcloud==2.0.0
azure==2.0.0rc6
backports.ssl-match-hostname==3.5.0.1
-kombu==3.0.35
-boto==2.45.0
+kombu==3.0.37
+boto==2.46.1
python-memcached==1.58
psphere==0.5.2
-psutil==5.0.0
+psutil==5.2.2
pyvmomi==6.5
pywinrm[kerberos]==0.2.2
requests==2.11.1
secretstorage==2.3.1
-shade==1.19.0
-setuptools==23.0.0
-pip==8.1.2
+shade==1.20.0
+setuptools==35.0.2
+pip==9.0.1


@@ -7,8 +7,8 @@
adal==0.4.5 # via msrestazure
amqp==1.4.9 # via kombu
anyjson==0.3.3 # via kombu
-apache-libcloud==1.3.0
-appdirs==1.4.3 # via os-client-config, python-ironicclient
+apache-libcloud==2.0.0
+appdirs==1.4.3 # via os-client-config, python-ironicclient, setuptools
asn1crypto==0.22.0 # via cryptography
azure-batch==1.0.0 # via azure
azure-common[autorest]==1.1.4 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage, azure-servicebus, azure-servicemanagement-legacy, azure-storage
@@ -17,33 +17,33 @@ azure-mgmt-compute==0.30.0rc6 # via azure-mgmt
azure-mgmt-keyvault==0.30.0rc6 # via azure-mgmt
azure-mgmt-logic==1.0.0 # via azure-mgmt
azure-mgmt-network==0.30.0rc6 # via azure-mgmt
-azure-mgmt-nspkg==1.0.0 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage
+azure-mgmt-nspkg==2.0.0 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage
azure-mgmt-redis==1.0.0 # via azure-mgmt
azure-mgmt-resource==0.30.0rc6 # via azure-mgmt
azure-mgmt-scheduler==1.0.0 # via azure-mgmt
azure-mgmt-storage==0.30.0rc6 # via azure-mgmt
azure-mgmt==0.30.0rc6 # via azure
-azure-nspkg==1.0.0 # via azure-common, azure-mgmt-nspkg, azure-storage
+azure-nspkg==2.0.0 # via azure-common, azure-mgmt-nspkg, azure-storage
azure-servicebus==0.20.3 # via azure
azure-servicemanagement-legacy==0.20.4 # via azure
azure-storage==0.33.0 # via azure
azure==2.0.0rc6
-babel==2.4.0 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
+babel==2.3.4 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
backports.ssl-match-hostname==3.5.0.1
-boto==2.45.0
-certifi==2017.1.23 # via msrest
+boto==2.46.1
+certifi==2017.4.17 # via msrest
cffi==1.10.0 # via cryptography
-cliff==2.5.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
+cliff==2.6.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
cmd2==0.7.0 # via cliff
cryptography==1.8.1 # via adal, azure-storage, secretstorage
debtcollector==1.13.0 # via oslo.config, oslo.utils, python-designateclient, python-keystoneclient, python-neutronclient
decorator==4.0.11 # via shade
-deprecation==1.0 # via openstacksdk
+deprecation==1.0.1 # via openstacksdk
dogpile.cache==0.6.2 # via python-ironicclient, shade
enum34==1.1.6 # via cryptography, msrest
funcsigs==1.0.2 # via debtcollector, oslo.utils
functools32==3.2.3.post2 # via jsonschema
-futures==3.0.5 # via azure-storage, shade
+futures==3.1.1 # via azure-storage, shade
idna==2.5 # via cryptography
ipaddress==1.0.18 # via cryptography, shade
iso8601==0.1.11 # via keystoneauth1, oslo.utils, python-neutronclient, python-novaclient
@@ -52,34 +52,33 @@ jmespath==0.9.2 # via shade
jsonpatch==1.15 # via shade, warlock
jsonpointer==1.10 # via jsonpatch
jsonschema==2.6.0 # via python-designateclient, python-ironicclient, warlock
-keyring==10.3.1 # via msrestazure
-keystoneauth1==2.19.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
-kombu==3.0.35
+keyring==10.3.2 # via msrestazure
+keystoneauth1==2.20.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
+kombu==3.0.37
monotonic==1.3 # via oslo.utils
msgpack-python==0.4.8 # via oslo.serialization
-msrest==0.4.6 # via azure-common, msrestazure
+msrest==0.4.7 # via azure-common, msrestazure
msrestazure==0.4.7 # via azure-common
munch==2.1.1 # via shade
netaddr==0.7.19 # via oslo.config, oslo.utils, python-neutronclient
netifaces==0.10.5 # via oslo.utils, shade
-ntlm-auth==1.0.2 # via requests-ntlm
+ntlm-auth==1.0.3 # via requests-ntlm
oauthlib==2.0.2 # via requests-oauthlib
-openstacksdk==0.9.14 # via python-openstackclient
+openstacksdk==0.9.16 # via python-openstackclient
ordereddict==1.1 # via ntlm-auth
-os-client-config==1.26.0 # via openstacksdk, osc-lib, python-neutronclient, shade
-osc-lib==1.3.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
-oslo.config==3.24.0 # via python-keystoneclient
+os-client-config==1.27.0 # via openstacksdk, osc-lib, python-neutronclient, shade
+osc-lib==1.6.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
+oslo.config==4.1.0 # via python-keystoneclient
oslo.i18n==3.15.0 # via osc-lib, oslo.config, oslo.utils, python-cinderclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
oslo.serialization==2.18.0 # via python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient
oslo.utils==3.25.0 # via osc-lib, oslo.serialization, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
-packaging==16.8 # via cryptography
-pbr==2.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
+packaging==16.8 # via cryptography, setuptools
+pbr==3.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
positional==1.1.1 # via keystoneauth1, python-keystoneclient
prettytable==0.7.2 # via cliff, python-cinderclient, python-glanceclient, python-ironicclient, python-novaclient
psphere==0.5.2
-psutil==5.0.0
+psutil==5.2.2
pycparser==2.17 # via cffi
-pyjwt==1.4.2 # via adal
+pyjwt==1.5.0 # via adal
pykerberos==1.1.14 # via requests-kerberos
pyparsing==2.2.0 # via cliff, cmd2, oslo.utils, packaging
python-cinderclient==2.0.1 # via python-openstackclient, shade
@@ -89,9 +88,9 @@ python-glanceclient==2.6.0 # via python-openstackclient
python-ironicclient==1.12.0 # via shade
python-keystoneclient==3.10.0 # via python-neutronclient, python-openstackclient, shade
python-memcached==1.58
-python-neutronclient==6.1.0 # via shade
-python-novaclient==7.1.0 # via python-openstackclient, shade
-python-openstackclient==3.9.0 # via python-ironicclient
+python-neutronclient==6.2.0 # via shade
+python-novaclient==8.0.0 # via python-openstackclient, shade
+python-openstackclient==3.11.0 # via python-ironicclient
pytz==2017.2 # via babel, oslo.serialization, oslo.utils
pyvmomi==6.5
pywinrm[kerberos]==0.2.2
@@ -99,20 +98,20 @@ pyyaml==3.12 # via cliff, os-client-config, psphere, python-ironicc
requests-kerberos==0.11.0 # via pywinrm
requests-ntlm==1.0.0 # via pywinrm
requests-oauthlib==0.8.0 # via msrest
-requests==2.12.5 # via adal, azure-servicebus, azure-servicemanagement-legacy, azure-storage, keystoneauth1, msrest, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, pyvmomi, pywinrm, requests-kerberos, requests-ntlm, requests-oauthlib
+requests==2.11.1
requestsexceptions==1.2.0 # via os-client-config, shade
rfc3986==0.4.1 # via oslo.config
secretstorage==2.3.1
-shade==1.19.0
+shade==1.20.0
simplejson==3.10.0 # via osc-lib, python-cinderclient, python-neutronclient, python-novaclient
-six==1.10.0 # via cliff, cmd2, cryptography, debtcollector, keystoneauth1, munch, ntlm-auth, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, pyvmomi, pywinrm, shade, stevedore, warlock
+six==1.10.0 # via cliff, cmd2, cryptography, debtcollector, keystoneauth1, munch, ntlm-auth, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, pyvmomi, pywinrm, setuptools, shade, stevedore, warlock
stevedore==1.21.0 # via cliff, keystoneauth1, openstacksdk, osc-lib, oslo.config, python-designateclient, python-keystoneclient
suds==0.4 # via psphere
unicodecsv==0.14.1 # via cliff
warlock==1.2.0 # via python-glanceclient
wrapt==1.10.10 # via debtcollector, positional, python-glanceclient
-xmltodict==0.10.2 # via pywinrm
+xmltodict==0.11.0 # via pywinrm
# The following packages are considered to be unsafe in a requirements file:
-pip==8.1.2
-setuptools==23.0.0
+pip==9.0.1
+setuptools==35.0.2