Dependency Updates

* Dynamic Inventory Source
Templated against Ansible 2.3 dynamic inventory sources.
The major change is the removal of `rax.py`. Most upstream scripts other than
`foreman.py` have only trivial coding-style changes or minor functional
extensions that do not affect Tower inventory update runs.
`foreman.py`, on the other hand, went through a major refactoring,
but its functionality stays the same.

Major Python dependency updates include apache-libcloud (1.3.0 -->
2.0.0), boto (2.45.0 --> 2.46.1) and shade (1.19.0 --> 1.20.0). Minor
Python dependency updates are indirect updates via `pip-compile`, which
are determined by the base dependencies.

Some minor `tasks.py` extensions:
 - The `.ini` file for ec2 gains one more field, `stack_filters=False`,
   reflecting changes in `ec2.py`.
 - The `.ini` file for cloudforms will pick up these four options from
   the `source_vars_dict` of an inventory update: `'version', 'purge_actions',
   'clean_group_keys', 'nest_tags'`. These four options have always been
   available in `cloudforms.py`, but `cloudforms.ini.example` did not
   mention them until the latest version. For consistency with upstream
   docs, we should make these fields available for Tower users to customize.
 - The YAML file for openstack will pick up the Ansible options `use_hostnames`,
   `expand_hostvars` and `fail_on_errors` from the `source_vars_dict` of an
   inventory update, in response to issue #6075.
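The openstack handling can be sketched as follows. `build_openstack_yaml` is a hypothetical stand-alone helper (the real logic lives inline in `RunInventoryUpdate`); the defaults mirror the upstream `openstack.py` inventory script, and the `ansible` section is only emitted when the user actually overrode something:

```python
import yaml

# Defaults as documented by the upstream openstack.py inventory script.
ANSIBLE_DEFAULTS = {
    'use_hostnames': True,
    'expand_hostvars': False,
    'fail_on_errors': True,
}

def build_openstack_yaml(source_vars):
    """Build the YAML handed to openstack.py; credential stanza elided."""
    openstack_data = {'clouds': {}}
    # Only copy keys the user explicitly provided in source_vars.
    overrides = {k: source_vars[k] for k in ANSIBLE_DEFAULTS if k in source_vars}
    if overrides:
        merged = dict(ANSIBLE_DEFAULTS)
        merged.update(overrides)
        # An 'ansible' section appears only when at least one override exists.
        openstack_data['ansible'] = merged
    return yaml.safe_dump(openstack_data, default_flow_style=False)

print(build_openstack_yaml({'expand_hostvars': True}))
```

With no overrides the emitted YAML stays identical to the pre-change output, which keeps existing inventory updates byte-for-byte stable.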

* Remove Rackspace support
Support for Rackspace as both a dynamic inventory source and a cloud
credential has been fully removed. Data migrations have been added to
support the arbitrary credential types feature and to delete Rackspace
inventory sources.
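The deletion half of that migration boils down to a filter-and-delete on `source='rax'`. A minimal stand-in sketch of the pattern, using plain dicts in place of the Django queryset (names here are illustrative, not the migration's API):

```python
# Stand-in for InventorySource rows; the real migration fetches the model
# via apps.get_model('main', 'InventorySource') and calls .filter().delete().
inventory_sources = [
    {'name': 'aws nodes', 'source': 'ec2'},
    {'name': 'legacy rax', 'source': 'rax'},
    {'name': 'osp cloud', 'source': 'openstack'},
]

def remove_rax_inventory_sources(sources):
    """Drop every Rackspace source, mirroring the migration's filter/delete."""
    return [s for s in sources if s['source'] != 'rax']

remaining = remove_rax_inventory_sources(inventory_sources)
```

Because this runs as a data migration, upgrades to 3.2 clean out Rackspace sources automatically rather than leaving orphaned rows behind.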

Note also that the requirement `jsonschema` has been moved from
`requirements.txt` to `requirements.in` as a primary dependency to
reflect its usage in `/main/fields.py`.
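For context, `jsonschema` is what validates credential-type input definitions in `fields.py`. A minimal sketch of that style of validation; the schema below is illustrative only, not the one shipped in `fields.py`:

```python
import jsonschema

# Hypothetical credential-input schema, shaped like the rax example above.
schema = {
    'type': 'object',
    'properties': {
        'username': {'type': 'string'},
        'password': {'type': 'string'},
    },
    'required': ['username', 'password'],
}

# A complete payload validates silently.
jsonschema.validate({'username': 'bob', 'password': 's3cret'}, schema)

# A payload missing a required field raises ValidationError.
try:
    jsonschema.validate({'username': 'bob'}, schema)
    valid = True
except jsonschema.ValidationError:
    valid = False
```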

Connected issue: #6080.

* `pexpect` major update
`pexpect` sits at the very core of our task system and underwent a
major update from 3.1 to 4.2.1. Although verified during devel, please
still be mindful of any suspicious issues on the celery side even after
this PR is merged.
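The concrete API change: pexpect 4 dropped the separate `spawnu()` helper, and unicode handling moved into `spawn(..., encoding=...)`, which is how the task system now invokes it. A minimal sketch, spawning `/bin/echo` purely for illustration:

```python
import pexpect

# pexpect 4.x: spawn() takes the keyword arguments tasks.py now passes;
# the pexpect 3.x spawnu() helper no longer exists.
child = pexpect.spawn(
    '/bin/echo', ['hello tower'],
    ignore_sighup=True,   # keep running if the controlling terminal goes away
    encoding='utf-8',     # unicode mode, replacing the removed spawnu()
    echo=False,           # do not echo input back into the output stream
)
child.expect(pexpect.EOF)
output = child.before.strip()
child.close()
```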

* Miscellaneous
 - requests is now explicitly declared in `requirements.in` at version 2.11.1
   in response to an upstream issue
 - celery: 3.1.17 -> 3.1.25
 - django-extensions: 1.7.4 -> 1.7.8
 - django-polymorphic: 0.7.2 -> 1.2
 - django-split-settings: 0.2.2 -> 0.2.5
 - django-taggit: 0.21.3 -> 0.22.1
 - irc: 15.0.4 -> 15.1.1
 - pygerduty: 0.35.1 -> 0.35.2
 - pyOpenSSL: 16.2.0 -> 17.0.0
 - python-saml: 2.2.0 -> 2.2.1
 - redbaron: 0.6.2 -> 0.6.3
 - slackclient: 1.0.2 -> 1.0.5
 - tacacs_plus: 0.1 -> 0.2
 - xmltodict: 0.10.2 -> 0.11.0
 - pip: 8.1.2 -> 9.0.1
 - setuptools: 23.0.0 -> 35.0.2
 - (requirements_ansible.in only) kombu: 3.0.35 -> 3.0.37
Commit cfb633e8a6 (parent 3ed9ebed89) by Aaron Tan, 2017-04-20 16:47:53 -04:00
35 changed files with 666 additions and 1022 deletions


@@ -267,8 +267,8 @@ virtualenv_ansible:
 	fi; \
 	if [ ! -d "$(VENV_BASE)/ansible" ]; then \
 		virtualenv --system-site-packages --setuptools $(VENV_BASE)/ansible && \
-		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==23.0.0 && \
-		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==8.1.2; \
+		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
+		$(VENV_BASE)/ansible/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
 	fi; \
 	fi
@@ -279,8 +279,8 @@ virtualenv_tower:
 	fi; \
 	if [ ! -d "$(VENV_BASE)/tower" ]; then \
 		virtualenv --system-site-packages --setuptools $(VENV_BASE)/tower && \
-		$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==23.0.0 && \
-		$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==8.1.2; \
+		$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed setuptools==35.0.2 && \
+		$(VENV_BASE)/tower/bin/pip install $(PIP_OPTIONS) --ignore-installed pip==9.0.1; \
 	fi; \
 	fi


@@ -89,7 +89,7 @@ class Metadata(metadata.SimpleMetadata):
         # Special handling of inventory source_region choices that vary based on
         # selected inventory source.
         if field.field_name == 'source_regions':
-            for cp in ('azure', 'ec2', 'gce', 'rax'):
+            for cp in ('azure', 'ec2', 'gce'):
                 get_regions = getattr(InventorySource, 'get_%s_region_choices' % cp)
                 field_info['%s_region_choices' % cp] = get_regions()


@@ -35,7 +35,7 @@ from rest_framework import validators
 from rest_framework.utils.serializer_helpers import ReturnList
 # Django-Polymorphic
-from polymorphic import PolymorphicModel
+from polymorphic.models import PolymorphicModel
 # AWX
 from awx.main.constants import SCHEDULEABLE_PROVIDERS


@@ -361,16 +361,9 @@ class DashboardView(APIView):
                      'job_failed': inventory_with_failed_hosts.count(),
                      'inventory_failed': failed_inventory}
         user_inventory_sources = get_user_queryset(request.user, InventorySource)
-        rax_inventory_sources = user_inventory_sources.filter(source='rax')
-        rax_inventory_failed = rax_inventory_sources.filter(status='failed')
         ec2_inventory_sources = user_inventory_sources.filter(source='ec2')
         ec2_inventory_failed = ec2_inventory_sources.filter(status='failed')
         data['inventory_sources'] = {}
-        data['inventory_sources']['rax'] = {'url': reverse('api:inventory_source_list', request=request) + "?source=rax",
-                                            'label': 'Rackspace',
-                                            'failures_url': reverse('api:inventory_source_list', request=request) + "?source=rax&status=failed",
-                                            'total': rax_inventory_sources.count(),
-                                            'failed': rax_inventory_failed.count()}
         data['inventory_sources']['ec2'] = {'url': reverse('api:inventory_source_list', request=request) + "?source=ec2",
                                             'failures_url': reverse('api:inventory_source_list', request=request) + "?source=ec2&status=failed",
                                             'label': 'Amazon EC2',


@@ -19,6 +19,7 @@ class Migration(migrations.Migration):
     operations = [
         # Inventory Refresh
         migrations.RunPython(migration_utils.set_current_apps_for_migrations),
+        migrations.RunPython(invsrc.remove_rax_inventory_sources),
         migrations.RunPython(invsrc.remove_inventory_source_with_no_inventory_link),
         migrations.RunPython(invsrc.rename_inventory_sources),
     ]


@@ -3,8 +3,54 @@ from awx.main.models import CredentialType
 from awx.main.utils.common import encrypt_field, decrypt_field
+DEPRECATED_CRED_KIND = {
+    'rax': {
+        'kind': 'cloud',
+        'name': 'Rackspace',
+        'inputs': {
+            'fields': [{
+                'id': 'username',
+                'label': 'Username',
+                'type': 'string'
+            }, {
+                'id': 'password',
+                'label': 'Password',
+                'type': 'string',
+                'secret': True,
+            }],
+            'required': ['username', 'password']
+        },
+        'injectors': {
+            'env': {
+                'RAX_USERNAME': '{{ username }}',
+                'RAX_API_KEY': '{{ password }}',
+                'CLOUD_VERIFY_SSL': 'False',
+            },
+        },
+    },
+}
+
+
+def _generate_deprecated_cred_types():
+    ret = {}
+    for deprecated_kind in DEPRECATED_CRED_KIND:
+        ret[deprecated_kind] = None
+    return ret
+
+
+def _populate_deprecated_cred_types(cred, kind):
+    if kind not in cred:
+        return None
+    if cred[kind] is None:
+        new_obj = CredentialType(**DEPRECATED_CRED_KIND[kind])
+        new_obj.save()
+        cred[kind] = new_obj
+    return cred[kind]
+
+
 def migrate_to_v2_credentials(apps, schema_editor):
     CredentialType.setup_tower_managed_defaults()
+    deprecated_cred = _generate_deprecated_cred_types()
     # this monkey-patch is necessary to make the implicit role generation save
     # signal use the correct Role model (the version active at this point in
@@ -18,7 +64,7 @@ def migrate_to_v2_credentials(apps, schema_editor):
         data = {}
         if getattr(cred, 'vault_password', None):
             data['vault_password'] = cred.vault_password
-        credential_type = CredentialType.from_v1_kind(cred.kind, data)
+        credential_type = _populate_deprecated_cred_types(deprecated_cred, cred.kind) or CredentialType.from_v1_kind(cred.kind, data)
         defined_fields = credential_type.defined_fields
         cred.credential_type = apps.get_model('main', 'CredentialType').objects.get(pk=credential_type.pk)


@@ -17,6 +17,14 @@ def remove_manual_inventory_sources(apps, schema_editor):
     InventorySource.objects.filter(source='').delete()
+
+
+def remove_rax_inventory_sources(apps, schema_editor):
+    '''Rackspace inventory sources are not supported since 3.2, remove them.
+    '''
+    InventorySource = apps.get_model('main', 'InventorySource')
+    logger.debug("Removing all Rackspace InventorySource from database.")
+    InventorySource.objects.filter(source='rax').delete()
+
+
 def rename_inventory_sources(apps, schema_editor):
     '''Rename existing InventorySource entries using the following format.
     {{ inventory_source.name }} - {{ inventory.module }} - {{ number }}


@@ -51,7 +51,6 @@ class V1Credential(object):
         ('net', 'Network'),
         ('scm', 'Source Control'),
         ('aws', 'Amazon Web Services'),
-        ('rax', 'Rackspace'),
         ('vmware', 'VMware vCenter'),
         ('satellite6', 'Red Hat Satellite 6'),
         ('cloudforms', 'Red Hat CloudForms'),
@@ -794,28 +793,6 @@ def openstack(cls):
         )
-    @CredentialType.default
-    def rackspace(cls):
-        return cls(
-            kind='cloud',
-            name='Rackspace',
-            managed_by_tower=True,
-            inputs={
-                'fields': [{
-                    'id': 'username',
-                    'label': 'Username',
-                    'type': 'string'
-                }, {
-                    'id': 'password',
-                    'label': 'Password',
-                    'type': 'string',
-                    'secret': True,
-                }],
-                'required': ['username', 'password']
-            }
-        )
-
     @CredentialType.default
     def vmware(cls):
         return cls(


@@ -745,7 +745,6 @@ class InventorySourceOptions(BaseModel):
         ('', _('Manual')),
         ('file', _('File, Directory or Script')),
         ('scm', _('Sourced from a project in Tower')),
-        ('rax', _('Rackspace Cloud Servers')),
         ('ec2', _('Amazon EC2')),
         ('gce', _('Google Compute Engine')),
         ('azure', _('Microsoft Azure Classic (deprecated)')),
@@ -953,14 +952,6 @@ class InventorySourceOptions(BaseModel):
         ('tag_none', _('Tag None')),
     ]
-    @classmethod
-    def get_rax_region_choices(cls):
-        # Not possible to get rax regions without first authenticating, so use
-        # list from settings.
-        regions = list(getattr(settings, 'RAX_REGION_CHOICES', []))
-        regions.insert(0, ('ALL', 'All'))
-        return regions
-
     @classmethod
     def get_gce_region_choices(self):
         """Return a complete list of regions in GCE, as a list of
@@ -1037,10 +1028,7 @@ class InventorySourceOptions(BaseModel):
         if self.source in CLOUD_PROVIDERS:
             get_regions = getattr(self, 'get_%s_region_choices' % self.source)
             valid_regions = [x[0] for x in get_regions()]
-            if self.source == 'rax':
-                region_transform = lambda x: x.strip().upper()
-            else:
-                region_transform = lambda x: x.strip().lower()
+            region_transform = lambda x: x.strip().lower()
         else:
             return ''
         all_region = region_transform('all')


@@ -23,7 +23,7 @@ from django.apps import apps
 from django.contrib.contenttypes.models import ContentType
 # Django-Polymorphic
-from polymorphic import PolymorphicModel
+from polymorphic.models import PolymorphicModel
 # Django-Celery
 from djcelery.models import TaskMeta


@@ -3,7 +3,7 @@
 import logging
-from twilio.rest import TwilioRestClient
+from twilio.rest import Client
 from django.utils.encoding import smart_text
 from django.utils.translation import ugettext_lazy as _
@@ -29,7 +29,7 @@ class TwilioBackend(TowerBaseEmailBackend):
     def send_messages(self, messages):
         sent_messages = 0
         try:
-            connection = TwilioRestClient(self.account_sid, self.account_token)
+            connection = Client(self.account_sid, self.account_token)
         except Exception as e:
             if not self.fail_silently:
                 raise


@@ -600,7 +600,10 @@ class BaseTask(Task):
             job_timeout = 0 if local_timeout < 0 else job_timeout
         else:
             job_timeout = 0
-        child = pexpect.spawnu(args[0], args[1:], cwd=cwd, env=env)
+        child = pexpect.spawn(
+            args[0], args[1:], cwd=cwd, env=env, ignore_sighup=True,
+            encoding='utf-8', echo=False,
+        )
         child.logfile_read = logfile
         canceled = False
         timed_out = False
@@ -924,10 +927,6 @@ class RunJob(BaseTask):
                 if len(cloud_cred.security_token) > 0:
                     env['AWS_SECURITY_TOKEN'] = decrypt_field(cloud_cred, 'security_token')
                 # FIXME: Add EC2_URL, maybe EC2_REGION!
-            elif cloud_cred and cloud_cred.kind == 'rax':
-                env['RAX_USERNAME'] = cloud_cred.username
-                env['RAX_API_KEY'] = decrypt_field(cloud_cred, 'password')
-                env['CLOUD_VERIFY_SSL'] = str(False)
             elif cloud_cred and cloud_cred.kind == 'gce':
                 env['GCE_EMAIL'] = cloud_cred.username
                 env['GCE_PROJECT'] = cloud_cred.project
@@ -1506,6 +1505,18 @@ class RunInventoryUpdate(BaseTask):
                 },
                 'cache': cache,
             }
+            ansible_variables = {
+                'use_hostnames': True,
+                'expand_hostvars': False,
+                'fail_on_errors': True,
+            }
+            provided_count = 0
+            for var_name in ansible_variables:
+                if var_name in inventory_update.source_vars_dict:
+                    ansible_variables[var_name] = inventory_update.source_vars_dict[var_name]
+                    provided_count += 1
+            if provided_count:
+                openstack_data['ansible'] = ansible_variables
             private_data['credentials'][credential] = yaml.safe_dump(
                 openstack_data, default_flow_style=False, allow_unicode=True
             )
@@ -1527,9 +1538,12 @@ class RunInventoryUpdate(BaseTask):
             ec2_opts.setdefault('route53', 'False')
             ec2_opts.setdefault('all_instances', 'True')
             ec2_opts.setdefault('all_rds_instances', 'False')
+            # TODO: Include this option when boto3 support comes.
+            #ec2_opts.setdefault('include_rds_clusters', 'False')
             ec2_opts.setdefault('rds', 'False')
             ec2_opts.setdefault('nested_groups', 'True')
             ec2_opts.setdefault('elasticache', 'False')
+            ec2_opts.setdefault('stack_filters', 'False')
             if inventory_update.instance_filters:
                 ec2_opts.setdefault('instance_filters', inventory_update.instance_filters)
             group_by = [x.strip().lower() for x in inventory_update.group_by.split(',') if x.strip()]
@@ -1542,15 +1556,6 @@ class RunInventoryUpdate(BaseTask):
             ec2_opts.setdefault('cache_max_age', '300')
             for k,v in ec2_opts.items():
                 cp.set(section, k, unicode(v))
-        # Build pyrax creds INI for rax inventory script.
-        elif inventory_update.source == 'rax':
-            section = 'rackspace_cloud'
-            cp.add_section(section)
-            credential = inventory_update.credential
-            if credential:
-                cp.set(section, 'username', credential.username)
-                cp.set(section, 'api_key', decrypt_field(credential,
-                                                         'password'))
         # Allow custom options to vmware inventory script.
         elif inventory_update.source == 'vmware':
             credential = inventory_update.credential
@@ -1609,6 +1614,11 @@ class RunInventoryUpdate(BaseTask):
             cp.set(section, 'password', decrypt_field(credential, 'password'))
             cp.set(section, 'ssl_verify', "false")
+
+            cloudforms_opts = dict(inventory_update.source_vars_dict.items())
+            for opt in ['version', 'purge_actions', 'clean_group_keys', 'nest_tags']:
+                if opt in cloudforms_opts:
+                    cp.set(section, opt, cloudforms_opts[opt])
+
             section = 'cache'
             cp.add_section(section)
             cp.set(section, 'max_age', "0")
@@ -1681,14 +1691,6 @@ class RunInventoryUpdate(BaseTask):
             if len(passwords['source_security_token']) > 0:
                 env['AWS_SECURITY_TOKEN'] = passwords['source_security_token']
             env['EC2_INI_PATH'] = cloud_credential
-        elif inventory_update.source == 'rax':
-            env['RAX_CREDS_FILE'] = cloud_credential
-            env['RAX_REGION'] = inventory_update.source_regions or 'all'
-            env['RAX_CACHE_MAX_AGE'] = "0"
-            env['CLOUD_VERIFY_SSL'] = str(False)
-            # Set this environment variable so the vendored package won't
-            # complain about not being able to determine its version number.
-            env['PBR_VERSION'] = '0.5.21'
         elif inventory_update.source == 'vmware':
             env['VMWARE_INI_PATH'] = cloud_credential
         elif inventory_update.source == 'azure':


@@ -1036,71 +1036,6 @@ def test_aws_create_fail_required_fields(post, organization, admin, version, params):
     assert 'password' in json.dumps(response.data)
-#
-# Rackspace Credentials
-#
-@pytest.mark.django_db
-@pytest.mark.parametrize('version, params', [
-    ['v1', {
-        'kind': 'rax',
-        'name': 'Best credential ever',
-        'username': 'some_username',
-        'password': 'some_password',
-    }],
-    ['v2', {
-        'credential_type': 1,
-        'name': 'Best credential ever',
-        'inputs': {
-            'username': 'some_username',
-            'password': 'some_password',
-        }
-    }]
-])
-def test_rax_create_ok(post, organization, admin, version, params):
-    rax = CredentialType.defaults['rackspace']()
-    rax.save()
-    params['organization'] = organization.id
-    response = post(
-        reverse('api:credential_list', kwargs={'version': version}),
-        params,
-        admin
-    )
-    assert response.status_code == 201
-    assert Credential.objects.count() == 1
-    cred = Credential.objects.all()[:1].get()
-    assert cred.inputs['username'] == 'some_username'
-    assert decrypt_field(cred, 'password') == 'some_password'
-
-
-@pytest.mark.django_db
-@pytest.mark.parametrize('version, params', [
-    ['v1', {
-        'kind': 'rax',
-        'name': 'Best credential ever'
-    }],
-    ['v2', {
-        'credential_type': 1,
-        'name': 'Best credential ever',
-        'inputs': {}
-    }]
-])
-def test_rax_create_fail_required_field(post, organization, admin, version, params):
-    rax = CredentialType.defaults['rackspace']()
-    rax.save()
-    params['organization'] = organization.id
-    response = post(
-        reverse('api:credential_list', kwargs={'version': version}),
-        params,
-        admin
-    )
-    assert response.status_code == 400
-    assert Credential.objects.count() == 0
-    assert 'username' in json.dumps(response.data)
-    assert 'password' in json.dumps(response.data)
 #
 # VMware vCenter Credentials
 #


@@ -18,7 +18,6 @@ def test_default_cred_types():
         'gce',
         'net',
         'openstack',
-        'rackspace',
         'satellite6',
         'scm',
         'ssh',


@@ -196,11 +196,6 @@ def test_openstack_migration():
     assert Credential.objects.count() == 1
-@pytest.mark.skip(reason="TODO: rackspace should be a custom type (we're removing official support)")
-def test_rackspace():
-    pass
-
-
 @pytest.mark.django_db
 def test_vmware_migration():
     cred = Credential(name='My Credential')


@@ -16,6 +16,16 @@ def test_inv_src_manual_removal(inventory_source):
     assert not InventorySource.objects.filter(pk=inventory_source.pk).exists()
+
+
+@pytest.mark.django_db
+def test_rax_inv_src_removal(inventory_source):
+    inventory_source.source = 'rax'
+    inventory_source.save()
+    assert InventorySource.objects.filter(pk=inventory_source.pk).exists()
+    invsrc.remove_rax_inventory_sources(apps, None)
+    assert not InventorySource.objects.filter(pk=inventory_source.pk).exists()
+
+
 @pytest.mark.django_db
 def test_inv_src_rename(inventory_source_factory):
     inv_src01 = inventory_source_factory('t1')


@@ -381,25 +381,6 @@ class TestJobCredentials(TestJobExecution):
         assert env['AWS_SECRET_KEY'] == 'secret'
         assert env['AWS_SECURITY_TOKEN'] == 'token'
-    def test_rax_credential(self):
-        rax = CredentialType.defaults['rackspace']()
-        credential = Credential(
-            pk=1,
-            credential_type=rax,
-            inputs = {'username': 'bob', 'password': 'secret'}
-        )
-        credential.inputs['password'] = encrypt_field(credential, 'password')
-        self.instance.extra_credentials.add(credential)
-
-        self.task.run(self.pk)
-
-        assert self.task.run_pexpect.call_count == 1
-        call_args, _ = self.task.run_pexpect.call_args_list[0]
-        job, args, cwd, env, passwords, stdout = call_args
-        assert env['RAX_USERNAME'] == 'bob'
-        assert env['RAX_API_KEY'] == 'secret'
-        assert env['CLOUD_VERIFY_SSL'] == 'False'
-
     def test_gce_credentials(self):
         gce = CredentialType.defaults['gce']()
         credential = Credential(


@@ -23,7 +23,7 @@
 Azure External Inventory Script
 ===============================
 Generates dynamic inventory by making API requests to the Azure Resource
-Manager using the AAzure Python SDK. For instruction on installing the
+Manager using the Azure Python SDK. For instruction on installing the
 Azure Python SDK see http://azure-sdk-for-python.readthedocs.org/
 Authentication
@@ -191,7 +191,7 @@ import os
 import re
 import sys
-from distutils.version import LooseVersion
+from packaging.version import Version
 from os.path import expanduser
@@ -309,7 +309,7 @@ class AzureRM(object):
     def _get_env_credentials(self):
         env_credentials = dict()
-        for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.iteritems():
+        for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
             env_credentials[attribute] = os.environ.get(env_variable, None)
         if env_credentials['profile'] is not None:
@@ -328,7 +328,7 @@ class AzureRM(object):
         self.log('Getting credentials')
         arg_credentials = dict()
-        for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.iteritems():
+        for attribute, env_variable in AZURE_CREDENTIAL_ENV_MAPPING.items():
             arg_credentials[attribute] = getattr(params, attribute)
         # try module params
@@ -362,7 +362,11 @@ class AzureRM(object):
             resource_client = self.rm_client
             resource_client.providers.register(key)
         except Exception as exc:
-            self.fail("One-time registration of {0} failed - {1}".format(key, str(exc)))
+            self.log("One-time registration of {0} failed - {1}".format(key, str(exc)))
+            self.log("You might need to register {0} using an admin account".format(key))
+            self.log(("To register a provider using the Python CLI: "
+                      "https://docs.microsoft.com/azure/azure-resource-manager/"
+                      "resource-manager-common-deployment-errors#noregisteredproviderfound"))
     @property
     def network_client(self):
@@ -442,7 +446,7 @@ class AzureInventory(object):
     def _parse_cli_args(self):
         # Parse command line arguments
         parser = argparse.ArgumentParser(
             description='Produce an Ansible Inventory file for an Azure subscription')
         parser.add_argument('--list', action='store_true', default=True,
                             help='List instances (default: True)')
         parser.add_argument('--debug', action='store_true', default=False,
@@ -664,7 +668,7 @@ class AzureInventory(object):
             self._inventory['azure'].append(host_name)
         if self.group_by_tag and vars.get('tags'):
-            for key, value in vars['tags'].iteritems():
+            for key, value in vars['tags'].items():
                 safe_key = self._to_safe(key)
                 safe_value = safe_key + '_' + self._to_safe(value)
                 if not self._inventory.get(safe_key):
@@ -724,7 +728,7 @@ class AzureInventory(object):
     def _get_env_settings(self):
         env_settings = dict()
-        for attribute, env_variable in AZURE_CONFIG_SETTINGS.iteritems():
+        for attribute, env_variable in AZURE_CONFIG_SETTINGS.items():
             env_settings[attribute] = os.environ.get(env_variable, None)
         return env_settings
@@ -786,11 +790,11 @@ class AzureInventory(object):
 def main():
     if not HAS_AZURE:
-        sys.exit("The Azure python sdk is not installed (try 'pip install azure>=2.0.0rc5') - {0}".format(HAS_AZURE_EXC))
+        sys.exit("The Azure python sdk is not installed (try `pip install 'azure>=2.0.0rc5' --upgrade`) - {0}".format(HAS_AZURE_EXC))
-    if LooseVersion(azure_compute_version) < LooseVersion(AZURE_MIN_VERSION):
+    if Version(azure_compute_version) < Version(AZURE_MIN_VERSION):
-        sys.exit("Expecting azure.mgmt.compute.__version__ to be {0}. Found version {1} "
-                 "Do you have Azure >= 2.0.0rc5 installed?".format(AZURE_MIN_VERSION, azure_compute_version))
+        sys.exit("Expecting azure.mgmt.compute.__version__ to be {0}. Found version {1} "
+                 "Do you have Azure >= 2.0.0rc5 installed? (try `pip install 'azure>=2.0.0rc5' --upgrade`)".format(AZURE_MIN_VERSION, azure_compute_version))
     AzureInventory()


@@ -1,16 +1,33 @@
+# Ansible CloudForms external inventory script settings
+#
 [cloudforms]
-# The version of CloudForms (this is not used yet)
-version = 3.1
+# the version of CloudForms ; currently not used, but tested with
+version = 4.1
-# The hostname of the CloudForms server
-hostname = #insert your hostname here
+# This should be the hostname of the CloudForms server
+url = https://cfme.example.com
-# Username for CloudForms
-username = #insert your cloudforms user here
+# This will more than likely need to be a local CloudForms username
+username = <set your username here>
-# Password for CloudForms user
-password = #password
+# The password for said username
+password = <set your password here>
+
+# True = verify SSL certificate / False = trust anything
+ssl_verify = True
+
+# limit the number of vms returned per request
+limit = 100
+
+# purge the CloudForms actions from hosts
+purge_actions = True
+
+# Clean up group names (from tags and other groupings so Ansible doesn't complain)
+clean_group_keys = True
+
+# Explode tags into nested groups / subgroups
+nest_tags = False
+
+[cache]
+# Maximum time to trust the cache in seconds
+max_age = 600


@@ -10,9 +10,10 @@
# AWS regions to make calls to. Set this to 'all' to make request to all regions # AWS regions to make calls to. Set this to 'all' to make request to all regions
# in AWS and merge the results together. Alternatively, set this to a comma # in AWS and merge the results together. Alternatively, set this to a comma
# separated list of regions. E.g. 'us-east-1,us-west-1,us-west-2' # separated list of regions. E.g. 'us-east-1, us-west-1, us-west-2'
# 'auto' is AWS_REGION or AWS_DEFAULT_REGION environment variable.
regions = all regions = all
regions_exclude = us-gov-west-1,cn-north-1 regions_exclude = us-gov-west-1, cn-north-1
# When generating inventory, Ansible needs to know how to address a server. # When generating inventory, Ansible needs to know how to address a server.
# Each EC2 instance has a lot of variables associated with it. Here is the list: # Each EC2 instance has a lot of variables associated with it. Here is the list:
@@ -56,14 +57,19 @@ vpc_destination_variable = ip_address
#destination_format_tags = Name,environment
# To tag instances on EC2 with the resource records that point to them from
# Route53, set 'route53' to True.
route53 = False
# To use Route53 records as the inventory hostnames, uncomment and set
# to equal the domain name you wish to use. You must also have 'route53' (above)
# set to True.
# route53_hostnames = .example.com
# To exclude RDS instances from the inventory, uncomment and set to False.
#rds = False
# To exclude ElastiCache instances from the inventory, uncomment and set to False.
#elasticache = False
# Additionally, you can specify the list of zones to exclude looking up in
# 'route53_excluded_zones' as a comma-separated list.
@@ -75,7 +81,7 @@ all_instances = False
# By default, only EC2 instances in the 'running' state are returned. Specify
# EC2 instance states to return as a comma-separated list. This
# option is overridden when 'all_instances' is True.
# instance_states = pending, running, shutting-down, terminated, stopping, stopped
# By default, only RDS instances in the 'available' state are returned. Set
@@ -107,7 +113,7 @@ cache_path = ~/.ansible/tmp
# The number of seconds a cache file is considered valid. After this many
# seconds, a new API call will be made, and the cache file will be updated.
# To disable the cache, set this value to 0
cache_max_age = 300
# Organize groups into a nested/hierarchy instead of a flat namespace.
nested_groups = False
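The cache policy described above (a cache file is trusted for `cache_max_age` seconds; `0` disables the cache) amounts to an mtime comparison; a minimal sketch, with the helper name invented for illustration:

```python
import os
import time

def cache_is_valid(path, cache_max_age):
    """Return True when `path` exists and was written less than
    cache_max_age seconds ago; cache_max_age = 0 disables the cache,
    forcing a fresh API call every run."""
    if cache_max_age <= 0 or not os.path.isfile(path):
        return False
    return (os.path.getmtime(path) + cache_max_age) > time.time()
```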
@@ -117,15 +123,17 @@ replace_dash_in_groups = True
# If set to true, any tag of the form "a,b,c" is expanded into a list
# and the results are used to create additional tag_* inventory groups.
expand_csv_tags = False
# The EC2 inventory output can become very large. To manage its size,
# configure which groups should be created.
group_by_instance_id = True
group_by_region = True
group_by_availability_zone = True
group_by_aws_account = False
group_by_ami_id = True
group_by_instance_type = True
group_by_instance_state = False
group_by_key_pair = True
group_by_vpc_id = True
group_by_security_group = True
@@ -151,6 +159,12 @@ group_by_elasticache_replication_group = True
# Filters are key/value pairs separated by '=', to list multiple filters use
# a list separated by commas. See examples below.
# If you want to apply multiple filters simultaneously, set stack_filters to
# True. Default behaviour is to combine the results of all filters. Stacking
# allows the use of multiple conditions to filter down, for example by
# environment and type of host.
stack_filters = False
# Retrieve only instances with (key=value) env=staging tag
# instance_filters = tag:env=staging
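The comment above describes the two filtering modes. A minimal sketch of the difference, using an in-memory stand-in for boto's `conn.get_all_instances(filters=...)` (the instance data and helper are invented for illustration, and single values replace the real script's value lists):

```python
# Hypothetical instance records, keyed the way EC2 tag filters are spelled
INSTANCES = [
    {"id": "i-1", "tag:env": "staging", "tag:type": "web"},
    {"id": "i-2", "tag:env": "staging", "tag:type": "db"},
    {"id": "i-3", "tag:env": "prod", "tag:type": "web"},
]

def get_all_instances(filters):
    """Stand-in for the boto call: match ALL key/value pairs in `filters`."""
    return [i for i in INSTANCES
            if all(i.get(k) == v for k, v in filters.items())]

filters = {"tag:env": "staging", "tag:type": "web"}

# stack_filters = False: run each filter separately and combine the results
union = []
for key, value in filters.items():
    union.extend(get_all_instances({key: value}))

# stack_filters = True: apply every filter in one request (intersection)
stacked = get_all_instances(filters)

print(sorted({i["id"] for i in union}))    # all hosts matching ANY filter
print(sorted({i["id"] for i in stacked}))  # only hosts matching EVERY filter
```

With stacking enabled only `i-1` survives (staging AND web); without it, the union also pulls in the staging db host and the prod web host.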


@@ -12,6 +12,8 @@ variables needed for Boto have already been set:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
optional region environment variable if region is 'auto'
This script also assumes there is an ec2.ini file alongside it. To specify a
different path to ec2.ini, define the EC2_INI_PATH environment variable:
@@ -162,6 +164,8 @@ class Ec2Inventory(object):
# and availability zones
self.inventory = self._empty_inventory()
self.aws_account_id = None
# Index of hostname (address) to instance ID
self.index = {}
@@ -216,12 +220,22 @@ class Ec2Inventory(object):
def read_settings(self):
''' Reads the settings from the ec2.ini file '''
scriptbasename = __file__
scriptbasename = os.path.basename(scriptbasename)
scriptbasename = scriptbasename.replace('.py', '')
defaults = {'ec2': {
'ini_path': os.path.join(os.path.dirname(__file__), '%s.ini' % scriptbasename)
}
}
if six.PY3:
config = configparser.ConfigParser()
else:
config = configparser.SafeConfigParser()
ec2_ini_path = os.environ.get('EC2_INI_PATH', defaults['ec2']['ini_path'])
ec2_ini_path = os.path.expanduser(os.path.expandvars(ec2_ini_path))
config.read(ec2_ini_path)
# is eucalyptus?
@@ -245,6 +259,11 @@ class Ec2Inventory(object):
self.regions.append(regionInfo.name)
else:
self.regions = configRegions.split(",")
if 'auto' in self.regions:
env_region = os.environ.get('AWS_REGION')
if env_region is None:
env_region = os.environ.get('AWS_DEFAULT_REGION')
self.regions = [ env_region ]
# Destination addresses
self.destination_variable = config.get('ec2', 'destination_variable')
@@ -265,6 +284,10 @@ class Ec2Inventory(object):
# Route53
self.route53_enabled = config.getboolean('ec2', 'route53')
if config.has_option('ec2', 'route53_hostnames'):
self.route53_hostnames = config.get('ec2', 'route53_hostnames')
else:
self.route53_hostnames = None
self.route53_excluded_zones = []
if config.has_option('ec2', 'route53_excluded_zones'):
self.route53_excluded_zones.extend(
@@ -306,13 +329,13 @@ class Ec2Inventory(object):
if self.all_instances:
self.ec2_instance_states = ec2_valid_instance_states
elif config.has_option('ec2', 'instance_states'):
for instance_state in config.get('ec2', 'instance_states').split(','):
instance_state = instance_state.strip()
if instance_state not in ec2_valid_instance_states:
continue
self.ec2_instance_states.append(instance_state)
else:
self.ec2_instance_states = ['running']
# Return all RDS instances? (if RDS is enabled)
if config.has_option('ec2', 'all_rds_instances') and self.rds_enabled:
@@ -338,8 +361,8 @@ class Ec2Inventory(object):
else:
self.all_elasticache_nodes = False
# boto configuration profile (prefer CLI argument then environment variables then config file)
self.boto_profile = self.args.boto_profile or os.environ.get('AWS_PROFILE')
if config.has_option('ec2', 'boto_profile') and not self.boto_profile:
self.boto_profile = config.get('ec2', 'boto_profile')
@@ -374,14 +397,11 @@ class Ec2Inventory(object):
os.makedirs(cache_dir)
cache_name = 'ansible-ec2'
cache_id = self.boto_profile or os.environ.get('AWS_ACCESS_KEY_ID', self.credentials.get('aws_access_key_id'))
if cache_id:
cache_name = '%s-%s' % (cache_name, cache_id)
self.cache_path_cache = os.path.join(cache_dir, "%s.cache" % cache_name)
self.cache_path_index = os.path.join(cache_dir, "%s.index" % cache_name)
self.cache_max_age = config.getint('ec2', 'cache_max_age')
if config.has_option('ec2', 'expand_csv_tags'):
@@ -408,6 +428,7 @@ class Ec2Inventory(object):
'group_by_availability_zone',
'group_by_ami_id',
'group_by_instance_type',
'group_by_instance_state',
'group_by_key_pair',
'group_by_vpc_id',
'group_by_security_group',
@@ -420,6 +441,7 @@ class Ec2Inventory(object):
'group_by_elasticache_cluster',
'group_by_elasticache_parameter_group',
'group_by_elasticache_replication_group',
'group_by_aws_account',
]
for option in group_by_options:
if config.has_option('ec2', option):
@@ -439,7 +461,7 @@ class Ec2Inventory(object):
# Do we need to exclude hosts that match a pattern?
try:
pattern_exclude = config.get('ec2', 'pattern_exclude')
if pattern_exclude and len(pattern_exclude) > 0:
self.pattern_exclude = re.compile(pattern_exclude)
else:
@@ -447,6 +469,12 @@ class Ec2Inventory(object):
except configparser.NoOptionError:
self.pattern_exclude = None
# Do we want to stack multiple filters?
if config.has_option('ec2', 'stack_filters'):
self.stack_filters = config.getboolean('ec2', 'stack_filters')
else:
self.stack_filters = False
# Instance filters (see boto and EC2 API docs). Ignore invalid filters.
self.ec2_instance_filters = defaultdict(list)
if config.has_option('ec2', 'instance_filters'):
@@ -534,8 +562,14 @@ class Ec2Inventory(object):
conn = self.connect(region)
reservations = []
if self.ec2_instance_filters:
if self.stack_filters:
filters_dict = {}
for filter_key, filter_values in self.ec2_instance_filters.items():
filters_dict[filter_key] = filter_values
reservations.extend(conn.get_all_instances(filters = filters_dict))
else:
for filter_key, filter_values in self.ec2_instance_filters.items():
reservations.extend(conn.get_all_instances(filters = { filter_key : filter_values }))
else:
reservations = conn.get_all_instances()
@@ -555,6 +589,9 @@ class Ec2Inventory(object):
for tag in tags:
tags_by_instance_id[tag.res_id][tag.name] = tag.value
if (not self.aws_account_id) and reservations:
self.aws_account_id = reservations[0].owner_id
for reservation in reservations:
for instance in reservation.instances:
instance.tags = tags_by_instance_id[instance.id]
@@ -676,7 +713,7 @@ class Ec2Inventory(object):
try:
# Boto also doesn't provide wrapper classes to CacheClusters or
# CacheNodes. Because of that we can't make use of the get_list
# method in the AWSQueryConnection. Let's do the work manually
clusters = response['DescribeCacheClustersResponse']['DescribeCacheClustersResult']['CacheClusters']
@@ -710,7 +747,7 @@ class Ec2Inventory(object):
try:
# Boto also doesn't provide wrapper classes to ReplicationGroups
# Because of that we can't make use of the get_list method in the
# AWSQueryConnection. Let's do the work manually
replication_groups = response['DescribeReplicationGroupsResponse']['DescribeReplicationGroupsResult']['ReplicationGroups']
@@ -786,9 +823,19 @@ class Ec2Inventory(object):
else:
hostname = getattr(instance, self.hostname_variable)
# set the hostname from route53
if self.route53_enabled and self.route53_hostnames:
route53_names = self.get_instance_route53_names(instance)
for name in route53_names:
if name.endswith(self.route53_hostnames):
hostname = name
# If we can't get a nice hostname, use the destination address
if not hostname:
hostname = dest
# to_safe strips hostname characters like dots, so don't strip route53 hostnames
elif self.route53_enabled and self.route53_hostnames and hostname.endswith(self.route53_hostnames):
hostname = hostname.lower()
else:
hostname = self.to_safe(hostname).lower()
@@ -837,6 +884,13 @@ class Ec2Inventory(object):
if self.nested_groups:
self.push_group(self.inventory, 'types', type_name)
# Inventory: Group by instance state
if self.group_by_instance_state:
state_name = self.to_safe('instance_state_' + instance.state)
self.push(self.inventory, state_name, hostname)
if self.nested_groups:
self.push_group(self.inventory, 'instance_states', state_name)
# Inventory: Group by key pair
if self.group_by_key_pair and instance.key_name:
key_name = self.to_safe('key_' + instance.key_name)
@@ -863,6 +917,12 @@ class Ec2Inventory(object):
self.fail_with_error('\n'.join(['Package boto seems a bit older.',
'Please upgrade boto >= 2.3.0.']))
# Inventory: Group by AWS account ID
if self.group_by_aws_account:
self.push(self.inventory, self.aws_account_id, dest)
if self.nested_groups:
self.push_group(self.inventory, 'accounts', self.aws_account_id)
# Inventory: Group by tag keys
if self.group_by_tag_keys:
for k, v in instance.tags.items():
@@ -1194,13 +1254,14 @@ class Ec2Inventory(object):
if not self.all_elasticache_replication_groups and replication_group['Status'] != 'available':
return
# Skip clusters we cannot address (e.g. private VPC subnet or clustered redis)
if replication_group['NodeGroups'][0]['PrimaryEndpoint'] is None or \
replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address'] is None:
return
# Select the best destination address (PrimaryEndpoint)
dest = replication_group['NodeGroups'][0]['PrimaryEndpoint']['Address']
# Add to index
self.index[dest] = [region, replication_group['ReplicationGroupId']]
@@ -1243,7 +1304,10 @@ class Ec2Inventory(object):
''' Get and store the map of resource records to domain names that
point to them. '''
if self.boto_profile:
r53_conn = route53.Route53Connection(profile_name=self.boto_profile)
else:
r53_conn = route53.Route53Connection()
all_zones = r53_conn.get_zones()
route53_zones = [ zone for zone in all_zones if zone.name[:-1]
@@ -1304,7 +1368,7 @@ class Ec2Inventory(object):
instance_vars[key] = value
elif isinstance(value, six.string_types):
instance_vars[key] = value.strip()
elif value is None:
instance_vars[key] = ''
elif key == 'ec2_region':
instance_vars[key] = value.name
@@ -1335,6 +1399,8 @@ class Ec2Inventory(object):
#print type(value)
#print value
instance_vars[self.to_safe('ec2_account_id')] = self.aws_account_id
return instance_vars
def get_host_info_dict_from_describe_dict(self, describe_dict):
@@ -1413,7 +1479,7 @@ class Ec2Inventory(object):
# Target: Everything
# Replace None by an empty string
elif value is None:
host_info[key] = ''
else:
@@ -1464,26 +1530,22 @@ class Ec2Inventory(object):
''' Reads the inventory from the cache file and returns it as a JSON
object '''
with open(self.cache_path_cache, 'r') as f:
json_inventory = f.read()
return json_inventory
def load_index_from_cache(self):
''' Reads the index from the cache file sets self.index '''
with open(self.cache_path_index, 'rb') as f:
self.index = json.load(f)
def write_to_cache(self, data, filename):
''' Writes data in JSON format to a file '''
json_data = self.json_format_dict(data, True)
with open(filename, 'w') as f:
f.write(json_data)
def uncammelize(self, key):
temp = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', key)
@@ -1506,5 +1568,6 @@ class Ec2Inventory(object):
return json.dumps(data)
if __name__ == '__main__':
# Run the script
Ec2Inventory()


@@ -1,3 +1,107 @@
# Foreman inventory (https://github.com/theforeman/foreman_ansible_inventory)
#
# This script can be used as an Ansible dynamic inventory.
# The connection parameters are set up via *foreman.ini*
# This is how the script finds the configuration file, in
# order of discovery:
#
# * `/etc/ansible/foreman.ini`
# * Current directory of your inventory script.
# * `FOREMAN_INI_PATH` environment variable.
#
# ## Variables and Parameters
#
# The data returned from Foreman for each host is stored in a foreman
# hash so they're available as *host_vars* along with the parameters
# of the host and its hostgroups:
#
# "foo.example.com": {
# "foreman": {
# "architecture_id": 1,
# "architecture_name": "x86_64",
# "build": false,
# "build_status": 0,
# "build_status_label": "Installed",
# "capabilities": [
# "build",
# "image"
# ],
# "compute_profile_id": 4,
# "hostgroup_name": "webtier/myapp",
# "id": 70,
# "image_name": "debian8.1",
# ...
# "uuid": "50197c10-5ebb-b5cf-b384-a1e203e19e77"
# },
# "foreman_params": {
# "testparam1": "foobar",
# "testparam2": "small",
# ...
# }
#
# and could therefore be used in Ansible like:
#
# - debug: msg="From Foreman host {{ foreman['uuid'] }}"
#
# Which yields
#
# TASK [test_foreman : debug] ****************************************************
# ok: [foo.example.com] => {
# "msg": "From Foreman host 50190bd1-052a-a34a-3c9c-df37a39550bf"
# }
#
# ## Automatic Ansible groups
#
# The inventory will provide a set of groups, by default prefixed by
# 'foreman_'. If you want to customize this prefix, change the
# group_prefix option in /etc/ansible/foreman.ini. The rest of this
# guide will assume the default prefix of 'foreman'
#
# The hostgroup, location, organization, content view, and lifecycle
# environment of each host are created as Ansible groups with a
# foreman_<grouptype> prefix, all lowercase and problematic parameters
# removed. So e.g. the foreman hostgroup
#
# myapp / webtier / datacenter1
#
# would turn into the Ansible group:
#
# foreman_hostgroup_myapp_webtier_datacenter1
#
# Furthermore Ansible groups can be created on the fly using the
# *group_patterns* variable in *foreman.ini* so that you can build up
# hierarchies using parameters on the hostgroup and host variables.
#
# Let's assume you have a host that is built using this nested hostgroup:
#
# myapp / webtier / datacenter1
#
# and each of the hostgroups defines a parameter, respectively:
#
# myapp: app_param = myapp
# webtier: tier_param = webtier
# datacenter1: dc_param = datacenter1
#
# If the host is also in a subnet called "mysubnet" and provisioned via an
# image, then *group_patterns* like:
#
# [ansible]
# group_patterns = ["{app_param}-{tier_param}-{dc_param}",
# "{app_param}-{tier_param}",
# "{app_param}",
# "{subnet_name}-{provision_method}"]
#
# would put the host into the additional Ansible groups:
#
# - myapp-webtier-datacenter1
# - myapp-webtier
# - myapp
# - mysubnet-image
#
# by recursively resolving the hostgroups, getting the parameter keys
# and values and doing a Python *string.format()* like replacement on
# it.
#
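The pattern resolution described above is essentially a Python `str.format` over the dict of collected parameters, with unresolvable patterns skipped; a sketch using the hypothetical values from the example:

```python
# Patterns as they would appear under [ansible] group_patterns
group_patterns = [
    "{app_param}-{tier_param}-{dc_param}",
    "{app_param}-{tier_param}",
    "{app_param}",
    "{subnet_name}-{provision_method}",
]

# Parameters gathered from the host and its hostgroup hierarchy
groupby = {
    "app_param": "myapp",
    "tier_param": "webtier",
    "dc_param": "datacenter1",
    "subnet_name": "mysubnet",
    "provision_method": "image",
}

groups = []
for pattern in group_patterns:
    try:
        groups.append(pattern.format(**groupby))
    except KeyError:
        # A pattern referencing a parameter this host lacks is skipped
        pass

print(groups)
# ['myapp-webtier-datacenter1', 'myapp-webtier', 'myapp', 'mysubnet-image']
```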
[foreman]
url = http://localhost:3000/
user = foreman


@@ -1,7 +1,8 @@
#!/usr/bin/env python
# vim: set fileencoding=utf-8 :
#
# Copyright (C) 2016 Guido Günther <agx@sigxcpu.org>,
# Daniel Lobato Garcia <dlobatog@redhat.com>
#
# This script is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -18,106 +19,62 @@
#
# This is somewhat based on cobbler inventory
# Stdlib imports
# __future__ imports must occur at the beginning of file
from __future__ import print_function
try:
# Python 2 version
import ConfigParser
except ImportError:
# Python 3 version
import configparser as ConfigParser
import json
import argparse
import copy
import os
import re
import sys
from time import time
from collections import defaultdict
from distutils.version import LooseVersion, StrictVersion
# 3rd party imports
import requests
if LooseVersion(requests.__version__) < LooseVersion('1.1.0'):
print('This script requires python-requests 1.1 as a minimum version')
sys.exit(1)
from requests.auth import HTTPBasicAuth
def json_format_dict(data, pretty=False):
"""Converts a dict to a JSON object and dumps it as a formatted string"""
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else:
return json.dumps(data)
class ForemanInventory(object):
def __init__(self):
self.inventory = defaultdict(list)  # A list of groups and the hosts in that group
self.cache = dict()  # Details about hosts in the inventory
self.params = dict()  # Params of each host
self.facts = dict()  # Facts of each host
self.hostgroups = dict()  # host groups
self.session = None  # Requests session
self.config_paths = [
"/etc/ansible/foreman.ini",
os.path.dirname(os.path.realpath(__file__)) + '/foreman.ini',
]
env_value = os.environ.get('FOREMAN_INI_PATH')
if env_value is not None:
self.config_paths.append(os.path.expanduser(os.path.expandvars(env_value)))
def read_settings(self):
"""Reads the settings from the foreman.ini file"""
config = ConfigParser.SafeConfigParser()
config.read(self.config_paths)
# Foreman API related
@@ -136,7 +93,7 @@ class ForemanInventory(object):
except (ConfigParser.NoOptionError, ConfigParser.NoSectionError):
group_patterns = "[]"
self.group_patterns = json.loads(group_patterns)
try:
self.group_prefix = config.get('ansible', 'group_prefix')
@@ -212,12 +169,6 @@ class ForemanInventory(object):
def _get_hosts(self):
return self._get_json("%s/api/v2/hosts" % self.foreman_url)
def _get_all_params_by_id(self, hid):
url = "%s/api/v2/hosts/%s" % (self.foreman_url, hid)
ret = self._get_json(url, [404])
@@ -225,10 +176,6 @@ class ForemanInventory(object):
ret = {}
return ret.get('all_parameters', {})
def _resolve_params(self, host):
"""Fetch host params and convert to dict"""
params = {}
@@ -239,6 +186,10 @@ class ForemanInventory(object):
return params
def _get_facts_by_id(self, hid):
url = "%s/api/v2/hosts/%s/facts" % (self.foreman_url, hid)
return self._get_json(url)
def _get_facts(self, host):
"""Fetch all host facts of the host"""
if not self.want_facts:
@@ -253,6 +204,29 @@ class ForemanInventory(object):
raise ValueError("More than one set of facts returned for '%s'" % host)
return facts
def write_to_cache(self, data, filename):
"""Write data in JSON format to a file"""
json_data = json_format_dict(data, True)
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
def _write_cache(self):
self.write_to_cache(self.cache, self.cache_path_cache)
self.write_to_cache(self.inventory, self.cache_path_inventory)
self.write_to_cache(self.params, self.cache_path_params)
self.write_to_cache(self.facts, self.cache_path_facts)
def to_safe(self, word):
'''Converts 'bad' characters in a string to underscores
so they can be used as Ansible groups
>>> ForemanInventory.to_safe("foo-bar baz")
'foo_barbaz'
'''
regex = "[^A-Za-z0-9\_]"
return re.sub(regex, "_", word.replace(" ", ""))
def update_cache(self):
"""Make calls to foreman and save the output in a cache"""
@@ -267,20 +241,20 @@ class ForemanInventory(object):
val = host.get('%s_title' % group) or host.get('%s_name' % group) val = host.get('%s_title' % group) or host.get('%s_name' % group)
if val: if val:
safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower())) safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower()))
self.push(self.inventory, safe_key, dns_name) self.inventory[safe_key].append(dns_name)
# Create ansible groups for environment, location and organization # Create ansible groups for environment, location and organization
for group in ['environment', 'location', 'organization']: for group in ['environment', 'location', 'organization']:
val = host.get('%s_name' % group) val = host.get('%s_name' % group)
if val: if val:
safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower())) safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower()))
self.push(self.inventory, safe_key, dns_name) self.inventory[safe_key].append(dns_name)
for group in ['lifecycle_environment', 'content_view']: for group in ['lifecycle_environment', 'content_view']:
val = host.get('content_facet_attributes', {}).get('%s_name' % group) val = host.get('content_facet_attributes', {}).get('%s_name' % group)
if val: if val:
safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower())) safe_key = self.to_safe('%s%s_%s' % (self.group_prefix, group, val.lower()))
self.push(self.inventory, safe_key, dns_name) self.inventory[safe_key].append(dns_name)
params = self._resolve_params(host) params = self._resolve_params(host)
@@ -297,44 +271,27 @@ class ForemanInventory(object):
for pattern in self.group_patterns: for pattern in self.group_patterns:
try: try:
key = pattern.format(**groupby) key = pattern.format(**groupby)
self.push(self.inventory, key, dns_name) self.inventory[key].append(dns_name)
except KeyError: except KeyError:
pass # Host not part of this group pass # Host not part of this group
self.cache[dns_name] = host self.cache[dns_name] = host
self.params[dns_name] = params self.params[dns_name] = params
self.facts[dns_name] = self._get_facts(host) self.facts[dns_name] = self._get_facts(host)
self.push(self.inventory, 'all', dns_name) self.inventory['all'].append(dns_name)
self._write_cache() self._write_cache()
def _write_cache(self): def is_cache_valid(self):
self.write_to_cache(self.cache, self.cache_path_cache) """Determines if the cache is still valid"""
self.write_to_cache(self.inventory, self.cache_path_inventory) if os.path.isfile(self.cache_path_cache):
self.write_to_cache(self.params, self.cache_path_params) mod_time = os.path.getmtime(self.cache_path_cache)
self.write_to_cache(self.facts, self.cache_path_facts) current_time = time()
if (mod_time + self.cache_max_age) > current_time:
def get_host_info(self): if (os.path.isfile(self.cache_path_inventory) and
"""Get variables about a specific host""" os.path.isfile(self.cache_path_params) and
os.path.isfile(self.cache_path_facts)):
if not self.cache or len(self.cache) == 0: return True
# Need to load index from cache return False
self.load_cache_from_cache()
if self.args.host not in self.cache:
# try updating the cache
self.update_cache()
if self.args.host not in self.cache:
# host might not exist anymore
return self.json_format_dict({}, True)
return self.json_format_dict(self.cache[self.args.host], True)
def push(self, d, k, v):
if k in d:
d[k].append(v)
else:
d[k] = [v]
def load_inventory_from_cache(self): def load_inventory_from_cache(self):
"""Read the index from the cache file sets self.index""" """Read the index from the cache file sets self.index"""
@@ -365,33 +322,58 @@ class ForemanInventory(object):
json_cache = cache.read() json_cache = cache.read()
self.cache = json.loads(json_cache) self.cache = json.loads(json_cache)
def write_to_cache(self, data, filename): def get_inventory(self):
"""Write data in JSON format to a file""" if self.args.refresh_cache or not self.is_cache_valid():
json_data = self.json_format_dict(data, True) self.update_cache()
cache = open(filename, 'w')
cache.write(json_data)
cache.close()
@staticmethod
def to_safe(word):
'''Converts 'bad' characters in a string to underscores
so they can be used as Ansible groups
>>> ForemanInventory.to_safe("foo-bar baz")
'foo_barbaz'
'''
regex = "[^A-Za-z0-9\_]"
return re.sub(regex, "_", word.replace(" ", ""))
def json_format_dict(self, data, pretty=False):
"""Converts a dict to a JSON object and dumps it as a formatted string"""
if pretty:
return json.dumps(data, sort_keys=True, indent=2)
else: else:
return json.dumps(data) self.load_inventory_from_cache()
self.load_params_from_cache()
self.load_facts_from_cache()
self.load_cache_from_cache()
def get_host_info(self):
"""Get variables about a specific host"""
if not self.cache or len(self.cache) == 0:
# Need to load index from cache
self.load_cache_from_cache()
if self.args.host not in self.cache:
# try updating the cache
self.update_cache()
if self.args.host not in self.cache:
# host might not exist anymore
return json_format_dict({}, True)
return json_format_dict(self.cache[self.args.host], True)
def _print_data(self):
data_to_print = ""
if self.args.host:
data_to_print += self.get_host_info()
else:
self.inventory['_meta'] = {'hostvars': {}}
for hostname in self.cache:
self.inventory['_meta']['hostvars'][hostname] = {
'foreman': self.cache[hostname],
'foreman_params': self.params[hostname],
}
if self.want_facts:
self.inventory['_meta']['hostvars'][hostname]['foreman_facts'] = self.facts[hostname]
data_to_print += json_format_dict(self.inventory, True)
print(data_to_print)
def run(self):
# Read settings and parse CLI arguments
if not self.read_settings():
return False
self.parse_cli_args()
self.get_inventory()
self._print_data()
return True
if __name__ == '__main__': if __name__ == '__main__':
inv = ForemanInventory() sys.exit(not ForemanInventory().run())
sys.exit(not inv.run())
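The `foreman.py` refactor above drops the `push()` helper in favor of direct `.append()` calls, which presumes the inventory mapping auto-creates missing lists — i.e. a `collections.defaultdict(list)`. The initialization itself falls outside these hunks, so that detail is an assumption; a minimal sketch of the old and new patterns:

```python
from collections import defaultdict

# Old pattern: an explicit helper that creates the list on first use.
def push(d, k, v):
    if k in d:
        d[k].append(v)
    else:
        d[k] = [v]

legacy = {}
push(legacy, 'all', 'host1.example.com')

# New pattern: defaultdict(list) creates the list automatically,
# so call sites can append without checking for the key first.
inventory = defaultdict(list)
inventory['all'].append('host1.example.com')

assert legacy['all'] == dict(inventory)['all']
```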


@@ -312,7 +312,7 @@ class GceInventory(object):
         return gce
 
     def parse_env_zones(self):
-        '''returns a list of comma seperated zones parsed from the GCE_ZONE environment variable.
+        '''returns a list of comma separated zones parsed from the GCE_ZONE environment variable.
         If provided, this will be used to filter the results of the grouped_instances call'''
         import csv
         reader = csv.reader([os.environ.get('GCE_ZONE',"")], skipinitialspace=True)
@@ -323,7 +323,7 @@ class GceInventory(object):
         ''' Command line argument processing '''
         parser = argparse.ArgumentParser(
             description='Produce an Ansible Inventory file based on GCE')
         parser.add_argument('--list', action='store_true', default=True,
                             help='List instances (default: True)')
         parser.add_argument('--host', action='store',
@@ -428,8 +428,10 @@ class GceInventory(object):
             if zones and zone not in zones:
                 continue
-            if zone in groups: groups[zone].append(name)
-            else: groups[zone] = [name]
+            if zone in groups:
+                groups[zone].append(name)
+            else:
+                groups[zone] = [name]
 
             tags = node.extra['tags']
             for t in tags:
@@ -437,26 +439,36 @@ class GceInventory(object):
                     tag = t[6:]
                 else:
                     tag = 'tag_%s' % t
-                if tag in groups: groups[tag].append(name)
-                else: groups[tag] = [name]
+                if tag in groups:
+                    groups[tag].append(name)
+                else:
+                    groups[tag] = [name]
 
             net = node.extra['networkInterfaces'][0]['network'].split('/')[-1]
             net = 'network_%s' % net
-            if net in groups: groups[net].append(name)
-            else: groups[net] = [name]
+            if net in groups:
+                groups[net].append(name)
+            else:
+                groups[net] = [name]
 
             machine_type = node.size
-            if machine_type in groups: groups[machine_type].append(name)
-            else: groups[machine_type] = [name]
+            if machine_type in groups:
+                groups[machine_type].append(name)
+            else:
+                groups[machine_type] = [name]
 
             image = node.image and node.image or 'persistent_disk'
-            if image in groups: groups[image].append(name)
-            else: groups[image] = [name]
+            if image in groups:
+                groups[image].append(name)
+            else:
+                groups[image] = [name]
 
             status = node.extra['status']
             stat = 'status_%s' % status.lower()
-            if stat in groups: groups[stat].append(name)
-            else: groups[stat] = [name]
+            if stat in groups:
+                groups[stat].append(name)
+            else:
+                groups[stat] = [name]
 
         groups["_meta"] = meta


@@ -27,6 +27,6 @@ clouds:
     password: stack
     project_name: stack
   ansible:
-    use_hostnames: False
-    expand_hostvars: True
+    use_hostnames: True
+    expand_hostvars: False
     fail_on_errors: True
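The values above are the defaults the example file now ships with; per the commit message, Tower's inventory update lets `source_vars_dict` override `use_hostnames`, `expand_hostvars` and `fail_on_errors`. A hypothetical sketch of how such an override might be merged before the YAML is written (function and variable names here are illustrative, not Tower's actual code):

```python
# Hypothetical merge of user-supplied source vars into the openstack
# inventory script's `ansible:` section; defaults mirror the example file.
DEFAULTS = {'use_hostnames': True, 'expand_hostvars': False, 'fail_on_errors': True}

def build_ansible_section(source_vars_dict):
    section = dict(DEFAULTS)
    for opt in ('use_hostnames', 'expand_hostvars', 'fail_on_errors'):
        if opt in source_vars_dict:
            section[opt] = bool(source_vars_dict[opt])
    return section

# Unset options keep their defaults; set options win.
assert build_ansible_section({}) == DEFAULTS
assert build_ansible_section({'expand_hostvars': True})['expand_hostvars'] is True
```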


@@ -1,471 +0,0 @@
#!/usr/bin/env python
# (c) 2013, Jesse Keating <jesse.keating@rackspace.com,
# Paul Durivage <paul.durivage@rackspace.com>,
# Matt Martz <matt@sivel.net>
#
# This file is part of Ansible.
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
"""
Rackspace Cloud Inventory
Authors:
Jesse Keating <jesse.keating@rackspace.com,
Paul Durivage <paul.durivage@rackspace.com>,
Matt Martz <matt@sivel.net>
Description:
Generates inventory that Ansible can understand by making API request to
Rackspace Public Cloud API
When run against a specific host, this script returns variables similar to:
rax_os-ext-sts_task_state
rax_addresses
rax_links
rax_image
rax_os-ext-sts_vm_state
rax_flavor
rax_id
rax_rax-bandwidth_bandwidth
rax_user_id
rax_os-dcf_diskconfig
rax_accessipv4
rax_accessipv6
rax_progress
rax_os-ext-sts_power_state
rax_metadata
rax_status
rax_updated
rax_hostid
rax_name
rax_created
rax_tenant_id
rax_loaded
Configuration:
rax.py can be configured using a rax.ini file or via environment
variables. The rax.ini file should live in the same directory along side
this script.
The section header for configuration values related to this
inventory plugin is [rax]
[rax]
creds_file = ~/.rackspace_cloud_credentials
regions = IAD,ORD,DFW
env = prod
meta_prefix = meta
access_network = public
access_ip_version = 4
Each of these configurations also has a corresponding environment variable.
An environment variable will override a configuration file value.
creds_file:
Environment Variable: RAX_CREDS_FILE
An optional configuration that points to a pyrax-compatible credentials
file.
If not supplied, rax.py will look for a credentials file
at ~/.rackspace_cloud_credentials. It uses the Rackspace Python SDK,
and therefore requires a file formatted per the SDK's specifications.
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md
regions:
Environment Variable: RAX_REGION
An optional environment variable to narrow inventory search
scope. If used, needs a value like ORD, DFW, SYD (a Rackspace
datacenter) and optionally accepts a comma-separated list.
environment:
Environment Variable: RAX_ENV
A configuration that will use an environment as configured in
~/.pyrax.cfg, see
https://github.com/rackspace/pyrax/blob/master/docs/getting_started.md
meta_prefix:
Environment Variable: RAX_META_PREFIX
Default: meta
A configuration that changes the prefix used for meta key/value groups.
For compatibility with ec2.py set to "tag"
access_network:
Environment Variable: RAX_ACCESS_NETWORK
Default: public
A configuration that will tell the inventory script to use a specific
server network to determine the ansible_ssh_host value. If no address
is found, ansible_ssh_host will not be set. Accepts a comma-separated
list of network names, the first found wins.
access_ip_version:
Environment Variable: RAX_ACCESS_IP_VERSION
Default: 4
A configuration related to "access_network" that will attempt to
determine the ansible_ssh_host value for either IPv4 or IPv6. If no
address is found, ansible_ssh_host will not be set.
Acceptable values are: 4 or 6. Values other than 4 or 6
will be ignored, and 4 will be used. Accepts a comma-separated list,
the first found wins.
Examples:
List server instances
$ RAX_CREDS_FILE=~/.raxpub rax.py --list
List servers in ORD datacenter only
$ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD rax.py --list
List servers in ORD and DFW datacenters
$ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,DFW rax.py --list
Get server details for server named "server.example.com"
$ RAX_CREDS_FILE=~/.raxpub rax.py --host server.example.com
Use the instance private IP to connect (instead of public IP)
$ RAX_CREDS_FILE=~/.raxpub RAX_ACCESS_NETWORK=private rax.py --list
"""
import os
import re
import sys
import argparse
import warnings
import collections
import ConfigParser
from six import iteritems
try:
import json
except ImportError:
import simplejson as json
try:
import pyrax
from pyrax.utils import slugify
except ImportError:
sys.exit('pyrax is required for this module')
from time import time
from ansible.constants import get_config, mk_boolean
NON_CALLABLES = (basestring, bool, dict, int, list, type(None))
def load_config_file():
p = ConfigParser.ConfigParser()
config_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
'rax.ini')
try:
p.read(config_file)
except ConfigParser.Error:
return None
else:
return p
p = load_config_file()
def rax_slugify(value):
return 'rax_%s' % (re.sub('[^\w-]', '_', value).lower().lstrip('_'))
def to_dict(obj):
instance = {}
for key in dir(obj):
value = getattr(obj, key)
if isinstance(value, NON_CALLABLES) and not key.startswith('_'):
key = rax_slugify(key)
instance[key] = value
return instance
def host(regions, hostname):
hostvars = {}
for region in regions:
# Connect to the region
cs = pyrax.connect_to_cloudservers(region=region)
for server in cs.servers.list():
if server.name == hostname:
for key, value in to_dict(server).items():
hostvars[key] = value
# And finally, add an IP address
hostvars['ansible_ssh_host'] = server.accessIPv4
print(json.dumps(hostvars, sort_keys=True, indent=4))
def _list_into_cache(regions):
groups = collections.defaultdict(list)
hostvars = collections.defaultdict(dict)
images = {}
cbs_attachments = collections.defaultdict(dict)
prefix = get_config(p, 'rax', 'meta_prefix', 'RAX_META_PREFIX', 'meta')
try:
# Ansible 2.3+
networks = get_config(p, 'rax', 'access_network',
'RAX_ACCESS_NETWORK', 'public', value_type='list')
except TypeError:
# Ansible 2.2.x and below
networks = get_config(p, 'rax', 'access_network',
'RAX_ACCESS_NETWORK', 'public', islist=True)
try:
try:
ip_versions = map(int, get_config(p, 'rax', 'access_ip_version',
'RAX_ACCESS_IP_VERSION', 4, value_type='list'))
except TypeError:
ip_versions = map(int, get_config(p, 'rax', 'access_ip_version',
'RAX_ACCESS_IP_VERSION', 4, islist=True))
except:
ip_versions = [4]
else:
ip_versions = [v for v in ip_versions if v in [4, 6]]
if not ip_versions:
ip_versions = [4]
# Go through all the regions looking for servers
for region in regions:
# Connect to the region
cs = pyrax.connect_to_cloudservers(region=region)
if cs is None:
warnings.warn(
'Connecting to Rackspace region "%s" has caused Pyrax to '
'return None. Is this a valid region?' % region,
RuntimeWarning)
continue
for server in cs.servers.list():
# Create a group on region
groups[region].append(server.name)
# Check if group metadata key in servers' metadata
group = server.metadata.get('group')
if group:
groups[group].append(server.name)
for extra_group in server.metadata.get('groups', '').split(','):
if extra_group:
groups[extra_group].append(server.name)
# Add host metadata
for key, value in to_dict(server).items():
hostvars[server.name][key] = value
hostvars[server.name]['rax_region'] = region
for key, value in iteritems(server.metadata):
groups['%s_%s_%s' % (prefix, key, value)].append(server.name)
groups['instance-%s' % server.id].append(server.name)
groups['flavor-%s' % server.flavor['id']].append(server.name)
# Handle boot from volume
if not server.image:
if not cbs_attachments[region]:
cbs = pyrax.connect_to_cloud_blockstorage(region)
for vol in cbs.list():
if mk_boolean(vol.bootable):
for attachment in vol.attachments:
metadata = vol.volume_image_metadata
server_id = attachment['server_id']
cbs_attachments[region][server_id] = {
'id': metadata['image_id'],
'name': slugify(metadata['image_name'])
}
image = cbs_attachments[region].get(server.id)
if image:
server.image = {'id': image['id']}
hostvars[server.name]['rax_image'] = server.image
hostvars[server.name]['rax_boot_source'] = 'volume'
images[image['id']] = image['name']
else:
hostvars[server.name]['rax_boot_source'] = 'local'
try:
imagegroup = 'image-%s' % images[server.image['id']]
groups[imagegroup].append(server.name)
groups['image-%s' % server.image['id']].append(server.name)
except KeyError:
try:
image = cs.images.get(server.image['id'])
except cs.exceptions.NotFound:
groups['image-%s' % server.image['id']].append(server.name)
else:
images[image.id] = image.human_id
groups['image-%s' % image.human_id].append(server.name)
groups['image-%s' % server.image['id']].append(server.name)
# And finally, add an IP address
ansible_ssh_host = None
# use accessIPv[46] instead of looping address for 'public'
for network_name in networks:
if ansible_ssh_host:
break
if network_name == 'public':
for version_name in ip_versions:
if ansible_ssh_host:
break
if version_name == 6 and server.accessIPv6:
ansible_ssh_host = server.accessIPv6
elif server.accessIPv4:
ansible_ssh_host = server.accessIPv4
if not ansible_ssh_host:
addresses = server.addresses.get(network_name, [])
for address in addresses:
for version_name in ip_versions:
if ansible_ssh_host:
break
if address.get('version') == version_name:
ansible_ssh_host = address.get('addr')
break
if ansible_ssh_host:
hostvars[server.name]['ansible_ssh_host'] = ansible_ssh_host
if hostvars:
groups['_meta'] = {'hostvars': hostvars}
with open(get_cache_file_path(regions), 'w') as cache_file:
json.dump(groups, cache_file)
def get_cache_file_path(regions):
regions_str = '.'.join([reg.strip().lower() for reg in regions])
ansible_tmp_path = os.path.join(os.path.expanduser("~"), '.ansible', 'tmp')
if not os.path.exists(ansible_tmp_path):
os.makedirs(ansible_tmp_path)
return os.path.join(ansible_tmp_path,
'ansible-rax-%s-%s.cache' % (
pyrax.identity.username, regions_str))
def _list(regions, refresh_cache=True):
cache_max_age = int(get_config(p, 'rax', 'cache_max_age',
'RAX_CACHE_MAX_AGE', 600))
if (not os.path.exists(get_cache_file_path(regions)) or
refresh_cache or
(time() - os.stat(get_cache_file_path(regions))[-1]) > cache_max_age):
# Cache file doesn't exist or older than 10m or refresh cache requested
_list_into_cache(regions)
with open(get_cache_file_path(regions), 'r') as cache_file:
groups = json.load(cache_file)
print(json.dumps(groups, sort_keys=True, indent=4))
def parse_args():
parser = argparse.ArgumentParser(description='Ansible Rackspace Cloud '
'inventory module')
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('--list', action='store_true',
help='List active servers')
group.add_argument('--host', help='List details about the specific host')
parser.add_argument('--refresh-cache', action='store_true', default=False,
help=('Force refresh of cache, making API requests to'
'RackSpace (default: False - use cache files)'))
return parser.parse_args()
def setup():
default_creds_file = os.path.expanduser('~/.rackspace_cloud_credentials')
# pyrax does not honor the environment variable CLOUD_VERIFY_SSL=False, so let's help pyrax
if 'CLOUD_VERIFY_SSL' in os.environ:
pyrax.set_setting('verify_ssl', os.environ['CLOUD_VERIFY_SSL'] in [1, 'true', 'True'])
env = get_config(p, 'rax', 'environment', 'RAX_ENV', None)
if env:
pyrax.set_environment(env)
keyring_username = pyrax.get_setting('keyring_username')
# Attempt to grab credentials from environment first
creds_file = get_config(p, 'rax', 'creds_file',
'RAX_CREDS_FILE', None)
if creds_file is not None:
creds_file = os.path.expanduser(creds_file)
else:
# But if that fails, use the default location of
# ~/.rackspace_cloud_credentials
if os.path.isfile(default_creds_file):
creds_file = default_creds_file
elif not keyring_username:
sys.exit('No value in environment variable %s and/or no '
'credentials file at %s'
% ('RAX_CREDS_FILE', default_creds_file))
identity_type = pyrax.get_setting('identity_type')
pyrax.set_setting('identity_type', identity_type or 'rackspace')
region = pyrax.get_setting('region')
try:
if keyring_username:
pyrax.keyring_auth(keyring_username, region=region)
else:
pyrax.set_credential_file(creds_file, region=region)
except Exception as e:
sys.exit("%s: %s" % (e, e.message))
regions = []
if region:
regions.append(region)
else:
try:
# Ansible 2.3+
region_list = get_config(p, 'rax', 'regions', 'RAX_REGION', 'all',
value_type='list')
except TypeError:
# Ansible 2.2.x and below
region_list = get_config(p, 'rax', 'regions', 'RAX_REGION', 'all',
islist=True)
for region in region_list:
region = region.strip().upper()
if region == 'ALL':
regions = pyrax.regions
break
elif region not in pyrax.regions:
sys.exit('Unsupported region %s' % region)
elif region not in regions:
regions.append(region)
return regions
def main():
args = parse_args()
regions = setup()
if args.list:
_list(regions, refresh_cache=args.refresh_cache)
elif args.host:
host(regions, args.host)
sys.exit(0)
if __name__ == '__main__':
main()


@@ -374,7 +374,7 @@ class VMWareInventory(object):
             if cfm is not None and cfm.field:
                 for f in cfm.field:
                     if f.managedObjectType == vim.VirtualMachine:
-                        self.custom_fields[f.key] = f.name;
+                        self.custom_fields[f.key] = f.name
             self.debugl('%d custom fieds collected' % len(self.custom_fields))
         return instance_tuples
@@ -628,7 +628,10 @@ class VMWareInventory(object):
         elif type(vobj) in self.vimTable:
             rdata = {}
             for key in self.vimTable[type(vobj)]:
-                rdata[key] = getattr(vobj, key)
+                try:
+                    rdata[key] = getattr(vobj, key)
+                except Exception as e:
+                    self.debugl(e)
 
         elif issubclass(type(vobj), str) or isinstance(vobj, str):
             if vobj.isalnum():
@@ -685,12 +688,15 @@ class VMWareInventory(object):
                 if self.lowerkeys:
                     method = method.lower()
                 if level + 1 <= self.maxlevel:
-                    rdata[method] = self._process_object_types(
-                        methodToCall,
-                        thisvm=thisvm,
-                        inkey=inkey + '.' + method,
-                        level=(level + 1)
-                    )
+                    try:
+                        rdata[method] = self._process_object_types(
+                            methodToCall,
+                            thisvm=thisvm,
+                            inkey=inkey + '.' + method,
+                            level=(level + 1)
+                        )
+                    except vim.fault.NoPermission:
+                        self.debugl("Skipping method %s (NoPermission)" % method)
                 else:
                     pass
@@ -719,5 +725,3 @@ class VMWareInventory(object):
 
 if __name__ == "__main__":
     # Run the script
     print(VMWareInventory().show())
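The two hunks above wrap attribute access and recursive serialization in try/except so a single unreadable vSphere property no longer aborts the whole inventory run. The pattern in isolation, with a stand-in object instead of a real pyVmomi managed object:

```python
# Collect whatever attributes are readable, logging (rather than raising)
# on the ones that are not -- the defensive pattern the diff introduces.
class Flaky:
    ok = 42

    @property
    def broken(self):
        raise RuntimeError("backend denied access")

def collect(obj, keys, log=print):
    rdata = {}
    for key in keys:
        try:
            rdata[key] = getattr(obj, key)
        except Exception as e:  # vmware_inventory logs this via self.debugl()
            log(e)
    return rdata

# The unreadable property is skipped instead of crashing the run.
assert collect(Flaky(), ['ok', 'broken'], log=lambda e: None) == {'ok': 42}
```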


@@ -246,7 +246,7 @@ class AzureInventory(object):
     def push(self, my_dict, key, element):
         """Pushed an element onto an array that may not have been defined in the dict."""
         if key in my_dict:
-            my_dict[key].append(element);
+            my_dict[key].append(element)
         else:
             my_dict[key] = [element]


@@ -632,32 +632,6 @@ AD_HOC_COMMANDS = [
     'win_user',
 ]
 
-# Not possible to get list of regions without authenticating, so use this list
-# instead (based on docs from:
-# http://docs.rackspace.com/loadbalancers/api/v1.0/clb-devguide/content/Service_Access_Endpoints-d1e517.html)
-RAX_REGION_CHOICES = [
-    ('ORD', _('Chicago')),
-    ('DFW', _('Dallas/Ft. Worth')),
-    ('IAD', _('Northern Virginia')),
-    ('LON', _('London')),
-    ('SYD', _('Sydney')),
-    ('HKG', _('Hong Kong')),
-]
-
-# Inventory variable name/values for determining if host is active/enabled.
-RAX_ENABLED_VAR = 'rax_status'
-RAX_ENABLED_VALUE = 'ACTIVE'
-
-# Inventory variable name containing unique instance ID.
-RAX_INSTANCE_ID_VAR = 'rax_id'
-
-# Filter for allowed group/host names when importing inventory from Rackspace.
-# By default, filter group of one created for each instance and exclude all
-# groups without children, hosts and variables.
-RAX_GROUP_FILTER = r'^(?!instance-.+).+$'
-RAX_HOST_FILTER = r'^.+$'
-RAX_EXCLUDE_EMPTY_GROUPS = True
-
 INV_ENV_VARIABLE_BLACKLIST = ("HOME", "USER", "_", "TERM")
 
 # ----------------


@@ -1,5 +1,7 @@
 The requirements.txt and requirements_ansible.txt files are generated from requirements.in and requirements_ansible.in, respectively, using `pip-tools` `pip-compile`. The following commands should do this if ran inside the tower_tools container.
 
+NOTE: before running `pip-compile`, please copy-paste contents in `requirements/requirements_git.txt` to the top of `requirements/requirements.in` and prepend each copied line with `-e `. Later after `requirements.txt` is generated, don't forget to remove all `git+https://github.com...`-like lines from both `requirements.txt` and `requirements.in`
+
 ```
 virtualenv /buildit
 source /buildit/bin/activate
@@ -17,3 +19,5 @@ pip-compile requirements/requirements_ansible.in > requirements/requirements_ans
 * As of `pip-tools` `1.8.1` `pip-compile` does not resolve packages specified using a git url. Thus, dependencies for things like `dm.xmlsec.binding` do not get resolved and output to `requirements.txt`. This means that:
   * can't use `pip install --no-deps` because other deps WILL be sucked in
   * all dependencies are NOT captured in our `.txt` files. This means you can't rely on the `.txt` when gathering licenses.
+
+* Packages `gevent-websocket` and `twisted` are put in `requirements.in` *not* because they are primary dependency of Tower, but because their versions needs to be freezed as dependencies of django channel. Please be mindful when doing dependency updates.


@@ -1,53 +1,55 @@
-apache-libcloud==1.3.0
+apache-libcloud==2.0.0
 appdirs==1.4.2
 asgi-amqp==0.4.1
 asgiref==1.0.1
 azure==2.0.0rc6
 backports.ssl-match-hostname==3.5.0.1
-boto==2.45.0
+boto==2.46.1
 channels==0.17.3
-celery==3.1.17
+celery==3.1.25
 daphne>=0.15.0,<1.0.0
 Django==1.8.16
 django-auth-ldap==1.2.8
 django-celery==3.1.17
 django-crum==0.7.1
-django-extensions==1.7.4
+django-extensions==1.7.8
 django-jsonfield==1.0.1
-django-polymorphic==0.7.2
+django-polymorphic==1.2
 django-radius==1.1.0
 django-solo==1.1.2
-django-split-settings==0.2.2
-django-taggit==0.21.3
+django-split-settings==0.2.5
+django-taggit==0.22.1
 django-transaction-hooks==0.2
 djangorestframework==3.3.3
 djangorestframework-yaml==1.0.3
 gevent-websocket==0.9.5
-irc==15.0.4
+irc==15.1.1
+jsonschema==2.6.0
 M2Crypto==0.25.1
 Markdown==2.6.7
 ordereddict==1.1
-pexpect==3.1
+pexpect==4.2.1
 psphere==0.5.2
-psutil==5.0.0
-pygerduty==0.35.1
-pyOpenSSL==16.2.0
+psutil==5.2.2
+pygerduty==0.35.2
+pyOpenSSL==17.0.0
 pyparsing==2.2.0
 python-logstash==0.4.6
 python-memcached==1.58
 python-radius==1.0
-python-saml==2.2.0
+python-saml==2.2.1
 python-social-auth==0.2.21
 pyvmomi==6.5
-redbaron==0.6.2
+redbaron==0.6.3
+requests==2.11.1
 requests-futures==0.9.7
 service-identity==16.0.0
-shade==1.19.0
-slackclient==1.0.2
-tacacs_plus==0.1
-twilio==5.6.0
+shade==1.20.0
+slackclient==1.0.5
+tacacs_plus==0.2
+twilio==6.1.0
 twisted==16.6.0
 uWSGI==2.0.14
-xmltodict==0.10.2
-pip==8.1.2
-setuptools==23.0.0
+xmltodict==0.11.0
+pip==9.0.1
+setuptools==35.0.2


@@ -7,13 +7,13 @@
adal==0.4.5 # via msrestazure adal==0.4.5 # via msrestazure
amqp==1.4.9 # via kombu amqp==1.4.9 # via kombu
anyjson==0.3.3 # via kombu anyjson==0.3.3 # via kombu
apache-libcloud==1.3.0 apache-libcloud==2.0.0
appdirs==1.4.2 appdirs==1.4.2
asgi-amqp==0.4.1 asgi-amqp==0.4.1
asgiref==1.0.1 asgiref==1.0.1
asn1crypto==0.22.0 # via cryptography asn1crypto==0.22.0 # via cryptography
attrs==16.3.0 # via service-identity attrs==16.3.0 # via service-identity
autobahn==0.18.2 # via daphne autobahn==17.5.1 # via daphne
azure-batch==1.0.0 # via azure azure-batch==1.0.0 # via azure
azure-common[autorest]==1.1.4 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage, azure-servicebus, azure-servicemanagement-legacy, azure-storage azure-common[autorest]==1.1.4 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage, azure-servicebus, azure-servicemanagement-legacy, azure-storage
azure-mgmt-batch==1.0.0 # via azure-mgmt azure-mgmt-batch==1.0.0 # via azure-mgmt
@@ -32,35 +32,35 @@ azure-servicebus==0.20.3 # via azure
 azure-servicemanagement-legacy==0.20.4 # via azure
 azure-storage==0.33.0 # via azure
 azure==2.0.0rc6
-babel==2.4.0 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
+babel==2.3.4 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
 backports.functools-lru-cache==1.3 # via jaraco.functools
 backports.ssl-match-hostname==3.5.0.1
 baron==0.6.5 # via redbaron
 billiard==3.3.0.23 # via celery
-boto==2.45.0
+boto==2.46.1
-celery==3.1.17
+celery==3.1.25
 #certifi==2017.4.17 # via msrest
 cffi==1.10.0 # via cryptography
 channels==0.17.3
-cliff==2.5.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
+cliff==2.6.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
 cmd2==0.7.0 # via cliff
 constantly==15.1.0 # via twisted
-cryptography==1.8.1 # via adal, azure-storage, pyopenssl, secretstorage
+cryptography==1.8.1 # via adal, azure-storage, pyopenssl, secretstorage, twilio
 daphne==0.15.0
 debtcollector==1.13.0 # via oslo.config, oslo.utils, python-designateclient, python-keystoneclient, python-neutronclient
 decorator==4.0.11 # via shade
 defusedxml==0.4.1 # via python-saml
-deprecation==1.0 # via openstacksdk
+deprecation==1.0.1 # via openstacksdk
 django-auth-ldap==1.2.8
 django-celery==3.1.17
 django-crum==0.7.1
-django-extensions==1.7.4
+django-extensions==1.7.8
 django-jsonfield==1.0.1
-django-polymorphic==0.7.2
+django-polymorphic==1.2
 django-radius==1.1.0
 django-solo==1.1.2
-django-split-settings==0.2.2
+django-split-settings==0.2.5
-django-taggit==0.21.3
+django-taggit==0.22.1
 django-transaction-hooks==0.2
 django==1.8.16 # via channels, django-auth-ldap, django-crum, django-split-settings, django-transaction-hooks
 djangorestframework-yaml==1.0.3
@@ -73,17 +73,16 @@ futures==3.1.1 # via azure-storage, requests-futures, shade
 gevent-websocket==0.9.5
 gevent==1.2.1 # via gevent-websocket
 greenlet==0.4.12 # via gevent
-httplib2==0.10.3 # via twilio
-idna==2.5 # via cryptography
+idna==2.5 # via cryptography, twilio
 incremental==16.10.1 # via twisted
 inflect==0.2.5 # via jaraco.itertools
 ipaddress==1.0.18 # via cryptography, shade
-irc==15.0.4
+irc==15.1.1
 iso8601==0.1.11 # via keystoneauth1, oslo.utils, python-neutronclient, python-novaclient
 isodate==0.5.4 # via msrest, python-saml
 jaraco.classes==1.4.1 # via jaraco.collections
 jaraco.collections==1.5.1 # via irc, jaraco.text
-jaraco.functools==1.15.2 # via irc, jaraco.text
+jaraco.functools==1.16 # via irc, jaraco.text
 jaraco.itertools==2.0.1 # via irc
 jaraco.logging==1.5 # via irc
 jaraco.stream==1.1.2 # via irc
@@ -92,9 +91,9 @@ jmespath==0.9.2 # via shade
 jsonpatch==1.15 # via shade, warlock
 jsonpickle==0.9.4 # via asgi-amqp
 jsonpointer==1.10 # via jsonpatch
-jsonschema==2.6.0 # via python-designateclient, python-ironicclient, warlock
+jsonschema==2.6.0
 keyring==10.3.2 # via msrestazure
-keystoneauth1==2.19.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
+keystoneauth1==2.20.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
 kombu==3.0.37 # via asgi-amqp, celery
 lxml==3.7.3
 m2crypto==0.25.1
@@ -108,28 +107,29 @@ munch==2.1.1 # via shade
 netaddr==0.7.19 # via oslo.config, oslo.utils, pyrad, python-neutronclient
 netifaces==0.10.5 # via oslo.utils, shade
 oauthlib==2.0.2 # via python-social-auth, requests-oauthlib
-openstacksdk==0.9.15 # via python-openstackclient
+openstacksdk==0.9.16 # via python-openstackclient
 ordereddict==1.1
-os-client-config==1.26.0 # via openstacksdk, osc-lib, python-neutronclient, shade
+os-client-config==1.27.0 # via openstacksdk, osc-lib, python-neutronclient, shade
-osc-lib==1.3.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
+osc-lib==1.6.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
-oslo.config==3.24.0 # via python-keystoneclient
+oslo.config==4.1.0 # via python-keystoneclient
 oslo.i18n==3.15.0 # via osc-lib, oslo.config, oslo.utils, python-cinderclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
 oslo.serialization==2.18.0 # via python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient
 oslo.utils==3.25.0 # via osc-lib, oslo.serialization, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
-packaging==16.8 # via cryptography
+packaging==16.8 # via cryptography, setuptools
-pbr==2.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
+pbr==3.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
-pexpect==3.1
+pexpect==4.2.1
 positional==1.1.1 # via keystoneauth1, python-keystoneclient
 prettytable==0.7.2 # via cliff, python-cinderclient, python-glanceclient, python-ironicclient, python-novaclient
 psphere==0.5.2
-psutil==5.0.0
+psutil==5.2.2
 psycopg2==2.7.1
+ptyprocess==0.5.1 # via pexpect
 pyasn1-modules==0.0.8 # via service-identity
 pyasn1==0.2.3 # via pyasn1-modules, service-identity
 pycparser==2.17 # via cffi
-pygerduty==0.35.1
+pygerduty==0.35.2
-pyjwt==1.5.0 # via adal, python-social-auth
+pyjwt==1.5.0 # via adal, python-social-auth, twilio
-pyopenssl==16.2.0 # via service-identity
+pyopenssl==17.0.0 # via service-identity, twilio
 pyparsing==2.2.0
 pyrad==2.1 # via django-radius
 python-cinderclient==2.0.1 # via python-openstackclient, shade
@@ -138,48 +138,48 @@ python-designateclient==2.6.0 # via shade
 python-glanceclient==2.6.0 # via python-openstackclient
 python-ironicclient==1.12.0 # via shade
 python-keystoneclient==3.10.0 # via python-neutronclient, python-openstackclient, shade
-python-ldap==2.4.32 # via django-auth-ldap
+python-ldap==2.4.38 # via django-auth-ldap
 python-logstash==0.4.6
 python-memcached==1.58
 python-neutronclient==6.2.0 # via shade
 python-novaclient==8.0.0 # via python-openstackclient, shade
 python-openid==2.2.5 # via python-social-auth
-python-openstackclient==3.9.0 # via python-ironicclient
+python-openstackclient==3.11.0 # via python-ironicclient
 python-radius==1.0
-python-saml==2.2.0
+python-saml==2.2.1
 python-social-auth==0.2.21
 pytz==2017.2 # via babel, celery, irc, oslo.serialization, oslo.utils, tempora, twilio
 pyvmomi==6.5
 pyyaml==3.12 # via cliff, djangorestframework-yaml, os-client-config, psphere, python-ironicclient
-redbaron==0.6.2
+redbaron==0.6.3
 requests-futures==0.9.7
 requests-oauthlib==0.8.0 # via msrest, python-social-auth
-requests==2.12.5 # via adal, azure-servicebus, azure-servicemanagement-legacy, azure-storage, keystoneauth1, msrest, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-social-auth, pyvmomi, requests-futures, requests-oauthlib, slackclient
+requests==2.11.1
 requestsexceptions==1.2.0 # via os-client-config, shade
 rfc3986==0.4.1 # via oslo.config
 rply==0.7.4 # via baron
 secretstorage==2.3.1 # via keyring
 service-identity==16.0.0
-shade==1.19.0
+shade==1.20.0
 simplejson==3.10.0 # via osc-lib, python-cinderclient, python-neutronclient, python-novaclient
-six==1.10.0 # via asgi-amqp, asgiref, autobahn, cliff, cmd2, cryptography, debtcollector, django-extensions, irc, jaraco.classes, jaraco.collections, jaraco.itertools, jaraco.logging, jaraco.stream, keystoneauth1, more-itertools, munch, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, pygerduty, pyopenssl, pyrad, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, python-social-auth, pyvmomi, shade, slackclient, stevedore, tacacs-plus, tempora, twilio, txaio, warlock, websocket-client
+six==1.10.0 # via asgi-amqp, asgiref, autobahn, cliff, cmd2, cryptography, debtcollector, django-extensions, irc, jaraco.classes, jaraco.collections, jaraco.itertools, jaraco.logging, jaraco.stream, keystoneauth1, more-itertools, munch, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, pygerduty, pyopenssl, pyrad, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, python-social-auth, pyvmomi, setuptools, shade, slackclient, stevedore, tacacs-plus, tempora, twilio, txaio, warlock, websocket-client
-slackclient==1.0.2
+slackclient==1.0.5
 stevedore==1.21.0 # via cliff, keystoneauth1, openstacksdk, osc-lib, oslo.config, python-designateclient, python-keystoneclient
 suds==0.4 # via psphere
-tacacs_plus==0.1
+tacacs_plus==0.2
 tempora==1.6.1 # via irc, jaraco.logging
-twilio==5.6.0
+twilio==6.1.0
 twisted==16.6.0
-txaio==2.7.0 # via autobahn
+txaio==2.7.1 # via autobahn
 typing==3.6.1 # via m2crypto
 unicodecsv==0.14.1 # via cliff
 uwsgi==2.0.14
 warlock==1.2.0 # via python-glanceclient
 websocket-client==0.40.0 # via slackclient
 wrapt==1.10.10 # via debtcollector, positional, python-glanceclient
-xmltodict==0.10.2
+xmltodict==0.11.0
-zope.interface==4.3.3 # via twisted
+zope.interface==4.4.0 # via twisted
 # The following packages are considered to be unsafe in a requirements file:
-pip==8.1.2
+pip==9.0.1
-setuptools==23.0.0
+setuptools==35.0.2


@@ -1,14 +1,15 @@
-apache-libcloud==1.3.0
+apache-libcloud==2.0.0
 azure==2.0.0rc6
 backports.ssl-match-hostname==3.5.0.1
-kombu==3.0.35
+kombu==3.0.37
-boto==2.45.0
+boto==2.46.1
 python-memcached==1.58
 psphere==0.5.2
-psutil==5.0.0
+psutil==5.2.2
 pyvmomi==6.5
 pywinrm[kerberos]==0.2.2
+requests==2.11.1
 secretstorage==2.3.1
-shade==1.19.0
+shade==1.20.0
-setuptools==23.0.0
+setuptools==35.0.2
-pip==8.1.2
+pip==9.0.1


@@ -7,8 +7,8 @@
 adal==0.4.5 # via msrestazure
 amqp==1.4.9 # via kombu
 anyjson==0.3.3 # via kombu
-apache-libcloud==1.3.0
+apache-libcloud==2.0.0
-appdirs==1.4.3 # via os-client-config, python-ironicclient
+appdirs==1.4.3 # via os-client-config, python-ironicclient, setuptools
 asn1crypto==0.22.0 # via cryptography
 azure-batch==1.0.0 # via azure
 azure-common[autorest]==1.1.4 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage, azure-servicebus, azure-servicemanagement-legacy, azure-storage
@@ -17,33 +17,33 @@ azure-mgmt-compute==0.30.0rc6 # via azure-mgmt
 azure-mgmt-keyvault==0.30.0rc6 # via azure-mgmt
 azure-mgmt-logic==1.0.0 # via azure-mgmt
 azure-mgmt-network==0.30.0rc6 # via azure-mgmt
-azure-mgmt-nspkg==1.0.0 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage
+azure-mgmt-nspkg==2.0.0 # via azure-batch, azure-mgmt-batch, azure-mgmt-compute, azure-mgmt-keyvault, azure-mgmt-logic, azure-mgmt-network, azure-mgmt-redis, azure-mgmt-resource, azure-mgmt-scheduler, azure-mgmt-storage
 azure-mgmt-redis==1.0.0 # via azure-mgmt
 azure-mgmt-resource==0.30.0rc6 # via azure-mgmt
 azure-mgmt-scheduler==1.0.0 # via azure-mgmt
 azure-mgmt-storage==0.30.0rc6 # via azure-mgmt
 azure-mgmt==0.30.0rc6 # via azure
-azure-nspkg==1.0.0 # via azure-common, azure-mgmt-nspkg, azure-storage
+azure-nspkg==2.0.0 # via azure-common, azure-mgmt-nspkg, azure-storage
 azure-servicebus==0.20.3 # via azure
 azure-servicemanagement-legacy==0.20.4 # via azure
 azure-storage==0.33.0 # via azure
 azure==2.0.0rc6
-babel==2.4.0 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
+babel==2.3.4 # via osc-lib, oslo.i18n, python-cinderclient, python-glanceclient, python-neutronclient, python-novaclient, python-openstackclient
 backports.ssl-match-hostname==3.5.0.1
-boto==2.45.0
+boto==2.46.1
-certifi==2017.1.23 # via msrest
+certifi==2017.4.17 # via msrest
 cffi==1.10.0 # via cryptography
-cliff==2.5.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
+cliff==2.6.0 # via osc-lib, python-designateclient, python-neutronclient, python-openstackclient
 cmd2==0.7.0 # via cliff
 cryptography==1.8.1 # via adal, azure-storage, secretstorage
 debtcollector==1.13.0 # via oslo.config, oslo.utils, python-designateclient, python-keystoneclient, python-neutronclient
 decorator==4.0.11 # via shade
-deprecation==1.0 # via openstacksdk
+deprecation==1.0.1 # via openstacksdk
 dogpile.cache==0.6.2 # via python-ironicclient, shade
 enum34==1.1.6 # via cryptography, msrest
 funcsigs==1.0.2 # via debtcollector, oslo.utils
 functools32==3.2.3.post2 # via jsonschema
-futures==3.0.5 # via azure-storage, shade
+futures==3.1.1 # via azure-storage, shade
 idna==2.5 # via cryptography
 ipaddress==1.0.18 # via cryptography, shade
 iso8601==0.1.11 # via keystoneauth1, oslo.utils, python-neutronclient, python-novaclient
@@ -52,34 +52,33 @@ jmespath==0.9.2 # via shade
 jsonpatch==1.15 # via shade, warlock
 jsonpointer==1.10 # via jsonpatch
 jsonschema==2.6.0 # via python-designateclient, python-ironicclient, warlock
-keyring==10.3.1 # via msrestazure
+keyring==10.3.2 # via msrestazure
-keystoneauth1==2.19.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
+keystoneauth1==2.20.0 # via openstacksdk, os-client-config, osc-lib, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, shade
-kombu==3.0.35
+kombu==3.0.37
 monotonic==1.3 # via oslo.utils
 msgpack-python==0.4.8 # via oslo.serialization
-msrest==0.4.6 # via azure-common, msrestazure
+msrest==0.4.7 # via azure-common, msrestazure
 msrestazure==0.4.7 # via azure-common
 munch==2.1.1 # via shade
 netaddr==0.7.19 # via oslo.config, oslo.utils, python-neutronclient
 netifaces==0.10.5 # via oslo.utils, shade
-ntlm-auth==1.0.2 # via requests-ntlm
+ntlm-auth==1.0.3 # via requests-ntlm
 oauthlib==2.0.2 # via requests-oauthlib
-openstacksdk==0.9.14 # via python-openstackclient
+openstacksdk==0.9.16 # via python-openstackclient
-ordereddict==1.1 # via ntlm-auth
-os-client-config==1.26.0 # via openstacksdk, osc-lib, python-neutronclient, shade
+os-client-config==1.27.0 # via openstacksdk, osc-lib, python-neutronclient, shade
-osc-lib==1.3.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
+osc-lib==1.6.0 # via python-designateclient, python-ironicclient, python-neutronclient, python-openstackclient
-oslo.config==3.24.0 # via python-keystoneclient
+oslo.config==4.1.0 # via python-keystoneclient
 oslo.i18n==3.15.0 # via osc-lib, oslo.config, oslo.utils, python-cinderclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
 oslo.serialization==2.18.0 # via python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient
 oslo.utils==3.25.0 # via osc-lib, oslo.serialization, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient
-packaging==16.8 # via cryptography
+packaging==16.8 # via cryptography, setuptools
-pbr==2.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
+pbr==3.0.0 # via cliff, debtcollector, keystoneauth1, openstacksdk, osc-lib, oslo.i18n, oslo.serialization, oslo.utils, positional, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, python-openstackclient, requestsexceptions, shade, stevedore
 positional==1.1.1 # via keystoneauth1, python-keystoneclient
 prettytable==0.7.2 # via cliff, python-cinderclient, python-glanceclient, python-ironicclient, python-novaclient
 psphere==0.5.2
-psutil==5.0.0
+psutil==5.2.2
 pycparser==2.17 # via cffi
-pyjwt==1.4.2 # via adal
+pyjwt==1.5.0 # via adal
 pykerberos==1.1.14 # via requests-kerberos
 pyparsing==2.2.0 # via cliff, cmd2, oslo.utils, packaging
 python-cinderclient==2.0.1 # via python-openstackclient, shade
@@ -89,9 +88,9 @@ python-glanceclient==2.6.0 # via python-openstackclient
 python-ironicclient==1.12.0 # via shade
 python-keystoneclient==3.10.0 # via python-neutronclient, python-openstackclient, shade
 python-memcached==1.58
-python-neutronclient==6.1.0 # via shade
+python-neutronclient==6.2.0 # via shade
-python-novaclient==7.1.0 # via python-openstackclient, shade
+python-novaclient==8.0.0 # via python-openstackclient, shade
-python-openstackclient==3.9.0 # via python-ironicclient
+python-openstackclient==3.11.0 # via python-ironicclient
 pytz==2017.2 # via babel, oslo.serialization, oslo.utils
 pyvmomi==6.5
 pywinrm[kerberos]==0.2.2
@@ -99,20 +98,20 @@ pyyaml==3.12 # via cliff, os-client-config, psphere, python-ironicc
 requests-kerberos==0.11.0 # via pywinrm
 requests-ntlm==1.0.0 # via pywinrm
 requests-oauthlib==0.8.0 # via msrest
-requests==2.12.5 # via adal, azure-servicebus, azure-servicemanagement-legacy, azure-storage, keystoneauth1, msrest, python-cinderclient, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-neutronclient, python-novaclient, pyvmomi, pywinrm, requests-kerberos, requests-ntlm, requests-oauthlib
+requests==2.11.1
 requestsexceptions==1.2.0 # via os-client-config, shade
 rfc3986==0.4.1 # via oslo.config
 secretstorage==2.3.1
-shade==1.19.0
+shade==1.20.0
 simplejson==3.10.0 # via osc-lib, python-cinderclient, python-neutronclient, python-novaclient
-six==1.10.0 # via cliff, cmd2, cryptography, debtcollector, keystoneauth1, munch, ntlm-auth, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, pyvmomi, pywinrm, shade, stevedore, warlock
+six==1.10.0 # via cliff, cmd2, cryptography, debtcollector, keystoneauth1, munch, ntlm-auth, openstacksdk, osc-lib, oslo.config, oslo.i18n, oslo.serialization, oslo.utils, packaging, python-cinderclient, python-dateutil, python-designateclient, python-glanceclient, python-ironicclient, python-keystoneclient, python-memcached, python-neutronclient, python-novaclient, python-openstackclient, pyvmomi, pywinrm, setuptools, shade, stevedore, warlock
 stevedore==1.21.0 # via cliff, keystoneauth1, openstacksdk, osc-lib, oslo.config, python-designateclient, python-keystoneclient
 suds==0.4 # via psphere
 unicodecsv==0.14.1 # via cliff
 warlock==1.2.0 # via python-glanceclient
 wrapt==1.10.10 # via debtcollector, positional, python-glanceclient
-xmltodict==0.10.2 # via pywinrm
+xmltodict==0.11.0 # via pywinrm
 # The following packages are considered to be unsafe in a requirements file:
-pip==8.1.2
+pip==9.0.1
-setuptools==23.0.0
+setuptools==35.0.2
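
The pinned files above are pip-compile output: the `# via` annotations record which primary dependency pulled each transitive pin in, and the trailing "unsafe in a requirements file" comment is pip-compile's marker for `pip`/`setuptools`. A minimal sketch of how such a file is regenerated, assuming the conventional pip-tools layout where hand-maintained primary dependencies live in `requirements.in` (the exact flags and file names here are illustrative, not taken from this repo's Makefile):

```shell
# Promote a package to a primary dependency by adding it to requirements.in,
# e.g. jsonschema (now imported directly, per this PR), then recompile:
pip install pip-tools
pip-compile requirements.in            # writes fully pinned requirements.txt
```

This is why moving `jsonschema` from `requirements.txt` to `requirements.in` changes its rendered line from `jsonschema==2.6.0 # via python-designateclient, ...` to a bare `jsonschema==2.6.0`: pip-compile only emits `# via` for pins it derived from something else.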