Sunday, February 28, 2016

Hackery to get RDO Kilo going on Fedora 23


The sequence of hacks required to get RDO Kilo running on Fedora 23 stems from the many areas where Nova still has a hard dependency on Glance v1. Per https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/use-glance-v2-api.html
The key areas that still use Glance v1 include:
  • Nova’s Image API, that is effectively just a proxy of Glance’s v1 API
  • Virt driver specific download of images and upload of images
  • CRUD operations on image metadata
  • Creating and deleting snapshots in common code.
According to https://bugs.launchpad.net/glance/+bug/1476770
the status for python-glanceclient is still "In Progress". So, the only option available right now (for myself, of course) to get things working is to convert the abandoned commit into an F23 downstream temporary patch and test whether it helps.

Versions on python-urllib3 :-

RDO Kilo F23 -  1.13.1
RDO Kilo CentOS 7 - 1.10.1
RDO Liberty CentOS 7 - 1.10.4

I believe that is the reason why the bug manifests on Fedora 23: 1.13.1 > 1.11.X.
The patch helps on F23, per its description :-

Subject: [PATCH] Convert headers to lower-case when parsing metadata

In urllib3 1.11.0 (and consequently, requests 2.8.0) handling was added
to preserve the casing of headers as they arrived over the wire. This
means that when iterating over a CaseInsensitiveDict from requests (as
we do in _image_meta_from_headers) we no longer get keys that are
already all lowercase. As a result, we need to ensure that the header
key is lowercased before checking its properties.
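The idea of the patch can be sketched in a few lines of Python. The helper below is a simplified stand-in for glanceclient's `_image_meta_from_headers`, not the exact upstream code; the header names are illustrative:

```python
# Simplified sketch of the patch's idea: urllib3 >= 1.11 preserves the
# on-the-wire casing of header names, so metadata parsing must lowercase
# each key before matching it against the expected 'x-image-meta-*' form.
# NOT the exact glanceclient code, just the same normalization step.

def image_meta_from_headers(headers):
    """Extract image metadata from HTTP response headers."""
    meta = {'properties': {}}
    for key, value in headers.items():
        key = key.lower()  # the fix: normalize casing first
        if key.startswith('x-image-meta-property-'):
            meta['properties'][key[len('x-image-meta-property-'):]] = value
        elif key.startswith('x-image-meta-'):
            meta[key[len('x-image-meta-'):]] = value
    return meta

# Headers as urllib3 1.13.1 would hand them over: wire casing preserved.
wire_headers = {
    'X-Image-Meta-Name': 'cirros',
    'X-Image-Meta-Size': '13287936',
    'X-Image-Meta-Property-Os_distro': 'cirros',
}
print(image_meta_from_headers(wire_headers))
```

Without the `key.lower()` call, none of the prefix checks match on urllib3 1.13.1 and the metadata silently comes back empty.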

As of this writing I've got positive results on an AIO RDO Kilo setup on F23.

1. Download, rebuild and install pm-utils-1.4.1-31.fc22.src.rpm on F23
2. Download and rebuild python-glanceclient-0.17.0-3.fc23.src.rpm with the patch,
    bumping 0.17.0-3 up to 0.17.0-4. The RPMs below are supposed to be
    installed after the packstack setup

  [root@ip-192-169-142-54 ~]# ls -l *.rpm
    -rw-r--r--. 1 root root 140174 Feb 28 10:57 python-glanceclient-0.17.0-4.fc23.noarch.rpm
    -rw-r--r--. 1 root root 122010 Feb 28 10:57 python-glanceclient-doc-0.17.0-4.fc23.noarch.rpm

 3. Enable the RDO repo https://repos.fedorapeople.org/repos/openstack/openstack-kilo/f22/

 4. Run `dnf -y install openstack-packstack`
 5. To be able to use the Swift back end for Glance :-

# yum install -y \
openstack-swift-object openstack-swift-container \
openstack-swift-account openstack-swift-proxy openstack-utils \
rsync xfsprogs

mkfs.xfs /dev/vdb1
mkdir -p /srv/node/vdb1
echo "/dev/vdb1 /srv/node/vdb1 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/vdc1
mkdir -p /srv/node/vdc1
echo "/dev/vdc1 /srv/node/vdc1 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/vdd1
mkdir -p /srv/node/vdd1
echo "/dev/vdd1 /srv/node/vdd1 xfs defaults 1 2" >> /etc/fstab

mount -a
chown -R swift:swift /srv/node
restorecon -R /srv/node

  6. Apply the patch from https://bugzilla.redhat.com/show_bug.cgi?id=1234042
      mentioned in Comment 6.

 Insert as the first line of ovs_redhat_el6.rb :-
     require File.expand_path(File.join(File.dirname(__FILE__), '.','ovs_redhat.rb'))
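If you prefer to script that edit, the prepend can be done with a few lines of Python. The sketch below works on file contents as a string; the real path to ovs_redhat_el6.rb varies by packstack version, so it is left as a placeholder:

```python
# Minimal sketch: prepend the require line to ovs_redhat_el6.rb.
# The exact file location under packstack's puppet modules varies by
# version; find it on your system before running anything like this.

REQUIRE_LINE = ("require File.expand_path(File.join(File.dirname(__FILE__), "
                "'.','ovs_redhat.rb'))\n")

def prepend_line(body, line):
    """Return body with line prepended, unless it is already first (idempotent)."""
    return body if body.startswith(line) else line + body

# Usage against the real file would look like:
#   path = '/path/to/ovs_redhat_el6.rb'   # placeholder, locate it yourself
#   with open(path) as f:
#       body = f.read()
#   with open(path, 'w') as f:
#       f.write(prepend_line(body, REQUIRE_LINE))
```

Making the helper idempotent means re-running it after a failed packstack attempt does not stack duplicate require lines.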

  7. Comment out the first line in /etc/httpd/conf.d/mod_dnssd.conf


*******************************************
Create answer-file as follows
*******************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.54
CONFIG_COMPUTE_HOSTS=192.169.142.54
CONFIG_NETWORK_HOSTS=192.169.142.54
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.54
CONFIG_SAHARA_HOST=192.169.142.54
CONFIG_USE_EPEL=n
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-54.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-54.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.54
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.54
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=1d1ac704de2f4d47
CONFIG_KEYSTONE_DB_PW=3be4677750ac4804
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=a1cd54646aa24ce79e900da43816690f
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=457b511a7ea04c19
CONFIG_KEYSTONE_DEMO_PW=ef3d90d60c1d4578
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.54
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=2aec3b4e90674863
CONFIG_GLANCE_KS_PW=1baf404b24cd41cc
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=eec9d502035a48cf
CONFIG_CINDER_KS_PW=918d89afa0d3462a
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=2G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PW=7188910f8c884108
CONFIG_NOVA_KS_PW=72b030a0e0df4982
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=em2
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=em1
CONFIG_NOVA_NETWORK_PRIVIF=em2
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=432ebd6f06ed49be
CONFIG_NEUTRON_DB_PW=7fab632699934012
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=8b15dba4e2b745f3
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=10:100
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=dad8835a16e74e2bb86e44cac84ddcd4
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=070f50c3263946f1
CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=c8b8b521f7014c10
CONFIG_SWIFT_STORAGE_SIZE=20G

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=311f2f0e2e2b48ca
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=0d7356fd2dae4954
CONFIG_CEILOMETER_KS_PW=f9c1869022ee4295
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.54
CONFIG_REDIS_MASTER_HOST=192.169.142.54
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=8d70d9cab47c4add

****************************************
Post installations steps
****************************************
# yum -y install python-glanceclient-0.17.0-4.fc23.noarch.rpm \
     python-glanceclient-doc-0.17.0-4.fc23.noarch.rpm
# openstack-service restart
# cd /etc/sysconfig/network-scripts
**************************************************
Create ifcfg-br-ex and ifcfg-ens3 as follows
**************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.54"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

# cat ifcfg-ens3
DEVICE="ens3"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***************************
Then run the following script
***************************
#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node
**********************************************************
Switch to the Swift back end for Glance as follows :-
**********************************************************
Update /etc/glance/glance-api.conf as shown below

[DEFAULT]
show_image_direct_url=False
bind_host=0.0.0.0
bind_port=9292
workers=4
backlog=4096
image_cache_dir=/var/lib/glance/image-cache
registry_host=0.0.0.0
registry_port=9191
registry_client_protocol=http
debug=False
verbose=True
log_file=/var/log/glance/api.log
log_dir=/var/log/glance
use_syslog=False
syslog_log_facility=LOG_USER
use_stderr=True
notification_driver =messaging
amqp_durable_queues=False
# default_store = swift
# swift_store_auth_address = http://192.169.142.54:5000/v2.0/
# swift_store_user = services:glance
# swift_store_key = 6bc67e33258c4228
# swift_store_create_container_on_put = True

[database]
connection=mysql://glance:41264fc52ffd4fe8@192.169.142.54/glance
idle_timeout=3600

[glance_store]
default_store = swift
stores = glance.store.swift.Store
swift_store_auth_address = http://192.169.142.54:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797
swift_store_create_container_on_put = True
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
os_region_name=RegionOne

[image_format]
[keystone_authtoken]
auth_uri=http://192.169.142.54:5000/v2.0
identity_uri=http://192.169.142.54:35357
admin_user=glance
admin_password=6bc67e33258c4228
admin_tenant_name=services
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.169.142.54
rabbit_port=5672
rabbit_hosts=192.169.142.54:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
heartbeat_timeout_threshold=0
heartbeat_rate=2
rabbit_notification_exchange=glance
rabbit_notification_topic=notifications
[oslo_policy]
[paste_deploy]
flavor=keystone
[store_type_location_strategy]
[task]
[taskflow_executor]

The commented-out lines are suggested by [ 1 ]; however, to finally succeed
we had to follow [ 2 ]. Also, per [ 1 ], run as admin :-


# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
  --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

In this particular case :-

[root@ServerCentOS7 ~(keystone_admin)]# keystone tenant-list | grep services
 | 6bbac43abaf04cefb53c259a6afc285b |   services  |   True  |

[root@ServerCentOS7 ~(keystone_admin)]# keystone user-list | grep glance
| 99ed8b1fbd1c428f917f570d4cae75f4 |   glance   |   True  |   glance@localhost   |

[root@ServerCentOS7 ~(keystone_admin)]# keystone role-list | grep ResellerAdmin
| e229166b8ab24900933c23ea88c8b673 |  ResellerAdmin   |


# keystone user-role-add --tenant_id=6bbac43abaf04cefb53c259a6afc285b  \       
    --user=99ed8b1fbd1c428f917f570d4cae75f4  \
    --role=e229166b8ab24900933c23ea88c8b673


The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer file, i.e. the Keystone glance password used for authentication.
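That correspondence is easy to get wrong when editing glance-api.conf by hand, so it is worth double-checking. The sketch below compares the two values using inline sample text; the parsing is deliberately simplified and the file contents are stand-ins for the real answer file and /etc/glance/glance-api.conf:

```python
# Hedged sketch: verify that glance-api.conf's swift_store_key matches the
# answer file's CONFIG_GLANCE_KS_PW. Inline samples stand in for the real
# files; a naive line scan replaces proper INI parsing.

def get_value(text, key):
    """Return the value of the first 'key = value' (or 'key=value') line."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(key):
            return line.split('=', 1)[1].strip()
    return None

answer_file = "CONFIG_GLANCE_KS_PW=f6a9398960534797\n"
glance_conf = "[glance_store]\nswift_store_key = f6a9398960534797\n"

ks_pw = get_value(answer_file, 'CONFIG_GLANCE_KS_PW')
store_key = get_value(glance_conf, 'swift_store_key')
print(ks_pw == store_key)  # True when Glance can authenticate to Swift
```

If the two values diverge, Glance image uploads to the Swift back end fail with 401 errors from Keystone.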

***************
Next step is
***************

# openstack-service restart glance

[root@ServerCentOS72 ~(keystone_admin)]# systemctl | grep glance
openstack-glance-api.service                                                        loaded active running   OpenStack Image Service (code-named Glance) API server
openstack-glance-registry.service                                                   loaded active running   OpenStack Image Service (code-named Glance) Registry server


which results in your Swift storage working as the Glance back end on RDO Kilo

********************************************************************************
In case of deployment to a Storage Node, please be aware of
http://silverskysoft.com/open-stack-xwrpr/2015/07/enabling-openstack-swift-object-storage-service/
*********************************************************************************

  
  

Saturday, February 20, 2016

Setup Swift as Glance backend on RDO Liberty Multi node deployment (CentOS 7.2)

The post below presumes that your testing Swift storage is on a VM emulating a
Storage Server (192.169.142.157) which has 3 devices attached
(/dev/vdb, /dev/vdc, /dev/vdd), which allows answer-file entries like the ones
below. In case you need not just a POC but a real HA Swift deployment, install RDO with CONFIG_INSTALL_SWIFT=n and deploy Swift Proxy and Storage nodes per http://www.server-world.info/en/note?os=CentOS_7&p=openstack_liberty2&f=4
on your management network - one Swift proxy node and at least 3 Swift Storage nodes.

# Specify 'y' to install OpenStack Object Storage (swift). ['y', 'n']
CONFIG_SWIFT_INSTALL=y
# unsupported options will not be fixed before the next major release.
# ['y', 'n']
CONFIG_UNSUPPORTED=y
# (Unsupported!) Server on which to install OpenStack services
# specific to storage servers such as Image or Block Storage services.
CONFIG_STORAGE_HOST=192.169.142.157
. . . . .

# Comma-separated list of devices to use as storage device for Object
# Storage. Each entry must take the format /path/to/dev (for example,
# specifying /dev/vdb installs /dev/vdb as the Object Storage storage
# device; Packstack does not create the filesystem, you must do this
# first). If left empty, Packstack creates a loopback device for test
# setup.
CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1

# Number of Object Storage storage zones; this number MUST be no
# larger than the number of configured storage devices.
CONFIG_SWIFT_STORAGE_ZONES=3

# Number of Object Storage storage replicas; this number MUST be no
# larger than the number of configured storage zones.
CONFIG_SWIFT_STORAGE_REPLICAS=3

# File system type for storage nodes. ['xfs', 'ext4']
CONFIG_SWIFT_STORAGE_FSTYPE=xfs

# Custom seed number to use for swift_hash_path_suffix in
# /etc/swift/swift.conf. If you do not provide a value, a seed number
# is automatically generated.
CONFIG_SWIFT_HASH=a60aacbedde7429a

# Size of the Object Storage loopback file storage device.
CONFIG_SWIFT_STORAGE_SIZE=20G
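The sizing rules quoted in those comments (zones must not exceed devices, replicas must not exceed zones) can be expressed as a small sanity check. The helper below is just an illustration of the rule, not part of packstack:

```python
# Sanity-check the Swift sizing constraints from the answer-file comments:
# storage zones <= storage devices, and replicas <= zones.
# Illustrative helper, not packstack code.

def check_swift_layout(devices, zones, replicas):
    if zones > len(devices):
        raise ValueError("zones (%d) > devices (%d)" % (zones, len(devices)))
    if replicas > zones:
        raise ValueError("replicas (%d) > zones (%d)" % (replicas, zones))
    return True

# The values used in this post: 3 devices, 3 zones, 3 replicas.
print(check_swift_layout(['/dev/vdb1', '/dev/vdc1', '/dev/vdd1'], 3, 3))
```

With a single attached disk, for example, zones would have to drop to 1 and replicas along with it.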

Therefore, before running packstack on the Storage Node :-

# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum update -y

To get the Swift account services installed, preinstall the packages below; this
does no harm to packstack during its run

# yum install -y \
openstack-swift-object openstack-swift-container \
openstack-swift-account openstack-swift-proxy openstack-utils \
rsync xfsprogs

mkfs.xfs /dev/vdb1
mkdir -p /srv/node/vdb1
echo "/dev/vdb1 /srv/node/vdb1 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/vdc1
mkdir -p /srv/node/vdc1
echo "/dev/vdc1 /srv/node/vdc1 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/vdd1
mkdir -p /srv/node/vdd1
echo "/dev/vdd1 /srv/node/vdd1 xfs defaults 1 2" >> /etc/fstab

mount -a
chown -R swift:swift /srv/node
restorecon -R /srv/node

**************************************
Answer file used for deployment
**************************************

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_SAHARA_INSTALL=n
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=n
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=y
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAMES=
CONFIG_STORAGE_HOST=192.169.142.157
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://12.0.0.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_PCI_ALIAS=
CONFIG_NOVA_PCI_PASSTHROUGH_WHITELIST=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_ML2_SUPPORTED_PCI_VENDOR_DEVS=['15b3:1004', '8086:10ca']
CONFIG_NEUTRON_ML2_SRIOV_AGENT_REQUIRED=n
CONFIG_NEUTRON_ML2_SRIOV_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=a25b5ece9db24e2aba8d3a2b4d908ca5
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=/dev/vdb1,/dev/vdc1,/dev/vdd1
CONFIG_SWIFT_STORAGE_ZONES=3
CONFIG_SWIFT_STORAGE_REPLICAS=3
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=20G

CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=38b3f69c8eaf4750
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=PW_PLACEHOLDER


Due to the deprecation of CONFIG_SWIFT_PROXY_HOST and CONFIG_SWIFT_STORAGE_HOST, all Swift services run on the
same host, which has 3 different disks for the 3 replicas.

In case you need a real HA Swift deployment, not just a POC, install RDO with
CONFIG_INSTALL_SWIFT=n and deploy the Swift Proxy and Storage
nodes per http://www.server-world.info/en/note?os=CentOS_7&p=openstack_liberty2&f=4
on your management network - one Swift proxy node and at least 3 Swift storage nodes.

When done, update /etc/glance/glance-api.conf on the Storage Node as shown below (192.169.142.127 is the IP of my Controller, i.e. the Keystone hosting node) :-

[DEFAULT]
show_image_direct_url=False
bind_host=0.0.0.0
bind_port=9292
workers=4
backlog=4096
image_cache_dir=/var/lib/glance/image-cache
registry_host=0.0.0.0
registry_port=9191
registry_client_protocol=http
debug=False
verbose=True
log_file=/var/log/glance/api.log
log_dir=/var/log/glance
use_syslog=False
syslog_log_facility=LOG_USER
use_stderr=True
notification_driver =messaging
amqp_durable_queues=False
# default_store = swift
# swift_store_auth_address = http://192.169.142.127:5000/v2.0/
# swift_store_user = services:glance
# swift_store_key = 6bc67e33258c4228
# swift_store_create_container_on_put = True

[database]
connection=mysql://glance:41264fc52ffd4fe8@192.169.142.127/glance
idle_timeout=3600

[glance_store]
default_store = swift
stores = glance.store.swift.Store
swift_store_auth_address = http://192.169.142.127:5000/v2.0/
swift_store_user = services:glance
swift_store_key = f6a9398960534797
swift_store_create_container_on_put = True
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
swift_enable_snet = False
os_region_name=RegionOne

[image_format]
[keystone_authtoken]
auth_uri=http://192.169.142.127:5000/v2.0
identity_uri=http://192.169.142.127:35357
admin_user=glance
admin_password=6bc67e33258c4228
admin_tenant_name=services
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.169.142.127
rabbit_port=5672
rabbit_hosts=192.169.142.127:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
heartbeat_timeout_threshold=0
heartbeat_rate=2
rabbit_notification_exchange=glance
rabbit_notification_topic=notifications
[oslo_policy]
[paste_deploy]
flavor=keystone
[store_type_location_strategy]
[task]
[taskflow_executor]

The commented-out lines are suggested by [ 1 ]; however, to finally succeed
we have to follow [ 2 ]. Also, per [ 1 ], run as admin :-

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
  --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

In particular case :-

[root@ServerCentOS7 ~(keystone_admin)]# keystone tenant-list | grep services
 | 6bbac43abaf04cefb53c259a6afc285b |   services  |   True  |

[root@ServerCentOS7 ~(keystone_admin)]# keystone user-list | grep glance
| 99ed8b1fbd1c428f917f570d4cae75f4 |   glance   |   True  |   glance@localhost   |

[root@ServerCentOS7 ~(keystone_admin)]# keystone role-list | grep ResellerAdmin
| e229166b8ab24900933c23ea88c8b673 |  ResellerAdmin   |


# keystone user-role-add --tenant_id=6bbac43abaf04cefb53c259a6afc285b \
    --user=99ed8b1fbd1c428f917f570d4cae75f4  \
    --role=e229166b8ab24900933c23ea88c8b673


The value f6a9398960534797 corresponds to CONFIG_GLANCE_KS_PW in the answer-file, i.e. the Keystone password glance uses for authentication.

***************
Next step is
***************

# openstack-service restart glance

[root@ServerCentOS72 ~(keystone_admin)]# systemctl | grep glance
openstack-glance-api.service                                                        loaded active running   OpenStack Image Service (code-named Glance) API server
openstack-glance-registry.service                                                   loaded active running   OpenStack Image Service (code-named Glance) Registry server


After that, your Swift storage works as the Glance back end on RDO Liberty (CentOS 7.2). To verify the updates done to glance-api.conf :-

# wget \
https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
 
Then run as admin :-
 
# glance image-create --name ubuntu-trusty-x86_64 --disk-format=qcow2 \
  --container-format=bare --file trusty-server-cloudimg-amd64-disk1.img \
  --progress
 
Create keystonerc_glance as shown below to make sure that glance now
uploads the glance and sahara images to Swift Object Storage
 
[root@ip-192-169-142-127 ~(keystone_admin)]# cat  keystonerc_glance
unset OS_SERVICE_TOKEN
export OS_USERNAME=glance
export OS_PASSWORD=f6a9398960534797
export OS_AUTH_URL=http://192.169.142.127:5000/v2.0
export PS1='[\u@\h \W(keystone_glance)]\$ '
export OS_TENANT_NAME=services
export OS_REGION_NAME=RegionOne
 
  
 
  `df -h` output on storage node before and after uploading 
  sahara-liberty-vanilla-2.7.1-ubuntu-14.04.qcow2 ( 1.1 GB)
 
  [root@ip-192-169-142-127 ~(keystone_admin)]# glance image-create \
             --name sahara-liberty-vanilla-2.7.1-ubuntu-14.04 --disk-format=qcow2 \
             --container-format=bare --file sahara-liberty-vanilla-2.7.1-ubuntu-14.04.qcow2 \
             --progress

[=============================>] 100%
+------------------+-------------------------------------------+
| Property         | Value                                     |
+------------------+-------------------------------------------+
| checksum         | 3da49911332fc46db0c5fb7c197e3a77          |
| container_format | bare                                      |
| created_at       | 2016-02-20T17:34:00Z                      |
| disk_format      | qcow2                                     |
| id               | 72a915c3-14e2-45fb-9048-285e30f05792      |
| min_disk         | 0                                         |
| min_ram          | 0                                         |
| name             | sahara-liberty-vanilla-2.7.1-ubuntu-14.04 |
| owner            | 1257e963f6be4123b9b5ab9a2d73b5e4          |
| protected        | False                                     |
| size             | 1181876224                                |
| status           | active                                    |
| tags             | []                                        |
| updated_at       | 2016-02-20T17:35:28Z                      |
| virtual_size     | None                                      |
| visibility       | private                                   |
+------------------+-------------------------------------------+
 
 
[root@ip-192-169-142-127 ~(keystone_glance)]# glance image-list
+--------------------------------------+-------------------------------------------+
| ID                                   | Name                                      |
+--------------------------------------+-------------------------------------------+
| 8a209266-bc15-40cd-89ad-d63cc6c2ff3c | cirros                                    |
| 72a915c3-14e2-45fb-9048-285e30f05792 | sahara-liberty-vanilla-2.7.1-ubuntu-14.04 |
| 9ab4ef74-54f8-4f50-a862-bb8b10fea7a8 | ubuntu-trusty-x86_64                      |
| 0adaaeae-81e8-435d-99bb-fe7e6f7769d3 | Ubuntu1510Cloud                           |
| 6f8fd347-cb8b-4221-bd51-91d272f10cb8 | VF23Cloud                                 |
+--------------------------------------+-------------------------------------------+
[root@ip-192-169-142-127 ~(keystone_glance)]# swift list  glance| \
                          grep 72a915c3-14e2-45fb-9048-285e30f05792
72a915c3-14e2-45fb-9048-285e30f05792
72a915c3-14e2-45fb-9048-285e30f05792-00001
72a915c3-14e2-45fb-9048-285e30f05792-00002
72a915c3-14e2-45fb-9048-285e30f05792-00003
72a915c3-14e2-45fb-9048-285e30f05792-00004
72a915c3-14e2-45fb-9048-285e30f05792-00005
72a915c3-14e2-45fb-9048-285e30f05792-00006
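As a sanity check (a sketch of my own, not output from glance itself): with swift_store_large_object_chunk_size = 200 (MB) from the glance-api.conf above, the 1181876224-byte sahara image should be split into ceil(size / chunk) segments, which matches the manifest object plus the 6 numbered segment objects in the `swift list` output above:

```python
import math

# Values taken from glance-api.conf and the `glance image-create` output above
image_size = 1181876224                # bytes (the sahara image "size" field)
chunk_size = 200 * 1024 * 1024         # swift_store_large_object_chunk_size = 200 MB

# Number of -0000N segment objects Swift should hold for this image
segments = int(math.ceil(image_size / float(chunk_size)))
print(segments)                        # -> 6
```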

Sunday, February 14, 2016

Setup Swift as Glance backend on RDO Liberty (CentOS 7.2)

The post below presumes that your testing Swift storage is located somewhere
on the workstation (say /dev/sdb1), is about 25 GB (XFS), and that the following steps have been done before running packstack (AIO mode, for testing) :-


# yum install -y https://www.rdoproject.org/repos/rdo-release.rpm
# yum update -y
# yum install -y openstack-packstack

To get the swift account set up, preinstall the following packages; this does no harm to
packstack during its run :-
# yum install -y \
openstack-swift-object openstack-swift-container \
openstack-swift-account openstack-swift-proxy openstack-utils \
rsync xfsprogs

mkfs.xfs /dev/sdb1
mkdir -p /srv/node/sdb1
echo "/dev/sdb1 /srv/node/sdb1 xfs defaults 1 2" >> /etc/fstab
mount -a
chown -R swift:swift /srv/node
restorecon -R /srv/node

*********************************************
Schema with 2 replicas on single host :-
*********************************************

mkfs.xfs /dev/sdb6
mkdir -p /srv/node/sdb6
echo "/dev/sdb6 /srv/node/sdb6 xfs defaults 1 2" >> /etc/fstab

mkfs.xfs /dev/sda6
mkdir -p /srv/node/sda6
echo "/dev/sda6 /srv/node/sda6 xfs defaults 1 2" >> /etc/fstab

mount -a
chown -R swift:swift /srv/node
restorecon -R /srv/node

*****************
Answer-file :-
*****************

CONFIG_SWIFT_INSTALL=y
CONFIG_SWIFT_KS_PW=7de571599d894b86
CONFIG_SWIFT_STORAGES=/dev/sdb6,/dev/sda6
CONFIG_SWIFT_STORAGE_ZONES=2
CONFIG_SWIFT_STORAGE_REPLICAS=2
CONFIG_SWIFT_STORAGE_FSTYPE=xfs
CONFIG_SWIFT_HASH=eb150786b84346dd
CONFIG_SWIFT_STORAGE_SIZE=50G


would work as well.

So CONFIG_SWIFT_STORAGES=/dev/sdb1 was set before running packstack.

When done, update /etc/glance/glance-api.conf as shown below
(192.168.1.47 is the IP of my Controller, i.e. the Keystone hosting node) :-

[DEFAULT]
show_image_direct_url=False
bind_host=0.0.0.0
bind_port=9292
workers=4
backlog=4096
image_cache_dir=/var/lib/glance/image-cache
registry_host=0.0.0.0
registry_port=9191
registry_client_protocol=http
debug=False
verbose=True
log_file=/var/log/glance/api.log
log_dir=/var/log/glance
use_syslog=False
syslog_log_facility=LOG_USER
use_stderr=True
notification_driver =messaging
amqp_durable_queues=False
# default_store = swift
# swift_store_auth_address = http://192.168.1.47:5000/v2.0/
# swift_store_user = services:glance
# swift_store_key = 6bc67e33258c4228
# swift_store_create_container_on_put = True
# stores=glance.store.swift.Store
[database]
connection=mysql://glance:c6ce03f4464c45cc@192.168.1.47/glance
idle_timeout=3600

[glance_store]
default_store = swift
stores = glance.store.swift.Store
swift_store_auth_address = http://192.168.1.47:5000/v2.0/
swift_store_user = services:glance
swift_store_key = 6bc67e33258c4228
swift_store_create_container_on_put = True
os_region_name=RegionOne

[image_format]
[keystone_authtoken]
auth_uri=http://192.168.1.47:5000/v2.0
identity_uri=http://192.168.1.47:35357
admin_user=glance
admin_password=6bc67e33258c4228
admin_tenant_name=services
[matchmaker_redis]
[matchmaker_ring]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_qpid]
[oslo_messaging_rabbit]
rabbit_host=192.168.1.47
rabbit_port=5672
rabbit_hosts=192.168.1.47:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
heartbeat_timeout_threshold=0
heartbeat_rate=2
rabbit_notification_exchange=glance
rabbit_notification_topic=notifications
[oslo_policy]
[paste_deploy]
flavor=keystone
[store_type_location_strategy]
[task]
[taskflow_executor]

The commented-out lines are suggested by [ 1 ]; however, to finally succeed
we have to follow [ 2 ].

Also, per [ 1 ], run as admin :-

# keystone user-role-add --tenant_id=$UUID_SERVICES_TENANT \
  --user=$UUID_GLANCE_USER --role=$UUID_ResellerAdmin_ROLE

In particular case :-

[root@ServerCentOS7 ~(keystone_admin)]# keystone tenant-list | grep services
 | 6bbac43abaf04cefb53c259a6afc285b |   services  |   True  |

[root@ServerCentOS7 ~(keystone_admin)]# keystone user-list | grep glance
| 99ed8b1fbd1c428f917f570d4cae75f4 |   glance   |   True  |   glance@localhost   |

[root@ServerCentOS7 ~(keystone_admin)]# keystone role-list | grep ResellerAdmin
| e229166b8ab24900933c23ea88c8b673 |  ResellerAdmin   |


# keystone user-role-add --tenant_id=6bbac43abaf04cefb53c259a6afc285b \
    --user=99ed8b1fbd1c428f917f570d4cae75f4  \
    --role=e229166b8ab24900933c23ea88c8b673


The value 6bc67e33258c4228 corresponds to CONFIG_GLANCE_KS_PW in the answer-file.

***************
Next step is
***************

# openstack-service restart glance

[root@ServerCentOS72 ~(keystone_admin)]# systemctl | grep glance
openstack-glance-api.service                                                        loaded active running   OpenStack Image Service (code-named Glance) API server
openstack-glance-registry.service                                                   loaded active running   OpenStack Image Service (code-named Glance) Registry server


After that, your Swift storage works as the Glance back end on RDO Liberty (CentOS 7.2).

Verification of updates done to glance-api.conf

# wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
 
Then run as admin :-
 
# glance image-create --name ubuntu-trusty-x86_64 --disk-format=qcow2 \
  --container-format=bare --file trusty-server-cloudimg-amd64-disk1.img \
  --progress
 
Create keystonerc_glance as shown below to make sure that glance now
uploads the glance and sahara images to Swift Object Storage
 
# cat  keystonerc_glance
export OS_USERNAME=glance
export OS_PASSWORD=6bc67e33258c4228
export OS_TENANT_NAME=services
export OS_AUTH_URL=http://192.168.1.47:5000/v2.0
export PS1='[\u@\h \W(keystone_glance)]\$ '
export OS_REGION_NAME=RegionOne
 
 
 
The same file sahara-liberty-vanilla-2.7.1-ubuntu-14.04.qcow2, uploaded via `glance image-create`
on an i7 4790 / 16 GB box, doesn't get fragmented :-
 
[root@ServerCentOS7 ~(keystone_glance)]# glance image-list
+--------------------------------------+--------------------------------------+
| ID                                   | Name                                 |
+--------------------------------------+--------------------------------------+
| 74f3460e-489f-4e5f-8f37-ada1db576c67 | sahara-liberty-vanilla271-ubuntu1404 |
+--------------------------------------+--------------------------------------+
[root@ServerCentOS7 ~(keystone_glance)]# swift list glance
74f3460e-489f-4e5f-8f37-ada1db576c67
 
You can force fragmentation by adding to
 
[glance_store]
. . . . . . . 
 
swift_store_large_object_size = 5120
swift_store_large_object_chunk_size = 200
swift_enable_snet = False 
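As a simplified illustration (my own sketch, not the actual glance_store driver code): an image is stored as a single Swift object unless it exceeds swift_store_large_object_size (MB), in which case it is split into segments of swift_store_large_object_chunk_size MB plus a manifest object:

```python
import math

def swift_object_count(image_size_mb, large_object_size=5120, chunk_size=200):
    """Model of how many objects Swift stores for one image (illustrative only)."""
    if image_size_mb <= large_object_size:
        return 1                                 # one plain object, no manifest
    segments = int(math.ceil(image_size_mb / float(chunk_size)))
    return segments + 1                          # N segments plus the manifest

print(swift_object_count(1127))                          # -> 1 (under the 5120 MB threshold)
print(swift_object_count(1127, large_object_size=1024))  # -> 7 (6 segments + manifest)
```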
 
 
 
  References 
  1. https://www.rdoproject.org/storage/Swift/Liberty/using-swift-for-glance-with-rdo-liberty/ 
  2. http://magicalyak.org/2015/03/20/using-swift-backend-for-glance-on-ubuntu-openstack/ 




 
 

Friday, February 12, 2016

Monday, February 08, 2016

Sahara running Hadoop 4 nodes Cluster (Vanilla 2.1.7 Plugin) on RDO Liberty

  
                              
Node Templates

The VMs' floating IPs will show up in the web reports below.

Verify IP of Master Node - 192.168.1.187

Verify IP of Worker Node - 192.168.1.189

Verify IP of Worker Node - 192.168.1.186

Master Node Processes

Worker Node Processes

Thursday, February 04, 2016

Python API for "boot from image creates new volume" RDO Liberty

The post below addresses several questions posted at ask.openstack.org.
In particular, the code below doesn't require the volume UUID to be hard-coded
to start a server attached to a bootable Cinder LVM volume created from a glance image
whose name is passed to the script via the command line. In the same way, the
Cinder volume name and the instance name may be passed to the script via the CLI.

Place in current directory following files :-

[root@ip-192-169-142-127 api(keystone_admin)]# cat credentials.py
#!/usr/bin/env python
import os

def get_keystone_creds():
    d = {}
    d['username'] = os.environ['OS_USERNAME']
    d['password'] = os.environ['OS_PASSWORD']
    d['auth_url'] = os.environ['OS_AUTH_URL']
    d['tenant_name'] = os.environ['OS_TENANT_NAME']
    return d

def get_nova_creds():
    d = {}
    d['username'] = os.environ['OS_USERNAME']
    d['api_key'] = os.environ['OS_PASSWORD']
    d['auth_url'] = os.environ['OS_AUTH_URL']
    d['project_id'] = os.environ['OS_TENANT_NAME']
    return d
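These helpers just repackage the keystonerc_* environment variables; note that novaclient's v2 Client expects the password under the name api_key and the tenant under project_id. A self-contained check (placeholder values standing in for `source keystonerc_admin`):

```python
import os

def get_nova_creds():
    # Same mapping as in credentials.py above
    return {'username':   os.environ['OS_USERNAME'],
            'api_key':    os.environ['OS_PASSWORD'],
            'auth_url':   os.environ['OS_AUTH_URL'],
            'project_id': os.environ['OS_TENANT_NAME']}

# Placeholder values, not real credentials
os.environ.update({'OS_USERNAME': 'admin',
                   'OS_PASSWORD': 'secret',
                   'OS_AUTH_URL': 'http://192.169.142.127:5000/v2.0',
                   'OS_TENANT_NAME': 'admin'})
creds = get_nova_creds()
print(creds['api_key'])                # -> secret
```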

[root@ip-192-169-142-127 api(keystone_admin)]# cat  startServer.py
#!/usr/bin/env python
import sys
import os
import time
from novaclient.v2.client import Client
from credentials import get_nova_creds

total = len(sys.argv)
cmdargs = str(sys.argv)
print ("The total numbers of args passed to the script: %d " % total)
print ("Args list: %s " % cmdargs)
print ("First argument: %s" % str(sys.argv[1]))

creds = get_nova_creds()
nova = Client(**creds)
if not nova.keypairs.findall(name="oskeyadm0302"):
    with open(os.path.expanduser('~/.ssh/id_rsa.pub')) as fpubkey:
        nova.keypairs.create(name="oskeyadm0302", public_key=fpubkey.read())

# Creating bootable volume

image = nova.images.find(name=str(sys.argv[1]))
flavor = nova.flavors.find(name="m1.small")
volume = nova.volumes.create(5,display_name="Ubuntu1510LVM",
               volume_type="lvms", imageRef=image.id )

# Wait until the volume download is done

status = volume.status
while ( status == 'creating' or status == 'downloading'):
    time.sleep(15)
    print "status: %s" % status
    volume = nova.volumes.get(volume.id)
    status = volume.status
print "status: %s" % status

# Select tenant's network

nova.networks.list()
network = nova.networks.find(label="demo_network1")
nics = [{'net-id': network.id}]

block_dev_mapping = {'vda': volume.id }

# Starting nova instance

instance = nova.servers.create(name="Ubuntu1510Devs", image='',
                  flavor=flavor,
                  availability_zone="nova:ip-192-169-142-137.ip.secureserver.net",
                  key_name="oskeyadm0302", nics=nics,       
                  block_device_mapping=block_dev_mapping)

# Poll at 5 second intervals, until the status is no longer 'BUILD'
status = instance.status
while status == 'BUILD':
    time.sleep(5)
    # Retrieve the instance again so the status field updates
    instance = nova.servers.get(instance.id)
    status = instance.status
print "status: %s" % status

[root@ip-192-169-142-127 api(keystone_admin)]# cat assignFIP.py

#!/usr/bin/env python
import sys
import os
import time
from novaclient.v2.client import Client
from credentials import get_nova_creds

total = len(sys.argv)
cmdargs = str(sys.argv)
print ("The total numbers of args passed to the script: %d " % total)
print ("Args list: %s " % cmdargs)
print ("First argument: %s" % str(sys.argv[1]))

# Assign floating IP for active instance

creds = get_nova_creds()
nova = Client(**creds)
nova.floating_ip_pools.list()
floating_ip = nova.floating_ips.create(nova.floating_ip_pools.list()[0].name)
instance = nova.servers.find(name=str(sys.argv[1]))
instance.add_floating_ip(floating_ip)



[root@ip-192-169-142-127 api(keystone_admin)]# python  startServer.pyc  Ubuntu1510Cloud-image

The total numbers of args passed to the script: 2
Args list: ['startServer.pyc', 'Ubuntu1510Cloud-image']


status: creating
status: downloading
status: downloading
status: available
status: ACTIVE

[root@ip-192-169-142-127 api(keystone_admin)]# nova list
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| ID                                   | Name           | Status  | Task State | Power State | Networks                                 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| e6ffa475-c026-4033-bb83-f3d32e5bf491 | Ubuntu1510Devs | ACTIVE  | -          | Running     | demo_network1=50.0.0.17                  |
| 4ceb4c64-f347-40a8-81a0-d032286cbd16 | VF23Devs0137   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.15, 192.169.142.181 |
| 6c0e29bb-2936-47f3-b94b-a9b00cc5bee6 | VF23Devs0157   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.16, 192.169.142.182 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+

**************************************
Check ports available for server
**************************************

[root@ip-192-169-142-127 api(keystone_admin)]# neutron port-list --device-id e6ffa475-c026-4033-bb83-f3d32e5bf491
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                        |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+
| 53d18ed3-2ec1-487d-a975-0150ebcae23b |      | fa:16:3e:af:a1:21 | {"subnet_id": "5a148b53-780e-4282-8cbf-bf5e05624e5c", "ip_address": "50.0.0.17"} |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+

[root@ip-192-169-142-127 api(keystone_admin)]# python  assignFIP.pyc  \
 Ubuntu1510Devs

[root@ip-192-169-142-127 api(keystone_admin)]# nova list
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| ID                                   | Name           | Status  | Task State | Power State | Networks                                 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+
| e6ffa475-c026-4033-bb83-f3d32e5bf491 | Ubuntu1510Devs | ACTIVE  | -          | Running     | demo_network1=50.0.0.17, 192.169.142.190 |
| 4ceb4c64-f347-40a8-81a0-d032286cbd16 | VF23Devs0137   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.15, 192.169.142.181 |
| 6c0e29bb-2936-47f3-b94b-a9b00cc5bee6 | VF23Devs0157   | SHUTOFF | -          | Shutdown    | private_admin=70.0.0.16, 192.169.142.182 |
+--------------------------------------+----------------+---------+------------+-------------+------------------------------------------+


  

References
1. http://www.ibm.com/developerworks/cloud/library/cl-openstack-pythonapis/
2. http://docs.openstack.org/developer/python-novaclient/api/novaclient.v1_1.volumes.html