Friday, May 29, 2015

RDO Kilo Set up for three Fedora 22 VM Nodes Controller&Network&Compute (ML2&OVS&VXLAN)

************************
UPDATE 08/08/2015
************************
 After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure of RDO Kilo install on F22 changed significantly. Details follow below.

*********************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*********************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack

******************************************************************************
Action to be undertaken on Controller before deployment:
******************************************************************************
As a pre-install step, apply patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
/usr/lib/python2.7/site-packages/packstack/puppet/templates.

Another option is to rebuild openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc23.src.rpm on Fedora 22
with patch 0002-Avoid-running-neutron-db-manage-twice.
Place the patch in SOURCES and update the spec file accordingly.

$ rpmbuild -bb openstack-packstack.spec
$ cd ../RPMS/noarch
# dnf install openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm \
      openstack-packstack-doc-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm \
      openstack-packstack-puppet-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm

You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
A workaround is in comments 6 and 11.
*******************************************************************************
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
*******************************************************************************

SELINUX was switched to permissive mode on all deployment nodes.

# packstack --answer-file=./answer3Node.txt

****************
END UPDATE
****************
     Following below is a brief instruction for a three-node deployment test (Controller && Network && Compute) across Fedora 22 VMs for RDO Kilo, performed on a Fedora 22 host with the QEMU/KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P).
    Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vtep's and external subnets), and the Compute Node VM with two VNICs (management and vtep's subnets).

I avoid using the default libvirt subnet 192.168.122.0/24 for anything related
to the VMs serving as RDO Kilo nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.
 

Three Libvirt networks were created:

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

# cat public.xml
<network>
   <name>public</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.2' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>
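Before feeding each definition to `virsh net-define`, the XML can be sanity-checked. Below is a minimal Python sketch (the `check_net` helper and the trimmed sample are my own illustration, not part of libvirt) that verifies the DHCP range stays inside the bridge subnet and does not hand out the host's own address:

```python
import xml.etree.ElementTree as ET
import ipaddress

def check_net(xml_text):
    """Return a list of problems found in a libvirt network definition."""
    root = ET.fromstring(xml_text)
    ip = root.find('ip')
    host = ipaddress.ip_address(ip.get('address'))
    # netmask notation is accepted by ip_network; strict=False allows host bits
    net = ipaddress.ip_network(
        "%s/%s" % (ip.get('address'), ip.get('netmask')), strict=False)
    rng = ip.find('dhcp/range')
    start = ipaddress.ip_address(rng.get('start'))
    end = ipaddress.ip_address(rng.get('end'))
    problems = []
    if not (start in net and end in net):
        problems.append('DHCP range outside subnet')
    if start <= host <= end:
        problems.append('DHCP range includes host address')
    return problems

vteps = """<network>
  <name>vteps</name>
  <ip address='10.0.0.1' netmask='255.255.255.0'>
    <dhcp><range start='10.0.0.2' end='10.0.0.254'/></dhcp>
  </ip>
</network>"""
print(check_net(vteps))  # []
```

A range starting at the bridge address itself (e.g. start='10.0.0.1') would be flagged here.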

# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes


*********************************************************************************
1. The first Libvirt subnet, "openstackvms", serves as the management network.
All three VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet, "public", simulates the external network. The Network Node is attached to "public"; later on its "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225), this Libvirt subnet gives VMs running on the Compute Node access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.

  


*************************************************
On Hypervisor Host ( Fedora 22)
*************************************************
# iptables -S -t nat 
. . . . . .
-A POSTROUTING -s 172.24.4.224/28 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -j MASQUERADE
. . . . . .
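These rules read as: masquerade anything sourced from 172.24.4.224/28 unless the destination is in the same subnet. A quick Python check of that logic (the `masqueraded` helper is just an illustration of the rule semantics, not an iptables query):

```python
import ipaddress

# The external network packstack creates, matching the POSTROUTING rules above
ext = ipaddress.ip_network('172.24.4.224/28')

def masqueraded(src, dst):
    """True if the -j MASQUERADE rules above would NAT this src/dst pair."""
    return (ipaddress.ip_address(src) in ext
            and ipaddress.ip_address(dst) not in ext)

print(masqueraded('172.24.4.232', '8.8.8.8'))       # True: NATed to the Internet
print(masqueraded('172.24.4.232', '172.24.4.226'))  # False: stays on the subnet
```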
***********************************************************************************
3. The third Libvirt subnet, "vteps", serves for VTEP endpoint simulation. The Network and Compute Node VMs are attached to this subnet.
********************************************************************************


************************************
Answer-file - answer3Node.txt
************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer3Node.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Two options are available here:
CONFIG_KEYSTONE_SERVICE_NAME=httpd
# CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
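The answer file is plain INI with a single [general] section, so it can be inspected programmatically before running packstack. A small Python sketch (the trimmed sample below reuses three keys from the file above):

```python
import configparser

# Trimmed copy of the node-placement keys from answer3Node.txt
sample = """[general]
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
"""

cp = configparser.ConfigParser()
cp.optionxform = str            # keep the CONFIG_* keys upper-case
cp.read_string(sample)

# Collect the node-placement options
hosts = {k: v for k, v in cp['general'].items()
         if k.endswith('_HOST') or k.endswith('_HOSTS')}
print(hosts['CONFIG_CONTROLLER_HOST'])  # 192.169.142.127
```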


**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot

*************************************************
General Three node RDO Kilo system layout
*************************************************



***********************
 Controller Node
***********************
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat /etc/neutron/plugins/ml2/ml2_conf.ini| grep -v ^# | grep -v ^$
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True

   




*********************
Network Node
*********************
[root@ip-192-169-142-147 openvswitch(keystone_admin)]# cat ovs_neutron_plugin.ini | grep -v ^$| grep -v ^#
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver



********************
Compute Node
*******************
[root@ip-192-169-142-137 openvswitch(keystone_admin)]# cat ovs_neutron_plugin.ini | grep -v ^$| grep -v ^#
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.137
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
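Both nodes tunnel tenant traffic over UDP port 4789 using VXLAN. For illustration, a short Python sketch of the 8-byte VXLAN header that carries the 24-bit VNI bounded by vni_ranges = 1001:2000 above (the helpers are my own, following RFC 7348's header layout):

```python
import struct

def vxlan_header(vni):
    """Build the 8-byte VXLAN header: I-flag set, 24-bit VNI, reserved bits zero."""
    assert 0 <= vni < 2 ** 24
    return struct.pack('!II', 0x08000000, vni << 8)

def vxlan_vni(header):
    """Extract the VNI back out of an 8-byte VXLAN header."""
    _, word = struct.unpack('!II', header)
    return word >> 8

hdr = vxlan_header(1001)
print(len(hdr), vxlan_vni(hdr))  # 8 1001
```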

   


   For some reason virt-manager doesn't allow setting up a remote connection to the Spice
   session running locally on the F22 virtualization host 192.168.1.95.

   So from a remote Fedora host run :-

  # ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.95
  # ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.95
  # ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.95

  Then spicy installed on the remote host can connect

   1)  to VM 192.169.142.127
        $ spicy -h localhost -p 5902  
   2)  to VM 192.169.142.147
        $ spicy -h localhost -p 5901
   3) to VM 192.169.142.137
        $ spicy -h localhost -p 5900
   


   Dashboard snapshots

  
  
  


Wednesday, May 27, 2015

How VMs access metadata via qrouter-namespace in Openstack Kilo

This is actually an update for Neutron on Kilo of the original blog entry
http://techbackground.blogspot.ie/2013/06/metadata-via-quantum-router.html
which considered the Quantum implementation on Grizzly.

From my standpoint, the way VMs launched via nova access the nova-api metadata service (and get a proper response from it) causes a lot of confusion, due to a lack of understanding of Neutron's core architecture.

Neutron proxies metadata requests to Nova adding HTTP headers which Nova uses to identify the source instance. Neutron actually uses two proxies to do this: a namespace proxy and a metadata agent. This post shows how a metadata request gets from an instance to the Nova metadata service via a namespace proxy running in a Neutron router.

   


    Here both services, openstack-nova-api and neutron-server, are running on the Controller 192.169.142.127.

[root@ip-192-169-142-127 ~(keystone_admin)]# systemctl | grep nova-api
openstack-nova-api.service  loaded active running   OpenStack Nova API Server

[root@ip-192-169-142-127 ~(keystone_admin)]# systemctl | grep neutron-server
neutron-server.service         loaded active running   OpenStack Neutron Server

Regarding the architecture in general, please see http://lxer.com/module/newswire/view/214009/index.html


*************************************
1.Instance makes request
*************************************
[root@vf22rls ~]# curl http://169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups

[root@vf22rls ~]# ip -4 address show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc fq_codel state UP group default qlen 1000
    inet 50.0.0.15/24 brd 50.0.0.255 scope global dynamic eth0
       valid_lft 85770sec preferred_lft 85770sec

[root@vf22rls ~]#  ip route
default via 50.0.0.1 dev eth0  proto static  metric 100
50.0.0.0/24 dev eth0  proto kernel  scope link  src 50.0.0.15  metric 100


******************************************************************************
2. The namespace proxy receives the request. The default gateway 50.0.0.1 exists within a Neutron router namespace on the Network Node. The neutron-l3-agent started a namespace proxy in this namespace and added some iptables rules to redirect metadata requests to it. There are no special routes, so the request goes out the default gateway; of course, a Neutron router needs to have an interface on the subnet.
*******************************************************************************
Network Node 192.169.142.147
**********************************
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
qdhcp-1bd1f3b8-8e4e-4193-8af0-023f0be4a0fb
qrouter-79801567-a0b5-4780-bfae-ac00e185a148

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-1bd1f3b8-8e4e-4193-8af0-023f0be4a0fb route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         50.0.0.1        0.0.0.0         UG    0      0        0 tapd6da9bb8-0e
50.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 tapd6da9bb8-0e

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list
+--------------------------------------+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name      | external_gateway_info                                                                                                                                                                    | distributed | ha    |
+--------------------------------------+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| 79801567-a0b5-4780-bfae-ac00e185a148 | RouerDemo | {"network_id": "1faee6ae-faea-4775-9c4e-abbf22c5815c", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "35262e52-e288-4244-b107-dd093a2254d5", "ip_address": "172.24.4.227"}]} | False       | False |
+--------------------------------------+-----------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-1feb35d8-b6: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.24.4.227  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fe7b:7be0  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:7b:7b:e0  txqueuelen 0  (Ethernet)
        RX packets 868209  bytes 1181713676 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 413610  bytes 32594119 (31.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-6b8bf870-d4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:feb3:30bf  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:b3:30:bf  txqueuelen 0  (Ethernet)
        RX packets 414032  bytes 32641578 (31.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 868416  bytes 1181753564 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148 iptables-save| grep 9697
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148  netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3210/python2       

[root@ip-192-169-142-147 ~(keystone_admin)]# ps -f --pid 3210 | fold -s -w 82
UID        PID  PPID  C STIME TTY          TIME CMD
neutron   3210     1  0 08:14 ?        00:00:00 /usr/bin/python2
/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/79801567-a0b5-4780-bfae-ac00e185a148.pid
 --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--router_id=79801567-a0b5-4780-bfae-ac00e185a148 --state_path=/var/lib/neutron
--metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988
--verbose
--log-file=neutron-ns-metadata-proxy-79801567-a0b5-4780-bfae-ac00e185a148.log
--log-dir=/var/log/neutron



The namespace proxy adds two HTTP headers to the request:
    X-Forwarded-For: the instance's IP address
    X-Neutron-Router-ID: the UUID of the Neutron router
and proxies it to a Unix domain socket named /var/lib/neutron/metadata_proxy.

***********************************************************************************
3. Metadata agent receives request and queries the Neutron service
The metadata agent listens on this Unix socket. It is a normal
Linux service that runs in the main operating system IP namespace,
and so it is able to reach the Neutron and Nova metadata services.
Its configuration file has all the information required to do so.
***********************************************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# netstat -lxp | grep metadata
unix  2      [ ACC ]     STREAM     LISTENING     36208    1291/python2         /var/lib/neutron/metadata_proxy

[root@ip-192-169-142-147 ~(keystone_admin)]# ps -f --pid 1291 | fold -w 80 -s
UID        PID  PPID  C STIME TTY          TIME CMD
neutron   1291     1  0 08:12 ?        00:00:06 /usr/bin/python2
/usr/bin/neutron-metadata-agent --config-file
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/metadata_agent.ini --config-dir
/etc/neutron/conf.d/neutron-metadata-agent --log-file
/var/log/neutron/metadata-agent.log

[root@ip-192-169-142-147 ~(keystone_admin)]# lsof /var/lib/neutron/metadata_proxy
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
COMMAND    PID    USER   FD   TYPE             DEVICE SIZE/OFF  NODE NAME
neutron-m 1291 neutron    5u  unix 0xffff8801375ecb40      0t0 36208 /var/lib/neutron/metadata_proxy
neutron-m 2764 neutron    5u  unix 0xffff8801375ecb40      0t0 36208 /var/lib/neutron/metadata_proxy
neutron-m 2765 neutron    5u  unix 0xffff8801375ecb40      0t0 36208 /var/lib/neutron/metadata_proxy


[root@ip-192-169-142-147 ~(keystone_admin)]# grep -v '^#\|^\s*$' /etc/neutron/metadata_agent.ini
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =2
metadata_backlog = 4096
cache_url = memory://?default_ttl=5

It reads the X-Forwarded-For and X-Neutron-Router-ID headers in the request and queries the Neutron service to find the ID of the instance that created the request.

***********************************************************************************
4.Metadata agent proxies request to Nova metadata service
It then adds these headers:
    X-Instance-ID: the instance ID returned from Neutron
    X-Instance-ID-Signature: instance ID signed with the shared-secret
    X-Forwarded-For: the instance's IP address
and proxies the request to the Nova metadata service.

5. Nova metadata service receives request
The metadata service was started by nova-api. The handler checks the X-Instance-ID-Signature with the shared key, looks up the data and returns the response which travels back via the two proxies to the instance.
************************************************************************************
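The X-Instance-ID-Signature check in steps 4-5 can be sketched as follows; as far as I understand it, both sides derive an HMAC-SHA256 of the instance ID keyed with metadata_proxy_shared_secret (the instance UUID below is made up for illustration):

```python
import hmac
import hashlib

# Shared secret from metadata_agent.ini / nova.conf above
secret = b'a965cd23ed2f4502'

def sign(instance_id):
    """HMAC-SHA256 of the instance ID keyed with the shared secret."""
    return hmac.new(secret, instance_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical instance UUID, for illustration only
instance_id = '4b24a8b1-2e3f-4c5d-9e6f-abcdef012345'
sig = sign(instance_id)
# Nova recomputes the signature from X-Instance-ID and compares in constant time
print(hmac.compare_digest(sig, sign(instance_id)))  # True
```

If the recomputed signature does not match, the metadata request is rejected, which is why the same shared secret must appear on both the Network Node and the Controller.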


*****************************
Controller 192.169.142.127
*****************************

[root@ip-192-169-142-127 ~(keystone_admin)]# grep metadata /etc/nova/nova.conf | grep -v ^# | grep -v ^$
enabled_apis=ec2,osapi_compute,metadata
metadata_listen=0.0.0.0
metadata_workers=2
metadata_host=192.169.142.127
service_metadata_proxy=True
metadata_proxy_shared_secret=a965cd23ed2f4502



[root@ip-192-169-142-127 ~(keystone_admin)]#  grep metadata /var/log/nova/nova-api.log | tail -15
2015-05-27 10:23:25.232 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/local-ipv4 HTTP/1.1" status: 200 len: 125 time: 0.0006239
2015-05-27 10:23:25.271 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/reservation-id HTTP/1.1" status: 200 len: 127 time: 0.0006211
2015-05-27 10:23:25.309 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/local-hostname HTTP/1.1" status: 200 len: 134 time: 0.0006039
2015-05-27 10:23:25.348 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/security-groups HTTP/1.1" status: 200 len: 116 time: 0.0006092
2015-05-27 10:23:25.386 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ami-launch-index HTTP/1.1" status: 200 len: 117 time: 0.0006170
2015-05-27 10:23:25.424 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ramdisk-id HTTP/1.1" status: 200 len: 120 time: 0.0006149
2015-05-27 10:23:25.463 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/public-hostname HTTP/1.1" status: 200 len: 134 time: 0.0006301
2015-05-27 10:23:25.502 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/hostname HTTP/1.1" status: 200 len: 134 time: 0.0006180
2015-05-27 10:23:25.541 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ami-id HTTP/1.1" status: 200 len: 129 time: 0.0006082
2015-05-27 10:23:25.581 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/kernel-id HTTP/1.1" status: 200 len: 120 time: 0.0006080
2015-05-27 10:23:25.618 3986 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/instance-action HTTP/1.1" status: 200 len: 120 time: 0.0006869
2015-05-27 10:23:25.656 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 129 time: 0.0006471
2015-05-27 10:23:25.696 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/ami-manifest-path HTTP/1.1" status: 200 len: 121 time: 0.0007231
2015-05-27 10:23:25.735 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/instance-type HTTP/1.1" status: 200 len: 124 time: 0.0006821
2015-05-27 10:23:25.775 3987 INFO nova.metadata.wsgi.server [-] 50.0.0.15,192.169.142.147 "GET /latest/meta-data/instance-id HTTP/1.1" status: 200 len: 127 time: 0.0007501

Monday, May 25, 2015

Setup Nova-Docker Driver with RDO Kilo on Fedora 21

    Set up RDO Kilo on Fedora 21 per https://www.rdoproject.org/Quickstart
The next step is to upgrade several python packages via Fedora Rawhide, build the Nova-Docker driver from the stable/kilo branch of http://github.com/stackforge/nova-docker.git, and switch openstack-nova-compute to run that driver.

 # yum -y install git docker-io python-six  fedora-repos-rawhide
 # yum --enablerepo=rawhide install  python-pip python-pbr systemd
 # reboot
 **********************
 Next
 **********************
 # chmod 666 /var/run/docker.sock
 # yum -y install gcc python-devel
 # git clone http://github.com/stackforge/nova-docker.git
 # cd nova-docker
 # git checkout -b kilo origin/stable/kilo
 # git branch -v -a
 * kilo                           d556444 Do not enable swift/ceilometer/sahara
  master                         d556444 Do not enable swift/ceilometer/sahara
  remotes/origin/HEAD            -> origin/master
  remotes/origin/master          d556444 Do not enable swift/ceilometer/sahara
  remotes/origin/stable/icehouse 9045ca4 Fix lockpath for tests
  remotes/origin/stable/juno     b724e65 Fix tests on stable/juno
  remotes/origin/stable/kilo     d556444 Do not enable swift/ceilometer/sahara

 # python setup.py install
 # systemctl start docker
 # systemctl enable docker
 # chmod 666  /var/run/docker.sock
 # mkdir /etc/nova/rootwrap.d

******************************
Update nova.conf
******************************
vi /etc/nova/nova.conf
set "compute_driver = novadocker.virt.docker.DockerDriver"

************************************************
Next, create the docker.filters file:
************************************************
$ vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker

Restart Services
************************
# systemctl restart openstack-nova-compute
# systemctl status openstack-nova-compute
# systemctl restart openstack-glance-api

***************************************************
 Pull docker image and upload it to glance (docker pull && docker save)
***************************************************
 # .  keystonerc_admin 
 #  docker pull rastasheep/ubuntu-sshd:14.04
 #  docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

  
****************************************************************
Enable security rules before launching NovaDocker containers :-
****************************************************************

#  . keystonerc_demo 

# neutron security-group-rule-create --protocol icmp \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

# neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

# neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 80 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default


# neutron security-group-rule-create --protocol tcp \
  --port-range-min 4848 --port-range-max 4848 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default


# neutron security-group-rule-create --protocol tcp \
  --port-range-min 8080 --port-range-max 8080 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default


# neutron security-group-rule-create --protocol tcp \
  --port-range-min 8181 --port-range-max 8181 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default
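
Creating these one-port TCP rules can be folded into a loop; a sketch (the port list is taken from the rules above, and DRY_RUN=1 just prints the commands for preview):

```shell
# Sketch: open a set of single TCP ports on the default security group.
# With DRY_RUN=1 the neutron commands are printed instead of executed.
open_tcp_ports() {
  local port
  for port in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "neutron security-group-rule-create --protocol tcp" \
           "--port-range-min $port --port-range-max $port" \
           "--direction ingress --remote-ip-prefix 0.0.0.0/0 default"
    else
      neutron security-group-rule-create --protocol tcp \
        --port-range-min "$port" --port-range-max "$port" \
        --direction ingress --remote-ip-prefix 0.0.0.0/0 default
    fi
  done
}
# open_tcp_ports 22 80 4848 8080 8181
```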


******************************************************************
Launch a new instance from the uploaded image :-
******************************************************************


#  . keystonerc_demo  

#   nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny
    --nic net-id=private-net-id UbuntuDocker


or launch it via the dashboard
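
Instead of pasting the net-id by hand, it can be looked up from the `neutron net-list` table; a small sketch (the tenant network name "private" is an assumption):

```shell
# Sketch: pull a network id out of `neutron net-list` table output by name.
net_id_by_name() {   # usage: neutron net-list | net_id_by_name private
  # Match the row whose name column equals the given name, print the id column
  awk -v n="$1" '$0 ~ (" " n " ") {print $2; exit}'
}
# NET_ID=$(neutron net-list | net_id_by_name private)
# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
#      --nic net-id=$NET_ID UbuntuDocker
```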

*****************************************************
Before reboot update /etc/rc.d/rc.local as follows :-
*****************************************************
[root@fedora21wks ~(keystone_admin)]# cat  /etc/rc.d/rc.local
#!/bin/bash
chmod 666 /var/run/docker.sock ;
systemctl restart  openstack-nova-compute



[root@fedora21wks ~(keystone_admin)]# chmod a+x   /etc/rc.d/rc.local
 

Starting Tomcat Nova-Docker container, floating IP 192.168.1.158

Starting GlassFish 4.1 Nova-Docker container, floating IP 192.168.1.159

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/01_start-sshd.sh...
No SSH host key available. Generating one...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !
*** Running /etc/my_init.d/database.sh...
Derby database started !
*** Running /etc/my_init.d/run.sh...
Bad Network Configuration.  DNS can not resolve the hostname: 
java.net.UnknownHostException: instance-00000009: instance-00000009: unknown error
Waiting for domain1 to start ..............
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password> 
Enter the new admin password> 
Enter the new admin password again> 
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name>  admin
Enter admin password for user "admin"> 
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:

     admin:0f2HOP1vCiDd

Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop 
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false
Bad Network Configuration.  DNS can not resolve the hostname: 
java.net.UnknownHostException: instance-00000009: instance-00000009: unknown error 
 
 
[root@fedora21wks ~(keystone_admin)]# ssh root@192.168.1.159
root@192.168.1.159's password: 
Last login: Tue May 26 12:38:48 2015 from 192.168.1.75
root@instance-00000009:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 12:18 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root        96     1  0 12:18 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       100     1  0 12:18 ?        00:00:00 /usr/sbin/sshd
root       162     1  0 12:18 ?        00:00:03 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/op
root       426    96  0 12:18 ?        00:00:01 java -jar /opt/glassfish4/bin/../glassfish/lib/cl
root       443   426 12 12:18 ?        00:02:43 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/gla
root      1110   100  0 12:39 ?        00:00:00 sshd: root@pts/0 
root      1112  1110  0 12:39 pts/0    00:00:00 -bash
root      1123  1112  0 12:39 pts/0    00:00:00 ps -ef
root@instance-00000009:~# ifconfig
lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:8479 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8479 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1544705 (1.5 MB)  TX bytes:1544705 (1.5 MB)

ns292e45a2-ad Link encap:Ethernet  HWaddr fa:16:3e:b9:a8:4e  
          inet addr:50.0.0.19  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:feb9:a84e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17453 errors:0 dropped:0 overruns:0 frame:0
          TX packets:9984 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:28521655 (28.5 MB)  TX bytes:5336887 (5.3 MB)

root@instance-00000009:~# 

**************************************************
Running NovaDocker's containers (instances) :- 
**************************************************
 
[root@fedora21wks ~(keystone_admin)]# docker ps
CONTAINER ID        IMAGE                                      COMMAND                CREATED             STATUS              PORTS               NAMES
c5c4594da13d        boris/docker-glassfish41:latest            "/sbin/my_init"        26 minutes ago      Up 26 minutes                           nova-d751e04c-8f9b-4171-988a-cd57fb37574c   
a58781eba98b        tutum/tomcat:latest                        "/run.sh"              4 hours ago         Up 4 hours                              nova-3024f190-8dbb-4faf-b2b0-e627d6faba97   
cd1418845931        eugeneware/docker-wordpress-nginx:latest   "/bin/bash /start.sh   5 hours ago         Up 5 hours                              nova-c0211200-eee9-431e-aa64-db5cdcadad66   
700fe66add76        rastasheep/ubuntu-sshd:14.04               "/usr/sbin/sshd -D"    7 hours ago         Up 7 hours                              nova-9d0ebc1d-5bfa-44d7-990d-957d7fec5ea2   
 

Sunday, May 24, 2015

RDO Kilo Set up for Two VM Nodes (Controller&&Network+Compute) ML2&OVS&VXLAN on Fedora 21

Following below is a brief instruction for a two node deployment test (Controller&&Network + Compute) of RDO Kilo, performed on a Fedora 21 host with KVM/Libvirt hypervisor. Two VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller&&Network VM with two VNICs (management subnet, VTEP's subnet) and the Compute Node VM with two VNICs (management, VTEP's subnets). The management network is finally converted into the public one. SELINUX should be set to permissive mode (vs packstack deployments on CentOS 7.1).
*********************************
Two Libvirt networks created
*********************************
# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>
**********************************************************************
Libvirt's default network 192.168.122.0/24 was used as VTEP's
**********************************************************************
Follow https://www.rdoproject.org/Quickstart until packstack startup.
You might have to switch to rdo-testing.repo manually (/etc/yum.repos.d):
just update "enabled=1" or "enabled=0" in the corresponding *.repo files. In any
case make sure that the release and testing repos are in the expected state,
to avoid unpredictable consequences.
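
The enabled= toggle in a given repo file can be scripted; a minimal sed sketch (the repo file names in the comments are assumptions for your setup):

```shell
# Sketch: flip every enabled= flag in one .repo file to the given value (0 or 1).
set_repo_enabled() {   # usage: set_repo_enabled /etc/yum.repos.d/rdo-testing.repo 1
  sed -i "s/^enabled=.*/enabled=$2/" "$1"
}
# set_repo_enabled /etc/yum.repos.d/rdo-testing.repo 1
# set_repo_enabled /etc/yum.repos.d/rdo-release.repo 0
```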

**********************
AnswerTwoNode.txt
**********************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answerTwoNode.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Here 2 options available
# CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

******************
Then run :-
******************
# packstack --answer-file=./answerTwoNode.txt
**********************************************************************************
Upon packstack completion, create the following files on the Controller Node,
designed to convert the management network into an external one
**********************************************************************************

[root@ip-192-169-142-127 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.155"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@ip-192-169-142-127 network-scripts]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Controller&&Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot ( Controller Node)

*************************
Controller status
*************************
[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:02.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:01.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:02.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-24T15:12:00.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-24T15:12:00.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 08dd042e-fa52-4b06-980f-16063ecd6a90 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 26a92f7c-d960-4c8c-8176-aec558b1fd43 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| 4f5376af-a8f5-4359-8e53-1fabf885b3d2 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| a64d3787-8d9d-4b41-a4da-ea0b2b611491 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f16a196a-a1ec-464a-875d-432a3dba182d | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
c5da4b6e-70a9-49c4-895c-7a4715b0bfce
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-147dd7b7-45"
            Interface "qg-147dd7b7-45"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        fail_mode: secure
        Port "tap672f6457-99"
            tag: 1
            Interface "tap672f6457-99"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qr-c53117c1-e2"
            tag: 1
            Interface "qr-c53117c1-e2"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-c0a87a89"
            Interface "vxlan-c0a87a89"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.122.127", out_key=flow, remote_ip="192.168.122.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.3.1-git4750c96"

     
     
     
     
  
  


Saturday, May 09, 2015

RDO Kilo Set up for three VM Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.1

Following below is a brief instruction for a traditional three node deployment test (Controller + Network + Compute) of the oncoming RDO Kilo, performed on a Fedora 21 host with KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vtep's and external subnets), and the Compute Node VM with two VNICs (management and vtep's subnets).

SELINUX stays in enforcing mode.

I avoid using the default libvirt subnet 192.168.122.0/24 for any purposes related
to the VMs serving as RDO Kilo nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.

Three Libvirt networks created

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

[root@junoJVC01 ~]# cat public.xml
<network>
   <name>public</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

[root@junoJVC01 ~]# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

[root@junoJVC01 ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes


*********************************************************************************
1. First Libvirt subnet "openstackvms"  serves as management network.
All 3 VM are attached to this subnet
**********************************************************************************
2. Second Libvirt subnet "public" serves for simulation external network  Network Node attached to public,latter on "eth3" interface (belongs to "public") is supposed to be converted into OVS port of br-ex on Network Node. This Libvirt subnet via bridge virbr2 172.24.4.25 provides VMs running on Compute Node access to Internet due to match to external network created by packstack installation 172.24.4.224/28.
*************************************************
On Hypervisor Host ( Fedora 21)
*************************************************
[root@junoJVC01 ~] # iptables -S -t nat 
. . . . . .
-A POSTROUTING -s 172.24.4.224/28 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -j MASQUERADE
. . . . . .

[root@junoJVC01 ~]# virsh net-info public
Name:           public
UUID:           d0e9965b-f92c-40c1-b749-b609aed42cf2
Active:         yes
Persistent:     yes
Autostart:      yes
Bridge:         virbr2



***********************************************************************************
3. Third Libvirt subnet "vteps" serves  for VTEPs endpoint simulation. Network and Compute Node VMs are attached to this subnet.
***********************************************************************************



*********************
Answer-file :-
*********************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer-fileRHTest.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
# In case of two Compute nodes
# CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.157
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Here 2 options available
CONFIG_KEYSTONE_SERVICE_NAME=httpd
# CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
# This is VXLAN tunnel endpoint interface
# It should be assigned IP from vteps network
# before running packstack
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
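
A reminder tied to the answer-file comment above: CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1 requires eth1 on the Network and Compute nodes to already carry an address from the "vteps" subnet before packstack runs. A sample ifcfg sketch (the addresses are assumptions; pick a distinct free one per node):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 on the Network Node
# (use another free vteps address, e.g. 10.0.0.137, on the Compute Node)
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.147
NETMASK=255.255.255.0
NM_CONTROLLED=no
IPV6INIT=no
```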

**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing ( RDO Kilo is supposed to handle this)

# yum install -y https://rdoproject.org/repos/rdo-release.rpm
# yum install -y openstack-packstack
# packstack --answer-file=./answer-fileRHTest.txt

**********************************************************************************
Upon packstack completion, create the following files on the Network Node;
they are designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
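
As a sanity check on the addressing above: with NETMASK 255.255.255.240 (/28) the external network is 172.24.4.224, the broadcast is 172.24.4.239, and the gateway 172.24.4.225 sits on the libvirt side. A quick bash sketch (the to_int/to_dot helpers are ad hoc, not part of any tool) deriving the network and broadcast from the ifcfg-br-ex values:

```shell
# Derive network/broadcast addresses from the ifcfg-br-ex values.
ip=172.24.4.232
mask=255.255.255.240
to_int() { local IFS=.; set -- $1; echo $(( ($1<<24)|($2<<16)|($3<<8)|$4 )); }
to_dot() { echo "$(( $1>>24 & 255 )).$(( $1>>16 & 255 )).$(( $1>>8 & 255 )).$(( $1 & 255 ))"; }
net=$(( $(to_int $ip) & $(to_int $mask) ))
bcast=$(( net | ( ~$(to_int $mask) & 0xFFFFFFFF ) ))
echo "network:   $(to_dot $net)"      # 172.24.4.224
echo "broadcast: $(to_dot $bcast)"    # 172.24.4.239
```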


[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

  The OVS port should be eth2 (the third Ethernet interface on the Network Node).
  In a real deployment, the libvirt bridge virbr2 plays the role of your router to the
  external network. The OVS bridge br-ex should have an IP belonging to the external network.




 In case CONFIG_KEYSTONE_SERVICE_NAME=httpd is set on the Controller :-

[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 35357
tcp6       0      0 :::35357                :::*                    LISTEN   3115/httpd          
 
[root@ip-192-169-142-127 ~(keystone_admin)]# netstat -lntp |  grep 5000
tcp6       0      0 :::5000                 :::*                    LISTEN   3115/httpd 
 
[root@ip-192-169-142-127 ~(keystone_admin)]# ps -ef | grep 3115
root      3115     1  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
keystone  3126  3115  0 15:19 ?        00:00:43 keystone-admin  -DFOREGROUND
keystone  3128  3115  0 15:19 ?        00:00:04 keystone-main   -DFOREGROUND
apache    3129  3115  0 15:19 ?        00:00:09 /usr/sbin/httpd -DFOREGROUND
apache    3130  3115  0 15:19 ?        00:00:16 /usr/sbin/httpd -DFOREGROUND
apache    3131  3115  0 15:19 ?        00:00:08 /usr/sbin/httpd -DFOREGROUND
apache    3132  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3133  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3136  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3137  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3138  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3139  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3140  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3141  3115  0 15:19 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    3244  3115  0 16:48 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache   24514  3115  0 15:54 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND


[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
d9a60201-a2c2-4c6a-ad9d-63cc2ae296b3
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth3"
            Interface "eth3"
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
        Port "qg-d433fa46-e2"
            Interface "qg-d433fa46-e2"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "tap70da94fb-c1"
            tag: 1
            Interface "tap70da94fb-c1"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-0737c492-f6"
            tag: 1
            Interface "qr-0737c492-f6"
                type: internal
    ovs_version: "2.3.1"
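
A detail worth decoding in the output above: the OVS agent names each VXLAN port after the hex-encoded IP of its tunnel peer, so vxlan-0a000089 here points at VTEP 10.0.0.137 (the Compute Node), matching remote_ip in the port options. The decoding can be sketched as:

```shell
# Decode the peer VTEP address embedded in an OVS VXLAN port name.
port=vxlan-0a000089
hex=${port#vxlan-}
printf '%d.%d.%d.%d\n' "0x${hex:0:2}" "0x${hex:2:2}" "0x${hex:4:2}" "0x${hex:6:2}"
# -> 10.0.0.137
```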


**********************************************************
Network Node status verification follows below
**********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# openstack-status
== neutron services ==
neutron-server:                         inactive  (disabled on boot)
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-openvswitch-agent:              active
== Support services ==
libvirtd:                               active
openvswitch:                            active
dbus:                                   active

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | 5fc0118a-f710-448d-af67-17dbfe01d5fc 172.24.4.224/28 |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | ba2cded7-5546-4a64-aa49-7ef4d077dee3 50.0.0.0/24     |
+--------------------------------------+----------+------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-list
+--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name       | external_gateway_info                                                                                                                                                                    | distributed | ha    |
+--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| d63ca3f3-5b71-4540-bb5c-01b44ce3081b | RouterDemo | {"network_id": "7ecdfc27-57cf-410d-9a76-8e9eb76582cb", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "5fc0118a-f710-448d-af67-17dbfe01d5fc", "ip_address": "172.24.4.229"}]} | False       | False |
+--------------------------------------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron router-port-list RouterDemo
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                           |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+
| 0737c492-f607-4d6a-8e72-ad447453b3c0 |      | fa:16:3e:d7:d0:66 | {"subnet_id": "ba2cded7-5546-4a64-aa49-7ef4d077dee3", "ip_address": "50.0.0.1"}     |
| d433fa46-e203-4fdd-b3f7-dcbc884e9f1e |      | fa:16:3e:02:ef:51 | {"subnet_id": "5fc0118a-f710-448d-af67-17dbfe01d5fc", "ip_address": "172.24.4.229"} |
+--------------------------------------+------+-------------------+-------------------------------------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron port-show 0737c492-f607-4d6a-8e72-ad447453b3c0 | grep ACTIVE
| status                | ACTIVE                                                                          |

[root@ip-192-169-142-147 ~(keystone_admin)]# dmesg | grep promisc
[   14.174240] device ovs-system entered promiscuous mode
[   14.184284] device br-ex entered promiscuous mode
[   14.200068] device eth2 entered promiscuous mode
[   14.200253] device eth3 entered promiscuous mode
[   14.207443] device br-int entered promiscuous mode
[   14.209360] device br-tun entered promiscuous mode
[   27.311116] device virbr0-nic entered promiscuous mode
[  142.406262] device tap70da94fb-c1 entered promiscuous mode
[  144.045031] device qr-0737c492-f6 entered promiscuous mode
[  144.792618] device qg-d433fa46-e2 entered promiscuous mode



[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
qdhcp-98dd1928-96e8-47fb-a2fe-49292ae092ba
qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 172.24.4.231/32 -j DNAT --to-destination 50.0.0.14
-A neutron-l3-agent-OUTPUT -d 172.24.4.235/32 -j DNAT --to-destination 50.0.0.18
-A neutron-l3-agent-OUTPUT -d 172.24.4.228/32 -j DNAT --to-destination 50.0.0.19
-A neutron-l3-agent-POSTROUTING ! -i qg-d433fa46-e2 ! -o qg-d433fa46-e2 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 172.24.4.231/32 -j DNAT --to-destination 50.0.0.14
-A neutron-l3-agent-PREROUTING -d 172.24.4.235/32 -j DNAT --to-destination 50.0.0.18
-A neutron-l3-agent-PREROUTING -d 172.24.4.228/32 -j DNAT --to-destination 50.0.0.19
-A neutron-l3-agent-float-snat -s 50.0.0.14/32 -j SNAT --to-source 172.24.4.231
-A neutron-l3-agent-float-snat -s 50.0.0.18/32 -j SNAT --to-source 172.24.4.235
-A neutron-l3-agent-float-snat -s 50.0.0.19/32 -j SNAT --to-source 172.24.4.228
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-d433fa46-e2 -j SNAT --to-source 172.24.4.229
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source 172.24.4.229
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
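
The table above shows the pattern the L3 agent follows for every floating IP association: DNAT rules (in PREROUTING and OUTPUT) translating the floating address to the fixed address inbound, a matching SNAT rule in the float-snat chain outbound, plus a router-wide SNAT to 172.24.4.229 for instances without a floating IP. A hypothetical helper reproducing the PREROUTING/float-snat pair for one association:

```shell
# Emit the DNAT/SNAT rule pair the L3 agent installs for one
# floating-IP <-> fixed-IP association (hypothetical helper).
float_rules() {
  local float=$1 fixed=$2
  echo "-A neutron-l3-agent-PREROUTING -d ${float}/32 -j DNAT --to-destination ${fixed}"
  echo "-A neutron-l3-agent-float-snat -s ${fixed}/32 -j SNAT --to-source ${float}"
}
float_rules 172.24.4.231 50.0.0.14
```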

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3525/python2       

[root@ip-192-169-142-147 ~(keystone_admin)]# ps -ef | grep 3525
neutron   3525     1  0 06:20 ?        00:00:00 /usr/bin/python2 /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/d63ca3f3-5b71-4540-bb5c-01b44ce3081b.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=d63ca3f3-5b71-4540-bb5c-01b44ce3081b --state_path=/var/lib/neutron --metadata_port=9697 --metadata_proxy_user=990 --metadata_proxy_group=988 --verbose --log-file=neutron-ns-metadata-proxy-d63ca3f3-5b71-4540-bb5c-01b44ce3081b.log --log-dir=/var/log/neutron
root     22354 21471  0 20:47 pts/1    00:00:00 grep --color=auto 3525

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b ifconfig
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-d433fa46-e2: flags=4163  mtu 1500
        inet 172.24.4.229  netmask 255.255.255.240  broadcast 172.24.4.239
        inet6 fe80::f816:3eff:fe02:ef51  prefixlen 64  scopeid 0x20
        ether fa:16:3e:02:ef:51  txqueuelen 0  (Ethernet)
        RX packets 166724  bytes 207207094 (197.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 93439  bytes 8208502 (7.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-0737c492-f6: flags=4163  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fed7:d066  prefixlen 64  scopeid 0x20
        ether fa:16:3e:d7:d0:66  txqueuelen 0  (Ethernet)
        RX packets 93442  bytes 8226129 (7.8 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 166586  bytes 207213870 (197.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.24.4.225    0.0.0.0         UG    0      0        0 qg-d433fa46-e2
50.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 qr-0737c492-f6
172.24.4.224    0.0.0.0         255.255.255.240 U     0      0        0 qg-d433fa46-e2



[root@ip-192-169-142-147 ~(keystone_admin)]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.24.4.225    0.0.0.0         UG    0      0        0 br-ex
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 eth2
169.254.0.0     0.0.0.0         255.255.0.0     U     1005   0        0 eth3
169.254.0.0     0.0.0.0         255.255.0.0     U     1007   0        0 br-ex
172.24.4.224    0.0.0.0         255.255.255.240 U     0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
192.169.142.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0



**************************************************************
Compute Node Status
**************************************************************

[root@ip-192-169-142-137 ~]#  dmesg | grep promisc
[    9.683238] device ovs-system entered promiscuous mode
[    9.699664] device br-ex entered promiscuous mode
[    9.735288] device br-int entered promiscuous mode
[    9.748086] device br-tun entered promiscuous mode
[  137.203583] device qvbe7160159-fd entered promiscuous mode
[  137.288235] device qvoe7160159-fd entered promiscuous mode
[  137.715508] device qvbe90ef79b-80 entered promiscuous mode
[  137.796083] device qvoe90ef79b-80 entered promiscuous mode
[  605.884770] device tape90ef79b-80 entered promiscuous mode
[  767.083214] device qvbbf1c441c-ad entered promiscuous mode
[  767.184783] device qvobf1c441c-ad entered promiscuous mode
[  767.446575] device tapbf1c441c-ad entered promiscuous mode
[  973.679071] device qvb3c3e98d7-2d entered promiscuous mode
[  973.775480] device qvo3c3e98d7-2d entered promiscuous mode
[  973.997621] device tap3c3e98d7-2d entered promiscuous mode
[ 1863.868574] device tapbf1c441c-ad left promiscuous mode
[ 1889.386251] device tape90ef79b-80 left promiscuous mode
[ 2256.698108] device tap3c3e98d7-2d left promiscuous mode
[ 2336.931559] device qvb6597428d-5b entered promiscuous mode
[ 2337.021941] device qvo6597428d-5b entered promiscuous mode
[ 2337.283293] device tap6597428d-5b entered promiscuous mode
[ 4092.577561] device tap6597428d-5b left promiscuous mode
[ 4099.798474] device tap6597428d-5b entered promiscuous mode
[ 5098.563689] device tape90ef79b-80 entered promiscuous mode

[root@ip-192-169-142-137 ~]# ovs-vsctl show
a0cb406e-b028-4b09-8849-e6e2869ab051
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000093"
            Interface "vxlan-0a000093"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.137", out_key=flow, remote_ip="10.0.0.147"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qvoe90ef79b-80"
            tag: 1
            Interface "qvoe90ef79b-80"
        Port br-int
            Interface br-int
                type: internal
        Port "qvobf1c441c-ad"
            tag: 1
            Interface "qvobf1c441c-ad"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo6597428d-5b"
            tag: 1
            Interface "qvo6597428d-5b"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.3.1"

[root@ip-192-169-142-137 ~]# brctl show
bridge name         bridge id           STP enabled    interfaces
qbr6597428d-5b      8000.1a483dd02cee   no             qvb6597428d-5b
                                                       tap6597428d-5b
qbrbf1c441c-ad      8000.ca2f911ff649   no             qvbbf1c441c-ad
qbre90ef79b-80      8000.16342824f4ba   no             qvbe90ef79b-80
                                                       tape90ef79b-80

**************************************************
Controller Node status verification
**************************************************

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     inactive  (disabled on boot)
neutron-l3-agent:                       inactive  (disabled on boot)
neutron-metadata-agent:                 inactive  (disabled on boot)
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
mysqld:                                 inactive  (disabled on boot)
libvirtd:                               active
dbus:                                   active
target:                                 active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
  'python-keystoneclient.', DeprecationWarning)
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 4e1008fd31944fecbb18cdc215af23ec |   admin    |   True  |    root@localhost    |
| 621b84dd4b904760b8aa0cc7b897c95c | ceilometer |   True  | ceilometer@localhost |
| 4d6cdea3b7bc49948890457808c0f6f8 |   cinder   |   True  |   cinder@localhost   |
| 8393bb4de49a44b798af8b118b9f0eb6 |    demo    |   True  |                      |
| f9be6eaa789e4b3c8771372fffb00230 |   glance   |   True  |   glance@localhost   |
| a518b95a92044ad9a4b04f0be90e385f |  neutron   |   True  |  neutron@localhost   |
| 40dddef540fb4fa5a69fb7baa03de657 |    nova    |   True  |    nova@localhost    |
| 5fbb2b97ab9d4192a3f38f090e54ffb1 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| ID                                   | Name         | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| 1b4a6b08-d63c-4d8d-91da-16f6ba177009 | cirros       | qcow2       | bare             | 13200896  | active |
| cb05124d-0d30-43a7-a033-0b7ff0ea1d47 | Fedor21image | qcow2       | bare             | 158443520 | active |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:16.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:14:17.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:14:21.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public   | -    |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | -    |
+--------------------------------------+----------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova hypervisor-list
+----+----------------------------------------+-------+---------+
| ID | Hypervisor hostname                    | State | Status  |
+----+----------------------------------------+-------+---------+
| 1  | ip-192-169-142-137.ip.secureserver.net | up    | enabled |
+----+----------------------------------------+-------+---------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+
| 22af7b3b-232f-4642-9418-d1c8021c7eb5 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| 34e1078c-c75b-4d14-b813-b273ea8f7b86 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 5d652094-6711-409d-8546-e29c09e03d5a | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 8a8ad680-1071-4c7f-8787-ba4ef0a7dfb7 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| d81e97af-c210-4855-af06-fb1d139e2e10 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+---------------------------+

[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | -               |
| 2  | nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | -               |
| 3  | nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:16.000000 | -               |
| 4  | nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2015-05-09T14:15:17.000000 | -               |
| 5  | nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2015-05-09T14:15:21.000000 | -               |
+----+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+

   Controller Node


    Network Node


    Compute Node
  

  Connect to a VM (L2) running on the Compute Node from a VM (L1) running on the libvirt network 172.24.4.224/28