Wednesday, September 30, 2015

RDO Kilo DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-
ovs-dvr.html

1. Neutron DVR implements the fip-namespace on every Compute Node where VMs are running. Thus VMs with FloatingIPs can forward traffic to the External Network without routing it via the Network Node (North-South routing).

2. Neutron DVR implements L3 routers across the Compute Nodes, so that intra-tenant VM communication occurs without involving the Network Node (East-West routing).

3. Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT of all private VMs. The SNAT service is not distributed; it is centralized, and the service node hosts it.
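Once deployed, this split is easy to see on the nodes themselves: the Controller/Network Node (agent_mode dvr_snat) gets snat-* and qrouter-* namespaces, while each Compute Node hosting such VMs gets fip-* and qrouter-* namespaces. A quick check (a sketch; it assumes at least one DVR router with VMs and FloatingIPs already exists):

# ip netns | egrep '^(snat|fip|qrouter)-'    # run on each node and compare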

Setup configuration
- Controller node: Nova, Keystone, Cinder, Glance,
Neutron (using Open vSwitch plugin && VXLAN )
- (2x) Compute node: Nova (nova-compute),
Neutron (openvswitch-agent,l3-agent,metadata-agent )

Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing
on a Fedora 22 KVM hypervisor.

Two libvirt sub-nets were used: "openstackvms", emulating the External && Mgmt Networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and "vteps" 10.0.0.0/24, supporting the two VXLAN tunnels between the Controller and Compute Nodes.

# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>


# cat vteps.xml
<network>
<name>vteps</name>
<uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='10.0.0.1' netmask='255.255.255.0'>
<dhcp>
<range start='10.0.0.2' end='10.0.0.254' />
</dhcp>
</ip>
</network>

# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms
The second libvirt sub-net may be defined and started the same way.
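For "vteps" that is simply:

# virsh net-define vteps.xml
# virsh net-start vteps
# virsh net-autostart vteps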

ip-192-169-142-127.ip.secureserver.net - Controller/Network Node
ip-192-169-142-137.ip.secureserver.net - Compute Node
ip-192-169-142-147.ip.secureserver.net - Compute Node


Answer File :-
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
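Once the answer file is saved, the deployment is launched from the Controller (a sketch; the file name below is just an example):

# packstack --answer-file=./answer-dvr.txt    # file name is arbitrary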
********************************************************
On the Controller (X=2) and the Computes (X=3,4) update :-
********************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
Reboot
*****************************************
On Controller update neutron.conf
*****************************************
router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00
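The same two settings can be applied non-interactively (a sketch using crudini, available in the RDO/EPEL repos; the stock /etc/neutron/neutron.conf path is assumed):

# crudini --set /etc/neutron/neutron.conf DEFAULT router_distributed True
# crudini --set /etc/neutron/neutron.conf DEFAULT dvr_base_mac fa:16:3f:00:00:00
# openstack-service restart neutron    # restart neutron services to pick up the change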

*****************
On Controller
*****************
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
allow_automatic_l3agent_failover=False
*********************************
On each Compute Node
*********************************
[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr

*******************
On each node
*******************
[root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5
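With the metadata agents in place, instances reach the Nova metadata server through the proxy listening on port 9697 inside the router namespace. A quick end-to-end check (a sketch; run inside any guest VM):

$ curl http://169.254.169.254/latest/meta-data/instance-id    # from inside a guest VM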

[root@ip-192-169-142-147 neutron]# cat ml2_conf.ini | grep -v ^#| grep -v ^$
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population = True


***************************************************************************************
The last entry, [agent], is important for DVR configuration on Kilo (vs Juno).
The command run on a Compute node :-
   rsync -av root@controller:/etc/neutron/plugins/ml2 /etc/neutron/plugins
as suggested in http://schmaustech.blogspot.com/2014/12/configuring-dvr-in-openstack-juno.html
would work for you on Juno. On Kilo the files /etc/neutron/plugins/ml2/ml2_conf.ini
differ between the Controller/Network node and the Compute nodes. A missing [agent]
section on Kilo will result in the VXLAN tunnels not coming up after a node reboot.
***************************************************************************************
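A quick way to confirm that the tunnels came back after a reboot (a sketch, run on any node; the peer VTEP addresses belong to the "vteps" subnet above):

# ovs-vsctl list-ports br-tun | grep vxlan    # expect one vxlan-* port per peer node
# ovs-ofctl dump-flows br-tun | head          # flow table should be populated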


[root@ip-192-169-142-147 openvswitch]# cat ovs_neutron_plugin.ini | grep -v ^#| grep -v ^$
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
enable_distributed_routing = True
arp_responder = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
******************************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent must be
installed and started.
******************************************************************************
# yum install openstack-neutron-ml2
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent
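Afterwards the new agents should report in on the Controller (a sketch):

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep -E 'L3 agent|Metadata agent'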

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterDemo
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 50388b16-4461-441c-83a4-f7e7084ec415 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
| 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
| d18cdf01-6814-489d-bef2-5207c1aac0eb | ip-192-169-142-147.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-show 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4
+---------------------+-------------------------------------------------------------------------------+
| Field               | Value                                                                         |
+---------------------+-------------------------------------------------------------------------------+
| admin_state_up      | True                                                                          |
| agent_type          | L3 agent                                                                      |
| alive               | True                                                                          |
| binary              | neutron-l3-agent                                                              |
| configurations      | {                                                                             |
|                     |      "router_id": "",                                                         |
|                     |      "agent_mode": "dvr",                                                     |
|                     |      "gateway_external_network_id": "",                                       |
|                     |      "handle_internal_only_routers": true,                                    |
|                     |      "use_namespaces": true,                                                  |
|                     |      "routers": 1,                                                            |
|                     |      "interfaces": 1,                                                         |
|                     |      "floating_ips": 1,                                                       |
|                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver",  |
|                     |      "external_network_bridge": "br-ex",                                      |
|                     |      "ex_gw_ports": 1                                                         |
|                     | }                                                                             |
| created_at          | 2015-09-29 07:40:37                                                           |
| description         |                                                                               |
| heartbeat_timestamp | 2015-09-30 09:58:24                                                           |
| host                | ip-192-169-142-137.ip.secureserver.net                                        |
| id                  | 7e89d4a7-7ebf-4a7a-9589-f6694e3637d4                                          |
| started_at          | 2015-09-30 08:08:53                                                           |
| topic               | l3_agent                                                                      |
+---------------------+-------------------------------------------------------------------------------+

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
bbf7aa55-c701-4032-a3e6-ef9291f4f7e7
    Bridge br-int
        fail_mode: secure
        Port "qvo2495c8c5-2e"
            tag: 2
            Interface "qvo2495c8c5-2e"
        Port "qr-4a97cab0-ad"
            tag: 2
            Interface "qr-4a97cab0-ad"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00007f"
            Interface "vxlan-0a00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.127"}
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "fg-87c492c2-1a"
            Interface "fg-87c492c2-1a"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.1"


References

  1. http://assafmuller.com/2015/04/15/distributed-virtual-routing-overview-and-eastwest-routing/
  2. http://assafmuller.com/2014/02/23/ml2-address-population/
  3. https://access.redhat.com/documentation/en/red-hat-enterprise-linux-openstack-platform/version-7/red-hat-enterprise-linux-openstack-platform-7-networking-guide/chapter-9-configure-distributed-virtual-routing-dvr

Saturday, September 26, 2015

Resize nova instances on RDO Liberty

  Per http://funcptr.net/2014/09/29/openstack-resizing-of-instances/
During the resize process, the node where the instance is currently running uses SSH to connect to another compute node, where the resized instance will live, and copies over the instance and its associated files.
  Actually, there is an option to change this behavior and perform the resize on the same compute node.
View :- http://www.madorn.com/resize-on-single-compute.html#.Vgb1wrNRpFB
Just one notice: it requires restarting all Nova services on the Controller and
openstack-nova-compute on the Compute Node.
The posting below presumes that there are at least 2 Compute nodes :-
  compute01
  compute02
 
There are a couple of assumptions being made:
  1. The nova and qemu users both have the same UIDs on all compute nodes (see the check sketched below)
  2. The path for your instances is the same on all of your compute nodes
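A quick check for the first assumption (a sketch; run it on every node and compare the output):

# id nova ; id qemu    # UIDs/GIDs must match across all compute nodes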

*************************
On Controller
*************************
Verify that the nova account has no login shell
  cat /etc/passwd | grep nova
If so, add /bin/bash to the "nova" account
  usermod -s /bin/bash nova
****************************************
Generate SSH key and Configuration
****************************************
After doing this the next steps are all run as the nova user.
# su - nova
 
Now generate an SSH key:
$  ssh-keygen -t rsa

Save the key without a passphrase.

Next we need to configure SSH to not do host key verification:

$  cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF

Next step is :-
$  cat ~/.ssh/id_rsa.pub > .ssh/authorized_keys 
$  chmod 600 .ssh/authorized_keys
 
*******************************************************
Then user nova creates a tarball of the keys and replicates it
to all Compute Nodes
*******************************************************
# su - nova
$ id
$ pwd
   /var/lib/nova
$ tar -cvf  ssh.tar .ssh/*
^D
# cd ~nova
# scp ssh.tar compute01:/var/lib/nova
# scp ssh.tar compute02:/var/lib/nova 
# ssh compute01
**********************
In other terminal
********************** 
# ssh compute02
**************************************** 
On Compute01 and Compute02
****************************************
# usermod -s /bin/bash nova
# cd /var/lib/nova
# chown nova:nova ssh.tar
# su - nova 
$  ls -la 
$  tar -xvf ssh.tar

At this point you should be able to ssh as "nova" from the controller to compute01, compute02, and between the compute nodes, without a password prompt.
The compute nodes trust the Controller and each other via the "nova" account.
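A minimal test, run as nova on the controller (hostnames as above):

$ for h in compute01 compute02 ; do ssh $h hostname ; done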

*****************************
Now run on Controller
*****************************
[root@ip-192-169-142-127 ~(keystone_admin)]# . keystonerc_demo
[root@ip-192-169-142-127 ~(keystone_demo)]# nova list
+--------------------------------------+------------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name       | Status  | Task State | Power State | Networks                             |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------------+
| 64b84cb7-c249-4808-b16e-0071d4d288e8 | CirrOSDevs | ACTIVE  | -          | Running     | demo_network=40.0.0.15, 172.24.4.234 |
| b6a9c438-3d7c-4f7e-aa4d-3ad47178eeac | VF22Devs15 | SHUTOFF | -          | Shutdown    | demo_network=40.0.0.13, 172.24.4.232 |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------------+
[root@ip-192-169-142-127 ~(keystone_demo)]# nova resize CirrOSDevs 2 --poll

Server resizing... 100% complete
Finished
[root@ip-192-169-142-127 ~(keystone_demo)]# nova list
+--------------------------------------+------------+---------------+------------+-------------+--------------------------------------+
| ID                                   | Name       | Status        | Task State | Power State | Networks                             |
+--------------------------------------+------------+---------------+------------+-------------+--------------------------------------+
| 64b84cb7-c249-4808-b16e-0071d4d288e8 | CirrOSDevs | VERIFY_RESIZE | -          | Running     | demo_network=40.0.0.15, 172.24.4.234 |
| b6a9c438-3d7c-4f7e-aa4d-3ad47178eeac | VF22Devs15 | SHUTOFF       | -          | Shutdown    | demo_network=40.0.0.13, 172.24.4.232 |
+--------------------------------------+------------+---------------+------------+-------------+--------------------------------------+
[root@ip-192-169-142-127 ~(keystone_demo)]# nova resize-confirm 64b84cb7-c249-4808-b16e-0071d4d288e8
[root@ip-192-169-142-127 ~(keystone_demo)]# nova list
+--------------------------------------+------------+---------+------------+-------------+--------------------------------------+
| ID                                   | Name       | Status  | Task State | Power State | Networks                             |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------------+
| 64b84cb7-c249-4808-b16e-0071d4d288e8 | CirrOSDevs | ACTIVE  | -          | Running     | demo_network=40.0.0.15, 172.24.4.234 |
| b6a9c438-3d7c-4f7e-aa4d-3ad47178eeac | VF22Devs15 | SHUTOFF | -          | Shutdown    | demo_network=40.0.0.13, 172.24.4.232 |
+--------------------------------------+------------+---------+------------+-------------+--------------------------------------+
 
 

Thursday, September 24, 2015

RDO Liberty (beta) DVR Deployment (Controller/Network)+Compute+Compute (ML2&OVS&VXLAN) on CentOS 7.1

*************************************************************************************
UPDATE 10/01/2015
Should you experience the VXLAN tunnels disappearing, as happens
on RDO Kilo, add the following lines to ml2_conf.ini on each Compute Node :-
[agent]
l2_population = True

followed by `openstack-service restart` (see also the crudini sketch below).
I also have to note that Nested KVM does provide a significant performance
improvement on Haswell i5, i7 CPUs.
*************************************************************************************
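The equivalent non-interactive edit (a sketch with crudini; the stock ml2_conf.ini path is assumed):

# crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini agent l2_population True
# openstack-service restart neutron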

 Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html 
  1. Neutron DVR implements the fip-namespace on every Compute Node where VMs are running. Thus VMs with FloatingIPs can forward traffic to the External Network without routing it via the Network Node (North-South routing).
  2. Neutron DVR implements L3 routers across the Compute Nodes, so that intra-tenant VM communication occurs without involving the Network Node (East-West routing).
  3. Neutron Distributed Virtual Router provides the legacy SNAT behavior for the default SNAT of all private VMs. The SNAT service is not distributed; it is centralized, and the service node hosts it.


Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, 

   Neutron (using Open vSwitch plugin && VXLAN )

- (2x) Compute node: Nova (nova-compute),
         Neutron (openvswitch-agent,l3-agent,metadata-agent )


Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing
on a Fedora 22 KVM hypervisor. Two libvirt sub-nets were used: "openstackvms", emulating the External && Mgmt Networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and "vteps" 10.0.0.0/24, supporting the two VXLAN tunnels between the Controller and Compute Nodes.

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.2' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

# virsh net-define openstackvms.xml
# virsh net-start  openstackvms
# virsh net-autostart  openstackvms

The second libvirt sub-net may be defined and started the same way.


ip-192-169-142-127.ip.secureserver.net - Controller/Network Node
ip-192-169-142-137.ip.secureserver.net - Compute Node
ip-192-169-142-147.ip.secureserver.net - Compute Node

********************************
On each deployment node
********************************
Per http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/
cd /etc/yum.repos.d/
sudo wget http://trunk.rdoproject.org/centos7/delorean-deps.repo
sudo wget http://trunk.rdoproject.org/centos7/current/delorean.repo

*********************
Answer File :-
*********************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=n
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

********************************************************
On the Controller (X=2) and the Computes (X=3,4) update :-
********************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

#  cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********
# chkconfig network on

# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot

*****************************************
On Controller update neutron.conf
*****************************************
router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00

 [root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
[AGENT]

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr
[AGENT]


 [root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:5000/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5
[AGENT]

[root@ip-192-169-142-147 ml2]# pwd
/etc/neutron/plugins/ml2

[root@ip-192-169-142-147 ml2]# cat ml2_conf.ini | grep -v ^$ | grep -v ^#
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

********************************************************************************
Please be advised that a command like ([1]) :-
# rsync -av root@192.169.142.127:/etc/neutron/plugins/ml2 /etc/neutron/plugins
run on the Liberty Compute Node 192.169.142.147 will overwrite the file
/etc/neutron/plugins/ml2/openvswitch_agent.ini
So, after this command, local_ip should be turned back to its initial value.
********************************************************************************
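One way to restore it on this Compute Node (a sketch using crudini; 10.0.0.147 is this node's VTEP address, as shown below):

# crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.0.147
# systemctl restart neutron-openvswitch-agent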

 [root@ip-192-169-142-147 ml2]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

*********************************************************************************
Under the plugins directory create "openvswitch" and copy
/etc/neutron/plugins/ml2/openvswitch_agent.ini to
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini, then
# chgrp neutron  /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
otherwise neutron-ovs-cleanup.service won't start on the Compute node (commands sketched below).
*********************************************************************************
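The steps above as shell commands:

# mkdir -p /etc/neutron/plugins/openvswitch
# cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
# chgrp neutron /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini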



On each Compute node neutron-l3-agent and neutron-metadata-agent must be
installed and started.
# yum install openstack-neutron-ml2
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent


[root@ip-192-169-142-147 ~]# systemctl | grep openstack
openstack-ceilometer-compute.service                                                loaded active running   OpenStack ceilometer compute agent
openstack-nova-compute.service                                                      loaded active running   OpenStack Nova Compute Server

[root@ip-192-169-142-147 ~]# systemctl | grep neutron
neutron-l3-agent.service                                                            loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                      loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                   loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                         loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility

**************
On Controller
**************



[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router routerBS
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 3e6f1c57-9c89-4f53-acda-55ec33a78ca9 | ip-192-169-142-127.ip.secureserver.net | True           | :-)   |          |
| 764552cc-c017-4877-8da1-24fe21e469f1 | ip-192-169-142-147.ip.secureserver.net | True           | :-)   |          |
| daddaa79-c4de-4a41-9f9a-20d71514db28 | ip-192-169-142-137.ip.secureserver.net | True           | :-)   |          |
+--------------------------------------+----------------------------------------+----------------+-------+----------+

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-show  764552cc-c017-4877-8da1-24fe21e469f1
+---------------------+-------------------------------------------------------------------------------+
| Field               | Value                                                                         |
+---------------------+-------------------------------------------------------------------------------+
| admin_state_up      | True                                                                          |
| agent_type          | L3 agent                                                                      |
| alive               | True                                                                          |
| binary              | neutron-l3-agent                                                              |
| configurations      | {                                                                             |
|                     |      "router_id": "",                                                         |
|                     |      "agent_mode": "dvr",                                                     |
|                     |      "gateway_external_network_id": "",                                       |
|                     |      "handle_internal_only_routers": true,                                    |
|                     |      "use_namespaces": true,                                                  |
|                     |      "routers": 2,                                                            |
|                     |      "interfaces": 2,                                                         |
|                     |      "floating_ips": 2,                                                       |
|                     |      "interface_driver": "neutron.agent.linux.interface.OVSInterfaceDriver",  |
|                     |      "log_agent_heartbeats": false,                                           |
|                     |      "external_network_bridge": "br-ex",                                      |
|                     |      "ex_gw_ports": 2                                                         |
|                     | }                                                                             |
| created_at          | 2015-09-24 09:08:53                                                           |
| description         |                                                                               |
| heartbeat_timestamp | 2015-09-24 13:09:35                                                           |
| host                | ip-192-169-142-147.ip.secureserver.net                                        |
| id                  | 764552cc-c017-4877-8da1-24fe21e469f1                                          |
| started_at          | 2015-09-24 11:24:35                                                           |
| topic               | l3_agent                                                                      |
+---------------------+-------------------------------------------------------------------------------+

[root@ip-192-169-142-147 ~]# ip netns
qrouter-6f638e97-7621-4d05-b9bc-50147b29d6b8
fip-fc3e1bf8-bb39-4468-b5cb-8cdc79837be1
qrouter-97a1c79a-178b-4507-8eb8-f0d6f8958858

[root@ip-192-169-142-147 ~]#  ip netns exec fip-fc3e1bf8-bb39-4468-b5cb-8cdc79837be1  ip a | grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.29/31 scope global fpr-97a1c79a-1
    inet6 fe80::40c0:3aff:fe04:50e5/64 scope link
    inet 169.254.31.239/31 scope global fpr-6f638e97-7
    inet6 fe80::84e6:42ff:fe56:33e2/64 scope link
    inet 192.169.142.158/24 brd 192.169.142.255 scope global fg-b97c737e-17
    inet6 fe80::f816:3eff:fe5f:ce5d/64 scope link

[root@ip-192-169-142-147 ~]#  ip netns exec fip-fc3e1bf8-bb39-4468-b5cb-8cdc79837be1  ip route
default via 192.169.142.1 dev fg-b97c737e-17
169.254.31.28/31 dev fpr-97a1c79a-1  proto kernel  scope link  src 169.254.31.29
169.254.31.238/31 dev fpr-6f638e97-7  proto kernel  scope link  src 169.254.31.239
192.169.142.0/24 dev fg-b97c737e-17  proto kernel  scope link  src 192.169.142.158
192.169.142.157 via 169.254.31.28 dev fpr-97a1c79a-1
192.169.142.161 via 169.254.31.238 dev fpr-6f638e97-7

[root@ip-192-169-142-147 ~]#  ip netns exec fip-fc3e1bf8-bb39-4468-b5cb-8cdc79837be1 ifconfig
fg-b97c737e-17: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.169.142.158  netmask 255.255.255.0  broadcast 192.169.142.255
        inet6 fe80::f816:3eff:fe5f:ce5d  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:5f:ce:5d  txqueuelen 0  (Ethernet)
        RX packets 8710  bytes 10132914 (9.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5053  bytes 584497 (570.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fpr-6f638e97-7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.239  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::84e6:42ff:fe56:33e2  prefixlen 64  scopeid 0x20<link>
        ether 86:e6:42:56:33:e2  txqueuelen 1000  (Ethernet)
        RX packets 4901  bytes 515260 (503.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8507  bytes 10060869 (9.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fpr-97a1c79a-1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.29  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::40c0:3aff:fe04:50e5  prefixlen 64  scopeid 0x20<link>
        ether 42:c0:3a:04:50:e5  txqueuelen 1000  (Ethernet)
        RX packets 166  bytes 70269 (68.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 212  bytes 72783 (71.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-6f638e97-7621-4d05-b9bc-50147b29d6b8 ip a | grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.238/31 scope global rfp-6f638e97-7
    inet 192.169.142.161/32 brd 192.169.142.161 scope global rfp-6f638e97-7
    inet6 fe80::78b2:3dff:fe5a:b755/64 scope link
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-84ff9231-1b
    inet6 fe80::f816:3eff:fe14:3066/64 scope link
 

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-97a1c79a-178b-4507-8eb8-f0d6f8958858 ip a | grep "inet"
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
    inet 169.254.31.28/31 scope global rfp-97a1c79a-1
    inet 192.169.142.157/32 brd 192.169.142.157 scope global rfp-97a1c79a-1
    inet6 fe80::78e4:79ff:fe1c:a9ac/64 scope link
    inet 40.0.0.1/24 brd 40.0.0.255 scope global qr-88316f06-72
    inet6 fe80::f816:3eff:fea7:dbbe/64 scope link


 [root@ip-192-169-142-147 ~]# ovs-vsctl show
7a1e89bb-4080-4346-bf34-eaf3fa8e58d7
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "fg-b97c737e-17"
            Interface "fg-b97c737e-17"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qr-84ff9231-1b"
            tag: 2
            Interface "qr-84ff9231-1b"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvoa40da5f6-56"
            tag: 2
            Interface "qvoa40da5f6-56"
        Port "qr-88316f06-72"
            tag: 1
            Interface "qr-88316f06-72"
                type: internal
        Port "qvo76606c25-db"
            tag: 1
            Interface "qvo76606c25-db"
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a00007f"
            Interface "vxlan-0a00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.127"}
    ovs_version: "2.3.1"

[root@ip-192-169-142-147 ~]# ip netns exec qrouter-6f638e97-7621-4d05-b9bc-50147b29d6b8 iptables-save -t nat | grep "^-A"|grep l3-agent
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-OUTPUT -d 192.169.142.161/32 -j DNAT --to-destination 50.0.0.16
-A neutron-l3-agent-POSTROUTING ! -i rfp-6f638e97-7 ! -o rfp-6f638e97-7 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.169.142.161/32 -j DNAT --to-destination 50.0.0.16
-A neutron-l3-agent-float-snat -s 50.0.0.16/32 -j SNAT --to-source 192.169.142.161
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat



Monday, September 21, 2015

RDO Liberty (beta) Set up for three VM Nodes (Controller+Network+Compute) ML2&OVS&VXLAN on CentOS 7.1

RDO Liberty-3 (beta) passed a 3-node deployment test: Controller+Network+Compute configuration with ML2&OVS&VXLAN, regardless of RH being mainly focused on RDO-Manager based Liberty deployments. I don't have a desktop able to run 6 VMs at a time; I truly believe that requires an 8-core Intel CPU like the Intel® Xeon® Processor E5-2690, and in my experience it's just impossible on a 4-core CPU like the i7 4790.
Following below is a brief instruction for the three-node deployment test (Controller && Network && Compute) of the oncoming RDO Liberty, performed on a Fedora 22 host with KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, vteps, and external subnets), and the Compute Node VM with two VNICs (management and vteps subnets).

SELINUX stays in permissive mode due to https://bugzilla.redhat.com/show_bug.cgi?id=1249685.

I avoid using the default libvirt subnet 192.168.122.0/24 for any purposes related
to the VMs serving as RDO nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.

Three Libvirt networks created

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

[root@junoJVC01 ~]# cat public.xml
<network>
   <name>public</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

[root@junoJVC01 ~]# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.2' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

[root@junoJVC01 ~]# virsh net-list
 Name                 State      Autostart     Persistent
--------------------------------------------------------------------------
 default              active     yes           yes
 openstackvms         active     yes           yes
 public               active     yes           yes
 vteps                active     yes           yes


*********************************************************************************
1. The first libvirt subnet "openstackvms" serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second libvirt subnet "public" simulates the external network. The Network Node is attached to "public"; later on, the "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via bridge virbr2 (172.24.4.225) this libvirt subnet provides the VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.
**********************************************************************************
3. The third libvirt subnet "vteps" supports the VXLAN tunnel between the Network and Compute nodes.
**********************************************************************************
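To double-check which bridge backs each subnet (a sketch):

# for n in openstackvms public vteps ; do virsh net-info $n | egrep 'Name|Bridge' ; done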


********************************
On each deployment node
********************************
Per http://beta.rdoproject.org/testday/rdo-test-day-liberty-01/
cd /etc/yum.repos.d/
sudo wget http://trunk.rdoproject.org/centos7/delorean-deps.repo
sudo wget http://trunk.rdoproject.org/centos7/current/delorean.repo
****************************************************************
Answer file used for deployment - answe3Node.txt
****************************************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Here 2 options are available
CONFIG_KEYSTONE_SERVICE_NAME=httpd
# CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

***************************************************************************
First time, run
# packstack --answer-file=./answe3Node.txt
with CONFIG_PROVISION_DEMO=y

It will crash while running IP_provision_glance.pp.
Then switch to CONFIG_PROVISION_DEMO=n and rerun packstack, as sketched below.
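The flip and the rerun can be scripted (a sketch; answer file name as above):

# sed -i 's/^CONFIG_PROVISION_DEMO=y/CONFIG_PROVISION_DEMO=n/' answe3Node.txt
# packstack --answer-file=./answe3Node.txt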

**********************************************************************************
Upon packstack completion, on the Network Node create the following files,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.235"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"

ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next step to performed on Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

  The OVS port should be eth2 (the third Ethernet interface on the Network Node).
Controller node snapshot

Network node snapshot

Running CentOS 7.1 cloud VM

Running Fedora 22 cloud VM

Links session running inside VF22Devs15 cloud VM

Verification of access to the Nova metadata server from within the VF22Devs15 VM