Tuesday, October 27, 2015

VRRP four nodes setup on RDO Liberty (CentOS 7.1)

  The sample below demonstrates uninterrupted access, provided via an HA Neutron router, to cloud VMs running on the Compute node while the two installed Network Nodes swap MASTER and BACKUP roles (as members of a keepalived pair).

    Following below is a brief instruction for a 4-node deployment test (Controller & 2xNetwork & Compute) on RDO Liberty (CentOS 7.1), performed on a Fedora 21 host with KVM/Libvirt Hypervisor (32 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P). Four VMs (4 GB RAM, 4 VCPUS) have been set up. Performing this setup I didn't have any problems with memory
swapping, but I did hit the limitations of the i7 CPU's 4 cores.
 Actually, I am forced to run packstack not because I love it so much,
but because I have no choice.

  Controller VM: one VNIC (management subnet); each of the 2 Network Node VMs: three VNICs (management, vteps, external subnets); Compute Node VM: two VNICs (management, vteps subnets).

Setup :-


192.169.142.127 - Controller Node
192.169.142.147,192.169.142.157 - Network Nodes
192.169.142.137 - Compute Node

*******************************************
Three Libvirt networks created
*******************************************


# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>



# cat external.xml
<network>
   <name>external</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>


# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>
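The three networks are then defined, started, and set to autostart via virsh (the same sequence appears in the next post below):

# virsh net-define openstackvms.xml && virsh net-start openstackvms && virsh net-autostart openstackvms
# virsh net-define external.xml && virsh net-start external && virsh net-autostart external
# virsh net-define vteps.xml && virsh net-start vteps && virsh net-autostart vteps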

# virsh net-list

 Name                  State     Autostart    Persistent
----------------------------------------------------------
 default               active    yes          yes
 openstackvms          active    yes          yes
 external              active    yes          yes
 vteps                 active    yes          yes
***********************************************************************************
1. First Libvirt subnet "openstackvms" serves as the management network.
All 4 VMs are attached to this subnet.
***********************************************************************************
2. Second Libvirt subnet "external" serves for simulating the external network.
Network Nodes are attached to "external" via their "eth2" interfaces, which are supposed to be converted into OVS ports of br-ex on each Network Node. Via bridge virbr2 (172.24.4.225) this Libvirt subnet provides VMs running on the Compute Node with access to the Internet, since it matches the external network 172.24.4.224/28 created by the packstack installation.
***********************************************************************************
3. Third Libvirt subnet "vteps" serves for VTEP endpoint simulation. Network and Compute Nodes are attached to this subnet.
***********************************************************************************

***************************************
Answer file (answer4Node.txt)
***************************************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_MANILA_INSTALL=n
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_SAHARA_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_IRONIC_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147,192.169.142.157

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_USE_SUBNETS=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_SAHARA_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_ENABLE_RDO_TESTING=n
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_SSL_CACERT_FILE=/etc/pki/tls/certs/selfcert.crt
CONFIG_SSL_CACERT_KEY_FILE=/etc/pki/tls/private/selfkey.key
CONFIG_SSL_CERT_DIR=~/packstackca/
CONFIG_SSL_CACERT_SELFSIGN=y
CONFIG_SELFSIGN_CACERT_SUBJECT_C=--
CONFIG_SELFSIGN_CACERT_SUBJECT_ST=State
CONFIG_SELFSIGN_CACERT_SUBJECT_L=City
CONFIG_SELFSIGN_CACERT_SUBJECT_O=openstack
CONFIG_SELFSIGN_CACERT_SUBJECT_OU=packstack
CONFIG_SELFSIGN_CACERT_SUBJECT_CN=ip-192-169-142-127.ip.secureserver.net
CONFIG_SELFSIGN_CACERT_SUBJECT_MAIL=admin@ip-192-169-142-127.ip.secureserver.net
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_DB_PURGE_ENABLE=True
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_EMAIL=root@localhost
CONFIG_KEYSTONE_ADMIN_USERNAME=admin
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_API_VERSION=v2.0
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_KEYSTONE_IDENTITY_BACKEND=sql
CONFIG_KEYSTONE_LDAP_URL=ldap://192.169.142.127
CONFIG_KEYSTONE_LDAP_USER_DN=
CONFIG_KEYSTONE_LDAP_USER_PASSWORD=
CONFIG_KEYSTONE_LDAP_SUFFIX=
CONFIG_KEYSTONE_LDAP_QUERY_SCOPE=one
CONFIG_KEYSTONE_LDAP_PAGE_SIZE=-1
CONFIG_KEYSTONE_LDAP_USER_SUBTREE=
CONFIG_KEYSTONE_LDAP_USER_FILTER=
CONFIG_KEYSTONE_LDAP_USER_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_USER_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_MAIL_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_MASK=-1
CONFIG_KEYSTONE_LDAP_USER_ENABLED_DEFAULT=TRUE
CONFIG_KEYSTONE_LDAP_USER_ENABLED_INVERT=n
CONFIG_KEYSTONE_LDAP_USER_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_USER_DEFAULT_PROJECT_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_USER_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_USER_PASS_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_USER_ENABLED_EMULATION_DN=
CONFIG_KEYSTONE_LDAP_USER_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_GROUP_SUBTREE=
CONFIG_KEYSTONE_LDAP_GROUP_FILTER=
CONFIG_KEYSTONE_LDAP_GROUP_OBJECTCLASS=
CONFIG_KEYSTONE_LDAP_GROUP_ID_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_NAME_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_MEMBER_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_DESC_ATTRIBUTE=
CONFIG_KEYSTONE_LDAP_GROUP_ATTRIBUTE_IGNORE=
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_CREATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_UPDATE=n
CONFIG_KEYSTONE_LDAP_GROUP_ALLOW_DELETE=n
CONFIG_KEYSTONE_LDAP_GROUP_ADDITIONAL_ATTRIBUTE_MAPPING=
CONFIG_KEYSTONE_LDAP_USE_TLS=n
CONFIG_KEYSTONE_LDAP_TLS_CACERTDIR=
CONFIG_KEYSTONE_LDAP_TLS_CACERTFILE=
CONFIG_KEYSTONE_LDAP_TLS_REQ_CERT=demand
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_DB_PURGE_ENABLE=True
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES=
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=/etc/cinder/shares.conf
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_PARTNER_BACKEND_NAME=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_ESERIES_HOST_TYPE=linux_dm_mp
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_MANILA_DB_PW=PW_PLACEHOLDER
CONFIG_MANILA_KS_PW=PW_PLACEHOLDER
CONFIG_MANILA_BACKEND=generic
CONFIG_MANILA_NETAPP_DRV_HANDLES_SHARE_SERVERS=false
CONFIG_MANILA_NETAPP_TRANSPORT_TYPE=https
CONFIG_MANILA_NETAPP_LOGIN=admin
CONFIG_MANILA_NETAPP_PASSWORD=
CONFIG_MANILA_NETAPP_SERVER_HOSTNAME=
CONFIG_MANILA_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_MANILA_NETAPP_SERVER_PORT=443
CONFIG_MANILA_NETAPP_AGGREGATE_NAME_SEARCH_PATTERN=(.*)
CONFIG_MANILA_NETAPP_ROOT_VOLUME_AGGREGATE=
CONFIG_MANILA_NETAPP_ROOT_VOLUME_NAME=root
CONFIG_MANILA_NETAPP_VSERVER=
CONFIG_MANILA_GENERIC_DRV_HANDLES_SHARE_SERVERS=true
CONFIG_MANILA_GENERIC_VOLUME_NAME_TEMPLATE=manila-share-%s
CONFIG_MANILA_GENERIC_SHARE_MOUNT_PATH=/shares
CONFIG_MANILA_SERVICE_IMAGE_LOCATION=https://www.dropbox.com/s/vi5oeh10q1qkckh/ubuntu_1204_nfs_cifs.qcow2
CONFIG_MANILA_SERVICE_INSTANCE_USER=ubuntu
CONFIG_MANILA_SERVICE_INSTANCE_PASSWORD=ubuntu
CONFIG_MANILA_NETWORK_TYPE=neutron
CONFIG_MANILA_NETWORK_STANDALONE_GATEWAY=
CONFIG_MANILA_NETWORK_STANDALONE_NETMASK=
CONFIG_MANILA_NETWORK_STANDALONE_SEG_ID=
CONFIG_MANILA_NETWORK_STANDALONE_IP_RANGE=
CONFIG_MANILA_NETWORK_STANDALONE_IP_VERSION=4
CONFIG_MANILA_GLUSTERFS_SERVERS=
CONFIG_MANILA_GLUSTERFS_NATIVE_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_VOLUME_PATTERN=
CONFIG_MANILA_GLUSTERFS_TARGET=
CONFIG_MANILA_GLUSTERFS_MOUNT_POINT_BASE=
CONFIG_MANILA_GLUSTERFS_NFS_SERVER_TYPE=gluster
CONFIG_MANILA_GLUSTERFS_PATH_TO_PRIVATE_KEY=
CONFIG_MANILA_GLUSTERFS_GANESHA_SERVER_IP=
CONFIG_IRONIC_DB_PW=PW_PLACEHOLDER
CONFIG_IRONIC_KS_PW=PW_PLACEHOLDER
CONFIG_NOVA_DB_PURGE_ENABLE=True
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_MANAGER=nova.compute.manager.ComputeManager
CONFIG_VNC_SSL_CERT=
CONFIG_VNC_SSL_KEY=
CONFIG_NOVA_COMPUTE_PRIVIF=
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_VPNAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_HORIZON_SECRET_KEY=33cade531a764c858e4e6c22488f379f
CONFIG_HORIZON_SSL_CERT=
CONFIG_HORIZON_SSL_KEY=
CONFIG_HORIZON_SSL_CACERT=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=09e304c52d714220
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_IMAGE_NAME=cirros
CONFIG_PROVISION_IMAGE_URL=http://download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
CONFIG_PROVISION_IMAGE_FORMAT=qcow2
CONFIG_PROVISION_IMAGE_SSH_USER=cirros
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=PW_PLACEHOLDER
CONFIG_PROVISION_TEMPEST_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_OVS_BRIDGE=n
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_CEILOMETER_COORDINATION_BACKEND=redis
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_REDIS_MASTER_HOST=192.169.142.127
CONFIG_REDIS_PORT=6379
CONFIG_REDIS_HA=n
CONFIG_REDIS_SLAVE_HOSTS=
CONFIG_REDIS_SENTINEL_HOSTS=
CONFIG_REDIS_SENTINEL_CONTACT_HOST=
CONFIG_REDIS_SENTINEL_PORT=26379
CONFIG_REDIS_SENTINEL_QUORUM=2
CONFIG_REDIS_MASTER_NAME=mymaster
CONFIG_SAHARA_DB_PW=PW_PLACEHOLDER
CONFIG_SAHARA_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_DB_PW=PW_PLACEHOLDER
CONFIG_TROVE_KS_PW=PW_PLACEHOLDER
CONFIG_TROVE_NOVA_USER=trove
CONFIG_TROVE_NOVA_TENANT=services
CONFIG_TROVE_NOVA_PW=PW_PLACEHOLDER
CONFIG_NAGIOS_PW=02f168ee8edd44e4
**************************************
At this point run on Controller:-
**************************************
 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer4Node.txt
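
Once packstack completes, a quick sanity check on the Controller might look like this (a sketch; packstack drops keystonerc_admin in /root):

 # . /root/keystonerc_admin
 # neutron agent-list
 # nova service-list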

***********************************************************
Upon completion on Network node 192.169.142.147
***********************************************************
[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.229"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

***********************************************************
Upon completion on Network node 192.169.142.157
***********************************************************
[root@ip-192-169-142-157 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.230"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no

[root@ip-192-169-142-157 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next step to be performed on both Network Nodes :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

*****************************************************
On each Network Node
*****************************************************
# systemctl start keepalived
# systemctl enable keepalived
****************************************************************************
On Controller and both Network Nodes
Update /etc/neutron/neutron.conf as follows
****************************************************************************
[DEFAULT]
 router_distributed = False
 l3_ha = True
 max_l3_agents_per_router = 2
 dhcp_agents_per_network  = 2
*****************************************************************************

Then restart neutron services on all three nodes (or simply reboot all nodes).
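
A minimal sketch of applying those settings with crudini (shipped as a packstack dependency) instead of hand-editing, followed by a service restart rather than a reboot:

 # crudini --set /etc/neutron/neutron.conf DEFAULT router_distributed False
 # crudini --set /etc/neutron/neutron.conf DEFAULT l3_ha True
 # crudini --set /etc/neutron/neutron.conf DEFAULT max_l3_agents_per_router 2
 # crudini --set /etc/neutron/neutron.conf DEFAULT dhcp_agents_per_network 2
 # openstack-service restart neutron     # on Controller and both Network Nodes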




 ******************************************************************
Creating HA Neutron Router belonging to tenant demo
******************************************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# python
Python 2.7.5 (default, Jun 24 2015, 00:41:19)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from keystoneclient.v2_0 import client
>>> token = '3ad2de159f9649afb0c342ba57e637d9'
>>> endpoint = 'http://192.169.142.127:35357/v2.0'
>>> keystone = client.Client(token=token, endpoint=endpoint)
>>> keystone.tenants.list()
[<Tenant {u'enabled': True, u'description': u'Tenant for the openstack services', u'name': u'services', u'id': u'20d1f633cb384e07b9019cb01ee9f02c'}>, <Tenant {u'enabled': True, u'description': u'admin tenant', u'name': u'admin', u'id': u'cce9a541723a4c26b70b746bab051f6c'}>, <Tenant {u'enabled': True, u'description': u'default tenant', u'name': u'demo', u'id': u'd9d06a467fb54b6e9612cbb1a245c370'}>]
>>>
# neutron router-create --ha True --tenant_id  d9d06a467fb54b6e9612cbb1a245c370 RouterHA

Attach demo_network and the external network to RouterHA, e.g. as sketched below.
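One way to do that from the CLI (a sketch; the external and demo subnet names below are placeholders, substitute the names used in your deployment):

 # source keystonerc_admin
 # neutron router-gateway-set RouterHA <external-net-name>
 # neutron router-interface-add RouterHA <demo-subnet-name-or-id>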



[root@ip-192-169-142-127 ~(keystone_admin)]# neutron l3-agent-list-hosting-router RouterHA
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| id                                   | host                                   | admin_state_up | alive | ha_state |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
| 1e8aec09-e4a4-473a-91c7-9771e0499b1c | ip-192-169-142-157.ip.secureserver.net | True           | :-)   | active   |
| 33b5ec51-33b6-49ee-b5bf-1c66c283b818 | ip-192-169-142-147.ip.secureserver.net | True           | :-)   | standby  |
+--------------------------------------+----------------------------------------+----------------+-------+----------+
 
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep "147"
| 30c38f80-4dee-4144-a2aa-a088629f33fb | Metadata agent     | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |
| 33b5ec51-33b6-49ee-b5bf-1c66c283b818 | L3 agent           | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 8390e450-c5ff-4697-aff3-7cfd66873055 | Open vSwitch agent | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| d01a0e08-31ab-41d9-bf4b-11888d82bc41 | DHCP agent         | ip-192-169-142-147.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list | grep "157"
| 1e8aec09-e4a4-473a-91c7-9771e0499b1c | L3 agent           | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-l3-agent          |
| 84ce6181-1eaa-445b-8f14-e865c3658bad | DHCP agent         | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-dhcp-agent        |
| bf54ed7a-e478-4e0f-b38a-612cc89af26c | Open vSwitch agent | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-openvswitch-agent |
| f1a1d7fc-6cc2-44c0-9254-367d9dcbb74c | Metadata agent     | ip-192-169-142-157.ip.secureserver.net | :-)   | True           | neutron-metadata-agent    |

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron router-show RouterHA
+-----------------------+---------------------------------------------------------------------------------------------+
| Field                 | Value                                                                                       |
+-----------------------+---------------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                        |
| distributed           | False                                                                                       |
| external_gateway_info | {"network_id": "b87a1cdf-8635-424b-b986-347aa1b2d4a7", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "65472e1a-f6ff-4549-b7e8-ab2010b88c69", "ip_address": "172.24.4.227"}]} |
| ha                    | True                                                                                        |
| id                    | a4bdf550-76a5-4069-9d03-075b8668f3c5                                                        |
| name                  | RouterHA                                                                                    |
| routes                |                                                                                             |
| status                | ACTIVE                                                                                      |
| tenant_id             | d9d06a467fb54b6e9612cbb1a245c370                                                            |
+-----------------------+---------------------------------------------------------------------------------------------+
  Verify VRRP advertisements sent from the master node's HA interface IP address on the corresponding network interface, e.g. as sketched below:
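A sketch (substitute the actual qrouter id; the ha- interface name is the one visible inside the namespace, e.g. ha-3c63186b-f7 on 192.169.142.147 per the ovs-vsctl output further below; VRRP is IP protocol 112):

 # ip netns | grep qrouter
 qrouter-a4bdf550-76a5-4069-9d03-075b8668f3c5
 # ip netns exec qrouter-a4bdf550-76a5-4069-9d03-075b8668f3c5 \
     tcpdump -lnn -i ha-3c63186b-f7 ip proto 112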
  


  Verify the status of neutron services on each of the Network Nodes
 

  Running VMs
  
     Connectivity verification
  

   Current MASTER is 192.169.142.157
  

   
   MASTER 192.169.142.157 stopped; 192.169.142.147 changes state from BACKUP to MASTER
  


   Connectivity verification to 172.24.4.231
  


192.169.142.157 brought up again
 


      192.169.142.157 goes to MASTER state again after 192.169.142.147 shutdown.
  
   **************************************
   Network node 192.169.142.147
   **************************************
   [root@ip-192-169-142-147 ~]# ovs-vsctl show
5b798479-567a-4d14-bbb7-d014e001307c
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00009d"
            Interface "vxlan-0a00009d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.157"}

        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}

        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "tap9b85b5b7-4c"
            tag: 2
            Interface "tap9b85b5b7-4c"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-299a4e77-af"
            tag: 2
            Interface "qr-299a4e77-af"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "ha-3c63186b-f7"
            tag: 1
            Interface "ha-3c63186b-f7"
                type: internal
    Bridge br-ex
        Port "qg-c88a6f64-88"
            Interface "qg-c88a6f64-88"
                type: internal
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.4.0"
**************************************
Network node 192.169.142.157
**************************************
[root@ip-192-169-142-157 ~]# ovs-vsctl show
15fa30fd-6900-4de7-ac1b-69760ccdfa4f
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port "qg-c88a6f64-88"
            Interface "qg-c88a6f64-88"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.157", out_key=flow, remote_ip="10.0.0.137"}

        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000093"
            Interface "vxlan-0a000093"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.157", out_key=flow, remote_ip="10.0.0.147"}

    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "ha-083e9c72-69"
            tag: 2
            Interface "ha-083e9c72-69"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-299a4e77-af"
            tag: 1
            Interface "qr-299a4e77-af"
                type: internal
    ovs_version: "2.4.0"


 

 
   

Monday, October 12, 2015

RDO Liberty DVR Neutron workflow on CentOS 7.1

UPDATE 10/23/2015
Post updated for final RDO Liberty Release
END UPDATE 

Per http://specs.openstack.org/openstack/neutron-specs/specs/juno/neutron-ovs-dvr.html 
DVR is supposed to address the following problems of the traditional 3 Node
deployment schema:-

Problem 1: Intra VM traffic flows through the Network Node
In this case even VMs traffic that belong to the same tenant
on a different subnet has to hit the Network Node to get routed
between the subnets. This would affect Performance.

Problem 2: VMs with FloatingIP also receive and send packets
through the Network Node Routers.
FloatingIP (DNAT) translation done at the Network Node and also
the external network gateway port is available only at the Network Node.
So any traffic that is intended for the External Network from
the VM will have to go through the Network Node.

In this case the Network Node becomes a single point of failure
and also the traffic load will be heavy in the Network Node.
This would affect the performance and scalability.


Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, 

   Neutron (using Open vSwitch plugin && VXLAN )

- (2x) Compute node: Nova (nova-compute),
         Neutron (openvswitch-agent,l3-agent,metadata-agent )


Three CentOS 7.1 VMs (4 GB RAM, 4 VCPU, 2 VNICs) have been built for testing
on a Fedora 22 KVM Hypervisor. Two libvirt sub-nets were used: first, "openstackvms", for emulating the External && Mgmt Networks 192.169.142.0/24 with gateway virbr1 (192.169.142.1), and second, "vteps", 10.0.0.0/24, to support two VXLAN tunnels between Controller and Compute Nodes.

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>


# cat vteps.xml

<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>

# virsh net-define openstackvms.xml
# virsh net-start  openstackvms
# virsh net-autostart  openstackvms

The second libvirt sub-net may be defined and started the same way:
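
# virsh net-define vteps.xml
# virsh net-start  vteps
# virsh net-autostart  vteps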


ip-192-169-142-127.ip.secureserver.net - Controller/Network Node
ip-192-169-142-137.ip.secureserver.net - Compute Node
ip-192-169-142-147.ip.secureserver.net - Compute Node

**************************************
At this point run on Controller:-
**************************************
 # yum -y  install centos-release-openstack-liberty
 # yum -y  install openstack-packstack
 # packstack --answer-file=./answer3Node.txt
 
*********************
Answer File :-
*********************
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137,192.169.142.147
CONFIG_NETWORK_HOSTS=192.169.142.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=httpd
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=5G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

********************************************************
On Controller (X=2) and Computes X=(3,4) update :-
********************************************************
# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.1(X)7"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"

# cat ifcfg-eth0
DEVICE="eth0"
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no
***********
Then
***********
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart

Reboot

**********************************
General information   ( [3] )
**********************************
Enabling l2pop :-

On the Neutron API node, in the conf file you pass
to the Neutron service (plugin.ini/ml2_conf.ini):
[ml2]
mechanism_drivers = openvswitch,l2population

On each compute node, in the conf file you pass
to the OVS agent (plugin.ini/ml2_conf.ini):
[agent]
l2_population = True

Enable the ARP responder:
On each compute node, in the conf file
you pass to the OVS agent (plugin.ini/ml2_conf.ini):
[agent]
arp_responder = True
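
As a convenience, the same settings can be applied with crudini (a sketch; paths assume the stock RDO layout, where plugin.ini is a symlink to the ml2 config):

 # On the Neutron API node:
 crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch,l2population
 # On each compute node:
 crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini agent l2_population True
 crudini --set /etc/neutron/plugins/ml2/ml2_conf.ini agent arp_responder True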

*****************************************
On Controller update neutron.conf
*****************************************
router_distributed = True
dvr_base_mac = fa:16:3f:00:00:00

 [root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = True
external_network_bridge = br-ex
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = False
agent_mode = dvr_snat
[AGENT]

*********************************
On each Compute Node
*********************************

[root@ip-192-169-142-147 neutron]# cat l3_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = dvr
[AGENT]


 [root@ip-192-169-142-147 neutron]# cat metadata_agent.ini | grep -v ^#| grep -v ^$
[DEFAULT]
debug = False
auth_url = http://192.169.142.127:5000/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.169.142.127
nova_metadata_port = 8775
nova_metadata_protocol = http
metadata_proxy_shared_secret =a965cd23ed2f4502
metadata_workers =4
metadata_backlog = 4096
cache_url = memory://?default_ttl=5
[AGENT]

[root@ip-192-169-142-147 ml2]# pwd
/etc/neutron/plugins/ml2

[root@ip-192-169-142-147 ml2]# cat ml2_conf.ini | grep -v ^$ | grep -v ^#
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch,l2population
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[ml2_type_geneve]
[securitygroup]
enable_security_group = True
# On Compute nodes
[agent]
l2_population=True

********************************************************************************
********************************************************************************
Please be advised that a command like ( [ 2 ] ) :-
# rsync -av root@192.169.142.127:/etc/neutron/plugins/ml2 /etc/neutron/plugins
when run on Liberty Compute Node 192.169.142.147 will overwrite the file
/etc/neutron/plugins/ml2/openvswitch_agent.ini
So, after this command local_ip should be set back to its initial value.
********************************************************************************
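E.g., on 192.169.142.147 local_ip may be restored like this (a sketch using crudini):

 # crudini --set /etc/neutron/plugins/ml2/openvswitch_agent.ini ovs local_ip 10.0.0.147
 # systemctl restart neutron-openvswitch-agent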
 [root@ip-192-169-142-147 ml2]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = True
arp_responder = True
enable_distributed_routing = True
drop_flows_on_start=False
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

***************************************************************************************
On each Compute node neutron-l3-agent and neutron-metadata-agent are
supposed to be started.
***************************************************************************************
# yum install  openstack-neutron-ml2  
# systemctl start neutron-l3-agent
# systemctl start neutron-metadata-agent
# systemctl enable neutron-l3-agent
# systemctl enable neutron-metadata-agent


[root@ip-192-169-142-147 ~]# systemctl | grep openstack
openstack-ceilometer-compute.service                                                loaded active running   OpenStack ceilometer compute agent
openstack-nova-compute.service                                                      loaded active running   OpenStack Nova Compute Server

[root@ip-192-169-142-147 ~]# systemctl | grep neutron
neutron-l3-agent.service                                                            loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                      loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                   loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                         loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility

********************************************************************************************
When a floating IP gets assigned to a VM, what actually happens ( [1] ):
The same explanation may be found in ( [4] ), only the style there is not step-by-step; in particular, it contains a detailed description of the reverse
network flow and of the ARP Proxy functionality.

1.The fip-<netid> namespace is created on the local
   compute node (if it does not already exist)
2.A new port rfp-<portid> gets created on the qrouter-<routerid>
   namespace (if it does not already exist)
3.The rfp port on the qrouter namespace is assigned the associated floating IP
   address
4.The fpr port on the fip namespace gets created and linked via point-to-point 
  network to the rfp port of the qrouter namespace
5.The fip namespace gateway port fg-<portid> is assigned an additional
  address  from the public network range to set up  ARP proxy point
6.The fg-<portid> is configured as a Proxy ARP

***************************************
Network flow itself  ( [1] ):
***************************************
1.The VM, initiating transmission, sends a packet via default gateway
   and br-int forwards the traffic to the local DVR gateway port (qr-<portid>).
2.DVR routes the packet using the routing table to the rfp-<portid> port
3.The packet is applied NAT rule, replacing the source-IP of VM to
   the assigned floating IP, and then it gets sent through the rfp-<portid>
   port,  which connects to the fip namespace via point-to-point network
  169.254.31.28/31
4.The packet is received on the fpr-<portid> port in the fip namespace
   and then routed outside through the fg-<portid> port



*********************************************************
In case of particular deployment :-
*********************************************************

[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| 1b202547-e1de-4c35-86a9-3119d6844f88 | public       | e6473e85-5a4c-4eea-a42b-3a63def678c5 192.169.142.0/24 |
| 267c9192-29e2-41e2-8db4-826a6155dec9 | demo_network | 89704ab3-5535-4c87-800e-39255a0a11d9 50.0.0.0/24      |
+--------------------------------------+--------------+-------------------------------------------------------+



[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
fip-1b202547-e1de-4c35-86a9-3119d6844f88
qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf


 [root@ip-192-169-142-147 ~]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip rule
0:    from all lookup local
32766:    from all lookup main
32767:    from all lookup default
57480:    from 50.0.0.15 lookup 16
57481:    from 50.0.0.13 lookup 16

838860801:    from 50.0.0.1/24 lookup 838860801


[root@ip-192-169-142-147 ~]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip route show table 16
default via 169.254.31.29 dev rfp-51ed47a7-3


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ip route

50.0.0.0/24 dev qr-b0a8a232-ab  proto kernel  scope link  src 50.0.0.1
169.254.31.28/31 dev rfp-51ed47a7-3  proto kernel  scope link  src 169.254.31.28 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf iptables-save -t nat | grep "^-A"|grep l3-agent

-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A neutron-l3-agent-OUTPUT -d 192.169.142.153/32 -j DNAT --to-destination 50.0.0.13
-A neutron-l3-agent-OUTPUT -d 192.169.142.156/32 -j DNAT --to-destination 50.0.0.15

-A neutron-l3-agent-POSTROUTING ! -i rfp-51ed47a7-3 ! -o rfp-51ed47a7-3 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 192.169.142.153/32 -j DNAT --to-destination 50.0.0.13
-A neutron-l3-agent-PREROUTING -d 192.169.142.156/32 -j DNAT --to-destination 50.0.0.15

-A neutron-l3-agent-float-snat -s 50.0.0.13/32 -j SNAT --to-source 192.169.142.153
-A neutron-l3-agent-float-snat -s 50.0.0.15/32 -j SNAT --to-source 192.169.142.156
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
 


[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec fip-1b202547-e1de-4c35-86a9-3119d6844f88 ip route

default via 192.169.142.1 dev fg-58e0cabf-07
169.254.31.28/31 dev fpr-51ed47a7-3  proto kernel  scope link  src 169.254.31.29
192.169.142.0/24 dev fg-58e0cabf-07  proto kernel  scope link  src 192.169.142.154
192.169.142.153 via 169.254.31.28 dev fpr-51ed47a7-3
192.169.142.156 via 169.254.31.28 dev fpr-51ed47a7-3
 




[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-51ed47a7-3fcf-4389-9961-0b457e10cecf ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-b0a8a232-ab: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe23:586c  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:23:58:6c  txqueuelen 0  (Ethernet)
        RX packets 88594  bytes 6742614 (6.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173961  bytes 234594118 (223.7 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

rfp-51ed47a7-3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.28  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::282e:4bff:fe52:3bca  prefixlen 64  scopeid 0x20<link>
        ether 2a:2e:4b:52:3b:ca  txqueuelen 1000  (Ethernet)
        RX packets 173514  bytes 234542852 (223.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87837  bytes 6670792 (6.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
fe2f4449-82fc-45e9-8827-6c6d9c8cc92d
    Bridge br-int
        fail_mode: secure
        Port "qr-b0a8a232-ab"
            tag: 1
            Interface "qr-b0a8a232-ab"

                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo19855b4d-3b"
            tag: 1
            Interface "qvo19855b4d-3b"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "qvobd487c99-41"
            tag: 1
            Interface "qvobd487c99-41"
    Bridge br-ex
        Port "fg-58e0cabf-07"
            Interface "fg-58e0cabf-07"

                type: internal
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-0a00007f"
            Interface "vxlan-0a00007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.127"}
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.4.0"

[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec fip-1b202547-e1de-4c35-86a9-3119d6844f88 ifconfig
fg-58e0cabf-07: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.169.142.154  netmask 255.255.255.0  broadcast 192.169.142.255
        inet6 fe80::f816:3eff:fe15:efff  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:15:ef:ff  txqueuelen 0  (Ethernet)
        RX packets 173587  bytes 234547834 (223.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 87751  bytes 6665500 (6.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

fpr-51ed47a7-3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 169.254.31.29  netmask 255.255.255.254  broadcast 0.0.0.0
        inet6 fe80::a805:e5ff:fe38:3bb1  prefixlen 64  scopeid 0x20<link>
        ether aa:05:e5:38:3b:b1  txqueuelen 1000  (Ethernet)
        RX packets 87841  bytes 6671008 (6.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 173518  bytes 234543068 (223.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



****************
On Controller
****************
(Screenshots in the original blog post.)

Thursday, October 08, 2015

Multiple external networks with a single L3 agent testing on RDO Liberty per Lars Kellogg-Stedman

Following below is supposed to test, in a multi-node environment,
Multiple external networks with a single L3 agent by Lars Kellogg-Stedman

However, the current post contains an attempt to analyze and understand how traffic to/from the external network flows through br-int when provider external networks are involved.

I was also hit by Bug "neutron-openvswitch-agent is crashing with 'invalid literal for int() with base 10' error",
and the patch https://review.openstack.org/#/c/225001/ was also applied.

Basic 3 VM node setup was done per https://www.linux.com/community/blogs/133-general-linux/854587-rdo-liberty-beta-set-up-for-three-vm-nodes-controllernetworkcompute-ml2aovsavxlan-on-centos71/

Nested KVM was enabled for all VMs hosting RDO Liberty nodes.
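
For reference, a minimal sketch of how nested KVM is typically enabled on an Intel host (the file name under /etc/modprobe.d is an assumption; any .conf file there works):

 # cat /etc/modprobe.d/kvm_intel.conf
 options kvm_intel nested=1
 # reload the module (or reboot), then verify:
 # modprobe -r kvm_intel && modprobe kvm_intel
 # cat /sys/module/kvm_intel/parameters/nested
 Y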

Create two Libvirt sub-nets, external3 and external4, on the KVM Virtualization Host (F22)

[root@fedora22wksr ~]# cat external3.xml
<network>
   <name>external3</name>
   <uuid>d0e9964b-f95d-40c2-b749-b609aed52cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr6' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.3.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.3.0.1' end='10.3.0.254' />
     </dhcp>
   </ip>
</network>
[root@fedora22wksr ~]# cat external4.xml
<network>
   <name>external4</name>
   <uuid>d0e9964b-f97d-40c2-b749-b609aed52cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr7' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.4.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.4.0.1' end='10.4.0.254' />
     </dhcp>
   </ip>
</network>

Shut down the VM hosting the Network Node and add two VNICs: eth3 belonging to
external3, eth4 belonging to external4.
Start the VM up and create the corresponding files ifcfg-eth3, ifcfg-eth4 with static
IP addresses.

# service network restart

or reboot the Network Node.

*************************
On Network Node
*************************
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth2
# ovs-vsctl add-br br-eth3
# ovs-vsctl add-port br-eth3 eth3
# ovs-vsctl add-br br-eth4
# ovs-vsctl add-port br-eth4 eth4

******************************
Update l3_agent.ini file
******************************
external_network_bridge =
external_network_id =

***********************************************************************
Update /etc/neutron/plugins/ml2/openvswitch_agent.ini
***********************************************************************
[ovs]
network_vlan_ranges = physnet1,physnet3,physnet4
bridge_mappings = physnet1:br-ex,physnet3:br-eth3,physnet4:br-eth4

Then copy  /etc/neutron/plugins/ml2/openvswitch_agent.ini
to /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini

************************************************************************
SSH to Controller 192.169.142.127 and update ml2_conf.ini
************************************************************************
[ml2]
type_drivers = local,flat,gre,vxlan

[ml2_type_flat]
flat_networks = *

# openstack-service restart on Controller

**********************************************************
Get back to VM hosting Network Node
**********************************************************
# openstack-service restart neutron
# systemctl | grep neutron

[root@ip-192-169-142-147 ~]# systemctl| grep neutron
neutron-dhcp-agent.service                                                          loaded active running   OpenStack Neutron DHCP Agent
neutron-l3-agent.service                                                            loaded active running   OpenStack Neutron Layer 3 Agent
neutron-metadata-agent.service                                                      loaded active running   OpenStack Neutron Metadata Agent
neutron-openvswitch-agent.service                                                   loaded active running   OpenStack Neutron Open vSwitch Agent
neutron-ovs-cleanup.service                                                         loaded active exited    OpenStack Neutron Open vSwitch Cleanup Utility

****************************************
External networks creation
****************************************
# source keystonerc_admin
# neutron net-create external3 -- --router:external  \
  --provider:network_type=flat \
  --provider:physical_network=physnet3

# neutron net-create external4 -- --router:external  \
  --provider:network_type=flat \
  --provider:physical_network=physnet4

# neutron subnet-create --disable-dhcp external3 10.3.0.0/24
# neutron subnet-create --disable-dhcp external4 10.4.0.0/24

# neutron net-create public1 --provider:network_type flat \
 --provider:physical_network physnet1 --router:external

# neutron subnet-create public1 \
 --gateway 172.24.4.225  172.24.4.224/28 \
 --allocation-pool start=172.24.4.226,end=172.24.4.238 \
 --enable_dhcp=False

*************************************************
Then login as demo and create
*************************************************
RouterExt3 with gateway to external3
RouterExt4 with gateway to external4
RouterDemo with gateway to public1

Then create private networks private1, demo-network4, demo_network5.
Attach the first to RouterDemo, the second to RouterExt4, the third to RouterExt3 (see the sketch below).
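
One way to do it via CLI (a sketch; the <...-subnet> arguments are placeholders for the subnet names or ids created with the private networks):

 # source keystonerc_demo
 # neutron router-create RouterExt3
 # neutron router-gateway-set RouterExt3 external3
 # neutron router-create RouterExt4
 # neutron router-gateway-set RouterExt4 external4
 # neutron router-create RouterDemo
 # neutron router-gateway-set RouterDemo public1
 # neutron router-interface-add RouterDemo <private1-subnet>
 # neutron router-interface-add RouterExt4 <demo-network4-subnet>
 # neutron router-interface-add RouterExt3 <demo-network5-subnet>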




Notice that the qg-xxxxxxx interfaces from all qrouter namespaces are attached to br-int.
While using provider external networks, traffic to/from the external network flows through br-int.
br-int and br-ex
are connected using the veth pair int-br-ex and phy-br-ex.

br-int and br-eth3 are connected using the veth pair int-br-eth3 and phy-br-eth3.
br-int and br-eth4 are connected using the veth pair int-br-eth4 and phy-br-eth4.
These are created automatically by neutron-openvswitch-agent based on the bridge_mappings configured earlier.
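
One way to confirm those pairs exist (a sketch; the port names follow the bridge_mappings above and are visible in the ovs-vsctl output below):

 # ovs-vsctl list-ports br-int | grep 'int-br'
 int-br-ex
 int-br-eth3
 int-br-eth4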

[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
38e920e3-da61-4a1b-876a-052a49d777a2
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a000089"
            Interface "vxlan-0a000089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge "br-eth4"
        Port "br-eth4"
            Interface "br-eth4"
                type: internal
        Port "phy-br-eth4"
            Interface "phy-br-eth4"
                type: patch
                options: {peer="int-br-eth4"}
        Port "eth4"
            Interface "eth4"
    Bridge br-int
        fail_mode: secure
        Port "tap7ce0a427-fd"
            tag: 5
            Interface "tap7ce0a427-fd"
                type: internal
        Port "qr-45110e77-5b"
            tag: 1
            Interface "qr-45110e77-5b"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-a99aa111-1d"
            tag: 3
            Interface "qr-a99aa111-1d"
                type: internal
        Port "qg-615baaa8-a6"
            tag: 6
            Interface "qg-615baaa8-a6"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap709fbf6f-ab"
            tag: 3
            Interface "tap709fbf6f-ab"
                type: internal
        Port "int-br-eth3"
            Interface "int-br-eth3"
                type: patch
                options: {peer="phy-br-eth3"}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-b7d78d6d-dd"
            tag: 5
            Interface "qr-b7d78d6d-dd"
                type: internal
        Port "int-br-eth4"
            Interface "int-br-eth4"
                type: patch
                options: {peer="phy-br-eth4"}
        Port "qg-c28dfe1c-44"
            tag: 2
            Interface "qg-c28dfe1c-44"
                type: internal
        Port "qg-54aa0373-dd"
            tag: 4
            Interface "qg-54aa0373-dd"
                type: internal
        Port "tap06adaf37-d4"
            tag: 1
            Interface "tap06adaf37-d4"
                type: internal
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-eth3"
        Port "eth3"
            Interface "eth3"
        Port "phy-br-eth3"
            Interface "phy-br-eth3"
                type: patch
                options: {peer="int-br-eth3"}
        Port "br-eth3"
            Interface "br-eth3"
                type: internal
    ovs_version: "2.3.1"