Tuesday, April 28, 2015

Setup Nova-Docker driver && Openstack Kilo on Ubuntu 14.04 in devstack environment recoverable between reboots

  Step-by-step instructions for setting up the Nova-Docker driver && OpenStack Kilo on Ubuntu 14.04 in a devstack environment that survives reboots. Routing across the LAN is also described, for remote access to the devstack (stack.sh) public network. I've tried to cover all problems known at the time of writing that prevent ./rejoin-stack.sh from running successfully. However, I cannot guarantee that tomorrow some other daemon won't refuse to rejoin the stack instance after a reboot; the only really safe option (for me, at least) would be the RDO Kilo release expected in May 2015. The main concern of this post is getting the Kilo Nova-Docker driver to load successfully in a development environment.

Proceed as follows :-

$ sudo apt-get update
$ sudo apt-get -y install git git-review python-pip python-dev
$ sudo apt-get -y upgrade

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9
$ sudo sh -c "echo deb https://get.docker.com/ubuntu docker main  \
   > /etc/apt/sources.list.d/docker.list"
$ sudo apt-get update
$ sudo apt-get install lxc-docker

*********************************************
Update /etc/default/docker, setting:
*********************************************
DOCKER_OPTS='-G ubuntu'

#service docker restart

Log out from root and verify :-

ubuntu@ubuntu-WKS:~$ docker -v
Docker version 1.6.0, build 4749651
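
To confirm that the ubuntu user can now reach the Docker daemon without sudo (a quick check, assuming your login's primary group is ubuntu, which is what the -G option above grants access to):

ubuntu@ubuntu-WKS:~$ docker ps

An empty container list (and no permission error) means the socket permissions are right.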

*******************************
Installing nova-docker
*******************************
$ git clone http://github.com/stackforge/nova-docker.git
$ cd nova-docker
$ sudo pip install .
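
As a quick sanity check that the driver installed where Nova will look for it, the import below mirrors the compute_driver value used later in local.conf, so if it fails, nova-compute will fail to load the driver by the same path:

$ python -c "from novadocker.virt.docker import DockerDriver"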


*****************************
Configuring devstack
*****************************

Now we're ready to get devstack up and running. Start by cloning the repository:

$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack
$ git checkout -b kilo origin/stable/kilo

******************************************
Create local.conf under devstack
******************************************
[[local|localrc]]
HOST_IP=192.168.1.142
ADMIN_PASSWORD=secret
MYSQL_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
FLOATING_RANGE=192.168.12.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.12.150,end=192.168.12.254
PUBLIC_NETWORK_GATEWAY=192.168.12.15

SERVICE_TOKEN=super-secret-admin-token
VIRT_DRIVER=novadocker.virt.docker.DockerDriver

DEST=/opt/stack
SERVICE_DIR=$DEST/status
DATA_DIR=$DEST/data
LOGFILE=$DEST/logs/stack.sh.log
LOGDIR=$DEST/logs

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest

# Introduce glance to docker images
[[post-config|$GLANCE_API_CONF]]
[DEFAULT]
container_formats=ami,ari,aki,bare,ovf,ova,docker

# Configure nova to use the nova-docker driver
[[post-config|$NOVA_CONF]]
[DEFAULT]
compute_driver=novadocker.virt.docker.DockerDriver

**************************************
Corresponding iptables entry
**************************************
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

At this point you are ready to run :-

$ ./stack.sh

*****************************************************************************
Attention: skipping this step causes a "No hosts available" error when
launching nova-docker instances, or a failure to launch them when
stack.sh is rerun after ./unstack.sh
******************************************************************************

$ sudo cp nova-docker/etc/nova/rootwrap.d/docker.filters \
  /etc/nova/rootwrap.d/
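
A quick check that the filters really landed next to the other Nova rootwrap filters:

$ ls /etc/nova/rootwrap.d/ | grep docker
docker.filters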

$ . openrc admin

Admin credentials will be needed below for the docker pull && docker save and the glance upload.
************************************************************************
To enable security rules and launch NovaDocker Container :-
************************************************************************

$ cd dev*

$ . openrc demo 

$ neutron security-group-rule-create --protocol icmp \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

$ neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

$ neutron security-group-rule-create --protocol tcp \
  --port-range-min 80 --port-range-max 80 \
  --direction ingress --remote-ip-prefix 0.0.0.0/0 default

Uploading docker image to glance

$ . openrc admin
$  docker pull rastasheep/ubuntu-sshd:14.04
$  docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04
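
To verify both sides of the hand-off (the image cached by the local Docker daemon and the copy registered in glance), something along these lines should do:

$ docker images | grep rastasheep
$ glance image-list | grep rastasheep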

Launch new instance via uploaded image :-
$ . openrc demo
$ nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny \
    --nic net-id=private-net-id UbuntuDocker
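
To watch the container come up and make it reachable over the public network, the usual sequence is sketched below; the floating IP shown is only an example from the Q_FLOATING_ALLOCATION_POOL range, use whatever floating-ip-create actually returns:

$ nova list
$ nova floating-ip-create public
$ nova floating-ip-associate UbuntuDocker 192.168.12.151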
 

Security rules may also be enabled via the dashboard.

*******************************************************************************
 You have run `sudo ./unstack.sh`, rebooted the box hosting the devstack instance, and the OVS bridge "br-ex" came up with no IP, no matter which local.conf was used for the ./stack.sh deployment.
Before running ./rejoin-stack.sh the following actions have to be taken
(just add them to /etc/rc.local)
*******************************************************************************

    sudo ip addr flush dev br-ex
    sudo ip addr add 192.168.12.15/24 dev br-ex

    sudo ip link set br-ex up
    sudo route add -net 10.254.1.0/24 gw 192.168.12.15
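
For reference, a minimal /etc/rc.local carrying these commands might look as follows (just a sketch: rc.local runs as root, so sudo is dropped, and the IPs must match your local.conf):

#!/bin/sh -e
# restore br-ex addressing so ./rejoin-stack.sh finds the public network again
ip addr flush dev br-ex
ip addr add 192.168.12.15/24 dev br-ex
ip link set br-ex up
route add -net 10.254.1.0/24 gw 192.168.12.15
exit 0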



You might also experience problems rejoining the cinder-volume daemon. In that case copy the corresponding backing files to your $HOME folder and, before ./rejoin-stack.sh, run ./cinder.sh from $HOME :-


ubuntu@ubuntu-WKS:~$ cat cinder.sh
cp stack-volumes-default-backing-file /opt/stack/data/stack-volumes-default-backing-file
cp stack-volumes-lvmdriver-1-backing-file /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
sudo losetup /dev/loop0 /opt/stack/data/stack-volumes-default-backing-file
sudo losetup /dev/loop1 /opt/stack/data/stack-volumes-lvmdriver-1-backing-file
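
Before running ./rejoin-stack.sh it is worth checking that both backing files are attached again:

$ sudo losetup -a

Both /opt/stack/data backing files should show up, attached to /dev/loop0 and /dev/loop1.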


*******************************************************************************
To access nova-docker instances running within the stack (devstack) AIO instance on the Ubuntu 14.04 host 192.168.1.142 from other boxes located on the same
office LAN (192.168.1.0/24), proceed as follows :-
*******************************************************************************

***************************
Run on Devstack Node
***************************
# Add route to LAN
$ sudo route add -net  192.168.1.0/24 gw 192.168.1.142

**************************
Run on LAN's box
**************************
# Add route to devstack public network  via HOST_IP
$ sudo route add -net 192.168.12.0/24 gw 192.168.1.142

where 192.168.1.142 is the HOST_IP of the Devstack node running the stack instance,
192.168.12.0/24 is Devstack's public network, and 192.168.1.0/24 is the LAN.

Routing table on Devstack node should look like :-

ubuntu@ubuntu-WKS:~$ route -n

Kernel IP routing table

Destination     Gateway          Genmask          Flags  Metric  Ref  Use  Iface
0.0.0.0         192.168.1.1      0.0.0.0          UG     0       0    0    eth0
10.254.1.0      192.168.12.150   255.255.255.0    UG     0       0    0    br-ex
172.17.0.0      0.0.0.0          255.255.0.0      U      0       0    0    docker0
192.168.1.0     192.168.1.142    255.255.255.0    UG     0       0    0    eth0
192.168.1.0     0.0.0.0          255.255.255.0    U      1       0    0    eth0
192.168.12.0    0.0.0.0          255.255.255.0    U      0       0    0    br-ex
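
With those routes in place, a quick reachability test from any box on the LAN (the address below is just an example floating IP from the 192.168.12.150-254 pool, associated with the UbuntuDocker instance above):

$ ping -c 3 192.168.12.151
$ ssh root@192.168.12.151

For the rastasheep/ubuntu-sshd image the documented default credentials are root/root.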

   


References

http://blog.oddbit.com/2015/02/11/installing-novadocker-with-devstack/ 

Sunday, April 19, 2015

Nested KVM set up on Fedora 22 && Running devstack on Ubuntu 14.04 guests

Below are brief instructions for achieving very high performance of VMs created via devstack (stack.sh) inside another virtual machine created with the Fedora 22 KVM hypervisor and having the nested KVM feature enabled. This works with sufficiently recent Intel CPUs (Haswell or later, which have the newer hardware virtualization extensions) and 16 GB or more of RAM.

****************************************
Create non-default libvirt subnet
****************************************

1. Create a new libvirt network definition file (other than your default 192.168.x.x):

$ cat devstackvms.xml

<network>
   <name>devstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6e'/>
   <ip address='192.157.141.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.157.141.2' end='192.157.141.254' />
     </dhcp>
   </ip>
 </network>


2. Define the network:

 $ virsh net-define devstackvms.xml

3. Start the network and enable autostart:

 $ virsh net-start devstackvms
 $ virsh net-autostart devstackvms

4. List your libvirt networks to verify:

$ virsh net-list

  Name               State      Autostart     Persistent
 ----------------------------------------------------------
  default            active     yes           yes
  devstackvms        active     yes           yes



Launch a VM (Ubuntu1404) attached to the subnet just created; set Disk && Network to "Virtio" mode before starting the installation (see the sketch below).
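
One possible virt-install invocation from the Fedora 22 host is sketched below; the disk path, sizes and ISO location are placeholders to be adjusted to your environment:

$ sudo virt-install --name Ubuntu1404 --ram 8192 --vcpus 4 \
    --disk path=/var/lib/libvirt/images/Ubuntu1404.qcow2,size=40,bus=virtio \
    --network network=devstackvms,model=virtio \
    --cdrom /var/lib/libvirt/images/ubuntu-14.04.2-server-amd64.iso \
    --graphics vnc --os-variant ubuntutrusty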

**********************************************************************************
 Procedure to enable nested virtualization (on Intel-based machines) [ 1 ]
**********************************************************************************

1. List modules and ensure KVM Kernel modules are enabled on L0:

    $ lsmod | grep -i kvm
    kvm_intel             133627  0
    kvm                   435079  1 kvm_intel


2. Show information for `kvm_intel` module:

    $ modinfo kvm_intel | grep -i nested
    parm:           nested:bool


3. Ensure nested virt is persistent across reboots by adding it as a
   config directive:

    $ cat /etc/modprobe.d/dist.conf
    options kvm-intel nested=y
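
If no guests are currently running on L0, the new value can also be picked up without a full reboot (otherwise just proceed with step 4):

    $ sudo modprobe -r kvm_intel
    $ sudo modprobe kvm_intel nested=1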


4. Reboot the host.


5. Check if the Nested KVM Kernel module option is enabled:

    $ cat /sys/module/kvm_intel/parameters/nested
    Y


6. Before you boot your L1 guest (i.e. the guest hypervisor that runs
   the nested guest), expose virtualization extensions to it. The
   following exposes all the CPU features of host to your guest
   unconditionally:

    $ virt-xml Ubuntu1404 --edit  --cpu host-passthrough,clearxml=yes


7. Start your L1 guest (i.e. guest hypervisor):

    $ virsh start Ubuntu1404  --console


8. Ensure KVM extensions are enabled in L1 guest by running the below
   command:

$ file /dev/kvm      
    /dev/kvm: character special


You might also check whether Shadow VMCS, APIC virtualization and EPT are enabled on the physical host (L0):
    $ cat /sys/module/kvm_intel/parameters/enable_shadow_vmcs
    Y

    $ cat /sys/module/kvm_intel/parameters/enable_apicv
    N

    $ cat /sys/module/kvm_intel/parameters/ept
    Y

 
   


***************************************************************
Devstack installation procedure on Ubuntu 14.04.2 VM
***************************************************************


$ git clone https://git.openstack.org/openstack-dev/devstack
$ cd devstack

********************************************
Create local.conf
********************************************

[[local|localrc]]
HOST_IP=192.157.141.57
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50

FLOATING_RANGE=192.168.12.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.12.150,end=192.168.12.254
PUBLIC_NETWORK_GATEWAY=192.168.12.15

# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen

# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1

# Services
disable_service n-net
enable_service  n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest


Then run ./stack.sh
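
To confirm that the guest hypervisor really received the virtualization extensions (so devstack configures KVM rather than plain QEMU), a quick check inside the Ubuntu1404 VM after stack.sh finishes (the nova.conf location assumes devstack's usual layout):

$ egrep -c '(vmx|svm)' /proc/cpuinfo
$ grep virt_type /etc/nova/nova.conf

The first command should print a non-zero count and the second should report virt_type = kvm.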

 

 

  

  


****************************************************************************
To provide outbound connectivity, run from within the VM running the stack instance
****************************************************************************

 # iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
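
Masquerading only helps if IP forwarding is enabled; devstack normally turns it on, but it is worth verifying:

 $ sysctl net.ipv4.ip_forward

If it reports 0, enable it with `sysctl -w net.ipv4.ip_forward=1` (as root).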



****************************************************************************
To provide inbound connectivity (from the host running the KVM hypervisor)
to the created VMs (L2), run from within the VM (L1)
****************************************************************************

# route add -net 192.168.1.0/24  gw 192.157.141.57 

where 192.157.141.57 is the KVM guest's IP on the non-default libvirt subnet (devstackvms), and 192.168.1.0/24 is the subnet hosting the machine 192.168.1.47 that runs the KVM hypervisor.


On machine 192.168.1.47 (L0), which is the Fedora 22 box running KVM/QEMU/libvirt,
run :-

# route add -net 192.168.12.0/24 gw 192.157.141.57


where 192.168.12.0/24 is the devstack public network (see local.conf).


 

Wednesday, April 15, 2015

Nova libvirt-xen driver fails to schedule instance under Xen 4.4.1 Hypervisor with libxl toolstack

UPDATE 16/04/2015
For now http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
is supposed to work only with nova-network, per Anthony PERARD;
Neutron appears to be an issue.
Please see the details of the troubleshooting and diagnostics obtained (thanks to Ian Campbell) :-
http://lists.xen.org/archives/html/xen-devel/2015-04/msg01856.html
END UPDATE

This post is written with regard to two publications from February 2015.
First:   http://wiki.xen.org/wiki/OpenStack_via_DevStack
Second : http://www.slideshare.net/xen_com_mgr/openstack-xenfinal
Both are devoted to the same problem with the nova libvirt-xen driver. The second one states that everything is supposed to be fine as soon as some mysterious patch gets merged into mainline libvirt. Neither works for me: both generate errors in libxl-driver.log even with libvirt 1.2.14 (the most recent version as of the time of writing).
For a better understanding of the problem raised, see also https://ask.openstack.org/en/question/64942/nova-libvirt-xen-driver-and-patch-feb-2015-in-upstream-libvirt/
I followed the more accurately written second one :-
On Ubuntu 14.04.2

# apt-get update
# apt-get -y upgrade
# apt-get install xen-hypervisor-4.4-amd64
# sudo reboot
$ git clone https://git.openstack.org/openstack-dev/devstack

Created local.conf under devstack folder as follows :-

[[local|localrc]]
HOST_IP=192.168.1.57
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=a682f596-76f3-11e3-b3b2-e716f9080d50
FLOATING_RANGE=192.168.10.0/24
FLAT_INTERFACE=eth0
Q_FLOATING_ALLOCATION_POOL=start=192.168.10.150,end=192.168.10.254
PUBLIC_NETWORK_GATEWAY=192.168.10.15
# Useful logging options for debugging:
DEST=/opt/stack
LOGFILE=$DEST/logs/stack.sh.log
SCREEN_LOGDIR=$DEST/logs/screen
# The default fixed range (10.0.0.0/24) conflicted with an address
# range I was using locally.
FIXED_RANGE=10.254.1.0/24
NETWORK_GATEWAY=10.254.1.1
# Services
disable_service n-net
enable_service n-cauth
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service horizon
disable_service tempest
# This is a Xen Project host:
LIBVIRT_TYPE=xen

 Ran ./stack.sh and installation completed successfully; libvirt versions 1.2.2, 1.2.9 and 1.2.14 have been tested. The first is the default on Trusty; 1.2.9 && 1.2.14 were built and installed after stack.sh completion. For every libvirt version tested a fresh hardware instance of Ubuntu 14.04.2 was created.

Manual libvirt upgrade was done via :-

# apt-get build-dep libvirt
# tar xvzf libvirt-1.2.14.tar.gz -C /usr/src
# cd /usr/src/libvirt-1.2.14
# ./configure --prefix=/usr/
# make
# make install
# service libvirt-bin restart

root@ubuntu-system:~# virsh --connect xen:///
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # version
Compiled against library: libvirt 1.2.14
Using library: libvirt 1.2.14
Using API: Xen 1.2.14
Running hypervisor: Xen 4.4.0

*********************************
Per page 19 of the second post
*********************************
The xen.gz command line was tuned, then :-
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec set vm_mode=HVM
ubuntu@ubuntu-system:~/devstack$ nova image-meta cirros-0.3.2-x86_64-uec delete vm_mode

An attempt to launch an instance (nova-compute is up) fails; n-sch.log reports the error "No available host found" on the Nova side.

The libxl-driver.log reports :-
root@ubuntu-system:/var/log/libvirt/libxl# ls -l
total 32
-rw-r--r-- 1 root root 30700 Apr 12 03:47 libxl-driver.log
*****************************************************************************************
libxl: debug: libxl_dm.c:1320:libxl__spawn_local_dm: Spawning device-model /usr/bin/qemu-system-i386 with arguments:
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: /usr/bin/qemu-system-i386
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-domid
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 2
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -chardev
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -mon
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: chardev=libxl-cmd,mode=control
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -nodefaults
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -xen-attach
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -name
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: instance-00000002
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -vnc
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 127.0.0.1:1
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -display
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: none
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -k
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: en-us
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -machine
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: xenpv
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: -m
libxl: debug: libxl_dm.c:1322:libxl__spawn_local_dm: 513
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: register slotnum=3
libxl: debug: libxl_create.c:1356:do_domain_create: ao 0x7f36cc0012e0: inprogress: poller=0x7f36d8013130, flags=i
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: event epath=/local/domain/0/device-model/2/state
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990 wpath=/local/domain/0/device-model/2/state token=3/3: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36cc001990: deregister unregistered
libxl: debug: libxl_qmp.c:696:libxl__qmp_initialize: connected to /var/run/xen/qmp-libxl-2
libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: qmp
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "qmp_capabilities",
"id": 1
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-chardev",
"id": 2
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_qmp.c:546:qmp_send_prepare: next qmp command: '{
"execute": "query-vnc",
"id": 3
}

libxl: debug: libxl_qmp.c:296:qmp_handle_response: message type: return
libxl: debug: libxl_event.c:570:libxl__ev_xswatch_register: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: register slotnum=3
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:657:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:514:watchfd_callback: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: event epath=/local/domain/0/backend/vif/2/0/state
libxl: debug: libxl_event.c:653:devstate_watch_callback: backend /local/domain/0/backend/vif/2/0/state wanted state 2 ok
libxl: debug: libxl_event.c:606:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8 wpath=/local/domain/0/backend/vif/2/0/state token=3/4: deregister slotnum=3
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b3e8: deregister unregistered
libxl: debug: libxl_device.c:1023:device_hotplug: calling hotplug script: /etc/xen/scripts/vif-bridge online
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: /etc/xen/scripts/vif-bridge online [-1] exited with error status 1
libxl: error: libxl_device.c:1085:device_hotplug_child_death_cb: script: ip link set vif2.0 name tap5600079c-9e failed
libxl: debug: libxl_event.c:618:libxl__ev_xswatch_deregister: watch w=0x7f36f284b470: deregister unregistered
libxl: error: libxl_create.c:1226:domcreate_attach_vtpms: unable to add nic devices

libxl: debug: libxl_dm.c:1495:kill_device_model: Device Model signaled

Tuesday, April 14, 2015

Establishing access to public devstack (stack) network from LAN

To access VMs running within the stack (devstack) AIO instance on the Ubuntu 14.04 host 192.168.1.57 from other boxes located on the same office LAN (192.168.1.0/24), proceed as follows :-

Run on Devstack Node
# Add route to LAN
$ sudo route add -net  192.168.1.0/24 gw 192.168.1.57

Run on LAN box
# Add route to devstack public network  via HOST_IP
$ sudo route add -net 192.168.10.0/24 gw 192.168.1.57

where 192.168.1.57 is the HOST_IP of the Devstack node running the stack instance,
192.168.10.0/24 is Devstack's public network, and 192.168.1.0/24 is the LAN.

*************************************************************************************
If the stack instance is running inside a KVM guest (Ubuntu 14.04) on a libvirt subnet, then to access the stack VMs running inside that KVM guest from the F21 box hosting the KVM hypervisor, run from within the KVM guest (Ubuntu 14.04)
*************************************************************************************

# route add -net 192.168.1.0/24  gw 192.168.122.57 

where 192.168.122.57 is the KVM guest's IP on the default libvirt subnet 192.168.122.0/24, and 192.168.1.0/24 is the subnet hosting the machine 192.168.1.47 that runs the KVM hypervisor.


On machine 192.168.1.47, which is the Fedora 21 box running KVM/QEMU/libvirt,
run :-
run :-

# route add -net 192.168.12.0/24 gw 192.168.122.57

where 192.168.12.0/24 is the devstack public subnet inside the KVM guest (Ubuntu 14.04) hosting the stack (e.g. devstack) instance.

Saturday, April 11, 2015

RDO Juno multi node setup && Switching to eth(X) interfaces on Fedora 21

This post is closely related to an RDO Juno multi-node deployment via packstack on a Fedora 21 landscape with boxes having different motherboards and different Ethernet NICs, either integrated on the boards or plugged into the systems.
Originally tested on two Fedora 21 nodes: Controller&&Network and Compute.

[root@junoVHS01 ~(keystone_admin)]# uname -a
Linux junoVHS01.localdomain 3.19.3-200.fc21.x86_64 #1 SMP Thu Mar 26 21:39:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux


The interfaces are (enp3s0, enp5s0) on the first board and (enp2s0, enp5s1) on the second. On both boards they were converted to (eth0, eth1); creating udev rules to rename the Ethernet interfaces establishes a one-to-one correspondence between MAC addresses and eth(X) names. Just updating /boot/grub2/grub.cfg is not
enough on systems having several NICs. See also [ 1 ].


***************************************
Update  /etc/default/grub
***************************************
Append "net.ifnames=0 biosdevname=0" to the GRUB_CMDLINE_LINUX line

Issue :-
# grub2-mkconfig -o /boot/grub2/grub.cfg

******************************************************
Run ifconfig to get the MAC addresses of your NICs
******************************************************
[root@junoVHS01 network-scripts]# ifconfig

enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.127  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::7a24:afff:fe43:1b53  prefixlen 64  scopeid 0x20<link>
        ether 78:24:af:43:1b:53  txqueuelen 1000  (Ethernet)
        RX packets 44533  bytes 64844663 (61.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 23881  bytes 1625287 (1.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.127  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2e0:53ff:fe13:174c  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:53:13:17:4c  txqueuelen 1000  (Ethernet)
        RX packets 65  bytes 22230 (21.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 34  bytes 3466 (3.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



********************************************************
Create /etc/udev/rules.d/60-net.rules
********************************************************

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="78:24:af:43:1b:53", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:e0:53:13:17:4c", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

***********************************************************
Go to /etc/sysconfig/network-scripts
***********************************************************
cp ifcfg-enp3s0 ifcfg-eth0
cp ifcfg-enp5s0 ifcfg-eth1

and set

DEVICE="eth0"
DEVICE="eth1"

in corresponding files

# rm  -f ifcfg-enp*s*

************************
System reboot.
************************
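
After the reboot, a quick check that the kernel parameters took effect and the interfaces were renamed (assuming only the two NICs above):

# cat /proc/cmdline | grep -o 'net.ifnames=0 biosdevname=0'
net.ifnames=0 biosdevname=0
# ip -o link show | awk -F': ' '{print $2}'
lo
eth0
eth1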

This RDO Juno multi-node setup is easily updated from two to three nodes with a
separate box for the Network Node (CONFIG_NETWORK_HOSTS=192.168.1.147).
Several compute nodes may be added via CONFIG_COMPUTE_HOSTS.


*******************************************************************************
Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


junoVHS01.localdomain   -  Controller&&Network Node (192.168.1.127)
junoVHS02.localdomain   -  Compute Node   (192.168.1.137)

VTEPS (192.168.0.127 - Controller, 192.168.0.137 - Compute )

********************************************************************************
 

Answer file used by packstack :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.168.1.127
CONFIG_COMPUTE_HOSTS=192.168.1.137
CONFIG_NETWORK_HOSTS=192.168.1.127

CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.168.1.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.168.1.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2

CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4

********************************************************************
Upon successful completion you are supposed to get :-
********************************************************************

[root@junoVHS01 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
== Swift services ==
openstack-swift-proxy:                  active
openstack-swift-account:                active
openstack-swift-container:              active
openstack-swift-object:                 active
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                active
== Ceilometer services ==
openstack-ceilometer-api:               active
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
openstack-ceilometer-notification:      active
== Support services ==
openvswitch:                            active
dbus:                                   active
target:                                 inactive  (disabled on boot)
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 82fb089130a64902a3c0cdfefc25aadb |   admin    |   True  |    root@localhost    |
| 8d20be7fd2e04054992bde8af6658b5f | ceilometer |   True  | ceilometer@localhost |
| 91def7a2ef424ef287041a88341c886a |   cinder   |   True  |   cinder@localhost   |
| 77a7997146ca4a9ea8cc4572f79a111a |    demo    |   True  |                      |
| 94079d20cd6a457db9a0ab319c0d1f0f |   glance   |   True  |   glance@localhost   |
| ebf0369d9a6b49f088a10e80eabe683d |  neutron   |   True  |  neutron@localhost   |
| cae11d29ca204dee97fb3bc426afc78f |    nova    |   True  |    nova@localhost    |
| 53188618a56f4dc0a59e06703349fa39 |   swift    |   True  |   swift@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
| ID                                   | Name               | Disk Format | Container Format | Size      | Status |
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
| fbc1f97a-c176-4a64-a495-bf72580e3d9e | cirros             | qcow2       | bare             | 13200896  | active |
| 0abaa464-f41f-4871-b73d-7d264b773597 | Fedora 21 image    | qcow2       | bare             | 158443520 | active |
| 469f7921-2ffa-4f4b-b223-2cd6e9a101e2 | Ubuntu 15.04 image | qcow2       | bare             | 284492288 | active |
+--------------------------------------+--------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:34.000000 | -               |
| 2  | nova-scheduler   | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:33.000000 | -               |
| 3  | nova-conductor   | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:33.000000 | -               |
| 4  | nova-cert        | junoVHS01.localdomain | internal | enabled | up    | 2015-04-11T17:22:34.000000 | -               |
| 5  | nova-compute     | junoVHS02.localdomain | nova     | enabled | up    | 2015-04-11T17:22:34.000000 | -               |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+----------+------+
| ID                                   | Label    | Cidr |
+--------------------------------------+----------+------+
| 39b4dd7b-dc1d-4752-84eb-caeadd0e5781 | public   | -    |
| 8b1f58fd-924b-4b85-9ab6-e2ea249ac0ea | demo_net | -    |
| a5f04387-2663-4f05-9eb4-95bd30f30e9c | private  | -    |
+--------------------------------------+----------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+----+------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------+--------+------------+-------------+----------+
+----+------+--------+------------+-------------+----------+
************************************
In more detail
************************************  

[root@junoVHS01 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:44
nova-scheduler   junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:44
nova-conductor   junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:44
nova-cert        junoVHS01.localdomain                internal         enabled    :-)   2015-04-11 17:40:45
nova-compute     junoVHS02.localdomain                nova             enabled    :-)   2015-04-11 17:40:44

[root@junoVHS01 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host                  | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+
| 50b9df88-58a1-4a16-84ed-38c423bdd76f | Metadata agent     | junoVHS01.localdomain | :-)   | True           | neutron-metadata-agent    |
| 65afc586-c15e-48eb-bb29-1fd664f88960 | Open vSwitch agent | junoVHS02.localdomain | :-)   | True           | neutron-openvswitch-agent |
| b6351d3f-ffbd-4839-a6b9-5f01cee6a9b7 | Open vSwitch agent | junoVHS01.localdomain | :-)   | True           | neutron-openvswitch-agent |
| c1a55d0a-b1b1-461f-bc56-dbac4ef7a538 | L3 agent           | junoVHS01.localdomain | :-)   | True           | neutron-l3-agent          |
| d3847d47-8b08-4f23-aa8c-887ca4534b9f | DHCP agent         | junoVHS01.localdomain | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+-----------------------+-------+----------------+---------------------------+


            

     

   

********************************************************************************
Updates needed only on the Controller (in a 3-node deployment, on the Network Node) :-
********************************************************************************

[root@junoVHS01  network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

[root@junoVHS01 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

The network service was then restarted and NetworkManager disabled, as sketched below.
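
The usual Fedora sequence for that is along these lines (run on the node where br-ex was configured):

# systemctl stop NetworkManager
# systemctl disable NetworkManager
# systemctl enable network
# systemctl restart network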

****************************************************************
OVS_VSCTL SHOW REPORT ON CONTROLLER
****************************************************************

[root@junoVHS01 ~(keystone_admin)]# ovs-vsctl show
14e6125c-c108-4369-b461-4fb2e68c4884
    Bridge br-int
        fail_mode: secure
        Port "qr-bdc3038d-50"
            tag: 2
            Interface "qr-bdc3038d-50"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tapd91e13c6-54"
            tag: 3
            Interface "tapd91e13c6-54"
                type: internal
        Port "tap117fa529-b1"
            tag: 2
            Interface "tap117fa529-b1"
                type: internal
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth0"
            Interface "eth0"
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-0289d92f-ca"
            Interface "qg-0289d92f-ca"
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.3.1-git4750c96"

**********************************************************
OVS_VSCTL SHOW REPORT ON COMPUTE
**********************************************************
[root@junoVHS02 ~]# ovs-vsctl show
2fd00c5e-ac58-460b-8c3e-0fdb36afa8d4
    Bridge br-int
        fail_mode: secure
        Port "qvo6447cf52-0e"
            tag: 1
            Interface "qvo6447cf52-0e"
        Port "qvob88ccbd4-0c"
            tag: 1
            Interface "qvob88ccbd4-0c"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo089db78b-b0"
            tag: 1
            Interface "qvo089db78b-b0"
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.3.1-git4750c96"

References
1. http://unix.stackexchange.com/questions/81834/how-can-i-change-the-default-ens33-network-device-to-old-eth0-on-fedora-19