Sunday, June 29, 2014

Attempt at a straightforward install of Enlightenment 17 on a Fedora 20 Cloud instance

# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm \
dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts \
xcompmgr lxappearance -y

# yum install dbus-x11 -y

# yum install yum-utils -y
# yum-config-manager --enable fmd-testing
# yum -y install enlightenment
 
 

 $ echo "exec /usr/bin/enlightenment_start" >> ~/.xinitrc
 $ startx

Saturday, June 28, 2014

Setup Lightweight X Windows environment (Enlightenment) on Fedora 20 Cloud instance


    Needless to say, setting up a lightweight X environment on Fedora 20 cloud instances is very important for comfortable work in a VM environment; on an Ubuntu Trusty cloud server, for instance, a single command installs the E17 environment: `apt-get install xinit e17 firefox`. For some reason E17 was dropped from the official F20 repos and appears to be functional only after a prior MATE Desktop setup on the VM.

# yum -y groups install "MATE Desktop"
$ echo "exec /usr/bin/mate-session" >> ~/.xinitrc
$ startx
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

VM reboot 

With the MATE desktop installed :-

# yum install yum-utils -y
# yum-config-manager --enable fmd-testing
# yum -y install enlightenment

Attempt at a straightforward install of Enlightenment 17 on a Fedora 20 Cloud instance (testing version)

# yum install xorg-x11-server-Xorg xorg-x11-xdm \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm \
dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts \
xcompmgr lxappearance -y

# yum install dbus-x11 -y

# yum install yum-utils -y
# yum-config-manager --enable fmd-testing
# yum -y install enlightenment



 $ echo "exec /usr/bin/enlightenment_start" >> ~/.xinitrc
 $ startx

[ Screenshot: Enlightenment Desktop ]

[ Screenshot: MATE Desktop ]

[ Screenshot: E17 on Ubuntu Cloud instance ]

Monday, June 23, 2014

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VLAN Cluster on Fedora 20

 Successful implementation of a Neutron ML2&OVS&VLAN multi-node setup requires a correct plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, which appears to be generated with errors by packstack. Several days of playing with plugin.ini allowed me to build a properly working system.

Two boxes have been set up, each having 2 NICs (p37p1, p4p1), for the
Controller && Compute Node setup. Before running
`packstack --answer-file=TwoRealNode Neutron ML2&OVS&VLAN.txt`, SELINUX was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs and set to promiscuous mode (192.168.0.127, 192.168.0.137). The firewalld and NetworkManager services were disabled; the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface p37p1, 192.168.1.127; the Compute Node is 192.168.1.137 (view the answer file).
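
A minimal sketch of that node preparation (the exact commands are an assumption; run on both nodes with the per-node IPs, and the iptables unit comes from the iptables-services package):

# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# systemctl stop firewalld NetworkManager ; systemctl disable firewalld NetworkManager
# systemctl enable network iptables ; systemctl start network iptables
# ip addr add 192.168.0.127/24 dev p4p1   # 192.168.0.137 on the Compute node
# ip link set dev p4p1 promisc on         # second NIC into promiscuous mode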

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VLAN )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


icehouse1.localdomain   -  Controller (192.168.1.127)
icehouse2.localdomain   -  Compute   (192.168.1.137)

Status after the packstack install and updating /etc/neutron/plugin.ini as shown below

[root@icehouse1 neutron]# cat plugin.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
[ovs]
network_vlan_ranges = physnet1:100:200
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = physnet1:br-p4p1
local_ip = 192.168.1.127
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
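
Since bridge_mappings points at br-p4p1, that OVS bridge has to exist with the physical NIC plugged into it; if packstack did not create it, a rough sketch of doing it by hand would be:

# ovs-vsctl add-br br-p4p1
# ovs-vsctl add-port br-p4p1 p4p1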

 Recreating link plugin.ini :-
 
    [root@ip-192-169-142-127 neutron]# ls -l
     total 84
    -rw-r--r--. 1 root root      197 Jun 20 11:18 api-paste.ini
    -rw-r-----. 1 root neutron  3855 Jun 21 08:17 dhcp_agent.ini
    -rw-r--r--. 1 root root      333 Jun 21 13:35 dhcp_agent.out
    -rw-r-----. 1 root neutron   109 Apr 17 15:50 fwaas_driver.ini
    -rw-r-----. 1 root neutron  3431 Jun 20 14:42 l3_agent.ini
    -rw-r-----. 1 root neutron  1400 Apr 17 15:50 lbaas_agent.ini
    -rw-r-----. 1 root neutron   328 Jun 20 14:58 metadata_agent.ini
    -rw-r-----. 1 root neutron 19057 Jun 21 13:47 neutron.conf
    lrwxrwxrwx. 1 root root       37 Jun 21 15:30 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
    drwxr-xr-x. 4 root root     4096 Jun 20 11:18 plugins
    -rw-r-----. 1 root neutron  6148 Apr 17 15:50 policy.json
    -rw-r--r--. 1 root root       80 May 19 19:53 release
    -rw-r--r--. 1 root root     1216 Apr 17 15:50 rootwrap.conf
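
The plugin.ini symlink shown above can be recreated with something along these lines (a sketch, matching the ML2 config path used here):

# cd /etc/neutron
# rm -f plugin.ini
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini plugin.ini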
 
  Restarting Compute and Controller nodes

[root@icehouse1 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api:               failed
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active
 
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 8534ffebeac84b0d80805e02f4b0cc13 |   admin    |   True  |    test@test.com     |
| b5a424c3cc9d4c91a7de069ce68b3361 | ceilometer |   True  | ceilometer@localhost |
| 4845de6370fb46a38894b082634dd5a7 |   cinder   |   True  |   cinder@localhost   |
| db2f21652ba74d4a8b40187c5d58c303 |   glance   |   True  |   glance@localhost   |
| 717fc912609947f4a5a6a96bb734f9ca |  neutron   |   True  |  neutron@localhost   |
| b43f85c05dba4571b2fc84492226e1c8 |    nova    |   True  |    nova@localhost    |
+----------------------------------+------------+---------+----------------------+
 
== Glance images ==
+--------------------------------------+-------------------+-------------+------------------+-----------+--------+
| ID                                   | Name              | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-------------------+-------------+------------------+-----------+--------+
| eb920f3d-3980-4e14-a82b-572990de2e19 | CirrOS32          | qcow2       | bare             | 13167616  | active |
| 5536837a-d650-42d5-82be-19d4f3962f6d | Ubuntu 06/21/2014 | qcow2       | bare             | 254149120 | active |
+--------------------------------------+-------------------+-------------+------------------+-----------+--------+
 
== Nova managed services ==
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | icehouse1.localdomain | internal | enabled | up    | 2014-06-23T11:14:36.000000 | -               |
| nova-scheduler   | icehouse1.localdomain | internal | enabled | up    | 2014-06-23T11:14:36.000000 | -               |
| nova-conductor   | icehouse1.localdomain | internal | enabled | up    | 2014-06-23T11:14:34.000000 | -               |
| nova-cert        | icehouse1.localdomain | internal | enabled | up    | 2014-06-23T11:14:36.000000 | -               |
| nova-compute     | icehouse2.localdomain | nova     | enabled | up    | 2014-06-23T11:14:39.000000 | -               |
+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
 
== Nova networks ==
+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| f4e7f0f5-bdb4-43fe-bfc4-6e16428638ef | private | -    |
| f23bd22c-a755-4119-9911-97980a0bd9ba | public  | -    |
+--------------------------------------+---------+------+
 
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
 
== Nova instances ==
+--------------------------------------+---------------+--------+------------+-------------+----------------------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks                         |
+--------------------------------------+---------------+--------+------------+-------------+----------------------------------+
| 36c1022e-ab79-4709-b8de-ef27b94d2076 | CirrOS325     | ACTIVE | -          | Running     | private=40.0.0.11, 192.168.1.152 |
| d3768f16-f003-4bb5-938e-9505a4518caf | UbuntuSRV0623 | ACTIVE | -          | Running     | private=40.0.0.12, 192.168.1.153 |
+--------------------------------------+---------------+--------+------------+-------------+----------------------------------+
 
[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-06-23 11:14:46
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-06-23 11:14:46
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-06-23 11:14:44
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-06-23 11:14:46
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-06-23 11:14:49
 
[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 4c79ae4c-374a-43a8-a4cd-a839788af56e | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 5c4d05a2-e9e4-47b7-b9ee-ed815e205925 | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
| 6fa0f569-ea7f-4925-b788-b0d70442c9e0 | DHCP agent         | icehouse1.localdomain | :-)   | True           |
| c6fca55b-e9ad-433a-b146-5223b1b4b851 | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| e62f13a6-7d5c-44ac-8a99-6211e62a0c3c | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+


[root@icehouse1 ~(keystone_admin)]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 389
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.03 sec)

MariaDB [(none)]> use neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed



MariaDB [(none)]> SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES where TABLE_SCHEMA = 'neutron';
+------------------------------+--------+
| TABLE_NAME                   | ENGINE |
+------------------------------+--------+
| agents                       | InnoDB |
| alembic_version              | InnoDB |
| allowedaddresspairs          | InnoDB |
| arista_provisioned_nets      | InnoDB |
| arista_provisioned_tenants   | InnoDB |
| arista_provisioned_vms       | InnoDB |
| cisco_ml2_credentials        | InnoDB |
| cisco_ml2_nexusport_bindings | InnoDB |
| consistencyhashes            | InnoDB |
| dnsnameservers               | InnoDB |
| externalnetworks             | InnoDB |
| extradhcpopts                | InnoDB |
| floatingips                  | InnoDB |
| ipallocationpools            | InnoDB |
| ipallocations                | InnoDB |
| ipavailabilityranges         | InnoDB |
| ml2_brocadenetworks          | InnoDB |
| ml2_brocadeports             | InnoDB |
| ml2_flat_allocations         | InnoDB |
| ml2_gre_allocations          | InnoDB |
| ml2_gre_endpoints            | InnoDB |
| ml2_network_segments         | InnoDB |
| ml2_port_bindings            | InnoDB |
| ml2_vlan_allocations         | InnoDB |
| ml2_vxlan_allocations        | InnoDB |
| ml2_vxlan_endpoints          | InnoDB |
| networkdhcpagentbindings     | InnoDB |
| networks                     | InnoDB |
| ports                        | InnoDB |
| quotas                       | InnoDB |
| routerl3agentbindings        | InnoDB |
| routerroutes                 | InnoDB |
| routers                      | InnoDB |
| securitygroupportbindings    | InnoDB |
| securitygrouprules           | InnoDB |
| securitygroups               | InnoDB |
| servicedefinitions           | InnoDB |
| servicetypes                 | InnoDB |
| subnetroutes                 | InnoDB |
| subnets                      | InnoDB |
+------------------------------+--------+
40 rows in set (0.01 sec)


MariaDB [neutron]> select * from ml2_port_bindings ;
+--------------------------------------+-----------------------+----------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+
| port_id                              | host                  | vif_type | driver      | segment                              | vnic_type | vif_details                                    | profile |
+--------------------------------------+-----------------------+----------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+
| 2c664775-624d-4e92-9510-3b95b851f0cc | icehouse2.localdomain | ovs      | openvswitch | 78561388-cad6-43b0-8909-7f34426faf41 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} |         |
| 3073e90e-d8c1-4bc9-9478-aacc5e36672d | icehouse1.localdomain | ovs      | openvswitch | 78561388-cad6-43b0-8909-7f34426faf41 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |
| 32b3bc11-b9d0-4f8c-8489-288c627784be |                       | unbound  | NULL        | NULL                                 | normal    |                                                | {}      |
| 425eedda-772a-411d-8db8-8fae20f22e10 |                       | unbound  | NULL        | NULL                                 | normal    |                                                | {}      |
| 495ba455-4034-4388-ba20-1d36b2c53fc7 | icehouse2.localdomain | ovs      | openvswitch | 78561388-cad6-43b0-8909-7f34426faf41 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} |         |
| 6aa4d544-e29e-436b-801a-72edfe3ab386 |                       | unbound  | NULL        | NULL                                 | normal    |                                                | {}      |
| 8be46650-b3b5-4494-8661-4aba15be0bb6 | icehouse2.localdomain | ovs      | openvswitch | 78561388-cad6-43b0-8909-7f34426faf41 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} |         |
| a55e262f-c878-4b27-8176-8c8ce946fbd5 | icehouse1.localdomain | ovs      | openvswitch | 78561388-cad6-43b0-8909-7f34426faf41 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |
| ce46806f-9693-4baf-9bb0-5f33ac72f9c3 | icehouse1.localdomain | ovs      | openvswitch | 8ce25f91-9f4c-431b-ab3a-2766359cf8e4 | normal    | {"port_filter": true, "ovs_hybrid_plug": true} | {}      |
+--------------------------------------+-----------------------+----------+-------------+--------------------------------------+-----------+------------------------------------------------+---------+
9 rows in set (0.00 sec)
 

MariaDB [neutron]> select * from ml2_network_segments ;
+--------------------------------------+--------------------------------------+--------------+------------------+-----------------+
| id                                   | network_id                           | network_type | physical_network | segmentation_id |
+--------------------------------------+--------------------------------------+--------------+------------------+-----------------+
| 78561388-cad6-43b0-8909-7f34426faf41 | f4e7f0f5-bdb4-43fe-bfc4-6e16428638ef | vlan         | physnet1         |             101 |
| 8ce25f91-9f4c-431b-ab3a-2766359cf8e4 | f23bd22c-a755-4119-9911-97980a0bd9ba | vlan         | physnet1         |             100 |
+--------------------------------------+--------------------------------------+--------------+------------------+-----------------+
2 rows in set (0.00 sec)
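
The two segments above (VLAN 101 for the private network, VLAN 100 for the public one, both on physnet1) sit behind the networks listed earlier; for reference, a tenant network pinned to a specific VLAN could be created roughly like this (illustrative names and IDs, not the exact commands used for this setup):

# neutron net-create private --provider:network_type vlan \
  --provider:physical_network physnet1 --provider:segmentation_id 101
# neutron subnet-create private 40.0.0.0/24 --name private_subnet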
 

   

MATE Setup on Fedora 20 VM

# yum -y groups install "MATE Desktop"
$ echo "exec /usr/bin/mate-session" >> ~/.xinitrc
$ startx
# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

VM reboot 

   
  
   
With the MATE desktop installed :-

# yum-config-manager --enable fmd-testing
# yum -y install enlightenment


Sunday, June 22, 2014

RDO IceHouse Setup Two Node (Controller+Compute) Neutron ML2&OVS&VLAN Cluster on Fedora 20

Two KVMs have been created, each having 2 virtual NICs (eth0, eth1), for the
Controller && Compute Node setup. Before running `packstack --answer-file=TwoNodeML2&OVS&VLAN.txt`, SELINUX was set to permissive on both nodes.
Both eth1 interfaces were assigned IPs from the VLAN libvirt subnet before installation and set
to promiscuous mode (192.168.122.127, 192.168.122.137). Packstack is bound to the
public IP of eth0, 192.169.142.127; the Compute Node is 192.169.142.137.
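
A sketch of the eth1 preparation on the Controller VM (assumed commands; the Compute VM gets 192.168.122.137):

# ip addr add 192.168.122.127/24 dev eth1
# ip link set dev eth1 up
# ip link set dev eth1 promisc on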

The answer file used by packstack is here: http://textuploader.com/k9xo

Two libvirt subnets were created on the F20 KVM server to support the installation:

 Public subnet:        192.169.142.0/24
 VLAN support subnet:  192.168.122.0/24


1. Create a new libvirt network definition file (other than your default 192.168.x.x network):

$ cat openstackvms.xml
 
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6e'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network> 
 
 2. Define the above network:

  $ virsh net-define openstackvms.xml

3. Start the network and enable it for "autostart" 
 
 $ virsh net-start openstackvms
 $ virsh net-autostart openstackvms


4. List your libvirt networks to see if it reflects:

  $ virsh net-list
  Name                 State      Autostart     Persistent
  ----------------------------------------------------------
  default              active     yes           yes
  openstackvms         active     yes           yes


5. Optionally, list your bridge devices:

  $ brctl show
  bridge name     bridge id               STP enabled     interfaces
  virbr0          8000.5254003339b3       yes             virbr0-nic
  virbr1          8000.52540060f86e       yes             virbr1-nic

Status after the packstack install and updating /etc/neutron/plugin.ini as shown below

[root@ip-192-169-142-127 neutron]# cat plugin.ini
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
[ml2_type_vlan]
network_vlan_ranges = physnet1:100:200
[ovs]
network_vlan_ranges = physnet1:100:200
tenant_network_type = vlan
enable_tunneling = False
integration_bridge = br-int
bridge_mappings = physnet1:br-eth1
local_ip = 192.168.122.127
[AGENT]
polling_interval = 2
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

 Recreating link plugin.ini :-
 
    [root@ip-192-169-142-127 neutron]# ls -l
     total 84
    -rw-r--r--. 1 root root      197 Jun 20 11:18 api-paste.ini
    -rw-r-----. 1 root neutron  3855 Jun 21 08:17 dhcp_agent.ini
    -rw-r--r--. 1 root root      333 Jun 21 13:35 dhcp_agent.out
    -rw-r-----. 1 root neutron   109 Apr 17 15:50 fwaas_driver.ini
    -rw-r-----. 1 root neutron  3431 Jun 20 14:42 l3_agent.ini
    -rw-r-----. 1 root neutron  1400 Apr 17 15:50 lbaas_agent.ini
    -rw-r-----. 1 root neutron   328 Jun 20 14:58 metadata_agent.ini
    -rw-r-----. 1 root neutron 19057 Jun 21 13:47 neutron.conf
    lrwxrwxrwx. 1 root root       37 Jun 21 15:30 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
    drwxr-xr-x. 4 root root     4096 Jun 20 11:18 plugins
    -rw-r-----. 1 root neutron  6148 Apr 17 15:50 policy.json
    -rw-r--r--. 1 root root       80 May 19 19:53 release
    -rw-r--r--. 1 root root     1216 Apr 17 15:50 rootwrap.conf
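
After changing plugin.ini and recreating the link, the Neutron services need a restart to pick up the new configuration; a sketch, assuming the unit names shipped with RDO IceHouse:

On the Controller :-
# systemctl restart neutron-server neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
On the Compute node :-
# systemctl restart neutron-openvswitch-agent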

 

[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api:                     active
openstack-nova-cert:                    active
openstack-nova-compute:                 inactive  (disabled on boot)
openstack-nova-network:                 inactive  (disabled on boot)
openstack-nova-scheduler:               active
openstack-nova-volume:                  inactive  (disabled on boot)
openstack-nova-conductor:               active
== Glance services ==
openstack-glance-api:                   active
openstack-glance-registry:              active
== Keystone service ==
openstack-keystone:                     active
== Horizon service ==
openstack-dashboard:                    active
== neutron services ==
neutron-server:                         active
neutron-dhcp-agent:                     active
neutron-l3-agent:                       active
neutron-metadata-agent:                 active
neutron-lbaas-agent:                    inactive  (disabled on boot)
neutron-openvswitch-agent:              active
neutron-linuxbridge-agent:              inactive  (disabled on boot)
neutron-ryu-agent:                      inactive  (disabled on boot)
neutron-nec-agent:                      inactive  (disabled on boot)
neutron-mlnx-agent:                     inactive  (disabled on boot)
== Cinder services ==
openstack-cinder-api:                   active
openstack-cinder-scheduler:             active
openstack-cinder-volume:                active
openstack-cinder-backup:                inactive  (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api:               failed
openstack-ceilometer-central:           active
openstack-ceilometer-compute:           inactive  (disabled on boot)
openstack-ceilometer-collector:         active
openstack-ceilometer-alarm-notifier:    active
openstack-ceilometer-alarm-evaluator:   active
== Support services ==
openvswitch:                            active
dbus:                                   active
tgtd:                                   active
rabbitmq-server:                        active
memcached:                              active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
|                id                |    name    | enabled |        email         |
+----------------------------------+------------+---------+----------------------+
| 42ceb5a601b041f0a5669868dd7f7663 |   admin    |   True  |    test@test.com     |
| d602599e69904691a6094d86f07b6121 | ceilometer |   True  | ceilometer@localhost |
| cc11c36f6e9a4bb7b050db7a380a51db |   cinder   |   True  |   cinder@localhost   |
| c3b1e25936a241bfa63c791346f179fc |   glance   |   True  |   glance@localhost   |
| d2bfcd4e6fc44478899b0a2544df0b00 |  neutron   |   True  |  neutron@localhost   |
| 3d572a8e32b94ac09dd3318cd84fd932 |    nova    |   True  |    nova@localhost    |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| 898a4245-d191-46b8-ac87-e0f1e1873cb1 | CirrOS31        | qcow2       | bare             | 13147648  | active |
| c4647c90-5160-48b1-8b26-dba69381b6fa | Ubuntu 06/18/14 | qcow2       | bare             | 254149120 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                                   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | -               |
| nova-scheduler   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:21.000000 | -               |
| nova-conductor   | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:23.000000 | -               |
| nova-cert        | ip-192-169-142-127.ip.secureserver.net | internal | enabled | up    | 2014-06-22T10:39:20.000000 | -               |
| nova-compute     | ip-192-169-142-137.ip.secureserver.net | nova     | enabled | up    | 2014-06-22T10:39:23.000000 | -               |
+------------------+----------------------------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+---------+------+
| ID                                   | Label   | Cidr |
+--------------------------------------+---------+------+
| 577b7ba7-adad-4051-a03f-787eb8bd55f6 | public  | -    |
| 70298098-a022-4a6b-841f-cef13524d86f | private | -    |
| 7459c84b-b460-4da2-8f24-e0c840be2637 | int     | -    |
+--------------------------------------+---------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny   | 512       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m1.small  | 2048      | 20   | 0         |      | 1     | 1.0         | True      |
| 3  | m1.medium | 4096      | 40   | 0         |      | 2     | 1.0         | True      |
| 4  | m1.large  | 8192      | 80   | 0         |      | 4     | 1.0         | True      |
| 5  | m1.xlarge | 16384     | 160  | 0         |      | 8     | 1.0         | True      |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
| ID                                   | Name        | Status    | Task State | Power State | Networks                           |
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
| 388bbe10-87b2-40e5-a6ee-b87b05116d51 | CirrOS445   | ACTIVE    | -          | Running     | private=30.0.0.14, 192.169.142.155 |
| 4d380c79-3213-45c0-8e4c-cef2dd19836d | UbuntuSRV01 | SUSPENDED | -          | Shutdown    | private=30.0.0.13, 192.169.142.154 |
+--------------------------------------+-------------+-----------+------------+-------------+------------------------------------+
 

[root@ip-192-169-142-127 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-scheduler   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:01
nova-conductor   ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:03
nova-cert        ip-192-169-142-127.ip.secureserver.net internal         enabled    :-)   2014-06-22 10:40:00
nova-compute     ip-192-169-142-137.ip.secureserver.net nova             enabled    :-)   2014-06-22 10:40:03
 

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
| id                                   | agent_type         | host                                   | alive | admin_state_up |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
| 61160392-4c97-4e8f-a902-1e55867e4425 | DHCP agent         | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| 6cd022b9-9eb8-4d1e-9991-01dfe678eba5 | Open vSwitch agent | ip-192-169-142-137.ip.secureserver.net | :-)   | True           |
| 893a1a71-5709-48e9-b1a4-11e02f5eca15 | Metadata agent     | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| bb29c2dc-2db6-487c-a262-32cecf85c608 | L3 agent           | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
| d7456233-53ba-4ae4-8936-3448f6ea9d65 | Open vSwitch agent | ip-192-169-142-127.ip.secureserver.net | :-)   | True           |
+--------------------------------------+--------------------+----------------------------------------+-------+----------------+
 

 

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.169.142.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.169.142.255"
GATEWAY="192.169.142.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSBridge"
DEVICETYPE="ovs"

 

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth0
DEVICE="eth0"
# HWADDR=90:E6:BA:2D:11:EB
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

 

[root@ip-192-169-142-127 network-scripts(keystone_admin)]# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth1
ONBOOT=yes
IPADDR=192.168.122.127
PREFIX=24
# HWADDR=52:54:00:EE:94:93
NM_CONTROLLED=no

 

[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
86e16ac0-c2e6-4eb4-a311-cee56fe86800
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "qg-068e0e7a-95"
            Interface "qg-068e0e7a-95"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-eth1"
        Port "eth1"
            Interface "eth1"
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    Bridge br-int
        Port "qr-16b1ea2b-fc"
            tag: 1
            Interface "qr-16b1ea2b-fc"
                type: internal
        Port "qr-2bb007df-e1"
            tag: 2
            Interface "qr-2bb007df-e1"
                type: internal
        Port "tap1c48d234-23"
            tag: 2
            Interface "tap1c48d234-23"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap26440f58-b0"
            tag: 1
            Interface "tap26440f58-b0"
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
    ovs_version: "2.1.2"

   Checksum offloading was disabled on eth1 of the Compute Node :-

 
[root@ip-192-169-142-137 neutron]# /usr/sbin/ethtool --offload eth1 tx off
Actual changes:
tx-checksumming: off
    tx-checksum-ip-generic: off
tcp-segmentation-offload: off
    tx-tcp-segmentation: off [requested on]
    tx-tcp-ecn-segmentation: off [requested on]
    tx-tcp6-segmentation: off [requested on]
udp-fragmentation-offload: off [requested on]
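
One possible way to keep this setting across reboots (an assumption, not something done in this setup) is to re-run the command from /etc/rc.d/rc.local, which systemd executes at boot once the file is made executable:

# cat >> /etc/rc.d/rc.local << 'EOF'
/usr/sbin/ethtool --offload eth1 tx off
EOF
# chmod +x /etc/rc.d/rc.local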

Friday, June 13, 2014

RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&GRE Cluster on Fedora 20

Finally I've designed an answer file that creates ml2_conf.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini, but plugin.ini -> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini has been created manually, exactly the same as ml2_conf.ini, following http://kashyapc.fedorapeople.org/virt/openstack/rdo/IceHouse-Nova-Neutron-ML2-GRE-OVS.txt
A similar file has been created on the Compute Node.
metadata_agent.ini is the same on the Controller and Compute Nodes.
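
A sketch of what that amounts to on the Controller (the exact commands are an assumption; on the Compute node local_ip has to be 192.168.0.137 instead of 192.168.0.127):

# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
# ln -sf /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini /etc/neutron/plugin.ini
# scp /etc/neutron/metadata_agent.ini 192.168.1.137:/etc/neutron/metadata_agent.ini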

Two boxes have been set up, each having 2 NICs (p37p1, p4p1), for the
Controller && Compute Node setup. Before running
`packstack --answer-file=TwoNodeML2&OVS&GRE.txt`, SELINUX was set to permissive on both nodes. Both p4p1 interfaces were assigned IPs to support the GRE tunnel (192.168.0.127, 192.168.0.137) between the Controller and Compute Nodes. The firewalld and NetworkManager services were disabled (after packstack completion); the IPv4 firewall with iptables and the network service are enabled and running. Packstack is bound to the public IP of interface p37p1, 192.168.1.127; the Compute Node is 192.168.1.137 (view the answer file).
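
A quick sanity check of the GRE endpoint interfaces before the install (assumed commands, run from the Controller):

# ip addr show p4p1 | grep 192.168.0
# ping -c 3 192.168.0.137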

Setup configuration

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && GRE )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


icehouse1.localdomain   -  Controller (192.168.1.127)
icehouse2.localdomain   -  Compute   (192.168.1.137)


********************************
Metadata access verification
********************************

[root@icehouse1 ~(keystone_admin)]# iptables-save | grep 8775
-A INPUT -p tcp -m multiport --dports 8773,8774,8775 -m comment --comment "001 novaapi incoming" -j ACCEPT
-A nova-api-INPUT -d 192.168.1.127/32 -p tcp -m tcp --dport 8775 -j ACCEPT

[root@icehouse1 ~(keystone_admin)]# netstat -antp | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1181/python        

[root@icehouse1 ~(keystone_admin)]# ps -ef| grep 1181
nova      1181     1  0 06:30 ?        00:00:25 /usr/bin/python /usr/bin/nova-api
nova      3478  1181  0 06:31 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3479  1181  0 06:31 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3524  1181  0 06:31 ?        00:00:04 /usr/bin/python /usr/bin/nova-api
nova      3525  1181  0 06:31 ?        00:00:04 /usr/bin/python /usr/bin/nova-api
nova      3549  1181  0 06:31 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3555  1181  0 06:31 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root     11803  4686  0 07:48 pts/0    00:00:00 grep --color=auto 1181

[root@icehouse1 ~(keystone_admin)]# ip netns
qdhcp-8b22b262-c9c1-4138-8092-0581195f0889
qrouter-ecf9ee4e-b92c-4a5b-a884-d753a184764b

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-ecf9ee4e-b92c-4a5b-a884-d753a184764b iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

[root@icehouse1 ~(keystone_admin)]# ip netns exec qrouter-ecf9ee4e-b92c-4a5b-a884-d753a184764b netstat -antp

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3821/python        

[root@icehouse1 ~(keystone_admin)]# ps -ef| grep 3821
root      3821     1  0 06:31 ?        00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/ecf9ee4e-b92c-4a5b-a884-d753a184764b.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=ecf9ee4e-b92c-4a5b-a884-d753a184764b --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-ecf9ee4e-b92c-4a5b-a884-d753a184764b.log --log-dir=/var/log/neutron
root     11908  4686  0 07:50 pts/0    00:00:00 grep --color=auto 3821
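
The same metadata path can also be verified from inside a running instance (a sketch; cloud images usually ship curl or wget):

$ curl http://169.254.169.254/latest/meta-data/instance-id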



***********************************************
Status nova && neutron services after install
***********************************************

[root@icehouse1 ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id                                   | agent_type         | host                  | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| 43fa28fb-46fa-4030-9f25-5da92847754f | Open vSwitch agent | icehouse2.localdomain | :-)   | True           |
| 471ab637-49eb-424b-b63e-3d03539150ac | Open vSwitch agent | icehouse1.localdomain | :-)   | True           |
| 495056c8-bb69-4bb4-b954-2398f49dd57a | Metadata agent     | icehouse1.localdomain | :-)   | True           |
| 76eb528d-2673-4ac2-936f-70157d46c566 | L3 agent           | icehouse1.localdomain | :-)   | True           |
| 8f1b4d6b-81df-4903-8a35-df9250143a8b | DHCP agent         | icehouse1.localdomain | :-)   | True           |
+--------------------------------------+--------------------+-----------------------+-------+----------------+

[root@icehouse1 ~(keystone_admin)]# nova-manage service list
Binary           Host                                 Zone             Status     State Updated_At
nova-consoleauth icehouse1.localdomain                internal         enabled    :-)   2014-06-14 17:44:56
nova-scheduler   icehouse1.localdomain                internal         enabled    :-)   2014-06-14 17:44:56
nova-conductor   icehouse1.localdomain                internal         enabled    :-)   2014-06-14 17:44:47
nova-cert        icehouse1.localdomain                internal         enabled    :-)   2014-06-14 17:44:46
nova-compute     icehouse2.localdomain                nova             enabled    :-)   2014-06-14 17:44:47

******************************************************
Routing tables on Controller && Compute Nodes
******************************************************

[root@icehouse1 ~(keystone_admin)]# route -n


Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 br-ex
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 p37p1
169.254.0.0     0.0.0.0         255.255.0.0     U     1004   0        0 p4p1
169.254.0.0     0.0.0.0         255.255.0.0     U     1018   0        0 br-ex
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 p4p1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br-ex

[root@icehouse1 ~(keystone_admin)]# ssh 192.168.1.137
Last login: Thu Oct  2 16:10:58 2014
[root@icehouse2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 p37p1
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 p37p1
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 p4p1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 p4p1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 p37p1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0


****************************************
Neutron database status after install
****************************************


[root@icehouse1 ~(keystone_admin)]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 1588
Server version: 5.5.36-MariaDB-wsrep MariaDB Server, wsrep_25.9.r3961

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| cinder             |
| glance             |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| performance_schema |
| test               |
+--------------------+
9 rows in set (0.00 sec)

MariaDB [(none)]> use neutron ;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [neutron]> show tables ;
+------------------------------+
| Tables_in_neutron            |
+------------------------------+
| agents                       |
| alembic_version              |
| allowedaddresspairs          |
| arista_provisioned_nets      |
| arista_provisioned_tenants   |
| arista_provisioned_vms       |
| cisco_ml2_credentials        |
| cisco_ml2_nexusport_bindings |
| consistencyhashes            |
| dnsnameservers               |
| externalnetworks             |
| extradhcpopts                |
| floatingips                  |
| ipallocationpools            |
| ipallocations                |
| ipavailabilityranges         |
| ml2_brocadenetworks          |
| ml2_brocadeports             |
| ml2_flat_allocations         |
| ml2_gre_allocations          |
| ml2_gre_endpoints            |
| ml2_network_segments         |
| ml2_port_bindings            |
| ml2_vlan_allocations         |
| ml2_vxlan_allocations        |
| ml2_vxlan_endpoints          |
| networkdhcpagentbindings     |
| networks                     |
| ports                        |
| quotas                       |
| routerl3agentbindings        |
| routerroutes                 |
| routers                      |
| securitygroupportbindings    |
| securitygrouprules           |
| securitygroups               |
| servicedefinitions           |
| servicetypes                 |
| subnetroutes                 |
| subnets                      |
+------------------------------+
40 rows in set (0.00 sec)


*******************************************************************************
The system is completely functional; however, packstack picked up several undesired GRE endpoints, which show up in `ovs-vsctl show` reports
*******************************************************************************
The unneeded GRE endpoint is removed by deleting 1 record from the
ml2_gre_endpoints table in the database
*******************************************************************************


MariaDB [neutron]> select * from ml2_gre_endpoints ;
+---------------+
| ip_address    |
+---------------+
| 192.168.1.137 |
| 192.168.0.127 |
| 192.168.0.137 |
+---------------+
3 rows in set (0.00 sec)

MariaDB [neutron]> delete from ml2_gre_endpoints where ip_address='192.168.1.137' ;
Query OK, 1 row affected (0.01 sec)

MariaDB [neutron]> select * from ml2_gre_endpoints ;
+---------------+
| ip_address    |
+---------------+
| 192.168.0.127 |
| 192.168.0.137 |
+---------------+
2 rows in set (0.00 sec)

MariaDB [neutron]> quit
 

Restart the neutron-openvswitch-agent service on both nodes :-
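
On Fedora 20 with systemd that is roughly (same command on icehouse1 and icehouse2):

# systemctl restart neutron-openvswitch-agent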


 [root@icehouse1 neutron(keystone_admin)]# ls -l
total 72
-rw-r--r--. 1 root root      193 Sep 30 17:08 api-paste.ini
-rw-r-----. 1 root neutron  3901 Sep 30 19:19 dhcp_agent.ini
-rw-r--r--. 1 root root       86 Sep 30 19:20 dnsmasq.conf
-rw-r-----. 1 root neutron   208 Sep 30 17:08 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Sep 30 17:08 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Aug  8 02:56 lbaas_agent.ini
-rw-r-----. 1 root neutron  1863 Sep 30 17:08 metadata_agent.ini
lrwxrwxrwx. 1 root root       37 Sep 30 18:41 ml2_conf.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r-----. 1 root neutron 19187 Sep 30 17:08 neutron.conf
lrwxrwxrwx. 1 root root       55 Sep 30 18:40 plugin.ini -> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
-rw-r--r--. 1 root root      211 Sep 30 17:30 plugin.out
drwxr-xr-x. 4 root root     4096 Sep 30 17:08 plugins
-rw-r-----. 1 root neutron  6148 Aug  8 02:56 policy.json
-rw-r--r--. 1 root root       79 Aug 11 15:27 release
-rw-r--r--. 1 root root     1216 Aug  8 02:56 rootwrap.conf

[root@icehouse1 neutron(keystone_admin)]# cat ml2_conf.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.0.127
[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[root@icehouse1 neutron(keystone_admin)]# cat plugin.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.0.127
[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf




On Controller:-

 [root@icehouse1 ~(keystone_admin)]# ovs-vsctl show
50a2dcb7-9502-4c08-b175-563eec368db9
    Bridge br-int
        Port "qr-19f312c1-cb"
            tag: 1
            Interface "qr-19f312c1-cb"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap707ec6ff-71"
            tag: 1
            Interface "tap707ec6ff-71"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Port "gre-c0a80189"
            Interface "gre-c0a80189"
                type: gre
                options: {in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-908c1363-66"
            Interface "qg-908c1363-66"
                type: internal
        Port "p37p1"
            Interface "p37p1"
    ovs_version: "2.1.2"

On Compute:-

[root@icehouse1 ~(keystone_admin)]# ssh 192.168.1.137
Last login: Sat Jun 14 12:47:57 2014
[root@icehouse2 ~]# ovs-vsctl show
bd17e782-fc1b-4c75-8a9a-0bd11ca90dbc
    Bridge br-int
        Port "qvo1e52ffe0-c9"
            tag: 1
            Interface "qvo1e52ffe0-c9"
        Port "qvo897b91ae-71"
            tag: 1
            Interface "qvo897b91ae-71"
        Port "qvo67962cf3-c8"
            tag: 1
            Interface "qvo67962cf3-c8"
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo7e0bdbb7-4e"
            tag: 1
            Interface "qvo7e0bdbb7-4e"
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-c0a8017f"
            Interface "gre-c0a8017f"
                type: gre
                options: {in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
    ovs_version: "2.1.2"


 [root@icehouse1 ~(keystone_admin)]# ovs-ofctl show br-tun && ovs-ofctl dump-flows br-tun

OFPT_FEATURES_REPLY (xid=0x2): dpid:00001ecc77fbb64c
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(patch-int): addr:0a:f9:4e:af:fe:c6
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(gre-c0a80089): addr:32:c5:59:d7:4c:8b
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:1e:cc:77:fb:b6:4c
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=15350.220s, table=0, n_packets=0, n_bytes=0, idle_age=15350, priority=0 actions=drop
 cookie=0x0, duration=15350.290s, table=0, n_packets=712066, n_bytes=983698886, idle_age=62, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=13862.653s, table=0, n_packets=428887, n_bytes=34296128, idle_age=63, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=15350.131s, table=1, n_packets=712019, n_bytes=983695552, idle_age=62, priority=1,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=15350.025s, table=1, n_packets=47, n_bytes=3334, idle_age=9071, priority=1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=15349.909s, table=2, n_packets=0, n_bytes=0, idle_age=15349, priority=0 actions=drop
 cookie=0x0, duration=13583.119s, table=2, n_packets=360519, n_bytes=28901782, idle_age=9071, priority=1,tun_id=0x4 actions=mod_vlan_vid:3,resubmit(,10)
 cookie=0x0, duration=15346.715s, table=2, n_packets=68542, n_bytes=5413601, idle_age=63, priority=1,tun_id=0x3 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=15345.408s, table=2, n_packets=0, n_bytes=0, idle_age=15345, priority=1,tun_id=0x2 actions=mod_vlan_vid:2,resubmit(,10)
 cookie=0x0, duration=15349.797s, table=3, n_packets=0, n_bytes=0, idle_age=15349, priority=0 actions=drop
 cookie=0x0, duration=15349.663s, table=10, n_packets=429061, n_bytes=34315383, idle_age=63, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=15349.575s, table=20, n_packets=2, n_bytes=204, idle_age=752, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=752.794s, table=20, n_packets=25787, n_bytes=34340181, hard_timeout=300, idle_age=62, hard_age=62, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:00:5e:64 actions=load:0->NXM_OF_VLAN_TCI[],load:0x3->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=15349.503s, table=21, n_packets=28, n_bytes=2084, idle_age=13454, priority=0 actions=drop
 cookie=0x0, duration=13583.174s, table=21, n_packets=10, n_bytes=656, idle_age=9071, dl_vlan=3 actions=strip_vlan,set_tunnel:0x4,output:2
 cookie=0x0, duration=15345.489s, table=21, n_packets=3, n_bytes=210, idle_age=15337, hard_age=13862, dl_vlan=2 actions=strip_vlan,set_tunnel:0x2,output:2
 cookie=0x0, duration=15346.806s, table=21, n_packets=7, n_bytes=498, idle_age=752, hard_age=13862, dl_vlan=1 actions=strip_vlan,set_tunnel:0x3,output:2


Sample configuration files :-

[root@icehouse1 neutron(keystone_admin)]# cat neutron.conf
[DEFAULT]
verbose = True
debug = False
use_syslog = False
log_dir =/var/log/neutron
bind_host = 0.0.0.0
bind_port = 9696
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
base_mac = fa:16:3e:00:00:00
mac_generation_retries = 16
dhcp_lease_duration = 86400
allow_bulk = True
allow_pagination = False
allow_sorting = False
allow_overlapping_ips = True
rpc_backend = neutron.openstack.common.rpc.impl_kombu
control_exchange = neutron
rabbit_host = 192.168.0.127
rabbit_password = guest
rabbit_port = 5672
rabbit_hosts = 192.168.1.127:5672
rabbit_userid = guest
rabbit_virtual_host = /
rabbit_ha_queues = False
agent_down_time = 75
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
dhcp_agents_per_network = 1
api_workers = 0
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.168.1.127:8774/v2
nova_region_name =RegionOne
nova_admin_username =nova
nova_admin_tenant_id =f4e7985ae16d4fac9166b41c394614af
nova_admin_password =aaf8cf4c60224150
nova_admin_auth_url =http://192.168.1.127:35357/v2.0
send_events_interval = 2
[quotas]
[agent]
root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
report_interval = 30
[keystone_authtoken]
auth_host = 192.168.1.127
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = neutron
admin_password = 5f11f559abc94440
auth_uri=http://192.168.1.127:5000/
[database]
connection = mysql://neutron:0302dcfeb69e439f@192.168.1.127/neutron
max_retries = 10
retry_interval = 10
idle_timeout = 3600
[service_providers]
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

[root@icehouse1 neutron(keystone_admin)]# cat plugin.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.0.127
[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf