Friday, June 24, 2016

TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots

================
UPDATE 09/03/2016
================
The undercloud VM now gets created with autostart enabled at boot.
So after a host reboot just fix the permissions as described below and
allow the services on the undercloud to start (about 5-7 min).

Upon deployment completion:
[stack@ServerTQS72 ~]$ virsh dominfo undercloud | grep -i autostart
Autostart:      enable
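
If Autostart were still reported as disabled, it can be turned on with a
standard virsh command (a minimal sketch):

[stack@ServerTQS72 ~]$ virsh autostart undercloud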

================
UPDATE 08/18/2016
================
Make the following updates:

[root@ServerTQS72 ~]# cat /etc/rc.d/rc.local
#!/bin/bash
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
mkdir -p /run/user/1001
chown -R stack /run/user/1001
chgrp -R stack /run/user/1001

touch /var/lock/subsys/local
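
As the comment in the file itself says, rc.local only runs at boot when it is
executable; a quick check (minimal sketch):

[root@ServerTQS72 ~]# chmod +x /etc/rc.d/rc.local
[root@ServerTQS72 ~]# systemctl status rc-local.service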

========================
In stack's .bashrc
========================

[stack@ServerTQS72 ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
# BEGIN ANSIBLE MANAGED BLOCK
# Make sure XDG_RUNTIME_DIR is set (used by libvirt
# for creating config and sockets for qemu:///session
# connections)
: ${XDG_RUNTIME_DIR:=/run/user/$(id -u)}
export XDG_RUNTIME_DIR
export DISPLAY=:0.0
export NO_AT_BRIDGE=1

# END ANSIBLE MANAGED BLOCK
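
A quick sanity check (minimal sketch) that the session libvirt connection
works for stack after these changes:

[stack@ServerTQS72 ~]$ echo $XDG_RUNTIME_DIR        # expected: /run/user/1001
[stack@ServerTQS72 ~]$ virsh --connect qemu:///session list --all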

===========================
Reboot VIRTHOST
===========================

$ sudo su -
# xhost +
# su - stack

[stack@ServerTQS72 ~]$ virt-manager --connect qemu:///session

Start VM undercloud



=============
END UPDATE
=============


This post follows up http://lxer.com/module/newswire/view/230814/index.html
and may work as a time saver, unless the status of undercloud.qcow2 per
http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/
requires a fresh installation from scratch.
The intent is to survive a VIRTHOST cold reboot (downtime) and keep the previous undercloud VM, so that it can be brought up again without rebuilding via quickstart.sh; the restart procedure then becomes logging into the undercloud and immediately running the overcloud deployment. Proceed as follows :-

1. Before the system shutdown, cleanly delete the overcloud stack:
    [stack@undercloud ~]$ openstack stack delete overcloud
2. Log into the VIRTHOST as stack and gracefully shut down the undercloud:
    [stack@ServerCentOS72 ~]$ virsh shutdown undercloud
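
Before powering down the VIRTHOST it may be worth waiting until the domain
actually reports "shut off" (a minimal sketch):

[stack@ServerCentOS72 ~]$ while virsh domstate undercloud | grep -q running; do sleep 10; done
[stack@ServerCentOS72 ~]$ virsh list --all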

  

**************************************
 Shutdown and bring up VIRTHOST
**************************************

 Login as root to VIRTHOST :-

[boris@ServerCentOS72 ~]$ sudo su -
[sudo] password for boris:
Last login: Fri Jun 24 16:47:25 MSK 2016 on pts/0

********************************************************************************
This is the core step: do not create /run/user/1001/libvirt as root and
then set permissions on it; just set the correct permissions on /run/user.
This allows "stack" to issue `virsh list --all` and create
/run/user/1001/libvirt by himself. The rest works fine for me.
********************************************************************************

[root@ServerCentOS72 ~]# chown -R stack /run/user
[root@ServerCentOS72 ~]# chgrp -R stack /run/user

[root@ServerCentOS72 ~]# ls -ld  /run/user
drwxr-xr-x. 3 stack stack 60 Jun 24 20:01 /run/user

[root@ServerCentOS72 ~]# su - stack
Last login: Fri Jun 24 16:48:09 MSK 2016 on pts/0

[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off
 -     undercloud                     shut off

**********************
Make sure :-
**********************

[stack@ServerCentOS72 ~]$ ls -ld /run/user/1001/libvirt
drwx------. 6 stack stack 160 Jun 24 21:38 /run/user/1001/libvirt


[stack@ServerCentOS72 ~]$ virsh start undercloud
Domain undercloud started

[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     undercloud                     running
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off

Wait about 5 minutes and access the undercloud from the workstation:
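
Instead of a fixed wait, it is also possible to poll until sshd on the
undercloud answers (a minimal sketch using the same ssh config):

[boris@fedora22wks tripleo-quickstart]$ until ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud true 2>/dev/null; do sleep 20; done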

[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.1.75' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Fri Jun 24 15:34:40 2016 from gateway

[stack@undercloud ~]$ ls -l
total 1640244
-rw-rw-r--. 1 stack stack   13287936 Jun 24 13:10 cirros.img
-rw-rw-r--. 1 stack stack    3740163 Jun 24 13:10 cirros.initramfs
-rw-rw-r--. 1 stack stack    4979632 Jun 24 13:10 cirros.kernel
-rw-rw-r--. 1  1001  1001      21769 Jun 24 11:56 instackenv.json
-rw-r--r--. 1 root  root   385824684 Jun 24 03:28 ironic-python-agent.initramfs
-rwxr-xr-x. 1 root  root     5158704 Jun 24 03:28 ironic-python-agent.kernel
-rwxr-xr-x. 1 stack stack        487 Jun 24 12:17 network-environment.yaml
-rwxr-xr-x. 1 stack stack        792 Jun 24 12:17 overcloud-deploy-post.sh
-rwxr-xr-x. 1 stack stack       2284 Jun 24 12:17 overcloud-deploy.sh
-rw-rw-r--. 1 stack stack       4324 Jun 24 13:50 overcloud-env.json
-rw-r--r--. 1 root  root    36478203 Jun 24 03:28 overcloud-full.initrd
-rw-r--r--. 1 root  root  1224070144 Jun 24 03:29 overcloud-full.qcow2
-rwxr-xr-x. 1 root  root     5158704 Jun 24 03:29 overcloud-full.vmlinuz
-rw-rw-r--. 1 stack stack        389 Jun 24 14:28 overcloudrc
-rwxr-xr-x. 1 stack stack       3374 Jun 24 12:17 overcloud-validate.sh
-rwxr-xr-x. 1 stack stack        284 Jun 24 12:17 run-tempest.sh
-rw-r--r--. 1 stack stack        161 Jun 24 12:17 skipfile
-rw-------. 1 stack stack        287 Jun 24 12:16 stackrc
-rw-rw-r--. 1 stack stack        232 Jun 24 14:28 tempest-deployer-input.conf
drwxrwxr-x. 9 stack stack       4096 Jun 24 15:23 tripleo-ci
-rw-rw-r--. 1 stack stack       1123 Jun 24 14:28 tripleo-overcloud-passwords
-rw-------. 1 stack stack       6559 Jun 24 11:59 undercloud.conf
-rw-rw-r--. 1 stack stack     782405 Jun 24 12:16 undercloud_install.log
-rwxr-xr-x. 1 stack stack         83 Jun 24 12:00 undercloud-install.sh
-rw-rw-r--. 1 stack stack       1579 Jun 24 12:00 undercloud-passwords.conf
-rw-rw-r--. 1 stack stack       7699 Jun 24 12:17 undercloud_post_install.log
-rwxr-xr-x. 1 stack stack       2780 Jun 24 12:00 undercloud-post-install.sh

[stack@undercloud ~]$ ./overcloud-deploy.sh
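
While overcloud-deploy.sh runs, progress can be followed from another shell on
the undercloud (a minimal sketch; heat commands as available in Mitaka):

[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ heat stack-list
[stack@undercloud ~]$ heat resource-list overcloud | grep -v COMPLETE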

  
  

  Fourth redeployment based on the same undercloud VM. The starting point of
  the ctlplane DHCP pool is obviously increasing with each redeployment.
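
A quick way to inspect the ctlplane pool after several redeployments (a minimal
sketch; the subnet id has to be taken from the net-list output):

[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ neutron net-list
[stack@undercloud ~]$ neutron subnet-show <ctlplane-subnet-id>   # inspect allocation_pools
[stack@undercloud ~]$ neutron port-list                          # addresses currently allocated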



   Libvirt's pool && volumes configuration as built by QuickStart:


[stack@ServerCentOS72 ~]$  virsh pool-dumpxml oooq_pool
<pool type='dir'>
  <name>oooq_pool</name>
  <uuid>dcf7f52b-e7f7-46aa-aa67-591afe598804</uuid>
  <capacity unit='bytes'>257572208640</capacity>
  <allocation unit='bytes'>85467271168</allocation>
  <available unit='bytes'>172104937472</available>
  <source>
  </source>
  <target>
    <path>/home/stack/.quickstart/pool</path>
    <permissions>
      <mode>0775</mode>
      <owner>1001</owner>
      <group>1001</group>
      <label>unconfined_u:object_r:user_home_t:s0</label>
    </permissions>
  </target>
</pool>
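
The volumes QuickStart created in that pool can be listed with a standard
virsh command:

[stack@ServerCentOS72 ~]$ virsh vol-list oooq_pool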
 
***************************************************************************
A slightly different way to manage this: log in as stack and invoke virt-manager
via `virt-manager --connect qemu:///session` once /run/user already has
the correct permissions.
***************************************************************************
$ sudo su -
# chown -R stack /run/user
# chgrp -R stack /run/user
^D
[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off
 -     undercloud                     shut off

[stack@ServerCentOS72 ~]$ virt-manager --connect qemu:///session
[stack@ServerCentOS72 ~]$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     undercloud                     running
 -     compute_0                      shut off
 -     compute_1                      shut off
 -     control_0                      shut off
 -     control_1                      shut off
 -     control_2                      shut off

   To start virt-manager without warning :-

  


From workstation connect to undercloud
[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
[stack@undercloud~] ./overcloud-deploy.sh
In several minutes you will see
  
  

[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 40754e8a-461e-4328-b0c4-6740c71e9a0d | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.27 |
| df272524-a0bd-4ed7-b95c-92ac779c0b96 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.26 |
| 22802ff4-c472-4500-94d7-415c429073ab | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.29 |
| e79a8967-5c81-4ce1-9037-4e07b298d779 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.25 |
| 27a7c6ac-a480-4945-b4d5-72e32b3c1886 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.28 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ ssh heat-admin@192.0.2.27
Last login: Sat Jun 25 09:35:35 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Sat Jun 25 09:54:06 UTC 2016 on pts/0
[root@overcloud-controller-0 ~]# .  keystonerc_admin
[root@overcloud-controller-0 ~(keystone_admin)]# pcs status
Cluster name: tripleo_cluster
Last updated: Sat Jun 25 10:04:32 2016        Last change: Sat Jun 25 09:21:21 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-192.0.2.24    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed Actions:
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=92, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:45 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=355, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:10 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=313, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:20:51 2016', queued=0ms, exec=2101ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=328, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:23:05 2016', queued=0ms, exec=2121ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=97, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=365, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:12 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=324, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:32 2016', queued=0ms, exec=2237ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=342, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:23:32 2016', queued=0ms, exec=2200ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=94, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:16:47 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=353, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 10:00:08 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=318, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:39 2016', queued=0ms, exec=2113ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=322, status=complete, exitreason='none',
    last-rc-change='Sat Jun 25 09:22:48 2016', queued=0ms, exec=2123ms



PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Saturday, June 18, 2016

RDO TripleO QuickStart HA Setup - Work in progress

  This post follows up https://www.linux.com/blog/rdo-triple0-quickstart-ha-setup-intel-core-i7-4790-desktop
 In the meantime, undercloud-install and undercloud-post-install (openstack undercloud install, openstack overcloud image upload) are now performed during the original run of `bash quickstart.sh --config /path-to/ha.yml $VIRTHOST`. The Neutron networks deployment on the undercloud and the HA servers' configuration have been significantly rebuilt over the last weeks. I believe the current design is close to the one proposed in https://remote-lab.net/rdo-manager-ha-openstack-deployment
However, an attempt to reproduce http://docs.openstack.org/developer/tripleo-docs/installation/installation.html
results in a hang on `openstack undercloud install` when it attempts to start
openstack-nova-compute on the undercloud. nova-compute.log reports a failure
to connect to 127.0.0.1:5672, while `netstat -antp | grep 5672` shows
port 5672 bound only to 192.0.2.1 (the ctlplane IP address).
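
A minimal diagnostic sketch for that symptom on the undercloud node (standard
tools only):

[stack@undercloud ~]$ sudo netstat -antp | grep 5672                          # rabbit bound only to 192.0.2.1
[stack@undercloud ~]$ sudo grep -i rabbit /etc/nova/nova.conf | grep -v '^#'  # which host nova-compute tries to reach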

See also https://www.redhat.com/archives/rdo-list/2016-March/msg00171.html
Quoting (the complaints are not mine) :-
By the way, I'd love to see and help to have an complete installation guide for TripleO powered by RDO on the RDO site (the instack virt setup without quickstart . . . . 

*****************************
Start on workstation :-
*****************************
$ git clone https://github.com/openstack/tripleo-quickstart
$ cd tripleo-quickstart
$ sudo bash quickstart.sh --install-deps
$ sudo yum -y  install redhat-rpm-config
$ export VIRTHOST=192.168.1.75 #put your own IP here
$ ssh-keygen
$ ssh-copy-id root@$VIRTHOST
$ ssh root@$VIRTHOST uname -a # should run without a password prompt

######################
# Template code
######################
compute_memory: 6144
compute_vcpu: 1

undercloud_memory: 8192

# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4

# Create three controller nodes and one compute node.
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control

  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute

# We don't need introspection in a virtual environment (because we are
# creating all the "hardware" ourselves, we already know the necessary
# information).
introspect: false

# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3 --compute-scale 2 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
  --ntp-server pool.ntp.org
deploy_timeout: 75
tempest: false
pingtest: true

***********************************************
Then run under tripleo-quickstart
***********************************************
$ bash quickstart.sh --config ./config/general_config/ha.yml  $VIRTHOST

During this run the most important thing is to reach this point on the VIRTHOST:

[root@ServerCentOS72 ~]# cd /var/cache/tripleo-quickstart/images
[root@ServerCentOS72 images]# ls -l
total 2638232
-rw-rw-r--. 1 stack stack 2701548544 Jun 17 19:25 83e62624dd7bd637dada343bbf4fe8f1.qcow2
lrwxrwxrwx. 1 stack stack         75 Jun 17 19:25 latest-undercloud.qcow2 -> /var/cache/tripleo-quickstart/images/83e62624dd7bd637dada343bbf4fe8f1.qcow2

Saturday 18 June 2016  12:07:05 +0300 (0:00:00.124)       0:26:21.276 *********
===============================================================================
 tripleo/undercloud : Install the undercloud -------------------------- 1155.95s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/install-undercloud.yml:1 
setup/undercloud : Get undercloud vm ip address ------------------------ 81.26s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:173 
setup/undercloud : Resize undercloud image (call virt-resize) ---------- 76.39s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:122 
tripleo/undercloud : Prepare the undercloud for deploy ----------------- 70.15s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/post-install.yml:27 
setup/undercloud : Upload undercloud volume to storage pool ------------ 53.20s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:142 
setup/undercloud : Copy instackenv.json to appliance ------------------- 35.25s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:53
setup/undercloud : Get qcow2 image from cache -------------------------- 32.77s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:144 
setup/undercloud : Inject undercloud ssh public key to appliance -------- 7.07s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:72 
setup ------------------------------------------------------------------- 6.68s
None --------------------------------------------------------------------------
setup/undercloud : Perform selinux relabel on undercloud image ---------- 3.47s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:94
environment/teardown : Check if libvirt is available -------------------- 1.99s
/home/boris/tripleo-quickstart/roles/environment/teardown/tasks/main.yml:8 ----
setup ------------------------------------------------------------------- 1.92s
/home/boris/.quickstart/playbooks/provision.yml:29 ----------------------------
setup ------------------------------------------------------------------- 1.90s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.81s
None --------------------------------------------------------------------------
parts/libvirt : Install packages for libvirt ---------------------------- 1.78s
/home/boris/tripleo-quickstart/roles/parts/libvirt/tasks/main.yml:5 -----------
setup/overcloud : Create overcloud vm storage --------------------------- 1.57s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:55 
setup/overcloud : Define overcloud vms ---------------------------------- 1.48s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:67 
provision/teardown : Remove non-root user account ----------------------- 1.41s
/home/boris/tripleo-quickstart/roles/provision/teardown/tasks/main.yml:47 -----
provision/teardown : Wait for processes to exit ------------------------- 1.41s
/home/boris/tripleo-quickstart/roles/provision/teardown/tasks/main.yml:27 -----
environment/teardown : Stop libvirt networks ---------------------------- 1.35s
/home/boris/tripleo-quickstart/roles/environment/teardown/tasks/main.yml:29 ---
+ set +x
##################################
Virtual Environment Setup Complete
##################################

Access the undercloud by:

    ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud

There are scripts in the home directory to continue the deploy:

    overcloud-deploy.sh will deploy the overcloud

   The detailed syntax of `openstack overcloud deploy --templates ... ` is
   captured below; compare with https://remote-lab.net/rdo-manager-ha-openstack-deployment

  $ openstack overcloud deploy --control-scale 3 --compute-scale 2  \
  --libvirt-type qemu --ntp-server pool.ntp.org --templates ~/the-cloud/  \
  -e ~/the-cloud/environments/puppet-pacemaker.yaml  \
  -e ~/the-cloud/environments/network-isolation.yaml  \
  -e ~/the-cloud/environments/net-single-nic-with-vlans.yaml  \
  -e ~/the-cloud/environments/network-environment.yaml
  


    overcloud-deploy-post.sh will do any post-deploy configuration
    overcloud-validate.sh will run post-deploy validation

Alternatively, you can ignore these scripts and follow the upstream docs,
starting from the overcloud deploy section:

    http://ow.ly/1Vc1301iBlb

Then run the 3 scripts mentioned above.
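
For example, a minimal way to run them in sequence (the script names are the
ones printed by the quickstart summary above):

[stack@undercloud ~]$ ./overcloud-deploy.sh && ./overcloud-deploy-post.sh && ./overcloud-validate.sh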

[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ heat stack-list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 356243b1-a071-45c8-8083-85b9a12532c6 | overcloud  | CREATE_COMPLETE | 2016-06-18T09:09:40 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+

[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ neutron net-list
+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| cde382ae-a7fa-4ebb-bbdc-9e2af9c0df83 | external     | 42fac214-7177-4b4f-8778-105015ed30da   |
|                                      |              | 10.0.0.0/24                            |
| 5fc97bca-fa67-4ede-b4d3-8234c0ace5e5 | storage_mgmt | 719f9a19-2f1d-4eed-914a-430468086f10   |
|                                      |              | 172.16.3.0/24                          |
| 4236d358-b4cd-4fb9-a337-f8a421bb13cd | tenant       | d6f1e772-c0a1-4869-a9bc-b551faf5be8e   |
|                                      |              | 172.16.0.0/24                          |
| a4155b70-a4d8-41bf-bbe6-a5f4e248c5ad | ctlplane     | 199a8e99-d9c7-43f2-8ccd-6a59b8424362   |
|                                      |              | 192.0.2.0/24                           |
| fae53fb0-c5da-427f-b473-bfaa0ab21877 | internal_api | 5f2ff369-1000-4361-8131-b0ae69821b9f   |
|                                      |              | 172.16.2.0/24                          |
| 41862220-b9e6-4000-8341-9fbdb34b47f5 | storage      | d0cf1cac-f841-41dd-923d-47d164c07d0f   |
|                                      |              | 172.16.1.0/24                          |
+--------------------------------------+--------------+----------------------------------------+

[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.4:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.0.0.4,192.0.2.6
export OS_PASSWORD=gdjYmYMdB6aWX8PjBUWdCHkem
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+


 
 


[stack@undercloud ~]$ ssh heat-admin@192.0.2.11
The authenticity of host '192.0.2.11 (192.0.2.11)' can't be established.
ECDSA key fingerprint is 74:99:da:b1:c8:ac:58:e6:65:c1:51:45:64:e4:e9:ed.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.0.2.11' (ECDSA) to the list of known hosts.
Last login: Sat Jun 18 09:52:37 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# vi keystonerc_admin
[root@overcloud-controller-0 ~]# .  keystonerc_admin
[root@overcloud-controller-0 ~(keystone_admin)]# psc status
-bash: psc: command not found
[root@overcloud-controller-0 ~(keystone_admin)]# pcs  status
Cluster name: tripleo_cluster
Last updated: Sat Jun 18 10:01:58 2016        Last change: Sat Jun 18 09:49:22 2016 by root via cibadmin on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:
 ip-192.0.2.6    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-1
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-1 ]
     Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Failed Actions:
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=95, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:44:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=331, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:56:44 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=335, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:50:53 2016', queued=0ms, exec=2099ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=339, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:51:17 2016', queued=0ms, exec=2117ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=96, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:44:40 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=332, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:56:42 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=339, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:51:13 2016', queued=0ms, exec=2145ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=341, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:51:28 2016', queued=0ms, exec=2147ms
* openstack-aodh-evaluator_start_0 on overcloud-controller-2 'not running' (7): call=368, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:53:18 2016', queued=0ms, exec=2107ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=321, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:56:46 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=326, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:51:06 2016', queued=0ms, exec=2185ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=378, status=complete, exitreason='none',
    last-rc-change='Sat Jun 18 09:54:14 2016', queued=1ms, exec=2116ms

PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

[root@overcloud-controller-0 ~(keystone_admin)]# ovs-vsctl show
8fea5ee4-62cf-4767-96c8-d9867cab9972
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-ac100004"
            Interface "vxlan-ac100004"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.4"}
        Port "vxlan-ac100005"
            Interface "vxlan-ac100005"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.5"}
        Port "vxlan-ac100008"
            Interface "vxlan-ac100008"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.8"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-ac100007"
            Interface "vxlan-ac100007"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.7"}
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "vlan40"
            tag: 40
            Interface "vlan40"
                type: internal
        Port "vlan50"
            tag: 50
            Interface "vlan50"
                type: internal
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
    ovs_version: "2.5.0"

[root@overcloud-controller-0 ~(keystone_admin)]# ifconfig
br-ex: flags=4163  mtu 1500
        inet 192.0.2.11  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20
        ether 00:50:dc:cf:b7:d5  txqueuelen 0  (Ethernet)
        RX packets 15254  bytes 29305270 (27.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15111  bytes 2037368 (1.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
        inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20
        ether 00:50:dc:cf:b7:d5  txqueuelen 1000  (Ethernet)
        RX packets 554865  bytes 314056269 (299.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 537763  bytes 196316938 (187.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 128951  bytes 42842317 (40.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 128951  bytes 42842317 (40.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163  mtu 1500
        inet 10.0.0.6  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::2cf7:9cff:fe98:df2e  prefixlen 64  scopeid 0x20
        ether 2e:f7:9c:98:df:2e  txqueuelen 0  (Ethernet)
        RX packets 1563  bytes 22172141 (21.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 935  bytes 339459 (331.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan20: flags=4163  mtu 1500
        inet 172.16.2.9  netmask 255.255.255.0  broadcast 172.16.2.255
        inet6 fe80::9c4a:96ff:fe42:f562  prefixlen 64  scopeid 0x20
        ether 9e:4a:96:42:f5:62  txqueuelen 0  (Ethernet)
        RX packets 515281  bytes 202417994 (193.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 498334  bytes 112312907 (107.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan30: flags=4163  mtu 1500
        inet 172.16.1.5  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::8cbe:80ff:fe80:7945  prefixlen 64  scopeid 0x20
        ether 8e:be:80:80:79:45  txqueuelen 0  (Ethernet)
        RX packets 20275  bytes 45196003 (43.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20405  bytes 52618634 (50.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan40: flags=4163  mtu 1500
        inet 172.16.3.6  netmask 255.255.255.0  broadcast 172.16.3.255
        inet6 fe80::8c06:98ff:fe7a:5b7  prefixlen 64  scopeid 0x20
        ether 8e:06:98:7a:05:b7  txqueuelen 0  (Ethernet)
        RX packets 2299  bytes 12722091 (12.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2557  bytes 26854977 (25.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan50: flags=4163  mtu 1500
        inet 172.16.0.6  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::6454:dff:fe41:90e9  prefixlen 64  scopeid 0x20
        ether 66:54:0d:41:90:e9  txqueuelen 0  (Ethernet)
        RX packets 107  bytes 9834 (9.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 121  bytes 12394 (12.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@overcloud-controller-0 ~(keystone_admin)]# route -n

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 vlan10
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
169.254.169.254 192.0.2.1       255.255.255.255 UGH   0      0        0 br-ex
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan50
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan30
172.16.2.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan20
172.16.3.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan40
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ex
 
[root@overcloud-controller-0 ~]# cat /etc/os-net-config/config.json | jq '.[]'
[
  {
    "addresses": [
      {
        "ip_netmask": "192.0.2.11/24"
      }
    ],
    "type": "ovs_bridge",
    "use_dhcp": false,
    "routes": [
      {
        "next_hop": "192.0.2.1",
        "ip_netmask": "169.254.169.254/32"
      }
    ],
    "members": [
      {
        "primary": true,
        "name": "nic1",
        "type": "interface"
      },
      {
        "vlan_id": 10,
        "addresses": [
          {
            "ip_netmask": "10.0.0.6/24"
          }
        ],
        "type": "vlan",
        "routes": [
          {
            "next_hop": "10.0.0.1",
            "default": true
          }
        ]
      },
      {
        "vlan_id": 20,
        "addresses": [
          {
            "ip_netmask": "172.16.2.9/24"
          }
        ],
        "type": "vlan"
      },
      {
        "vlan_id": 30,
        "addresses": [
          {
            "ip_netmask": "172.16.1.5/24"
          }
        ],
        "type": "vlan"
      },
      {
        "vlan_id": 40,
        "addresses": [
          {
            "ip_netmask": "172.16.3.6/24"
          }
        ],
        "type": "vlan"
      },
      {
        "vlan_id": 50,
        "addresses": [
          {
            "ip_netmask": "172.16.0.6/24"
          }
        ],
        "type": "vlan"
      }
    ],
    "name": "br-ex",
    "dns_servers": [
      "8.8.8.8",
      "8.8.4.4"
    ]
  }
]
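
The same file can be fed back through os-net-config to preview what it would
apply; a minimal sketch (--noop makes no changes):

[root@overcloud-controller-0 ~]# os-net-config -c /etc/os-net-config/config.json --noop --verbose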
 
************************
On the undercloud
************************
[stack@undercloud ~]$ sudo su -
Last login: Sat Jun 18 10:47:31 UTC 2016 on pts/1
[root@undercloud ~]# ovs-vsctl show
7fb4d9b7-4704-410f-845f-6f3f0a1b65cd
    Bridge br-ctlplane
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port br-ctlplane
            Interface br-ctlplane
                type: internal
        Port phy-br-ctlplane
            Interface phy-br-ctlplane
                type: patch
                options: {peer=int-br-ctlplane}
        Port "eth1"
            Interface "eth1"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap41a7c72c-39"
            tag: 1
            Interface "tap41a7c72c-39"
                type: internal
        Port int-br-ctlplane
            Interface int-br-ctlplane
                type: patch
                options: {peer=phy-br-ctlplane}
    ovs_version: "2.5.0"
[root@undercloud ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.23.1    0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ctlplane
192.168.23.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
 
[root@undercloud ~]# ifconfig
br-ctlplane: flags=4163  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::2ad:c4ff:fe6f:778a  prefixlen 64  scopeid 0x20
        ether 00:ad:c4:6f:77:8a  txqueuelen 0  (Ethernet)
        RX packets 4743446  bytes 382457275 (364.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6573214  bytes 31299066406 (29.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163  mtu 1500
        inet 192.168.23.46  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::2ad:c4ff:fe6f:7788  prefixlen 64  scopeid 0x20
        ether 00:ad:c4:6f:77:88  txqueuelen 1000  (Ethernet)
        RX packets 402911  bytes 1166354846 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 286351  bytes 63608008 (60.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth1: flags=4163  mtu 1500
        inet6 fe80::2ad:c4ff:fe6f:778a  prefixlen 64  scopeid 0x20
        ether 00:ad:c4:6f:77:8a  txqueuelen 1000  (Ethernet)
        RX packets 4793675  bytes 390579748 (372.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6627325  bytes 32167819071 (29.9 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 5342779  bytes 31375282714 (29.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5342779  bytes 31375282714 (29.2 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:b7:65:c0  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

vlan10: flags=4163  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::c4d1:81ff:fec1:6006  prefixlen 64  scopeid 0x20
        ether c6:d1:81:c1:60:06  txqueuelen 0  (Ethernet)
        RX packets 49362  bytes 7857042 (7.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 52980  bytes 868430005 (828.1 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 
 

Saturday, June 11, 2016

RDO Mitaka Virtual Deployment having real physical network as External

  The Nova-Docker driver is installed on the Compute node, which is supposed to run several Java EE servers as lightweight Nova-Docker containers (instances) with floating IPs on an external flat network (actually the real office network 192.168.1.0/24). General setup: RDO Mitaka, ML2&OVS&VLAN, 3 nodes. VLAN tenant segregation was selected for this RDO landscape to avoid a DVR configuration of the Controller && Compute cluster.
Details here Setup Docker Hypervisor on Multi Node DVR Cluster RDO Mitaka

Configuration RDO Mitaka :-

  Controller/Network (VM)  192.169.142.127  (eth0 - mgmt, eth1 - vlan vm/data, eth2 - external)
  Compute            (VM)  192.169.142.137  (eth0 - mgmt, eth1 - vlan vm/data)
  Storage            (VM)  192.169.142.147  (eth0 - mgmt)


********************************************************************************************
The office LAN 192.168.1.0/24 is supposed to serve as the external network (configured via the flat network provider) for the deployed VMs. The VIRTHOST (F23) uses a Linux bridge br0 with the original interface enp3s0 as its source interface.
********************************************************************************************
[root@fedora23wks network-scripts]# cat ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
DNS1=192.168.1.1
DNS2=83.221.202.254
GATEWAY=192.168.1.1
IPADDR=192.168.1.57
NETMASK=255.255.255.0
ONBOOT=yes

[root@fedora23wks network-scripts]# cat ifcfg-enp3s0
DEVICE=enp3s0
HWADDR=78:24:af:43:1b:53
ONBOOT=yes
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
BRIDGE=br0

***************************
Then run script
***************************
#!/bin/bash -x
chkconfig network on
systemctl stop NetworkManager
systemctl disable NetworkManager
service network restart

Reboot node
[root@fedora23wks network-scripts]# brctl show
bridge name    bridge id        STP enabled    interfaces
br0        8000.7824af431b53    no        enp3s0
                                          vnet2
********************************************************************************************
Create the external network on the Controller via the flat external network
provider, matching the CIDR of the office LAN; 192.168.1.1 is the IP of the
external physical router device. A CLI sketch follows below.
********************************************************************************************
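
Roughly the CLI equivalent of this step (a minimal sketch; physnet2 is the flat
provider mapped to br-eth2 in the config below, the allocation pool range is
just an assumption):

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron net-create external_network \
      --provider:network_type flat --provider:physical_network physnet2 --router:external
[root@ip-192-169-142-127 ~(keystone_admin)]# neutron subnet-create external_network 192.168.1.0/24 \
      --name public_subnet --gateway 192.168.1.1 --disable-dhcp \
      --allocation-pool start=192.168.1.100,end=192.168.1.200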
  
  
  

********************************
Controller Configuration
********************************

[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^$|grep -v ^#
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = legacy
gateway_external_network_id =
external_network_bridge =
debug = False

[AGENT]
[root@ip-192-169-142-127 neutron(keystone_admin)]# cd plugins/ml2
[root@ip-192-169-142-127 ml2(keystone_admin)]# cat ml2_conf.ini
[DEFAULT]
[ml2]
type_drivers = vlan,flat
tenant_network_types = vlan
mechanism_drivers =openvswitch
path_mtu = 0
[ml2_type_flat]
flat_networks = *
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges =physnet1:100:200,physnet2
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True

[root@ip-192-169-142-127 ml2(keystone_admin)]# cat openvswitch_agent.ini
[DEFAULT]
[agent]
l2_population = False
drop_flows_on_start = False
[ovs]
integration_bridge = br-int
bridge_mappings =physnet1:br-eth1,physnet2:br-eth2
enable_tunneling=False
local_ip=192.169.142.127
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show
d12e6a7a-f589-42cd-91b3-96156ad9ed59
    Bridge br-int
        fail_mode: secure
        Port "tap4118e71e-a4"
            tag: 2
            Interface "tap4118e71e-a4"
                type: internal
        Port "qr-41a1a0fa-ec"
            tag: 1
            Interface "qr-41a1a0fa-ec"
                type: internal
        Port "tap390b9bc5-b9"
            tag: 1
            Interface "tap390b9bc5-b9"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "int-br-eth1"
            Interface "int-br-eth1"
                type: patch
                options: {peer="phy-br-eth1"}
        Port "qg-65a69bdf-c7"
            tag: 2
            Interface "qg-65a69bdf-c7"
                type: internal
        Port "int-br-eth2"
            Interface "int-br-eth2"
                type: patch
                options: {peer="phy-br-eth2"}
    Bridge "br-eth2"          <=== external bridge for non-bridged networking
        Port "phy-br-eth2"
            Interface "phy-br-eth2"
                type: patch
                options: {peer="int-br-eth2"}
        Port "br-eth2"
            Interface "br-eth2"
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth0"
            Interface "eth0"
    Bridge "br-eth1"    <=== internal VLAN vm/data network bridge
        Port "phy-br-eth1"
            Interface "phy-br-eth1"
                type: patch
                options: {peer="int-br-eth1"}
        Port "eth1"
            Interface "eth1"
        Port "br-eth1"
            Interface "br-eth1"
                type: internal
    ovs_version: "2.4.0"

****************************************************************************************
Dashboard Console ( Controller VM on VIRTHOST 192.168.1.57 )
****************************************************************************************


  Connect to GF 4.1 Server from remote workstation