Tuesday, August 23, 2016

Attempt to reproduce Deploying Kubernetes on Openstack using Heat by Ales Nosek (CentOS 7.2)

UPDATE 09/07/2016
The issue with RDO Mitaka (CentOS repos based) was escalated to RH:
"Bug 1374183 - Import Error for python-senlinclient python-zaqarclient python-magnumclient python-mistralclient"
END UPDATE

UPDATE 09/05/2016
Attempt on RDO Newton M3: the kubernetes stack hangs in CREATE_IN_PROGRESS, with heat logs reporting that it is waiting for the Master.
The prerequisites from http://kubernetes.io/docs/getting-started-guides/openstack-heat/
for the python clients are satisfied in Newton (Master is running).
However, RDO Newton M3 itself fails with a simple `nova boot ... ` issued on the Compute Node.
END UPDATE
 
UPDATE 08/27/2016
I tested the updated CentOS-7-x86_64-GenericCloud-1607.qcow2 with python2-boto 2.41 preinstalled. It eliminates the "ERROR" during the Master boot and makes it possible to log into the Master via the ssh keypair exported in the build environment. However, there is no httpd daemon running in SSL mode inside the VM, so https://Master-IP obviously fails.
END UPDATE

I got negative results attempting to reproduce the blog http://alesnosek.com/blog/2016/06/26/deploying-kubernetes-on-openstack-using-heat/ . Below is my step-by-step procedure, which
builds the kubernetes heat stack (not functional at the moment), and the troubleshooting of the kubernetes VMs' boot logs, which contain ERRORs. Even with those errors fixed, the kubernetes stack is still not functional.

Two node cluster (Controller/Network/Compute && Storage) deployed on RDO
Mitaka.

====================================
Environment set up for kubernetes stack build via heat
====================================
[boris@CentOS72Server ~(keystone_build)]$ cat openrc.sh
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD=dda05d8fb4554e93
export OS_AUTH_URL=http://192.168.1.52:5000/v3
export PS1='[\u@\h \W(keystone_build)]\$ '

export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_REGION_NAME=RegionOne
export OS_TENANT_ID=6e72c704971d4da3845f0ae9982bca6b

[boris@CentOS72Server ~(keystone_build)]$ cat openstack-heat.sh
export KUBERNETES_PROVIDER=openstack-heat
export STACK_NAME=kubernetes
export KUBERNETES_KEYPAIR_NAME=oskey082316
export NUMBER_OF_MINIONS=1
export MAX_NUMBER_OF_MINIONS=1
export EXTERNAL_NETWORK=public
export CREATE_IMAGE=false
export DOWNLOAD_IMAGE=false
export IMAGE_ID=7133dcf8-21a7-4beb-be1d-4a1f9d972cd8
export DNS_SERVER=83.221.202.254
export SWIFT_SERVER_URL=http://192.168.1.54:8080/v1/AUTH_6e72c704971d4da3845f0ae9982bca6b
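
Both files above are sourced in the shell that later runs kube-up.sh; a minimal sketch (file locations are an assumption):

. ~/openrc.sh
. ~/openstack-heat.sh
cd ~/kubernetes && ./cluster/kube-up.sh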

1. The Storage node was separated during packstack deployment (localhost:8080 causes an issue on an AIO box due to the default swift-proxy endpoint).
2. SSL connection via Horizon was enabled in the packstack deployment.
3. Security rules provide access to ports 443, 80, 22 (see the sketch after this list).
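
A hedged sketch of the corresponding rules (the target security group here is illustrative; in this run the heat-created kubernetes secgroups already carry the TCP rules):

neutron security-group-rule-create --protocol tcp --port-range-min 22  --port-range-max 22  --direction ingress --remote-ip-prefix 0.0.0.0/0 default
neutron security-group-rule-create --protocol tcp --port-range-min 80  --port-range-max 80  --direction ingress --remote-ip-prefix 0.0.0.0/0 default
neutron security-group-rule-create --protocol tcp --port-range-min 443 --port-range-max 443 --direction ingress --remote-ip-prefix 0.0.0.0/0 default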
========
Results
========
[root@CentOS72Server ~(keystone_admin)]# nova list
+--------------------------------------+--------------------------+--------+------------+-------------+---------------------------------------------------------------+
| ID                                   | Name                     | Status | Task State | Power State | Networks                                                      |
+--------------------------------------+--------------------------+--------+------------+-------------+---------------------------------------------------------------+
| f72bcec6-2def-4103-bb84-fcdc4a8af65e | CentOS72Devs01           | ACTIVE | -          | Running     | private=10.0.0.3, 192.168.1.150                               |
| 462e5122-fe5b-486e-8b1d-4379345271d6 | kubernetes-master        | ACTIVE | -          | Running     | kubernetes-fixed_network-htt6bujn7umv=10.0.0.3, 192.168.1.155 |
| 9c0f4e2c-1e9c-4370-8906-6b104b9bedbd | kubernetes-node-FhUQ6AJz | ACTIVE | -          | Running     | kubernetes-fixed_network-htt6bujn7umv=10.0.0.4, 192.168.1.156 |
+--------------------------------------+--------------------------+--------+------------+-------------+---------------------------------------------------------------+

[root@CentOS72Server ~(keystone_admin)]# openstack stack list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time       | Updated Time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 57b4511f-d264-4a29-ab8c-9ce273a4d9bb | kubernetes | CREATE_COMPLETE | 2016-08-23T14:29:43 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+
[root@CentOS72Server ~(keystone_admin)]# nova secgroup-list
+--------------------------------------+-----------------------------------------+------------------------+
| Id                                   | Name                                    | Description            |
+--------------------------------------+-----------------------------------------+------------------------+
| 9763cead-5816-40c5-a6e0-50a821347e52 | default                                 | Default security group |
| fc918814-db18-4be9-a319-4d8988b9060f | kubernetes-secgroup_base-7raauykt5owy   |                        |
| 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 | kubernetes-secgroup_master-ztdnfr6paudu |                        |
| 08d5e1d7-0223-4acb-bf74-ed7230e98bf1 | kubernetes-secgroup_node-dt77fol3a7og   |                        |
+--------------------------------------+-----------------------------------------+------------------------+


[boris@CentOS72Server kubernetes(keystone_build)]$ ./cluster/kube-up.sh
... Starting cluster using provider: openstack-heat
... calling verify-prereqs
swift client installed
glance client installed
nova client installed
heat client installed
openstack client installed
... calling kube-up
kube-up for provider openstack-heat
[INFO] Execute commands to create Kubernetes cluster
[INFO] Uploading kubernetes-server-linux-amd64.tar.gz
kubernetes-server.tar.gz
[INFO] Uploading kubernetes-salt.tar.gz
kubernetes-salt.tar.gz
[INFO] Key pair already exists
Stack not found: kubernetes
[INFO] Create stack kubernetes
+---------------------+-------------------------------------------------------------------------+
| Field               | Value                                                                   |
+---------------------+-------------------------------------------------------------------------+
| id                  | 57b4511f-d264-4a29-ab8c-9ce273a4d9bb                                    |
| stack_name          | kubernetes                                                              |
| description         | Kubernetes cluster with one master and one or more worker nodes (as     |
|                     | specified by the number_of_minions parameter, which defaults to 3).     |
|                     |                                                                         |
| creation_time       | 2016-08-23T14:29:43                                                     |
| updated_time        | None                                                                    |
| stack_status        | CREATE_IN_PROGRESS                                                      |
| stack_status_reason |                                                                         |
+---------------------+-------------------------------------------------------------------------+

... calling validate-cluster
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_IN_PROGRESS
Cluster status CREATE_COMPLETE
cluster "openstack-kubernetes" set.
user "openstack-kubernetes" set.
context "openstack-kubernetes" set.
switched to context "openstack-kubernetes".
Wrote config for openstack-kubernetes to /home/boris/.kube/config
Done, listing cluster services:

The connection to the server 192.168.1.155 was refused - did you specify the right host or port?
=========================================
Status of heat-engine.log upon successful completion.
As far as I understand, python-senlinclient and python-zaqarclient
are not packaged with RDO Mitaka on CentOS 7.2.
See also :-
https://bugs.launchpad.net/heat/+bug/1544220
https://bugs.launchpad.net/heat/+bug/1597593
https://bugzilla.redhat.com/show_bug.cgi?id=1294489
=========================================
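Whether those clients are actually present on the Controller can be checked directly; a sketch assuming the RDO package and module names quoted in the bug report above:

rpm -q python-senlinclient python-zaqarclient python-magnumclient python-mistralclient
python -c 'import senlinclient, zaqarclient'
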
[boris@CentOS72Server kubernetes(keystone_build)]$ cat  /home/boris/.kube/config
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.1.155
  name: openstack-kubernetes
contexts:
- context:
    cluster: openstack-kubernetes
    user: openstack-kubernetes
  name: openstack-kubernetes
current-context: openstack-kubernetes
kind: Config
preferences: {}
users:
- name: openstack-kubernetes
  user:

  
 

=======
Finally
=======
[root@CentOS72Server ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp  --direction ingress --remote-ip-prefix 0.0.0.0/0 fc918814-db18-4be9-a319-4d8988b9060f
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 83e43587-1f6f-4f1b-b8b9-85e353b4d030 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | fc918814-db18-4be9-a319-4d8988b9060f |
| tenant_id         | 6e72c704971d4da3845f0ae9982bca6b     |
+-------------------+--------------------------------------+

[root@CentOS72Server ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp  --direction ingress --remote-ip-prefix 0.0.0.0/0 29a1ff1d-be63-4bec-bac7-fdfa00a9c551
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 275f5b0b-4521-4b40-abb8-97bc1ab9566f |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 |
| tenant_id         | 6e72c704971d4da3845f0ae9982bca6b     |
+-------------------+--------------------------------------+

[root@CentOS72Server ~(keystone_admin)]# nova secgroup-list
+--------------------------------------+-----------------------------------------+------------------------+
| Id                                   | Name                                    | Description            |
+--------------------------------------+-----------------------------------------+------------------------+
| 9763cead-5816-40c5-a6e0-50a821347e52 | default                                 | Default security group |
| fc918814-db18-4be9-a319-4d8988b9060f | kubernetes-secgroup_base-7raauykt5owy   |                        |
| 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 | kubernetes-secgroup_master-ztdnfr6paudu |                        |
| 08d5e1d7-0223-4acb-bf74-ed7230e98bf1 | kubernetes-secgroup_node-dt77fol3a7og   |                        |
+--------------------------------------+-----------------------------------------+------------------------+

[root@CentOS72Server ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp  --direction ingress --remote-ip-prefix 0.0.0.0/0 08d5e1d7-0223-4acb-bf74-ed7230e98bf1
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field             | Value                                |
+-------------------+--------------------------------------+
| description       |                                      |
| direction         | ingress                              |
| ethertype         | IPv4                                 |
| id                | 8ef7ae78-42ff-4f82-baab-ce41e5e90cc8 |
| port_range_max    |                                      |
| port_range_min    |                                      |
| protocol          | icmp                                 |
| remote_group_id   |                                      |
| remote_ip_prefix  | 0.0.0.0/0                            |
| security_group_id | 08d5e1d7-0223-4acb-bf74-ed7230e98bf1 |
| tenant_id         | 6e72c704971d4da3845f0ae9982bca6b     |
+-------------------+--------------------------------------+

Can ping 192.168.1.155 and 192.168.1.156.

Security rules for each kubernetes secgroup have ports 1-65535 open; however, the connection to https://192.168.1.155 is still refused.
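
With ICMP now allowed, the refused connection reported by kube-up.sh can be narrowed down on the Master itself; a hedged sketch (the key path, guest user name and the /healthz endpoint are assumptions here):

ssh -i ~/oskey082316.pem centos@192.168.1.155 'sudo ss -tlnp | egrep ":443|:6443"'
curl -k https://192.168.1.155/healthz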


==========================
 Kubernetes Master VM boot log contains
===========================

  
[[32m  OK  [0m] Started Update UTMP about System Runlevel Changes.
[  380.104758] cloud-init[4161]: [ERROR   ] boto_route53 requires at least boto 2.35.0.
[  455.439213] cloud-init[4161]: [ERROR   ] boto_route53 requires at least boto 2.35.0.
[  469.546079] cloud-init[4161]: [WARNING ] /usr/lib/python2.7/site-packages/salt/states/cmd.py:1041: DeprecationWarning: The legacy user/group arguments are deprecated. Replace them with runas. These arguments will be removed in Salt Oxygen.
[  521.559170] cloud-init[4161]: [WARNING ] State for file: /var/log/kube-apiserver.log - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
[  521.723063] cloud-init[4161]: [ERROR   ] boto_route53 requires at least boto 2.35.0.

Even if I check out the release branch:

$ git clone https://github.com/kubernetes/kubernetes.git 
$ cd kubernetes 
$ git checkout origin/release-1.3.0
$ make quick-release

the same error appears in the Master VM boot log.

I believe the CentOS 7.2 image has to be updated to python2-boto 2.41 from EPEL 7 during the cloud-init run (first boot).
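
One hedged way to get the newer boto into the guest without relying on the first boot is to bake it into the Glance image before upload; a sketch assuming libguestfs-tools on the build host (file and image names are illustrative):

sudo yum -y install libguestfs-tools
virt-customize -a CentOS-7-x86_64-GenericCloud-1607.qcow2 \
    --install epel-release \
    --run-command 'yum -y install python2-boto'
openstack image create --disk-format qcow2 --container-format bare \
    --file CentOS-7-x86_64-GenericCloud-1607.qcow2 CentOS-7-x86_64-boto241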


  References
  http://alesnosek.com/blog/2016/06/26/deploying-kubernetes-on-openstack-using-heat/

Sunday, August 21, 2016

Emulation of TripleO QuickStart HA Controllers cluster failover

The procedure below identifies the Controller hosting RouterDSA in the active state and
shuts down / starts up this Controller (controller-1 in this particular case).
It then logs into controller-1, restarts the pcs cluster on that Controller,
and afterwards runs `pcs resource cleanup` for several resources, which
brings the cluster nodes back to a proper status.
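
A condensed sketch of the whole procedure (the controller to power-cycle is whichever node is reported with ha_state "active"):

neutron l3-agent-list-hosting-router RouterDSA   # run on any controller
nova stop overcloud-controller-1                 # run on the undercloud (stackrc sourced)
nova start overcloud-controller-1
# then, on the restarted controller:
pcs cluster stop && pcs cluster start
pcs resource cleanup rabbitmq-clone              # repeat for the other stopped clones (see start.sh below)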


 
[root@overcloud-controller-0 ~]# neutron l3-agent-list-hosting-router RouterDSA
+--------------------------------------+------------------------------------+----------------+-------+----------+
| id                                   | host                               | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------------+----------------+-------+----------+
| 558fe2d4-a709-482f-85f2-9bb9835cf360 | overcloud-controller-1.localdomain | True           | :-)   | active   |
| ae0f67ce-732b-4cb2-9b52-d15c22211972 | overcloud-controller-0.localdomain | True           | :-)   | standby  |
| fd9bfd34-9e36-4dac-a350-d18fd1c3489b | overcloud-controller-2.localdomain | True           | :-)   | standby  |
+--------------------------------------+------------------------------------+----------------+-------+----------+
[root@overcloud-controller-0 ~]# logout
[heat-admin@overcloud-controller-0 ~]$ logout
Connection to 192.0.2.16 closed.
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 5387385d-69a1-40ab-a77a-40d97949dc16 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.16 |
| 456031a7-21c4-497f-a7d8-baa3d403ee2f | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.14 |
| 80b6ce3a-23a0-42d3-a1b3-fec22ca8f615 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.17 |
| b5a8c17c-e170-4f66-a5dd-846546afcfce | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| c10e25b3-6732-4afb-b51c-5d9f859bd7d6 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.15 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
[stack@undercloud ~]$ nova stop overcloud-controller-1
Request to stop server overcloud-controller-1 has been accepted.
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+---------+------------+-------------+---------------------+
| ID                                   | Name                    | Status  | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+---------+------------+-------------+---------------------+
| 5387385d-69a1-40ab-a77a-40d97949dc16 | overcloud-controller-0  | ACTIVE  | -          | Running     | ctlplane=192.0.2.16 |
| 456031a7-21c4-497f-a7d8-baa3d403ee2f | overcloud-controller-1  | SHUTOFF | -          | Shutdown    | ctlplane=192.0.2.14 |
| 80b6ce3a-23a0-42d3-a1b3-fec22ca8f615 | overcloud-controller-2  | ACTIVE  | -          | Running     | ctlplane=192.0.2.17 |
| b5a8c17c-e170-4f66-a5dd-846546afcfce | overcloud-novacompute-0 | ACTIVE  | -          | Running     | ctlplane=192.0.2.13 |
| c10e25b3-6732-4afb-b51c-5d9f859bd7d6 | overcloud-novacompute-1 | ACTIVE  | -          | Running     | ctlplane=192.0.2.15 |
+--------------------------------------+-------------------------+---------+------------+-------------+---------------------+
[stack@undercloud ~]$ nova start overcloud-controller-1
Request to start server overcloud-controller-1 has been accepted.
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 5387385d-69a1-40ab-a77a-40d97949dc16 | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.16 |
| 456031a7-21c4-497f-a7d8-baa3d403ee2f | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.14 |
| 80b6ce3a-23a0-42d3-a1b3-fec22ca8f615 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.17 |
| b5a8c17c-e170-4f66-a5dd-846546afcfce | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.13 |
| c10e25b3-6732-4afb-b51c-5d9f859bd7d6 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.15 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

[stack@undercloud ~]$ ssh heat-admin@192.0.2.14
The authenticity of host '192.0.2.14 (192.0.2.14)' can't be established.
ECDSA key fingerprint is a3:e6:de:2e:2b:45:e4:33:3d:d0:75:e5:b7:7f:da:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.0.2.14' (ECDSA) to the list of known hosts.

[heat-admin@overcloud-controller-1 ~]$ sudo su -
[root@overcloud-controller-1 ~]# pcs status
Cluster name: tripleo_cluster
Last updated: Sun Aug 21 15:12:39 2016        Last change: Sun Aug 21 13:24:42 2016 by root via cibadmin on overcloud-controller-1
Stack: corosync
Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-192.0.2.12    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-2 ]
     Slaves: [ overcloud-controller-1 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Stopped
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 ]
     Stopped: [ overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: delay-clone [delay]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-2 ]
     Stopped: [ overcloud-controller-1 ]

Failed Actions:
* rabbitmq_monitor_10000 on overcloud-controller-0 'not running' (7): call=81, status=complete, exitreason='none',
    last-rc-change='Sun Aug 21 15:11:13 2016', queued=0ms, exec=0ms
* rabbitmq_monitor_10000 on overcloud-controller-2 'not running' (7): call=79, status=complete, exitreason='none',
    last-rc-change='Sun Aug 21 15:11:13 2016', queued=0ms, exec=0ms


PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@overcloud-controller-1 ~]# pcs cluster stop
Stopping Cluster (pacemaker)... Stopping Cluster (corosync)...
[root@overcloud-controller-1 ~]# pcs cluster start
Starting Cluster...
[root@overcloud-controller-1 ~]#
Broadcast message from systemd-journald@overcloud-controller-1.localdomain (Sun 2016-08-21 15:16:07 UTC):

haproxy[16997]: proxy nova_ec2 has no server available!

======================================
Script start.sh  [ 1 ]
======================================
#!/bin/bash -x
pcs resource cleanup rabbitmq-clone ;
sleep 10
pcs resource cleanup neutron-server-clone ;
sleep 10
pcs resource cleanup openstack-nova-api-clone ;
sleep 10
pcs resource cleanup openstack-nova-consoleauth-clone ;
sleep 10
pcs resource cleanup openstack-heat-engine-clone ;
sleep 10
pcs resource cleanup openstack-cinder-api-clone ;
sleep 10
pcs resource cleanup openstack-glance-registry-clone ;
sleep 10
pcs resource cleanup httpd-clone
=======================================
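
The same cleanup can be written as a loop; a minimal sketch equivalent to start.sh above:

#!/bin/bash -x
# clean up the pacemaker clone resources left Stopped on the restarted controller
for res in rabbitmq-clone neutron-server-clone openstack-nova-api-clone \
           openstack-nova-consoleauth-clone openstack-heat-engine-clone \
           openstack-cinder-api-clone openstack-glance-registry-clone httpd-clone
do
    pcs resource cleanup "$res"
    sleep 10
done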


[root@overcloud-controller-1 ~]# . ./start.sh
Waiting for 3 replies from the CRMd... OK
Cleaning up rabbitmq:0 on overcloud-controller-0, removing fail-count-rabbitmq
Cleaning up rabbitmq:0 on overcloud-controller-1, removing fail-count-rabbitmq
Cleaning up rabbitmq:0 on overcloud-controller-2, removing fail-count-rabbitmq

Waiting for 3 replies from the CRMd... OK
Cleaning up neutron-server:0 on overcloud-controller-0, removing fail-count-neutron-server
Cleaning up neutron-server:0 on overcloud-controller-1, removing fail-count-neutron-server
Cleaning up neutron-server:0 on overcloud-controller-2, removing fail-count-neutron-server

Waiting for 3 replies from the CRMd... OK
Cleaning up openstack-nova-api:0 on overcloud-controller-0, removing fail-count-openstack-nova-api
Cleaning up openstack-nova-api:0 on overcloud-controller-1, removing fail-count-openstack-nova-api
Cleaning up openstack-nova-api:0 on overcloud-controller-2, removing fail-count-openstack-nova-api

Waiting for 3 replies from the CRMd... OK
Cleaning up openstack-nova-consoleauth:0 on overcloud-controller-0, removing fail-count-openstack-nova-consoleauth
Cleaning up openstack-nova-consoleauth:0 on overcloud-controller-1, removing fail-count-openstack-nova-consoleauth
Cleaning up openstack-nova-consoleauth:0 on overcloud-controller-2, removing fail-count-openstack-nova-consoleauth

Waiting for 3 replies from the CRMd... OK
Cleaning up openstack-heat-engine:0 on overcloud-controller-0, removing fail-count-openstack-heat-engine
Cleaning up openstack-heat-engine:0 on overcloud-controller-1, removing fail-count-openstack-heat-engine
Cleaning up openstack-heat-engine:0 on overcloud-controller-2, removing fail-count-openstack-heat-engine

Waiting for 3 replies from the CRMd... OK
Cleaning up openstack-cinder-api:0 on overcloud-controller-0, removing fail-count-openstack-cinder-api
Cleaning up openstack-cinder-api:0 on overcloud-controller-1, removing fail-count-openstack-cinder-api
Cleaning up openstack-cinder-api:0 on overcloud-controller-2, removing fail-count-openstack-cinder-api

Waiting for 3 replies from the CRMd... OK
Cleaning up openstack-glance-registry:0 on overcloud-controller-0, removing fail-count-openstack-glance-registry
Cleaning up openstack-glance-registry:0 on overcloud-controller-1, removing fail-count-openstack-glance-registry
Cleaning up openstack-glance-registry:0 on overcloud-controller-2, removing fail-count-openstack-glance-registry

Waiting for 3 replies from the CRMd... OK
Cleaning up httpd:0 on overcloud-controller-0, removing fail-count-httpd
Cleaning up httpd:0 on overcloud-controller-1, removing fail-count-httpd
Cleaning up httpd:0 on overcloud-controller-2, removing fail-count-httpd

[root@overcloud-controller-1 ~]# pcs status
Cluster name: tripleo_cluster
Last updated: Sun Aug 21 15:18:04 2016        Last change: Sun Aug 21 15:17:57 2016 by hacluster via crmd on overcloud-controller-0
Stack: corosync
Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
3 nodes and 127 resources configured

Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

Full list of resources:

 ip-192.0.2.12    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.5    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 ip-172.16.3.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: haproxy-clone [haproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: galera-master [galera]
     Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: memcached-clone [memcached]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 ip-10.0.0.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.2.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-0
 ip-172.16.1.4    (ocf::heartbeat:IPaddr2):    Started overcloud-controller-2
 Clone Set: rabbitmq-clone [rabbitmq]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-core-clone [openstack-core]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Master/Slave Set: redis-master [redis]
     Masters: [ overcloud-controller-0 ]
     Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: mongod-clone [mongod]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 openstack-cinder-volume    (systemd:openstack-cinder-volume):    Started overcloud-controller-0
 Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-clone [openstack-heat-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-api-clone [openstack-glance-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-api-clone [openstack-nova-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: delay-clone [delay]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: neutron-server-clone [neutron-server]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: httpd-clone [httpd]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
 Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
     Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]

PCSD Status:
  overcloud-controller-0: Online
  overcloud-controller-1: Online
  overcloud-controller-2: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@overcloud-controller-1 ~]# logout
[heat-admin@overcloud-controller-1 ~]$ logout
Connection to 192.0.2.14 closed.
[stack@undercloud ~]$ ssh heat-admin@192.0.2.16
Last login: Sun Aug 21 15:08:18 2016 from 192.0.2.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Sun Aug 21 15:08:24 UTC 2016 on pts/0
[root@overcloud-controller-0 ~]# . keystonerc_admin
[root@overcloud-controller-0 ~]# neutron l3-agent-list-hosting-router RouterDSA
+--------------------------------------+------------------------------------+----------------+-------+----------+
| id                                   | host                               | admin_state_up | alive | ha_state |
+--------------------------------------+------------------------------------+----------------+-------+----------+
| 558fe2d4-a709-482f-85f2-9bb9835cf360 | overcloud-controller-1.localdomain | True           | :-)   | standby  |
| ae0f67ce-732b-4cb2-9b52-d15c22211972 | overcloud-controller-0.localdomain | True           | :-)   | standby  |
| fd9bfd34-9e36-4dac-a350-d18fd1c3489b | overcloud-controller-2.localdomain | True           | :-)   | active   |
+--------------------------------------+------------------------------------+----------------+-------+----------+

====================================
Verification Galera DB sync on Controllers
====================================

[root@overcloud-controller-0 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

Galera cluster node is synced.

[root@overcloud-controller-0 ~]# logout
[heat-admin@overcloud-controller-0 ~]$ logout
Connection to 192.0.2.16 closed.

[stack@undercloud ~]$ ssh heat-admin@192.0.2.14
Last login: Sun Aug 21 15:12:27 2016 from 192.0.2.1
[heat-admin@overcloud-controller-1 ~]$ sudo su -
Last login: Sun Aug 21 15:12:34 UTC 2016 on pts/0

[root@overcloud-controller-1 ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

Galera cluster node is synced.
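
clustercheck reports the Galera wsrep state over HTTP; the same can be verified straight from MySQL, a sketch assuming root DB credentials are available on the controller (e.g. via /root/.my.cnf):

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"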


 ==================
 Setup details
==================

[boris@fedora24wks tripleo-quickstart]$ cat ./config/general_config/ha.yml
# Deploy an HA openstack environment.
#
# This will require (6144 * 4) == approx. 24GB for the overcloud
# nodes, plus another 8GB for the undercloud, for a total of around
# 32GB.
control_memory: 6144
compute_memory: 6144
default_vcpu: 2

undercloud_memory: 8192

# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 2

# Create three controller nodes and one compute node.
overcloud_nodes:
  - name: control_0
    flavor: control
  - name: control_1
    flavor: control
  - name: control_2
    flavor: control

  - name: compute_0
    flavor: compute
  - name: compute_1
    flavor: compute

# We don't need introspection in a virtual environment (because we are
# creating all the "hardware" we really know the necessary
# information).
step_introspect: true

# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
  --control-scale 3 --compute-scale 2 --neutron-network-type vxlan
  --neutron-tunnel-types vxlan
  --ntp-server pool.ntp.org
test_tempest: false
test_ping: true
enable_pacemaker: true

##################################
Virtual Environment Setup Complete
##################################

Access the undercloud by:

    ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud

There are scripts in the home directory to continue the deploy:

    overcloud-deploy.sh will deploy the overcloud
    overcloud-deploy-post.sh will do any post-deploy configuration
    overcloud-validate.sh will run post-deploy validation

Alternatively, you can ignore these scripts and follow the upstream docs,
starting from the overcloud deploy section:

    http://ow.ly/1Vc1301iBlb

##################################
Virtual Environment Setup Complete
##################################

[boris@fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Wed Aug 24 12:13:16 2016 from gateway
[stack@undercloud ~]$ sudo su


[root@undercloud stack]# cd /etc/yum.repos.d
[root@undercloud yum.repos.d]# ls -l
total 40
-rw-r--r--. 1 root root 1664 Dec  9  2015 CentOS-Base.repo
-rw-r--r--. 1 root root 1057 Aug 24 02:58 CentOS-Ceph-Hammer.repo
-rw-r--r--. 1 root root 1309 Dec  9  2015 CentOS-CR.repo
-rw-r--r--. 1 root root  649 Dec  9  2015 CentOS-Debuginfo.repo
-rw-r--r--. 1 root root  290 Dec  9  2015 CentOS-fasttrack.repo
-rw-r--r--. 1 root root  630 Dec  9  2015 CentOS-Media.repo
-rw-r--r--. 1 root root 1331 Dec  9  2015 CentOS-Sources.repo
-rw-r--r--. 1 root root 1952 Dec  9  2015 CentOS-Vault.repo
-rw-r--r--. 1 root root  162 Aug 24 02:58 delorean-deps.repo
-rw-r--r--. 1 root root  220 Aug 24 02:58 delorean.repo


====================================================
Delorean repos file been installed via quickstart on undercloud
====================================================
[root@undercloud yum.repos.d]# cat delorean-deps.repo
[delorean-mitaka-testing]
name=dlrn-mitaka-testing
baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-mitaka/
enabled=1
gpgcheck=0
priority=2

[root@undercloud yum.repos.d]# cat delorean.repo
[delorean]
name=delorean-openstack-rally-3909299306233247d547bad265a1adb78adfb3d4
baseurl=http://trunk.rdoproject.org/centos7-mitaka/39/09/3909299306233247d547bad265a1adb78adfb3d4_4e6dfa3c
enabled=1
gpgcheck=0

Wednesday, August 17, 2016

TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots ( newly polished )

UPDATE 09/03/2016


The undercloud VM now gets created with autostart enabled at boot,
so it is enough to change permissions and allow services
to start on the undercloud (5-7 min).

Upon deployment completion:
[stack@ServerTQS72 ~]$ virsh dominfo undercloud | grep -i autostart
Autostart:      enable


END UPDATE


UPDATE 08/21/2016


In case the virt tools (virsh, virt-manager) stop recognizing the running
qemu-kvm process of the undercloud as a VM, issue `sudo shutdown -P now` via the connection
`ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud`.
This results in a graceful shutdown of the undercloud's qemu-kvm process on the VIRTHOST.


END UPDATE 



This post follows up http://lxer.com/module/newswire/view/230814/index.html
and may work as a time saver, unless the status of undercloud.qcow2 per
http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/
requires a fresh installation to be done from scratch.
The current update automates the procedure via /etc/rc.d/rc.local and exports
in stack's shell the variables which allow virt-manager to be started right away, presuming that `xhost +` was issued in root's shell.

Thus, we intend to survive a VIRTHOST cold reboot (downtime), keep the previous version of the undercloud VM, bring it up without rebuilding via quickstart.sh, and restart the procedure by logging into the undercloud and immediately running the overcloud deployment. Proceed as follows :-

1. System shutdown
    Cleanly delete the overcloud stack first:
    [stack@undercloud ~]$ openstack stack delete overcloud
2. Log into the VIRTHOST as stack and gracefully shut down the undercloud:
    [stack@ServerCentOS72 ~]$ virsh shutdown undercloud


=====================
 Make following updates
=====================

[root@ServerTQS72 ~]# cat /etc/rc.d/rc.local
#!/bin/bash
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.
mkdir -p /run/user/1001
chown -R stack /run/user/1001

if [ $? -ne 0 ]
then
       exit 0   
fi
chgrp -R stack /run/user/1001

touch /var/lock/subsys/local
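
On CentOS 7, rc.local is only executed at boot if the file is executable; a short sketch to enable it and verify the directory it creates (the immediate start is optional):

sudo chmod +x /etc/rc.d/rc.local
sudo systemctl start rc-local
ls -ld /run/user/1001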

========================
In stack's .bashrc
========================

[stack@ServerTQS72 ~]$ cat .bashrc
# .bashrc

# Source global definitions
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi

# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=

# User specific aliases and functions
# BEGIN ANSIBLE MANAGED BLOCK
# Make sure XDG_RUNTIME_DIR is set (used by libvirt
# for creating config and sockets for qemu:///session
# connections)
: ${XDG_RUNTIME_DIR:=/run/user/$(id -u)}
export XDG_RUNTIME_DIR
export DISPLAY=:0.0
export NO_AT_BRIDGE=1

# END ANSIBLE MANAGED BLOCK

=================
REBOOT  VIRTHOST
=================

$ sudo su -
# xhost +
# su - stack

[stack@ServerTQS72 ~]$ virt-manager --connect qemu:///session

Start VM undercloud
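
The same can be done without the GUI, using virsh against the stack user's session libvirt; a sketch:

virsh --connect qemu:///session start undercloud
virsh --connect qemu:///session list --all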
 


  Virt-tools misbehavior (UPDATE 08/21/16). Six qemu-kvm processes are up and running:

   1. Undercloud
   2. 3 Node HA Controller (Pacemaker/Corosync) cluster
   3. 2 Compute Nodes (nested KVM enabled)

 

=====================================
 Log into the undercloud from the Ansible server via :-
=====================================
[boris@fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud

Deploy the overcloud using the old overcloud-deploy.sh:

# Deploy the overcloud!
openstack overcloud deploy \
 --templates /usr/share/openstack-tripleo-heat-templates \
 --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute \
 --ceph-storage-flavor oooq_ceph --timeout 90 \
 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
 -e $HOME/network-environment.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
 --control-scale 3 --compute-scale 2 \
 --neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org \
    ${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML} "$@" || true

# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then

    # get the failures list
    openstack stack failures list overcloud > failed_deployment_list.log || true

    # get any puppet related errors
    for failed in $(heat resource-list \
        --nested-depth 5 overcloud | grep FAILED |
        grep 'StructuredDeployment ' | cut -d '|' -f3)
    do
       echo "heat deployment-show out put for deployment: $failed" >> failed_deployments.log
       echo "######################################################" >> failed_deployments.log
       heat deployment-show $failed >> failed_deployments.log
       echo "######################################################" >> failed_deployments.log
       echo "puppet standard error for deployment: $failed" >> failed_deployments.log
       echo "######################################################" >> failed_deployments.log
       # the sed part removes color codes from the text
       heat deployment-show $failed |
           jq -r .output_values.deploy_stderr |
           sed -r "s:\x1B\[[0-9;]*[mK]::g" >> failed_deployments.log
       echo "######################################################" >> failed_deployments.log
    done
fi

[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| b6f105e8-3854-4939-99d9-73c16cf233fd | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.23 |
| 30979d6e-773b-4d79-9446-1cd25bade373 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.20 |
| 256627a8-2202-4986-86b6-8cd6e46c21db | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.22 |
| 9dc029bf-b096-4be6-b5a3-14b39ac098a4 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.21 |
| 16d0e195-c6a2-4286-a368-6fe9851ccd82 | overcloud-novacompute-1 | ACTIVE | -          | Running     | ctlplane=192.0.2.19 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+

Sunday, August 14, 2016

Access to TripleO QuickStart overcloud via sshuttle running on F24 WorkStation

sshuttle may be installed on Fedora 24 via a straightforward `dnf -y install sshuttle` [Fedora 24 update: sshuttle-0.78.0-2.fc24]. So, when F24 has been set up as the workstation for a TripleO QuickStart deployment to the VIRTHOST, there is no need to install the FoxyProxy add-on and tune it in firefox, nor to connect from the ansible workstation to the undercloud via `ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`.

What is sshuttle? It's a Python app that uses SSH to create a quick and dirty VPN between your Linux, BSD, or Mac OS X machine and a remote system that has SSH access and Python. Licensed under the GPLv2, sshuttle is a transparent proxy server that lets users fake a VPN with minimal hassle.

========================================
First install and start sshuttle on Fedora 24 :-
========================================
[boris@fedora24wks ~]$ sudo dnf -y install sshuttle
[root@fedora24wks ~]# rpm -qa \*sshuttle\*
sshuttle-0.78.0-2.fc24.noarch

========================================================
Now start sshuttle via ssh.config.ansible, where 10.0.0.0/24 is the external network
of the overcloud that has already been set up on the VIRTHOST
========================================================
[boris@fedora24wks ~]$ sshuttle -e "ssh -F $HOME/.quickstart/ssh.config.ansible" -r undercloud  -v 10.0.0.0/24 &
[3] 16385
[boris@fedora24wks ~]$ Starting sshuttle proxy.
firewall manager: Starting firewall with Python version 3.5.1
firewall manager: ready method name nat.
IPv6 enabled: False
UDP enabled: False
DNS enabled: False
TCP redirector listening on ('127.0.0.1', 12299).
Starting client with Python version 3.5.1
c : connecting to server...
Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Starting server with Python version 2.7.5
 s: latency control setting = True
 s: available routes:
 s:   2/10.0.0.0/24
 s:   2/192.0.2.0/24
 s:   2/192.168.23.0/24
 s:   2/192.168.122.0/24
c : Connected.
firewall manager: setting up.
>> iptables -t nat -N sshuttle-12299
>> iptables -t nat -F sshuttle-12299
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12299
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12299
>> iptables -t nat -A sshuttle-12299 -j REDIRECT --dest 10.0.0.0/24 -p tcp --to-ports 12299 -m ttl ! --ttl 42
>> iptables -t nat -A sshuttle-12299 -j RETURN --dest 127.0.0.1/8 -p tcp
c : Accept TCP: 192.168.1.13:53068 -> 10.0.0.4:80.
c : warning: closed channel 1 got cmd=TCP_STOP_SENDING len=0
c : Accept TCP: 192.168.1.13:53072 -> 10.0.0.4:80.
 s: SW'unknown':Mux#1: deleting (3 remain)
 s: SW#6:10.0.0.4:80: deleting (2 remain)
c : warning: closed channel 2 got cmd=TCP_STOP_SENDING len=0
c : Accept TCP: 192.168.1.13:53074 -> 10.0.0.4:80.
 s: SW'unknown':Mux#2: deleting (3 remain)
 s: SW#7:10.0.0.4:80: deleting (2 remain)
c : Accept TCP: 192.168.1.13:58210 -> 10.0.0.4:6080.
c : Accept TCP: 192.168.1.13:58212 -> 10.0.0.4:6080.
c : SW'unknown':Mux#2: deleting (9 remain)
c : SW#11:192.168.1.13:53072: deleting (8 remain)
c : SW'unknown':Mux#1: deleting (7 remain)
c : SW#9:192.168.1.13:53068: deleting (6 remain)
c : Accept TCP: 192.168.1.13:58214 -> 10.0.0.4:6080.
c : Accept TCP: 192.168.1.13:58216 -> 10.0.0.4:6080.
c : warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0
 s: warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0

Complete log may be seen here



This creates a transparent proxy server on your local machine for all IP addresses that match 10.0.0.0/24. Any TCP session you initiate to one of the proxied IP addresses will be captured by sshuttle and sent over an ssh session to the remote copy of sshuttle, which will then regenerate the connection on that end, and funnel the data back and forth through ssh. There is no need to install sshuttle on the remote server; the remote server just needs to have python available. sshuttle will automatically upload and run its source code to the remote python.

So, disable/remove the FoxyProxy add-on from firefox (if it has been set up) and interrupt the connection from the workstation to the undercloud via `ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`. Restart firefox and point the browser to http://10.0.0.4/dashboard.
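
When finished, the backgrounded sshuttle client can simply be killed; on exit it should clean up the iptables NAT rules it created. A sketch:

pkill -f sshuttle
# or, using the job number printed when it was started with '&'
kill %3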

 

  

   References
   1. http://g33kinfo.com/info/archives/5388