tag:blogger.com,1999:blog-170671012024-02-20T10:19:38.183-08:00Xen Virtualization on Linux and SolarisBoris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comBlogger344125tag:blogger.com,1999:blog-17067101.post-76261120116119749832020-07-17T00:35:00.001-07:002020-07-17T02:37:20.088-07:00Tuning bridge setup in Web Cockpit Console<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIQcwI7p7AvLLookYgjbPeHaLKEPHnkfaT8_eTkITxLQadrradJ9uoRdmpwpj0uK1_8rTOF3RytP_9jqp4ZMrc2Ml9EVkkfCACA9-sx7ypNoqcoVG_j2nKP-NDasE5cbNAqZ_MRQ/s1600/Screenshot+from+2020-07-17+10-16-35.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIQcwI7p7AvLLookYgjbPeHaLKEPHnkfaT8_eTkITxLQadrradJ9uoRdmpwpj0uK1_8rTOF3RytP_9jqp4ZMrc2Ml9EVkkfCACA9-sx7ypNoqcoVG_j2nKP-NDasE5cbNAqZ_MRQ/s640/Screenshot+from+2020-07-17+10-16-35.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAuFMH_ThXk0R2fibDxCfoptZ4_1IbDWx3xahGcAt8KDJteqqQ73OADqnDpRITf3xzdResu6NFcVUnj-c3It3RnA6U1Moyc5AAkl6sJccmMC1EHNVICsMhuXuThqsHib6PKhZbvw/s1600/Screenshot+from+2020-07-17+10-17-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAuFMH_ThXk0R2fibDxCfoptZ4_1IbDWx3xahGcAt8KDJteqqQ73OADqnDpRITf3xzdResu6NFcVUnj-c3It3RnA6U1Moyc5AAkl6sJccmMC1EHNVICsMhuXuThqsHib6PKhZbvw/s640/Screenshot+from+2020-07-17+10-17-47.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4K3X76e5vzmZtgeL4lcRvKdskSrZJzayXnsROjKoW97hdNVWP91cXNs7eyHzdcd_SrRzHMht0t8kewWKhkfPHWhmjFVE_r6Te1viftS1Cs7XTrHg0Ky87zMlfdfzo0CleuZkpFQ/s1600/Screenshot+from+2020-07-17+10-40-55.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj4K3X76e5vzmZtgeL4lcRvKdskSrZJzayXnsROjKoW97hdNVWP91cXNs7eyHzdcd_SrRzHMht0t8kewWKhkfPHWhmjFVE_r6Te1viftS1Cs7XTrHg0Ky87zMlfdfzo0CleuZkpFQ/s640/Screenshot+from+2020-07-17+10-40-55.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<span style="font-family: "times" , "times new roman" , serif; font-size: large;">We can run at time two CentOS 8.2 KVM guests. First one on office LAN via bridge0, second one</span><span style="font-family: "times" , "times new roman" , serif; font-size: large;"> attached to default libvirt's network via bridge br0 ( default )</span></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZL8msj6jVP5I-BCYnJ_lorB6w9K_Y0Xw5GHvYZ1AKV4uh0PpdX7avQsB4j0GxDPg8_k0AQbWjyA2rJz0Rt4JAKc6XmpLg82CC3mg-vUzeKD62T2xMsXKq7YpuGCxZJpGU-8BV9A/s1600/Screenshot+from+2020-07-17+10-54-39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZL8msj6jVP5I-BCYnJ_lorB6w9K_Y0Xw5GHvYZ1AKV4uh0PpdX7avQsB4j0GxDPg8_k0AQbWjyA2rJz0Rt4JAKc6XmpLg82CC3mg-vUzeKD62T2xMsXKq7YpuGCxZJpGU-8BV9A/s640/Screenshot+from+2020-07-17+10-54-39.png" width="640" /></a></div>
<br />
<span style="font-family: Times, Times New Roman, serif; font-size: large;">Runtime snapshot</span><br />
<span style="font-family: Times, Times New Roman, serif; font-size: large;"><br /></span>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrI-ZroqGEnqqQYwZ8Eiy1SiBhMv1aGhXJE_kZfmoQ1uzkAiAn4xMKq4NkITyihLKHojOgMmqBXkaH0L-QRafiajc3Oe1lDc34lITUxOOT9cYtXwLtJGDlwr7Ltzg6oPkalvhBTA/s1600/Screenshot+from+2020-07-17+12-33-34.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhrI-ZroqGEnqqQYwZ8Eiy1SiBhMv1aGhXJE_kZfmoQ1uzkAiAn4xMKq4NkITyihLKHojOgMmqBXkaH0L-QRafiajc3Oe1lDc34lITUxOOT9cYtXwLtJGDlwr7Ltzg6oPkalvhBTA/s640/Screenshot+from+2020-07-17+12-33-34.png" width="640" /></a></div>
<span style="font-family: Times, Times New Roman, serif; font-size: large;"><br /></span>
<span style="font-family: Times, Times New Roman, serif; font-size: large;"> </span><br />
<span style="font-family: Times, Times New Roman, serif; font-size: large;"><br /></span></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-92095981953167278122018-02-09T02:21:00.002-08:002018-02-09T02:44:04.172-08:00Attempt of evaluation TripleO QuickStart (Master) overcloud containerized HA deployment on 32 GB VIRTHOST (Second Test)<div dir="ltr" style="text-align: left;" trbidi="on">
<pre>The deploy-configHA.yaml below shows the tuning that produces a fairly
functional overcloud topology with a PCS HA controllers' cluster. Once deployed,
it is possible to create networks and an HA router, launch an F27 VM, install
the links text browser in the VM, and surf the net fast and smoothly. The PCS
controllers' sizes were tuned, as were the undercloud VM's default size and its
number of VCPUS. The initial swap area after deployment is 800 MB.</pre>
<pre>Set up the repos</pre>
<pre>[boris@fedora27workstation ~]$git clone \
https://github.com/openstack/tripleo-quickstart
[boris@fedora27workstation ~]$bash ./tripleo-quickstart/quickstart.sh \
--install-deps </pre>
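quickstart.sh deploys against a separate virtualization host reached as root over passwordless ssh, referenced through $VIRTHOST in the deploy command below; a hedged sketch of setting it (the address is purely a hypothetical placeholder):

```shell
# Hypothetical address; substitute the hostname/IP of your own 32 GB virt host.
export VIRTHOST=192.168.0.74
# quickstart.sh needs a non-empty target, so sanity-check before launching it.
[ -n "$VIRTHOST" ] && echo "deploying to VIRTHOST=$VIRTHOST"
```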
Configure Environment
<br />
<pre>[boris@fedora27workstation ~]$ export CONFIG=~boris/deploy-configHA.yaml
[boris@fedora27workstation ~]$ cat ~boris/deploy-configHA.yaml </pre>
<pre><b># 32 GB VIRTHOST </b></pre>
<pre><b>controller_memory: 7500</b>
<b>compute_memory: 6500</b></pre>
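These memory figures can be sanity-checked against the 32 GB host; a small sketch, using the 3-controller / 1-compute split from this same config:

```shell
# Budget check for the overcloud guests on a 32 GB VIRTHOST:
# three controllers at controller_memory MB plus one compute at compute_memory MB.
controller_memory=7500
compute_memory=6500
total=$((3 * controller_memory + 1 * compute_memory))
# What remains of the 32 GB is shared by the undercloud VM and the host itself;
# KVM memory overcommit absorbs any shortfall.
echo "overcloud guests need ${total} MB"
```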
<pre> overcloud_nodes:
- name: control_0
flavor: control
virtualbmc_port: 6230
- name: control_1
flavor: control
virtualbmc_port: 6231
- name: control_2
flavor: control
virtualbmc_port: 6232
- name: compute_0
flavor: compute
virtualbmc_port: 6233
node_count: 4
containerized_overcloud: true
delete_docker_cache: true
<b>enable_pacemaker: true</b>
run_tempest: false
extra_args: >-
--libvirt-type qemu
--ntp-server pool.ntp.org
<b> --control-scale 3
--compute-scale 1</b>
-e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml
[boris@fedora27workstation ~]$ export CONFIG=~boris/deploy-configHA.yaml
[boris@fedora27workstation ~]$<b> bash ./tripleo-quickstart/quickstart.sh \
--clean \
--release master \
--teardown all \
--tags all \
-e @$CONFIG \
$VIRTHOST</b> </pre>
<pre> . . . . . . . </pre>
<pre>Friday 09 February 2018 12:30:38 +0300 (0:00:01.772) 3:23:18.694 *******
===============================================================================
overcloud-deploy : Deploy the overcloud ----------------------------------------------- 4339.72s
/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-deploy/tasks/deploy-overcloud.yml:1
overcloud-prep-containers : Prepare for the containerized deployment ------------------ 3348.90s
/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-prep-containers/tasks/overcloud-prep-containers.yml:28
undercloud-deploy : Install the undercloud -------------------------------------------- 1565.50s
/home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/install-undercloud.yml:20
modify-image : Run virt-customize on the provided image -------------------------------- 490.30s
/home/boris/.quickstart/usr/local/share/ansible/roles/modify-image/tasks/libguestfs.yml:46 -----
tripleo-validations : Run validations tests through Mistral ---------------------------- 396.76s
/home/boris/.quickstart/usr/local/share/ansible/roles/tripleo-validations/tasks/main.yml:21 ----
overcloud-prep-images : Prepare the overcloud images for deploy ------------------------ 288.70s
/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-prep-images/tasks/overcloud-prep-images.yml:1
tripleo-validations : Run validations tests through Mistral ---------------------------- 211.02s
/home/boris/.quickstart/usr/local/share/ansible/roles/tripleo-validations/tasks/main.yml:21 ----
tripleo-validations : Run validations tests through Mistral ---------------------------- 150.33s
/home/boris/.quickstart/usr/local/share/ansible/roles/tripleo-validations/tasks/main.yml:21 ----
convert-image : convert image ---------------------------------------------------------- 141.78s
/home/boris/.quickstart/tripleo-quickstart/roles/convert-image/tasks/main.yml:25 ---------------
setup/undercloud : Upload undercloud volume to storage pool ----------------------------- 98.55s
/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:280 ---
setup/undercloud : Perform selinux relabel on undercloud image -------------------------- 92.20s
/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:162 ---
setup/undercloud : Inject additional images --------------------------------------------- 49.99s
/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:67 ----
overcloud-prep-flavors : Prepare the scripts for overcloud flavors ---------------------- 44.48s
/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-prep-flavors/tasks/overcloud-prep-flavors.yml:1
tripleo-validations : Run negative tests for pre-introspection group -------------------- 36.70s
/home/boris/.quickstart/usr/local/share/ansible/roles/tripleo-validations/tasks/main.yml:55 ----
validate-tempest : Install openstack services tempest plugins --------------------------- 31.58s
/home/boris/.quickstart/usr/local/share/ansible/roles/validate-tempest/tasks/pre-tempest.yml:32
undercloud-deploy : Start the Virtual BMCs ---------------------------------------------- 27.20s
/home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/configure-vbmc.yml:107
setup/undercloud : iptables ------------------------------------------------------------- 20.20s
/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:387 ---
convert-image : Resize the undercloud image using qemu-image resize --------------------- 19.27s
/home/boris/.quickstart/tripleo-quickstart/roles/convert-image/tasks/main.yml:19 ---------------
fetch-images : Get tar images from cache ------------------------------------------------ 19.22s
/home/boris/.quickstart/tripleo-quickstart/roles/fetch-images/tasks/fetch.yml:186 --------------
undercloud-deploy : Create the Virtual BMCs --------------------------------------------- 14.47s
/home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/configure-vbmc.yml:75
+ set +x
##################################
Virtual Environment Setup Complete
##################################
Access the undercloud by:
ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Follow the documentation in the link below to complete your deployment.
http://ow.ly/c44w304begR
##################################
Virtual Environment Setup Complete
##################################
[boris@fedora27workstation ~]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.0.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Fri Feb 9 09:29:56 2018 from gateway
[stack@undercloud ~]$ . stackrc
(undercloud) [stack@undercloud ~]$ cat overcloudrc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.0.0.5,192.168.24.6
export OS_USER_DOMAIN_NAME=Default
export OS_VOLUME_API_VERSION=3
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=https://10.0.0.5:13000//v3
export NOVA_VERSION=1.1
export OS_IMAGE_API_VERSION=2
export OS_PASSWORD=bM6Yy84qEAdHjXvWu7fcXHnMU
export OS_PROJECT_DOMAIN_NAME=Default
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME=admin
export OS_AUTH_TYPE=password
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
# Add OS_CLOUDNAME to PS1
if [ -z "${CLOUDPROMPT_ENABLED:-}" ]; then
export PS1=${PS1:-""}
export PS1=\${OS_CLOUDNAME:+"(\$OS_CLOUDNAME)"}\ $PS1
export CLOUDPROMPT_ENABLED=1
fi
(undercloud) [stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| 6bd06df3-c2e9-46b7-b212-2d7472a3db10 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.24.16 |
| 581cb5be-7ee0-4f8c-be59-603f65fa60bf | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.14 |
| 73151c8f-6678-48bc-8d04-e1ed97007411 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.13 |
| 7eb8f0e5-a743-42a5-83be-83615d5d3f03 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.9 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.24.16
The authenticity of host '192.168.24.16 (192.168.24.16)' can't be established.
ECDSA key fingerprint is SHA256:w3fm1lLcwNKHDAo2/tad+mliZe8aJQPUZT5734JOhCo.
ECDSA key fingerprint is MD5:71:3e:23:32:d6:80:36:02:4a:81:ea:05:09:22:50:87.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.24.16' (ECDSA) to the list of known hosts.
Last login: Fri Feb 9 09:29:14 2018 from 192.168.24.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-1 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum
Last updated: Fri Feb 9 09:33:12 2018
Last change: Fri Feb 9 09:20:26 2018 by root via cibadmin on overcloud-controller-0
12 nodes configured
37 resources configured
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
GuestOnline: [ galera-bundle-0@overcloud-controller-0 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-2 rabbitmq-bundle-0@overcloud-controller-0 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-2 redis-bundle-0@overcloud-controller-0 redis-bundle-1@overcloud-controller-1 redis-bundle-2@overcloud-controller-2 ]
Full list of resources:
Docker container set: rabbitmq-bundle [192.168.24.1:8787/master/centos-binary-rabbitmq:pcmklatest]
rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-0
rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-1
rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-2
Docker container set: galera-bundle [192.168.24.1:8787/master/centos-binary-mariadb:pcmklatest]
galera-bundle-0 (ocf::heartbeat:galera): Master overcloud-controller-0
galera-bundle-1 (ocf::heartbeat:galera): Master overcloud-controller-1
galera-bundle-2 (ocf::heartbeat:galera): Master overcloud-controller-2
Docker container set: redis-bundle [192.168.24.1:8787/master/centos-binary-redis:pcmklatest]
redis-bundle-0 (ocf::heartbeat:redis): Master overcloud-controller-0
redis-bundle-1 (ocf::heartbeat:redis): Slave overcloud-controller-1
redis-bundle-2 (ocf::heartbeat:redis): Slave overcloud-controller-2
ip-192.168.24.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-10.0.0.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.2.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
ip-172.16.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.1.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.3.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Docker container set: haproxy-bundle [192.168.24.1:8787/master/centos-binary-haproxy:pcmklatest]
haproxy-bundle-docker-0 (ocf::heartbeat:docker): Started overcloud-controller-0
haproxy-bundle-docker-1 (ocf::heartbeat:docker): Started overcloud-controller-1
haproxy-bundle-docker-2 (ocf::heartbeat:docker): Started overcloud-controller-2
Docker container: openstack-cinder-volume [192.168.24.1:8787/master/centos-binary-cinder-volume:pcmklatest]
openstack-cinder-volume-docker-0 (ocf::heartbeat:docker): Started overcloud-controller-0
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@overcloud-controller-0 ~]# vi overcloudrc
[root@overcloud-controller-0 ~]# . overcloudrc
(overcloud) [root@overcloud-controller-0 ~]# nova service-list
+--------------------------------------+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | Forced down |
+--------------------------------------+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
| 971cc1ce-1d70-4950-8f53-81d2c5c06138 | nova-scheduler | overcloud-controller-1.localdomain | internal | enabled | up | 2018-02-09T09:34:41.000000 | - | False |
| c22939e2-caca-48d2-bdbf-b3dfb761a207 | nova-scheduler | overcloud-controller-0.localdomain | internal | enabled | up | 2018-02-09T09:34:42.000000 | - | False |
| 0cbc89d4-8ae6-47bb-90bf-ccbf40008a9a | nova-scheduler | overcloud-controller-2.localdomain | internal | enabled | up | 2018-02-09T09:34:38.000000 | - | False |
| 0439fbf2-dbbd-45f7-8aa3-3fe4a6bf3549 | nova-consoleauth | overcloud-controller-1.localdomain | internal | enabled | up | 2018-02-09T09:34:36.000000 | - | False |
| 3c6af83d-999d-47bb-8c75-d40c6206c7df | nova-consoleauth | overcloud-controller-0.localdomain | internal | enabled | up | 2018-02-09T09:34:35.000000 | - | False |
| b60e0945-9daa-4476-8c92-82e385f1376e | nova-consoleauth | overcloud-controller-2.localdomain | internal | enabled | up | 2018-02-09T09:34:41.000000 | - | False |
| 0d17978f-70f9-4180-ad75-e4b7fa3b46e9 | nova-conductor | overcloud-controller-1.localdomain | internal | enabled | up | 2018-02-09T09:34:35.000000 | - | False |
| 76a526cd-d127-426a-958f-bc6e6bd85553 | nova-compute | overcloud-novacompute-0.localdomain | nova | enabled | up | 2018-02-09T09:34:39.000000 | - | False |
| 4ef0ead0-77c8-4a3e-837b-ad136cbc2bb9 | nova-conductor | overcloud-controller-0.localdomain | internal | enabled | up | 2018-02-09T09:34:35.000000 | - | False |
| aa8b0547-dccb-4460-a5da-a101c0d58ebb | nova-conductor | overcloud-controller-2.localdomain | internal | enabled | up | 2018-02-09T09:34:40.000000 | - | False |
+--------------------------------------+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+-------------+
(overcloud) [root@overcloud-controller-0 ~]# neutron agent-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+----------------+---------------------------+
| id | agent_type | host | availability_zone | alive | admin_state_up | binary |
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+----------------+---------------------------+
| 143ea92b-3e32-436b-9d59-87934bbac856 | DHCP agent | overcloud-controller-1.localdomain | nova | :-) | True | neutron-dhcp-agent |
| 19b15be3-26bf-44fd-b3f7-fa1bc1ea81ff | Open vSwitch agent | overcloud-controller-0.localdomain | | :-) | True | neutron-openvswitch-agent |
| 39a919d1-2906-4681-ad82-786a7dde6be0 | Metadata agent | overcloud-controller-1.localdomain | | :-) | True | neutron-metadata-agent |
| 3ae45bcf-ba22-4a3f-a72c-e93b03130084 | Metadata agent | overcloud-controller-0.localdomain | | :-) | True | neutron-metadata-agent |
| 46ad2f5f-d557-4f2a-bbdd-e2326b267371 | L3 agent | overcloud-controller-0.localdomain | nova | :-) | True | neutron-l3-agent |
| 5fd9b8ab-a015-4516-ad13-626765c0ea19 | Open vSwitch agent | overcloud-novacompute-0.localdomain | | :-) | True | neutron-openvswitch-agent |
| 64ee300e-af31-4c9a-ae94-a354be509145 | Metadata agent | overcloud-controller-2.localdomain | | :-) | True | neutron-metadata-agent |
| 707a31b0-65f2-4a05-86c8-640bb5925329 | DHCP agent | overcloud-controller-0.localdomain | nova | :-) | True | neutron-dhcp-agent |
| 7d979f97-3419-4617-96a6-3713bbc00557 | Open vSwitch agent | overcloud-controller-2.localdomain | | :-) | True | neutron-openvswitch-agent |
| 8493c8e2-ddc9-49e3-b4bd-c34e4c153e03 | Open vSwitch agent | overcloud-controller-1.localdomain | | :-) | True | neutron-openvswitch-agent |
| aa48c36e-d6a2-4980-93a8-5a8e6229392b | L3 agent | overcloud-controller-2.localdomain | nova | :-) | True | neutron-l3-agent |
| b5ec07fc-b8b9-4618-bcb9-7cb829c34606 | L3 agent | overcloud-controller-1.localdomain | nova | :-) | True | neutron-l3-agent |
| de863a15-9f8c-4bee-97cf-69e454646066 | DHCP agent | overcloud-controller-2.localdomain | nova | :-) | True | neutron-dhcp-agent |
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+----------------+---------------------------+
(overcloud) [root@overcloud-controller-0 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5a1c45f0304c 192.168.24.1:8787/master/centos-binary-gnocchi-statsd:current-tripleo-rdo "kolla_start" 14 minutes ago Up 14 minutes gnocchi_statsd
360975b254a5 192.168.24.1:8787/master/centos-binary-gnocchi-api:current-tripleo-rdo "kolla_start" 14 minutes ago Up 14 minutes gnocchi_api
50caf9b1c5a1 192.168.24.1:8787/master/centos-binary-cinder-volume:pcmklatest "/bin/bash /usr/local" 14 minutes ago Up 14 minutes openstack-cinder-volume-docker-0
efaeadc581d8 192.168.24.1:8787/master/centos-binary-gnocchi-metricd:current-tripleo-rdo "kolla_start" 16 minutes ago Up 16 minutes gnocchi_metricd
f21e683f899d 192.168.24.1:8787/master/centos-binary-neutron-openvswitch-agent:current-tripleo-rdo "kolla_start" 19 minutes ago Up 19 minutes (healthy) neutron_ovs_agent
7f028960e827 192.168.24.1:8787/master/centos-binary-neutron-l3-agent:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (healthy) neutron_l3_agent
14465000cebe 192.168.24.1:8787/master/centos-binary-neutron-metadata-agent:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (healthy) neutron_metadata_agent
aa1e3dde0344 192.168.24.1:8787/master/centos-binary-neutron-dhcp-agent:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (healthy) neutron_dhcp
1560cc6facbf 192.168.24.1:8787/master/centos-binary-glance-api:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (healthy) glance_api
1e41468ae39d 192.168.24.1:8787/master/centos-binary-panko-api:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes panko_api
33004ee80524 192.168.24.1:8787/master/centos-binary-nova-api:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes nova_metadata
2916e53225cf 192.168.24.1:8787/master/centos-binary-nova-api:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (unhealthy) nova_api
d01c3adcbcc0 192.168.24.1:8787/master/centos-binary-cron:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes logrotate_crond
12500a9a6310 192.168.24.1:8787/master/centos-binary-heat-api-cfn:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (healthy) heat_api_cfn
d12983cbb630 192.168.24.1:8787/master/centos-binary-neutron-server:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes neutron_api
29a6eacae287 192.168.24.1:8787/master/centos-binary-aodh-listener:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes (healthy) aodh_listener
6dd790d2baa1 192.168.24.1:8787/master/centos-binary-swift-container:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes swift_container_auditor
349ebafec130 192.168.24.1:8787/master/centos-binary-heat-api:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes heat_api_cron
70135a559934 192.168.24.1:8787/master/centos-binary-swift-proxy-server:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes swift_object_expirer
23f575566615 192.168.24.1:8787/master/centos-binary-swift-object:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes swift_object_updater
3b464c18f79d 192.168.24.1:8787/master/centos-binary-swift-container:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes swift_container_replicator
e7fbf14023bf 192.168.24.1:8787/master/centos-binary-swift-account:current-tripleo-rdo "kolla_start" 20 minutes ago Up 20 minutes swift_account_auditor
73e12b0ab2b8 192.168.24.1:8787/master/centos-binary-cinder-api:current-tripleo-rdo "kolla_start" 21 minutes ago Up 20 minutes cinder_api_cron
9e7c13b4fb35 192.168.24.1:8787/master/centos-binary-swift-account:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) swift_account_server
d79177eb61db 192.168.24.1:8787/master/centos-binary-nova-conductor:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) nova_conductor
1b4eb184e953 192.168.24.1:8787/master/centos-binary-cinder-scheduler:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) cinder_scheduler
52f38b6914e4 192.168.24.1:8787/master/centos-binary-swift-object:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes swift_object_replicator
0d6e29855cdf 192.168.24.1:8787/master/centos-binary-swift-container:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) swift_container_server
b8bfbc8b8fce 192.168.24.1:8787/master/centos-binary-heat-engine:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) heat_engine
2846456f2e24 192.168.24.1:8787/master/centos-binary-aodh-api:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes aodh_api
4900e4d63dfd 192.168.24.1:8787/master/centos-binary-swift-object:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes swift_rsync
79326d6c58e8 192.168.24.1:8787/master/centos-binary-nova-novncproxy:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) nova_vnc_proxy
e5fd37a26f98 192.168.24.1:8787/master/centos-binary-ceilometer-notification:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes (healthy) ceilometer_agent_notification
81547ad2b6cd 192.168.24.1:8787/master/centos-binary-swift-account:current-tripleo-rdo "kolla_start" 21 minutes ago Up 21 minutes swift_account_reaper
77f1f1cbfe85 192.168.24.1:8787/master/centos-binary-nova-consoleauth:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) nova_consoleauth
d8a62ab1d810 192.168.24.1:8787/master/centos-binary-nova-api:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes nova_api_cron
8c3b3a3f6897 192.168.24.1:8787/master/centos-binary-aodh-notifier:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) aodh_notifier
eeff860cc81f 192.168.24.1:8787/master/centos-binary-ceilometer-central:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) ceilometer_agent_central
5a4ba18c6b1a 192.168.24.1:8787/master/centos-binary-swift-account:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes swift_account_replicator
db51288026b9 192.168.24.1:8787/master/centos-binary-swift-object:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes swift_object_auditor
c8a587bab30a 192.168.24.1:8787/master/centos-binary-heat-api:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) heat_api
91fa1ae1977c 192.168.24.1:8787/master/centos-binary-swift-proxy-server:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) swift_proxy
d40422d89e16 192.168.24.1:8787/master/centos-binary-cinder-api:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes cinder_api
fdf830388534 192.168.24.1:8787/master/centos-binary-swift-object:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) swift_object_server
d3eaf313b9c7 192.168.24.1:8787/master/centos-binary-nova-scheduler:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) nova_scheduler
0e60007aa22c 192.168.24.1:8787/master/centos-binary-aodh-evaluator:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes (healthy) aodh_evaluator
774c17fe3d5d 192.168.24.1:8787/master/centos-binary-swift-container:current-tripleo-rdo "kolla_start" 22 minutes ago Up 22 minutes swift_container_updater
21ac20c76fce 192.168.24.1:8787/master/centos-binary-keystone:current-tripleo-rdo "/bin/bash -c '/usr/l" 27 minutes ago Up 27 minutes keystone_cron
a0c4b0ca68d3 192.168.24.1:8787/master/centos-binary-keystone:current-tripleo-rdo "kolla_start" 28 minutes ago Up 28 minutes (healthy) keystone
dec5a64ac7cd 192.168.24.1:8787/master/centos-binary-iscsid:current-tripleo-rdo "kolla_start" 28 minutes ago Up 28 minutes iscsid
6398cb621da9 192.168.24.1:8787/master/centos-binary-nova-placement-api:current-tripleo-rdo "kolla_start" 28 minutes ago Up 28 minutes nova_placement
03da3bfdfa10 192.168.24.1:8787/master/centos-binary-horizon:current-tripleo-rdo "kolla_start" 28 minutes ago Up 28 minutes horizon
9fdf6b1c9515 192.168.24.1:8787/master/centos-binary-haproxy:pcmklatest "/bin/bash /usr/local" 31 minutes ago Up 31 minutes haproxy-bundle-docker-0
d7cf18e15f66 192.168.24.1:8787/master/centos-binary-redis:pcmklatest "/bin/bash /usr/local" 33 minutes ago Up 33 minutes redis-bundle-docker-0
e39d1e54368e 192.168.24.1:8787/master/centos-binary-mariadb:current-tripleo-rdo "kolla_start" 34 minutes ago Up 34 minutes clustercheck
313125f4753a 192.168.24.1:8787/master/centos-binary-mariadb:pcmklatest "/bin/bash /usr/local" 35 minutes ago Up 35 minutes galera-bundle-docker-0
2af3ccc8e168 192.168.24.1:8787/master/centos-binary-rabbitmq:pcmklatest "/bin/bash /usr/local" 37 minutes ago Up 37 minutes rabbitmq-bundle-docker-0
94dd3d7fe881 192.168.24.1:8787/master/centos-binary-memcached:current-tripleo-rdo "/bin/bash -c 'source" 40 minutes ago Up 40 minutes memcached
(overcloud) [root@overcloud-controller-0 ~]# ovs-vsctl show
779bc1af-d1d6-4a74-91de-7be0503d744c
Manager "ptcp:6640:127.0.0.1"
is_connected: true
Bridge br-int
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-int
Interface br-int
type: internal
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Bridge br-ex
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-ex
Interface br-ex
type: internal
Port "vlan40"
tag: 40
Interface "vlan40"
type: internal
Port "vlan20"
tag: 20
Interface "vlan20"
type: internal
Port "vlan30"
tag: 30
Interface "vlan30"
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth0"
Interface "eth0"
Port "vlan10"
tag: 10
Interface "vlan10"
type: internal
Port "vlan50"
tag: 50
Interface "vlan50"
type: internal
Bridge br-tun
Controller "tcp:127.0.0.1:6633"
is_connected: true
fail_mode: secure
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port "vxlan-ac100005"
Interface "vxlan-ac100005"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.5"}
Port "vxlan-ac10000c"
Interface "vxlan-ac10000c"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.12"}
Port "vxlan-ac100008"
Interface "vxlan-ac100008"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="172.16.0.10", out_key=flow, remote_ip="172.16.0.8"}
ovs_version: "2.8.1"</pre>
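Note that the VXLAN tunnel port names in the `ovs-vsctl show` output above encode the remote endpoint's IPv4 address as eight hex digits: "vxlan-ac100005" corresponds to remote_ip 172.16.0.5. A minimal Python sketch of the decoding (the helper name is mine, not part of OVS or Neutron):

```python
# Decode the hex suffix of a Neutron OVS agent VXLAN port name
# (e.g. "vxlan-ac100005") back into a dotted-quad remote IP.
def vxlan_port_to_ip(port_name):
    hex_part = port_name.split("-", 1)[1]                      # "ac100005"
    octets = [int(hex_part[i:i + 2], 16) for i in range(0, 8, 2)]
    return ".".join(str(o) for o in octets)

for name in ("vxlan-ac100005", "vxlan-ac10000c", "vxlan-ac100008"):
    print(name, "->", vxlan_port_to_ip(name))
```

This matches the remote_ip values shown in the port options above (172.16.0.5, 172.16.0.12, 172.16.0.8).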
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC9FM3Ua7AD-DRO7-MxAeSmcdibn9h1H6OMDI7FpgzwKFnvsRXS_BH2YsHoeEyXtGY6tGWrgoVGAEa6TWIuvCMukeknKoFYQik8te16SNy6CR2RJm9w_WWx8kfCueeOaGuOKwdUA/s1600/Screenshot+from+2018-02-09+13-04-25.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjC9FM3Ua7AD-DRO7-MxAeSmcdibn9h1H6OMDI7FpgzwKFnvsRXS_BH2YsHoeEyXtGY6tGWrgoVGAEa6TWIuvCMukeknKoFYQik8te16SNy6CR2RJm9w_WWx8kfCueeOaGuOKwdUA/s640/Screenshot+from+2018-02-09+13-04-25.png" width="640" /></a></div>
<pre> <div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIfcGdn_lY6FYc2u611OcXlHHfFHpR-83PUIp-iVyDkLxptbDpGb5LkP-TvHC5y2E4p5-xrc958z-TSzEIrZRfzp-p4fccB9deQDvqL4Qg_OvD6g1y9JjaKCSIc8rs11EUBPX7Lg/s1600/Screenshot+from+2018-02-09+13-03-49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhIfcGdn_lY6FYc2u611OcXlHHfFHpR-83PUIp-iVyDkLxptbDpGb5LkP-TvHC5y2E4p5-xrc958z-TSzEIrZRfzp-p4fccB9deQDvqL4Qg_OvD6g1y9JjaKCSIc8rs11EUBPX7Lg/s640/Screenshot+from+2018-02-09+13-03-49.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcT7ht7pSaoJnOzuNDXLdZkGLh_5eDpGBd7Zek8ffkP6BZKsh8GzQlJu6vyvRi6pB_db3IOK2nxf8bXeJmFoRDXNYqdvpqDe7CGI8HZvlHWO50IxJtc97bBdZB5r7N2Ay9YWc86w/s1600/Screenshot+from+2018-02-09+13-23-51.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjcT7ht7pSaoJnOzuNDXLdZkGLh_5eDpGBd7Zek8ffkP6BZKsh8GzQlJu6vyvRi6pB_db3IOK2nxf8bXeJmFoRDXNYqdvpqDe7CGI8HZvlHWO50IxJtc97bBdZB5r7N2Ay9YWc86w/s640/Screenshot+from+2018-02-09+13-23-51.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBC5nhRb9OsNpKxifqCrwiz3reGjTGE-YGDklO_SRWtHO42ytDx-WVqXgcXu6HLoCFA5d8olB4webaP5mvSdI3VQAlT_-oKggiqXfQzBHE5LU5vZedHK7BXflamHSsVzeO4678jA/s1600/Screenshot+from+2018-02-09+13-25-52.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBC5nhRb9OsNpKxifqCrwiz3reGjTGE-YGDklO_SRWtHO42ytDx-WVqXgcXu6HLoCFA5d8olB4webaP5mvSdI3VQAlT_-oKggiqXfQzBHE5LU5vZedHK7BXflamHSsVzeO4678jA/s640/Screenshot+from+2018-02-09+13-25-52.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgINI3A9gMgeqPZqLn6pK45yAnmWPGgbNVn3bTdkzY4bxf8Q0EWkuTSCDWouhFB3IoJkJdWNDcy24ksh243n3hFB6OW-IRlhA6AVdfmxPJegg9VoNiHDVICekh283XOimTXQXV7ww/s1600/Screenshot+from+2018-02-09+13-26-52.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgINI3A9gMgeqPZqLn6pK45yAnmWPGgbNVn3bTdkzY4bxf8Q0EWkuTSCDWouhFB3IoJkJdWNDcy24ksh243n3hFB6OW-IRlhA6AVdfmxPJegg9VoNiHDVICekh283XOimTXQXV7ww/s640/Screenshot+from+2018-02-09+13-26-52.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8CyEXDQfQ3yM8lRaxc6yrB4_GUOZxiA0oJeb54twZPOspoxerPDtNOq36MHMS4y7rKu0hUrQpRYDhyphenhyphen64sv9xhRDNTtScSfxcpHybQVYNlI_HU8vIn9Fw7Rh_9Cj6b4OjLAZvSDQ/s1600/Screenshot+from+2018-02-09+13-26-16.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8CyEXDQfQ3yM8lRaxc6yrB4_GUOZxiA0oJeb54twZPOspoxerPDtNOq36MHMS4y7rKu0hUrQpRYDhyphenhyphen64sv9xhRDNTtScSfxcpHybQVYNlI_HU8vIn9Fw7Rh_9Cj6b4OjLAZvSDQ/s640/Screenshot+from+2018-02-09+13-26-16.png" width="640" /></a></div>
VIRTHOST INITIAL STATUS </pre>
<pre><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi12Kbd45AC0V8FVcvp9VJWqlvbC4oQl8DkT1hle3UZFMAur3InBL32cYa06vmMFZ4YampteqvMR-CaYFKGOIqlzv_daydz42PssA3VcGzvI5oeX8dtyeU2Z8l6wC4mWGeTMoD07g/s1600/Screenshot+from+2018-02-09+12-52-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"> <img border="0" data-original-height="1024" data-original-width="1280" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi12Kbd45AC0V8FVcvp9VJWqlvbC4oQl8DkT1hle3UZFMAur3InBL32cYa06vmMFZ4YampteqvMR-CaYFKGOIqlzv_daydz42PssA3VcGzvI5oeX8dtyeU2Z8l6wC4mWGeTMoD07g/s640/Screenshot+from+2018-02-09+12-52-03.png" width="640" /></a>
</pre>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-53575001621255753362017-04-15T03:45:00.000-07:002017-04-16T13:50:17.881-07:00Solution of one system of equations in boolean variables via bitmasks in regards of training for Unified State Examination in Informatics (Russia)<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
In brief, bitmasks serve as a core tool for solving
systems of equations in Boolean variables, as an alternative to the method suggested at<br />
<a href="https://inf-ege.sdamgia.ru/test?theme=264" target="_blank">https://inf-ege.sdamgia.ru/test?theme=264</a><br />
for task 11, which is quite similar to the sample analyzed below<br />
<br />
*************************************<br />
The original task is as follows :-<br />
*************************************<br />
Determine the total number of tuples<br />
{x1,...,x9,y1,...,y9} that satisfy<br />
the system :-<br />
<br />
((x1 ≡ y1) → (x2 ≡ y2)) ∧ (x1 → x2) ∧ (y1 → y2) = 1<br />
((x2 ≡ y2) → (x3 ≡ y3)) ∧ (x2 → x3) ∧ (y2 → y3) = 1<br />
...<br />
((x8 ≡ y8) → (x9 ≡ y9)) ∧ (x8 → x9) ∧ (y8 → y9) = 1<br />
<br />
Consider the truncated system :-<br />
<br />
(x1 → x2) ∧ (y1 → y2) = 1<br />
(x2 → x3) ∧ (y2 → y3) = 1<br />
...<br />
(x8 → x9) ∧ (y8 → y9) = 1<br />
<br />
Now build the well-known bitmask tables for {x} and {y}<br />
<br />
x1 x2 x3 x4 x5 x6 x7 x8 x9<br />
----------------------------------------<br />
1 1 1 1 1 1 1 1 1 <br />
0 1 1 1 1 1 1 1 1 <br />
0 0 1 1 1 1 1 1 1 <br />
0 0 0 1 1 1 1 1 1 <br />
0 0 0 0 1 1 1 1 1 <br />
0 0 0 0 0 1 1 1 1 <br />
0 0 0 0 0 0 1 1 1 <br />
0 0 0 0 0 0 0 1 1 <br />
0 0 0 0 0 0 0 0 1 <br />
0 0 0 0 0 0 0 0 0 <br />
<br />
<br />
<br />
y1 y2 y3 y4 y5 y6 y7 y8 y9<br />
-----------------------------------------<br />
1 1 1 1 1 1 1 1 1 <br />
0 1 1 1 1 1 1 1 1 <br />
0 0 1 1 1 1 1 1 1 <br />
0 0 0 1 1 1 1 1 1 <br />
0 0 0 0 1 1 1 1 1 <br />
0 0 0 0 0 1 1 1 1 <br />
0 0 0 0 0 0 1 1 1 <br />
0 0 0 0 0 0 0 1 1 <br />
0 0 0 0 0 0 0 0 1 <br />
0 0 0 0 0 0 0 0 0 <br />
<br />
<br />
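Each table above lists exactly the 10 monotone 0&rarr;1 rows of length 9 that satisfy the chain of implications x[j] &rarr; x[j+1]: k leading zeros followed by ones, for k = 0..9. A short Python sketch that generates them (variable names are mine):

```python
# All length-9 rows satisfying x[j] -> x[j+1]: k leading zeros
# followed by (9 - k) ones, for k = 0..9 -- 10 rows in total.
rows = [[0] * k + [1] * (9 - k) for k in range(10)]
for r in rows:
    print(*r)
print(len(rows), "rows")
```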
Below we call the first matrix "X" and the second "Y"<br />
<br />
For j=2 to j=9 consider the following two concatenations<br />
"X" -> "Y" and "Y" -> "X" <br />
<br />
<pre>First one :-
X Y
--------------------------- ------------------------------
| | | |
--------------------------- ------------------------------
<span style="color: #b45f06;">j | </span> | |
--------------------------- ------------------------------
. . . . | <span style="color: #b45f06;">j+1 |</span>
--------------------------- ------------------------------
<span style="color: #b45f06;">j+2 |</span>
------------------------------
<span style="color: #b45f06;">| . . . . . |</span>
--------------------------- ------------------------------
10 | <span style="color: #b45f06;">10 | </span>
--------------------------- ------------------------------
Record {j} from X is paired with records {j+1, j+2, ..., 10} from Y
</pre>
<pre>and vice versa second one :-
Y X
--------------------------- ------------------------------
| | | |
--------------------------- ------------------------------
<span style="color: #b45f06;">j |</span> | |
--------------------------- ------------------------------
. . . . | <span style="color: #b45f06;"> j+1 |</span>
--------------------------- ------------------------------
<span style="color: #b45f06;">j+2 |</span>
------------------------------
<span style="color: #b45f06;"> | . . . . . |</span>
--------------------------- ------------------------------
10 | <span style="color: #b45f06;">10 | </span>
--------------------------- ------------------------------ </pre>
<pre> </pre>
Record {j} from Y is paired with records {j+1, j+2, ..., 10} from X<br />
In total we get 2*(10-j) tuples making the Boolean value of the implication<br />
<br />
<span style="color: #b45f06;"> (x[j-1] ≡ y[j-1]) → (x[j] ≡ y[j]) equal to FALSE</span><br />
<br />
************************************** <br />
For instance when j=3 we get<br />
**************************************<br />
<br />
x1 x2 x3 x4 x5 x6 x7 x8 x9<br />
----------------------------------------<br />
1 1 1 1 1 1 1 1 1 <br />
0 1 1 1 1 1 1 1 1 <br />
<span style="color: #b45f06;">0 <u>0 1</u> 1 1 1 1 1 1 =></span> <br />
0 0 0 1 1 1 1 1 1 <br />
0 0 0 0 1 1 1 1 1 <br />
0 0 0 0 0 1 1 1 1 <br />
0 0 0 0 0 0 1 1 1 <br />
0 0 0 0 0 0 0 1 1 <br />
0 0 0 0 0 0 0 0 1 <br />
0 0 0 0 0 0 0 0 0 <br />
<br />
<br />
<br />
y1 y2 y3 y4 y5 y6 y7 y8 y9<br />
-----------------------------------------<br />
1 1 1 1 1 1 1 1 1 <br />
0 1 1 1 1 1 1 1 1 <br />
0 0 1 1 1 1 1 1 1 <br />
<span style="color: #b45f06;">0 <u> 0 0</u> 1 1 1 1 1 1 <= </span><br />
<span style="color: #b45f06;">0 <u>0 0</u><i> </i> 0 1 1 1 1 1 <=</span><br />
<span style="color: #b45f06;">0 <u>0 0</u><i> </i> 0 0 1 1 1 1 <=</span><br />
<span style="color: #b45f06;">0 <u>0 0</u> 0 0 0 1 1 1 <= </span><br />
<span style="color: #b45f06;">0 <u>0 0</u> 0 0 0 0 1 1 <=</span><br />
<span style="color: #b45f06;">0 <u>0 0</u> 0 0 0 0 0 1 <= </span><br />
<span style="color: #b45f06;">0 <u>0 0 </u> 0 0 0 0 0 0 <=</span><br />
<br />
<br />
Vice Versa Set :-<br />
<br />
<br />
y1 y2 y3 y4 y5 y6 y7 y8 y9<br />
-----------------------------------------<br />
1 1 1 1 1 1 1 1 1 <br />
0 1 1 1 1 1 1 1 1 <br />
<span style="color: #b45f06;">0 <u>0 1</u> 1 1 1 1 1 1 =></span><br />
0 0 0 1 1 1 1 1 1 <br />
0 0 0 0 1 1 1 1 1 <br />
0 0 0 0 0 1 1 1 1 <br />
0 0 0 0 0 0 1 1 1 <br />
0 0 0 0 0 0 0 1 1 <br />
0 0 0 0 0 0 0 0 1 <br />
0 0 0 0 0 0 0 0 0<br />
<br />
x1 x2 x3 x4 x5 x6 x7 x8 x9<br />
----------------------------------------<br />
1 1 1 1 1 1 1 1 1 <br />
0 1 1 1 1 1 1 1 1 <br />
0 0 1 1 1 1 1 1 1 <br />
<span style="color: #b45f06;">0 <u>0 0</u> 1 1 1 1 1 1 <=</span><br />
<span style="color: #b45f06;">0 <u>0 0</u> 0 1 1 1 1 1 <= </span><br />
<span style="color: #b45f06;">0 <u>0 0</u> 0 0 1 1 1 1 <= </span><br />
<span style="color: #b45f06;">0 <u>0 0</u> 0 0 0 1 1 1 <= </span><br />
<span style="color: #b45f06;">0 <u>0 0 </u> 0 0 0 0 1 1 <= </span><br />
<span style="color: #b45f06;">0 <u> 0 0</u> 0 0 0 0 0 1 <= </span><br />
<span style="color: #b45f06;">0 <u> 0 0 </u> 0 0 0 0 0 0 <=</span> <br />
<br />
****************************************************************************<br />
So when j=3 we have 2*7 = 14 tuples where x2≡y2 is True<br />
and x3≡y3 is False. Hence (x2 ≡ y2) → (x3 ≡ y3) is actually 1 -> 0,<br />
which is False by definition. <br />
****************************************************************************<br />
<br />
This means the set of tuples generated for each j from {2,3,4,...,9}<br />
must be removed from the 100 total solutions of the truncated system of Boolean<br />
equations. <br />
<br />
Now calculate :-<br />
<br />
s := 0 ;<br />
for j := 2 to 9 do<br />
begin<br />
s := s + (10 - j) ;<br />
end ;<br />
s := 2 * s ;<br />
writeln (s) ;<br />
<br />
Finally we get s=72<br />
<br />
The total
number of tuples obtained via the Cartesian product of X and Y is
equal to 10*10 = 100. So the number of solutions of the original system is 100-72=28<br />
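The count can be cross-checked by brute force over all 2^18 assignments of the 18 Boolean variables; a minimal Python sketch:

```python
from itertools import product

def implies(a, b):
    # Boolean implication a -> b
    return (not a) or b

count = 0
for bits in product((0, 1), repeat=18):
    x, y = bits[:9], bits[9:]
    # all 8 equations of the original system must hold
    ok = all(
        implies(x[j] == y[j], x[j + 1] == y[j + 1])
        and implies(x[j], x[j + 1])
        and implies(y[j], y[j + 1])
        for j in range(8)
    )
    count += ok
print(count)  # 28
```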
<br />
I appreciate the courtesy provided by informatik "BU"<br />
<a href="https://www.youtube.com/watch?v=MDL5Mym5Aac" target="_blank">https://www.youtube.com/watch?v=MDL5Mym5Aac</a><br />
<br />
However, not everyone behaves as nicely as "BU" always does.<br />
<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhASCL-guI2XcFBzdy0I6godMj5nUpq0M1r70HHc1QtNnNA8BcRbadrH-sNLt5EHkiZyb3UAszcH_HzxpRq22LK665rxxSjyxZ6GPktyj8b0jYNcZ0fWuTKr7pmCLuspfxYXL_B9g/s1600/Screenshot+from+2017-04-15+14-17-52.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhASCL-guI2XcFBzdy0I6godMj5nUpq0M1r70HHc1QtNnNA8BcRbadrH-sNLt5EHkiZyb3UAszcH_HzxpRq22LK665rxxSjyxZ6GPktyj8b0jYNcZ0fWuTKr7pmCLuspfxYXL_B9g/s640/Screenshot+from+2017-04-15+14-17-52.png" width="640" /></a></div>
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-59043809994053502812017-01-27T12:45:00.001-08:002017-01-27T14:21:11.128-08:00RDO Ocata M3 TripleO QuickStart HA Deployment<div dir="ltr" style="text-align: left;" trbidi="on">
[alan@fedora24wks general_config]$ cat ha.yml<br />
# Deploy an HA openstack environment.<br />
<span style="color: #b45f06;">control_memory: 7000</span><br />
<span style="color: #b45f06;">compute_memory: 6500</span><br />
<br />
undercloud_memory: 8192<br />
<br />
# Giving the undercloud additional CPUs can greatly improve heat's<br />
# performance (and result in a shorter deploy time).<br />
undercloud_vcpu: 4<br />
<br />
# Since HA has more machines, we set the vcpu count for controllers and<br />
# compute nodes to 2<br />
default_vcpu: 2<br />
<br />
# This enables TLS for the undercloud which will also make haproxy bind to the<br />
# configured public-vip and admin-vip.<br />
undercloud_generate_service_certificate: True<br />
<br />
# Create three controller nodes and one compute node.<br />
overcloud_nodes:<br />
- name: control_0<br />
flavor: control<br />
virtualbmc_port: 6230<br />
<br />
- name: control_1<br />
flavor: control<br />
virtualbmc_port: 6231<br />
<br />
- name: control_2<br />
flavor: control<br />
virtualbmc_port: 6232<br />
<br />
- name: compute_0<br />
flavor: compute<br />
virtualbmc_port: 6233<br />
<br />
# We don't need introspection in a virtual environment (because we are<br />
# creating all the "hardware" ourselves, we already know the necessary<br />
# information).<br />
step_introspect: false<br />
<br />
# Tell tripleo about our environment.<br />
network_isolation: true<br />
extra_args: >-<br />
--control-scale 3<br />
--ntp-server pool.ntp.org<br />
test_ping: true<br />
enable_pacemaker: true<br />
<br />
run_tempest: false<br />
<br />
# options below direct automatic doc generation by tripleo-collect-logs<br />
artcl_gen_docs: true<br />
artcl_create_docs_payload:<br />
included_deployment_scripts:<br />
- undercloud-install<br />
- overcloud-custom-tht-script<br />
- overcloud-prep-flavors<br />
- overcloud-prep-images<br />
- overcloud-prep-network<br />
- overcloud-deploy<br />
- overcloud-deploy-post<br />
- overcloud-validate<br />
included_static_docs:<br />
- env-setup-virt<br />
table_of_contents:<br />
- env-setup-virt<br />
- undercloud-install<br />
- overcloud-custom-tht-script<br />
- overcloud-prep-flavors<br />
- overcloud-prep-images<br />
- overcloud-prep-network<br />
- overcloud-deploy<br />
- overcloud-deploy-post<br />
- overcloud-validate<br />
<br />
<br />
[alan@fedora24wks tripleo-quickstart]$ <span style="color: #b45f06;">bash quickstart.sh -R master --config config/general_config/ha.yml $VIRTHOST</span><br />
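Before launching quickstart.sh, it helps to tally the guest RAM this profile will try to allocate on $VIRTHOST; a quick sketch using the values copied from ha.yml above:

```python
# RAM (in MB) the ha.yml profile allocates: 3 controllers, 1 compute,
# plus the undercloud VM (values taken from the config above).
control_memory, compute_memory, undercloud_memory = 7000, 6500, 8192
total_mb = 3 * control_memory + 1 * compute_memory + undercloud_memory
print(total_mb, "MB ~=", round(total_mb / 1024, 1), "GiB")
```

So the virthost needs roughly 35 GiB free for the guests alone, before host overhead.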
<br />
<br />
[alan@fedora24wks tripleo-quickstart]$ ssh -F $HOME/.quickstart/ssh.config.ansible undercloud<br />
Warning: Permanently added '192.168.0.74' (ECDSA) to the list of known hosts.<br />
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.<br />
Last login: Fri Jan 27 19:38:30 2017 from gateway<br />
<br />
[stack@undercloud ~]$ . stackrc<br />
<span style="color: #b45f06;">[stack@undercloud ~]$ ./overcloud-deploy.sh</span><br />
<br />
. . . . . <br />
<br />
2017-01-27 19:17:10Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeConfig]: CREATE_COMPLETE state changed<br />
2017-01-27 19:17:10Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeDeployment]: CREATE_IN_PROGRESS state changed<br />
2017-01-27 19:17:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed<br />
2017-01-27 19:17:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS state changed<br />
2017-01-27 19:18:03Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE state changed<br />
2017-01-27 19:18:03Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2017-01-27 19:18:03Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE state changed<br />
2017-01-27 19:18:03Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2017-01-27 19:18:04Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed<br />
2017-01-27 19:18:04Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully<br />
<br />
Stack overcloud CREATE_COMPLETE <br />
<br />
Overcloud Endpoint: http://10.0.0.12:5000/v2.0<br />
Overcloud Deployed<br />
+ status_code=0<br />
+ heat stack-list<br />
+ grep -q CREATE_FAILED<br />
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead<br />
+ exit 0<br />
[stack@undercloud ~]$ nova-manage --version<br />
15.0.0<br />
<pre>[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| c0895716-9ad2-4fae-b571-c347896c813c | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.24.10 |
| 33cb2c05-9292-4478-9d1b-12ffd8c12537 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.13 |
| 5f4ab42c-bd77-4b9c-80a7-8dd96daaec26 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.15 |
| 675b566d-1a05-4ab5-b98e-4ba1ff5ac3db | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.12 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
[stack@undercloud ~]$ cat overcloudrc]
cat: overcloudrc]: No such file or directory
[stack@undercloud ~]$ cat overcloudrc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="} /^OS_/ {print $1}' ); do unset $key ; done
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.12:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export OS_PASSWORD=HH2xYgYWGMd4BBREr339AVbf3
export no_proxy=,10.0.0.12,192.168.24.8
export OS_PROJECT_NAME=admin
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
[stack@undercloud ~]$ ssh heat-admin@192.168.24.10
The authenticity of host '192.168.24.10 (192.168.24.10)' can't be established.
ECDSA key fingerprint is 9b:6c:9c:5f:02:13:15:9c:c4:65:7e:78:3d:df:40:b2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.24.10' (ECDSA) to the list of known hosts.
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# vi overcloudrc
[root@overcloud-controller-0 ~]# . overcloudrc
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-0 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Fri Jan 27 19:22:30 2017 Last change: Fri Jan 27 19:17:21 2017 by root via cibadmin on overcloud-controller-0
3 nodes and 19 resources configured
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-1 ]
Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
ip-192.168.24.8 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-10.0.0.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.2.9 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.1.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.3.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@overcloud-controller-0 ~]# wget https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
--2017-01-27 19:24:27-- https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
Resolving download.fedoraproject.org (download.fedoraproject.org)... 140.211.169.206, 8.43.85.67, 209.132.181.15, ...
Connecting to download.fedoraproject.org (download.fedoraproject.org)|140.211.169.206|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://ftp.neva.ru/Linux-Distrib/Fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2 [following]
--2017-01-27 19:24:29-- http://ftp.neva.ru/Linux-Distrib/Fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
Resolving ftp.neva.ru (ftp.neva.ru)... 195.208.113.245, 2001:b08:2:100::245
Connecting to ftp.neva.ru (ftp.neva.ru)|195.208.113.245|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 204590080 (195M) [application/octet-stream]
Saving to: ‘Fedora-Cloud-Base-24-1.2.x86_64.qcow2’
100%[=======================================================>] 204,590,080 2.84MB/s in 76s
2017-01-27 19:25:45 (2.58 MB/s) - ‘Fedora-Cloud-Base-24-1.2.x86_64.qcow2’ saved [204590080/204590080]
[root@overcloud-controller-0 ~]# openstack image create --disk-format qcow2 --container-format bare \</pre>
<pre>--public --file ./Fedora-Cloud-Base-24-1.2.x86_64.qcow2 VF24Cloud-image
+------------------+----------------------------------------------------------------------------+
| Field | Value |
+------------------+----------------------------------------------------------------------------+
| checksum | 8de08e3fe24ee788e50a6a508235aa64 |
| container_format | bare |
| created_at | 2017-01-27T19:26:14Z |
| disk_format | qcow2 |
| file | /v2/images/fdc9a912-d3ae-4099-bc79-b54891f7a3f0/file |
| id | fdc9a912-d3ae-4099-bc79-b54891f7a3f0 |
| min_disk | 0 |
| min_ram | 0 |
| name | VF24Cloud-image |
| owner | 1b276463e41e42279f65f967e230d022 |
| properties | direct_url='swift+config://ref1/glance/fdc9a912-d3ae-4099-bc79-b54891f7a3f |
| | 0' |
| protected | False |
| schema | /v2/schemas/image |
| size | 204590080 |
| status | active |
| tags | |
| updated_at | 2017-01-27T19:26:18Z |
| virtual_size | None |
| visibility | public |
+------------------+----------------------------------------------------------------------------+
[root@overcloud-controller-0 ~]# openstack image list
+--------------------------------------+-----------------+--------+
| ID | Name | Status |
+--------------------------------------+-----------------+--------+
| fdc9a912-d3ae-4099-bc79-b54891f7a3f0 | VF24Cloud-image | active |
+--------------------------------------+-----------------+--------+
[root@overcloud-controller-0 ~]# neutron net-create ext-net --router:external \</pre>
<pre>--provider:physical_network datacentre --provider:network_type flat</pre>
<pre> Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-01-27T19:27:37Z |
| description | |
| id | 7f16f821-1d9c-4a81-a893-ab017eabfcc7 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| is_default | False |
| mtu | 1500 |
| name | ext-net |
| port_security_enabled | True |
| project_id | 1b276463e41e42279f65f967e230d022 |
| provider:network_type | flat |
| provider:physical_network | datacentre |
| provider:segmentation_id | |
| qos_policy_id | |
| revision_number | 4 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:27:37Z |
+---------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# nova-manage --version
15.0.0
[root@overcloud-controller-0 ~]# neutron subnet-create ext-net --name ext-subnet \</pre>
<pre> --allocation-pool start=192.168.24.100,end=192.168.24.120 --disable-dhcp --gateway 192.168.24.1 192.168.24.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.24.100", "end": "192.168.24.120"} |
| cidr | 192.168.24.0/24 |
| created_at | 2017-01-27T19:28:47Z |
| description | |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.24.1 |
| host_routes | |
| id | 6a42354e-3d39-4aff-a815-4568bbe6f137 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | ext-subnet |
| network_id | 7f16f821-1d9c-4a81-a893-ab017eabfcc7 |
| project_id | 1b276463e41e42279f65f967e230d022 |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:28:47Z |
+-------------------+------------------------------------------------------+
[root@overcloud-controller-0 ~]# neutron router-create --ha=True router1
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-01-27T19:29:19Z |
| description | |
| distributed | False |
| external_gateway_info | |
| flavor_id | |
| ha | True |
| id | fc6b97c9-0b63-4412-b514-a4dce4a8f49b |
| name | router1 |
| project_id | 1b276463e41e42279f65f967e230d022 |
| revision_number | 3 |
| routes | |
| status | ACTIVE |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:29:19Z |
+-------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# neutron router-gateway-set router1 ext-net
Set gateway for router router1
[root@overcloud-controller-0 ~]# neutron net-create int
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2017-01-27T19:30:19Z |
| description | |
| id | 035f9554-8ba7-4781-a6b1-01083e9f4039 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1450 |
| name | int |
| port_security_enabled | True |
| project_id | 1b276463e41e42279f65f967e230d022 |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 9 |
| qos_policy_id | |
| revision_number | 3 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:30:19Z |
+---------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# neutron subnet-create int 30.0.0.0/24 \</pre>
<pre>--dns_nameservers list=true 83.221.202.254
Created a new subnet:
+-------------------+--------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr | 30.0.0.0/24 |
| created_at | 2017-01-27T19:30:53Z |
| description | |
| dns_nameservers | 83.221.202.254 |
| enable_dhcp | True |
| gateway_ip | 30.0.0.1 |
| host_routes | |
| id | e3671a73-a908-4aa7-a28e-853f0364ad8e |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | |
| network_id | 035f9554-8ba7-4781-a6b1-01083e9f4039 |
| project_id | 1b276463e41e42279f65f967e230d022 |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:30:53Z |
+-------------------+--------------------------------------------+
[root@overcloud-controller-0 ~]# neutron router-interface-add router1 e3671a73-a908-4aa7-a28e-853f0364ad8e
Added interface edb75b03-196e-46bd-9869-ee2343d57248 to router router1.
[root@overcloud-controller-0 ~]# nova keypair-add oskey012717 > oskey012717.pem
[root@overcloud-controller-0 ~]# nova secgroup-list
WARNING: Command secgroup-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-neutronclient or python-openstackclient instead.
+--------------------------------------+---------+------------------------+
| Id | Name | Description |
+--------------------------------------+---------+------------------------+
| e4c0b726-7d45-41ca-9dea-8592b2b0f8e2 | default | Default security group |
+--------------------------------------+---------+------------------------+
[root@overcloud-controller-0 ~]# neutron security-group-rule-create --protocol tcp \</pre>
<pre>--port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 \</pre>
<pre> e4c0b726-7d45-41ca-9dea-8592b2b0f8e2</pre>
<pre> Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2017-01-27T19:37:39Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | 645abc8f-b31c-45e0-acf9-dbf031225f25 |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 1b276463e41e42279f65f967e230d022 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | e4c0b726-7d45-41ca-9dea-8592b2b0f8e2 |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:37:39Z |
+-------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# nova flavor-create "m1.small" 2 1000 20 1
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 2 | m1.small | 1000 | 20 | 0 | | 1 | 1.0 | True |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+</pre>
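Looking back at the subnet created at the top of this listing: with gateway 30.0.0.1, Neutron defaults the allocation pool to 30.0.0.2&ndash;30.0.0.254, i.e. every usable host address except the gateway. A minimal Python sketch with the stdlib `ipaddress` module (the helper name is mine, not a Neutron API) reproduces that default:

```python
import ipaddress

def default_allocation_pool(cidr, gateway_ip):
    """Mimic Neutron's default pool for a subnet: all usable host
    addresses in the CIDR except the gateway address."""
    net = ipaddress.ip_network(cidr)
    hosts = [str(h) for h in net.hosts() if str(h) != gateway_ip]
    return {"start": hosts[0], "end": hosts[-1]}

pool = default_allocation_pool("30.0.0.0/24", "30.0.0.1")
print(pool)  # {'start': '30.0.0.2', 'end': '30.0.0.254'}
```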
<pre>[alan@fedora24wks tripleo-quickstart]$ ssh -F $HOME/.quickstart/ssh.config.ansible undercloud
Warning: Permanently added '192.168.0.74' (ECDSA) to the list of known hosts.
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.
Last login: Fri Jan 27 19:20:04 2017 from gateway
[stack@undercloud ~]$ . stackrc
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| c0895716-9ad2-4fae-b571-c347896c813c | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.24.10 |
| 33cb2c05-9292-4478-9d1b-12ffd8c12537 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.13 |
| 5f4ab42c-bd77-4b9c-80a7-8dd96daaec26 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.15 |
| 675b566d-1a05-4ab5-b98e-4ba1ff5ac3db | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.12 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
[stack@undercloud ~]$ ssh heat-admin@192.168.24.10
Last login: Fri Jan 27 19:21:36 2017 from 192.168.24.1
[heat-admin@overcloud-controller-0 ~]$ sudo su -
Last login: Fri Jan 27 19:21:42 UTC 2017 on pts/0
[root@overcloud-controller-0 ~]# ls
Fedora-Cloud-Base-24-1.2.x86_64.qcow2 oskey012717.pem overcloudrc
[root@overcloud-controller-0 ~]# . overcloudrc
[root@overcloud-controller-0 ~]# openstack image list
+--------------------------------------+-----------------+--------+
| ID | Name | Status |
+--------------------------------------+-----------------+--------+
| fdc9a912-d3ae-4099-bc79-b54891f7a3f0 | VF24Cloud-image | active |
+--------------------------------------+-----------------+--------+
[root@overcloud-controller-0 ~]# openstack network list
+-------------------------------+-------------------------------+-------------------------------+
| ID | Name | Subnets |
+-------------------------------+-------------------------------+-------------------------------+
| 035f9554-8ba7-4781-a6b1-01083 | int | e3671a73-a908-4aa7-a28e- |
| e9f4039 | | 853f0364ad8e |
| 7f16f821-1d9c- | ext-net | 6a42354e- |
| 4a81-a893-ab017eabfcc7 | | 3d39-4aff-a815-4568bbe6f137 |
| 8d0b61bd-a785-4ab8-b492-7c070 | HA network tenant 1b276463e41 | d8f9017e-b75f-4565-b2ae- |
| 6583ec7 | e42279f65f967e230d022 | c88f4e70467b |
+-------------------------------+-------------------------------+-------------------------------+</pre>
<pre> </pre>
<pre>[root@overcloud-controller-0 ~]# nova boot --flavor 2 --key-name oskey012717 \
--image fdc9a912-d3ae-4099-bc79-b54891f7a3f0 \
--nic net-id=035f9554-8ba7-4781-a6b1-01083e9f4039 VF24Devs01

+--------------------------------------+--------------------------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host | - |
| OS-EXT-SRV-ATTR:hostname | vf24devs01 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | instance-00000003 |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index | 0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-ih0828hh |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data | - |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | B8mwEDz6onG6 |
| config_drive | |
| created | 2017-01-27T19:44:43Z |
| description | - |
| flavor | m1.small (2) |
| hostId | |
| host_status | |
| id | ca234b0a-dc07-4ec5-8b10-ddefe445f109 |
| image | VF24Cloud-image (fdc9a912-d3ae-4099-bc79-b54891f7a3f0) |
| key_name | oskey012717 |
| locked | False |
| metadata | {} |
| name | VF24Devs01 |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tags | [] |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated | 2017-01-27T19:44:44Z |
| user_id | 3f0e953c276f4261bea675808dbfb89f |
+--------------------------------------+--------------------------------------------------------+
[root@overcloud-controller-0 ~]# nova list
+--------------------------------------+------------+--------+------------+-------------+--------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+--------------+
| ca234b0a-dc07-4ec5-8b10-ddefe445f109 | VF24Devs01 | ACTIVE | - | Running | int=30.0.0.6 |
+--------------------------------------+------------+--------+------------+-------------+--------------+
[root@overcloud-controller-0 ~]# neutron port-list --device-id ca234b0a-dc07-4ec5-8b10-ddefe445f109
+---------------------------------+------+-------------------+----------------------------------+
| id | name | mac_address | fixed_ips |
+---------------------------------+------+-------------------+----------------------------------+
| 1546a87c-644d-470a-8d1d- | | fa:16:3e:54:33:75 | {"subnet_id": |
| d7b0a4d41e51 | | | "e3671a73-a908-4aa7-a28e- |
| | | | 853f0364ad8e", "ip_address": |
| | | | "30.0.0.6"} |
+---------------------------------+------+-------------------+----------------------------------+
[root@overcloud-controller-0 ~]# neutron floatingip-create ext-net
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| created_at | 2017-01-27T19:50:15Z |
| description | |
| fixed_ip_address | |
| floating_ip_address | 192.168.24.104 |
| floating_network_id | 7f16f821-1d9c-4a81-a893-ab017eabfcc7 |
| id | c8c49f23-df24-44b7-8fbc-6aa4f9aebdeb |
| port_id | |
| project_id | 1b276463e41e42279f65f967e230d022 |
| revision_number | 1 |
| router_id | |
| status | DOWN |
| tenant_id | 1b276463e41e42279f65f967e230d022 |
| updated_at | 2017-01-27T19:50:15Z |
+---------------------+--------------------------------------+</pre>
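Note that the freshly created floating IP is unbound: `fixed_ip_address` and `port_id` are empty and `status` is DOWN. Associating it with the instance's port fills both in, and the backend reports the IP as ACTIVE once the L3 agent has wired it up. A toy Python model of that state change (illustrative only, not Neutron code; the shortened IDs are from the transcript):

```python
def associate(floating_ips, fip_id, port_id, fixed_ip):
    """Toy model of `neutron floatingip-associate`: binding a floating
    IP to a port fills in port_id/fixed_ip_address and the IP is
    eventually reported as ACTIVE."""
    fip = floating_ips[fip_id]
    fip.update(port_id=port_id, fixed_ip_address=fixed_ip, status="ACTIVE")
    return fip

# Shortened IDs from the transcript above, for illustration only.
fips = {"c8c49f23": {"floating_ip_address": "192.168.24.104",
                     "port_id": None, "fixed_ip_address": None,
                     "status": "DOWN"}}
associate(fips, "c8c49f23", "1546a87c", "30.0.0.6")
print(fips["c8c49f23"]["status"])  # ACTIVE
```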
<pre> </pre>
<pre>[root@overcloud-controller-0 ~]# neutron floatingip-associate \
c8c49f23-df24-44b7-8fbc-6aa4f9aebdeb 1546a87c-644d-470a-8d1d-d7b0a4d41e51
Associated floating IP c8c49f23-df24-44b7-8fbc-6aa4f9aebdeb
[root@overcloud-controller-0 ~]# nova list
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
| ca234b0a-dc07-4ec5-8b10-ddefe445f109 | VF24Devs01 | ACTIVE | - | Running | int=30.0.0.6, 192.168.24.104 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
[root@overcloud-controller-0 ~]# ls -l
total 199804
-rw-r--r--. 1 root root 204590080 Jun 14 2016 Fedora-Cloud-Base-24-1.2.x86_64.qcow2
-rw-r--r--. 1 root root 1676 Jan 27 19:36 oskey012717.pem
-rw-r--r--. 1 root root 519 Jan 27 19:22 overcloudrc
[root@overcloud-controller-0 ~]# nova list
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
| ca234b0a-dc07-4ec5-8b10-ddefe445f109 | VF24Devs01 | ACTIVE | - | Running | int=30.0.0.6, 192.168.24.104 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------+ </pre>
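After association, the Networks column of `nova list` shows both addresses in one cell, `int=30.0.0.6, 192.168.24.104` (fixed IP first, floating IP second). A small sketch of splitting that cell apart, useful when scripting against the CLI output (the helper name is mine):

```python
def parse_networks(cell):
    """Split a `nova list` Networks cell like 'int=30.0.0.6, 192.168.24.104'
    into (network, fixed_ip, floating_ip); floating_ip may be absent."""
    net, _, addrs = cell.partition("=")
    ips = [a.strip() for a in addrs.split(",")]
    return net, ips[0], (ips[1] if len(ips) > 1 else None)

print(parse_networks("int=30.0.0.6, 192.168.24.104"))
# ('int', '30.0.0.6', '192.168.24.104')
```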
<pre> </pre>
<pre>[stack@undercloud ~]$ ssh -i oskey012717.pem fedora@192.168.24.104
Last login: Fri Jan 27 20:09:54 2017 from 192.168.24.1
[fedora@vf24devs01 ~]$ sudo su -
[root@vf24devs01 ~]# dnf -y update
[root@overcloud-controller-0 ~]# nova list
+--------------------------------------+------------+---------+------------+-------------+------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+---------+------------+-------------+------------------------------+
| ca234b0a-dc07-4ec5-8b10-ddefe445f109 | VF24Devs01 | SHUTOFF | - | Shutdown | int=30.0.0.6, 192.168.24.104 |
+--------------------------------------+------------+---------+------------+-------------+------------------------------+</pre>
<pre> [root@overcloud-controller-0 ~]# nova start VF24Devs01
Request to start server VF24Devs01 has been accepted.
[root@overcloud-controller-0 ~]# nova list
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------+--------+------------+-------------+------------------------------+
| ca234b0a-dc07-4ec5-8b10-ddefe445f109 | VF24Devs01 | ACTIVE | - | Running | int=30.0.0.6, 192.168.24.104 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------+</pre>
<pre></pre>
<pre>[root@overcloud-controller-0 ~]# nova console-log VF24Devs01
%G%G[[0;32m OK [0m] Started Show Plymouth Boot Screen.
[[0;32m OK [0m] Started Forward Password Requests to Plymouth Directory Watch.
[[0;32m OK [0m] Reached target Paths.
[[0;32m OK [0m] Reached target Basic System.
[[0;32m OK [0m] Started File System Check on /dev/d...816-dc18-452e-8d0f-2b34bd1beced.
Mounting /sysroot...
[[0;32m OK [0m] Mounted /sysroot.
[[0;32m OK [0m] Reached target Initrd Root File System.
Starting Reload Configuration from the Real Root...
[[0;32m OK [0m] Started Reload Configuration from the Real Root.
[[0;32m OK [0m] Reached target Initrd File Systems.
[[0;32m OK [0m] Reached target Initrd Default Target.
Starting Cleaning Up and Shutting Down Daemons...
[[0;32m OK [0m] Stopped Cleaning Up and Shutting Down Daemons.
[[0;32m OK [0m] Stopped target Remote File Systems.
Starting Plymouth switch root service...
[[0;32m OK [0m] Stopped target Timers.
[[0;32m OK [0m] Stopped target Initrd Default Target.
[[0;32m OK [0m] Stopped target Remote File Systems (Pre).
[[0;32m OK [0m] Stopped target Basic System.
[[0;32m OK [0m] Stopped target Paths.
[[0;32m OK [0m] Stopped target Slices.
[[0;32m OK [0m] Stopped target Sockets.
[[0;32m OK [0m] Stopped target System Initialization.
[[0;32m OK [0m] Stopped Apply Kernel Variables.
[[0;32m OK [0m] Stopped target Swap.
[[0;32m OK [0m] Stopped udev Coldplug all Devices.
[[0;32m OK [0m] Stopped target Local File Systems.
Stopping udev Kernel Device Manager...
[[0;32m OK [0m] Stopped udev Kernel Device Manager.
[[0;32m OK [0m] Stopped Create Static Device Nodes in /dev.
[[0;32m OK [0m] Stopped Create list of required sta...ce nodes for the current kernel.
[[0;32m OK [0m] Closed udev Control Socket.
[[0;32m OK [0m] Closed udev Kernel Socket.
Starting Cleanup udevd DB...
[[0;32m OK [0m] Started Cleanup udevd DB.
[[0;32m OK [0m] Reached target Switch Root.
[[0;32m OK [0m] Started Plymouth switch root service.
Starting Switch Root...
%G%G[ 4.331073] cloud-init[335]: Cloud-init v. 0.7.7 running 'init-local' at Fri, 27 Jan 2017 21:48:20 +0000. Up 4.12 seconds.
[ 8.806318] cloud-init[609]: Cloud-init v. 0.7.7 running 'init' at Fri, 27 Jan 2017 21:48:23 +0000. Up 6.33 seconds.
[ 8.806482] cloud-init[609]: ci-info: +++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++
[ 8.806557] cloud-init[609]: ci-info: +--------+------+-----------+---------------+-------+-------------------+
[ 8.806613] cloud-init[609]: ci-info: | Device | Up | Address | Mask | Scope | Hw-Address |
[ 8.806783] cloud-init[609]: ci-info: +--------+------+-----------+---------------+-------+-------------------+
[ 8.806844] cloud-init[609]: ci-info: | lo: | True | 127.0.0.1 | 255.0.0.0 | . | . |
[ 8.806903] cloud-init[609]: ci-info: | lo: | True | . | . | d | . |
[ 8.806959] cloud-init[609]: ci-info: | eth0: | True | 30.0.0.6 | 255.255.255.0 | . | fa:16:3e:54:33:75 |
[ 8.807012] cloud-init[609]: ci-info: | eth0: | True | . | . | d | fa:16:3e:54:33:75 |
[ 8.807164] cloud-init[609]: ci-info: +--------+------+-----------+---------------+-------+-------------------+
[ 8.807218] cloud-init[609]: ci-info: ++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++
[ 8.807271] cloud-init[609]: ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
[ 8.807322] cloud-init[609]: ci-info: | Route | Destination | Gateway | Genmask | Interface | Flags |
[ 8.807390] cloud-init[609]: ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
[ 8.807478] cloud-init[609]: ci-info: | 0 | 0.0.0.0 | 30.0.0.1 | 0.0.0.0 | eth0 | UG |
[ 8.807531] cloud-init[609]: ci-info: | 1 | 30.0.0.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
[ 8.807582] cloud-init[609]: ci-info: | 2 | 169.254.169.254 | 30.0.0.1 | 255.255.255.255 | eth0 | UGH |
[ 8.807633] cloud-init[609]: ci-info: +-------+-----------------+----------+-----------------+-----------+-------+
[ 9.540575] cloud-init[665]: Cloud-init v. 0.7.7 running 'modules:config' at Fri, 27 Jan 2017 21:48:26 +0000. Up 9.47 seconds.
[ 9.946323] cloud-init[679]: Cloud-init v. 0.7.7 running 'modules:final' at Fri, 27 Jan 2017 21:48:26 +0000. Up 9.85 seconds.
[ 9.962109] cloud-init[679]: Cloud-init v. 0.7.7 finished at Fri, 27 Jan 2017 21:48:26 +0000. Datasource DataSourceOpenStack [net,ver=2]. Up 9.93 seconds
Fedora 24 (Cloud Edition)
Kernel 4.9.5-100.fc24.x86_64 on an x86_64 (ttyS0)
vf24devs01 login:
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLRWnA72X-lXjthm2DiEoEhFpd_hPB5upT6CmiBmihe2VzojnXsPk9o7wsgDtp4Rg5CKHmvEzYYSTvRW6N6iufV39IJXIgtfNtWPqBveLYzgB4nIr6eK0c342LoGcfmVSg9d2OoA/s1600/Screenshot+from+2017-01-28+01-18-55.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLRWnA72X-lXjthm2DiEoEhFpd_hPB5upT6CmiBmihe2VzojnXsPk9o7wsgDtp4Rg5CKHmvEzYYSTvRW6N6iufV39IJXIgtfNtWPqBveLYzgB4nIr6eK0c342LoGcfmVSg9d2OoA/s640/Screenshot+from+2017-01-28+01-18-55.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-cDRcQWYgjASwJT-Ar8CAz_g9yu3IdflEWLZPqDHvzp2FWtMS1PNy1JU_8fSeQYBDLx8ndF7z5k2hXcatNwENvsXgXR4RQVA06QvxYeaxVwzD4igLYyeiOKTOyXEbUw_bkGlkvw/s1600/Screenshot+from+2017-01-28+01-17-39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-cDRcQWYgjASwJT-Ar8CAz_g9yu3IdflEWLZPqDHvzp2FWtMS1PNy1JU_8fSeQYBDLx8ndF7z5k2hXcatNwENvsXgXR4RQVA06QvxYeaxVwzD4igLYyeiOKTOyXEbUw_bkGlkvw/s640/Screenshot+from+2017-01-28+01-17-39.png" width="640" /></a></div>
</pre>
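The cloud-init portion of the console log above prints its network report as ASCII `ci-info:` tables; here that confirms eth0 received 30.0.0.6 with the expected gateway and metadata (169.254.169.254) routes. A short sketch of pulling the cells out of one such row when post-processing console logs (the helper name is mine):

```python
def parse_ci_info_row(line):
    """Extract the cell values from one cloud-init ci-info table row."""
    return [c.strip() for c in line.split("|")[1:-1]]

cells = parse_ci_info_row(
    "ci-info: | eth0: | True | 30.0.0.6 | 255.255.255.0 | . | fa:16:3e:54:33:75 |")
print(cells[0], cells[2])  # eth0: 30.0.0.6
```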
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-18312616228564999522017-01-07T11:02:00.000-08:002017-01-21T23:27:41.576-08:00TripleO QuickStart HA&&CEPH Deployment on Fedora 25 VIRTHOST 32 GB<div dir="ltr" style="text-align: left;" trbidi="on">
################## <br />
UPDATE 01/22/2017<br />
##################<br />
<br />
A regular upgrade of the F25 kernel to 4.9.X makes overcloud deployment on<br />
Fedora 25 Server pretty stable, e.g. building the heat stack "overcloud" no longer randomly<br />
hangs, which previously forced deleting the stack and recreating it from scratch. <br />
<div class="post-title entry-title" itemprop="name" style="text-align: left;">
Details here :-</div>
<div class="post-title entry-title" itemprop="name" style="text-align: left;">
<span style="font-weight: normal;"><a href="http://dbaxps.blogspot.ru/2017/01/tripleo-quickstart-ha-deployment-on.html">TripleO QuickStart HA&&CEPH Deployment on Fedora 25 Server VIRTHOST </a></span><br />
</div>
<div class="post-title entry-title" itemprop="name" style="text-align: left;">
<br />
Clean up F25 Server for a TripleO QuickStart redeployment:<br />
# rm -fr /home/stack<br />
# userdel stack<br />
<br /></div>
<div class="post-title entry-title" itemprop="name" style="text-align: left;">
<span style="font-weight: normal;">##################<br />END UPDATE</span></div>
<div class="post-title entry-title" itemprop="name" style="text-align: left;">
<span style="font-weight: normal;">##################</span></div>
<div class="post-title entry-title" itemprop="name" style="text-align: left;">
<span style="font-weight: normal;"><br /></span></div>
################## <br />
UPDATE 01/12/2017<br />
################## <br />
It is much safer to issue<br />
# systemctl set-default multi-user.target<br />
# reboot <br />
before deployment. I didn't get a chance to test F25 Server, which seems to be an optimal solution. Overcloud deployment in text mode:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAsFUVB3kx0W7W-YaSjodlO-bWnHYoX8evJcw0hwpLvUsbzctQj2NLO65nMXqxhBZlmCPx6OzVeorEFnYYinKx4yDcOJXeQuPSR0qZ4b94mCq0aQoiKppHFq7EUMOzP4hr0lwSMg/s1600/Screenshot+from+2017-01-12+23-26-15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgAsFUVB3kx0W7W-YaSjodlO-bWnHYoX8evJcw0hwpLvUsbzctQj2NLO65nMXqxhBZlmCPx6OzVeorEFnYYinKx4yDcOJXeQuPSR0qZ4b94mCq0aQoiKppHFq7EUMOzP4hr0lwSMg/s640/Screenshot+from+2017-01-12+23-26-15.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7VMYkFtS8dg6r8Yubq6UH9VfnEUg_WBLpbwZ_kqfaQ7z_gUmjF12ir9XHAof52475U7yuqAt3JU4QqhwQZF7xtML3xskVjGdaz9c_DnoDq5x9C0Di36u16V4wbPYgHDQy6nBkIA/s1600/Screenshot+from+2017-01-12+23-25-54.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi7VMYkFtS8dg6r8Yubq6UH9VfnEUg_WBLpbwZ_kqfaQ7z_gUmjF12ir9XHAof52475U7yuqAt3JU4QqhwQZF7xtML3xskVjGdaz9c_DnoDq5x9C0Di36u16V4wbPYgHDQy6nBkIA/s640/Screenshot+from+2017-01-12+23-25-54.png" width="640" /></a></div>
<br />
############## <br />
END UPDATE<br />
##############<br />
The most recent commits in <a href="https://github.com/openstack/tripleo-quickstart" target="_blank">https://github.com/openstack/tripleo-quickstart</a><br />
<span class="skimlinks-unlinked">allow using Fedora 25 Workstation (32 GB) as the target VIRTHOST for TripleO</span><br />
<span class="skimlinks-unlinked">Quickstart HA deployments and benefiting from the most recent KVM virtualization features of QEMU (2.7.0) and Libvirt (2.2.0) that come with the latest Fedora release.</span><br />
<br />
<br />
Prior to deployment, install KSM on the VIRTHOST and enable ksm.service:<br />
<br />
<pre># dnf -y install python2-dnf <span style="color: #b45f06;">ksm</span>
# systemctl start sshd <span style="color: #b45f06;">ksm</span>
# systemctl enable sshd <span style="color: #b45f06;">ksm</span>
</pre>
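KSM (kernel samepage merging) deduplicates identical memory pages across the overcloud VMs, which is what makes this deployment fit on a 32 GB host. The kernel exposes the deduplicated page count in `/sys/kernel/mm/ksm/pages_sharing`; a rough sketch of turning that counter into reclaimed memory (reading sysfs is left out, the counter value is passed in; the function name is mine):

```python
def ksm_saved_mib(pages_sharing, page_size=4096):
    """Rough MiB reclaimed by KSM: each page counted in
    /sys/kernel/mm/ksm/pages_sharing is one deduplicated page."""
    return pages_sharing * page_size / (1024 * 1024)

print(ksm_saved_mib(262144))  # 1024.0 -> about 1 GiB reclaimed
```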
<br />
On workstation :-<br />
<pre><span class="skimlinks-unlinked">[boris@fedora24wks</span>]$ export VIRTHOST=192.168.0.74
<span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ git clone \
<span class="skimlinks-unlinked"> https://github.com/openstack/tripleo-quickstart
</span><span class="skimlinks-unlinked"><span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ cd tripleo-quickstart
</span><span class="skimlinks-unlinked"><span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ sudo bash <span class="skimlinks-unlinked">quickstart.sh</span> --install-deps
</span><span class="skimlinks-unlinked"><span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ sudo dnf install redhat-rpm-config
</span><span class="skimlinks-unlinked"><span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ ssh-keygen
</span><span class="skimlinks-unlinked"><span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ ssh-copy-id root@$VIRTHOST
</span><span class="skimlinks-unlinked"><span class="skimlinks-unlinked">[boris@fedora24wks]</span>$ ssh root@$VIRTHOST uname -a</span></pre>
<pre><span class="skimlinks-unlinked"> </span></pre>
<pre><span class="skimlinks-unlinked">[boris@fedora24wks general_config]$ cat ha.yml</span></pre>
<pre><span class="skimlinks-unlinked"># Deploy an HA openstack environment.
control_memory: 6500
compute_memory: 6500
undercloud_memory: 8192
# Giving the undercloud additional CPUs can greatly improve heat's
# performance (and result in a shorter deploy time).
undercloud_vcpu: 4
# Since HA has more machines, we set the cpu for controllers and
# compute nodes to 1
default_vcpu: 1
# This enables TLS for the undercloud which will also make haproxy bind to the
# configured public-vip and admin-vip.
undercloud_generate_service_certificate: True
# Create three controller nodes and one compute node.
overcloud_nodes:
- name: control_0
flavor: control
virtualbmc_port: 6230
- name: control_1
flavor: control
virtualbmc_port: 6231
- name: control_2
flavor: control
virtualbmc_port: 6232
- name: compute_0
flavor: compute
virtualbmc_port: 6233
- name: ceph_0
flavor: ceph
virtualbmc_port: 6234
# We do introspection in a virtual environment
step_introspect: true
# Tell tripleo about our environment.
network_isolation: true
extra_args: >-
--control-scale 3
--compute-scale 1
--ceph-storage-scale 1
--ntp-server pool.ntp.org
-e {{overcloud_templates_path}}/environments/storage-environment.yaml
test_ping: true
enable_pacemaker: true
run_tempest: false</span></pre>
<pre><span class="skimlinks-unlinked"> </span></pre>
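A quick back-of-envelope check of the memory the ha.yml above requests (the ceph node's memory is not set in the file, so it is omitted from this rough budget) shows why KSM matters on a 32 GB VIRTHOST:

```python
# Values taken from the ha.yml above, in MB.
control_memory, compute_memory, undercloud_memory = 6500, 6500, 8192

# Three controllers, one compute, one undercloud VM.
total_mb = 3 * control_memory + 1 * compute_memory + undercloud_memory
print(total_mb)  # 34192 -> already above 32768 MB, hence KSM on the host
```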
[boris@fedora24wks tripleo-quickstart]$ bash quickstart.sh --no-clone -e supported_distro_check=false -R newton --config config/general_config/ha.yml $VIRTHOST<br />
<br />
<pre></pre>
****** undercloud deployment *******<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi56CAmRLk_YdbnURwaDjZozefbJKusXgpQhNaugjAIdvhV5O-2ocN8xYEAr4EZt8Rntkle7MQUsbzxLwaH1wrE-EZM0ipgyjOOUmVQgsAVZ2eiJKe5_q6h5e-IA7Oa4VevygNM0A/s1600/Screenshot+from+2017-01-08+08-54-02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi56CAmRLk_YdbnURwaDjZozefbJKusXgpQhNaugjAIdvhV5O-2ocN8xYEAr4EZt8Rntkle7MQUsbzxLwaH1wrE-EZM0ipgyjOOUmVQgsAVZ2eiJKe5_q6h5e-IA7Oa4VevygNM0A/s640/Screenshot+from+2017-01-08+08-54-02.png" width="640" /></a></div>
<br />
[boris@fedora24wks ~]$ ssh -F $HOME/.quickstart/ssh.config.ansible undercloud<br />
Warning: Permanently added '192.168.0.74' (ECDSA) to the list of known hosts.<br />
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.<br />
Last login: Sun Jan 8 06:51:39 2017 from gateway<br />
[stack@undercloud ~]$ . stackrc<br />
[stack@undercloud ~]$ ls -l<br />
total 1624856<br />
-rwxr-xr-x. 1 stack stack 770 Jan 8 05:34 containers-default-parameters.yaml<br />
-rw-rw-r--. 1 stack stack 18382 Jan 8 05:14 instackenv.json<br />
-rw-r--r--. 1 root root 355802133 Jan 3 08:59 ironic-python-agent.initramfs<br />
-rwxr-xr-x. 1 root root 5393328 Jan 3 08:59 ironic-python-agent.kernel<br />
-rw-r--r--. 1 stack stack 474 Jan 8 05:34 network-environment.yaml<br />
-rw-rw-r--. 1 stack stack 208 Jan 8 05:40 neutronl3ha.yaml<br />
-rw-------. 1 stack stack 1675 Jan 8 07:21 oskey010817.pem<br />
-rw-rw-r--. 1 stack stack 0 Jan 8 05:34 overcloud_custom_tht_script.log<br />
-rwxr-xr-x. 1 stack stack 293 Jan 8 05:34 overcloud-custom-tht-script.sh<br />
-rwxr-xr-x. 1 stack stack 1012 Jan 8 05:40 overcloud-deploy-post.sh<br />
-rwxr-xr-x. 1 stack stack 2876 Jan 8 05:40 overcloud-deploy.sh<br />
-rw-rw-r--. 1 stack stack 4211 Jan 8 05:59 overcloud-env.json<br />
-rw-r--r--. 1 root root 46800999 Jan 3 08:59 overcloud-full.initrd<br />
-rw-r--r--. 1 root root 1250130432 Jan 3 08:59 overcloud-full.qcow2<br />
-rwxr-xr-x. 1 root root 5393328 Jan 3 09:00 overcloud-full.vmlinuz<br />
-rwxr-xr-x. 1 stack stack 3905 Jan 8 05:34 overcloud-prep-containers.sh<br />
-rw-rw-r--. 1 stack stack 7336 Jan 8 05:40 overcloud_prep_flavors.log<br />
-rwxr-xr-x. 1 stack stack 3672 Jan 8 05:39 overcloud-prep-flavors.sh<br />
-rw-rw-r--. 1 stack stack 4885 Jan 8 05:39 overcloud_prep_images.log<br />
-rwxr-xr-x. 1 stack stack 746 Jan 8 05:34 overcloud-prep-images.sh<br />
-rw-rw-r--. 1 stack stack 1315 Jan 8 05:40 overcloud_prep_network.log<br />
-rwxr-xr-x. 1 stack stack 861 Jan 8 05:40 overcloud-prep-network.sh<br />
-rw-rw-r--. 1 stack stack 391 Jan 8 06:48 overcloudrc<br />
-rw-------. 1 stack stack 351 Jan 8 05:19 quickstart-hieradata-overrides.yaml<br />
-rw-------. 1 stack stack 587 Jan 8 05:33 stackrc<br />
-rw-rw-r--. 1 stack stack 444 Jan 8 06:48 tempest-deployer-input.conf<br />
-rw-------. 1 stack stack 7868 Jan 8 05:19 undercloud.conf<br />
-rw-rw-r--. 1 stack stack 191200 Jan 8 05:34 undercloud_install.log<br />
-rwxr-xr-x. 1 stack stack 151 Jan 8 05:19 undercloud-install.sh<br />
-rw-rw-r--. 1 stack stack 1650 Jan 8 05:19 undercloud-passwords.conf<br />
-rwxr-xr-x. 1 stack stack 463 Jan 8 05:34 upload_images_to_local_registry.py<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJ9S011J6GeIZyel_A0gVtdKShi2X9Je7wJDB-9U2LrxMWSnCpZHsHaiFljulRBbaMKfth-XC0i088sD4oKesfW9FA_A30cS_6B3YiQrtWXn_uusripcgNSHmU54e4Tkh6bcvhiw/s1600/Screenshot+from+2017-01-08+10-54-00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJ9S011J6GeIZyel_A0gVtdKShi2X9Je7wJDB-9U2LrxMWSnCpZHsHaiFljulRBbaMKfth-XC0i088sD4oKesfW9FA_A30cS_6B3YiQrtWXn_uusripcgNSHmU54e4Tkh6bcvhiw/s640/Screenshot+from+2017-01-08+10-54-00.png" width="640" /></a></div>
<br />
<br />
************************************************************************************** <br />
Update neutronl3ha.yaml so that routers created in the overcloud get ha=True by default<br />
**************************************************************************************<br />
<br />
[stack@undercloud ~]$ cat neutronl3ha.yaml<br />
# Note: we need to enable the L3 HA for Neutron if we want to use pacemaker<br />
# corosync 3 node controller.<br />
<br />
<br />
parameter_defaults:<br />
<span style="color: #b45f06;">NeutronL3HA: true</span><br />
<br />
***********************************************************<br />
[stack@undercloud ~]$ cat overcloud-deploy.sh<br />
*********************************************************** <br />
<br />
#!/bin/bash<br />
<br />
set -eux<br />
<br />
### --start_docs<br />
## Deploying the overcloud<br />
## =======================<br />
<br />
## Prepare Your Environment<br />
## ------------------------<br />
<br />
## * Source in the undercloud credentials.<br />
## ::<br />
<br />
source /home/stack/stackrc<br />
<br />
### --stop_docs<br />
# Wait until there are hypervisors available.<br />
while true; do<br />
count=$(openstack hypervisor stats show -c count -f value)<br />
if [ $count -gt 0 ]; then<br />
break<br />
fi<br />
done<br />
<br />
### --start_docs<br />
<br />
<br />
## * Deploy the overcloud!<br />
## ::<br />
<span style="color: #b45f06;">openstack overcloud deploy \<br /> --templates /usr/share/openstack-tripleo-heat-templates \<br /> --libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage --swift-storage-flavor oooq_objectstorage --timeout 90 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /home/stack/neutronl3ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml --validation-warnings-fatal --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --ntp-server pool.ntp.org -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \<br /> ${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML} "$@" && status_code=0 || status_code=$?</span><br />
<br />
### --stop_docs<br />
# We don't always get a useful error code from the openstack deploy command,<br />
# so check `heat stack-list` for a CREATE_FAILED status.<br />
if heat stack-list | grep -q 'CREATE_FAILED'; then<br />
# get the failures list<br />
openstack stack failures list overcloud > failed_deployment_list.log || true<br />
<br />
# get any puppet related errors<br />
for failed in $(heat resource-list \<br />
--nested-depth 5 overcloud | grep FAILED |<br />
grep 'StructuredDeployment ' | cut -d '|' -f3)<br />
do<br />
echo "heat deployment-show out put for deployment: $failed" >> failed_deployments.log<br />
echo "######################################################" >> failed_deployments.log<br />
heat deployment-show $failed >> failed_deployments.log<br />
echo "######################################################" >> failed_deployments.log<br />
echo "puppet standard error for deployment: $failed" >> failed_deployments.log<br />
echo "######################################################" >> failed_deployments.log<br />
# the sed part removes color codes from the text<br />
heat deployment-show $failed |<br />
jq -r .output_values.deploy_stderr |<br />
sed -r "s:\x1B\[[0-9;]*[mK]::g" >> failed_deployments.log<br />
echo "######################################################" >> failed_deployments.log<br />
# We need to exit with 1 because of the above || true<br />
done<br />
fi<br />
[stack@undercloud ~]$ ./overcloud-deploy.sh<br />
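<br />
One caveat in the script above: the `while true` loop waiting for hypervisors spins without sleeping, hammering the API. A gentler pattern polls with a delay and a deadline; a generic Python sketch (the function and names are illustrative, not part of tripleo-quickstart):

```python
import time

def wait_for(check, timeout=600, interval=10):
    """Poll check() until it returns True or timeout seconds elapse.

    Returns True on success, False if the deadline passed first.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# e.g. check could call `openstack hypervisor stats show -c count -f value`
# and test that the count is greater than zero.
print(wait_for(lambda: True, timeout=5, interval=1))  # True
```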
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZ5RwNKDPv7fMQR1WUas-5G3lAMkp_-Wt3Tw2kXHWT_IQT6Z45bOZF3y5rGwx5OIja29lmrLnif3XjlrRbRRUqsm5pJVEdspwL7DBJKmRKdhvBDwwf0fx9AQExl2rhuMw8yYAirw/s1600/Screenshot+from+2017-01-08+08-59-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZ5RwNKDPv7fMQR1WUas-5G3lAMkp_-Wt3Tw2kXHWT_IQT6Z45bOZF3y5rGwx5OIja29lmrLnif3XjlrRbRRUqsm5pJVEdspwL7DBJKmRKdhvBDwwf0fx9AQExl2rhuMw8yYAirw/s640/Screenshot+from+2017-01-08+08-59-47.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
. . . . . . . . . . </div>
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhroKl5yXRhwNos-uUE3VvInQKMoI_Y7Vc1COZHgujetxhM3bptrF0To-Ut0CiIrTGismyr5FFr3zZ0IwKvppH3TsqdAV9gBGr0YJXx1FVN_GhU344KaOvJ5_axX_tr0N3c-DEwaQ/s1600/Screenshot+from+2017-01-08+09-49-56.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhroKl5yXRhwNos-uUE3VvInQKMoI_Y7Vc1COZHgujetxhM3bptrF0To-Ut0CiIrTGismyr5FFr3zZ0IwKvppH3TsqdAV9gBGr0YJXx1FVN_GhU344KaOvJ5_axX_tr0N3c-DEwaQ/s640/Screenshot+from+2017-01-08+09-49-56.png" width="640" /></a></div>
<br />
<br />
[stack@undercloud ~]$ . stackrc<br />
[stack@undercloud ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| 64a90c37-2a71-4be3-afd5-23d92229ecd9 | overcloud-cephstorage-0 | ACTIVE | - | Running | ctlplane=192.168.24.12 |
| 252895f9-b825-4499-910c-6b6385e2a5c1 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.24.18 |
| 96caa2c5-fec3-46f0-90a4-c8b2975a6bb9 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.9 |
| 647322d8-64c9-4534-916f-fe0208df5e97 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.15 |
| ee522eeb-673b-4782-b136-3706b7eaef99 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.16 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+</pre>
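The ctlplane addresses in that table are what you normally ssh to as heat-admin from the undercloud. As a small sketch, awk can pull name/IP pairs out of the ASCII table (one sample row is hard-coded here for illustration):

```shell
#!/bin/bash
# Pull "name IP" pairs out of the `nova list` ASCII table, e.g. to feed
# `ssh heat-admin@<ip>`. One sample row from the output above is hard-coded.
row='| ee522eeb-673b-4782-b136-3706b7eaef99 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.16 |'

# Field 3 is the node name, field 7 is "ctlplane=<ip>"; strip spaces first.
printf '%s\n' "$row" |
    awk -F'|' '/ctlplane=/ { gsub(/ /, ""); split($7, a, "="); print $3, a[2] }'
```

In practice you would pipe the live `nova list` output through the same awk instead of the hard-coded row.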
<br />
<pre>[root@overcloud-controller-0 ~]# ceph status
cluster 06f2817c-d564-11e6-98fe-00cf70d8c1e2
health HEALTH_OK
monmap e1: 3 mons at {overcloud-controller-0=172.16.1.8:6789/0,overcloud-controller-1=172.16.1.17:6789/0,overcloud-controller-2=172.16.1.7:6789/0}
election epoch 8, quorum 0,1,2 overcloud-controller-2,overcloud-controller-0,overcloud-controller-1
osdmap e19: 1 osds: 1 up, 1 in
flags sortbitwise
pgmap v1075: 224 pgs, 6 pools, 4762 MB data, 1508 objects
13248 MB used, 37939 MB / 51187 MB avail
224 active+clean</pre>
<pre>[root@overcloud-controller-0 ~]# ceph osd df tree
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-1 0.04880 - 51187M 13248M 37939M 25.88 1.00 0 root default
-2 0.04880 - 51187M 13248M 37939M 25.88 1.00 0 host overcloud-cephstorage-0
0 0.04880 1.00000 51187M 13248M 37939M 25.88 1.00 224 osd.0
TOTAL 51187M 13248M 37939M 25.88
MIN/MAX VAR: 1.00/1.00 STDDEV: 0</pre>
<pre>[root@overcloud-controller-0 ~]# ceph quorum_status
{"election_epoch":8,"quorum":[0,1,2],"quorum_names":["overcloud-controller-2","overcloud-
controller-0","overcloud-controller-1"],"quorum_leader_name":"overcloud-controller-2",
"monmap":{"epoch":1,"fsid":"06f2817c-d564-11e6-98fe-00cf70d8c1e2",
"modified":"2017-01-08 06:22:24.893808","created":"2017-01-08 06:22:24.893808",
"mons":[{"rank":0,"name":"overcloud-controller-2","addr":"172.16.1.7:6789\/0"},
{"rank":1,"name":"overcloud-controller-0","addr":"172.16.1.8:6789\/0"},
{"rank":2,"name":"overcloud-controller-1","addr":"172.16.1.17:6789\/0"}]}}</pre>
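The quorum_status JSON is convenient for scripting health checks; since jq is already used by the deploy script, `ceph quorum_status | jq -r .quorum_leader_name` would be the natural one-liner. A dependency-free sketch with grep/cut, run against a JSON fragment abridged from the output above:

```shell
#!/bin/bash
# Extract quorum_leader_name from `ceph quorum_status` JSON without jq.
# The JSON below is abridged from the quorum_status output above.
json='{"election_epoch":8,"quorum":[0,1,2],"quorum_leader_name":"overcloud-controller-2"}'

# Isolate the "quorum_leader_name":"..." pair, then take the value between quotes.
leader=$(printf '%s' "$json" |
    grep -o '"quorum_leader_name":"[^"]*"' |
    cut -d'"' -f4)
echo "$leader"
```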
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVPFH9FwYKgJUEkoVKN2wGLQLzJzYxmbUqq0ZbRoKITPMpdk3U3XRX2Th0K7Wcv9qpndoyHV5sDuj06lnyrZ7Vryxh-16zMX0sVfxqUDmqDSdKUEFdFscIa0MyAQSLiieksiGZzw/s1600/Screenshot+from+2017-01-08+11-14-17.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVPFH9FwYKgJUEkoVKN2wGLQLzJzYxmbUqq0ZbRoKITPMpdk3U3XRX2Th0K7Wcv9qpndoyHV5sDuj06lnyrZ7Vryxh-16zMX0sVfxqUDmqDSdKUEFdFscIa0MyAQSLiieksiGZzw/s640/Screenshot+from+2017-01-08+11-14-17.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9mgklYN5fyQV9aIH7RpWhPWFdPi7HKJovXh72GlJ8Gv8Yc7JAcmTOJYEIY37zSZF2m_OGrjBP2VZM0l5xY4K3aEJafcnwY6mc1Vd7aYpzy9x_-Z3HxLjr9ZyJatsIPDIIrQi97A/s1600/Screenshot+from+2017-01-08+11-15-45.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj9mgklYN5fyQV9aIH7RpWhPWFdPi7HKJovXh72GlJ8Gv8Yc7JAcmTOJYEIY37zSZF2m_OGrjBP2VZM0l5xY4K3aEJafcnwY6mc1Vd7aYpzy9x_-Z3HxLjr9ZyJatsIPDIIrQi97A/s640/Screenshot+from+2017-01-08+11-15-45.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvbV8qEEFbluXMf3Om4CpoSDlufnWIJcmGk7GAYlIHlQBo1xHe7udqaTs0tuTqkMD9Kh08ArovzBkUZbcDIj9EQFHD267VymA9MQp15piu-XqM-goXBW6c6K97ZGzZxGEQ-OeCNQ/s1600/Screenshot+from+2017-01-08+11-13-25.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvbV8qEEFbluXMf3Om4CpoSDlufnWIJcmGk7GAYlIHlQBo1xHe7udqaTs0tuTqkMD9Kh08ArovzBkUZbcDIj9EQFHD267VymA9MQp15piu-XqM-goXBW6c6K97ZGzZxGEQ-OeCNQ/s640/Screenshot+from+2017-01-08+11-13-25.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqNDWUkqauptsNXpAlQtAY_ItsU7pBCk-mjOL8Xdfyw86V42Np2mlVAuUMc76UCq_pn6LjAvE8Mlc0KhIhcdoowJAUZU8PZM8i9M2j4YLwDbGdsy-3kaOgdaP8CkOI-mXxa56iwg/s1600/Screenshot+from+2017-01-08+11-12-52.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqNDWUkqauptsNXpAlQtAY_ItsU7pBCk-mjOL8Xdfyw86V42Np2mlVAuUMc76UCq_pn6LjAvE8Mlc0KhIhcdoowJAUZU8PZM8i9M2j4YLwDbGdsy-3kaOgdaP8CkOI-mXxa56iwg/s640/Screenshot+from+2017-01-08+11-12-52.png" width="640" /></a></div>
<br />
Status of memory allocation on VIRTHOST<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir4dKKPqjzCt7lzXMjeUYQdJMeN87HPNjWHByoIaqVb6PL8ud8tDS7LNOHMOD0TdCe7RsdsB6RE4NcrWoPF2u7C1wVmmbSVtbUlqaZK3yS-H3H7V1Oec56UncdWM52SK83ZyUB0w/s1600/Screenshot+from+2017-01-08+11-27-20.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEir4dKKPqjzCt7lzXMjeUYQdJMeN87HPNjWHByoIaqVb6PL8ud8tDS7LNOHMOD0TdCe7RsdsB6RE4NcrWoPF2u7C1wVmmbSVtbUlqaZK3yS-H3H7V1Oec56UncdWM52SK83ZyUB0w/s640/Screenshot+from+2017-01-08+11-27-20.png" width="640" /></a></div>
<br />
Virsh report from the stack user's session on the Fedora 25 VIRTHOST<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIQjuUCFcxaTTHzPnLgs9pjad1b775GZ7vgj2Or5VBTDjWF8R41BIYNja8iG9qpR-OuDbKRrKI2JJJaTz-lgkBr71rXQaT3cp5xMOpASFrwG49qkpMgNnt0wFFRqIAaEBLuDIi4w/s1600/ScreenshotF25+from+2017-01-08+10-38-34.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgIQjuUCFcxaTTHzPnLgs9pjad1b775GZ7vgj2Or5VBTDjWF8R41BIYNja8iG9qpR-OuDbKRrKI2JJJaTz-lgkBr71rXQaT3cp5xMOpASFrwG49qkpMgNnt0wFFRqIAaEBLuDIi4w/s640/ScreenshotF25+from+2017-01-08+10-38-34.png" width="640" /></a></div>
<br />
<br />
<br />
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-59372676043410174352017-01-02T10:36:00.000-08:002017-01-04T09:08:06.792-08:00 TripleO QuickStart functionality and recent commit Merge "move the undercloud deploy role to quickstart-extras for composability"<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="text-align: left;">
############################### <br />
UPDATE 01/04/2017 11:07 AM EST<br />
############################### <br />
<br />
Fixed in upstream :-<br />
<br />
<span style="color: #b45f06;">commit e2e73b94bd88a3f9cc19925a59cbd12ff6172060</span><br />
<span style="color: #b45f06;">Merge: b6dbf6a 6a05cf5</span><br />
<span style="color: #b45f06;">Author: Jenkins &lt;jenkins@review.openstack.org&gt;<br />Date: Wed Jan 4 15:31:59 2017 +0000<br /><br /> Merge "Run extras playbook by default"<br /><br />commit b6dbf6a084ddc08086c7087af85b575bc7d43799<br />Merge: e0493a2 7528970</span><br />
<br />
Details here <a href="https://github.com/openstack/tripleo-quickstart/commit/e2e73b94bd88a3f9cc19925a59cbd12ff6172060" target="_blank">https://github.com/openstack/tripleo-quickstart/commit/e2e73b94bd88a3f9cc19925a59cbd12ff6172060</a> <br />
<br />
############################ <br />
Following commit merged master<br />
############################<br />
<br />
commit 6a05cf5c47f7b46eb1565c910ba9c90ea5f089e4<br />
Author: Sagi Shnaidman <sshnaidm redhat.com=""><br />Date: Tue Dec 6 16:01:30 2016 +0100<br /><br /> Run extras playbook by default<br /><br /> For developer purposes we need scripts for overcloud are ready<br /> in home dir after undercloud install. Now all the<br /> undercloud-scripts and overcloud-scripts tagged tasks are in extras<br /> roles, so we need to run extras playbook by default to get them<br /> ready.<br /><br /> Change-Id: I3e216b21dac5a9086374fda9182a9be1cbe75a4f</sshnaidm><br />
<br />
#################################<br />
END UPDATE<br />
#################################<br />
</div>
</div>
</div>
</div>
</div>
</div>
</div>
Straightforward deployment following <a href="https://github.com/openstack/tripleo-quickstart" target="_blank">https://github.com/openstack/tripleo-quickstart</a><br />
==><span style="font-weight: normal;"> Deploying without instructions</span><br />
<div style="text-align: left;">
<span style="font-weight: normal;"></span></div>
<pre>$ bash quickstart.sh -p quickstart-extras.yml \
-r quickstart-extras-requirements.txt \
--tags all $VIRTHOST</pre>
You may choose to execute an end to end deployment without displaying the
instructions and scripts provided by default. Using the <code>--tags all</code> flag
will instruct quickstart to provision the environment and deploy both the
undercloud and overcloud. Additionally a validation test will be executed to
ensure the overcloud is functional.<br />
<==><br />
results in hitting Bug <a href="https://bugs.launchpad.net/tripleo-quickstart/+bug/1653344" target="_blank">https://bugs.launchpad.net/tripleo-quickstart/+bug/1653344</a><br />
<br />
*************************************************************************************** <br />
However, cloning <a href="https://github.com/openstack/tripleo-quickstart" target="_blank">https://github.com/openstack/tripleo-quickstart</a> and reverting several of the most recent commits merged to master<br />
***************************************************************************************<br />
$ git clone https://github.com/openstack/tripleo-quickstart<br />
<br />
$ cd tripleo-quickstart<br />
<br />
[boris@fedora24wks tripleo-quickstart]$ ./revert.sh<br />
+ git revert -m 1 --no-commit b6dbf6a084ddc08086c7087af85b575bc7d43799<br />
+ git revert -m 1 --no-commit e0493a24dff0a535a3be644eb565eacbe765c59d<br />
+ git revert -m 1 --no-commit 9dd2eb77e0bacc8497aa91c2fc54b0e64a3745f1<br />
+ git revert -m 1 --no-commit 6fea2c037e831738cd59eef61d4073b9771bf51b<br />
+ git commit -m 'Reverting is done'<br />
[master ffc105a] Reverting is done<br />
Committer: boris &lt;boris@fedora24wks.localdomain&gt;<br />Your name and email address were configured automatically based<br />on your username and hostname. Please check that they are accurate.<br />You can suppress this message by setting them explicitly. Run the<br />following command and follow the instructions in your editor to edit<br />your configuration file:<br /><br /> git config --global --edit<br /><br />After doing this, you may fix the identity used for this commit with:<br /><br /> git commit --amend --reset-author<br /><br /> 15 files changed, 640 insertions(+), 108 deletions(-)<br /> delete mode 100644 config/general_config/containers_minimal.yml<br /> create mode 100644 roles/tripleo/undercloud/defaults/main.yml<br /> create mode 100644 roles/tripleo/undercloud/meta/main.yml<br /> create mode 100644 roles/tripleo/undercloud/tasks/create-scripts.yml<br /> create mode 100644 roles/tripleo/undercloud/tasks/install-undercloud.yml<br /> rewrite roles/tripleo/undercloud/tasks/main.yml (99%)<br /> create mode 100644 roles/tripleo/undercloud/tasks/post-install.yml<br /> create mode 100644 roles/tripleo/undercloud/templates/quickstart-hieradata-overrides.yaml.j2<br /> create mode 100644 roles/tripleo/undercloud/templates/undercloud-install.sh.j2<br /> create mode 100644 roles/tripleo/undercloud/templates/undercloud.conf.j2<br />
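The body of revert.sh itself is not shown in the post; reconstructed from the trace above, it is presumably just the four `git revert -m 1 --no-commit` calls followed by a single commit. As a hedged sketch, the helper below only prints the commands it would run (drop the `echo` to execute them inside the clone):

```shell
#!/bin/bash
# Hypothetical reconstruction of revert.sh from the trace above.
# -m 1 reverts a merge commit against its first parent; --no-commit
# queues all reverts so they land as one "Reverting is done" commit.
emit_reverts() {
    for sha in "$@"; do
        echo "git revert -m 1 --no-commit $sha"
    done
    echo "git commit -m 'Reverting is done'"
}

emit_reverts b6dbf6a084ddc08086c7087af85b575bc7d43799 \
             e0493a24dff0a535a3be644eb565eacbe765c59d \
             9dd2eb77e0bacc8497aa91c2fc54b0e64a3745f1 \
             6fea2c037e831738cd59eef61d4073b9771bf51b
```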
********************************************************************************<br />
In particular, this un-merges the following commits from the master branch<br />
******************************************************************************** <br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain">1. 6c3cd87a6639b15ad84b798f76e8a1f65877855a</span></span></span><br />
<br />
<div class="commit-title">
Move the undercloud deploy role to quickstart-extras for composability
</div>
<pre>In an effort to make more of the tripleo deployment ci more composable
it has been discussed to break out the undercloud deployment into it's
own role. There are examples where additional configuration is needed
prior to the undercloud installation such as dpdk, and installing in
other ci environments.
This patch moves the undercloud deployment from the quickstart.yml
playbook to the quickstart-extras.yml playbook</pre>
<br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain">2. </span></span></span><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">7528970a78545e68da795d91cccb9ab3449e589f</span></span></span></span></span></span><br />
<br />
<div class="commit-title">
Fix for quickstart.sh requirements
</div>
<pre>The correct change did *not* land in
<a href="https://review.openstack.org/#/c/410757">https://review.openstack.org/#/c/410757</a></pre>
<br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">****************************************** </span></span></span></span></span></span><br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">This allows the deployment to run successfully :-</span></span></span></span></span></span><br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">****************************************** </span></span></span></span></span></span><br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">[boris@fedora24wks tripleo-quickstart]<span style="color: #b45f06;">$ bash quickstart.sh -R newton --config config/general_config/ha.yml -p quickstart-extras.yml -r quickstart-extras-requirements.txt $VIRTHOST</span></span></span></span></span></span></span><br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><i> </i><br />New python executable in /home/boris/.quickstart/bin/python2<br />Also creating executable in /home/boris/.quickstart/bin/python<br />Installing setuptools, pip, wheel...done.<br />Requirement already up-to-date: pip in /home/boris/.quickstart/lib/python2.7/site-packages<br />Cloning tripleo-quickstart repository...<br />Cloning into '/home/boris/.quickstart/tripleo-quickstart'...<br />remote: Counting objects: 5741, done.<br />remote: Compressing objects: 100% (2/2), done.<br />remote: Total 5741 (delta 0), reused 0 (delta 0), pack-reused 5739<br />Receiving objects: 100% (5741/5741), 914.60 KiB | 686.00 KiB/s, done.<br />Resolving deltas: 100% (2977/2977), done.<br />Checking connectivity... done.<br />Fetching origin<br />~/.quickstart/tripleo-quickstart ~/.quickstart/tripleo-quickstart<br /><br />Installed /home/boris/.quickstart/.eggs/pbr-1.10.0-py2.7.egg<br />[pbr] Generating ChangeLog<br />running install<br />running build<br />running install_data<br />creating /home/boris/.quickstart/usr<br />creating /home/boris/.quickstart/usr/local<br />creating /home/boris/.quickstart/usr/local/share<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/user<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/user/tasks<br />copying roles/libvirt/teardown/user/tasks/main.yml -> 
/home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/user/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown/tasks<br />copying roles/provision/teardown/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm/tasks<br />copying roles/parts/kvm/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks<br />copying roles/libvirt/setup/overcloud/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/nodes<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/nodes/tasks<br />copying roles/libvirt/teardown/nodes/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/nodes/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo<br />creating 
/home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local/tasks<br />copying roles/provision/local/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/meta<br />copying roles/provision/remote/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/meta<br />copying roles/libvirt/setup/overcloud/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tasks<br />copying roles/tripleo-inventory/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/files<br />copying roles/libvirt/setup/undercloud/files/get-undercloud-ip.sh -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/files<br />creating 
/home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/overcloud<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check/meta<br />copying roles/provision/support_check/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/test_plugins<br />copying test_plugins/equalto.py -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/test_plugins/<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user/meta<br />copying roles/libvirt/setup/user/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user/meta<br />creating /home/boris/.quickstart/playbooks<br />copying playbooks/build-images-and-quickstart.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/libvirt-teardown.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/tripleo-roles.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/quickstart-extras.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/noop.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/teardown-provision.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/provision.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/quickstart.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/teardown-nodes.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/build-images.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/teardown.yml -> /home/boris/.quickstart/playbooks/<br />copying playbooks/libvirt-setup.yml -> /home/boris/.quickstart/playbooks/<br />copying 
playbooks/teardown-environment.yml -> /home/boris/.quickstart/playbooks/<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars<br />copying roles/environment/vars/redhat.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars<br />copying roles/environment/vars/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars<br />copying roles/environment/vars/fedora.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars<br />copying roles/environment/vars/centos-7.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/vars<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/meta<br />copying roles/environment/setup/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm/defaults<br />copying roles/parts/kvm/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/kvm/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/tasks<br />copying roles/environment/setup/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/meta<br />copying roles/libvirt/setup/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />copying roles/libvirt/setup/undercloud/tasks/inject_gating_repo.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />copying 
roles/libvirt/setup/undercloud/tasks/customize_overcloud.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />copying roles/libvirt/setup/undercloud/tasks/inject_repos.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />copying roles/libvirt/setup/undercloud/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />copying roles/libvirt/setup/undercloud/tasks/update_image.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />copying roles/libvirt/setup/undercloud/tasks/convert_image.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/defaults<br />copying roles/tripleo/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image/templates<br />copying roles/convert-image/templates/convert_image.sh.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image/templates<br />creating /home/boris/.quickstart/config<br />creating /home/boris/.quickstart/config/general_config<br />copying config/general_config/containers_minimal.yml -> /home/boris/.quickstart/config/general_config/<br />copying config/general_config/minimal.yml -> /home/boris/.quickstart/config/general_config/<br />copying config/general_config/ha_ipv6.yml -> /home/boris/.quickstart/config/general_config/<br />copying config/general_config/ha.yml -> /home/boris/.quickstart/config/general_config/<br />copying config/general_config/minimal_pacemaker.yml -> /home/boris/.quickstart/config/general_config/<br />copying config/general_config/ceph.yml -> 
/home/boris/.quickstart/config/general_config/<br />copying config/general_config/minimal_no_netiso.yml -> /home/boris/.quickstart/config/general_config/<br />copying config/general_config/ha_big.yml -> /home/boris/.quickstart/config/general_config/<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user/meta<br />copying roles/provision/user/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user/meta<br />creating /home/boris/.quickstart/config/release<br />copying config/release/master.yml -> /home/boris/.quickstart/config/release/<br />copying config/release/master-tripleo-ci.yml -> /home/boris/.quickstart/config/release/<br />copying config/release/liberty.yml -> /home/boris/.quickstart/config/release/<br />copying config/release/mitaka.yml -> /home/boris/.quickstart/config/release/<br />copying config/release/newton.yml -> /home/boris/.quickstart/config/release/<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/meta<br />copying roles/libvirt/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/defaults<br />copying roles/tripleo-inventory/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image/tasks<br />copying roles/convert-image/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/convert-image/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown/meta<br />copying 
roles/environment/teardown/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/tasks<br />copying roles/environment/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/common<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/common/defaults<br />copying roles/common/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/common/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/defaults<br />copying roles/libvirt/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/defaults<br />creating /home/boris/.quickstart/config/release/stable<br />copying config/release/stable/mitaka.yml -> /home/boris/.quickstart/config/release/stable<br />copying config/release/stable/newton.yml -> /home/boris/.quickstart/config/release/stable<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/overcloud/meta<br />copying roles/overcloud/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/overcloud/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/templates<br />copying roles/provision/remote/templates/libvirt.pkla.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/templates<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local/meta<br />copying roles/provision/local/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/local/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/tasks<br />copying 
roles/fetch-images/tasks/fetch.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/tasks<br />copying roles/fetch-images/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user/tasks<br />copying roles/provision/user/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/user/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check/tasks<br />copying roles/provision/support_check/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/support_check/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates<br />copying roles/tripleo-inventory/templates/ssh_config.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates<br />copying roles/tripleo-inventory/templates/ssh_config_localhost.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates<br />copying roles/tripleo-inventory/templates/inventory.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates<br />copying roles/tripleo-inventory/templates/ssh_config_no_undercloud.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates<br />copying roles/tripleo-inventory/templates/get-overcloud-nodes.py.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/templates<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown/meta<br />copying roles/provision/teardown/meta/main.yml -> 
/home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/teardown/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown/tasks<br />copying roles/environment/teardown/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/teardown/tasks<br />creating /home/boris/.quickstart/config/release/trunk<br />copying config/release/trunk/liberty.yml -> /home/boris/.quickstart/config/release/trunk<br />copying config/release/trunk/mitaka.yml -> /home/boris/.quickstart/config/release/trunk<br />copying config/release/trunk/newton.yml -> /home/boris/.quickstart/config/release/trunk<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt/defaults<br />copying roles/parts/libvirt/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/defaults<br />copying roles/libvirt/setup/undercloud/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/templates<br />copying roles/environment/setup/templates/network.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/setup/templates<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud<br />copying roles/environment/README.md -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates<br />copying roles/libvirt/setup/overcloud/templates/baremetalvm.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates<br />copying 
roles/libvirt/setup/overcloud/templates/volume_pool.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates<br />copying roles/libvirt/setup/overcloud/templates/instackenv.json.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/overcloud/templates<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/meta<br />copying roles/tripleo/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests<br />copying roles/tripleo-inventory/tests/test.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests<br />copying roles/tripleo-inventory/tests/inventory -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/tasks<br />copying roles/provision/remote/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/remote/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests/playbooks<br />copying roles/tripleo-inventory/tests/playbooks/quickstart-usb.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo-inventory/tests/playbooks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks<br />copying roles/tripleo/undercloud/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/tripleo/undercloud/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/meta<br />copying roles/fetch-images/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/fetch-images/meta<br />creating 
/home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/master.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/liberty.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/mitaka-cloudsig-testing.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/mitaka.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/newton.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/master-current-tripleo.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/newton-cloudsig-stable.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/master-consistent.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/newton-consistent.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/mitaka-cloudsig-stable.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/liberty-consistent.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/newton-cloudsig-testing.yml -> /home/boris/.quickstart/config/release/centosci<br />copying config/release/centosci/mitaka-consistent.yml -> /home/boris/.quickstart/config/release/centosci<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates<br />copying roles/libvirt/setup/undercloud/templates/inject_gating_repo.sh.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates<br />copying roles/libvirt/setup/undercloud/templates/undercloudvm.xml.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates<br />copying 
roles/libvirt/setup/undercloud/templates/ssh.config.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates<br />copying roles/libvirt/setup/undercloud/templates/update_image.sh.j2 -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/undercloud/templates<br />copying roles/parts/README.md -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt/tasks<br />copying roles/parts/libvirt/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/parts/libvirt/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/library<br />copying library/generate_macs.py -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/library/<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/meta<br />copying roles/provision/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/defaults<br />copying roles/provision/defaults/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/provision/defaults<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/meta<br />copying roles/environment/meta/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/environment/meta<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user/tasks<br />copying roles/libvirt/setup/user/tasks/main.yml -> /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/setup/user/tasks<br />creating /home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/meta<br />copying roles/libvirt/teardown/meta/main.yml -> 
/home/boris/.quickstart/usr/local/share/tripleo-quickstart/roles/libvirt/teardown/meta<br />running install_egg_info<br />running egg_info<br />creating /home/boris/.quickstart/tripleo_quickstart.egg-info<br />writing pbr to /home/boris/.quickstart/tripleo_quickstart.egg-info/pbr.json<br />writing requirements to /home/boris/.quickstart/tripleo_quickstart.egg-info/requires.txt<br />writing /home/boris/.quickstart/tripleo_quickstart.egg-info/PKG-INFO<br />writing top-level names to /home/boris/.quickstart/tripleo_quickstart.egg-info/top_level.txt<br />writing dependency_links to /home/boris/.quickstart/tripleo_quickstart.egg-info/dependency_links.txt<br />[pbr] Processing SOURCES.txt<br />writing manifest file '/home/boris/.quickstart/tripleo_quickstart.egg-info/SOURCES.txt'<br />[pbr] In git context, generating filelist from git<br />warning: no files found matching 'AUTHORS'<br />warning: no files found matching 'ChangeLog'<br />warning: no previously-included files matching '*.pyc' found anywhere in distribution<br />writing manifest file '/home/boris/.quickstart/tripleo_quickstart.egg-info/SOURCES.txt'<br />Copying /home/boris/.quickstart/tripleo_quickstart.egg-info to /home/boris/.quickstart/lib/python2.7/site-packages/tripleo_quickstart-1.0.1.dev217-py2.7.egg-info<br />running install_scripts</span></span></span></span></span></span><br />
********************************************************************************<br />
<div style="text-align: left;">
<i><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"> Reverting the commits causes the downloads below to run and `setup.py install` to execute, setting up the Ansible environment needed for a successful `quickstart.sh` run.</span></span></span></span></span></span></i><br />
<i><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">********************************************************************************</span></span></span></span></span></span></i></div>
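For reference, a run like the one traced below is normally started with a single `quickstart.sh` invocation. The sketch only assembles and prints the command line; the release, config file, and virthost IP are taken from this log, but treat the exact flag names as assumptions to check against your tripleo-quickstart checkout:

```shell
# Build the quickstart.sh command line matching this walkthrough.
# Run the printed command from a tripleo-quickstart checkout;
# here we only construct and echo it.
VIRTHOST=192.168.0.74
RELEASE=newton
CONFIG=config/general_config/ha.yml
CMD="bash quickstart.sh --release $RELEASE --config $CONFIG $VIRTHOST"
echo "$CMD"
```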
<span style="font-weight: normal;"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">Collecting ansible==2.2.0.0 (from -r requirements.txt (line 1))</span></span></span></span></span></span></span>
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><i> Downloading ansible-2.2.0.0.tar.gz (2.4MB)<br /> 100% |################################| 2.4MB 5.9MB/s <br />Collecting netaddr>=0.7.18 (from -r requirements.txt (line 2))<br /> Downloading netaddr-0.7.18-py2.py3-none-any.whl (1.5MB)<br /> 100% |################################| 1.5MB 3.8MB/s <br />Collecting pbr>=1.6 (from -r requirements.txt (line 3))<br /> Downloading pbr-1.10.0-py2.py3-none-any.whl (96kB)<br /> 100% |################################| 102kB 5.9MB/s <br />Requirement already satisfied: setuptools>=11.3 in /home/boris/.quickstart/lib/python2.7/site-packages (from -r requirements.txt (line 4))<br />Collecting tripleo-quickstart-extras from git+https://git.openstack.org/openstack/tripleo-quickstart-extras/#egg=tripleo-quickstart-extras (from -r quickstart-extras-requirements.txt (line 1))<br /> Cloning https://git.openstack.org/openstack/tripleo-quickstart-extras/ to /tmp/pip-build-QpkA1O/tripleo-quickstart-extras<br />Collecting paramiko (from ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading paramiko-2.1.1-py2.py3-none-any.whl (172kB)<br /> 100% |################################| 174kB 5.0MB/s <br />Collecting jinja2 (from ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading Jinja2-2.8.1-py2.py3-none-any.whl (264kB)<br /> 100% |################################| 266kB 4.0MB/s <br />Collecting PyYAML (from ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading PyYAML-3.12.tar.gz (253kB)<br /> 100% |################################| 256kB 3.8MB/s <br />Collecting pycrypto>=2.6 (from ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading pycrypto-2.6.1.tar.gz (446kB)<br /> 100% |################################| 450kB 5.5MB/s <br />Collecting pyasn1>=0.1.7 (from paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> 
Downloading pyasn1-0.1.9-py2.py3-none-any.whl<br />Collecting cryptography>=1.1 (from paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading cryptography-1.7.1.tar.gz (420kB)<br /> 100% |################################| 430kB 5.9MB/s <br />Collecting MarkupSafe (from jinja2->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading MarkupSafe-0.23.tar.gz<br />Collecting idna>=2.0 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading idna-2.2-py2.py3-none-any.whl (55kB)<br /> 100% |################################| 61kB 8.1MB/s <br />Collecting six>=1.4.1 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading six-1.10.0-py2.py3-none-any.whl<br />Collecting enum34 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading enum34-1.1.6-py2-none-any.whl<br />Collecting ipaddress (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading ipaddress-1.0.17-py2-none-any.whl<br />Collecting cffi>=1.4.1 (from cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading cffi-1.9.1-cp27-cp27mu-manylinux1_x86_64.whl (387kB)<br /> 100% |################################| 389kB 5.4MB/s <br />Collecting pycparser (from cffi>=1.4.1->cryptography>=1.1->paramiko->ansible==2.2.0.0->-r requirements.txt (line 1))<br /> Downloading pycparser-2.17.tar.gz (231kB)<br /> 100% |################################| 235kB 6.1MB/s <br />Installing collected packages: pyasn1, idna, six, enum34, ipaddress, pycparser, cffi, cryptography, paramiko, MarkupSafe, jinja2, PyYAML, pycrypto, ansible, netaddr, pbr, tripleo-quickstart-extras</i><br /> Running setup.py install for pycparser ... done<br /> Running setup.py install for cryptography ... done<br /> Running setup.py install for MarkupSafe ... done<br /> Running setup.py install for PyYAML ... 
done<br /> Running setup.py install for pycrypto ... done<br /> Running setup.py install for ansible ... done<br /> Running setup.py install for tripleo-quickstart-extras ... done<br />Successfully installed MarkupSafe-0.23 PyYAML-3.12 ansible-2.2.0.0 cffi-1.9.1 cryptography-1.7.1 enum34-1.1.6 idna-2.2 ipaddress-1.0.17 jinja2-2.8.1 netaddr-0.7.18 paramiko-2.1.1 pbr-1.10.0 pyasn1-0.1.9 pycparser-2.17 pycrypto-2.6.1 six-1.10.0 tripleo-quickstart-extras-0.0.1.dev528<br />~/.quickstart/tripleo-quickstart<br /><b>----------------------------------------------------------------------------<br />| , . , |<br />| )-_'''_-( |<br />| ./ o\ /o \. |<br />| . \__/ \__/ . |<br />| ... V ... |<br />| ... - - - ... |<br />| . - - . |<br />| `-.....-´ |<br />| ____ ____ ____ _ _ _ _ |<br />| / __ \ / __ \ / __ \ (_) | | | | | | |<br />| | | | | ___ | | | | | | | |_ _ _ ___| | _____| |_ __ _ _ __| |_ |<br />| | | | |/ _ \| | | | | | | | | | | |/ __| |/ / __| __/ _` | '__| __| |<br />| | |__| | |_| | |__| | | |__| | |_| | | (__| <\__ \ |_|(_| | | | |_ |<br />| \____/ \___/ \____/ \___\_\\__,_|_|\___|_|\_\___/\__\__,_|_| \__| |<br />| |<br />| |<br />----------------------------------------------------------------------------</b><br /><br />Installing OpenStack newton on host 192.168.0.74<br />Using directory /home/boris/.quickstart for a local working directory<br />+ export ANSIBLE_CONFIG=/home/boris/.quickstart/tripleo-quickstart/ansible.cfg<br />+ ANSIBLE_CONFIG=/home/boris/.quickstart/tripleo-quickstart/ansible.cfg<br />+ export ANSIBLE_INVENTORY=/home/boris/.quickstart/hosts<br />+ ANSIBLE_INVENTORY=/home/boris/.quickstart/hosts<br />+ source /home/boris/.quickstart/tripleo-quickstart/ansible_ssh_env.sh<br />++ export OPT_WORKDIR=/home/boris/.quickstart<br />++ OPT_WORKDIR=/home/boris/.quickstart<br />++ export SSH_CONFIG=/home/boris/.quickstart/ssh.config.ansible<br />++ SSH_CONFIG=/home/boris/.quickstart/ssh.config.ansible<br />++ touch 
/home/boris/.quickstart/ssh.config.ansible<br />++ export 'ANSIBLE_SSH_ARGS=-F /home/boris/.quickstart/ssh.config.ansible'<br />++ ANSIBLE_SSH_ARGS='-F /home/boris/.quickstart/ssh.config.ansible'<br />+ '[' 0 = 0 ']'<br />+ rm -f /home/boris/.quickstart/hosts<br />+ '[' 192.168.0.74 = localhost ']'<br />+ '[' '' = 1 ']'<br />+ VERBOSITY=vv<br />+ ansible-playbook -vv /home/boris/.quickstart/playbooks/quickstart-extras.yml -e @config/general_config/ha.yml -e ansible_python_interpreter=/usr/bin/python -e @/home/boris/.quickstart/config/release/newton.yml -e local_working_dir=/home/boris/.quickstart -e virthost=192.168.0.74 -t untagged,provision,environment,undercloud-scripts,overcloud-scripts,undercloud-install,undercloud-post-install,teardown-nodes<br />Using /home/boris/.quickstart/tripleo-quickstart/ansible.cfg as config file<br /> [WARNING]: Host file not found: /home/boris/.quickstart/hosts<br /><br /> [WARNING]: provided hosts list is empty, only localhost is available<br /><br />statically included: /home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/inject_repos.yml<br />statically included: /home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/inject_gating_repo.yml<br />statically included: /home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/convert_image.yml<br />statically included: /home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/update_image.yml<br />statically included: /home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/customize_overcloud.yml<br />statically included: /home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/create-scripts.yml<br />statically included: /home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/install-undercloud.yml<br />statically included: 
/home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/post-install.yml</span></span></span><br /> </span></span></span>
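The `+ export` lines in the trace above boil down to a handful of environment variables that point Ansible at the quickstart working directory. A minimal sketch, assuming the default `~/.quickstart` workdir shown in the log:

```shell
# Environment quickstart.sh prepares before calling ansible-playbook
# (mirrors the "+ export" lines in the trace; paths are the defaults).
export OPT_WORKDIR="$HOME/.quickstart"
export ANSIBLE_CONFIG="$OPT_WORKDIR/tripleo-quickstart/ansible.cfg"
export ANSIBLE_INVENTORY="$OPT_WORKDIR/hosts"
export SSH_CONFIG="$OPT_WORKDIR/ssh.config.ansible"
export ANSIBLE_SSH_ARGS="-F $SSH_CONFIG"
echo "$ANSIBLE_SSH_ARGS"
```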
<br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">. . . . . . </span></span></span></span></span></span><br />
<br />
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain">PLAY RECAP <br />
*********************************************************************<br />192.168.0.74 : ok=107 changed=36 unreachable=0 failed=0 <br />localhost : ok=19 changed=8 unreachable=0 failed=0 <br />undercloud : ok=31 changed=22 unreachable=0 failed=0 <br /><br />Monday 02 January 2017 13:03:48 +0300 (0:00:00.716) 0:32:39.725 ******** <br />=================================================<br />undercloud-deploy : Install the undercloud ---------------------------- 993.80s<br />/home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/install-undercloud.yml:15 <br />overcloud-prep-images : Prepare the overcloud images for deploy ------- 329.70s<br />/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-prep-images/tasks/overcloud-prep-images.yml:1 <br />setup/undercloud : Perform selinux relabel on undercloud image -------- 124.89s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:154 <br />setup/undercloud : Resize undercloud image (call virt-resize) ---------- 67.62s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:190 <br />setup/undercloud : Upload undercloud volume to storage pool ------------ 55.47s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:257 <br />setup/undercloud : Copy instackenv.json to appliance ------------------- 36.71s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:73 <br />fetch-images : Get qcow2 image from cache ------------------------------ 30.23s<br />/home/boris/.quickstart/tripleo-quickstart/roles/fetch-images/tasks/fetch.yml:127 <br />overcloud-prep-flavors : Prepare the scripts for overcloud flavors ----- 26.48s<br />/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-prep-flavors/tasks/overcloud-prep-flavors.yml:1 <br />setup/undercloud : Get undercloud vm ip address ------------------------ 12.76s<br 
/>/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:295 <br />parts/libvirt : Install packages for libvirt ---------------------------- 8.58s<br />/home/boris/.quickstart/tripleo-quickstart/roles/parts/libvirt/tasks/main.yml:30 <br />setup/overcloud : Create overcloud vm storage --------------------------- 7.58s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:62 <br />setup/overcloud : Define overcloud vms ---------------------------------- 7.04s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:74 <br />parts/libvirt : If ipxe-roms-qemu is not installed, install a known good version --- 6.98s<br />/home/boris/.quickstart/tripleo-quickstart/roles/parts/libvirt/tasks/main.yml:20 <br />setup/undercloud : Inject undercloud ssh public key to appliance -------- 6.77s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:99 <br />teardown/nodes : Delete baremetal vm storage ---------------------------- 6.58s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/teardown/nodes/tasks/main.yml:53 <br />teardown/nodes : Check overcloud vms ------------------------------------ 6.56s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/teardown/nodes/tasks/main.yml:22 <br />setup/overcloud : Check if overcloud volumes exist ---------------------- 6.50s<br />/home/boris/.quickstart/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:53 <br />overcloud-prep-network : Prepare the network-isolation required networks on the undercloud --- 6.18s<br />/home/boris/.quickstart/usr/local/share/ansible/roles/overcloud-prep-network/tasks/overcloud-prep-network.yml:1 <br />undercloud-deploy : Create undercloud configuration --------------------- 5.27s<br />/home/boris/.quickstart/usr/local/share/ansible/roles/undercloud-deploy/tasks/create-scripts.yml:3 <br />setup 
------------------------------------------------------------------- 5.05s<br /> ------------------------------------------------------------------------------<br />+ set +x<br />[boris@fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud<br />Warning: Permanently added '192.168.0.74' (ECDSA) to the list of known hosts.<br />Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.<br />Last login: Mon Jan 2 10:03:44 2017 from gateway<br />[stack@undercloud ~]$ . stackrc<br />[stack@undercloud ~]$ ls -l<br />total 1625036<br />-rwxr-xr-x. 1 stack stack 770 Jan 2 09:56 <span style="color: #38761d;">containers-default-parameters.yaml</span><br />-rw-rw-r--. 1 stack stack 22051 Jan 2 09:34 instackenv.json<br />-rw-r--r--. 1 root root 355820146 Dec 29 09:00 ironic-python-agent.initramfs<br />-rwxr-xr-x. 1 root root 5393328 Dec 29 09:00 <span style="color: #38761d;">ironic-python-agent.kernel</span><br />-rw-r--r--. 1 stack stack 474 Jan 2 09:56 network-environment.yaml<br />-rwxr-xr-x. 1 stack stack 208 Jan 2 10:03 neutronl3ha.yaml<br />-rw-rw-r--. 1 stack stack 0 Jan 2 09:56 overcloud_custom_tht_script.log<br />-rwxr-xr-x. 1 stack stack 293 Jan 2 09:56 <span style="color: #38761d;">overcloud-custom-tht-script.sh</span><br />-rwxr-xr-x. 1 stack stack 1012 Jan 2 10:03 <span style="color: #38761d;">overcloud-deploy-post.sh</span><br />-rwxr-xr-x. 1 stack stack 2900 Jan 2 10:03 <span style="color: #38761d;">overcloud-deploy.sh</span><br />-rw-r--r--. 1 root root 46801971 Dec 29 09:01 overcloud-full.initrd<br />-rw-r--r--. 1 root root 1250309120 Dec 29 09:01 overcloud-full.qcow2<br />-rwxr-xr-x. 1 root root 5393328 Dec 29 09:01 overcloud-full.vmlinuz<br />-rwxr-xr-x. 1 stack stack 3932 Jan 2 09:56 <span style="color: #38761d;">overcloud-prep-containers.sh</span><br />-rw-rw-r--. 1 stack stack 7336 Jan 2 10:03 overcloud_prep_flavors.log<br />-rwxr-xr-x. 
1 stack stack 3672 Jan 2 10:02 <span style="color: #38761d;">overcloud-prep-flavors.sh</span><br />-rw-rw-r--. 1 stack stack 5039 Jan 2 10:02 overcloud_prep_images.log<br />-rwxr-xr-x. 1 stack stack 746 Jan 2 09:57 <span style="color: #38761d;">overcloud-prep-images.sh</span><br />-rw-rw-r--. 1 stack stack 1315 Jan 2 10:03 overcloud_prep_network.log<br />-rwxr-xr-x. 1 stack stack 861 Jan 2 10:03 <span style="color: #38761d;">overcloud-prep-network.sh</span><br />-rw-------. 1 stack stack 351 Jan 2 09:39 quickstart-hieradata-overrides.yaml<br />-rw-------. 1 stack stack 587 Jan 2 09:55 stackrc<br />-rw-------. 1 stack stack 7868 Jan 2 09:39 undercloud.conf<br />-rw-rw-r--. 1 stack stack 191197 Jan 2 09:56 undercloud_install.log<br />-rwxr-xr-x. 1 stack stack 151 Jan 2 09:39 <span style="color: #38761d;">undercloud-install.sh</span><br />-rw-rw-r--. 1 stack stack 1650 Jan 2 09:40 undercloud-passwords.conf<br />-rwxr-xr-x. 1 stack stack 494 Jan 2 09:57 <span style="color: #38761d;">upload_images_to_local_registry.py</span><br /> </span></span></span></span></span></span><br />
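The `ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud` login shown above works because quickstart generates an SSH config that reaches the undercloud VM through the virthost. A hypothetical sketch of such an entry follows; the directives (`Hostname`, `User`, `IdentityFile`, `ProxyCommand`) are standard OpenSSH options, but the addresses and key path here are assumptions, not copied from the generated file:

```
Host undercloud
    Hostname 192.168.23.2                          # undercloud VM address (assumption)
    User stack
    IdentityFile ~/.quickstart/id_rsa_undercloud   # key path (assumption)
    # Tunnel through the virthost running the libvirt VMs
    ProxyCommand ssh -W %h:%p root@192.168.0.74
```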
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcW5Hj8IKRbvqYgJDdKYeFt3CSwUOWhoDQI5nvIq83zpRnUTUflidYEVnOM6_EPQeEVOlAk8ieb93Y_3H93-r24W87Q-kqeGS4diZ7SUgf6FQ8gu9OGNeYHAkxRVCyLmPojcBghw/s1600/Screenshot+from+2017-01-02+12-26-56.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhcW5Hj8IKRbvqYgJDdKYeFt3CSwUOWhoDQI5nvIq83zpRnUTUflidYEVnOM6_EPQeEVOlAk8ieb93Y_3H93-r24W87Q-kqeGS4diZ7SUgf6FQ8gu9OGNeYHAkxRVCyLmPojcBghw/s640/Screenshot+from+2017-01-02+12-26-56.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoCq63HTQEWEgLdcgI8tpI4URsHlqGNmVV2gVsX9SV9Ie_CxxQhZ7Tu2Y5NKuTNijcFZsNFs1lqRYwlhoISH17u1VUv6By3HRH_WyZ3TeJyp31hnHK7dWXZ7eaZnvaDPSVzX1gyg/s1600/Screenshot+from+2017-01-02+12-40-55.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoCq63HTQEWEgLdcgI8tpI4URsHlqGNmVV2gVsX9SV9Ie_CxxQhZ7Tu2Y5NKuTNijcFZsNFs1lqRYwlhoISH17u1VUv6By3HRH_WyZ3TeJyp31hnHK7dWXZ7eaZnvaDPSVzX1gyg/s640/Screenshot+from+2017-01-02+12-40-55.png" width="640" /> </a></span></span></span></span></span></span></div>
<div class="separator" style="clear: both; text-align: center;">
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOPYYqyozUR6GUSix4Yb_QS_qqY7rvlIM9jJWRfcLpxdPB9AQchEGMU6lepF-wOUGzq_R2GLLOJWciYNIhuHH5odDdW4oAtEoZkBCuVdPEnK_8KOtUYR-Z7KPN71r5F5MfltbZcw/s1600/Screenshot+from+2017-01-02+12-58-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOPYYqyozUR6GUSix4Yb_QS_qqY7rvlIM9jJWRfcLpxdPB9AQchEGMU6lepF-wOUGzq_R2GLLOJWciYNIhuHH5odDdW4oAtEoZkBCuVdPEnK_8KOtUYR-Z7KPN71r5F5MfltbZcw/s640/Screenshot+from+2017-01-02+12-58-07.png" width="640" /></a></span></span></span></span></span></span></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><span class="float-right"><span class="sha-block"><span class="sha user-select-contain"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqd8Fof3Em6xMHTafG1qcDu0Ow9Jz1mEE4CNtRhgrH5gBhr2s2yWcEFcUm4s8lPrkgESm9HDF2KsP6ZDdzzbZeOEJ5fJCPAcqYxxOmTU2aFFjioENL60Ce1BoWo15kz_CZyfA20w/s1600/Screenshot+from+2017-01-02+13-05-14.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqd8Fof3Em6xMHTafG1qcDu0Ow9Jz1mEE4CNtRhgrH5gBhr2s2yWcEFcUm4s8lPrkgESm9HDF2KsP6ZDdzzbZeOEJ5fJCPAcqYxxOmTU2aFFjioENL60Ce1BoWo15kz_CZyfA20w/s640/Screenshot+from+2017-01-02+13-05-14.png" width="640" /></a></span></span></span></span></span></span></div>
<br />
<div style="text-align: left;">
<br />
<div class="separator" style="clear: both; text-align: left;">
<br />[stack@undercloud ~]$ ./overcloud-deploy.sh</div>
<pre><div class="separator" style="clear: both; text-align: left;">
</div>
<pre>+ source /home/stack/stackrc
++ NOVA_VERSION=1.1
++ export NOVA_VERSION
+++ sudo hiera admin_password
++ OS_PASSWORD=6bf7c75cc8d09686c0fc526c3fa5b452e1996844
++ export OS_PASSWORD
++ OS_AUTH_URL=https://192.168.24.2:13000/v2.0
++ PYTHONWARNINGS='ignore:Certificate has no, ignore:A true SSLContext object is not available'
++ export OS_AUTH_URL
++ export PYTHONWARNINGS
++ OS_USERNAME=admin
++ OS_TENANT_NAME=admin
++ COMPUTE_API_VERSION=1.1
++ OS_BAREMETAL_API_VERSION=1.15
++ OS_NO_CACHE=True
++ OS_CLOUDNAME=undercloud
++ OS_IMAGE_API_VERSION=1
++ export OS_USERNAME
++ export OS_TENANT_NAME
++ export COMPUTE_API_VERSION
++ export OS_BAREMETAL_API_VERSION
++ export OS_NO_CACHE
++ export OS_CLOUDNAME
++ export OS_IMAGE_API_VERSION
+ true
++ openstack hypervisor stats show -c count -f value
+ count=6
+ '[' 6 -gt 0 ']'
+ break
+ openstack overcloud deploy --templates /usr/share/openstack-tripleo-heat-templates
--libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute
--ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage
--swift-storage-flavor oooq_objectstorage --timeout 90
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
-e /home/stack/network-environment.yaml
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml
-e /home/stack/neutronl3ha.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml
--validation-warnings-fatal --control-scale 3 --compute-scale 1 --ceph-storage-scale 2
--neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 017ae06f-2b09-4a90-8022-6d5fd2215674
Plan updated
Deploying templates in the directory /tmp/tripleoclient-TvEeVV/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 7c5a7903-4950-47fe-bffe-8b5e51e0809e
2017-01-02 10:50:42Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2017-01-02 10:50:42Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:43Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:43Z [overcloud.RabbitCookie]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:43Z [overcloud.Networks]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:44Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:44Z [overcloud.HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:44Z [overcloud.Networks]: CREATE_IN_PROGRESS Stack CREATE started
2017-01-02 10:50:44Z [overcloud.Networks.InternalNetwork]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:44Z [overcloud.MysqlRootPassword]: CREATE_IN_PROGRESS state changed
2017-01-02 10:50:45Z [overcloud.ServiceNetMap]: CREATE_COMPLETE state changed
. . . . . </pre>
2017-01-02 11:42:00Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS state changed
<div class="separator" style="clear: both; text-align: left;">
2017-01-02 11:43:00Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE state changed
2017-01-02 11:43:01Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE Stack CREATE completed successfully
2017-01-02 11:43:02Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE state changed
2017-01-02 11:43:02Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
2017-01-02 11:43:03Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
2017-01-02 11:43:03Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Started Mistral Workflow. Execution ID: 634338d8-1424-4e31-868b-a4826127a0aa
Overcloud Endpoint: http://10.0.0.7:5000/v2.0
Overcloud Deployed
+ heat stack-list
+ grep -q CREATE_FAILED
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ecd3870d-83c4-46c8-a7a0-24742f6f22a8 | overcloud-cephstorage-0 | ACTIVE | - | Running | ctlplane=192.168.24.6 |
| de9a1166-771e-4a50-b087-23915e97d64f | overcloud-cephstorage-1 | ACTIVE | - | Running | ctlplane=192.168.24.16 |
| dc3b86a2-769e-4616-8a17-fcc4ad0db83d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.24.13 |
| 8290ffbe-3c8b-4d2d-ae0a-bfc0c2e5bd01 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.18 |
| d05025e8-179e-4d66-a15f-1d33ecd661b1 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.10 |
| 4c3c5717-0868-4d93-bd5e-e1c418cd39ac | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.8 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
[stack@undercloud ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.7:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.0.0.7,192.168.24.7
export OS_PASSWORD=UQzvXK3FexYxsyRrzjYc9Bq9J
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin
[stack@undercloud ~]$ ssh heat-admin@192.168.24.13
The authenticity of host '192.168.24.13 (192.168.24.13)' can't be established.
ECDSA key fingerprint is b2:a5:15:6f:ce:04:39:df:37:3a:eb:81:af:d5:68:c9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.24.13' (ECDSA) to the list of known hosts.
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# vi overcloudrc
[root@overcloud-controller-0 ~]# . overcloudrc
[root@overcloud-controller-0 ~]# pcs status
Cluster name: tripleo_cluster
Stack: corosync
Current DC: overcloud-controller-2 (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Mon Jan 2 11:45:56 2017 Last change: Mon Jan 2 11:41:49 2017 by root via cibadmin on overcloud-controller-1
3 nodes and 19 resources configured
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
ip-172.16.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.3.8 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-10.0.0.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
ip-192.168.24.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.2.14 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-0 ]
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
[root@overcloud-controller-0 ~]# ceph status
cluster b2826c88-d0d1-11e6-91bc-00ff8b05e286
health HEALTH_OK
monmap e1: 3 mons at {overcloud-controller-0=172.16.1.5:6789/0,overcloud-controller-1=172.16.1.11:6789/0,overcloud-controller-2=172.16.1.6:6789/0}
election epoch 8, quorum 0,1,2 overcloud-controller-0,overcloud-controller-2,overcloud-controller-1
osdmap e15: 2 osds: 2 up, 2 in
flags sortbitwise
pgmap v144: 224 pgs, 6 pools, 0 bytes data, 0 objects
16964 MB used, 85411 MB / 102375 MB avail</div>
224 active+clean</pre>
</div>
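The xtrace at the top of the log shows overcloud-deploy.sh polling <tt>openstack hypervisor stats show</tt> until the node count is positive (<tt>+ true ... + count=6 ... + break</tt>) before launching the deploy. A minimal sketch of such a wait loop, under the assumption that the script structures it as a function (wait_for_hypervisors is an illustrative name, not taken from the real script):

```shell
# Poll a command that prints the hypervisor count until it is > 0,
# then report success. The command (and its args) is passed in, e.g.:
#   wait_for_hypervisors openstack hypervisor stats show -c count -f value
# NOTE: wait_for_hypervisors is a hypothetical helper, not from overcloud-deploy.sh.
wait_for_hypervisors() {
    while true; do
        count=$("$@")              # run the supplied count command
        if [ "$count" -gt 0 ]; then
            echo "found $count hypervisors"
            return 0
        fi
        sleep 10                   # back off before polling again
    done
}
```

Only once the loop breaks does the script issue the long <tt>openstack overcloud deploy</tt> command shown above.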
<br />
**********************************************<br />
What happens without revert.sh running<br />
**********************************************<br />
[pbr] Processing SOURCES.txt<br />
writing manifest file '/home/boris/.quickstart/tripleo_quickstart.egg-info/SOURCES.txt'<br />
[pbr] In git context, generating filelist from git<br />
warning: no files found matching 'AUTHORS'<br />
warning: no files found matching 'ChangeLog'<br />
warning: no previously-included files matching '*.pyc' found anywhere in distribution<br />
writing manifest file '/home/boris/.quickstart/tripleo_quickstart.egg-info/SOURCES.txt'<br />
Copying /home/boris/.quickstart/tripleo_quickstart.egg-info to /home/boris/.quickstart/lib/python2.7/site-packages/tripleo_quickstart-1.0.1.dev217-py2.7.egg-info<br />
running install_scripts<br />
Collecting tripleo-quickstart-extras from git+https://git.openstack.org/openstack/tripleo-quickstart-extras/#egg=tripleo-quickstart-extras (from -r quickstart-extras-requirements.txt (line 1))<br />
<span style="color: #b45f06;"> Cloning https://git.openstack.org/openstack/tripleo-quickstart-extras/ to /tmp/pip-build-6Pyw8Y/tripleo-quickstart-extras</span><br />
<span style="color: #b45f06;">Installing collected packages: tripleo-quickstart-extras<br /> Running setup.py install for tripleo-quickstart-extras ... done<br />Successfully installed tripleo-quickstart-extras-0.0.1.dev542<br />~/.quickstart/tripleo-quickstart</span><br />
----------------------------------------------------------------------------<br />
| , . , |<br />
| )-_'''_-( |<br />
| ./ o\ /o \. |<br />
| . \__/ \__/ . |<br />
| ... V ... |<br />
| ... - - - ... |<br />
| . - - . |<br />
| `-.....-´ |<br />
| ____ ____ ____ _ _ _ _ |<br />
| / __ \ / __ \ / __ \ (_) | | | | | | |<br />
| | | | | ___ | | | | | | | |_ _ _ ___| | _____| |_ __ _ _ __| |_ |<br />
| | | | |/ _ \| | | | | | | | | | | |/ __| |/ / __| __/ _` | '__| __| |<br />
| | |__| | |_| | |__| | | |__| | |_| | | (__| <\__ \ |_|(_| | | | |_ |<br />
| \____/ \___/ \____/ \___\_\\__,_|_|\___|_|\_\___/\__\__,_|_| \__| |<br />
| |<br />
| |<br />
----------------------------------------------------------------------------<br />
<br />
<br />
Installing OpenStack newton on host 192.168.0.74<br />
Using directory /home/boris/.quickstart for a local working directory<br />
+ export ANSIBLE_CONFIG=/home/boris/.quickstart/tripleo-quickstart/ansible.cfg<br />
+ ANSIBLE_CONFIG=/home/boris/.quickstart/tripleo-quickstart/ansible.cfg<br />
+ export ANSIBLE_INVENTORY=/home/boris/.quickstart/hosts<br />
+ ANSIBLE_INVENTORY=/home/boris/.quickstart/hosts<br />
+ source /home/boris/.quickstart/tripleo-quickstart/ansible_ssh_env.sh<br />
++ export OPT_WORKDIR=/home/boris/.quickstart<br />
++ OPT_WORKDIR=/home/boris/.quickstart<br />
++ export SSH_CONFIG=/home/boris/.quickstart/ssh.config.ansible<br />
++ SSH_CONFIG=/home/boris/.quickstart/ssh.config.ansible<br />
++ touch /home/boris/.quickstart/ssh.config.ansible<br />
++ export 'ANSIBLE_SSH_ARGS=-F /home/boris/.quickstart/ssh.config.ansible'<br />
++ ANSIBLE_SSH_ARGS='-F /home/boris/.quickstart/ssh.config.ansible'<br />
+ '[' 0 = 0 ']'<br />
+ rm -f /home/boris/.quickstart/hosts<br />
+ '[' 192.168.0.74 = localhost ']'<br />
+ '[' '' = 1 ']'<br />
+ VERBOSITY=vv<br />
+ ansible-playbook -vv /home/boris/.quickstart/playbooks/quickstart-extras.yml -e @config/general_config/ha.yml -e ansible_python_interpreter=/usr/bin/python -e @/home/boris/.quickstart/config/release/newton.yml -e local_working_dir=/home/boris/.quickstart -e virthost=192.168.0.74 -t untagged,provision,environment,undercloud-scripts,overcloud-scripts,undercloud-install,undercloud-post-install,teardown-nodes<br />
<span style="color: #b45f06;">quickstart.sh: line 433: ansible-playbook: command not found</span><br />
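The failure above happens because, without revert.sh, quickstart.sh ends up running outside the virtualenv under ~/.quickstart where ansible-playbook was just installed, so the command is not on PATH. A guard one could add before the playbook invocation (require_cmd is an illustrative helper, not part of quickstart.sh):

```shell
# Fail fast with a clear hint if a required command is missing from PATH,
# instead of hitting "command not found" deep inside the script.
# require_cmd is a hypothetical helper, not from tripleo-quickstart.
require_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        return 0
    fi
    echo "error: $1 not found on PATH; activate the quickstart virtualenv first" >&2
    return 1
}
```

Usage: <tt>require_cmd ansible-playbook && ansible-playbook -vv ...</tt>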
</div>
RDO Newton Instack-virt-setup deployment with routable control plane on CentOS 7.3 (Boris Derzhavets, 2016-12-25)<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
Following below is an instack-virt-setup deployment that creates a routable control plane via a modified ~stack/undercloud.conf, assigning 192.168.24.0/24 to serve this purpose. It also utilizes the RDO Newton "current-passed-ci" trunk and the corresponding TripleO QuickStart pre-built images, which are kept in sync with the trunk as soon as they are built during CI. TripleO QuickStart itself appears to be under heavy development almost all the time, even on the Newton stable branch.<br />
<br />
<br />
*********************************************************************************<br />
Run on VIRTHOST (presuming the stack account has already been set up<br />
and tuned as required for deployment)<br />
*********************************************************************************<br />
<br />
<br />
<pre>sudo yum -y install yum-plugin-priorities
sudo curl -o /etc/yum.repos.d/delorean-newton.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-newton-tested/delorean.repo
sudo curl -o /etc/yum.repos.d/delorean-deps-newton.repo https://trunk.rdoproject.org/centos7-newton/delorean-deps.repo
$ sudo yum -y update
$ sudo yum install -y instack-undercloud
$ instack-virt-setup
</pre>
<br />
<i><span style="color: #b45f06;">***** Instack VM setup *****</span></i><br />
<br />
After logging into the "instack VM" (the undercloud VM), create a 4 GB swap file and restart the "instack VM"<br />
<br />
<pre><code class="no-highlight">[root@instack ~]# dd if=/dev/zero of=/swapfile bs=1024 count=4194304
4194304+0 records in
4194304+0 records out
4294967296 bytes (4.3 GB) copied, 6.13213 s, 700 MB/s
[root@instack ~]# mkswap /swapfile
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=5d32541b-09f1-4fdd-a4a8-fd284c358255
[root@instack ~]# chmod 600 /swapfile
[root@instack ~]# swapon /swapfile
[root@instack ~]# echo "/swapfile swap swap defaults 0 0" \</code>
<code class="no-highlight"> >> /etc/fstab</code></pre>
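The dd/mkswap/swapon sequence above can be parameterized. The sketch below only prints the commands (a dry run), since actually creating and enabling swap requires root; make_swap_cmds and its arguments are illustrative, not from the post:

```shell
# Print the commands needed to create and enable a swap file of the given
# size in GiB (pipe the output to "sudo sh" to execute for real).
# make_swap_cmds is a hypothetical helper.
make_swap_cmds() {
    size_gib=$1
    file=${2:-/swapfile}
    blocks=$((size_gib * 1024 * 1024))   # dd uses 1 KiB blocks (bs=1024)
    echo "dd if=/dev/zero of=$file bs=1024 count=$blocks"
    echo "chmod 600 $file"
    echo "mkswap $file"
    echo "swapon $file"
    echo "echo '$file swap swap defaults 0 0' >> /etc/fstab"
}
```

For the 4 GB file above, <tt>make_swap_cmds 4</tt> reproduces the same <tt>count=4194304</tt> used in the dd invocation.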
<i><span style="color: #b45f06;">***** Restart and logging again ******</span></i><br />
<pre>$ sudo yum -y install yum-plugin-priorities
$ sudo curl -o /etc/yum.repos.d/delorean-newton.repo \
http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-newton-tested/delorean.repo
$ sudo curl -o /etc/yum.repos.d/delorean-deps-newton.repo \
https://trunk.rdoproject.org/centos7-newton/delorean-deps.repo
$ sudo yum -y upgrade mariadb-libs # required on CentOS 7.3
$ sudo yum -y install --enablerepo=extras centos-release-ceph-jewel
$ sudo sed -i -e 's%gpgcheck=.*%gpgcheck=0%' /etc/yum.repos.d/CentOS-Ceph-Jewel.repo
</pre>
[root@instack ~]# su - stack<br />
[stack@instack ~]$ cat .bashrc<br />
<br />
# .bashrc<br />
# Source global definitions<br />
if [ -f /etc/bashrc ]; then<br />
. /etc/bashrc<br />
fi<br />
<br />
# Uncomment the following line if you don't like systemctl's auto-paging feature:<br />
# export SYSTEMD_PAGER=<br />
<span style="color: #b45f06;">export NODE_DIST=centos7</span><br />
<span style="color: #b45f06;">export USE_DELOREAN_TRUNK=1</span><br />
<span style="color: #b45f06;">export DELOREAN_TRUNK_REPO=" http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-newton-tested/"</span><br />
<span style="color: #b45f06;">export DELOREAN_REPO_FILE="delorean.repo"</span><br />
<span style="color: #b45f06;">export DIB_YUM_REPO_CONF=/etc/yum.repos.d/delorean*</span><br />
<br />
# User specific aliases and functions<br />
<br />
Re-login as stack (stack =&gt; root =&gt; stack) so the exported variables take effect<br />
<br />
[stack@instack ~]$ export DIB_YUM_REPO_CONF="$DIB_YUM_REPO_CONF /etc/yum.repos.d/CentOS-Ceph-Jewel.repo"<br />
[stack@instack ~]$ echo $DIB_YUM_REPO_CONF<br />
/etc/yum.repos.d/delorean-deps-newton.repo /etc/yum.repos.d/delorean-newton.repo /etc/yum.repos.d/CentOS-Ceph-Jewel.repo<br />
<br />
##############################################<br />
At this point tune undercloud.conf to get a routable ctlplane<br />
set up after `openstack undercloud install` completes<br />
############################################## <br />
<br />
<span style="color: #b45f06;">[stack@instack ~]$ cat undercloud.conf | grep -v ^$|grep -v ^#</span><br />
<span style="color: #b45f06;">[DEFAULT]</span><br />
<span style="color: #b45f06;">local_ip = 192.168.24.1/24</span><br />
<span style="color: #b45f06;">network_gateway = 192.168.24.1</span><br />
<span style="color: #b45f06;">undercloud_public_vip = 192.168.24.2</span><br />
<span style="color: #b45f06;">undercloud_admin_vip = 192.168.24.3</span><br />
<span style="color: #b45f06;">network_cidr = 192.168.24.0/24</span><br />
<span style="color: #b45f06;">masquerade_network = 192.168.24.0/24</span><br />
<span style="color: #b45f06;">dhcp_start = 192.168.24.5</span><br />
<span style="color: #b45f06;">dhcp_end = 192.168.24.24</span><br />
<span style="color: #b45f06;">inspection_iprange = 192.168.24.100,192.168.24.120</span><br />
<span style="color: #b45f06;">[auth]</span><br />
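Every range in undercloud.conf above lives inside 192.168.24.0/24, so one quick sanity check is that the DHCP range (.5&#8211;.24) and the inspection range (.100&#8211;.120) do not overlap. A minimal sketch comparing last octets (ranges_disjoint is an illustrative helper, not a TripleO tool):

```shell
# Check that two last-octet ranges inside the same /24 are disjoint.
# Returns 0 (true) when one range ends before the other begins.
# ranges_disjoint is a hypothetical helper for sanity-checking undercloud.conf.
ranges_disjoint() {
    a_start=$1; a_end=$2; b_start=$3; b_end=$4
    [ "$a_end" -lt "$b_start" ] || [ "$b_end" -lt "$a_start" ]
}
```

For the values above: <tt>ranges_disjoint 5 24 100 120</tt> succeeds, confirming dhcp_start/dhcp_end and inspection_iprange cannot hand out the same address.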
<br />
[stack@instack ~]$ sudo yum install -y python-tripleoclient <br />
[stack@instack ~]$ openstack undercloud install<br />
. . . .<br />
<br />
###############################################################<br />
Undercloud install complete.<br />
The file containing this installation's passwords is at<br />
/home/stack/undercloud-passwords.conf.<br />
There is also a stackrc file at /home/stack/stackrc.<br />
These files are needed to interact with the OpenStack services, and should be<br />
secured.<br />
################################################################<br />
[stack@instack ~]$ . stackrc<br />
[stack@instack ~]$ neutron net-list<br />
<pre>+--------------------------------------+----------+------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+------------------------------------------+
| c645a750-6958-40b1-9e68-0d4d771ea024 | ctlplane | c18bf437-f14c-4dcb-8c7a-7928fa3e0cd7 |
| | | 192.168.24.0/24 |
+--------------------------------------+----------+------------------------------------------+</pre>
<br />
[stack@instack ~]$ sudo yum -y install wget<br />
<br />
#############################################<br />
Install pre-built TripleO QS images<br />
#############################################<br />
<pre>[stack@instack ~]$ wget http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/undercloud.qcow2
--2016-12-25 11:07:41-- http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/undercloud.qcow2
Resolving buildlogs.centos.org (buildlogs.centos.org)... 162.252.80.138, 2001:bc8:242c::10
. . . . .
Length: 2955677696 (2.8G)
Saving to: ‘undercloud.qcow2’
100%[=====================================================>] 2,955,677,696 10.4MB/s in 5m 24s
2016-12-25 11:13:06 (8.70 MB/s) - ‘undercloud.qcow2’ saved [2955677696/2955677696]
[stack@instack ~]$ wget http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/overcloud-full.tar
--2016-12-25 11:13:26-- http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/overcloud-full.tar
Resolving buildlogs.centos.org (buildlogs.centos.org)... 162.252.80.138, 2001:bc8:242c::10
. . . .
Length: 1303572480 (1.2G) [application/x-tar]
Saving to: ‘overcloud-full.tar’</pre>
<pre> 100%[=====================================================>] 1,303,572,480 9.86MB/s in 3m 31s
2016-12-25 11:16:57 (5.90 MB/s) - ‘overcloud-full.tar’ saved [1303572480/1303572480]
[stack@instack ~]$ wget http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/ironic-python-agent.tar
--2016-12-25 11:17:16-- http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/newton/delorean/ironic-python-agent.tar
Resolving buildlogs.centos.org (buildlogs.centos.org)... 162.252.80.138, 2001:bc8:242c::10
. . . .
Length: 362536960 (346M) [application/x-tar]
Saving to: ‘ironic-python-agent.tar’
100%[=======================================================>] 362,536,960 10.2MB/s in 39s
2016-12-25 11:17:55 (8.97 MB/s) - ‘ironic-python-agent.tar’ saved [362536960/362536960]
[stack@instack ~]$ tar -xvf overcloud-full.tar
overcloud-full.d/
overcloud-full.d/dib-manifests/
overcloud-full.d/dib-manifests/dib-manifests-pip/
overcloud-full.d/dib-manifests/dib-element-manifest
overcloud-full.d/dib-manifests/dib_arguments
overcloud-full.d/dib-manifests/dib_environment
overcloud-full.initrd
overcloud-full.qcow2
overcloud-full.qcow2.log
overcloud-full.vmlinuz
[stack@instack ~]$ tar -xvf ironic-python-agent.tar
ironic-python-agent.d/
ironic-python-agent.d/dib-manifests/
ironic-python-agent.d/dib-manifests/dib_arguments
ironic-python-agent.d/dib-manifests/dib-element-manifest
ironic-python-agent.d/dib-manifests/dib_environment
ironic-python-agent.initramfs
ironic-python-agent.kernel
ironic-python-agent.qcow2.log
ironic-python-agent.vmlinuz
</pre>
<br />
################################################<br />
Proceed with Overcloud Deployment<br />
################################################<br />
<br />
[stack@instack ~]$ . stackrc<br />
[stack@instack ~]$ openstack overcloud image upload<br />
Image "overcloud-full-vmlinuz" was uploaded.<br />
<pre>+--------------------------------------+------------------------+-------------+---------+--------+
| ID | Name | Disk Format | Size | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| 58c795b9-895c-453c-b0b4-7ab3ea49de8a | overcloud-full-vmlinuz | aki | 5393328 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+-----------------------+-------------+----------+--------+
| ID | Name | Disk Format | Size | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 4d5dfa69-1efb-4e1f-accb-27e6c5552858 | overcloud-full-initrd | ari | 46801427 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
| ID | Name | Disk Format | Size | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 51ae20aa-7af5-43ef-927c-6e249d8010b7 | overcloud-full | qcow2 | 1250039808 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
| ID | Name | Disk Format | Size | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| 2af93f6e-8755-44f2-a085-5130ee93b092 | bm-deploy-kernel | aki | 5393328 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
| ID | Name | Disk Format | Size | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 4c222de9-0ac8-41af-ba13-830166c6f7ac | bm-deploy-ramdisk | ari | 355866050 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+</pre>
<br />
<pre>[stack@instack ~]$ openstack baremetal import instackenv.json
Started Mistral Workflow. Execution ID: 13f14bdf-d85d-43c0-94d3-fc65363592d9
Successfully registered node UUID 52837965-8e3f-4c45-9808-5f3d0c111b39
Successfully registered node UUID 44c290c0-98fe-43fc-a98d-c63e5dcb9afd
Successfully registered node UUID e2d762ff-3157-4e1a-a63c-41875164b71f
Successfully registered node UUID 59e69a8c-5eb4-4ce0-852c-8bec544dd42b
Successfully registered node UUID d2198d5c-4cd2-4d68-814f-f93054b868ab
Started Mistral Workflow. Execution ID: f5ecb9f0-b83e-416d-bae5-799b339e9677
Successfully set all nodes to available.
[stack@instack ~]$ openstack baremetal configure boot
[stack@instack ~]$ openstack baremetal introspection bulk start
Setting nodes for introspection to manageable...
Starting introspection of manageable nodes
Started Mistral Workflow. Execution ID: 22bf03ed-665e-47df-96ca-689583ee7b59
Waiting for introspection to finish...
Introspection for UUID d2198d5c-4cd2-4d68-814f-f93054b868ab finished successfully.
Introspection for UUID 44c290c0-98fe-43fc-a98d-c63e5dcb9afd finished successfully.
Introspection for UUID 59e69a8c-5eb4-4ce0-852c-8bec544dd42b finished successfully.
Introspection for UUID 52837965-8e3f-4c45-9808-5f3d0c111b39 finished successfully.
Introspection for UUID e2d762ff-3157-4e1a-a63c-41875164b71f finished successfully.
Introspection completed.
Setting manageable nodes to available...
Started Mistral Workflow. Execution ID: ae6be6e9-aff6-4d6e-a060-128189c586d3
</pre>
<br />
<pre>******************************
Set up External Network
******************************
[stack@instack ~]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10
DEVICE=vlan10
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0
OVS_BRIDGE=br-ctlplane
OVS_OPTIONS="tag=10"
[stack@instack ~]$ sudo ifup vlan10
</pre>
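The ifcfg file above makes the VLAN 10 internal port persistent across reboots; the same port can also be created at runtime with ovs-vsctl and ip. The dry-run sketch below just prints the equivalent commands (print_vlan_port_cmds is a hypothetical helper, not a standard tool):

```shell
# Print the runtime commands equivalent to ifcfg-vlan10: an OVS internal
# port tagged with the given VLAN on the bridge, carrying the given address.
# print_vlan_port_cmds is an illustrative helper; pipe to "sudo sh" to apply.
print_vlan_port_cmds() {
    bridge=$1; port=$2; tag=$3; cidr=$4
    echo "ovs-vsctl add-port $bridge $port tag=$tag -- set interface $port type=internal"
    echo "ip addr add $cidr dev $port"
    echo "ip link set $port up"
}
```

For this setup: <tt>print_vlan_port_cmds br-ctlplane vlan10 10 10.0.0.1/24</tt>, matching the OVS_BRIDGE, OVS_OPTIONS="tag=10", and IPADDR/NETMASK values in the ifcfg file.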
<br />
[stack@instack ~]$ sudo ovs-vsctl show<br />
42783cd9-9460-4aab-8fad-748e7015d80b<br />
Manager "ptcp:6640:127.0.0.1"<br />
is_connected: true<br />
Bridge br-int<br />
Controller "tcp:127.0.0.1:6633"<br />
is_connected: true<br />
fail_mode: secure<br />
Port "tapac626204-7d"<br />
tag: 1<br />
Interface "tapac626204-7d"<br />
type: internal<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Bridge br-ctlplane<br />
Controller "tcp:127.0.0.1:6633"<br />
is_connected: true<br />
fail_mode: secure<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
Port "eth1"<br />
Interface "eth1"<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Port "vlan10"<br />
tag: 10<br />
Interface "vlan10"<br />
type: internal<br />
ovs_version: "2.5.0"<br />
<br />
<pre>[stack@instack ~]$ vi $HOME/network_env.yaml
{
    "parameter_defaults": {
        "ControlPlaneDefaultRoute": "192.168.24.1",
        "ControlPlaneSubnetCidr": "24",
        "DnsServers": [
            "83.221.202.254"   <== ISP DNS Server IP
        ],
        "EC2MetadataIp": "192.168.24.1",
        "ExternalAllocationPools": [
            {
                "end": "10.0.0.250",
                "start": "10.0.0.4"
            }
        ],
        "ExternalNetCidr": "10.0.0.1/24",
        "NeutronExternalNetworkBridge": ""
    }
}</pre>
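As a quick cross-check of the address plan above: the allocation pool must sit inside ExternalNetCidr, and the vlan10 address (10.0.0.1) must stay outside the pool. A minimal sketch of that check with Python's standard ipaddress module, values copied from the file:

```python
# Sanity-check the network_env.yaml address plan (stdlib only).
import ipaddress

ext_cidr = ipaddress.ip_network("10.0.0.1/24", strict=False)  # ExternalNetCidr
pool_start = ipaddress.ip_address("10.0.0.4")                 # allocation pool start
pool_end = ipaddress.ip_address("10.0.0.250")                 # allocation pool end
vlan10_ip = ipaddress.ip_address("10.0.0.1")                  # undercloud's vlan10 address

assert pool_start in ext_cidr and pool_end in ext_cidr  # pool inside the CIDR
assert pool_start < pool_end
assert vlan10_ip < pool_start                           # vlan10 IP kept out of the pool
print("pool", pool_start, "-", pool_end, "fits inside", ext_cidr)
```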
<br />
<pre>********************************************************************************************
Next step :-
$ sudo vi /usr/share/openstack-tripleo-heat-templates/puppet/services/ceph-mon.yaml
Update line :-
<span style="color: #b45f06;">ceph::profile::params::osd_pool_default_size: 1 </span>
instead of the default value "3". This step is acceptable only in a virtual environment.
With <tt class="docutils literal"><span class="pre">osd_</span><span class="pre">pool_</span><span class="pre">default</span>_<span class="pre">size</span></tt> set to <tt class="docutils literal"><span class="pre">1</span></tt> you will keep only
one copy of each object. As a general rule, you should run your cluster
with more than one OSD and a pool size greater than one object replica, so
with 48GB RAM on VIRTHOST the optimal setting is <span style="color: #b45f06;">osd_pool_default_size = 3 (at least 2)</span></pre>
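The capacity cost of the replica count is easy to estimate: with osd_pool_default_size = N every object is stored N times, so usable capacity is roughly the raw capacity divided by N. A back-of-envelope sketch (the ~47 GB figure is the single virtual OSD reported by ceph osd df tree later in this post):

```python
# Rough usable-capacity estimate for a given Ceph replica count.
def usable_gb(raw_gb, pool_size):
    """Approximate usable capacity: raw capacity divided by replica count."""
    return raw_gb / pool_size

raw = 47  # ~47084 MB on the single virtual OSD
print(usable_gb(raw, 1))            # size 1: full capacity, no redundancy
print(round(usable_gb(raw, 3), 1))  # size 3: roughly a third usable
```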
<pre>********************************************************************************************
outputs:
role_data:
description: Role data for the Ceph Monitor service.
value:
service_name: ceph_mon
monitoring_subscription: {get_param: MonitoringSubscriptionCephMon}
config_settings:
map_merge:
- get_attr: [CephBase, role_data, config_settings]
- ceph::profile::params::ms_bind_ipv6: {get_param: CephIPv6}
ceph::profile::params::mon_key: {get_param: CephMonKey}
ceph::profile::params::osd_pool_default_pg_num: 32
ceph::profile::params::osd_pool_default_pgp_num: 32
ceph::profile::params::osd_pool_default_size: <span style="color: #b45f06;">1 <== instead of "3" </span>
# repeat returns items in a list, so we need to map_merge twice
tripleo::profile::base::ceph::mon::ceph_pools:
map_merge:
- map_merge:
repeat:
for_each:
<%pool%>:
- {get_param: CinderRbdPoolName}
- {get_param: CinderBackupRbdPoolName}
- {get_param: NovaRbdPoolName}
- {get_param: GlanceRbdPoolName}
- {get_param: GnocchiRbdPoolName}
</pre>
[stack@instack ~]$ vi overcloud-deploy.sh<br />
<pre>#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--control-scale 3 --compute-scale 1 --ceph-storage-scale 1 \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /usr/share/openstack-tripleo-heat-templates \
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml</pre>
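Note that the order of the -e files matters: when several environment files set the same parameter, the later file wins. The net-single-nic-with-vlans environment and network_env.yaml above rely on exactly this. A hedged sketch of that last-wins merge for parameter_defaults (file contents are illustrative, not the full Heat environment semantics):

```python
# Last-wins merge of parameter_defaults across -e environment files.
def merge_environments(envs):
    merged = {}
    for env in envs:  # later files override earlier ones
        merged.update(env.get("parameter_defaults", {}))
    return merged

base = {"parameter_defaults": {"NeutronExternalNetworkBridge": "br-ex"}}
custom = {"parameter_defaults": {"NeutronExternalNetworkBridge": ""}}
print(merge_environments([base, custom]))  # {'NeutronExternalNetworkBridge': ''}
```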
<br />
[stack@instack ~]$ chmod a+x overcloud-deploy.sh<br />
<br />
[stack@instack ~]$ sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat<br />
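The rule above source-NATs only traffic that actually leaves the external network: source in 10.0.0.0/24, destination anywhere else (the negated "! -d" test). A minimal sketch of that match with the stdlib ipaddress module (addresses are illustrative):

```python
# Which packets does "-s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE" catch?
import ipaddress

EXT = ipaddress.ip_network("10.0.0.0/24")

def needs_masquerade(src, dst):
    """True when a packet would match the BOOTSTACK_MASQ rule above."""
    return ipaddress.ip_address(src) in EXT and ipaddress.ip_address(dst) not in EXT

print(needs_masquerade("10.0.0.15", "8.8.8.8"))    # True: leaves the subnet, gets NAT
print(needs_masquerade("10.0.0.15", "10.0.0.20"))  # False: intra-subnet, no NAT
```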
[stack@instack ~]$ sudo touch -f /usr/share/openstack-tripleo-heat-templates/puppet/post.yaml<br />
<br />
<pre>[stack@instack ~]$ ./overcloud-deploy.sh
+ source /home/stack/stackrc
++ NOVA_VERSION=1.1
++ export NOVA_VERSION
+++ sudo hiera admin_password
++ OS_PASSWORD=22ecb766eb723d5bc287c7e47cfb9e7b2d427304
++ export OS_PASSWORD
++ OS_AUTH_URL=http://192.168.24.1:5000/v2.0
++ export OS_AUTH_URL
++ OS_USERNAME=admin
++ OS_TENANT_NAME=admin
++ COMPUTE_API_VERSION=1.1
++ OS_BAREMETAL_API_VERSION=1.15
++ OS_NO_CACHE=True
++ OS_CLOUDNAME=undercloud
++ OS_IMAGE_API_VERSION=1
++ export OS_USERNAME
++ export OS_TENANT_NAME
++ export COMPUTE_API_VERSION
++ export OS_BAREMETAL
++ export OS_NO_CACHE
++ export OS_CLOUDNAME
++ export OS_IMAGE_API_VERSION
+ openstack overcloud deploy --control-scale 3 --compute-scale 1 --ceph-storage-scale 1 --libvirt-type qemu --ntp-server pool.ntp.org --templates /usr/share/openstack-tripleo-heat-templates -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network_env.yaml
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 83b9918a-ce91-4b7e-bc76-87c61c92028b
Plan updated
Deploying templates in the directory /tmp/tripleoclient-v2qWQD/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 4cb66db2-2b8b-40e9-ae48-336b999b0ef6
2016-12-25 11:30:57Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
. . . . . .
2016-12-25 11:30:57Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0]: CREATE_COMPLETE state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_COMPLETE state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:05Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 11:59:06Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 11:59:06Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:06Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.2]: CREATE_IN_PROGRESS state changed
2016-12-25 12:00:01Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment 0c57482b-76e8-44d7-9197-3cf5be5d5df0 succeeded
2016-12-25 12:00:02Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4.0]: CREATE_COMPLETE state changed
2016-12-25 12:00:02Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:00:03Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 12:03:01Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment bd58bab5-cd77-4dc4-a012-c94939c2f409 succeeded
2016-12-25 12:03:02Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.0]: CREATE_COMPLETE state changed
2016-12-25 12:03:02Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:03:02Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 12:04:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.1]: SIGNAL_IN_PROGRESS Signal: deployment cfa41ca2-fac0-40d2-b172-639b12aba790 succeeded
2016-12-25 12:04:05Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.1]: CREATE_COMPLETE state changed
2016-12-25 12:04:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.2]: SIGNAL_IN_PROGRESS Signal: deployment 8f54670a-d54f-44d6-a34e-83ad16a734bb succeeded
2016-12-25 12:04:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.2]: CREATE_COMPLETE state changed
2016-12-25 12:05:55Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment eee42321-d94d-4aea-a3f0-4e49886b1bc4 succeeded
2016-12-25 12:05:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: CREATE_COMPLETE state changed
2016-12-25 12:05:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5.0]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5.0]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:58Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.0]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:59Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:05:59Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.2]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:59Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:06:40Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5.0]: SIGNAL_IN_PROGRESS Signal: deployment cef8295c-5026-41dc-8f89-a8ec29d5a195 succeeded
2016-12-25 12:06:40Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5.0]: CREATE_COMPLETE state changed
2016-12-25 12:06:40Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:06:41Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5.0]: SIGNAL_IN_PROGRESS Signal: deployment 22d20811-24fb-4b53-a356-e580b688fd2c succeeded
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5.0]: CREATE_COMPLETE state changed
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:10:58Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1]: SIGNAL_IN_PROGRESS Signal: deployment a466258f-0bb9-4363-b7c5-a81a4b9fd8e4 succeeded
2016-12-25 12:11:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1]: CREATE_COMPLETE state changed
2016-12-25 12:11:13Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.2]: SIGNAL_IN_PROGRESS Signal: deployment 63a0fdb5-7459-4e3d-819b-0dd6c680b90e succeeded
2016-12-25 12:11:14Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.2]: CREATE_COMPLETE state changed
2016-12-25 12:16:37Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.0]: SIGNAL_IN_PROGRESS Signal: deployment 5a45a608-4458-48bf-9ca0-cb7e002307a2 succeeded
2016-12-25 12:16:38Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.0]: CREATE_COMPLETE state changed
2016-12-25 12:16:38Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:41Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:41Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 12:17:55Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed
2016-12-25 12:17:55Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS state changed
2016-12-25 12:18:58Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE state changed
2016-12-25 12:18:58Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:18:59Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE state changed
2016-12-25 12:18:59Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:18:59Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
2016-12-25 12:18:59Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Started Mistral Workflow. Execution ID: f189f0d4-4287-4c45-8f3d-aca5a65e0843
Overcloud Endpoint: http://10.0.0.10:5000/v2.0
Overcloud Deployed
[stack@instack ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+
| e4124368-7e3b-42de-abc9-348c232c7560 | overcloud-cephstorage-0 | ACTIVE | - | Running | ctlplane=192.168.24.9 |
| 23cc18b7-acef-4c58-b8ef-835b3766f0ba | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.168.24.17 |
| 4dad4b56-0e57-4685-aea6-87fb2a5745c7 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.168.24.13 |
| 6c013bc1-9e0b-4809-a200-6a47e84c1fbf | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.168.24.7 |
| 55120150-37ac-4160-9ba6-17324af2db8c | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.168.24.16 |
+--------------------------------------+-------------------------+--------+------------+-------------+------------------------+</pre>
<br />
<a href="http://bderzhavets.blogspot.ru/2016/12/instack-virt-setup-routable-ctlplane.html" target="_blank">Complete overcloud deployment output</a><br />
<br />
<pre>[stack@instack ~]$ cat overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.10:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export no_proxy=,10.0.0.10,192.168.24.10
export OS_PASSWORD=KYwmKd6QnscTAZTkayuHM8jHV
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin
[stack@instack ~]$ ssh heat-admin@192.168.24.17
The authenticity of host '192.168.24.17 (192.168.24.17)' can't be established.
ECDSA key fingerprint is 67:53:b3:30:85:f5:b0:d6:df:bf:6a:fc:03:30:f1:53.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.24.17' (ECDSA) to the list of known hosts.
[heat-admin@overcloud-controller-0 ~]$ sudo su -
[root@overcloud-controller-0 ~]# ping lxer.com
PING lxer.com (108.166.170.174) 56(84) bytes of data.
64 bytes from dal4.wmkt.net (108.166.170.174): icmp_seq=1 ttl=48 time=179 ms
64 bytes from dal4.wmkt.net (108.166.170.174): icmp_seq=2 ttl=48 time=174 ms
64 bytes from dal4.wmkt.net (108.166.170.174): icmp_seq=3 ttl=48 time=176 ms
64 bytes from dal4.wmkt.net (108.166.170.174): icmp_seq=4 ttl=48 time=174 ms
64 bytes from dal4.wmkt.net (108.166.170.174): icmp_seq=5 ttl=48 time=177 ms
^C
--- lxer.com ping statistics ---
6 packets transmitted, 5 received, 16% packet loss, time 5005ms
rtt min/avg/max/mdev = 174.601/176.462/179.337/1.809 ms
[root@overcloud-controller-0 ~]# vi overcloudrc
[root@overcloud-controller-0 ~]# . overcloudrc
[root@overcloud-controller-0 ~]# ceph status
cluster bd2d8da8-ca90-11e6-b7a3-525400fb0aa9
health HEALTH_OK
monmap e1: 3 mons at {overcloud-controller-0=172.16.1.13:6789/0,overcloud-controller-1=172.16.1.12:6789/0,overcloud-controller-2=172.16.1.11:6789/0}
election epoch 6, quorum 0,1,2 overcloud-controller-2,overcloud-controller-1,overcloud-controller-0
osdmap e13: 1 osds: 1 up, 1 in
flags sortbitwise
pgmap v82: 224 pgs, 6 pools, 0 bytes data, 0 objects
8481 MB used, 38603 MB / 47084 MB avail
224 active+clean
[root@overcloud-controller-0 ~]# wget https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
--2016-12-25 12:25:28-- https://download.fedoraproject.org/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
Resolving download.fedoraproject.org (download.fedoraproject.org)... 67.219.144.68, 152.19.134.198, 85.236.55.6, ...
Connecting to download.fedoraproject.org (download.fedoraproject.org)|67.219.144.68|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: http://fedora-mirror01.rbc.ru/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2 [following]
--2016-12-25 12:25:29-- http://fedora-mirror01.rbc.ru/pub/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2
Resolving fedora-mirror01.rbc.ru (fedora-mirror01.rbc.ru)... 80.68.250.217
Connecting to fedora-mirror01.rbc.ru (fedora-mirror01.rbc.ru)|80.68.250.217|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 204590080 (195M) [application/octet-stream]
Saving to: ‘Fedora-Cloud-Base-24-1.2.x86_64.qcow2’
100%[=======================================================>] 204,590,080 682KB/s in 5m 21s
2016-12-25 12:30:50 (623 KB/s) - ‘Fedora-Cloud-Base-24-1.2.x86_64.qcow2’ saved [204590080/204590080]
[root@overcloud-controller-0 ~]# glance image-create --name "VF24Cloud" --disk-format qcow2 \
 --container-format bare --progress < Fedora-Cloud-Base-24-1.2.x86_64.qcow2
 [=============================>] 100%
+------------------+----------------------------------------------------------------------------------+
| Property | Value |
+------------------+----------------------------------------------------------------------------------+
| checksum | 8de08e3fe24ee788e50a6a508235aa64 |
| container_format | bare |
| created_at | 2016-12-25T12:31:27Z |
| direct_url | rbd://bd2d8da8-ca90-11e6-b7a3-525400fb0aa9/images/303526e2-60bb- |
| | 4dcf-8781-b722bf718392/snap |
| disk_format | qcow2 |
| id | 303526e2-60bb-4dcf-8781-b722bf718392 |
| locations | [{"url": "rbd://bd2d8da8-ca90-11e6-b7a3-525400fb0aa9/images/303526e2-60bb- |
| | 4dcf-8781-b722bf718392/snap", "metadata": {}}] |
| min_disk | 0 |
| min_ram | 0 |
| name | VF24Cloud |
| owner | 96c20310965d436a83e6f3ea648bab8c |
| protected | False |
| size | 204590080 |
| status | active |
| tags | [] |
| updated_at | 2016-12-25T12:31:30Z |
| virtual_size | None |
| visibility | private |
+------------------+----------------------------------------------------------------------------------+
[root@overcloud-controller-0 ~]# ceph status
cluster bd2d8da8-ca90-11e6-b7a3-525400fb0aa9
health HEALTH_OK
monmap e1: 3 mons at {overcloud-controller-0=172.16.1.13:6789/0,overcloud-controller-1=172.16.1.12:6789/0,overcloud-controller-2=172.16.1.11:6789/0}
election epoch 6, quorum 0,1,2 overcloud-controller-2,overcloud-controller-1,overcloud-controller-0
osdmap e16: 1 osds: 1 up, 1 in
flags sortbitwise
pgmap v104: 224 pgs, 6 pools, 195 MB data, 30 objects
8868 MB used, 38216 MB / 47084 MB avail
224 active+clean
client io 65147 B/s rd, 26940 kB/s wr, 75 op/s rd, 19 op/s wr
[root@overcloud-controller-0 ~]# glance image-list
+--------------------------------------+-----------+
| ID | Name |
+--------------------------------------+-----------+
| 303526e2-60bb-4dcf-8781-b722bf718392 | VF24Cloud |
+--------------------------------------+-----------+
[root@overcloud-controller-0 ~]# cinder create --image-id 303526e2-60bb-4dcf-8781-b722bf718392 \
--display_name=vf24volume-ceph 7
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-12-25T12:32:51.000000 |
| description | None |
| encrypted | False |
| id | 06bfaf39-6795-4a4e-b1a4-ff2ad49a2333 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | vf24volume-ceph |
| os-vol-host-attr:host | hostgroup@tripleo_ceph#tripleo_ceph |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| replication_status | disabled |
| size | 7 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | 2016-12-25T12:32:52.000000 |
| user_id | 1ece4a02a3f44610938be469e164d7c7 |
| volume_type | None |
+--------------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# cinder list
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+
| 06bfaf39-6795-4a4e-b1a4-ff2ad49a2333 | available | vf24volume-ceph | 7 | - | true | |
+--------------------------------------+-----------+-----------------+------+-------------+----------+-------------+</pre>
<pre>[root@overcloud-controller-0 ~]# ceph osd df tree
ID WEIGHT REWEIGHT SIZE USE AVAIL %USE VAR PGS TYPE NAME
-1 0.04489 - 47084M 13262M 33822M 28.17 1.00 0 root default
-2 0.04489 - 47084M 13262M 33822M 28.17 1.00 0 host overcloud-cephstorage-0
0 0.04489 1.00000 47084M 13262M 33822M 28.17 1.00 224 osd.0
TOTAL 47084M 13262M 33822M 28.17
MIN/MAX VAR: 1.00/1.00 STDDEV: 0</pre>
<pre>[root@overcloud-controller-0 ~]# rbd -p images ls
303526e2-60bb-4dcf-8781-b722bf718392</pre>
<pre>[root@overcloud-controller-0 ~]# rbd -p volumes ls
volume-06bfaf39-6795-4a4e-b1a4-ff2ad49a2333
</pre>
<pre>######################################################################################
Set up the Neutron router, external and tenant networks on the PCS controller cluster
via CLI, after sourcing overcloudrc into the shell (just testing the overcloud setup)
######################################################################################</pre>
<pre>[root@overcloud-controller-0 ~]# neutron net-create ext-net --router:external \
--provider:physical_network datacentre --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-12-25T12:36:31Z |
| description | |
| id | b26e01d7-8bd2-4666-bed3-c55c355fc82b |
| ipv4_address_scope | |
| ipv6_address_scope | |
| is_default | False |
| mtu | 1496 |
| name | ext-net |
| port_security_enabled | True |
| project_id | 96c20310965d436a83e6f3ea648bab8c |
| provider:network_type | flat |
| provider:physical_network | datacentre |
| provider:segmentation_id | |
| qos_policy_id | |
| revision_number | 3 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| updated_at | 2016-12-25T12:36:31Z |
+---------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# neutron subnet-create ext-net --name ext-subnet \
--allocation-pool start=192.168.24.100,end=192.168.24.120 --disable-dhcp --gateway 192.168.24.1 192.168.24.0/24
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "192.168.24.100", "end": "192.168.24.120"} |
| cidr | 192.168.24.0/24 |
| created_at | 2016-12-25T12:36:59Z |
| description | |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.24.1 |
| host_routes | |
| id | 87fa1511-4ded-4705-90ba-7e363b0e9905 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | ext-subnet |
| network_id | b26e01d7-8bd2-4666-bed3-c55c355fc82b |
| project_id | 96c20310965d436a83e6f3ea648bab8c |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| updated_at | 2016-12-25T12:36:59Z |
+-------------------+------------------------------------------------------+
[root@overcloud-controller-0 ~]# neutron router-create router1
Created a new router:
+-------------------------+--------------------------------------+
| Field | Value |
+-------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-12-25T12:37:34Z |
| description | |
| distributed | False |
| external_gateway_info | |
| flavor_id | |
| ha | True |
| id | 791cbaa1-a777-4b36-827e-ec8636e66c1b |
| name | router1 |
| project_id | 96c20310965d436a83e6f3ea648bab8c |
| revision_number | 2 |
| routes | |
| status | ACTIVE |
| tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| updated_at | 2016-12-25T12:37:34Z |
+-------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# neutron router-gateway-set router1 ext-net
Set gateway for router router1
[root@overcloud-controller-0 ~]# neutron net-create int
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-12-25T12:38:22Z |
| description | |
| id | b7a03b53-46e9-4f66-904f-3afb764c1d49 |
| ipv4_address_scope | |
| ipv6_address_scope | |
| mtu | 1446 |
| name | int |
| port_security_enabled | True |
| project_id | 96c20310965d436a83e6f3ea648bab8c |
| provider:network_type | vxlan |
| provider:physical_network | |
| provider:segmentation_id | 79 |
| qos_policy_id | |
| revision_number | 3 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| updated_at | 2016-12-25T12:38:22Z |
+---------------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# neutron subnet-create int 30.0.0.0/24 \
 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+-------------------+--------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr | 30.0.0.0/24 |
| created_at | 2016-12-25T12:38:43Z |
| description | |
| dns_nameservers | 83.221.202.254 |
| enable_dhcp | True |
| gateway_ip | 30.0.0.1 |
| host_routes | |
| id | 4d059161-05de-4804-991b-d7a03b8cae97 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | |
| network_id | b7a03b53-46e9-4f66-904f-3afb764c1d49 |
| project_id | 96c20310965d436a83e6f3ea648bab8c |
| revision_number | 2 |
| service_types | |
| subnetpool_id | |
| tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| updated_at | 2016-12-25T12:38:43Z |
+-------------------+--------------------------------------------+
[root@overcloud-controller-0 ~]# neutron router-interface-add router1 4d059161-05de-4804-991b-d7a03b8cae97
Added interface fb1773e3-eed9-4deb-bcbf-57963cf77f98 to router router1.
[root@overcloud-controller-0 ~]# nova keypair-add oskey122516 >oskey122516.pem
[root@overcloud-controller-0 ~]# nova secgroup-list
WARNING: Command secgroup-list is deprecated and will be removed after Nova 15.0.0 is released. Use python-neutronclient or python-openstackclient instead.
+--------------------------------------+---------+------------------------+
| Id | Name | Description |
+--------------------------------------+---------+------------------------+
| 54e7b1a2-125d-4cd9-9335-b7d536745945 | default | Default security group |
+--------------------------------------+---------+------------------------+
[root@overcloud-controller-0 ~]# neutron security-group-rule-create --protocol tcp \
    --port-range-min 22 --port-range-max 22 --direction ingress --remote-ip-prefix 0.0.0.0/0 \
    54e7b1a2-125d-4cd9-9335-b7d536745945
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2016-12-25T12:41:49Z |
| description | |
| direction | ingress |
| ethertype | IPv4 |
| id | 1be8efd2-2f3d-460d-8afd-4df597601620 |
| port_range_max | 22 |
| port_range_min | 22 |
| project_id | 96c20310965d436a83e6f3ea648bab8c |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 1 |
| security_group_id | 54e7b1a2-125d-4cd9-9335-b7d536745945 |
| tenant_id | 96c20310965d436a83e6f3ea648bab8c |
| updated_at | 2016-12-25T12:41:49Z |
+-------------------+--------------------------------------+
[root@overcloud-controller-0 ~]# nova flavor-create "m1.small" 2 1000 20 1
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+
| 2 | m1.small | 1000 | 20 | 0 | | 1 | 1.0 | True |
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+</pre>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrTlBHZFeNzJpPtayGY63vLuWi_EAIyT5Ya0CABxc1XjCqnA_h8iyFecPx6bmqGLeACCkge4RSCt8uKFfxEraUUEBHwtKU8Hz6j4bJQ1ldM1OmawLnqRSQrpNe5i4tg9UjxpsZQg/s1600/ScreenshotRoutable+from+2016-12-25+17-56-58.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjrTlBHZFeNzJpPtayGY63vLuWi_EAIyT5Ya0CABxc1XjCqnA_h8iyFecPx6bmqGLeACCkge4RSCt8uKFfxEraUUEBHwtKU8Hz6j4bJQ1ldM1OmawLnqRSQrpNe5i4tg9UjxpsZQg/s640/ScreenshotRoutable+from+2016-12-25+17-56-58.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD0E83421wtGilyelRsYvZl95s31of_g92PZNYz_x07kN9rbpn2tRBGaartYcWO1_GG8Kwyr4Nwv8aiomITtgJbIlCfplfrRBudhGHyqywU7MdXdqbLubdCsW8FX5Iz4SPEcdoyw/s1600/ScreenshotRoutable+from+2016-12-25+17-58-13.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD0E83421wtGilyelRsYvZl95s31of_g92PZNYz_x07kN9rbpn2tRBGaartYcWO1_GG8Kwyr4Nwv8aiomITtgJbIlCfplfrRBudhGHyqywU7MdXdqbLubdCsW8FX5Iz4SPEcdoyw/s640/ScreenshotRoutable+from+2016-12-25+17-58-13.png" width="640" /> </a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkIpu4BqZHvdOArijTLg5Izp9E6WhMZjIXPrVS_t7sBcAnRMHRkHLq3wu27bkatvmhPn73zeCSzMi2tLznexMLTQkIUNWCkBpdJTWclhBNMwwCShoeybBRyi-LRpHyiaZaA1WB4g/s1600/ScreenshotRoutable+from+2016-12-25+18-00-49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgkIpu4BqZHvdOArijTLg5Izp9E6WhMZjIXPrVS_t7sBcAnRMHRkHLq3wu27bkatvmhPn73zeCSzMi2tLznexMLTQkIUNWCkBpdJTWclhBNMwwCShoeybBRyi-LRpHyiaZaA1WB4g/s640/ScreenshotRoutable+from+2016-12-25+18-00-49.png" width="640" /></a></div>
Running Fedora 24 Cloud VM on Compute Node &amp;&amp; HTOP Memory map
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjA3MxS7ztR6YmbVV6h-Ql99_2-K863bEmX9l5GwpIavyMTElCK8fDStdxEa1awhOJELb8av85AsOy9wkd72tP6K7vwniZSUNy2IG919sp4hCfwSXfN9ZZk8o3T50Yf4hNaYrsMVQ/s1600/ScreenshotROUTABLE+from+2016-12-25+18-56-46.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjA3MxS7ztR6YmbVV6h-Ql99_2-K863bEmX9l5GwpIavyMTElCK8fDStdxEa1awhOJELb8av85AsOy9wkd72tP6K7vwniZSUNy2IG919sp4hCfwSXfN9ZZk8o3T50Yf4hNaYrsMVQ/s640/ScreenshotROUTABLE+from+2016-12-25+18-56-46.png" width="640" /></a></div>
</div>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge4h6HdpJmOmCAa-XkRQiS44djVx8FRZiG1Dv_ABkw0EY0hkReoYC8EKUEQjrUHKi-KzZBn2vd2KWi3dVnYgmFhx8FScEE5pHjnkXYYFG6IYCBHi4OsBbB4dtDiL1Qf5gKgcRmwg/s1600/ScreenshotRoutable+from+2016-12-25+18-10-37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEge4h6HdpJmOmCAa-XkRQiS44djVx8FRZiG1Dv_ABkw0EY0hkReoYC8EKUEQjrUHKi-KzZBn2vd2KWi3dVnYgmFhx8FScEE5pHjnkXYYFG6IYCBHi4OsBbB4dtDiL1Qf5gKgcRmwg/s640/ScreenshotRoutable+from+2016-12-25+18-10-37.png" width="640" /> </a><br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH7Czc9SxhdGxmZCSAOUAw-EgTgW2wrRtquv0O_Ex_pbvSWsnqAmgqUEVPn-pQKZN77Howjjhe5mdtx9nwriT3KSdRc_v6VsjjWjy0CMvuNO8LD_Nzd5DIXyrtrcLGN0OrlYMhsw/s1600/ScreenshotRoutable+from+2016-12-25+18-11-26.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiH7Czc9SxhdGxmZCSAOUAw-EgTgW2wrRtquv0O_Ex_pbvSWsnqAmgqUEVPn-pQKZN77Howjjhe5mdtx9nwriT3KSdRc_v6VsjjWjy0CMvuNO8LD_Nzd5DIXyrtrcLGN0OrlYMhsw/s640/ScreenshotRoutable+from+2016-12-25+18-11-26.png" width="640" /></a></div>
<br />
<br />
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-88229110817090320822016-12-25T05:32:00.000-08:002016-12-25T05:32:29.127-08:00Instack-virt-setup ( routable CTLPLANE ) overcloud deployment protocol on CentOS 7.3<div dir="ltr" style="text-align: left;" trbidi="on">
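The deployment event stream below runs to hundreds of lines. A short parser (a sketch, assuming only the timestamped `[resource]: STATE` line format shown in the log) reduces it to the last state reported per resource, which makes a stuck CREATE_IN_PROGRESS or a CREATE_FAILED easy to spot:

```python
# Sketch: condense a Heat event log (lines like
# "2016-12-25 11:30:57Z [overcloud.PcsdPassword]: CREATE_COMPLETE state changed")
# into the final state seen for each resource.
import re

# Captures: timestamp, resource path inside [...], and the STATE token.
EVENT = re.compile(r"^(\S+ \S+) \[([^\]]+)\]: (\w+)")

def final_states(log_text):
    states = {}
    for line in log_text.splitlines():
        m = EVENT.match(line)
        if m:
            _, resource, state = m.groups()
            states[resource] = state   # later events overwrite earlier ones
    return states

sample = """2016-12-25 11:30:57Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.PcsdPassword]: CREATE_COMPLETE state changed"""
print(final_states(sample))   # {'overcloud.PcsdPassword': 'CREATE_COMPLETE'}
```

Feeding the full `heat event-list`-style output through `final_states` and filtering for values other than CREATE_COMPLETE gives a quick health summary of the stack.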
<pre>2016-12-25 11:30:57Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:57Z [overcloud.PcsdPassword]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.RabbitCookie]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.MysqlRootPassword]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.Networks]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.RabbitCookie]: CREATE_COMPLETE state changed
2016-12-25 11:30:57Z [overcloud.MysqlRootPassword]: CREATE_COMPLETE state changed
2016-12-25 11:30:57Z [overcloud.HeatAuthEncryptionKey]: CREATE_COMPLETE state changed
2016-12-25 11:30:57Z [overcloud.PcsdPassword]: CREATE_COMPLETE state changed
2016-12-25 11:30:57Z [overcloud.Networks]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:57Z [overcloud.ServiceNetMap]: CREATE_COMPLETE state changed
2016-12-25 11:30:57Z [overcloud.Networks.TenantNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.HorizonSecret]: CREATE_COMPLETE state changed
2016-12-25 11:30:57Z [overcloud.DefaultPasswords]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:57Z [overcloud.Networks.ManagementNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.TenantNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:58Z [overcloud.Networks.ExternalNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.TenantNetwork.TenantNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.StorageNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.ExternalNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:58Z [overcloud.Networks.ExternalNetwork.ExternalNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.TenantNetwork.TenantNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:30:58Z [overcloud.Networks.NetworkExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.TenantNetwork.TenantSubnet]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.StorageNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:58Z [overcloud.Networks.InternalNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.StorageNetwork.StorageNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.InternalNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:58Z [overcloud.Networks.InternalNetwork.InternalApiNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.ManagementNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:30:58Z [overcloud.Networks.ExternalNetwork.ExternalNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:30:58Z [overcloud.Networks.ExternalNetwork.ExternalSubnet]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.NetworkExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:30:58Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:30:58Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:58Z [overcloud.Networks.StorageNetwork.StorageNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:30:59Z [overcloud.Networks.StorageNetwork.StorageSubnet]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:59Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:30:59Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtSubnet]: CREATE_IN_PROGRESS state changed
2016-12-25 11:30:59Z [overcloud.DefaultPasswords]: CREATE_COMPLETE state changed
2016-12-25 11:30:59Z [overcloud.Networks.InternalNetwork.InternalApiNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:30:59Z [overcloud.Networks.InternalNetwork.InternalApiSubnet]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:00Z [overcloud.Networks.TenantNetwork.TenantSubnet]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.TenantNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:00Z [overcloud.Networks.ExternalNetwork.ExternalSubnet]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.ExternalNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:00Z [overcloud.Networks.StorageNetwork.StorageSubnet]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.StorageNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:00Z [overcloud.Networks.InternalNetwork.InternalApiSubnet]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.StorageMgmtNetwork.StorageMgmtSubnet]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.InternalNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:00Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:00Z [overcloud.Networks.StorageMgmtNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.TenantNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.ExternalNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.InternalNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks.StorageNetwork]: CREATE_COMPLETE state changed
2016-12-25 11:31:00Z [overcloud.Networks]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:01Z [overcloud.Networks]: CREATE_COMPLETE state changed
2016-12-25 11:31:01Z [overcloud.ControlVirtualIP]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:02Z [overcloud.ControlVirtualIP]: CREATE_COMPLETE state changed
2016-12-25 11:31:02Z [overcloud.InternalApiVirtualIP]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:02Z [overcloud.StorageVirtualIP]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:02Z [overcloud.InternalApiVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:02Z [overcloud.PublicVirtualIP]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:02Z [overcloud.StorageVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:02Z [overcloud.InternalApiVirtualIP.InternalApiPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.StorageVirtualIP.StoragePort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.PublicVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:03Z [overcloud.StorageMgmtVirtualIP]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.PublicVirtualIP.ExternalPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.StorageVirtualIP.StoragePort]: CREATE_COMPLETE state changed
2016-12-25 11:31:03Z [overcloud.StorageMgmtVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:03Z [overcloud.StorageVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:03Z [overcloud.RedisVirtualIP]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.StorageMgmtVirtualIP.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.RedisVirtualIP]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:03Z [overcloud.RedisVirtualIP.VipPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:03Z [overcloud.InternalApiVirtualIP.InternalApiPort]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.InternalApiVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:04Z [overcloud.RedisVirtualIP.VipPort]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.RedisVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:04Z [overcloud.PublicVirtualIP.ExternalPort]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.PublicVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:04Z [overcloud.StorageMgmtVirtualIP.StorageMgmtPort]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.StorageMgmtVirtualIP]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:04Z [overcloud.InternalApiVirtualIP]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.PublicVirtualIP]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.StorageMgmtVirtualIP]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.StorageVirtualIP]: CREATE_COMPLETE state changed
2016-12-25 11:31:04Z [overcloud.RedisVirtualIP]: CREATE_COMPLETE state changed
2016-12-25 11:31:05Z [overcloud.VipMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:06Z [overcloud.VipMap]: CREATE_COMPLETE state changed
2016-12-25 11:31:06Z [overcloud.EndpointMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:07Z [overcloud.EndpointMap]: CREATE_COMPLETE state changed
2016-12-25 11:31:07Z [overcloud.ComputeServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:08Z [overcloud.ObjectStorageServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:08Z [overcloud.ComputeServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:08Z [overcloud.ComputeServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:08Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:08Z [overcloud.ObjectStorageServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:08Z [overcloud.ObjectStorageServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:08Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:08Z [overcloud.BlockStorageServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:09Z [overcloud.ComputeServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2016-12-25 11:31:09Z [overcloud.CephStorageServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:09Z [overcloud.BlockStorageServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:09Z [overcloud.BlockStorageServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:09Z [overcloud.ObjectStorageServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2016-12-25 11:31:09Z [overcloud.ControllerServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:09Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:10Z [overcloud.CephStorageServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:10Z [overcloud.CephStorageServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:10Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:10Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:10Z [overcloud.ObjectStorageServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:10Z [overcloud.ObjectStorageServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:11Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:11Z [overcloud.ComputeServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:11Z [overcloud.CephStorageServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2016-12-25 11:31:11Z [overcloud.BlockStorageServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2016-12-25 11:31:11Z [overcloud.ObjectStorageServiceChain.ServiceChain.11]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:11Z [overcloud.ComputeServiceChain.ServiceChain.20]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.ControllerServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:12Z [overcloud.ControllerServiceChain.LoggingConfiguration]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:12Z [overcloud.ObjectStorageServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.ComputeServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.BlockStorageServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.BlockStorageServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.ObjectStorageServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.BlockStorageServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.BlockStorageServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.BlockStorageServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:12Z [overcloud.ComputeServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:13Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:13Z [overcloud.CephStorageServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:13Z [overcloud.ComputeServiceChain.ServiceChain.17]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:13Z [overcloud.ComputeServiceChain.ServiceChain.11]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:13Z [overcloud.CephStorageServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.ControllerServiceChain.LoggingConfiguration]: CREATE_COMPLETE state changed
2016-12-25 11:31:14Z [overcloud.ObjectStorageServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.BlockStorageServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.CephStorageServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.ComputeServiceChain.ServiceChain.13]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.ComputeServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.ObjectStorageServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.CephStorageServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:14Z [overcloud.BlockStorageServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ObjectStorageServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ComputeServiceChain.ServiceChain.19]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.CephStorageServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ComputeServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ComputeServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.BlockStorageServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ObjectStorageServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.CephStorageServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ObjectStorageServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:15Z [overcloud.ComputeServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.BlockStorageServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.CephStorageServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ComputeServiceChain.ServiceChain.18]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ComputeServiceChain.ServiceChain.16]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ObjectStorageServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ComputeServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.CephStorageServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.BlockStorageServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ComputeServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.CephStorageServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.BlockStorageServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ObjectStorageServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ComputeServiceChain.ServiceChain.12]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ComputeServiceChain.ServiceChain.14]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:16Z [overcloud.ObjectStorageServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2016-12-25 11:31:16Z [overcloud.CephStorageServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.11]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.15]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.CephStorageServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.20]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.18]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.16]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.14]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.15]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.11]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.13]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.BlockStorageServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ObjectStorageServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.19]: CREATE_COMPLETE state changed
2016-12-25 11:31:17Z [overcloud.ComputeServiceChain.ServiceChain.17]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.ComputeServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:18Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:18Z [overcloud.ComputeServiceChain.ServiceChain.12]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.BlockStorageServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.BlockStorageServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:18Z [overcloud.BlockStorageServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.ObjectStorageServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.ObjectStorageServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:18Z [overcloud.CephStorageServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2016-12-25 11:31:18Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:18Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:18Z [overcloud.ControllerServiceChain.ServiceChain.70]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:18Z [overcloud.ControllerServiceChain.ServiceChain.65]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:18Z [overcloud.ControllerServiceChain.ServiceChain.42]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:18Z [overcloud.ControllerServiceChain.ServiceChain.31]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:18Z [overcloud.ControllerServiceChain.ServiceChain.69]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:19Z [overcloud.ControllerServiceChain.ServiceChain.25]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:19Z [overcloud.ComputeServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2016-12-25 11:31:19Z [overcloud.ComputeServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2016-12-25 11:31:19Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:19Z [overcloud.ControllerServiceChain.ServiceChain.29]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:19Z [overcloud.ControllerServiceChain.ServiceChain.14]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:19Z [overcloud.ObjectStorageServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:19Z [overcloud.ControllerServiceChain.ServiceChain.50]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:19Z [overcloud.CephStorageServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:19Z [overcloud.CephStorageServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:19Z [overcloud.ControllerServiceChain.ServiceChain.67]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:19Z [overcloud.ComputeServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:19Z [overcloud.ComputeServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:19Z [overcloud.ControllerServiceChain.ServiceChain.18]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.13]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.21]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ComputeServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:20Z [overcloud.CephStorageServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.47]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.38]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.55]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:20Z [overcloud.ControllerServiceChain.ServiceChain.64]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:21Z [overcloud.ControllerServiceChain.ServiceChain.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:21Z [overcloud.ControllerServiceChain.ServiceChain.73]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:21Z [overcloud.ControllerServiceChain.ServiceChain.45]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:21Z [overcloud.ControllerServiceChain.ServiceChain.6]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:21Z [overcloud.ControllerServiceChain.ServiceChain.58]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:22Z [overcloud.ControllerServiceChain.ServiceChain.22]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:22Z [overcloud.ControllerServiceChain.ServiceChain.4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:22Z [overcloud.ControllerServiceChain.ServiceChain.9]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:22Z [overcloud.ControllerServiceChain.ServiceChain.49]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:23Z [overcloud.ControllerServiceChain.ServiceChain.43]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:24Z [overcloud.ControllerServiceChain.ServiceChain.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:25Z [overcloud.ControllerServiceChain.ServiceChain.46]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:25Z [overcloud.ControllerServiceChain.ServiceChain.24]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:25Z [overcloud.ControllerServiceChain.ServiceChain.72]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:25Z [overcloud.ControllerServiceChain.ServiceChain.17]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:25Z [overcloud.ControllerServiceChain.ServiceChain.33]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:25Z [overcloud.ControllerServiceChain.ServiceChain.60]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:26Z [overcloud.ControllerServiceChain.ServiceChain.11]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:26Z [overcloud.ControllerServiceChain.ServiceChain.32]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:26Z [overcloud.ControllerServiceChain.ServiceChain.20]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:26Z [overcloud.ControllerServiceChain.ServiceChain.57]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:26Z [overcloud.ControllerServiceChain.ServiceChain.71]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:26Z [overcloud.ControllerServiceChain.ServiceChain.12]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.19]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.54]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.36]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.66]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.39]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.8]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.62]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.37]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:27Z [overcloud.ControllerServiceChain.ServiceChain.51]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.68]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.34]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.26]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.56]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.41]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.16]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.53]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.52]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:28Z [overcloud.ControllerServiceChain.ServiceChain.7]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.40]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.27]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.48]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.44]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.28]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.61]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:29Z [overcloud.ControllerServiceChain.ServiceChain.23]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.30]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.10]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.59]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.63]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.35]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.5]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.15]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:30Z [overcloud.ControllerServiceChain.ServiceChain.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.45]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.19]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.53]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.70]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.54]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.24]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.65]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.56]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.42]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.41]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.66]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.25]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.26]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.51]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.39]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.50]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.67]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.18]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.33]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.4]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.37]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.21]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.12]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.20]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.7]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.34]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.47]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.38]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.2]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.27]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.40]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.62]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.3]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.55]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.64]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.48]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.73]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.31]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.16]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.29]: CREATE_COMPLETE state changed
2016-12-25 11:31:31Z [overcloud.ControllerServiceChain.ServiceChain.10]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.6]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.58]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.72]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.69]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.9]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.49]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.43]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.11]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.0]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.5]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.46]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.14]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.13]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.17]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.22]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.61]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.59]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.68]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.60]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.35]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.44]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.32]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.28]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.63]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.57]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.52]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.8]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.71]: CREATE_COMPLETE state changed
2016-12-25 11:31:32Z [overcloud.ControllerServiceChain.ServiceChain.36]: CREATE_COMPLETE state changed
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain.ServiceChain.23]: CREATE_COMPLETE state changed
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain.ServiceChain.30]: CREATE_COMPLETE state changed
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain.ServiceChain.1]: CREATE_COMPLETE state changed
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain.ServiceChain.15]: CREATE_COMPLETE state changed
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain.ServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:33Z [overcloud.ControllerServiceChain]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:34Z [overcloud.ControllerServiceChain]: CREATE_COMPLETE state changed
2016-12-25 11:31:40Z [overcloud.Compute]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:40Z [overcloud.Compute]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:40Z [overcloud.CephStorage]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:40Z [overcloud.Compute.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:41Z [overcloud.CephStorage]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:41Z [overcloud.CephStorage.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:41Z [overcloud.BlockStorage]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:41Z [overcloud.Controller]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:42Z [overcloud.Compute.0]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:42Z [overcloud.Compute.0.UpdateConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:42Z [overcloud.Compute.0.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:42Z [overcloud.Compute.0.NodeUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:42Z [overcloud.Compute.0.UpdateConfig]: CREATE_COMPLETE state changed
2016-12-25 11:31:43Z [overcloud.ObjectStorage]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:43Z [overcloud.Controller]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:43Z [overcloud.Controller.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:43Z [overcloud.CephStorage.0]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:43Z [overcloud.CephStorage.0.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:43Z [overcloud.CephStorage.0.NodeUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:43Z [overcloud.CephStorage.0.UpdateConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:43Z [overcloud.Compute.0.NodeUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:44Z [overcloud.Controller.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:44Z [overcloud.ObjectStorage]: CREATE_COMPLETE state changed
2016-12-25 11:31:44Z [overcloud.BlockStorage]: CREATE_COMPLETE state changed
2016-12-25 11:31:44Z [overcloud.ObjectStorageIpListMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:44Z [overcloud.Controller.1]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:44Z [overcloud.Controller.1.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:44Z [overcloud.ObjectStorageIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:44Z [overcloud.ObjectStorageIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:44Z [overcloud.ObjectStorageIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
2016-12-25 11:31:44Z [overcloud.BlockStorageIpListMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:44Z [overcloud.ObjectStorageIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:44Z [overcloud.Controller.1.UpdateConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:44Z [overcloud.CephStorage.0.NodeUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:45Z [overcloud.Compute.0.NodeAdminUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:45Z [overcloud.Controller.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:45Z [overcloud.Controller.1.NodeUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:45Z [overcloud.CephStorage.0.UpdateConfig]: CREATE_COMPLETE state changed
2016-12-25 11:31:45Z [overcloud.Compute.0.UserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:45Z [overcloud.BlockStorageIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:45Z [overcloud.BlockStorageIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:45Z [overcloud.BlockStorageIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
2016-12-25 11:31:45Z [overcloud.BlockStorageIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:31:45Z [overcloud.Controller.0]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:45Z [overcloud.Controller.0.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.Controller.0.UpdateConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.Controller.0.NodeUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.CephStorage.0.NodeAdminUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.Controller.0.UpdateConfig]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.ObjectStorageIpListMap]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.BlockStorageIpListMap]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.CephStorage.0.UserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.Controller.1.UpdateConfig]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.Compute.0.UserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.Controller.1.NodeUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:46Z [overcloud.Controller.2]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:31:46Z [overcloud.Compute.0.NovaCompute]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.Controller.2.NodeAdminUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.Controller.2.UpdateConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:46Z [overcloud.Controller.2.NodeUserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:47Z [overcloud.Controller.0.NodeUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:47Z [overcloud.CephStorage.0.UserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:47Z [overcloud.Controller.1.NodeAdminUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:47Z [overcloud.Controller.1.UserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:47Z [overcloud.CephStorage.0.CephStorage]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:48Z [overcloud.Controller.2.UpdateConfig]: CREATE_COMPLETE state changed
2016-12-25 11:31:48Z [overcloud.Controller.2.NodeUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:48Z [overcloud.Controller.0.NodeAdminUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:48Z [overcloud.Controller.0.UserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:49Z [overcloud.Controller.1.UserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:49Z [overcloud.Controller.2.NodeAdminUserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:49Z [overcloud.Controller.2.UserData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:49Z [overcloud.Controller.0.UserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:49Z [overcloud.Controller.1.Controller]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:49Z [overcloud.Controller.0.Controller]: CREATE_IN_PROGRESS state changed
2016-12-25 11:31:50Z [overcloud.Controller.2.UserData]: CREATE_COMPLETE state changed
2016-12-25 11:31:51Z [overcloud.Controller.2.Controller]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:16Z [overcloud.Controller.0.Controller]: CREATE_COMPLETE state changed
2016-12-25 11:43:18Z [overcloud.Controller.0.StoragePort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:19Z [overcloud.Controller.2.Controller]: CREATE_COMPLETE state changed
2016-12-25 11:43:19Z [overcloud.Controller.0.ExternalPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:20Z [overcloud.Controller.2.InternalApiPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:21Z [overcloud.Controller.0.ManagementPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:21Z [overcloud.Controller.2.ManagementPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:22Z [overcloud.CephStorage.0.CephStorage]: CREATE_COMPLETE state changed
2016-12-25 11:43:22Z [overcloud.Controller.0.TenantPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:23Z [overcloud.CephStorage.0.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:24Z [overcloud.Controller.2.TenantPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:24Z [overcloud.Controller.0.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:24Z [overcloud.CephStorage.0.TenantPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:24Z [overcloud.Controller.2.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:25Z [overcloud.Compute.0.NovaCompute]: CREATE_COMPLETE state changed
2016-12-25 11:43:25Z [overcloud.Controller.0.InternalApiPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:25Z [overcloud.CephStorage.0.InternalApiPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:26Z [overcloud.Compute.0.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:26Z [overcloud.Controller.2.StoragePort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:26Z [overcloud.CephStorage.0.ManagementPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:27Z [overcloud.Compute.0.TenantPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:27Z [overcloud.Controller.0.UpdateDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:27Z [overcloud.Controller.2.ExternalPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:28Z [overcloud.CephStorage.0.StoragePort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:28Z [overcloud.Compute.0.ManagementPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:29Z [overcloud.Controller.2.UpdateDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:29Z [overcloud.CephStorage.0.UpdateDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:30Z [overcloud.Compute.0.ExternalPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:32Z [overcloud.Controller.1.Controller]: CREATE_COMPLETE state changed
2016-12-25 11:43:33Z [overcloud.Compute.0.InternalApiPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:33Z [overcloud.Controller.1.TenantPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:34Z [overcloud.Compute.0.StoragePort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:34Z [overcloud.CephStorage.0.ExternalPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:34Z [overcloud.Controller.1.ManagementPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:34Z [overcloud.Compute.0.UpdateDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:35Z [overcloud.Controller.0.TenantPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:35Z [overcloud.Controller.1.StorageMgmtPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:36Z [overcloud.Controller.2.ManagementPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:36Z [overcloud.Controller.0.ExternalPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:36Z [overcloud.CephStorage.0.TenantPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:36Z [overcloud.CephStorage.0.StorageMgmtPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:36Z [overcloud.Controller.0.ManagementPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:36Z [overcloud.Controller.1.ExternalPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:36Z [overcloud.Controller.2.InternalApiPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:37Z [overcloud.Controller.0.InternalApiPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:37Z [overcloud.CephStorage.0.InternalApiPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:37Z [overcloud.Controller.0.StorageMgmtPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:37Z [overcloud.Controller.2.ExternalPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:38Z [overcloud.Controller.1.InternalApiPort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:38Z [overcloud.Compute.0.StorageMgmtPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:38Z [overcloud.Controller.0.StoragePort]: CREATE_COMPLETE state changed
2016-12-25 11:43:38Z [overcloud.CephStorage.0.ManagementPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:38Z [overcloud.Controller.2.TenantPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:38Z [overcloud.Compute.0.TenantPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:39Z [overcloud.CephStorage.0.StoragePort]: CREATE_COMPLETE state changed
2016-12-25 11:43:39Z [overcloud.Controller.1.UpdateDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:39Z [overcloud.Controller.2.StoragePort]: CREATE_COMPLETE state changed
2016-12-25 11:43:39Z [overcloud.Compute.0.StoragePort]: CREATE_COMPLETE state changed
2016-12-25 11:43:39Z [overcloud.CephStorage.0.ExternalPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:39Z [overcloud.Controller.2.StorageMgmtPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:40Z [overcloud.Compute.0.ManagementPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:40Z [overcloud.Controller.0.NetworkConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:40Z [overcloud.Controller.1.StoragePort]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:40Z [overcloud.CephStorage.0.NetworkConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:40Z [overcloud.Compute.0.ExternalPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:41Z [overcloud.Compute.0.InternalApiPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:41Z [overcloud.Controller.0.NetIpMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:42Z [overcloud.Controller.2.NetworkConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:43Z [overcloud.Compute.0.NetIpMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:43Z [overcloud.CephStorage.0.NetIpMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:44Z [overcloud.Compute.0.NetworkConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:44Z [overcloud.Controller.2.NetIpMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:44Z [overcloud.Controller.1.TenantPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:44Z [overcloud.Controller.1.ManagementPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:44Z [overcloud.Controller.0.NetIpMap]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Controller.1.StorageMgmtPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Controller.0.NetworkConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Controller.0.NetworkDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:45Z [overcloud.CephStorage.0.NetworkConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Controller.1.StoragePort]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.CephStorage.0.NetIpMap]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.CephStorage.0.NetworkDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:45Z [overcloud.Compute.0.NetworkConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Controller.1.ExternalPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Controller.2.NetIpMap]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Compute.0.NetIpMap]: CREATE_COMPLETE state changed
2016-12-25 11:43:45Z [overcloud.Compute.0.NovaComputeConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:46Z [overcloud.CephStorage.0.CephStorageConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:46Z [overcloud.Compute.0.NetworkDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:46Z [overcloud.Controller.1.InternalApiPort]: CREATE_COMPLETE state changed
2016-12-25 11:43:46Z [overcloud.Controller.2.NetworkConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:46Z [overcloud.Controller.2.NetworkDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:47Z [overcloud.CephStorage.0.CephStorageConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:47Z [overcloud.Controller.1.NetIpMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:47Z [overcloud.Compute.0.NovaComputeConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:48Z [overcloud.Controller.1.NetworkConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:43:50Z [overcloud.Controller.1.NetworkConfig]: CREATE_COMPLETE state changed
2016-12-25 11:43:50Z [overcloud.Controller.1.NetIpMap]: CREATE_COMPLETE state changed
2016-12-25 11:43:50Z [overcloud.Controller.1.NetworkDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:46:48Z [overcloud.Controller.2.UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 86c369c1-ca29-432c-a072-894f31661188 succeeded
2016-12-25 11:46:49Z [overcloud.Controller.2.UpdateDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:46:49Z [overcloud.Controller.0.UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment e50af26b-61fa-433d-92df-76f8d9b5e1ef succeeded
2016-12-25 11:46:49Z [overcloud.Controller.0.UpdateDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:46:50Z [overcloud.CephStorage.0.UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 9448f0e2-f6d2-4ea3-90e9-d7e361135ee3 succeeded
2016-12-25 11:46:50Z [overcloud.CephStorage.0.UpdateDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:46:54Z [overcloud.Compute.0.UpdateDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:46:54Z [overcloud.Compute.0.UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment ae3e91c1-1f62-4fd5-b97e-e95a1750e42e succeeded
2016-12-25 11:46:58Z [overcloud.Controller.2.NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment a3979fec-3229-499e-b090-ee9f621f3a0e succeeded
2016-12-25 11:47:00Z [overcloud.Controller.2.NetworkDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:00Z [overcloud.Controller.0.NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 11940935-7f20-4d50-8083-471bd40c5cd7 succeeded
2016-12-25 11:47:00Z [overcloud.Controller.0.NetworkDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:00Z [overcloud.CephStorage.0.NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment e34f3420-b15d-413d-88b0-ddaed1c3a6c7 succeeded
2016-12-25 11:47:00Z [overcloud.Controller.0.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:00Z [overcloud.Controller.2.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:00Z [overcloud.CephStorage.0.NetworkDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:00Z [overcloud.CephStorage.0.CephStorageDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:01Z [overcloud.Controller.0.NodeTLSCAData]: CREATE_COMPLETE state changed
2016-12-25 11:47:01Z [overcloud.Compute.0.NetworkDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:01Z [overcloud.Controller.2.NodeTLSCAData]: CREATE_COMPLETE state changed
2016-12-25 11:47:01Z [overcloud.Compute.0.NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment e2749d14-c6e1-4c7b-bef2-ff31d7d7a329 succeeded
2016-12-25 11:47:01Z [overcloud.Controller.0.NodeTLSData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:01Z [overcloud.Controller.2.NodeTLSData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:01Z [overcloud.Compute.0.NovaComputeDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:02Z [overcloud.Controller.0.NodeTLSData]: CREATE_COMPLETE state changed
2016-12-25 11:47:02Z [overcloud.Controller.0.ControllerConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:02Z [overcloud.Controller.2.NodeTLSData]: CREATE_COMPLETE state changed
2016-12-25 11:47:03Z [overcloud.Controller.2.ControllerConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:04Z [overcloud.Controller.0.ControllerConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:04Z [overcloud.Controller.0.ControllerDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:05Z [overcloud.Controller.2.ControllerConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:05Z [overcloud.Controller.2.ControllerDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:22Z [overcloud.CephStorage.0.CephStorageDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 5c03a894-1288-45e8-92b0-961a99cc5b73 succeeded
2016-12-25 11:47:23Z [overcloud.CephStorage.0.CephStorageDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:23Z [overcloud.CephStorage.0.CephStorageExtraConfigPre]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:24Z [overcloud.CephStorage.0.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:25Z [overcloud.CephStorage.0.CephStorageExtraConfigPre]: CREATE_COMPLETE state changed
2016-12-25 11:47:25Z [overcloud.CephStorage.0.NodeTLSCAData]: CREATE_COMPLETE state changed
2016-12-25 11:47:25Z [overcloud.CephStorage.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:26Z [overcloud.CephStorage.0.NodeExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:26Z [overcloud.CephStorage.0]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:26Z [overcloud.Controller.1.UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 65adadf2-3b77-4500-bbd7-1ab709430faf succeeded
2016-12-25 11:47:27Z [overcloud.Controller.1.UpdateDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:27Z [overcloud.CephStorage.0]: CREATE_COMPLETE state changed
2016-12-25 11:47:27Z [overcloud.CephStorage]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:27Z [overcloud.CephStorage]: CREATE_COMPLETE state changed
2016-12-25 11:47:30Z [overcloud.Compute.0.NovaComputeDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 09e98b31-6586-436f-a99d-75d5801e0d27 succeeded
2016-12-25 11:47:31Z [overcloud.CephStorageIpListMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:31Z [overcloud.CephStorageIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:47:31Z [overcloud.CephStorageIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:31Z [overcloud.Compute.0.NovaComputeDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:31Z [overcloud.Compute.0.ComputeExtraConfigPre]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:31Z [overcloud.CephStorageIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
2016-12-25 11:47:31Z [overcloud.CephStorageIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:32Z [overcloud.Compute.0.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:32Z [overcloud.CephStorageIpListMap]: CREATE_COMPLETE state changed
2016-12-25 11:47:33Z [overcloud.Compute.0.NodeTLSCAData]: CREATE_COMPLETE state changed
2016-12-25 11:47:33Z [overcloud.Compute.0.ComputeExtraConfigPre]: CREATE_COMPLETE state changed
2016-12-25 11:47:33Z [overcloud.Compute.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:33Z [overcloud.Controller.0.ControllerDeployment]: SIGNAL_IN_PROGRESS Signal: deployment a13a0574-7db2-4b67-ac95-e853dc05bf1a succeeded
2016-12-25 11:47:33Z [overcloud.Controller.1.NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment bcc75eaa-e25f-4577-9a96-40dc7eedadcb succeeded
2016-12-25 11:47:33Z [overcloud.Controller.1.NetworkDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:33Z [overcloud.Controller.0.ControllerDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:33Z [overcloud.Controller.0.ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:33Z [overcloud.Controller.1.NodeTLSCAData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:34Z [overcloud.Compute.0.NodeExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:34Z [overcloud.Compute.0]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:35Z [overcloud.Controller.1.NodeTLSCAData]: CREATE_COMPLETE state changed
2016-12-25 11:47:35Z [overcloud.Controller.0.ControllerExtraConfigPre]: CREATE_COMPLETE state changed
2016-12-25 11:47:35Z [overcloud.Controller.1.NodeTLSData]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:35Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:35Z [overcloud.Controller.2.ControllerDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 035ea794-e89e-420c-a52c-38a3c02aa56a succeeded
2016-12-25 11:47:35Z [overcloud.Compute.0]: CREATE_COMPLETE state changed
2016-12-25 11:47:35Z [overcloud.Compute]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:35Z [overcloud.Controller.2.ControllerDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:35Z [overcloud.Controller.2.ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:36Z [overcloud.Compute]: CREATE_COMPLETE state changed
2016-12-25 11:47:36Z [overcloud.Controller.1.NodeTLSData]: CREATE_COMPLETE state changed
2016-12-25 11:47:36Z [overcloud.Controller.1.ControllerConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:36Z [overcloud.Controller.0.NodeExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:36Z [overcloud.Controller.0]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:36Z [overcloud.Controller.0]: CREATE_COMPLETE state changed
2016-12-25 11:47:37Z [overcloud.Controller.2.ControllerExtraConfigPre]: CREATE_COMPLETE state changed
2016-12-25 11:47:37Z [overcloud.Controller.2.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:37Z [overcloud.ComputeIpListMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:37Z [overcloud.ComputeIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:47:37Z [overcloud.ComputeIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:37Z [overcloud.ComputeIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
2016-12-25 11:47:37Z [overcloud.Controller.1.ControllerConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:37Z [overcloud.ComputeIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:37Z [overcloud.Controller.1.ControllerDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:38Z [overcloud.Controller.2.NodeExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:38Z [overcloud.Controller.2]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:38Z [overcloud.ComputeIpListMap]: CREATE_COMPLETE state changed
2016-12-25 11:47:38Z [overcloud.Controller.2]: CREATE_COMPLETE state changed
2016-12-25 11:47:56Z [overcloud.Controller.1.ControllerDeployment]: SIGNAL_IN_PROGRESS Signal: deployment 3570b3cb-d078-4537-a10d-cb75643c7b11 succeeded
2016-12-25 11:47:57Z [overcloud.Controller.1.ControllerDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:47:57Z [overcloud.Controller.1.ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:58Z [overcloud.Controller.1.ControllerExtraConfigPre]: CREATE_COMPLETE state changed
2016-12-25 11:47:58Z [overcloud.Controller.1.NodeExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:47:59Z [overcloud.Controller.1.NodeExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:47:59Z [overcloud.Controller.1]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:47:59Z [overcloud.Controller.1]: CREATE_COMPLETE state changed
2016-12-25 11:47:59Z [overcloud.Controller]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:00Z [overcloud.Controller]: CREATE_COMPLETE state changed
2016-12-25 11:48:03Z [overcloud.hostsConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:03Z [overcloud.hostsConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:03Z [overcloud.hostsConfig.hostsConfigImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:03Z [overcloud.ControllerIpListMap]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:03Z [overcloud.hostsConfig.hostsConfigImpl]: CREATE_COMPLETE state changed
2016-12-25 11:48:03Z [overcloud.hostsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:03Z [overcloud.AllNodesValidationConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:03Z [overcloud.ControllerIpListMap]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:03Z [overcloud.ControllerIpListMap.EnabledServicesValue]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:03Z [overcloud.ControllerIpListMap.EnabledServicesValue]: CREATE_COMPLETE state changed
2016-12-25 11:48:03Z [overcloud.ControllerIpListMap]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:03Z [overcloud.AllNodesValidationConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:03Z [overcloud.AllNodesValidationConfig.AllNodesValidationsImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:03Z [overcloud.AllNodesValidationConfig.AllNodesValidationsImpl]: CREATE_COMPLETE state changed
2016-12-25 11:48:03Z [overcloud.AllNodesValidationConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:04Z [overcloud.AllNodesValidationConfig]: CREATE_COMPLETE state changed
2016-12-25 11:48:04Z [overcloud.hostsConfig]: CREATE_COMPLETE state changed
2016-12-25 11:48:04Z [overcloud.ControllerIpListMap]: CREATE_COMPLETE state changed
2016-12-25 11:48:04Z [overcloud.ObjectStorageHostsDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.CephStorageHostsDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.CephStorageHostsDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:05Z [overcloud.ControllerHostsDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.CephStorageHostsDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.BlockStorageHostsDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.ControllerHostsDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:05Z [overcloud.ControllerHostsDeployment.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.ComputeHostsDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:05Z [overcloud.ComputeHostsDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:05Z [overcloud.ComputeHostsDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:06Z [overcloud.ControllerHostsDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:06Z [overcloud.allNodesConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:07Z [overcloud.ControllerHostsDeployment.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:07Z [overcloud.allNodesConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:07Z [overcloud.allNodesConfig.allNodesConfigImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:08Z [overcloud.allNodesConfig.allNodesConfigImpl]: CREATE_COMPLETE state changed
2016-12-25 11:48:08Z [overcloud.allNodesConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:08Z [overcloud.ObjectStorageHostsDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:08Z [overcloud.allNodesConfig]: CREATE_COMPLETE state changed
2016-12-25 11:48:08Z [overcloud.BlockStorageHostsDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:27Z [overcloud.ControllerHostsDeployment.2]: SIGNAL_IN_PROGRESS Signal: deployment c2ec932c-a187-438f-8a91-eab08d85307d succeeded
2016-12-25 11:48:28Z [overcloud.ControllerHostsDeployment.2]: CREATE_COMPLETE state changed
2016-12-25 11:48:33Z [overcloud.ControllerHostsDeployment.1]: SIGNAL_IN_PROGRESS Signal: deployment 6d6133a1-3479-441e-8372-560e3bec39d5 succeeded
2016-12-25 11:48:34Z [overcloud.CephStorageHostsDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 74d62c6d-5202-4afc-a9f9-93e2d8686d6d succeeded
2016-12-25 11:48:34Z [overcloud.ControllerHostsDeployment.1]: CREATE_COMPLETE state changed
2016-12-25 11:48:34Z [overcloud.CephStorageHostsDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:48:34Z [overcloud.CephStorageHostsDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:34Z [overcloud.CephStorageHostsDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:41Z [overcloud.ComputeHostsDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 58e077be-6b79-44a7-8d4f-397e3c166283 succeeded
2016-12-25 11:48:41Z [overcloud.ComputeHostsDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:48:41Z [overcloud.ComputeHostsDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:42Z [overcloud.ComputeHostsDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:55Z [overcloud.ControllerHostsDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment c7902888-b6f2-46fe-866e-95d192f11038 succeeded
2016-12-25 11:48:55Z [overcloud.ControllerHostsDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:48:55Z [overcloud.ControllerHostsDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:48:56Z [overcloud.ControllerHostsDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:56Z [overcloud.CephStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:56Z [overcloud.CephStorageAllNodesDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:56Z [overcloud.ObjectStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:56Z [overcloud.CephStorageAllNodesDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:56Z [overcloud.ControllerAllNodesDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:57Z [overcloud.BlockStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:57Z [overcloud.ControllerAllNodesDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:57Z [overcloud.ControllerAllNodesDeployment.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:57Z [overcloud.ComputeAllNodesDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:57Z [overcloud.ComputeAllNodesDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:48:57Z [overcloud.ComputeAllNodesDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:57Z [overcloud.ControllerAllNodesDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:57Z [overcloud.ControllerAllNodesDeployment.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:58Z [overcloud.ObjectStorageAllNodesDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:58Z [overcloud.BlockStorageAllNodesDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:58Z [overcloud.ObjectStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:58Z [overcloud.BlockStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:48:59Z [overcloud.ObjectStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:48:59Z [overcloud.BlockStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:49:16Z [overcloud.ComputeAllNodesDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 9c5b1094-cacb-4cd7-8249-aeaf3ebe5ed0 succeeded
2016-12-25 11:49:16Z [overcloud.ComputeAllNodesDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:49:16Z [overcloud.ComputeAllNodesDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:17Z [overcloud.ComputeAllNodesDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:49:17Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:17Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:17Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:27Z [overcloud.CephStorageAllNodesDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 3111dd35-c002-469c-b8fb-e6877cfeefd9 succeeded
2016-12-25 11:49:28Z [overcloud.CephStorageAllNodesDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:49:28Z [overcloud.CephStorageAllNodesDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:29Z [overcloud.ControllerAllNodesDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment f8d261a2-2bca-4bc3-b066-76c01178df6f succeeded
2016-12-25 11:49:29Z [overcloud.CephStorageAllNodesDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:49:29Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:29Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:29Z [overcloud.CephStorageAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:29Z [overcloud.ControllerAllNodesDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:49:34Z [overcloud.ControllerAllNodesDeployment.2]: SIGNAL_IN_PROGRESS Signal: deployment 46044f3f-c891-4fa2-9031-966843a9d42e succeeded
2016-12-25 11:49:34Z [overcloud.ControllerAllNodesDeployment.2]: CREATE_COMPLETE state changed
2016-12-25 11:49:38Z [overcloud.ComputeAllNodesValidationDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 3c0ec9f1-5bc7-4fab-9e1c-4627eb75da55 succeeded
2016-12-25 11:49:38Z [overcloud.ComputeAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:49:38Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:38Z [overcloud.ControllerAllNodesDeployment.1]: SIGNAL_IN_PROGRESS Signal: deployment 45690a63-6361-441f-ab6c-995f04b40e38 succeeded
2016-12-25 11:49:39Z [overcloud.ControllerAllNodesDeployment.1]: CREATE_COMPLETE state changed
2016-12-25 11:49:39Z [overcloud.ControllerAllNodesDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:39Z [overcloud.ControllerAllNodesDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:49:39Z [overcloud.ComputeAllNodesValidationDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:49:39Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:39Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:39Z [overcloud.AllNodesDeploySteps]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:39Z [overcloud.ControllerAllNodesValidationDeployment.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:40Z [overcloud.UpdateWorkflow]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:40Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.ControllerAllNodesValidationDeployment.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.UpdateWorkflow]: CREATE_COMPLETE state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.BlockStoragePreConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.ControllerPreConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.ComputeConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
2016-12-25 11:49:41Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputePreConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputeConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.BlockStorageConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputeConfig.ComputePuppetConfigImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputeConfig.ComputePuppetConfigImpl]: CREATE_COMPLETE state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputeConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.BlockStorageConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.BlockStorageConfig.BlockStoragePuppetConfigImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.CephStorageConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.BlockStorageConfig.BlockStoragePuppetConfigImpl]: CREATE_COMPLETE state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.BlockStorageConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.CephStorageConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.CephStorageConfig.CephStoragePuppetConfigImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.CephStoragePreConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.CephStorageConfig.CephStoragePuppetConfigImpl]: CREATE_COMPLETE state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.CephStorageConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ControllerConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:42Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ControllerConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ControllerConfig.ControllerPuppetConfigImpl]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ObjectStorageConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ObjectStoragePreConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.ControllerPrePuppet]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.CephStoragePreConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:43Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.BlockStorageConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.CephStorageConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ControllerArtifactsConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ControllerPreConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.BlockStoragePreConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig.DeployArtifacts]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ControllerConfig.ControllerPuppetConfigImpl]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [ControllerPrePuppetMaintenanceModeConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ObjectStoragePreConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ControllerConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig.DeployArtifacts]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ComputePreConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [ControllerPrePuppetMaintenanceModeDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ComputeConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ComputeArtifactsConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ObjectStorageConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:44Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:44Z [overcloud.CephStorageAllNodesValidationDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 8e07f73e-24e9-4cba-bb7e-8faa179ee384 succeeded
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:45Z [overcloud.CephStorageAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:45Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:49:45Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:46Z [overcloud.CephStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:49:46Z [overcloud.AllNodesDeploySteps.ObjectStorageArtifactsDeploy]: CREATE_COMPLETE state changed
2016-12-25 11:49:46Z [overcloud.AllNodesDeploySteps.BlockStorageArtifactsDeploy]: CREATE_COMPLETE state changed
2016-12-25 11:49:46Z [overcloud.AllNodesDeploySteps.ControllerConfig]: CREATE_COMPLETE state changed
2016-12-25 11:49:47Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:47Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:47Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:47Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:49:48Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step1]: CREATE_COMPLETE state changed
2016-12-25 11:49:48Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step1]: CREATE_COMPLETE state changed
2016-12-25 11:50:05Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy.0]: SIGNAL_IN_PROGRESS Signal: deployment 916f5067-70d3-4535-b83d-cb97583683e7 succeeded
2016-12-25 11:50:05Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy.0]: CREATE_COMPLETE state changed
2016-12-25 11:50:05Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:50:06Z [overcloud.AllNodesDeploySteps.ComputeArtifactsDeploy]: CREATE_COMPLETE state changed
2016-12-25 11:50:06Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy.0]: SIGNAL_IN_PROGRESS Signal: deployment b163edf2-9eeb-4de3-b106-783f540da227 succeeded
2016-12-25 11:50:06Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:50:07Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:50:07Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:50:07Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy.0]: CREATE_COMPLETE state changed
2016-12-25 11:50:07Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:50:08Z [overcloud.AllNodesDeploySteps.CephStorageArtifactsDeploy]: CREATE_COMPLETE state changed
2016-12-25 11:50:08Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:50:09Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:50:09Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:50:17Z [overcloud.ControllerAllNodesValidationDeployment.1]: SIGNAL_IN_PROGRESS Signal: deployment d8942741-39b7-4e7b-af6a-cc93a99aa279 succeeded
2016-12-25 11:50:17Z [overcloud.ControllerAllNodesValidationDeployment.1]: CREATE_COMPLETE state changed
2016-12-25 11:50:23Z [overcloud.ControllerAllNodesValidationDeployment.2]: SIGNAL_IN_PROGRESS Signal: deployment d1650394-c4dd-4751-95c3-8c24ae88ee87 succeeded
2016-12-25 11:50:24Z [overcloud.ControllerAllNodesValidationDeployment.0]: SIGNAL_IN_PROGRESS Signal: deployment 8459eb0b-1cd1-427f-8a94-719a2b1efc9d succeeded
2016-12-25 11:50:24Z [overcloud.ControllerAllNodesValidationDeployment.2]: CREATE_COMPLETE state changed
2016-12-25 11:50:25Z [overcloud.ControllerAllNodesValidationDeployment.0]: CREATE_COMPLETE state changed
2016-12-25 11:50:25Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:50:26Z [overcloud.ControllerAllNodesValidationDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:50:26Z [overcloud.AllNodesExtraConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 11:50:27Z [overcloud.AllNodesExtraConfig]: CREATE_COMPLETE state changed
2016-12-25 11:50:43Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0]: SIGNAL_IN_PROGRESS Signal: deployment 535ce9a5-30bf-45b0-b3a6-b9342154380e succeeded
2016-12-25 11:50:43Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1.0]: SIGNAL_IN_PROGRESS Signal: deployment 2356020c-81b8-428d-a5fb-feb4e321584a succeeded
2016-12-25 11:50:43Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1.0]: CREATE_COMPLETE state changed
2016-12-25 11:50:43Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1.0]: CREATE_COMPLETE state changed
2016-12-25 11:50:44Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:50:44Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:50:44Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step1]: CREATE_COMPLETE state changed
2016-12-25 11:50:44Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step1]: CREATE_COMPLETE state changed
2016-12-25 11:50:52Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.1]: SIGNAL_IN_PROGRESS Signal: deployment e766ff88-5e57-4c0e-95df-0f745f1d9c41 succeeded
2016-12-25 11:50:52Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.1]: CREATE_COMPLETE state changed
2016-12-25 11:50:57Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.2]: SIGNAL_IN_PROGRESS Signal: deployment 59b21e60-7f32-4e69-baec-2abc5c174e89 succeeded
2016-12-25 11:50:57Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.2]: CREATE_COMPLETE state changed
2016-12-25 11:50:59Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.0]: SIGNAL_IN_PROGRESS Signal: deployment ab6a6e57-ef3a-42c7-a03c-90fa0c62ce46 succeeded
2016-12-25 11:51:00Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy.0]: CREATE_COMPLETE state changed
2016-12-25 11:51:00Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:51:00Z [ControllerPrePuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed
2016-12-25 11:51:00Z [overcloud.AllNodesDeploySteps.ControllerPrePuppet]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerArtifactsDeploy]: CREATE_COMPLETE state changed
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerPrePuppet]: CREATE_COMPLETE state changed
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:51:01Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.1]: SIGNAL_IN_PROGRESS Signal: deployment 4ff4c74a-3838-49ec-8c0e-0a243ac93e4a succeeded
2016-12-25 11:53:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.2]: SIGNAL_IN_PROGRESS Signal: deployment 163ff76d-e197-4f9b-ab49-3eadd1cdefd9 succeeded
2016-12-25 11:53:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.2]: CREATE_COMPLETE state changed
2016-12-25 11:53:01Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.1]: CREATE_COMPLETE state changed
2016-12-25 11:53:08Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0]: SIGNAL_IN_PROGRESS Signal: deployment dcb765d5-2cbd-44c0-9efc-aaa1d78aedfe succeeded
2016-12-25 11:53:08Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1.0]: CREATE_COMPLETE state changed
2016-12-25 11:53:08Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:53:08Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step1]: CREATE_COMPLETE state changed
2016-12-25 11:53:08Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:08Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:53:09Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:10Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:10Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step2]: CREATE_COMPLETE state changed
2016-12-25 11:53:10Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step2]: CREATE_COMPLETE state changed
2016-12-25 11:53:10Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:53:43Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2.0]: SIGNAL_IN_PROGRESS Signal: deployment d4048b70-9aa2-406b-b5a9-db569775e753 succeeded
2016-12-25 11:53:43Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2.0]: CREATE_COMPLETE state changed
2016-12-25 11:53:43Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:53:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2.0]: SIGNAL_IN_PROGRESS Signal: deployment cc60ded8-4557-4489-be60-8df22252f493 succeeded
2016-12-25 11:53:45Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step2]: CREATE_COMPLETE state changed
2016-12-25 11:53:46Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2.0]: CREATE_COMPLETE state changed
2016-12-25 11:53:46Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:53:47Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step2]: CREATE_COMPLETE state changed
2016-12-25 11:54:14Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.2]: SIGNAL_IN_PROGRESS Signal: deployment 9e4dce18-56c9-4b01-bac1-075d9a8d1563 succeeded
2016-12-25 11:54:14Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.1]: SIGNAL_IN_PROGRESS Signal: deployment c2529e4f-9f2c-4e07-a8db-74d9703c666c succeeded
2016-12-25 11:54:14Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.2]: CREATE_COMPLETE state changed
2016-12-25 11:54:15Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.1]: CREATE_COMPLETE state changed
2016-12-25 11:55:28Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.0]: SIGNAL_IN_PROGRESS Signal: deployment 2872a510-8d20-42a7-9db3-b39254b1c722 succeeded
2016-12-25 11:55:29Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2.0]: CREATE_COMPLETE state changed
2016-12-25 11:55:29Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step2]: CREATE_COMPLETE state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:55:30Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:31Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:55:31Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step3]: CREATE_COMPLETE state changed
2016-12-25 11:55:31Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step3]: CREATE_COMPLETE state changed
2016-12-25 11:55:31Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.2]: CREATE_IN_PROGRESS state changed
2016-12-25 11:56:24Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3.0]: SIGNAL_IN_PROGRESS Signal: deployment 9c7b13aa-f1b5-43a0-b875-eb8e4574e1d0 succeeded
2016-12-25 11:56:24Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3.0]: SIGNAL_IN_PROGRESS Signal: deployment 215b0da2-92c7-4f4a-9653-8c9b277cd17f succeeded
2016-12-25 11:56:24Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3.0]: CREATE_COMPLETE state changed
2016-12-25 11:56:25Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:56:25Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3.0]: CREATE_COMPLETE state changed
2016-12-25 11:56:25Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:56:25Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step3]: CREATE_COMPLETE state changed
2016-12-25 11:56:25Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step3]: CREATE_COMPLETE state changed
2016-12-25 11:56:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.1]: SIGNAL_IN_PROGRESS Signal: deployment d09a76ac-68e6-4f7b-8f8e-41b42d092a9b succeeded
2016-12-25 11:56:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.1]: CREATE_COMPLETE state changed
2016-12-25 11:56:59Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.2]: SIGNAL_IN_PROGRESS Signal: deployment 9909dac5-737e-4ab2-add8-b0638e4dfd62 succeeded
2016-12-25 11:57:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.2]: CREATE_COMPLETE state changed
2016-12-25 11:59:02Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0]: SIGNAL_IN_PROGRESS Signal: deployment e0ca0154-ea0e-4dc7-a132-fb31b6cba351 succeeded
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3.0]: CREATE_COMPLETE state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step3]: CREATE_COMPLETE state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:03Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.1]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 11:59:04Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:05Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 11:59:06Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 11:59:06Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: CREATE_IN_PROGRESS state changed
2016-12-25 11:59:06Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.2]: CREATE_IN_PROGRESS state changed
2016-12-25 12:00:01Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment 0c57482b-76e8-44d7-9197-3cf5be5d5df0 succeeded
2016-12-25 12:00:02Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4.0]: CREATE_COMPLETE state changed
2016-12-25 12:00:02Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:00:03Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 12:03:01Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment bd58bab5-cd77-4dc4-a012-c94939c2f409 succeeded
2016-12-25 12:03:02Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4.0]: CREATE_COMPLETE state changed
2016-12-25 12:03:02Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:03:02Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 12:04:04Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.1]: SIGNAL_IN_PROGRESS Signal: deployment cfa41ca2-fac0-40d2-b172-639b12aba790 succeeded
2016-12-25 12:04:05Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.1]: CREATE_COMPLETE state changed
2016-12-25 12:04:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.2]: SIGNAL_IN_PROGRESS Signal: deployment 8f54670a-d54f-44d6-a34e-83ad16a734bb succeeded
2016-12-25 12:04:11Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.2]: CREATE_COMPLETE state changed
2016-12-25 12:05:55Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: SIGNAL_IN_PROGRESS Signal: deployment eee42321-d94d-4aea-a3f0-4e49886b1bc4 succeeded
2016-12-25 12:05:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4.0]: CREATE_COMPLETE state changed
2016-12-25 12:05:56Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step4]: CREATE_COMPLETE state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5.0]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5.0]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:57Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step5]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:58Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.0]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:59Z [overcloud.AllNodesDeploySteps.BlockStorageDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:05:59Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.2]: CREATE_IN_PROGRESS state changed
2016-12-25 12:05:59Z [overcloud.AllNodesDeploySteps.ObjectStorageDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:06:40Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5.0]: SIGNAL_IN_PROGRESS Signal: deployment cef8295c-5026-41dc-8f89-a8ec29d5a195 succeeded
2016-12-25 12:06:40Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5.0]: CREATE_COMPLETE state changed
2016-12-25 12:06:40Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:06:41Z [overcloud.AllNodesDeploySteps.CephStorageDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5.0]: SIGNAL_IN_PROGRESS Signal: deployment 22d20811-24fb-4b53-a356-e580b688fd2c succeeded
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5.0]: CREATE_COMPLETE state changed
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:06:45Z [overcloud.AllNodesDeploySteps.ComputeDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:10:58Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1]: SIGNAL_IN_PROGRESS Signal: deployment a466258f-0bb9-4363-b7c5-a81a4b9fd8e4 succeeded
2016-12-25 12:11:00Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.1]: CREATE_COMPLETE state changed
2016-12-25 12:11:13Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.2]: SIGNAL_IN_PROGRESS Signal: deployment 63a0fdb5-7459-4e3d-819b-0dd6c680b90e succeeded
2016-12-25 12:11:14Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.2]: CREATE_COMPLETE state changed
2016-12-25 12:16:37Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.0]: SIGNAL_IN_PROGRESS Signal: deployment 5a45a608-4458-48bf-9ca0-cb7e002307a2 succeeded
2016-12-25 12:16:38Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5.0]: CREATE_COMPLETE state changed
2016-12-25 12:16:38Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ControllerDeployment_Step5]: CREATE_COMPLETE state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:39Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ControllerPostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ObjectStoragePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.BlockStoragePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ComputePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.CephStoragePostConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:40Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:41Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:41Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.BlockStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ObjectStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ComputeExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.CephStorageExtraConfigPost]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_IN_PROGRESS Stack CREATE started
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeConfig]: CREATE_IN_PROGRESS state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeConfig]: CREATE_COMPLETE state changed
2016-12-25 12:16:42Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeDeployment]: CREATE_IN_PROGRESS state changed
2016-12-25 12:17:55Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed
2016-12-25 12:17:55Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS state changed
2016-12-25 12:18:58Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE state changed
2016-12-25 12:18:58Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:18:59Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE state changed
2016-12-25 12:18:59Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
2016-12-25 12:18:59Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
2016-12-25 12:18:59Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Started Mistral Workflow. Execution ID: f189f0d4-4287-4c45-8f3d-aca5a65e0843
Overcloud Endpoint: http://10.0.0.10:5000/v2.0
Overcloud Deployed</pre>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-75387782291232833492016-10-03T06:57:00.000-07:002016-10-03T11:00:15.085-07:00Set up sshuttle connection to TripleO Overcloud been deployed via instack-virt-setup on remote VIRTHOST<div dir="ltr" style="text-align: left;" trbidi="on">
Sshuttle may be installed on Fedora 24 via a straightforward `dnf -y install sshuttle` [<a href="https://lists.fedoraproject.org/pipermail/package-announce/2016-April/182490.html" target="_blank">Fedora 24 Update: sshuttle-0.78.0-2.fc24</a>]. Set up the F24 box as the workstation for a "TripleO instack-virt-setup overcloud/undercloud deployment
to VIRTHOST" over a trusted ssh connection. This setup works much more reliably than configuring<br />
FoxyProxy on the VIRTHOST, which runs "instack" (actually the undercloud VM) hosting the<br />
heat stack "overcloud" plus several overcloud Controller and Compute VMs <br />
<br />
Unlike QuickStart, instack-virt-setup deployments don't provide the ksmd daemon,<br />
which merges identical memory pages across overcloud VMs (with copy-on-write support).<br />
This results in significantly higher memory utilization on the VIRTHOST and requires<br />
better CPUs and 48 GB of RAM for testing HA overcloud deployments.<br />
Regarding KSM see <a href="https://en.wikipedia.org/wiki/Kernel_same-page_merging" target="_blank">https://en.wikipedia.org/wiki/Kernel_same-page_merging</a> <br />
<br />
The above was verified on RDO Mitaka. Newton appears to have issues with HA overcloud deployments, at least at the time of writing. See <a href="https://bugs.launchpad.net/tripleo" target="_blank">https://bugs.launchpad.net/tripleo</a> #1585275, <span class="bugnumber" id="yui_3_10_3_1_1475507591689_528">#1629366.</span> <br />
<br />
What is sshuttle? It’s a Python app that uses SSH to create a
quick and dirty VPN between your Linux, BSD, or Mac OS X machine and a
remote system that has SSH access and Python. Licensed under the GPLv2, sshuttle is a transparent proxy server
that lets users fake a VPN with minimal hassle. <br />
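With sshuttle installed on the workstation, the tunnel itself is a single command. A minimal sketch using this post's values (the instack VM lease 192.168.1.14 as the ssh hop, user stack, and the overcloud external network 10.0.0.0/24 — substitute your own endpoint); the command is only assembled and printed here, not executed:

```shell
# Build the sshuttle command line. REMOTE and SUBNET are this post's
# values (instack VM lease and overcloud external net) - substitute yours.
REMOTE="stack@192.168.1.14"
SUBNET="10.0.0.0/24"
CMD="sshuttle -r ${REMOTE} ${SUBNET}"
# Running this routes all 10.0.0.0/24 traffic through the ssh connection.
echo "${CMD}"
```

Add `--dns` if you also want DNS queries resolved on the remote side of the tunnel.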
<br />
***************************************************************************<br />
First install sshuttle on Fedora 24 :-<br />
$ dnf -y install sshuttle<br />
Then switch to VIRTHOST and set up a standard Linux bridge<br />
***************************************************************************<br />
<br />
# cat ifcfg-br0 <br />
DEVICE=br0<br />
TYPE=Bridge<br />
BOOTPROTO=static<br />
DNS1=192.168.1.1<br />
DNS2=83.221.202.254<br />
GATEWAY=192.168.1.1<br />
IPADDR=192.168.1.57<br />
NETMASK=255.255.255.0<br />
ONBOOT=yes<br />
<br />
# cat ifcfg-enp3s0<br />
DEVICE=enp3s0<br />
HWADDR=78:24:af:43:1b:53<br />
ONBOOT=yes<br />
TYPE=Ethernet<br />
IPV6INIT=no<br />
USERCTL=no<br />
BRIDGE=br0<br />
<br />
*************************** <br />
Then run script<br />
***************************<br />
<span style="color: #b45f06;">#!/bin/bash -x <br />
chkconfig network on<br />
systemctl stop NetworkManager<br />
systemctl disable NetworkManager <br />
service network restart<br />
</span><br />
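Before restarting the network it is worth sanity-checking that the NIC's BRIDGE= entry names the bridge's DEVICE=. A self-contained sketch (heredoc copies stand in for the real files under /etc/sysconfig/network-scripts/):

```shell
# Verify the ifcfg pair is wired consistently: ifcfg-enp3s0 must point
# its BRIDGE= at the DEVICE= declared in ifcfg-br0. The heredocs mirror
# the files shown above; on VIRTHOST read the real files instead.
BR_DEV=$(sed -n 's/^DEVICE=//p' <<'EOF'
DEVICE=br0
TYPE=Bridge
EOF
)
NIC_BR=$(sed -n 's/^BRIDGE=//p' <<'EOF'
DEVICE=enp3s0
BRIDGE=br0
EOF
)
if [ "$BR_DEV" = "$NIC_BR" ]; then
  echo "bridge wiring consistent: $BR_DEV"
else
  echo "mismatch: bridge=$BR_DEV nic=$NIC_BR"
fi
```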
Switch to VIRTHOST and follow <a href="http://lxer.com/module/newswire/view/234346/index.html" target="_blank">http://lxer.com/module/newswire/view/234346/index.html</a> until the instack VM is up and running, then shut down the "instack VM", add a third VNIC to it, and add a second VNIC to each of the baremetal_(X) VMs created by the instack-virt-setup run :-<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOdOAuMrgGjej3_NGZOLWInhjWljXy7G1Xy0edccMrBI99Evpku5w_I40T4BmXBZwhNe6pAAcMoeHvJTNPf3GpYElp7qve1KCNZ6zttExuQvdvPtLL0BtAut_HhE0olVtvF6nEMw/s1600/Screenshot+from+2016-10-03+16-04-02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOdOAuMrgGjej3_NGZOLWInhjWljXy7G1Xy0edccMrBI99Evpku5w_I40T4BmXBZwhNe6pAAcMoeHvJTNPf3GpYElp7qve1KCNZ6zttExuQvdvPtLL0BtAut_HhE0olVtvF6nEMw/s640/Screenshot+from+2016-10-03+16-04-02.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9C-WzzLym9UIX404uOKYlhIW_Zy6bHYG15-Rnofkkq2Ar6aVVV2sIcQ48p8PGER7IU7sMMfww3O1Y6Qze8dFk4XguYiyOu-Br3yG4OUAFQ1SpScQSTH_jcC9h0u7RHNzY8wylPw/s1600/Screenshot+from+2016-10-03+16-05-15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9C-WzzLym9UIX404uOKYlhIW_Zy6bHYG15-Rnofkkq2Ar6aVVV2sIcQ48p8PGER7IU7sMMfww3O1Y6Qze8dFk4XguYiyOu-Br3yG4OUAFQ1SpScQSTH_jcC9h0u7RHNzY8wylPw/s640/Screenshot+from+2016-10-03+16-05-15.png" width="640" /></a></div>
<br />
On the instack VM create the file /etc/sysconfig/network-scripts/ifcfg-eth2 with<br />
BOOTPROTO=dhcp and bring the interface up via `sudo ifup eth2`.<br />
<br />
[stack@instack ~]$ sudo su -<br />
<br />
Last login: Mon Oct 3 12:32:04 UTC 2016 from 192.168.1.4 on pts/2<br />
[root@instack ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth2<br />
TYPE=Ethernet<br />
BOOTPROTO=dhcp<br />
DEFROUTE=yes<br />
IPV4_FAILURE_FATAL=no<br />
IPV6INIT=yes<br />
IPV6_AUTOCONF=yes<br />
IPV6_DEFROUTE=yes<br />
IPV6_FAILURE_FATAL=no<br />
NAME=eth2<br />
DEVICE=eth2<br />
ONBOOT=yes<br />
PREFIX=24<br />
GATEWAY=192.168.1.1<br />
DNS1=83.221.202.254<br />
IPV6_PEERDNS=yes<br />
IPV6_PEERROUTES=yes<br />
IPV6_PRIVACY=no<br />
<br />
********************************************************************************** <br />
Issue `ifconfig` and make sure eth2 obtained an IP address from your office router (usually<br />
192.168.1.1). The instack VM now belongs to that network and can serve as the ssh tunnel endpoint for sshuttle, which provides access to the external<br />
network 10.0.0.0/24 created in the TripleO Master Branch overcloud<br />
once the deployment procedure completes.<br />
********************************************************************************** <br />
[root@instack ~]# ifconfig<br />
br-ctlplane: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500<br /> inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255<br /> inet6 fe80::222:cdff:fe52:11cf  prefixlen 64  scopeid 0x20&lt;link&gt;<br /> ether 00:22:cd:52:11:cf  txqueuelen 0  (Ethernet)<br /> RX packets 3203772  bytes 242696157 (231.4 MiB)<br /> RX errors 0  dropped 0  overruns 0  frame 0<br /> TX packets 4663339  bytes 20369572127 (18.9 GiB)<br /> TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0<br /><br />
eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500<br /> inet 192.168.122.23  netmask 255.255.255.0  broadcast 192.168.122.255<br /> inet6 fe80::5054:ff:fec7:6356  prefixlen 64  scopeid 0x20&lt;link&gt;<br /> ether 52:54:00:c7:63:56  txqueuelen 1000  (Ethernet)<br /> RX packets 50868  bytes 5455013 (5.2 MiB)<br /> RX errors 0  dropped 2  overruns 0  frame 0<br /> TX packets 44668  bytes 10199981 (9.7 MiB)<br /> TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0<br /><br />
eth1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500<br /> inet6 fe80::222:cdff:fe52:11cf  prefixlen 64  scopeid 0x20&lt;link&gt;<br /> ether 00:22:cd:52:11:cf  txqueuelen 1000  (Ethernet)<br /> RX packets 3218015  bytes 439876673 (419.4 MiB)<br /> RX errors 0  dropped 0  overruns 0  frame 0<br /> TX packets 4672723  bytes 20370569657 (18.9 GiB)<br /> TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0<br /><br />
<span style="color: #b45f06;">eth2: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500<br /> inet 192.168.1.14  netmask 255.255.255.0  broadcast 192.168.1.255</span><br /> inet6 fe80::5054:ff:fe90:4024  prefixlen 64  scopeid 0x20&lt;link&gt;<br /> ether 52:54:00:90:40:24  txqueuelen 1000  (Ethernet)<br /> RX packets 1696493  bytes 2312670704 (2.1 GiB)<br /> RX errors 0  dropped 0  overruns 0  frame 0<br /> TX packets 927189  bytes 266047551 (253.7 MiB)<br /> TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0<br /><br />
lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536<br /> inet 127.0.0.1  netmask 255.0.0.0<br /> inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;<br /> loop  txqueuelen 0  (Local Loopback)<br /> RX packets 2468459  bytes 13170730356 (12.2 GiB)<br /> RX errors 0  dropped 0  overruns 0  frame 0<br /> TX packets 2468459  bytes 13170730356 (12.2 GiB)<br /> TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0<br /><br />
<span style="color: #b45f06;">vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500<br /> inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255</span><br /> inet6 fe80::48c5:c0ff:feff:3d00  prefixlen 64  scopeid 0x20&lt;link&gt;<br /> ether 4a:c5:c0:ff:3d:00  txqueuelen 0  (Ethernet)<br /> RX packets 14440  bytes 196996148 (187.8 MiB)<br /> RX errors 0  dropped 0  overruns 0  frame 0<br /> TX packets 9337  bytes 947132 (924.9 KiB)<br /> TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0<br />
<br />
<br />
Thus eth2 on the instack VM obtained an IP address on the office network (192.168.1.0/24), say 192.168.1.14. Switch back to the remote F24 WKS and issue as user "john":<br />
<br />
[john@fedora24wks]$ export VIRTHOST=192.168.1.14 <br />
[john@fedora24wks]$ ssh-copy-id root@$VIRTHOST <br />
[john@fedora24wks]$ ssh root@$VIRTHOST uname -a <== no prompt<br />
<br />
Log in to the instack VM from the WKS as root, then `su - stack` and `source stackrc`.<br />
<br />
Switch back to the instructions from <a href="http://lxer.com/module/newswire/view/234346/index.html" target="_blank">http://lxer.com/module/newswire/view/234346</a><br />
and proceed with building the TripleO Master Branch undercloud/overcloud.<br />
When done, open another terminal session on the WKS and issue in that session:<br />
<br />
[jon@fedora24wks ~]$ export VIRTHOST=192.168.1.14<br />
[jon@fedora24wks ~]$ sshuttle -r root@$VIRTHOST -v 10.0.0.0/24<br />
<br />
Because the instack VM already trusts the F24 WKS via ssh, you won't be prompted when<br />
connecting to VIRTHOST, and the following output will appear in the terminal session: <br />
<br />
Starting sshuttle proxy.<br />
firewall manager: Starting firewall with Python version 3.5.1<br />
firewall manager: ready method name nat.<br />
IPv6 enabled: False<br />
UDP enabled: False<br />
DNS enabled: False<br />
TCP redirector listening on ('127.0.0.1', 12300).<br />
Starting client with Python version 3.5.1<br />
c : connecting to server...<br />
Starting server with Python version 2.7.5<br />
s: latency control setting = True<br />
s: available routes:<br />
s: 2/10.0.0.0/24<br />
s: 2/192.0.2.0/24<br />
s: 2/192.168.1.0/24<br />
s: 2/192.168.122.0/24<br />
c : Connected.<br />
firewall manager: setting up.<br />
>> iptables -t nat -N sshuttle-12300<br />
>> iptables -t nat -F sshuttle-12300<br />
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12300<br />
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12300<br />
>> iptables -t nat -A sshuttle-12300 -j RETURN --dest 127.0.0.1/32 -p tcp<br />
>> iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 10.0.0.0/24 -p tcp --to-ports 12300 -m ttl ! --ttl 42<br />
c : Accept TCP: 192.168.1.4:36580 -> 10.0.0.8:80.<br />
c : Accept TCP: 192.168.1.4:36582 -> 10.0.0.8:80.<br />
c : Accept TCP: 192.168.1.4:36584 -> 10.0.0.8:80.<br />
c : Accept TCP: 192.168.1.4:36586 -> 10.0.0.8:80.<br />
c : Accept TCP: 192.168.1.4:36588 -> 10.0.0.8:80.<br />
c : Accept TCP: 192.168.1.4:36590 -> 10.0.0.8:80.<br />
<br />
Open an SSH window to the instack VM:<br />
# su - stack<br />
# source stackrc<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHf5w5HF425fYJqH7Gqp2l8iDZDrAcq54Tw9k9oBr28RNpsm2st3_kIozvmFtvX0QrT9l_rsYZ7KSoTikQUYAz-sjlUGEgerPFPSG1qddhh3usAyfwy-sTuNdimxY8PV5FSWGdOg/s1600/Screenshot+from+2016-10-03+17-04-37.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhHf5w5HF425fYJqH7Gqp2l8iDZDrAcq54Tw9k9oBr28RNpsm2st3_kIozvmFtvX0QrT9l_rsYZ7KSoTikQUYAz-sjlUGEgerPFPSG1qddhh3usAyfwy-sTuNdimxY8PV5FSWGdOg/s640/Screenshot+from+2016-10-03+17-04-37.png" width="640" /></a></div>
<br />
The report above instructs you to point a browser (on the F24 WKS) at<br />
http://10.0.0.8/dashboard/ . The login password for "admin" is in the "overcloudrc" file<br />
generated under the ~stack/ folder on the instack VM.<br />
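Rather than opening the file in an editor, the password can be extracted directly. A sketch, assuming overcloudrc uses the usual `export OS_PASSWORD=...` line; the helper name and sample path here are illustrative, not part of the original workflow:

```shell
# Hypothetical helper: print the OS_PASSWORD value from an rc file.
get_admin_password() {
    sed -n 's/^export OS_PASSWORD=//p' "$1"
}

# Demo against a sample line of the form overcloudrc contains:
printf 'export OS_PASSWORD=s3cret\n' > /tmp/overcloudrc.sample
get_admin_password /tmp/overcloudrc.sample    # prints s3cret
```

On the instack VM the real call would be against ~stack/overcloudrc.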
<br />
Log in to the overcloud controller; upon restarting the keepalived daemon you are going to get:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju_4SX1trbOjxhIQCG_AFf4XFDKzlAv3DktvZ_hM4lDZQYpZDYH7qNS2RSUVj2RErkUo09qlve8wj0G91bH60Wx9Mvf7xY1h6oDWc-Li-GJ15J8edzt-Mx9uAQsDAPaYDNgQtlTw/s1600/Screenshot+from+2016-10-03+17-10-43.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEju_4SX1trbOjxhIQCG_AFf4XFDKzlAv3DktvZ_hM4lDZQYpZDYH7qNS2RSUVj2RErkUo09qlve8wj0G91bH60Wx9Mvf7xY1h6oDWc-Li-GJ15J8edzt-Mx9uAQsDAPaYDNgQtlTw/s640/Screenshot+from+2016-10-03+17-10-43.png" width="640" /></a></div>
<br />
Create a VM via the nova/neutron CLI (sourcing overcloudrc on the Controller) and make<br />
sure that the remote sshuttle connection to the instack VM via http://10.0.0.8/dashboard lets you manage VMs running on the overcloud compute<br />
nodes. Also, ctlplane (192.0.2.0/24), defined as the public network, can serve<br />
for outbound Internet connectivity for those VMs.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTbfp8vaSl5tfP6SYe0Jd48LcetGqdfi6hO3yADpo5eSiywu7rEPaEwHa4FA-D3Cdlyvc3EeiUG7rcTlqdSVmChaFxTVzLgfvqCcWXfsZstwyjjcq_Gg98jyirtNrARUxNAPQdzQ/s1600/Screenshot+from+2016-10-03+17-26-10.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhTbfp8vaSl5tfP6SYe0Jd48LcetGqdfi6hO3yADpo5eSiywu7rEPaEwHa4FA-D3Cdlyvc3EeiUG7rcTlqdSVmChaFxTVzLgfvqCcWXfsZstwyjjcq_Gg98jyirtNrARUxNAPQdzQ/s640/Screenshot+from+2016-10-03+17-26-10.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL6WftjhGrtyON78bDCkh3RRzckFV1BKC2HBHR6rHvZu5j_RKi8pmMMjr76pe0UlhcEVoE-iPLt0CVmhJK1ddE991ZILrt6lYN-NYGcLf7kFiiRDgnIoV72ez8zjmyfPvAxx_2gw/s1600/Screenshot+from+2016-10-03+17-17-23.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgL6WftjhGrtyON78bDCkh3RRzckFV1BKC2HBHR6rHvZu5j_RKi8pmMMjr76pe0UlhcEVoE-iPLt0CVmhJK1ddE991ZILrt6lYN-NYGcLf7kFiiRDgnIoV72ez8zjmyfPvAxx_2gw/s640/Screenshot+from+2016-10-03+17-17-23.png" width="640" /></a></div>
<br />
<br />
<br />
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-11690804179975904002016-09-24T06:21:00.000-07:002016-09-24T08:17:01.925-07:00TripleO Master Branch Overcloud deployment via QuickStart<div dir="ltr" style="text-align: left;" trbidi="on">
Following below is the set of instructions required to perform a TripleO QuickStart<br />
deployment for release "master". It differs a bit from testing the default release<br />
"mitaka". First, on the F24(23) workstation:<br />
<br />
Git clone the repo below:<br />
[jon@fedora24wks release]$ git clone https://github.com/openstack/tripleo-quickstart<br />
[jon@fedora24wks release]$ cd tripleo* ; cd ./config/release<br />
<br />
**********************************************<br />
Now verify that master.yml is here.<br />
**********************************************<br />
[jon@fedora24wks release]$ cat master.yml<br />
release: master<br />
undercloud_image_url: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/master/delorean/undercloud.qcow2<br />
overcloud_image_url: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/master/delorean/overcloud-full.tar<br />
ipa_image_url: http://buildlogs.centos.org/centos/7/cloud/x86_64/tripleo_images/master/delorean/ironic-python-agent.tar<br />
<br />
Launch a browser to the location mentioned above.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizoqD5S6E2R5ZQijPtChltKyeIale4-alEBx61SHzGda2poZk51zzI2nQTFamINUK1_CrXg5vSmNKFwaFq43Xp88wQmj7_uDcRec78N5NXq_WEOG9wMXLN_ZQt7CFxpbyLQ7ibIg/s1600/Screenshot+from+2016-09-24+17-22-23.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizoqD5S6E2R5ZQijPtChltKyeIale4-alEBx61SHzGda2poZk51zzI2nQTFamINUK1_CrXg5vSmNKFwaFq43Xp88wQmj7_uDcRec78N5NXq_WEOG9wMXLN_ZQt7CFxpbyLQ7ibIg/s640/Screenshot+from+2016-09-24+17-22-23.png" width="640" /></a></div>
<br />
Now proceed as follows :-<br />
<br />
<pre># put your own IP here</pre>
<pre>[jon@fedora24wks tripleo-quickstart]$ export VIRTHOST=192.168.1.74
[jon@fedora24wks tripleo-quickstart]$ ssh-copy-id root@$VIRTHOST
[jon@fedora24wks tripleo-quickstart]$ ssh root@$VIRTHOST uname -a
[jon@fedora24wks tripleo-quickstart]$ bash quickstart.sh \
-R master $VIRTHOST
[jon@fedora24wks tripleo-quickstart]$ ssh -F \
/home/jon/.quickstart/ssh.config.ansible \
undercloud</pre>
<pre></pre>
******************************************************************************** <br />
Now you are logged into the undercloud VM running on VIRTHOST as stack.<br />
Building overcloud images is skipped because QuickStart CI has already built them. There is no harm in attempting to build them; it will take a second, since they are already there. <br />
******************************************************************************** <br />
<pre># Upload pre-built overcloud images </pre>
<pre>[stack@undercloud ~]$ source stackrc
[stack@undercloud ~]$ openstack overcloud image upload
[stack@undercloud ~]$ openstack baremetal import instackenv.json
[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ openstack baremetal introspection bulk start
[stack@undercloud ~]$ ironic node-list</pre>
<pre>[stack@undercloud ~]$ neutron subnet-list </pre>
<pre>[stack@undercloud ~]$ neutron subnet-update 1b7d82e5-0bf1-4ba5-8008-4aa402598065 \
--dns-nameserver 8.8.8.8</pre>
************************************* <br />
Create external interface vlan10<br />
*************************************<br />
<br />
[stack@undercloud ~]$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10<br />
DEVICE=vlan10<br />
ONBOOT=yes<br />
DEVICETYPE=ovs<br />
TYPE=OVSIntPort<br />
BOOTPROTO=static<br />
IPADDR=10.0.0.1<br />
NETMASK=255.255.255.0<br />
OVS_BRIDGE=br-ctlplane<br />
OVS_OPTIONS="tag=10"<br />
<br />
[stack@undercloud ~]$ sudo ifup vlan10<br />
<br />
[stack@undercloud ~]$ sudo ovs-vsctl show<br />
7011423b-55c8-4943-9d7d-94f052a4b6f1<br />
Manager "ptcp:6640:127.0.0.1"<br />
is_connected: true<br />
Bridge br-int<br />
Controller "tcp:127.0.0.1:6633"<br />
is_connected: true<br />
fail_mode: secure<br />
Port "tapb54de3dd-24"<br />
tag: 1<br />
Interface "tapb54de3dd-24"<br />
type: internal<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Bridge br-ctlplane<br />
Controller "tcp:127.0.0.1:6633"<br />
is_connected: true<br />
fail_mode: secure<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Port "vlan10"<br />
tag: 10<br />
Interface "vlan10"<br />
type: internal<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
Port "eth1"<br />
Interface "eth1"<br />
ovs_version: "2.5.0"<br />
<br />
********************************************* <br />
Create manually network_env.yaml<br />
********************************************* <br />
[stack@undercloud ~]$ vi network_env.yaml<br />
<span style="color: #b45f06;"> {</span><br />
<span style="color: #b45f06;"> "parameter_defaults": {</span><br />
<span style="color: #b45f06;"> "ControlPlaneDefaultRoute": "192.0.2.1",</span><br />
<span style="color: #b45f06;"> "ControlPlaneSubnetCidr": "24",</span><br />
<span style="color: #b45f06;"> "DnsServers": [</span><br />
<span style="color: #b45f06;"> "192.168.122.1"</span><br />
<span style="color: #b45f06;"> ],</span><br />
<span style="color: #b45f06;"> "EC2MetadataIp": "192.0.2.1",</span><br />
<span style="color: #b45f06;"> "ExternalAllocationPools": [</span><br />
<span style="color: #b45f06;"> {</span><br />
<span style="color: #b45f06;"> "end": "10.0.0.250",</span><br />
<span style="color: #b45f06;"> "start": "10.0.0.4"</span><br />
<span style="color: #b45f06;"> }</span><br />
<span style="color: #b45f06;"> ],</span><br />
<span style="color: #b45f06;"> "ExternalNetCidr": "10.0.0.1/24",</span><br />
<span style="color: #b45f06;"> "NeutronExternalNetworkBridge": ""</span><br />
<span style="color: #b45f06;"> }</span><br />
<span style="color: #b45f06;"> }</span><br />
<br />
$ sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat<br />
<br />
<pre>[stack@undercloud ~]$ sudo touch -f \
/usr/share/openstack-tripleo-heat-templates/puppet/post.yaml</pre>
<pre>[stack@undercloud ~]$ cat overcloud-deploy.sh
#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /usr/share/openstack-tripleo-heat-templates \
-e /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml \
--control-scale 1 --compute-scale 1 </pre>
<pre>[stack@undercloud ~]$ ./overcloud-deploy.sh</pre>
<pre>[stack@undercloud ~]$ sudo route add -net 192.0.2.0/24 gw 192.0.2.1</pre>
<pre>[stack@undercloud ~]$ sudo ip route
default via 192.168.23.1 dev eth0
10.0.0.0/24 dev vlan10 proto kernel scope link src 10.0.0.1
192.0.2.0/24 via 192.0.2.1 dev br-ctlplane scope link
192.0.2.0/24 dev br-ctlplane proto kernel scope link src 192.0.2.1
192.168.23.0/24 dev eth0 proto kernel scope link src 192.168.23.28
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1
</pre>
<pre>[stack@undercloud ~]$ source stackrc
</pre>
<pre>[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 607034d9-22c5-4f8f-a192-487b732f80aa | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.16 |
| 15ddf8be-e9c0-4ea8-9a2f-52eb98663669 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.11 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
[stack@undercloud ~]$ openstack stack list
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
+------------------------+------------+-----------------+----------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+------------------------+------------+-----------------+----------------------+--------------+
| a2193f65-bac7-458f- | overcloud | CREATE_COMPLETE | 2016-09-24T10:49:16Z | None |
| 8eef-8daf909fa41f | | | | |
+------------------------+------------+-----------------+----------------------+--------------+ </pre>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB6fELl6AjjUEi0E_jN59rpxw9Vub_UNS8memCdJlWhjhqrIoENFbbxsvFI2M5wKYEvGQNchvA2XEPrbrP8RwP7irmmPPwg_uf_HAS1jRbj9cgHYopJHHBA9xZ05f11MYR7PKpwQ/s1600/Screenshot+from+2016-09-24+17-30-58.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB6fELl6AjjUEi0E_jN59rpxw9Vub_UNS8memCdJlWhjhqrIoENFbbxsvFI2M5wKYEvGQNchvA2XEPrbrP8RwP7irmmPPwg_uf_HAS1jRbj9cgHYopJHHBA9xZ05f11MYR7PKpwQ/s640/Screenshot+from+2016-09-24+17-30-58.png" width="640" /></a></div>
<br />
Now connect to the overcloud via sshuttle running on the F24 WKS, started in a separate<br />
terminal on the WKS:<br />
$ sshuttle -e "ssh -F $HOME/.quickstart/ssh.config.ansible" -r undercloud -v 10.0.0.0/24 <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt2ByoxWnrXgI3fcPHkntcLYxxvIoAY20kmvzKG0JVimY5rwzAVJiYaHOmBh0bV-W4tNMtAApjmRBTZhEY9WvbEl-dR8ul24mIJOGERevwUGF7bwx7JL3-8ushlXURvOLMcM7RZg/s1600/Screenshot+from+2016-09-24+16-37-45.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt2ByoxWnrXgI3fcPHkntcLYxxvIoAY20kmvzKG0JVimY5rwzAVJiYaHOmBh0bV-W4tNMtAApjmRBTZhEY9WvbEl-dR8ul24mIJOGERevwUGF7bwx7JL3-8ushlXURvOLMcM7RZg/s640/Screenshot+from+2016-09-24+16-37-45.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzPKa4k7kPoDyqcXzBuO2WT9V8RINZIg45U1Vgg1cxp0ELyhcW8V3ptMD9lIn1dm1YSpsFoozSYytF5eAZia02ySff5dRukA664xUQLPda0wwSClviW87yjr-84rkJ8Rc0jXV7Rw/s1600/Screenshot+from+2016-09-24+16-39-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzPKa4k7kPoDyqcXzBuO2WT9V8RINZIg45U1Vgg1cxp0ELyhcW8V3ptMD9lIn1dm1YSpsFoozSYytF5eAZia02ySff5dRukA664xUQLPda0wwSClviW87yjr-84rkJ8Rc0jXV7Rw/s640/Screenshot+from+2016-09-24+16-39-07.png" width="640" /></a></div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-48383082694793883522016-09-18T23:39:00.000-07:002016-09-20T06:45:11.872-07:00Switch to Overcloud with Network isolation been setup via TripleO Master branch<div dir="ltr" style="text-align: left;" trbidi="on">
This post follows up <a href="http://lxer.com/module/newswire/view/233968/index.html" target="_blank">TripleO deployment of 'master' branch via instack-virt-setup</a><br />
Launchpad bug "Updating plans breaks deployment" <a href="https://bugs.launchpad.net/tripleo/+bug/1622683" target="_blank">https://bugs.launchpad.net/tripleo/+bug/1622683</a> still has status "In Progress", so to be able to redeploy the overcloud, the workaround from <a href="https://bugs.launchpad.net/tripleo/+bug/1622720/comments/1" target="_blank">https://bugs.launchpad.net/tripleo/+bug/1622720/comments/1</a> is applied.<br />
If the overcloud deployment starts reporting "Uploading new plan files" and then crashes for some reason, you still have to issue the commands below to be able to restart the deployment, until the tripleo-common package is fixed (track the bug mentioned above).<br />
<br />
**************************<br />
Redeployment<br />
**************************<br />
<br />
[stack@instack ~]$ openstack stack delete overcloud<br />
[stack@instack ~]$ . stackrc<br />
[stack@instack ~]$ mistral environment-delete overcloud<br />
Request to delete environment overcloud has been accepted.<br />
[stack@instack ~]$ swift delete --all<br />
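The redeployment commands above can be wrapped into one reset helper. This is a sketch, not part of the original workflow: with DRY_RUN=1 (the default here) it only prints what it would run; unset DRY_RUN on the real undercloud to execute the CLI calls.

```shell
# Reset a failed overcloud deployment (per the LP#1622720 workaround).
# DRY_RUN=1 prints the commands instead of executing them.
DRY_RUN=${DRY_RUN-1}

run() {
    if [ -n "$DRY_RUN" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

run openstack stack delete overcloud
run mistral environment-delete overcloud
run swift delete --all
```

Remember to `source stackrc` first, as the author does above.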
<br />
Add a NAT (default) VNIC to each of the bare metal nodes (VMs)<br />
to enable Internet connectivity from the Controller after<br />
overcloud deployment. Do this at the moment when the "overcloud" stack has been<br />
gracefully deleted and the status of the bare metal nodes (VMs) is down.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjehpSmQKI4AeMWFkhPpG58yJDriZo7K7Q_TR7Q0NSAsFuUv6oqpMG2cUgXu6M5NfiF5_L7J5e7DR29C3IC-qKjXlFCm1WW-YIO8vCAQZROvV904SLEGIw0yRXSTgHWBjxDfyzvfQ/s1600/Screenshot+from+2016-09-19+16-39-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjehpSmQKI4AeMWFkhPpG58yJDriZo7K7Q_TR7Q0NSAsFuUv6oqpMG2cUgXu6M5NfiF5_L7J5e7DR29C3IC-qKjXlFCm1WW-YIO8vCAQZROvV904SLEGIw0yRXSTgHWBjxDfyzvfQ/s640/Screenshot+from+2016-09-19+16-39-07.png" width="640" /></a></div>
<br />
****************************************<br />
Make following updates on instack<br />
****************************************<br />
<br />
$ sudo ovs-vsctl show<br />
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10<br />
DEVICE=vlan10<br />
ONBOOT=yes<br />
DEVICETYPE=ovs<br />
TYPE=OVSIntPort<br />
BOOTPROTO=static<br />
IPADDR=10.0.0.1<br />
NETMASK=255.255.255.0<br />
OVS_BRIDGE=br-ctlplane<br />
OVS_OPTIONS="tag=10"<br />
<br />
$ sudo ifup vlan10<br />
<br />
**********************************************************************<br />
Make sure ovs-vsctl on undercloud has been updated<br />
**********************************************************************<br />
[stack@instack ~]$ sudo ovs-vsctl show<br />
3dfb403a-c31d-4bb3-9851-08f2e7b7778f<br />
Manager "ptcp:6640:127.0.0.1"<br />
is_connected: true<br />
Bridge br-int<br />
Controller "tcp:127.0.0.1:6633"<br />
is_connected: true<br />
fail_mode: secure<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port "tapb104ab9a-36"<br />
tag: 1<br />
Interface "tapb104ab9a-36"<br />
type: internal<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Bridge br-ctlplane<br />
Controller "tcp:127.0.0.1:6633"<br />
is_connected: true<br />
fail_mode: secure<br />
Port "eth1"<br />
Interface "eth1"<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
<span style="color: #b45f06;"> Port "vlan10"<br /> tag: 10<br /> Interface "vlan10"</span><br />
type: internal<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
ovs_version: "2.5.0"<br />
***************************************************<br />
Create network_env.yaml under ~stack/<br />
***************************************************<br />
<br />
[stack@instack ~]$ cat network_env.yaml<br />
{<br />
"parameter_defaults": {<br />
"ControlPlaneDefaultRoute": "192.0.2.1",<br />
"ControlPlaneSubnetCidr": "24",<br />
"DnsServers": [<br />
"192.168.122.5"<br />
],<br />
"EC2MetadataIp": "192.0.2.1",<br />
"ExternalAllocationPools": [<br />
{<br />
"end": "10.0.0.250",<br />
"start": "10.0.0.4"<br />
}<br />
],<br />
"ExternalNetCidr": "10.0.0.1/24",<br />
"NeutronExternalNetworkBridge": ""<br />
}<br />
}<br />
<br />
Here 192.168.122.5 is the instack VM's IP.<br />
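Since the environment file above is written as JSON (which heat accepts alongside YAML), a quick syntax check catches stray commas before the deploy run. A sketch that recreates the file at a temporary path; python3 is assumed to be available:

```shell
# Recreate the environment file shown above and check it parses as JSON
# before handing it to "openstack overcloud deploy".
cat > /tmp/network_env.yaml <<'EOF'
{
  "parameter_defaults": {
    "ControlPlaneDefaultRoute": "192.0.2.1",
    "ControlPlaneSubnetCidr": "24",
    "DnsServers": ["192.168.122.5"],
    "EC2MetadataIp": "192.0.2.1",
    "ExternalAllocationPools": [
      {"end": "10.0.0.250", "start": "10.0.0.4"}
    ],
    "ExternalNetCidr": "10.0.0.1/24",
    "NeutronExternalNetworkBridge": ""
  }
}
EOF

python3 -m json.tool < /tmp/network_env.yaml > /dev/null \
    && echo "network_env.yaml: valid JSON"
```

On the undercloud the file lives at ~stack/network_env.yaml, as in the deploy script below.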
<br />
*************************<br />
Deploy overcloud<br />
*************************<br />
<br />
<span style="color: #b45f06;">#!/bin/bash -x</span><br />
<span style="color: #b45f06;">source /home/stack/stackrc</span><br />
<span style="color: #b45f06;">openstack overcloud deploy \</span><br />
<span style="color: #b45f06;">--libvirt-type qemu \</span><br />
<span style="color: #b45f06;">--ntp-server pool.ntp.org \</span><br />
<span style="color: #b45f06;">--templates /home/stack/tripleo-heat-templates \</span><br />
<span style="color: #b45f06;">-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \</span><br />
<span style="color: #b45f06;">-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \</span><br />
<span style="color: #b45f06;">-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \</span><br />
<span style="color: #b45f06;">-e $HOME/network_env.yaml \</span><br />
<span style="color: #b45f06;">--control-scale 1 --compute-scale 2</span><br />
<br />
********************************************************************************<br />
Upon completion proceed on the undercloud (instack VM) as follows:<br />
********************************************************************************<br />
<br />
<br />
Add route to ctlplane network<br />
<br />
[stack@instack ~]$ sudo route add -net 192.0.2.0/24 gw 192.0.2.1<br />
<br />
[stack@instack ~]$ sudo ip route<br />
default via 192.168.122.1 dev eth0 <br />
10.0.0.0/24 dev vlan10 proto kernel scope link src 10.0.0.1 <br />
192.0.2.0/24 via 192.0.2.1 dev br-ctlplane scope link <br />
192.0.2.0/24 dev br-ctlplane proto kernel scope link src 192.0.2.1 <br />
192.168.122.0/24 dev eth0 proto kernel scope link src 192.168.122.5 <br />
<br />
<br />
[stack@instack ~]$ . stackrc<br />
[stack@instack ~]$ nova list<br />
<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 0212a5cc-c73e-43c3-bddb-51cac22f0060 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| a421c80b-54a5-4cc8-9414-45d45a27845b | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.18 |
| 3641a8da-c5fa-4975-9e43-c926522ecc2b | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.13 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+</pre>
<br />
[stack@instack ~]$ neutron net-list<br />
<pre>+--------------------------------------+--------------+----------------------------------------+
| id | name | subnets |
+--------------------------------------+--------------+----------------------------------------+
| 5309b1a3-f6c6-4bdd-a0bc-93f418853080 | external | 56fe052f-ba26-437b-94ab-b03688e06ad9 |
| | | 10.0.0.0/24 |
| 77440f54-0ce4-444c-8983-2ef2ae1408b4 | ctlplane | 76055a99-45e4-4b5a-b1fc-846c91137427 |
| | | 192.0.2.0/24 |
| 7b3e788a-ebdd-4e7c-b076-517ca62befb3 | tenant | 0a028e34-8a0a-48ce-88d8-5523b19eac0f |
| | | 172.16.0.0/24 |
| 813d17c3-bd58-490f-94a4-aefeb2057d22 | storage_mgmt | e3cdcf74-64fa-4837-b480-304a1329d109 |
| | | 172.16.3.0/24 |
| bcba764c-0b27-4785-b875-8b20bd28cd96 | internal_api | 1de0ff85-7525-4e1f-94ea-1bc6e060a096 |
| | | 172.16.2.0/24 |
| d4c8e9d8-bffc-4803-8ee4-bbff63eef9e1 | storage | f76d3eeb-c7d8-47e9-a2e3-95765975c292 |
| | | 172.16.1.0/24 |
+--------------------------------------+--------------+----------------------------------------+</pre>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3fIP8sI7-xnkMRiRxgVxWeL84H0XPJ71Ap-U03y486Gb1M9eoYmVPYsv_pTihu8SN_cAEUJanKaVfMst6-X6WEvCY3fWPWLmfge2EmoiEAf7uDb2q9m_4jQ1l_oY6uNsZR6Ycqg/s1600/Screenshot+from+2016-09-19+09-05-20.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj3fIP8sI7-xnkMRiRxgVxWeL84H0XPJ71Ap-U03y486Gb1M9eoYmVPYsv_pTihu8SN_cAEUJanKaVfMst6-X6WEvCY3fWPWLmfge2EmoiEAf7uDb2q9m_4jQ1l_oY6uNsZR6Ycqg/s640/Screenshot+from+2016-09-19+09-05-20.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD9kq04fOoYyiSQVSpVHtO1_b7DCEwHAu_VQCpeNlmByN6PmWDyminFXP2AXF9F-UTbd-bb0BRn_2BYNuL_2mPtK73WAAnIkrUh2jux2nCIAip9qR6VJhmsyFzCSiT_n2_3oFGsw/s1600/Screenshot+from+2016-09-19+12-34-16.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhD9kq04fOoYyiSQVSpVHtO1_b7DCEwHAu_VQCpeNlmByN6PmWDyminFXP2AXF9F-UTbd-bb0BRn_2BYNuL_2mPtK73WAAnIkrUh2jux2nCIAip9qR6VJhmsyFzCSiT_n2_3oFGsw/s640/Screenshot+from+2016-09-19+12-34-16.png" width="640" /></a></div>
<br />
[root@overcloud-controller-0 ~]# nova service-list<br />
<pre>+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+
| 3 | nova-consoleauth | overcloud-controller-0.localdomain | internal | enabled | up | 2016-09-19T10:02:37.000000 | - |
| 4 | nova-scheduler | overcloud-controller-0.localdomain | internal | enabled | up | 2016-09-19T10:02:31.000000 | - |
| 5 | nova-conductor | overcloud-controller-0.localdomain | internal | enabled | up | 2016-09-19T10:02:30.000000 | - |
| 6 | nova-compute | overcloud-novacompute-1.localdomain | nova | enabled | up | 2016-09-19T10:02:29.000000 | - |
| 7 | nova-compute | overcloud-novacompute-0.localdomain | nova | enabled | up | 2016-09-19T10:02:35.000000 | - |
+----+------------------+-------------------------------------+----------+---------+-------+----------------------------+-----------------+</pre>
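The same health check can be scripted; below is a minimal sketch (the `check_services` helper name is mine, not part of Nova) that scans `nova service-list` output for any service whose State column is not "up":

```shell
# Hypothetical helper (not part of Nova): scan `nova service-list`
# output for services whose State column is not "up".
check_services() {
    awk -F'|' '
        # table body rows have a numeric Id in the 2nd |-separated field
        NF > 7 && $2 ~ /^ *[0-9]+ *$/ {
            gsub(/ /, "", $7)                  # 7th field is State
            if ($7 != "up") { bad = 1; print "DOWN:" $3 }
        }
        END { exit bad }'
}

# Usage: nova service-list | check_services || echo "some service is down"
```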
<br />
FoxyProxy tuned for external network<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQr0s3mSqH-p39yrEv5rZYGamn6IhfgGnjhuNMnIC2UHwC6aFNrI1GsYaFePU-jxh1yIG29VjYn4XJu5syt1LVtEhZigyfWOn5uSl3URZohBoVicyQi6ZAbMosuJD1Mz1vY0zAbQ/s1600/Screenshot+from+2016-09-19+11-12-06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhQr0s3mSqH-p39yrEv5rZYGamn6IhfgGnjhuNMnIC2UHwC6aFNrI1GsYaFePU-jxh1yIG29VjYn4XJu5syt1LVtEhZigyfWOn5uSl3URZohBoVicyQi6ZAbMosuJD1Mz1vY0zAbQ/s640/Screenshot+from+2016-09-19+11-12-06.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmIykBYNSYEqcouAfpCTi2Quhwe-vX0ANhpc5Pu2zd0-XJHd5kD7tC_Vj6XF5V1vPEansfuuYG3q945hQn0vGIQpqvFtqGkyYoZDlPiX0cDHx5unFFHuWdXIgDBkQdo_lnrRTxlQ/s1600/Screenshot+from+2016-09-19+11-23-25.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmIykBYNSYEqcouAfpCTi2Quhwe-vX0ANhpc5Pu2zd0-XJHd5kD7tC_Vj6XF5V1vPEansfuuYG3q945hQn0vGIQpqvFtqGkyYoZDlPiX0cDHx5unFFHuWdXIgDBkQdo_lnrRTxlQ/s640/Screenshot+from+2016-09-19+11-23-25.png" width="640" /></a></div>
<br />
List of instances launched and running via Nova CLI<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmtMaokOXZzSRotTOEUWi7p_6BOi5BE5KmodUSMvCWvZPAlWnJO6CEp4NhuxXFKFfDk3E1fChGULjs4I8I5lnl7cKZ8F8BXm-n_t5_cfM1Ixt8m3bz6EpV9XM4cH2lXXuegN23kA/s1600/Screenshot+from+2016-09-19+10-12-49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmtMaokOXZzSRotTOEUWi7p_6BOi5BE5KmodUSMvCWvZPAlWnJO6CEp4NhuxXFKFfDk3E1fChGULjs4I8I5lnl7cKZ8F8BXm-n_t5_cfM1Ixt8m3bz6EpV9XM4cH2lXXuegN23kA/s640/Screenshot+from+2016-09-19+10-12-49.png" width="640" /> </a> </div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLYj0u4on633ExTKBfty5Mve7QmAMRr-fQahO1wiJsQ5lv62gC409wJAcE17A1YD4BrHZOLpe7nILjTS3kNZb20aBM7pmnss6rV5RXwU7S1AlFjHltSMjBx9qSOzc3E6AauZLq9g/s1600/Screenshot+from+2016-09-19+10-11-39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLYj0u4on633ExTKBfty5Mve7QmAMRr-fQahO1wiJsQ5lv62gC409wJAcE17A1YD4BrHZOLpe7nILjTS3kNZb20aBM7pmnss6rV5RXwU7S1AlFjHltSMjBx9qSOzc3E6AauZLq9g/s640/Screenshot+from+2016-09-19+10-11-39.png" width="640" /></a></div>
*****************************************************<br />
Controller's ovs-vsctl show report<br />
*****************************************************<br />
[root@overcloud-controller-0 ~]# ovs-vsctl show<br />
<pre>d818c01e-d0ce-425d-a9c8-07e0ff541ea9
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap19ce4553-8f"
            tag: 2
            Interface "tap19ce4553-8f"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-4a00fb57-90"
            tag: 2
            Interface "qr-4a00fb57-90"
                type: internal
        Port "qg-5b1fb5eb-d5"
            tag: 4
            Interface "qg-5b1fb5eb-d5"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "vlan40"
            tag: 40
            Interface "vlan40"
                type: internal
        Port "eth0"
            Interface "eth0"
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
        Port "vlan50"
            tag: 50
            Interface "vlan50"
                type: internal
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-ac100009"
            Interface "vxlan-ac100009"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.12", out_key=flow, remote_ip="172.16.0.9"}
        Port "vxlan-ac10000d"
            Interface "vxlan-ac10000d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.12", out_key=flow, remote_ip="172.16.0.13"}
    ovs_version: "2.5.0"</pre>
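A side note on the tunnel port names above: Neutron's OVS agent derives them from the peer's IPv4 address written in hex, which is why the port for remote_ip 172.16.0.9 is named "vxlan-ac100009". A small bash sketch (the `vxlan_peer` helper name is mine) to decode such a name back:

```shell
# Decode a "vxlan-<hex>" port name back into the remote_ip it encodes:
# each of the four hex byte pairs is one octet of the peer address.
vxlan_peer() {
    local hex=${1#vxlan-}
    printf '%d.%d.%d.%d\n' \
        "0x${hex:0:2}" "0x${hex:2:2}" "0x${hex:4:2}" "0x${hex:6:2}"
}

vxlan_peer vxlan-ac100009   # 172.16.0.9
vxlan_peer vxlan-ac10000d   # 172.16.0.13
```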
<br />
********************************************************************** <br />
Hypervisor status on Compute nodes (Newton RC1)<br />
qemu-kvm-ev 2.3.0 gets installed by default<br />
**********************************************************************<br />
[root@overcloud-novacompute-0 ~]# virsh --connect qemu:///system<br />
Welcome to virsh, the virtualization interactive terminal.<br />
<br />
Type: 'help' for help with commands<br />
'quit' to quit<br />
<br />
<span style="color: #b45f06;">virsh # version</span><br />
<span style="color: #b45f06;">Compiled against library: libvirt 1.2.17</span><br />
<span style="color: #b45f06;">Using library: libvirt 1.2.17</span><br />
<span style="color: #b45f06;">Using API: QEMU 1.2.17</span><br />
<span style="color: #b45f06;">Running hypervisor: QEMU 2.3.0</span><br />
<br />
virsh # list --all<br />
<pre> Id    Name                           State
----------------------------------------------------
 6     instance-00000004              running</pre>
<br />
<br />
[root@overcloud-novacompute-1 ~]# virsh --connect qemu:///system<br />
Welcome to virsh, the virtualization interactive terminal.<br />
<br />
Type: 'help' for help with commands<br />
'quit' to quit<br />
<br />
virsh # version<br />
Compiled against library: libvirt 1.2.17<br />
Using library: libvirt 1.2.17<br />
Using API: QEMU 1.2.17<br />
Running hypervisor: QEMU 2.3.0<br />
<br />
virsh # list --all<br />
<pre> Id    Name                           State
----------------------------------------------------
 5     instance-00000005              running</pre>
<br />
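Checking each compute node one by one gets tedious; the loop below is a sketch (heat-admin is the default user TripleO injects into overcloud images; the `list_domains` wrapper name is mine) that runs the same virsh query over ssh:

```shell
# Sketch: query libvirt domains on several overcloud compute nodes.
# heat-admin is the default user TripleO creates on overcloud images.
list_domains() {
    local host
    for host in "$@"; do
        echo "== $host =="
        ssh heat-admin@"$host" sudo virsh list --all
    done
}

# Usage:
# list_domains overcloud-novacompute-0.localdomain overcloud-novacompute-1.localdomain
```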
*************************************<br />
VIRTHOST Configuration<br />
*************************************<br />
[root@ServerVIRT1608 ~]# brctl show<br />
<pre>bridge name     bridge id               STP enabled     interfaces
brext           8000.525400b017dc       no              brext-nic
brovc           8000.525400948dc8       no              brovc-nic
virbr0          8000.525400f83b3b       yes             virbr0-nic
                                                        vnet0
                                                        vnet3
                                                        vnet5
                                                        vnet7</pre>
<br />
[root@ServerVIRT1608 ~]# ovs-vsctl show<br />
<pre>96876d44-cca3-4e93-b89c-8238b4745c3c
    Bridge brbm
        Port "vnet6"
            Interface "vnet6"
        Port "vnet4"
            Interface "vnet4"
        Port "vnet1"
            Interface "vnet1"
        Port "vnet2"
            Interface "vnet2"
        Port brbm
            Interface brbm
                type: internal
    ovs_version: "2.5.0"</pre>
<br />
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-80257479121557419272016-09-15T11:42:00.001-07:002016-10-01T10:26:16.949-07:00TripleO deployment of 'master' branch via instack-virt-setup<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
<h4 style="text-align: left;">
<b>UPDATE 09/30/2016</b></h4>
<br />
$ sudo route add -net 192.0.2.0/24 gw 192.0.2.1 ( on instack VM )<br />
is no longer needed; moreover, it interferes with ssh connections to the overcloud nodes<br />
<br />
*************************** <br />
Overcloud-deploy.sh <br />
***************************<br />
<pre>#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml \
--control-scale 1 --compute-scale 2 </pre>
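After kicking the deploy off, the wait for the stack to settle can be scripted as well; a minimal sketch (the `watch_stack` helper name and the 30-second poll interval are mine, not part of TripleO):

```shell
# Hypothetical watcher: poll heat via the openstack CLI until the
# overcloud stack reaches a terminal state, then report that state.
watch_stack() {
    local status
    while :; do
        status=$(openstack stack list -f value -c "Stack Status" 2>/dev/null | head -n1)
        case $status in
            *_COMPLETE|*_FAILED|"") break ;;    # terminal, or no stack at all
        esac
        sleep 30
    done
    echo "${status:-no stack found}"
}
```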
<b>END UPDATE</b><br />
<br />
<h4 style="text-align: left;">
UPDATE 09/23/2016</h4>
<br />
Fix released for bugs (<a href="https://bugs.launchpad.net/tripleo/+bug/1622683" target="_blank">1622683</a>, <a href="https://bugs.launchpad.net/tripleo/+bug/1622720" target="_blank">1622720</a>) in :- <br />
<a href="https://bugs.launchpad.net/tripleo/+bug/1622683" target="_blank">https://bugs.launchpad.net/tripleo/+bug/1622683 </a><br />
<br />
**************************************************** <br />
Deploy completed OK the first time <br />
****************************************************<br />
2016-09-23 09:08:28Z [overcloud-AllNodesDeploySteps-yrsd7pkitjij]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-09-23 09:08:28Z [AllNodesDeploySteps]: CREATE_COMPLETE state changed<br />
2016-09-23 09:08:28Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully<br />
<br />
Stack overcloud CREATE_COMPLETE <br />
<br />
Overcloud Endpoint: http://10.0.0.6:5000/v2.0<br />
Overcloud Deployed<br />
[stack@instack ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| b3d97bcf-9318-48ef-91c7-09c8386a75aa | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.13 |
| 148aa223-513d-44d5-b865-2cb2c3dcbc6f | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| e3ee61fb-c243-4454-949d-84c22e66b147 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.10 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+</pre>
<br />
[stack@instack ~]$ mistral environment-list<br />
<pre>+-----------+-------------+---------+---------------------+---------------------+
| Name | Description | Scope | Created at | Updated at |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None | private | 2016-09-23 07:33:40 | 2016-09-23 08:41:29 |
+-----------+-------------+---------+---------------------+---------------------+</pre>
<br />
[stack@instack ~]$ swift list<br />
ov-jjf6fn4qyjt-0-gfpul73m4fdl-Controller-dekw3w5stcqd<br />
ov-pb3uu5djue-0-lmazr26t3z4u-NovaCompute-sqfaz5lstqov<br />
ov-pb3uu5djue-1-7prlyxolsdhd-NovaCompute-ltmkwmq74iyq<br />
overcloud<br />
<br />
[stack@instack ~]$ openstack stack delete overcloud<br />
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils<br />
Are you sure you want to delete this stack(s) [y/N]? y<br />
<br />
[stack@instack ~]$ openstack stack list<br />
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils<br />
<pre>+---------------------+------------+--------------------+----------------------+--------------+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+---------------------+------------+--------------------+----------------------+--------------+
| 6e3ae2b6-5ce1-45db- | overcloud | DELETE_IN_PROGRESS | 2016-09-23T08:41:38Z | None |
| bde5-06d2ce2e571b | | | | |
+---------------------+------------+--------------------+----------------------+--------------+</pre>
<br />
[stack@instack ~]$ openstack stack list<br />
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils<br />
<br />
*************************************************************************** <br />
Empty output - overcloud stack has been deleted <br />
****************************************************************************<br />
<br />
[stack@instack ~]$ mistral environment-list<br />
<pre>+-----------+-------------+---------+---------------------+---------------------+
| Name | Description | Scope | Created at | Updated at |
+-----------+-------------+---------+---------------------+---------------------+
| overcloud | None | private | 2016-09-23 07:33:40 | 2016-09-23 08:41:29 |
+-----------+-------------+---------+---------------------+---------------------+</pre>
<br />
[stack@instack ~]$ swift list<br />
overcloud<br />
<br />
******************************************************************************<br />
Now attempt to redeploy a second time. Success on 09/23/2016<br />
******************************************************************************<br />
[stack@instack ~]$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml<br />
<br />
[stack@instack ~]$ ./overcloud-deploy.sh<br />
+ source /home/stack/stackrc<br />
++ export NOVA_VERSION=1.1<br />
++ NOVA_VERSION=1.1<br />
+++ sudo hiera admin_password<br />
++ export OS_PASSWORD=68a350a2972f7ff9e88d0e9ea79056b3e0bb90ec<br />
++ OS_PASSWORD=68a350a2972f7ff9e88d0e9ea79056b3e0bb90ec<br />
++ export OS_AUTH_URL=http://192.0.2.1:5000/v2.0<br />
++ OS_AUTH_URL=http://192.0.2.1:5000/v2.0<br />
++ export OS_USERNAME=admin<br />
++ OS_USERNAME=admin<br />
++ export OS_TENANT_NAME=admin<br />
++ OS_TENANT_NAME=admin<br />
++ export COMPUTE_API_VERSION=1.1<br />
++ COMPUTE_API_VERSION=1.1<br />
++ export OS_BAREMETAL_API_VERSION=1.15<br />
++ OS_BAREMETAL_API_VERSION=1.15<br />
++ export OS_NO_CACHE=True<br />
++ OS_NO_CACHE=True<br />
++ export OS_CLOUDNAME=undercloud<br />
++ OS_CLOUDNAME=undercloud<br />
++ export OS_IMAGE_API_VERSION=1<br />
++ OS_IMAGE_API_VERSION=1<br />
<span style="color: #b45f06;">+ openstack overcloud deploy --libvirt-type qemu --ntp-server pool.ntp.org --templates /home/stack/tripleo-heat-templates -e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml -e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml -e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network_env.yaml --control-scale 1 --compute-scale 2</span><br />
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils<br />
Removing the current plan files<br />
Uploading new plan files<br />
Started Mistral Workflow. Execution ID: 4d744a89-a2e7-43a5-82af-26bab11e6342<br />
Plan updated<br />
Deploying templates in the directory /home/stack/tripleo-heat-templates<br />
Object GET failed: http://192.0.2.1:8080/v1/AUTH_7ea6220c67c84c828f4249b95886259f/overcloud/overcloud-without-mergepy.yaml 404 Not Found [first 60 chars of response]<br />
<span style="color: #b45f06;">Started Mistral Workflow. Execution ID: 807a7047-a1c3-4686-9be7-11d73e72dfb8</span><br />
<span style="color: #b45f06;">2016-09-23 09:15:34Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started</span><br />
2016-09-23 09:15:34Z [HorizonSecret]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:34Z [RabbitCookie]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:35Z [MysqlRootPassword]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:35Z [Networks]: CREATE_IN_PROGRESS state changed<br />
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_COMPLETE state changed<br />
2016-09-23 09:15:35Z [RabbitCookie]: CREATE_COMPLETE state changed<br />
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_COMPLETE state changed<br />
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_COMPLETE state changed<br />
2016-09-23 09:15:35Z [HorizonSecret]: CREATE_COMPLETE state changed<br />
. . . . . .<br />
2016-09-23 09:39:50Z [BlockStorageExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-23 09:39:51Z [CephStorageExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-23 09:39:51Z [ComputeExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-23 09:39:51Z [ObjectStorageExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-23 09:39:51Z [overcloud-AllNodesDeploySteps-5bfecsxdagiz]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-09-23 09:39:51Z [AllNodesDeploySteps]: CREATE_COMPLETE state changed<br />
2016-09-23 09:39:51Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully<br />
<br />
Stack overcloud CREATE_COMPLETE <br />
<br />
Overcloud Endpoint: http://10.0.0.12:5000/v2.0<br />
Overcloud Deployed<br />
<br />
******************************<br />
Another test on 09/25/2016<br />
******************************<br />
Stack .bashrc on VIRTHOST<br />
<br />
<pre>export NODE_MEM=8000
export NODE_COUNT=2
export UNDERCLOUD_NODE_CPU=4
export NODE_DISK=80
export UNDERCLOUD_NODE_DISK=35
export NODE_CPU=4
export NODE_DIST=centos7
export UNDERCLOUD_NODE_MEM=12000
export FS_TYPE=ext4
# Use specific aliases and functions
export LIBVIRT_DEFAULT_URI="qemu:///system"
</pre>
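As a quick sanity check that these sizes fit the 32 GB VIRTHOST, the arithmetic below reuses the values from the exports above:

```shell
# Total guest memory demanded by the settings above, in MB:
# two overcloud nodes plus the undercloud VM.
NODE_MEM=8000 NODE_COUNT=2 UNDERCLOUD_NODE_MEM=12000
total=$(( NODE_MEM * NODE_COUNT + UNDERCLOUD_NODE_MEM ))
echo "${total} MB"   # 28000 MB, leaving roughly 4 GB to the host itself
```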
<br />
********************************<br />
overcloud-deploy.sh<br />
********************************<br />
<br />
[stack@instack ~]$ cat overcloud-deploy.sh<br />
<pre>#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
--templates /home/stack/tripleo-heat-templates \
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml \
--control-scale 1 --compute-scale 1
</pre>
<br />
<br />
<br />
[stack@instack ~]$ mistral execution-list
<br />
<pre>+----------+-------------+---------------+-------------+-------------------+---------+------------+------------+---------------+
| ID | Workflow ID | Workflow name | Description | Task Execution ID | State | State info | Created at | Updated at |
+----------+-------------+---------------+-------------+-------------------+---------+------------+------------+---------------+
| 179ee399 | 91b46d6e-f6 | tripleo.plan_ | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -f9ff-4b | 29-41a8-bf4 | management.v1 | | | | | 13:33:01 | 13:33:15 |
| b4-ab68- | 3-79e6cef19 | .create_defau | | | | | | |
| ad09697e | 53c | lt_deployment | | | | | | |
| 8820 | | _plan | | | | | | |
| 2722c9ce | 2a3c6e58-63 | tripleo.barem | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -d8ee- | 17-449a-87c | etal.v1.regis | | | | | 13:50:41 | 13:50:55 |
| 470b- | 7-a0b1ac0ab | ter_or_update | | | | | | |
| 957b-107 | 18b | | | | | | | |
| b2859ccd | | | | | | | | |
| c | | | | | | | | |
| b98bbae3 | e82c37ad- | tripleo.barem | sub- | 76ff2748-4922 | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -185e- | 8d07-4a7b- | etal.v1.set_n | workflow | -4b1f- | | | 13:50:48 | 13:50:51 |
| 4ba5 | 85cf-1140ab | ode_state | execution | a0c5-d607dc6a24d4 | | | | |
| -a97e-9e | 425c9d | | | | | | | |
| 9ecc8b51 | | | | | | | | |
| ee | | | | | | | | |
| d5fff31f | e82c37ad- | tripleo.barem | sub- | 76ff2748-4922 | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -5883 | 8d07-4a7b- | etal.v1.set_n | workflow | -4b1f- | | | 13:50:48 | 13:50:51 |
| -46dd-a3 | 85cf-1140ab | ode_state | execution | a0c5-d607dc6a24d4 | | | | |
| bc-24926 | 425c9d | | | | | | | |
| 8747688 | | | | | | | | |
| 07bc3a5d | e82c37ad- | tripleo.barem | sub- | 3516fb1c-3b2a- | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -f1fa-41 | 8d07-4a7b- | etal.v1.set_n | workflow | 4bab- | | | 13:50:55 | 13:50:58 |
| 41-bda6- | 85cf-1140ab | ode_state | execution | ac07-481cbe74796e | | | | |
| a824b9cd | 425c9d | | | | | | | |
| 9a3b | | | | | | | | |
| 51738f17 | 96820b49-32 | tripleo.barem | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -27d4-4f | 39-4197 | etal.v1.provi | | | | | 13:50:55 | 13:51:02 |
| 12-af56- | -9caf-a6c42 | de | | | | | | |
| a948a522 | d418264 | | | | | | | |
| 97c2 | | | | | | | | |
| 87607542 | e82c37ad- | tripleo.barem | sub- | 3516fb1c-3b2a- | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -42c6 | 8d07-4a7b- | etal.v1.set_n | workflow | 4bab- | | | 13:50:55 | 13:50:58 |
| -4a6d-8b | 85cf-1140ab | ode_state | execution | ac07-481cbe74796e | | | | |
| 26-4ab85 | 425c9d | | | | | | | |
| d7998ad | | | | | | | | |
| de8a9706 | ae47b2fc- | tripleo.barem | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -3f29-4c | bf56-4f43 | etal.v1.intro | | | | | 13:51:16 | 13:53:16 |
| 1e-b969- | -92bf-b069f | spect_managea | | | | | | |
| a42da88e | b7cc268 | ble_nodes | | | | | | |
| f6d3 | | | | | | | | |
| 81369764 | 1ce6499e- | tripleo.barem | sub- | 45d1490d-f20a-497 | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -db33-4a | 7a0b-4610 | etal.v1.intro | workflow | 7-ba81-e7bfe760ca | | | 13:51:17 | 13:53:13 |
| c8-b626- | -96af-09362 | spect | execution | fb | | | | |
| f509e83a | 65674ca | | | | | | | |
| 7746 | | | | | | | | |
| 355bc308 | 9a9f0be3-0f | tripleo.barem | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -9c2d-49 | b8-4f92 | etal.v1.provi | | | | | 13:53:15 | 13:53:27 |
| 67-8e7e- | -bc3f-0b4d0 | de_manageable | | | | | | |
| 393c0389 | 82ccd90 | _nodes | | | | | | |
| 50c0 | | | | | | | | |
| 43d70508 | 96820b49-32 | tripleo.barem | sub- | 608c9105-fa1a- | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -f07c-48 | 39-4197 | etal.v1.provi | workflow | 4e32-b13a- | | | 13:53:16 | 13:53:24 |
| 42-a2ff- | -9caf-a6c42 | de | execution | 333d9de066be | | | | |
| 7aff0a3c | d418264 | | | | | | | |
| 579f | | | | | | | | |
| 7fd2a5fe | e82c37ad- | tripleo.barem | sub- | 89e7b777-d5a0 | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -1038-4f | 8d07-4a7b- | etal.v1.set_n | workflow | -4ebb-bd6f- | | | 13:53:16 | 13:53:20 |
| 81-a19e- | 85cf-1140ab | ode_state | execution | 06c652de461a | | | | |
| c77c0c96 | 425c9d | | | | | | | |
| fa65 | | | | | | | | |
| 9aa3e64b | e82c37ad- | tripleo.barem | sub- | 89e7b777-d5a0 | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -1323 | 8d07-4a7b- | etal.v1.set_n | workflow | -4ebb-bd6f- | | | 13:53:16 | 13:53:20 |
| -41eb-bb | 85cf-1140ab | ode_state | execution | 06c652de461a | | | | |
| b2-c3750 | 425c9d | | | | | | | |
| e95fd75 | | | | | | | | |
| 02d9dd3c | b33dd0c4-b8 | tripleo.plan_ | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -ccbe-48 | 10-4cde- | management.v1 | | | | | 14:01:30 | 14:01:37 |
| a1-ab72- | b49f-d22e90 | .update_deplo | | | | | | |
| 094aa7b1 | 5560cb | yment_plan | | | | | | |
| 6c1a | | | | | | | | |
| f6c95474 | b2f4ab26 | tripleo.deplo | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -5b7d- | -7c5a-4ee2- | yment.v1.depl | | | | | 14:01:38 | 14:01:51 |
| 441b- | b665-7238c4 | oy_plan | | | | | | |
| b17c-102 | 19ab3b | | | | | | | |
| fbb91191 | | | | | | | | |
| b | | | | | | | | |
| 7f2a1e96 | b33dd0c4-b8 | tripleo.plan_ | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -7ce0-44 | 10-4cde- | management.v1 | | | | | 16:43:15 | 16:43:24 |
| d7-8af9- | b49f-d22e90 | .update_deplo | | | | | | |
| a814c686 | 5560cb | yment_plan | | | | | | |
| 51b1 | | | | | | | | |
| e0967fc4 | b2f4ab26 | tripleo.deplo | | <none> | SUCCESS | None | 2016-09-25 | 2016-09-25 |
| -011d-42 | -7c5a-4ee2- | yment.v1.depl | | | | | 16:43:25 | 16:43:40 |
| 36-a8ae- | b665-7238c4 | oy_plan | | | | | | |
| 53ed4e29 | 19ab3b | | | | | | | |
| f561 | | | | | | | | |
+----------+-------------+---------------+-------------+-------------------+---------+------------+------------+---------------+</pre>
<br />
<h4 style="text-align: left;">
END UPDATE</h4>
<br />
Due to Launchpad bug <a href="https://bugs.launchpad.net/tripleo/+bug/1604770" target="_blank">introspection hangs due to broken ipxe config</a>,<br />
finally resolved on 09/01/2016, the approach suggested in<br />
<a href="http://www.anstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html" target="_blank">TripleO manual deployment of 'master' branch by Carlo Camacho</a><br />
has been retested. As it appears, things have changed in the meantime. What follows is how the above-mentioned post worked for me on a 32 GB VIRTHOST (i7 4790)<br />
<br />
*****************************************<br />
Tune stack environment on VIRTHOST<br />
*****************************************<br />
<pre># useradd stack
# echo "stack:stack" | chpasswd
# echo "stack ALL=(root) NOPASSWD:ALL" | sudo tee -a /etc/sudoers.d/stack
# chmod 0440 /etc/sudoers.d/stack
# su - stack
</pre>
<br />
***************************<br />
Tune stack ENV<br />
**************************<br />
<pre>export NODE_DIST=centos7
export NODE_CPU=2
export NODE_MEM=7550
export NODE_COUNT=2
export UNDERCLOUD_NODE_CPU=2
export UNDERCLOUD_NODE_MEM=9000
export FS_TYPE=ext4
</pre>
<br />
***********************<i>*****************************************</i><br />
<div style="text-align: left;">
<i>Re-login to stack (highlight long line and copy if needed)</i></div>
****************************************************************<br />
<pre> $ sudo yum -y install epel-release sudo
$ sudo yum -y install yum-plugin-priorities
$ sudo curl -o /etc/yum.repos.d/delorean.repo http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/delorean.repo
$ sudo curl -o /etc/yum.repos.d/delorean-deps.repo http://trunk.rdoproject.org/centos7/delorean-deps.repo
$ sudo yum install -y instack-undercloud
$ instack-virt-setup
</pre>
<br />
*********************<br />
<div style="text-align: left;">
<i>On instack VM</i></div>
*********************<br />
Create swap file per <a href="http://www.anstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html" target="_blank">http://www.anstack.com/blog/2016/07/04/manually-installing-tripleo-recipe.html</a> :-<br />
<br />
<pre># Add a 4GB swap file to the Undercloud
sudo dd if=/dev/zero of=/swapfile bs=1024 count=4194304
sudo chmod 600 /swapfile
sudo mkswap /swapfile
# Turn ON the swap file
sudo swapon /swapfile
# Enable it on boot (note: "sudo echo ... >> /etc/fstab" would fail,
# since the redirection runs in the unprivileged shell)
echo "/swapfile swap swap defaults 0 0" | sudo tee -a /etc/fstab
</pre>
<br />
***************************<br />
Restart instack VM<br />
***************************<br />
<br />
Next<br />
<br />
su - stack<br />
sudo yum -y install yum-plugin-priorities<br />
<br />
*************************************<br />
Update .bashrc under ~stack/ <br />
*************************************<br />
<pre> export USE_DELOREAN_TRUNK=1
export DELOREAN_TRUNK_REPO="http://buildlogs.centos.org/centos/7/cloud/x86_64/rdo-trunk-master-tripleo/"
export DELOREAN_REPO_FILE="delorean.repo"
export FS_TYPE=ext4
</pre>
<br />
************************************<br />
<div style="text-align: left;">
<span style="color: #b45f06;"><i> Re-login to stack</i></span></div>
************************************ <br />
<br />
$ git clone https://github.com/openstack/tripleo-heat-templates<br />
$ git clone https://github.com/openstack-infra/tripleo-ci.git<br />
<br />
$ ./tripleo-ci/scripts/tripleo.sh --repo-setup<br />
$ ./tripleo-ci/scripts/tripleo.sh --undercloud<br />
$ source stackrc <br />
$ ./tripleo-ci/scripts/tripleo.sh --overcloud-images<br />
$ ./tripleo-ci/scripts/tripleo.sh --register-nodes<br />
$ ./tripleo-ci/scripts/tripleo.sh --introspect-nodes<br />
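The tripleo.sh invocations above can be wrapped so that a failed stage stops the run; a sketch (the `run_stages` name is mine; stackrc is sourced after --undercloud, exactly as in the step list):

```shell
# Sketch: drive the tripleo.sh stages in order, stopping on the first
# failure; stackrc is sourced once the undercloud is installed.
run_stages() {
    local s
    for s in --repo-setup --undercloud; do
        ./tripleo-ci/scripts/tripleo.sh "$s" || return 1
    done
    source "$HOME/stackrc" || return 1
    for s in --overcloud-images --register-nodes --introspect-nodes; do
        ./tripleo-ci/scripts/tripleo.sh "$s" || return 1
    done
}
```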
<br />
************************************************<br />
<div style="text-align: left;">
<span style="color: #b45f06;"><i> Passing step affected by mentioned bug</i></span></div>
************************************************<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq0IT5ehmCc1Qd-8YCGclldkrFvK9uAh-mGgvfixf76BrmU36Xi9I07FW__G9da1juwPNVcABHNr4k5FiHFxL8_cniioVnQWlNs1XGpUKm7q8v4Nhk8IiU8_YFuitW_Wgv2JmXng/s1600/Screenshot+from+2016-09-15+17-48-03.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjq0IT5ehmCc1Qd-8YCGclldkrFvK9uAh-mGgvfixf76BrmU36Xi9I07FW__G9da1juwPNVcABHNr4k5FiHFxL8_cniioVnQWlNs1XGpUKm7q8v4Nhk8IiU8_YFuitW_Wgv2JmXng/s640/Screenshot+from+2016-09-15+17-48-03.png" width="640" /></a></div>
<br />
<br />
$ ./tripleo-ci/scripts/tripleo.sh --overcloud-deploy<br />
<br />
<span style="color: #b45f06;"> Issue at start up of Overcloud deployment</span><br />
<br />
<br />
##################################################<br />
tripleo.sh -- Overcloud create started.<br />
################################################## <br />
See the status of Launchpad bugs <a href="https://bugs.launchpad.net/tripleo/+bug/1622720" target="_blank">1622720</a> and <a href="https://bugs.launchpad.net/tripleo/+bug/1622683" target="_blank">1622683</a>; the UPDATE of 09/17/2016 provides the links. Back-porting patch <a href="https://review.openstack.org/gitweb?p=openstack/tripleo-common.git;a=patch;h=203460176750aeda6c0a2d39ce349ad827053b11" target="_blank">https://review.openstack.org/gitweb?p=openstack/tripleo-common.git;a=patch;h=203460176750aeda6c0a2d39ce349ad827053b11</a><br />
by rebuilding openstack-tripleo-common-5.0.1-0.20160917031337.15c97e6.el7.centos.src.rpm and re-installing the new rpm didn't work for me.<br />
##################################################<br />
<pre>WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
WARNING: openstackclient.common.exceptions is deprecated and will be removed after Jun 2017. Please use osc_lib.exceptions
Creating Swift container to store the plan
Creating plan from template files in: /home/stack/openstack-tripleo-heat-templates/
Plan created
Deploying templates in the directory /home/stack/openstack-tripleo-heat-templates
Object GET failed: http://192.0.2.1:8080/v1/AUTH_b4438648a72446eca04d2d216261c373/overcloud/overcloud-without-mergepy.yaml 404 Not Found [first 60 chars of response] </pre>
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPycksgg6PM3WRATdFmvyxxIMDSVI8Uw73gZQSW-VENkoVZrGfTSihYeim8Qg3YdwH9007s0EsJRIdW18C0Uls6_JwLQYTcHMlcarHcn1mz9xr8eJNbVRWWov56bZ_xMLzpK0Q6Q/s1600/Screenshot+from+2016-09-15+17-48-56.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjPycksgg6PM3WRATdFmvyxxIMDSVI8Uw73gZQSW-VENkoVZrGfTSihYeim8Qg3YdwH9007s0EsJRIdW18C0Uls6_JwLQYTcHMlcarHcn1mz9xr8eJNbVRWWov56bZ_xMLzpK0Q6Q/s640/Screenshot+from+2016-09-15+17-48-56.png" width="640" /></a><br />
<br />
Finally the overcloud gets deployed<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgalx9aVNJNbrpLA_d20BNZvJt8NO4peK5BNSND-2WVyAE_hUdFuTeX4Nzl3QRyxJIp9iMRLXDUUC8JmUwt5odKQybGW4LDWv3UmdNv5m1ySqrkGyKQdE3Cc6BsGFJx_UWfWrbKdA/s1600/Screenshot+from+2016-09-15+21-27-21.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgalx9aVNJNbrpLA_d20BNZvJt8NO4peK5BNSND-2WVyAE_hUdFuTeX4Nzl3QRyxJIp9iMRLXDUUC8JmUwt5odKQybGW4LDWv3UmdNv5m1ySqrkGyKQdE3Cc6BsGFJx_UWfWrbKdA/s640/Screenshot+from+2016-09-15+21-27-21.png" width="640" /> </a><br />
<br />
****************************************************************************************<br />
<div style="text-align: left;">
On the instack VM, verified comment #9 of <a href="https://bugs.launchpad.net/tripleo/+bug/1604770" target="_blank">https://bugs.launchpad.net/tripleo/+bug/1604770</a><br />
**************************************************************************************** <br />
[stack@instack ~]$ sudo su -<br />
Last login: Thu Sep 15 16:19:07 UTC 2016 from 192.168.122.1 on pts/1<br />
[root@instack ~]# rpm -qa \*ipxe\*<br />
<span style="color: #b45f06;">ipxe-roms-qemu-20160127-1.git6366fa7a.el7.noarch</span><br />
ipxe-bootimgs-20160127-1.git6366fa7a.el7.noarch</div>
<br />
<br />
[stack@instack ~]$ openstack stack list<br />
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils<br />
<pre>+--------------------------------------+------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+--------------+
| 7657df62-da09-4c0f-bbdb-b9c95bdad537 | overcloud  | CREATE_COMPLETE | 2016-09-15T14:48:49Z | None         |
+--------------------------------------+------------+-----------------+----------------------+--------------+</pre>
<br />
[stack@instack ~]$ nova list<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| 400e1499-5e02-4c92-a41b-814918f0edc3 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.15 |<br />
| 58f3591f-c72f-4d97-9278-a33b3f631248 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.6 |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
<br />
<div style="text-align: left;">
<span style="color: #b45f06;"><i>Management and fixes required in the overcloud</i></span></div>
<br />
********************************************************************<br />
Fix the VNC proxy IPs on the Compute node && open port 6080 on the Controller<br />
********************************************************************<br />
<br />
On the Compute node, in the [vnc] section of /etc/nova/nova.conf :-<br />
<br />
[vnc]<br />
vncserver_proxyclient_address=192.0.2.6<br />
vncserver_listen=0.0.0.0<br />
keymap=en-us<br />
enabled=True<br />
<span style="color: #b45f06;">novncproxy_base_url=http://192.0.2.15:6080/vnc_auto.html <===</span><br />
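These [vnc] options can also be set programmatically. A minimal sketch, assuming a nova.conf-style INI file; real nova.conf files may contain constructs configparser cannot round-trip, so treat this as illustrative rather than a drop-in tool:

```python
import configparser

def set_vnc_options(conf_path, compute_ip, controller_ip):
    """Write the [vnc] options shown above into a nova.conf-style INI file."""
    cfg = configparser.ConfigParser()
    cfg.read(conf_path)
    if not cfg.has_section("vnc"):
        cfg.add_section("vnc")
    vnc = cfg["vnc"]
    vnc["vncserver_proxyclient_address"] = compute_ip  # this compute node's IP
    vnc["vncserver_listen"] = "0.0.0.0"
    vnc["keymap"] = "en-us"
    vnc["enabled"] = "True"
    # The noVNC proxy runs on the controller, port 6080
    vnc["novncproxy_base_url"] = "http://%s:6080/vnc_auto.html" % controller_ip
    with open(conf_path, "w") as f:
        cfg.write(f)
```

Restart the nova-compute service afterwards so the new base URL takes effect.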
<br />
On Controller<br />
<br />
Add the following line to /etc/sysconfig/iptables<br />
<br />
<span style="color: #b45f06;">-A INPUT -p tcp -m multiport --dports 6080 -m comment --comment "novncproxy" -m state --state NEW -j ACCEPT</span><br />
<br />
Save /etc/sysconfig/iptables<br />
<br />
#service iptables restart<br />
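The rule above follows a fixed iptables-save pattern; a tiny hypothetical helper that generates such a line for any port/comment pair can keep hand edits of /etc/sysconfig/iptables consistent:

```python
def iptables_accept_rule(port, comment):
    """Build an ACCEPT line for /etc/sysconfig/iptables (iptables-save format)."""
    return ('-A INPUT -p tcp -m multiport --dports %d '
            '-m comment --comment "%s" -m state --state NEW -j ACCEPT'
            % (port, comment))
```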
<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ~(keystone_admin)]# netstat -antp | grep 6080</span><br />
<span style="color: #b45f06;">tcp 0 0 192.0.2.15:6080 0.0.0.0:* LISTEN 8397/python2 </span><br />
tcp 1 0 192.0.2.8:56080 192.0.2.8:8080 CLOSE_WAIT 11606/gnocchi-metri<br />
tcp 0 0 192.0.2.15:6080 192.0.2.1:47598 ESTABLISHED 28260/python2 <br />
tcp 0 0 192.0.2.15:6000 192.0.2.15:36080 TIME_WAIT - <br />
<br />
[root@overcloud-controller-0 ~(keystone_admin)]# ps -ef | grep 8397<br />
<br />
nova 8397 1 0 15:06 ? 00:00:05 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/<br />
nova 28260 8397 3 17:37 ? 00:00:56 /usr/bin/python2 /usr/bin/nova-novncproxy --web /usr/share/novnc/<br />
root 31149 23941 0 18:06 pts/0 00:00:00 grep --color=auto 8397<br />
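Rather than grepping netstat output, the noVNC listener can also be probed directly with a plain TCP connect; a small sketch (the controller's 192.0.2.15:6080 from above is the pair of interest, but any host/port works):

```python
import socket

def port_is_listening(host, port, timeout=2.0):
    """True if a TCP connect to host:port succeeds, i.e. something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```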
<br />
**********************************<br />
Create flavors as follows<br />
**********************************<br />
<br />
<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ~]# nova flavor-create "m2.small" 2 1000 20 1</span><br />
<br />
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+<br />
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |<br />
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+<br />
| 2 | m2.small | 1000 | 20 | 0 | | 1 | 1.0 | True |<br />
+----+----------+-----------+------+-----------+------+-------+-------------+-----------+<br />
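For reference, the positional arguments of `nova flavor-create` above are name, ID, RAM in MB, disk in GB and VCPUs, in that order; a tiny sketch making that mapping explicit (the helper is mine, not part of novaclient):

```python
def parse_flavor_create_args(name, flavor_id, ram_mb, disk_gb, vcpus):
    """Mirror the positional argument order of `nova flavor-create`."""
    return {"name": name, "id": flavor_id, "ram_mb": int(ram_mb),
            "disk_gb": int(disk_gb), "vcpus": int(vcpus)}
```

So `nova flavor-create "m2.small" 2 1000 20 1` asks for 1000 MB of RAM, a 20 GB disk and 1 VCPU, matching the table above.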
[root@overcloud-controller-0 ~]# nova flavor-list<br />
<pre>+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | 500MB Tiny Instance | 500       | 1    | 0         |      | 1     | 1.0         | True      |
| 2  | m2.small            | 1000      | 20   | 0         |      | 1     | 1.0         | True      |
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+</pre>
<br />
[root@overcloud-controller-0 ~]# nova flavor-list<br />
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+<br />
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |<br />
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+<br />
| 1 | 500MB Tiny Instance | 500 | 1 | 0 | | 1 | 1.0 | True |<br />
| 2 | m2.small | 1000 | 20 | 0 | | 1 | 1.0 | True |<br />
+----+---------------------+-----------+------+-----------+------+-------+-------------+-----------+<br />
<br />
[root@overcloud-controller-0 ~]# glance image-list<br />
+--------------------------------------+---------------+<br />
| ID | Name |<br />
+--------------------------------------+---------------+<br />
| c9faf86d-4a06-401a-839c-c5bd48ff704a | CirrOS34Cloud |<br />
| 4bf6f43d-8cba-43d7-9e34-347cff2d4769 | UbuntuCloud |<br />
| 81e031b0-11b7-440b-946f-b8f9e3a83c95 | VF24Cloud |<br />
+--------------------------------------+---------------+<br />
<br />
[root@overcloud-controller-0 ~]# neutron net-list<br />
+--------------------------------------+--------------+----------------------------------------+<br />
| id | name | subnets |<br />
+--------------------------------------+--------------+----------------------------------------+<br />
| 2d0ccb5f-0cc8-4710-819d-7c148137aea2 | public | 795e0fea-0550-44e8-abf3-afd316cd7843 |<br />
| | | 192.0.2.0/24 |<br />
| e2a9edb9-8e01-4e99-83b2-6c6e705967fe | demo_network | 56b70753-e776-4ce8-9b28-650431b43a63 |<br />
| | | 50.0.0.0/24 |<br />
+--------------------------------------+--------------+----------------------------------------+<br />
<br />
[root@overcloud-controller-0 ~]# nova boot --flavor 2 --key-name oskey09152016 \<br />
--image 81e031b0-11b7-440b-946f-b8f9e3a83c95 \<br />
--nic net-id=e2a9edb9-8e01-4e99-83b2-6c6e705967fe VF24Devs05<br />
+--------------------------------------+--------------------------------------------------+<br />
| Property | Value |<br />
+--------------------------------------+--------------------------------------------------+<br />
| OS-DCF:diskConfig | MANUAL |<br />
| OS-EXT-AZ:availability_zone | |<br />
| OS-EXT-SRV-ATTR:host | - |<br />
| OS-EXT-SRV-ATTR:hostname | vf24devs05 |<br />
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |<br />
| OS-EXT-SRV-ATTR:instance_name | |<br />
| OS-EXT-SRV-ATTR:kernel_id | |<br />
| OS-EXT-SRV-ATTR:launch_index | 0 |<br />
| OS-EXT-SRV-ATTR:ramdisk_id | |<br />
| OS-EXT-SRV-ATTR:reservation_id | r-psorddod |<br />
| OS-EXT-SRV-ATTR:root_device_name | - |<br />
| OS-EXT-SRV-ATTR:user_data | - |<br />
| OS-EXT-STS:power_state | 0 |<br />
| OS-EXT-STS:task_state | scheduling |<br />
| OS-EXT-STS:vm_state | building |<br />
| OS-SRV-USG:launched_at | - |<br />
| OS-SRV-USG:terminated_at | - |<br />
| accessIPv4 | |<br />
| accessIPv6 | |<br />
| adminPass | dsFB8vrfUmv4 |<br />
| config_drive | |<br />
| created | 2016-09-15T12:01:34Z |<br />
| description | - |<br />
| flavor | m2.small (2) |<br />
| hostId | |<br />
| host_status | |<br />
| id | 212e06de-e971-428b-9e94-79dc8d91b6db |<br />
| image | VF24Cloud (81e031b0-11b7-440b-946f-b8f9e3a83c95) |<br />
| key_name | oskey09152016 |<br />
| locked | False |<br />
| metadata | {} |<br />
| name | VF24Devs05 |<br />
| os-extended-volumes:volumes_attached | [] |<br />
| progress | 0 |<br />
| security_groups | default |<br />
| status | BUILD |<br />
| tags | [] |<br />
| tenant_id | a1c9c1c1a1134384b4a496d585981aff |<br />
| updated | 2016-09-15T12:01:34Z |<br />
| user_id | e2383104829c45e1a3d70e11cc87d399 |<br />
+--------------------------------------+--------------------------------------------------+<br />
[root@overcloud-controller-0 ~]# nova list<br />
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+<br />
| c7cea368-9602-421d-beb3-c0ed37379b57 | CirrOSDevs1 | ACTIVE | - | Running | demo_network=50.0.0.17, 192.0.2.104 |<br />
| 212e06de-e971-428b-9e94-79dc8d91b6db | VF24Devs05 | BUILD | spawning | NOSTATE | demo_network=50.0.0.15 |<br />
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+<br />
<br />
[root@overcloud-controller-0 ~]# nova list<br />
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------+<br />
| c7cea368-9602-421d-beb3-c0ed37379b57 | CirrOSDevs1 | ACTIVE | - | Running | demo_network=50.0.0.17, 192.0.2.104 |<br />
| 212e06de-e971-428b-9e94-79dc8d91b6db | VF24Devs05 | ACTIVE | - | Running | demo_network=50.0.0.15 |<br />
+--------------------------------------+-------------+--------+------------+-------------+-------------------------------------<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgRRnfUHRQpCrosWqDSQcAXQq18_b6Akvyk-xOOy6OojyiPG_jsws8ECplzsMW_Ng3-OqCgpvYvu9Rt2GV5vqrIrT6eYMDflGZivb8oyUmLtCQPJM4N2d0ki1bh81r62IgtMJpcg/s1600/Screenshot+from+2016-09-15+18-37-53.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgRRnfUHRQpCrosWqDSQcAXQq18_b6Akvyk-xOOy6OojyiPG_jsws8ECplzsMW_Ng3-OqCgpvYvu9Rt2GV5vqrIrT6eYMDflGZivb8oyUmLtCQPJM4N2d0ki1bh81r62IgtMJpcg/s640/Screenshot+from+2016-09-15+18-37-53.png" width="640" /></a><br />
<br />
Another option is to activate vlan10 following<br />
<a href="http://bderzhavets.blogspot.ru/2016/07/stable-mitaka-ha-instack-virt-setup.html" target="_blank"> http://bderzhavets.blogspot.com/2016/07/stable-mitaka-ha-instack-virt-setup.html</a><br />
and, instead of `./tripleo-ci/scripts/tripleo.sh --overcloud-deploy`,<br />
run the following deployment with network isolation activated :-<br />
<br />
$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml<br />
<br />
#!/bin/bash -x<br />
source /home/stack/stackrc<br />
openstack overcloud deploy \<br />
--control-scale 1 --compute-scale 1 \<br />
--libvirt-type qemu \<br />
--ntp-server pool.ntp.org \<br />
--templates /home/stack/tripleo-heat-templates \<br />
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \<br />
-e /home/stack/tripleo-heat-templates/environments/network-isolation.yaml \<br />
-e /home/stack/tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \<br />
-e $HOME/network_env.yaml<br />
<br />
<br />
*****************************************************************<br />
One more sample (no network isolation) :-<br />
*****************************************************************<br />
$ touch -f /home/stack/tripleo-heat-templates/puppet/post.yaml<br />
<br />
$ cat deploy.sh<br />
#!/bin/bash -x<br />
source /home/stack/stackrc<br />
openstack overcloud deploy \<br />
--libvirt-type qemu \<br />
--ntp-server pool.ntp.org \<br />
--templates /home/stack/tripleo-heat-templates \<br />
-e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \<br />
--control-scale 1 --compute-scale 2<br />
<br />
[stack@instack ~]$ ./deploy.sh<br />
+ openstack overcloud deploy --libvirt-type qemu --ntp-server pool.ntp.org --templates /home/stack/tripleo-heat-templates -e /home/stack/tripleo-heat-templates/overcloud-resource-registry-puppet.yaml --control-scale 1 --compute-scale 2<br />
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils<br />
WARNING: openstackclient.common.exceptions is deprecated and will be removed after Jun 2017. Please use osc_lib.exceptions<br />
Creating Swift container to store the plan<br />
Creating plan from template files in: /home/stack/tripleo-heat-templates<br />
Plan created<br />
Deploying templates in the directory /home/stack/tripleo-heat-templates<br />
<span style="color: #b45f06;">Object GET failed: http://192.0.2.1:8080/v1/AUTH_c79b54306a9044448b871f489749adef/overcloud/overcloud-without-mergepy.yaml 404 Not Found [first 60 chars of response] </span><br />
2016-09-17 19:15:50Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-09-17 19:15:50Z [HorizonSecret]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:50Z [RabbitCookie]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:50Z [PcsdPassword]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [Networks]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [MysqlRootPassword]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [ServiceNetMap]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [overcloud-Networks-abtd3qkalqzy]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-09-17 19:15:51Z [ExternalNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [NetworkExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [StorageNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [ManagementNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [TenantNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [InternalNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:15:51Z [InternalNetwork]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [NetworkExtraConfig]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [StorageMgmtNetwork]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [ExternalNetwork]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [StorageNetwork]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [ManagementNetwork]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [HorizonSecret]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [TenantNetwork]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [RabbitCookie]: CREATE_COMPLETE state changed<br />
2016-09-17 19:15:51Z [overcloud-Networks-abtd3qkalqzy]: CREATE_COMPLETE Stack CREATE completed successfully<br />
<br />
. . . . . . .<br />
<br />
2016-09-17 19:41:31Z [ObjectStorageExtraConfigPost]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:41:31Z [ControllerExtraConfigPost]: CREATE_IN_PROGRESS state changed<br />
2016-09-17 19:41:32Z [BlockStorageExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-17 19:41:32Z [CephStorageExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-17 19:41:32Z [ControllerExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-17 19:41:32Z [ComputeExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-17 19:41:32Z [ObjectStorageExtraConfigPost]: CREATE_COMPLETE state changed<br />
2016-09-17 19:41:32Z [overcloud-AllNodesDeploySteps-z3cb4xbleprv]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-09-17 19:41:33Z [AllNodesDeploySteps]: CREATE_COMPLETE state changed<br />
2016-09-17 19:41:33Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully<br />
<br />
Stack overcloud CREATE_COMPLETE <br />
<br />
Overcloud Endpoint: http://192.0.2.13:5000/v2.0<br />
Overcloud Deployed<br />
[stack@instack ~]$ nova list<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| 53d60a0c-d4fe-48fd-af78-fbc16c59bd5e | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.15 |<br />
| 098344d1-d403-40a7-8f20-6e417c132884 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.12 |<br />
| 3dc3338f-c6e4-47b8-8b30-08fe45053e43 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.8 |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
<br />
UPDATE 09/23/2016<br />
A re-run shows the same (harmless) Swift 404, after which the Mistral workflow starts and stack creation proceeds :-<br />
<pre>&lt;html&gt;&lt;h4&gt;Not Found&lt;/h4&gt;The resource could not be found.&lt;
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1647, in _retry
    service_token=self.service_token, **kwargs)
  File "/usr/lib/python2.7/site-packages/swiftclient/client.py", line 1139, in get_object
    raise ClientException.from_response(resp, 'Object GET failed', body)
ClientException: Object GET failed: http://192.0.2.1:8080/v1/AUTH_7ea6220c67c84c828f4249b95886259f/overcloud/overcloud-without-mergepy.yaml 404 Not Found [first 60 chars of response] &lt;html&gt;&lt;h4&gt;Not Found&lt;/h4&gt;The resource could not be found.&lt;
Started Mistral Workflow. Execution ID: 807a7047-a1c3-4686-9be7-11d73e72dfb8
2016-09-23 09:15:34Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2016-09-23 09:15:34Z [HorizonSecret]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:34Z [RabbitCookie]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:35Z [MysqlRootPassword]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:35Z [Networks]: CREATE_IN_PROGRESS state changed
2016-09-23 09:15:35Z [ServiceNetMap]: CREATE_COMPLETE state changed
2016-09-23 09:15:35Z [RabbitCookie]: CREATE_COMPLETE state changed
2016-09-23 09:15:35Z [HeatAuthEncryptionKey]: CREATE_COMPLETE state changed
2016-09-23 09:15:35Z [PcsdPassword]: CREATE_COMPLETE state changed
2016-09-23 09:15:35Z [HorizonSecret]: CREATE_COMPLETE state changed</pre>
<br />
END UPDATE<br />
</div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-11174454917445920972016-09-08T08:59:00.000-07:002017-05-09T08:37:23.952-07:00Red Hat's policy in regards of support packstack utility features allowing to perform production deployments<div dir="ltr" style="text-align: left;" trbidi="on">
<h4 style="text-align: left;">
<span style="font-weight: normal;"><span class="OMGM5KC-e-g"> <i> </i></span></span><i><span style="font-weight: normal;"><span class="OMGM5KC-e-g">It's hard to know what the right thing is.<br /> Once you know it's hard not to do it.<br /> Harry Fertig (Kingsley,The Confession film 1999) </span></span></i></h4>
<h4 style="text-align: left;">
<i><span style="font-weight: normal;"><span class="OMGM5KC-e-g"> </span></span></i></h4>
<h4 style="text-align: left;">
<span style="color: #b45f06;"><i><span style="font-weight: normal;"><span class="OMGM5KC-e-g">Views count </span></span></i></span><i><span style="font-weight: normal;"><span class="OMGM5KC-e-g"><span style="color: #b45f06;"><span class="OYKEW4D-c-f">is 1668 as of 05/09/17</span></span> </span></span></i></h4>
<br />
<h4 style="text-align: left;">
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">I believe that, in the meantime, Red Hat actually forces people</span></span></h4>
<h4 style="text-align: left;">
<span class="OMGM5KC-e-g">to shoot sparrows with a cannon,</span></h4>
<div style="text-align: left;">
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">presuming that customers (RDO community members) are not qualified</span></span></div>
<div style="text-align: left;">
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">to decide on their own when switching to TripleO (TripleO QuickStart), with its genuinely huge benefits such as a PCS/Corosync HA Controller cluster and automated deployment of Ceph cluster nodes via python-tripleoclient (which in turn performs the overcloud's Heat stack deployment on the undercloud node, as pre-required), does make sense, and when a simple Controller + N*Compute + Storage cluster might be painlessly deployed by packstack (considering the</span></span><br />
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">last task mentioned as out of scope) </span></span><span style="font-weight: normal;"><span class="OMGM5KC-e-g">with no IPMI requirements for the boxes on the landscape.</span></span><br />
<br />
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">Per <a href="https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface">https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface</a> </span></span><br />
<br />
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">The <b>Intelligent Platform Management Interface</b> (<b>IPMI</b>) is a set of <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Interface_%28computer_science%29" title="Interface (computer science)">computer interface</a>
specifications for an autonomous computer subsystem that provides
management and monitoring capabilities independently of the host
system's <a class="mw-redirect" href="https://en.wikipedia.org/wiki/CPU" title="CPU">CPU</a>, <a href="https://en.wikipedia.org/wiki/Firmware" title="Firmware">firmware</a> (<a href="https://en.wikipedia.org/wiki/BIOS" title="BIOS">BIOS</a> or <a class="mw-redirect" href="https://en.wikipedia.org/wiki/UEFI" title="UEFI">UEFI</a>) and <a href="https://en.wikipedia.org/wiki/Operating_system" title="Operating system">operating system</a>. IPMI defines a set of interfaces used by <a href="https://en.wikipedia.org/wiki/System_administrator" title="System administrator">system administrators</a> for <a href="https://en.wikipedia.org/wiki/Out-of-band_management" title="Out-of-band management">out-of-band management</a> of <a class="mw-redirect" href="https://en.wikipedia.org/wiki/Computer_systems" title="Computer systems">computer systems</a>
and monitoring of their operation. For example, IPMI provides a way to
manage a computer that may be powered off or otherwise unresponsive by
using a network connection to the hardware rather than to an operating
system or login shell. </span></span></div>
<div style="text-align: left;">
<span style="font-weight: normal;"><span class="OMGM5KC-e-g"><br /></span></span>
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">References</span></span><br />
<span style="font-weight: normal;"><span class="OMGM5KC-e-g">1. <a href="http://alesnosek.com/blog/2017/01/15/tripleo-installer-production-ready/" target="_blank">TripleO Installer, Production Ready?</a></span></span></div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-34290655466795130592016-08-23T08:30:00.000-07:002016-09-08T02:36:29.149-07:00Attempt to reproduce Deploying Kubernetes on Openstack using Heat by Ales Nosek (CentOS 7.2)<div dir="ltr" style="text-align: left;" trbidi="on">
UPDATE 09/07/2016<br />
Issue with RDO Mitaka ( CentOS repos based ) escalated to RH<br />
"<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1374183">Bug 1374183</a> -<span id="summary_alias_container">
<span id="short_desc_nonedit_display">Import Error for python-senlinclient python-zaqarclient python-magnumclient python-mistralclient</span>"
</span> <br />
END UPDATE <br />
<br />
UPDATE 09/05/2016<br />
An attempt on RDO Newton M3 results in the kubernetes stack hanging in CREATE_IN_PROGRESS, with the heat logs reporting that it is still waiting for the Master.<br />
The conditions from <a href="http://kubernetes.io/docs/getting-started-guides/openstack-heat/" target="_blank">http://kubernetes.io/docs/getting-started-guides/openstack-heat/</a><br />
for the python clients are satisfied in Newton (Master is running).<br />
However, RDO Newton M3 itself fails on a simple `nova boot ... ` issued on the Compute Node.<br />
END UPDATE<br />
<br />
UPDATE 08/27/2016<br />
I tested the updated <a href="http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1607.qcow2">CentOS-7-x86_64-GenericCloud-1607.qcow2</a> with python2-boto 2.41 preinstalled. It eliminates the "ERROR" during Master boot and allows logging into the Master via the ssh keypair exported in the build environment.
However, no httpd daemon is running in SSL mode inside the VM, so https://Master-IP obviously fails.<br />
END UPDATE<br />
<br />
I got negative results attempting to reproduce the blog post <a href="http://alesnosek.com/blog/2016/06/26/deploying-kubernetes-on-openstack-using-heat/" target="_blank">http://alesnosek.com/blog/2016/06/26/deploying-kubernetes-on-openstack-using-heat/</a>. Following below is my step-by-step procedure, which does eventually<br />
build a kubernetes heat stack (still not functional at the moment), along with troubleshooting of the ERRORS in the kubernetes VMs' boot logs. Even with those fixed, the kubernetes stack still isn't functional.<br />
<br />
A two-node cluster (Controller/Network/Compute && Storage) was deployed on RDO<br />
Mitaka.<br />
<br />
====================================<br />
Environment set up for kubernetes stack build via heat<br />
====================================<br />
[boris@CentOS72Server ~(keystone_build)]$ cat openrc.sh<br />
unset OS_SERVICE_TOKEN<br />
export OS_USERNAME=admin<br />
export OS_PASSWORD=dda05d8fb4554e93<br />
export OS_AUTH_URL=http://192.168.1.52:5000/v3<br />
export PS1='[\u@\h \W(keystone_build)]\$ '<br />
<br />
export OS_PROJECT_NAME=admin<br />
export OS_USER_DOMAIN_NAME=Default<br />
export OS_PROJECT_DOMAIN_NAME=Default<br />
export OS_IDENTITY_API_VERSION=3<br />
export OS_REGION_NAME=RegionOne<br />
export OS_TENANT_ID=6e72c704971d4da3845f0ae9982bca6b<br />
<br />
[boris@CentOS72Server ~(keystone_build)]$ cat openstack-heat.sh<br />
export KUBERNETES_PROVIDER=openstack-heat<br />
export STACK_NAME=kubernetes<br />
export KUBERNETES_KEYPAIR_NAME=oskey082316<br />
export NUMBER_OF_MINIONS=1<br />
export MAX_NUMBER_OF_MINIONS=1<br />
export EXTERNAL_NETWORK=public<br />
export CREATE_IMAGE=false<br />
export DOWNLOAD_IMAGE=false<br />
export IMAGE_ID=7133dcf8-21a7-4beb-be1d-4a1f9d972cd8<br />
export DNS_SERVER=83.221.202.254<br />
export SWIFT_SERVER_URL=http://192.168.1.54:8080/v1/AUTH_6e72c704971d4da3845f0ae9982bca6b<br />
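kube-up.sh is driven by these variables, and a missing one tends to fail late in the run; a small hypothetical pre-flight check (the variable list is taken from the exports above and is not exhaustive):

```python
import os

# Variables kube-up.sh with KUBERNETES_PROVIDER=openstack-heat expects
# (list taken from the exports above; not exhaustive)
REQUIRED_VARS = [
    "KUBERNETES_PROVIDER", "STACK_NAME", "KUBERNETES_KEYPAIR_NAME",
    "NUMBER_OF_MINIONS", "MAX_NUMBER_OF_MINIONS", "EXTERNAL_NETWORK",
    "IMAGE_ID", "DNS_SERVER", "SWIFT_SERVER_URL",
]

def missing_vars(environ=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [v for v in REQUIRED_VARS if not environ.get(v)]
```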
<br />
1. The Storage node was separated out during the packstack deployment ( localhost:8080 causes an issue on an AIO box due to the swift-proxy default endpoint )<br />
2. SSL connection via horizon was enabled in the packstack deployment.<br />
3. Security rules provide access to ports 443, 80, 22.<br />
========<br />
Results<br />
========<br />
[root@CentOS72Server ~(keystone_admin)]# nova list<br />
<pre>+--------------------------------------+--------------------------+--------+------------+-------------+---------------------------------------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------------------+--------+------------+-------------+---------------------------------------------------------------+
| f72bcec6-2def-4103-bb84-fcdc4a8af65e | CentOS72Devs01 | ACTIVE | - | Running | private=10.0.0.3, 192.168.1.150 |
| 462e5122-fe5b-486e-8b1d-4379345271d6 | kubernetes-master | ACTIVE | - | Running | kubernetes-fixed_network-htt6bujn7umv=10.0.0.3, 192.168.1.155 |
| 9c0f4e2c-1e9c-4370-8906-6b104b9bedbd | kubernetes-node-FhUQ6AJz | ACTIVE | - | Running | kubernetes-fixed_network-htt6bujn7umv=10.0.0.4, 192.168.1.156 |
+--------------------------------------+--------------------------+--------+------------+-------------+---------------------------------------------------------------+</pre>
<br />
[root@CentOS72Server ~(keystone_admin)]# openstack stack list<br />
<pre>+--------------------------------------+------------+-----------------+---------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time       | Updated Time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 57b4511f-d264-4a29-ab8c-9ce273a4d9bb | kubernetes | CREATE_COMPLETE | 2016-08-23T14:29:43 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+</pre>
[root@CentOS72Server ~(keystone_admin)]# nova secgroup-list<br />
<pre>+--------------------------------------+-----------------------------------------+------------------------+
| Id | Name | Description |
+--------------------------------------+-----------------------------------------+------------------------+
| 9763cead-5816-40c5-a6e0-50a821347e52 | default | Default security group |
| fc918814-db18-4be9-a319-4d8988b9060f | kubernetes-secgroup_base-7raauykt5owy | |
| 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 | kubernetes-secgroup_master-ztdnfr6paudu | |
| 08d5e1d7-0223-4acb-bf74-ed7230e98bf1 | kubernetes-secgroup_node-dt77fol3a7og | |
+--------------------------------------+-----------------------------------------+------------------------+</pre>
<br />
<br />
[boris@CentOS72Server kubernetes(keystone_build)]$ ./cluster/kube-up.sh<br />
... Starting cluster using provider: openstack-heat<br />
... calling verify-prereqs<br />
swift client installed<br />
glance client installed<br />
nova client installed<br />
heat client installed<br />
openstack client installed<br />
... calling kube-up<br />
kube-up for provider openstack-heat<br />
[INFO] Execute commands to create Kubernetes cluster<br />
[INFO] Uploading kubernetes-server-linux-amd64.tar.gz<br />
kubernetes-server.tar.gz<br />
[INFO] Uploading kubernetes-salt.tar.gz<br />
kubernetes-salt.tar.gz<br />
[INFO] Key pair already exists<br />
Stack not found: kubernetes<br />
[INFO] Create stack kubernetes<br />
<pre>+---------------------+-------------------------------------------------------------------------+
| Field | Value |
+---------------------+-------------------------------------------------------------------------+
| id | 57b4511f-d264-4a29-ab8c-9ce273a4d9bb |
| stack_name | kubernetes |
| description | Kubernetes cluster with one master and one or more worker nodes (as |
| | specified by the number_of_minions parameter, which defaults to 3). |
| | |
| creation_time | 2016-08-23T14:29:43 |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | |
+---------------------+-------------------------------------------------------------------------+</pre>
<br />
... calling validate-cluster<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_IN_PROGRESS<br />
Cluster status CREATE_COMPLETE<br />
cluster "openstack-kubernetes" set.<br />
user "openstack-kubernetes" set.<br />
context "openstack-kubernetes" set.<br />
switched to context "openstack-kubernetes".<br />
Wrote config for openstack-kubernetes to /home/boris/.kube/config<br />
Done, listing cluster services:<br />
<br />
The connection to the server 192.168.1.155 was refused - did you specify the right host or port?<br />
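The validate-cluster phase above is just a polling loop over the Heat stack status. A minimal sketch of that loop (the command arguments are a hypothetical stand-in for <code>openstack stack show -f value -c stack_status kubernetes</code>; <code>POLL_INTERVAL</code> is our own knob, not kube-up's):

```shell
#!/bin/sh
# Poll a status command until the Heat stack leaves CREATE_IN_PROGRESS.
# Returns 0 only if the final status is CREATE_COMPLETE.
wait_for_stack() {
  while true; do
    status=$("$@")      # e.g. openstack stack show -f value -c stack_status kubernetes
    echo "Cluster status $status"
    [ "$status" != "CREATE_IN_PROGRESS" ] && break
    sleep "${POLL_INTERVAL:-5}"   # interval is an assumption, not kube-up's value
  done
  [ "$status" = "CREATE_COMPLETE" ]
}
```

Against a real deployment this would be invoked as <code>wait_for_stack openstack stack show -f value -c stack_status kubernetes</code>.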
=========================================<br />
Status of <a href="http://textuploader.com/58zzu" target="_blank">heat-engine.log upon successful completion</a><br />
As far as I understand, python-senlinclient and python-zaqarclient<br />
are not packaged with RDO Mitaka on CentOS 7.2.<br />
See also:<br />
<a href="https://bugs.launchpad.net/heat/+bug/1544220" target="_blank">https://bugs.launchpad.net/heat/+bug/1544220</a><br />
<a href="https://bugs.launchpad.net/heat/+bug/1597593" target="_blank">https://bugs.launchpad.net/heat/+bug/1597593</a><br />
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=1294489" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=1294489</a><br />
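Until those clients are packaged, one possible workaround (a sketch only, untested against RDO Mitaka; pip availability and the RDO unit name are assumptions) is to install them from PyPI on the Heat node and restart heat-engine:

```shell
# Workaround sketch: install the clients heat-engine complains about from PyPI
# instead of RPM. Assumes pip is present and heat-engine runs under its usual
# RDO systemd unit name.
sudo pip install python-senlinclient python-zaqarclient
sudo systemctl restart openstack-heat-engine
```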
=========================================<br />
[boris@CentOS72Server kubernetes(keystone_build)]$ cat /home/boris/.kube/config<br />
<pre>apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://192.168.1.155
  name: openstack-kubernetes
contexts:
- context:
    cluster: openstack-kubernetes
    user: openstack-kubernetes
  name: openstack-kubernetes
current-context: openstack-kubernetes
kind: Config
preferences: {}
users:
- name: openstack-kubernetes
  user:</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivHh8f_ZOdXNJiFBsOmok4Vkkd9c23Lsaev0s_78OaRRWvVoGcf3_BSzNDwzIQxOfAXvQU-kVT_IifHXfQeHpLhUyr3zTAx3f1bdkoYTc9ECz3RwelmwHUipVBtx81ZHX1mTnmoQ/s1600/Screenshot+from+2016-08-23+18-26-57.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivHh8f_ZOdXNJiFBsOmok4Vkkd9c23Lsaev0s_78OaRRWvVoGcf3_BSzNDwzIQxOfAXvQU-kVT_IifHXfQeHpLhUyr3zTAx3f1bdkoYTc9ECz3RwelmwHUipVBtx81ZHX1mTnmoQ/s640/Screenshot+from+2016-08-23+18-26-57.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8KPMYzlw9xXnKfweANw-4dITvKEs548dXKdWyhzKVZ9uks4OQPkrVmucPQM6Z1YFheufAgO4QKixwwcalNu-zWuvzcvBOESnrVTNs7PZ3-xgXXtmv5m7lvx-LqQ6RSRAhNYhxVg/s1600/Screenshot+from+2016-08-23+18-27-27.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8KPMYzlw9xXnKfweANw-4dITvKEs548dXKdWyhzKVZ9uks4OQPkrVmucPQM6Z1YFheufAgO4QKixwwcalNu-zWuvzcvBOESnrVTNs7PZ3-xgXXtmv5m7lvx-LqQ6RSRAhNYhxVg/s640/Screenshot+from+2016-08-23+18-27-27.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_IJJBxigiTXHjdvDoX5M-1gECf6DRYs38cc__bDKDFHFN7DocCzhyhYTnEiuk_DUzAgBoOWVnQ08n3yi2kz0avrTCXhtBymLYmDTDibzYD20YKUACQDBharZqGyFM2YkWduI5DQ/s1600/Screenshot+from+2016-08-23+18-27-43.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_IJJBxigiTXHjdvDoX5M-1gECf6DRYs38cc__bDKDFHFN7DocCzhyhYTnEiuk_DUzAgBoOWVnQ08n3yi2kz0avrTCXhtBymLYmDTDibzYD20YKUACQDBharZqGyFM2YkWduI5DQ/s640/Screenshot+from+2016-08-23+18-27-43.png" width="640" /></a></div>
<br />
=======<br />
Finally<br />
======= <br />
[root@CentOS72Server ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 fc918814-db18-4be9-a319-4d8988b9060f <br />
Created a new security_group_rule:<br />
+-------------------+--------------------------------------+<br />
| Field | Value |<br />
+-------------------+--------------------------------------+<br />
| description | |<br />
| direction | ingress |<br />
| ethertype | IPv4 |<br />
| id | 83e43587-1f6f-4f1b-b8b9-85e353b4d030 |<br />
| port_range_max | |<br />
| port_range_min | |<br />
| protocol | icmp |<br />
| remote_group_id | |<br />
| remote_ip_prefix | 0.0.0.0/0 |<br />
| security_group_id | fc918814-db18-4be9-a319-4d8988b9060f |<br />
| tenant_id | 6e72c704971d4da3845f0ae9982bca6b |<br />
+-------------------+--------------------------------------+<br />
<br />
[root@CentOS72Server ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 29a1ff1d-be63-4bec-bac7-fdfa00a9c551<br />
Created a new security_group_rule:<br />
+-------------------+--------------------------------------+<br />
| Field | Value |<br />
+-------------------+--------------------------------------+<br />
| description | |<br />
| direction | ingress |<br />
| ethertype | IPv4 |<br />
| id | 275f5b0b-4521-4b40-abb8-97bc1ab9566f |<br />
| port_range_max | |<br />
| port_range_min | |<br />
| protocol | icmp |<br />
| remote_group_id | |<br />
| remote_ip_prefix | 0.0.0.0/0 |<br />
| security_group_id | 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 |<br />
| tenant_id | 6e72c704971d4da3845f0ae9982bca6b |<br />
+-------------------+--------------------------------------+<br />
<br />
[root@CentOS72Server ~(keystone_admin)]# nova secgroup-list<br />
+--------------------------------------+-----------------------------------------+------------------------+<br />
| Id | Name | Description |<br />
+--------------------------------------+-----------------------------------------+------------------------+<br />
| 9763cead-5816-40c5-a6e0-50a821347e52 | default | Default security group |<br />
| fc918814-db18-4be9-a319-4d8988b9060f | kubernetes-secgroup_base-7raauykt5owy | |<br />
| 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 | kubernetes-secgroup_master-ztdnfr6paudu | |<br />
| 08d5e1d7-0223-4acb-bf74-ed7230e98bf1 | kubernetes-secgroup_node-dt77fol3a7og | |<br />
+--------------------------------------+-----------------------------------------+------------------------+<br />
<br />
[root@CentOS72Server ~(keystone_admin)]# neutron security-group-rule-create --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 08d5e1d7-0223-4acb-bf74-ed7230e98bf1<br />
Created a new security_group_rule:<br />
+-------------------+--------------------------------------+<br />
| Field | Value |<br />
+-------------------+--------------------------------------+<br />
| description | |<br />
| direction | ingress |<br />
| ethertype | IPv4 |<br />
| id | 8ef7ae78-42ff-4f82-baab-ce41e5e90cc8 |<br />
| port_range_max | |<br />
| port_range_min | |<br />
| protocol | icmp |<br />
| remote_group_id | |<br />
| remote_ip_prefix | 0.0.0.0/0 |<br />
| security_group_id | 08d5e1d7-0223-4acb-bf74-ed7230e98bf1 |<br />
| tenant_id | 6e72c704971d4da3845f0ae9982bca6b |<br />
+-------------------+--------------------------------------+<br />
<br />
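The three identical <code>security-group-rule-create</code> calls above can be folded into one helper; <code>NEUTRON_CMD</code> is our own knob (it defaults to the real <code>neutron</code> client) so the helper can be dry-run with <code>echo</code>:

```shell
#!/bin/sh
# Apply the same ingress ICMP rule to one secgroup; loop it over the IDs.
# NEUTRON_CMD defaults to the real client; set NEUTRON_CMD=echo for a dry run.
add_icmp_rule() {
  ${NEUTRON_CMD:-neutron} security-group-rule-create \
      --protocol icmp --direction ingress --remote-ip-prefix 0.0.0.0/0 "$1"
}

# Pass the secgroup IDs as script arguments.
for sg in "$@"; do
  add_icmp_rule "$sg"
done
```

Invoked as, e.g., <code>./add-icmp.sh fc918814-db18-4be9-a319-4d8988b9060f 29a1ff1d-be63-4bec-bac7-fdfa00a9c551 08d5e1d7-0223-4acb-bf74-ed7230e98bf1</code>.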
Can ping 192.168.1.155 and 192.168.1.156.<br />
<br />
Security rules for each Kubernetes secgroup have ports 1-65535 open; however:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVlFUl4ceTXgTlp0kYH50aZgZ3GU2H9TjSX96Tga6TDiP7a8c-VQm_u_4u6t3I4E3qBLFC3JQGJBSruFZFgROcgRYLqX7ZrAcj2Y2dC2os712worCxRFGau6dO42I-NhuiGRINpQ/s1600/Screenshot+from+2016-08-23+19-44-39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjVlFUl4ceTXgTlp0kYH50aZgZ3GU2H9TjSX96Tga6TDiP7a8c-VQm_u_4u6t3I4E3qBLFC3JQGJBSruFZFgROcgRYLqX7ZrAcj2Y2dC2os712worCxRFGau6dO42I-NhuiGRINpQ/s640/Screenshot+from+2016-08-23+19-44-39.png" width="640" /></a></div>
<br />
==========================<br />
Kubernetes Master VM boot log contains<br />
===========================<br />
<br />
<br />
<pre>[  OK  ] Started Update UTMP about System Runlevel Changes.
[ 380.104758] cloud-init[4161]: [ERROR ] boto_route53 requires at least boto 2.35.0.
[ 455.439213] cloud-init[4161]: [ERROR ] boto_route53 requires at least boto 2.35.0.
[ 469.546079] cloud-init[4161]: [WARNING ] /usr/lib/python2.7/site-packages/salt/states/cmd.py:1041: DeprecationWarning: The legacy user/group arguments are deprecated. Replace them with runas. These arguments will be removed in Salt Oxygen.
[ 521.559170] cloud-init[4161]: [WARNING ] State for file: /var/log/kube-apiserver.log - Neither 'source' nor 'contents' nor 'contents_pillar' nor 'contents_grains' was defined, yet 'replace' was set to 'True'. As there is no source to replace the file with, 'replace' has been set to 'False' to avoid reading the file unnecessarily.
[ 521.723063] cloud-init[4161]: [ERROR ] boto_route53 requires at least boto 2.35.0.</pre>
<br />
Even if I check out the release branch:<br />
<br />
<pre>$ git clone https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
$ git checkout origin/release-1.3.0
$ make quick-release</pre>
The same error appears in the master VM boot log.<br />
<br />
I believe the CentOS 7.2 image has to be updated to python2-boto 2.41 from EPEL 7 during the cloud-init run (first boot).<br />
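A sketch of that fix — pulling a newer python2-boto from EPEL 7 on first boot (e.g. from a cloud-init <code>runcmd</code>), before the Salt states run; package name and available version are assumptions:

```shell
# Upgrade boto on the image so boto_route53 sees >= 2.35.0.
# Assumes EPEL 7 provides a sufficiently recent python2-boto.
yum -y install epel-release
yum -y install python2-boto
python2 -c 'import boto; print(boto.__version__)'   # expect >= 2.35.0
```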
<br />
<br />
References<br />
<a href="http://alesnosek.com/blog/2016/06/26/deploying-kubernetes-on-openstack-using-heat/" target="_blank">http://alesnosek.com/blog/2016/06/26/deploying-kubernetes-on-openstack-using-heat/</a></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-79084829649899991942016-08-21T08:28:00.000-07:002016-08-24T06:53:26.906-07:00Emulation Triple0 QuickStart HA Controller's Cluster failover<div dir="ltr" style="text-align: left;" trbidi="on">
The procedure below identifies the controller that holds RouterDSA in the active state<br />
and shuts down/starts up that controller (controller-1 in this particular case).<br />
Then we log into controller-1 and restart the pcs cluster on that controller;<br />
afterwards we run `pcs resource cleanup` for several resources, which<br />
brings the cluster nodes back to their proper status.<br />
<br />
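The first step, finding which controller hosts the active L3 agent, can be scripted. The helper below parses the client's tabular output on stdin (assuming an unwrapped table, i.e. a terminal wide enough that hostnames are not split across two lines as in the listing below):

```shell
#!/bin/sh
# Print the host column of the row whose ha_state is "active".
# Usage: neutron l3-agent-list-hosting-router RouterDSA | active_l3_host
active_l3_host() {
  # Fields split on '|': $3 is the host column, $6 is ha_state.
  awk -F'|' '$6 ~ /active/ { gsub(/ /, "", $3); print $3 }'
}
```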
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitffgz36DgwA_k-mEqUN5BNmYA0B2POW-d1Nde1N9twfDIc0nGUtrCufWlrUmkFxpfcSvNK6nIbHACFzNE98QAgXYFsczXwjpjuwnB2hUzch1Lu_hmt3As50Wdf-ODdMqCpfgwnA/s1600/Screenshot+from+2016-08-21+19-00-24.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEitffgz36DgwA_k-mEqUN5BNmYA0B2POW-d1Nde1N9twfDIc0nGUtrCufWlrUmkFxpfcSvNK6nIbHACFzNE98QAgXYFsczXwjpjuwnB2hUzch1Lu_hmt3As50Wdf-ODdMqCpfgwnA/s640/Screenshot+from+2016-08-21+19-00-24.png" width="640" /></a></div>
<br />
<br />
[root@overcloud-controller-0 ~]# neutron l3-agent-list-hosting-router RouterDSA<br />
+-----------------------------+-----------------------------+----------------+-------+----------+<br />
| id | host | admin_state_up | alive | ha_state |<br />
+-----------------------------+-----------------------------+----------------+-------+----------+<br />
<span style="color: #b45f06;">| 558fe2d4-a709-482f- | overcloud- | True | :-) | active |</span><br />
<span style="color: #b45f06;">| 85f2-9bb9835cf360 | controller-1.localdomain | | | |</span><br />
| ae0f67ce-732b- | overcloud- | True | :-) | standby |<br />
| 4cb2-9b52-d15c22211972 | controller-0.localdomain | | | |<br />
| fd9bfd34-9e36-4dac-a350-d18 | overcloud- | True | :-) | standby |<br />
| fd1c3489b | controller-2.localdomain | | | |<br />
+-----------------------------+-----------------------------+----------------+-------+----------+<br />
[root@overcloud-controller-0 ~]# logout<br />
[heat-admin@overcloud-controller-0 ~]$ logout<br />
Connection to 192.0.2.16 closed.<br />
[stack@undercloud ~]$ nova list<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| 5387385d-69a1-40ab-a77a-40d97949dc16 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.16 |<br />
| 456031a7-21c4-497f-a7d8-baa3d403ee2f | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.14 |<br />
| 80b6ce3a-23a0-42d3-a1b3-fec22ca8f615 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.17 |<br />
| b5a8c17c-e170-4f66-a5dd-846546afcfce | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.13 |<br />
| c10e25b3-6732-4afb-b51c-5d9f859bd7d6 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.15 |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
<span style="color: #b45f06;">[stack@undercloud ~]$ nova stop overcloud-controller-1</span><br />
<span style="color: #b45f06;">Request to stop server overcloud-controller-1 has been accepted.</span><br />
[stack@undercloud ~]$ nova list<br />
+--------------------------------------+-------------------------+---------+------------+-------------+---------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------------------+---------+------------+-------------+---------------------+<br />
| 5387385d-69a1-40ab-a77a-40d97949dc16 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.16 |<br />
<span style="color: #b45f06;">| 456031a7-21c4-497f-a7d8-baa3d403ee2f | overcloud-controller-1 | SHUTOFF | - | Shutdown | ctlplane=192.0.2.14 |</span><br />
| 80b6ce3a-23a0-42d3-a1b3-fec22ca8f615 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.17 |<br />
| b5a8c17c-e170-4f66-a5dd-846546afcfce | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.13 |<br />
| c10e25b3-6732-4afb-b51c-5d9f859bd7d6 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.15 |<br />
+--------------------------------------+-------------------------+---------+------------+-------------+---------------------+<br />
<span style="color: #b45f06;">[stack@undercloud ~]$ nova start overcloud-controller-1</span><br />
<span style="color: #b45f06;">Request to start server overcloud-controller-1 has been accepted.</span><br />
[stack@undercloud ~]$ nova list<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| 5387385d-69a1-40ab-a77a-40d97949dc16 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.16 |<br />
<span style="color: #b45f06;">| 456031a7-21c4-497f-a7d8-baa3d403ee2f | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.14 |</span><br />
| 80b6ce3a-23a0-42d3-a1b3-fec22ca8f615 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.17 |<br />
| b5a8c17c-e170-4f66-a5dd-846546afcfce | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.13 |<br />
| c10e25b3-6732-4afb-b51c-5d9f859bd7d6 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.15 |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
<br />
[stack@undercloud ~]$ ssh heat-admin@192.0.2.14<br />
The authenticity of host '192.0.2.14 (192.0.2.14)' can't be established.<br />
ECDSA key fingerprint is a3:e6:de:2e:2b:45:e4:33:3d:d0:75:e5:b7:7f:da:0a.<br />
Are you sure you want to continue connecting (yes/no)? yes<br />
Warning: Permanently added '192.0.2.14' (ECDSA) to the list of known hosts.<br />
<br />
[heat-admin@overcloud-controller-1 ~]$ sudo su -<br />
[root@overcloud-controller-1 ~]# pcs status<br />
Cluster name: tripleo_cluster<br />
Last updated: Sun Aug 21 15:12:39 2016 Last change: Sun Aug 21 13:24:42 2016 by root via cibadmin on overcloud-controller-1<br />
Stack: corosync<br />
Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum<br />
3 nodes and 127 resources configured<br />
<br />
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Full list of resources:<br />
<br />
ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
ip-172.16.3.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: haproxy-clone [haproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: galera-master [galera]<br />
Masters: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Slaves: [ overcloud-controller-1 ]<br />
Clone Set: memcached-clone [memcached]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
ip-10.0.0.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: rabbitmq-clone [rabbitmq]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-core-clone [openstack-core]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Master/Slave Set: redis-master [redis]<br />
Masters: [ overcloud-controller-0 ]<br />
Slaves: [ overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: mongod-clone [mongod]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped<br />
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-clone [openstack-heat-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-glance-api-clone [openstack-glance-api]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-nova-api-clone [openstack-nova-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]<br />
Started: [ overcloud-controller-0 ]<br />
Stopped: [ overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: delay-clone [delay]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: neutron-server-clone [neutron-server]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: httpd-clone [httpd]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]<br />
Started: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Stopped: [ overcloud-controller-1 ]<br />
<br />
Failed Actions:<br />
* rabbitmq_monitor_10000 on overcloud-controller-0 'not running' (7): call=81, status=complete, exitreason='none',<br />
last-rc-change='Sun Aug 21 15:11:13 2016', queued=0ms, exec=0ms<br />
* rabbitmq_monitor_10000 on overcloud-controller-2 'not running' (7): call=79, status=complete, exitreason='none',<br />
last-rc-change='Sun Aug 21 15:11:13 2016', queued=0ms, exec=0ms<br />
<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled<br />
<span style="color: #b45f06;">[root@overcloud-controller-1 ~]# pcs cluster stop</span><br />
<span style="color: #b45f06;">Stopping Cluster (pacemaker)... Stopping Cluster (corosync)...</span><br />
<span style="color: #b45f06;">[root@overcloud-controller-1 ~]# pcs cluster start</span><br />
<span style="color: #b45f06;">Starting Cluster...</span><br />
[root@overcloud-controller-1 ~]# <br />
Broadcast message from systemd-journald@overcloud-controller-1.localdomain (Sun 2016-08-21 15:16:07 UTC):<br />
<br />
haproxy[16997]: proxy nova_ec2 has no server available!<br />
<br />
======================================<br />
Script start.sh [ <a href="http://docs.openstack.org/developer/tripleo-docs/post_deployment/replace_controller.html" target="_blank">1</a> ]<br />
======================================<br />
#!/bin/bash -x<br />
<span style="color: #b45f06;">pcs resource cleanup rabbitmq-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup neutron-server-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup openstack-nova-api-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup openstack-nova-consoleauth-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup openstack-heat-engine-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup openstack-cinder-api-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup openstack-glance-registry-clone ;</span><br />
<span style="color: #b45f06;">sleep 10</span><br />
<span style="color: #b45f06;">pcs resource cleanup httpd-clone</span><br />
=======================================<br />
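The repetitive start.sh above can also be written as a loop over the clone resource names; <code>PCS_CMD</code> and <code>CLEANUP_PAUSE</code> are our own knobs (defaulting to the real <code>pcs</code> and the script's 10-second pause) so the loop can be dry-run:

```shell
#!/bin/sh
# Loop version of start.sh: clean up each clone resource with a pause between.
cleanup_clones() {
  for res in rabbitmq neutron-server openstack-nova-api \
             openstack-nova-consoleauth openstack-heat-engine \
             openstack-cinder-api openstack-glance-registry httpd; do
    ${PCS_CMD:-pcs} resource cleanup "${res}-clone"
    sleep "${CLEANUP_PAUSE:-10}"
  done
}
```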
<br />
<br />
<span style="color: #b45f06;">[root@overcloud-controller-1 ~]# . ./start.sh</span><br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up rabbitmq:0 on overcloud-controller-0, removing fail-count-rabbitmq<br />
Cleaning up rabbitmq:0 on overcloud-controller-1, removing fail-count-rabbitmq<br />
Cleaning up rabbitmq:0 on overcloud-controller-2, removing fail-count-rabbitmq<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up neutron-server:0 on overcloud-controller-0, removing fail-count-neutron-server<br />
Cleaning up neutron-server:0 on overcloud-controller-1, removing fail-count-neutron-server<br />
Cleaning up neutron-server:0 on overcloud-controller-2, removing fail-count-neutron-server<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up openstack-nova-api:0 on overcloud-controller-0, removing fail-count-openstack-nova-api<br />
Cleaning up openstack-nova-api:0 on overcloud-controller-1, removing fail-count-openstack-nova-api<br />
Cleaning up openstack-nova-api:0 on overcloud-controller-2, removing fail-count-openstack-nova-api<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up openstack-nova-consoleauth:0 on overcloud-controller-0, removing fail-count-openstack-nova-consoleauth<br />
Cleaning up openstack-nova-consoleauth:0 on overcloud-controller-1, removing fail-count-openstack-nova-consoleauth<br />
Cleaning up openstack-nova-consoleauth:0 on overcloud-controller-2, removing fail-count-openstack-nova-consoleauth<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up openstack-heat-engine:0 on overcloud-controller-0, removing fail-count-openstack-heat-engine<br />
Cleaning up openstack-heat-engine:0 on overcloud-controller-1, removing fail-count-openstack-heat-engine<br />
Cleaning up openstack-heat-engine:0 on overcloud-controller-2, removing fail-count-openstack-heat-engine<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up openstack-cinder-api:0 on overcloud-controller-0, removing fail-count-openstack-cinder-api<br />
Cleaning up openstack-cinder-api:0 on overcloud-controller-1, removing fail-count-openstack-cinder-api<br />
Cleaning up openstack-cinder-api:0 on overcloud-controller-2, removing fail-count-openstack-cinder-api<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up openstack-glance-registry:0 on overcloud-controller-0, removing fail-count-openstack-glance-registry<br />
Cleaning up openstack-glance-registry:0 on overcloud-controller-1, removing fail-count-openstack-glance-registry<br />
Cleaning up openstack-glance-registry:0 on overcloud-controller-2, removing fail-count-openstack-glance-registry<br />
<br />
Waiting for 3 replies from the CRMd... OK<br />
Cleaning up httpd:0 on overcloud-controller-0, removing fail-count-httpd<br />
Cleaning up httpd:0 on overcloud-controller-1, removing fail-count-httpd<br />
Cleaning up httpd:0 on overcloud-controller-2, removing fail-count-httpd<br />
<br />
[root@overcloud-controller-1 ~]# pcs status<br />
Cluster name: tripleo_cluster<br />
Last updated: Sun Aug 21 15:18:04 2016 Last change: Sun Aug 21 15:17:57 2016 by hacluster via crmd on overcloud-controller-0<br />
Stack: corosync<br />
Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum<br />
3 nodes and 127 resources configured<br />
<br />
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Full list of resources:<br />
<br />
ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
ip-172.16.3.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: haproxy-clone [haproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: galera-master [galera]<br />
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: memcached-clone [memcached]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
ip-10.0.0.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: rabbitmq-clone [rabbitmq]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-core-clone [openstack-core]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: redis-master [redis]<br />
Masters: [ overcloud-controller-0 ]<br />
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: mongod-clone [mongod]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0<br />
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-clone [openstack-heat-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-api-clone [openstack-glance-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-api-clone [openstack-nova-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: delay-clone [delay]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-server-clone [neutron-server]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: httpd-clone [httpd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled<br />
[root@overcloud-controller-1 ~]# logout<br />
[heat-admin@overcloud-controller-1 ~]$ logout<br />
Connection to 192.0.2.14 closed.<br />
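The `pcs status` checks above can also be scripted. A minimal sketch, assuming a POSIX shell; the sample input is copied from the transcript, and on a live controller you would pipe the real `pcs status` output instead of the heredoc:<br />

```shell
# Count the controllers reported on the "Online:" line of `pcs status`
# and compare against the expected cluster size.
check_online() {
    expected=3
    online=$(grep '^Online:' | grep -o 'overcloud-controller-[0-9]' | wc -l)
    if [ "$online" -eq "$expected" ]; then
        echo "all $expected controllers online"
    else
        echo "only $online of $expected controllers online"
    fi
}

# Sample line taken from the transcript; on a controller run
#   pcs status | check_online
check_online <<'EOF'
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
EOF
```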
[stack@undercloud ~]$ ssh heat-admin@192.0.2.16<br />
Last login: Sun Aug 21 15:08:18 2016 from 192.0.2.1<br />
[heat-admin@overcloud-controller-0 ~]$ sudo su -<br />
Last login: Sun Aug 21 15:08:24 UTC 2016 on pts/0<br />
[root@overcloud-controller-0 ~]# . keystonerc_admin<br />
[root@overcloud-controller-0 ~]# neutron l3-agent-list-hosting-router RouterDSA<br />
+-----------------------------+-----------------------------+----------------+-------+----------+<br />
| id | host | admin_state_up | alive | ha_state |<br />
+-----------------------------+-----------------------------+----------------+-------+----------+<br />
| 558fe2d4-a709-482f- | overcloud- | True | :-) | standby |<br />
| 85f2-9bb9835cf360 | controller-1.localdomain | | | |<br />
| ae0f67ce-732b- | overcloud- | True | :-) | standby |<br />
| 4cb2-9b52-d15c22211972 | controller-0.localdomain | | | |<br />
<span style="color: #b45f06;">| fd9bfd34-9e36-4dac-a350-d18 | overcloud- | True | :-) | active |</span><br />
<span style="color: #b45f06;">| fd1c3489b | controller-2.localdomain | | | |</span><br />
+-----------------------------+-----------------------------+----------------+-------+----------+<br />
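The wrapped table makes it hard to see at a glance which agent hosts the active router. A hedged sketch: the `-f value` machine-readable output format is a standard cliff-based client option that avoids the line wrapping, and the host/state values below are copied from the table above:<br />

```shell
# Print the host whose HA router state is "active", given the
# two-column "host ha_state" output of:
#   neutron l3-agent-list-hosting-router RouterDSA -f value -c host -c ha_state
active_l3_host() {
    awk '$2 == "active" { print $1 }'
}

# Sample values from the transcript above.
active_l3_host <<'EOF'
overcloud-controller-1.localdomain standby
overcloud-controller-0.localdomain standby
overcloud-controller-2.localdomain active
EOF
```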
<br />
====================================<br />
Verification of Galera DB sync on the controllers<br />
====================================<br />
<br />
[root@overcloud-controller-0 ~]# clustercheck<br />
HTTP/1.1 200 OK<br />
Content-Type: text/plain<br />
Connection: close<br />
Content-Length: 32<br />
<br />
Galera cluster node is synced.<br />
<br />
[root@overcloud-controller-0 ~]# logout<br />
[heat-admin@overcloud-controller-0 ~]$ logout<br />
Connection to 192.0.2.16 closed.<br />
<br />
[stack@undercloud ~]$ ssh heat-admin@192.0.2.14<br />
Last login: Sun Aug 21 15:12:27 2016 from 192.0.2.1<br />
[heat-admin@overcloud-controller-1 ~]$ sudo su -<br />
Last login: Sun Aug 21 15:12:34 UTC 2016 on pts/0<br />
<br />
[root@overcloud-controller-1 ~]# clustercheck<br />
HTTP/1.1 200 OK<br />
Content-Type: text/plain<br />
Connection: close<br />
Content-Length: 32<br />
<br />
Galera cluster node is synced.<br />
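clustercheck answers with a bare HTTP response (typically served via xinetd); HAProxy keys on the status line, and a monitoring script can do the same. A minimal sketch, with the sample response copied from the transcript above:<br />

```shell
# Report "synced" only when the clustercheck response carries HTTP 200,
# i.e. the local Galera node is synced.
galera_synced() {
    head -n 1 | grep -q '200 OK' && echo synced || echo not-synced
}

# Sample response from the transcript; on a controller you could feed
# it the real clustercheck output instead.
galera_synced <<'EOF'
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

Galera cluster node is synced.
EOF
```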
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivvHLdJ3sakjymZWgP8EJXYq5iAy-Hx-ZDuMWHMo8UFmy5B7UlEy4E0FmZ5ZqCCoo15IZW8knKEIW2gRVhjiz9QDfXCcGyYVyLr-pOzngIJCkgGaeZp2Cj7Wxy87kp1VqEW8wZDA/s1600/Screenshot+from+2016-08-21+19-01-50.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivvHLdJ3sakjymZWgP8EJXYq5iAy-Hx-ZDuMWHMo8UFmy5B7UlEy4E0FmZ5ZqCCoo15IZW8knKEIW2gRVhjiz9QDfXCcGyYVyLr-pOzngIJCkgGaeZp2Cj7Wxy87kp1VqEW8wZDA/s640/Screenshot+from+2016-08-21+19-01-50.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFoxvVLzBJy0r7BkHoKY9sz7HHCZ_hcDrhCSbvZjqsd-yAGZ343TDnkAuXI3AfDK5_0y86qZvRMHUHqFg6WjpnaMdhBZ7RyTp3liqLEhq3a7diHab-oeL-ZDXepVnTx5HUBvWotg/s1600/Screenshot+from+2016-08-21+19-02-02.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFoxvVLzBJy0r7BkHoKY9sz7HHCZ_hcDrhCSbvZjqsd-yAGZ343TDnkAuXI3AfDK5_0y86qZvRMHUHqFg6WjpnaMdhBZ7RyTp3liqLEhq3a7diHab-oeL-ZDXepVnTx5HUBvWotg/s640/Screenshot+from+2016-08-21+19-02-02.png" width="640" /></a></div>
<br />
==================<br />
Setup details<br />
==================<br />
<br />
[boris@fedora24wks tripleo-quickstart]$ cat ./config/general_config/ha.yml<br />
# Deploy an HA openstack environment.<br />
#<br />
# This will require (6144 * 5) == approx. 30GB for the overcloud<br />
# nodes, plus another 8GB for the undercloud, for a total of around<br />
# 38GB.<br />
control_memory: 6144<br />
compute_memory: 6144<br />
default_vcpu: 2<br />
<br />
undercloud_memory: 8192<br />
<br />
# Giving the undercloud additional CPUs can greatly improve heat's<br />
# performance (and result in a shorter deploy time).<br />
undercloud_vcpu: 2<br />
<br />
# Create three controller nodes and two compute nodes.<br />
overcloud_nodes:<br />
- name: control_0<br />
flavor: control<br />
- name: control_1<br />
flavor: control<br />
- name: control_2<br />
flavor: control<br />
<br />
- name: compute_0<br />
flavor: compute<br />
- name: compute_1<br />
flavor: compute<br />
<br />
# We don't need introspection in a virtual environment (because we are<br />
# creating all the "hardware", we already know the necessary<br />
# information).<br />
step_introspect: true<br />
<br />
# Tell tripleo about our environment.<br />
network_isolation: true<br />
extra_args: >-<br />
--control-scale 3 --compute-scale 2 --neutron-network-type vxlan<br />
--neutron-tunnel-types vxlan<br />
--ntp-server pool.ntp.org<br />
test_tempest: false<br />
test_ping: true<br />
enable_pacemaker: true<br />
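The RAM figures in the config's header comment can be recomputed directly from the file, which is useful after editing node counts or memory sizes. A sketch; the node counts (3 control + 2 compute) match the overcloud_nodes list above, and the heredoc mirrors the memory keys of ha.yml:<br />

```shell
# Sum overcloud (3 control + 2 compute) and undercloud memory from
# the quickstart config's memory keys.
estimate_ram() {
    awk -F': ' '
        /^control_memory:/    { control = $2 }
        /^compute_memory:/    { compute = $2 }
        /^undercloud_memory:/ { uc = $2 }
        END {
            total = control * 3 + compute * 2 + uc
            printf "total RAM needed: %d MB (~%d GB)\n", total, total / 1024
        }
    '
}

# Values mirrored from ha.yml above; `estimate_ram < ha.yml` also works,
# since unrelated keys are ignored.
estimate_ram <<'EOF'
control_memory: 6144
compute_memory: 6144
undercloud_memory: 8192
EOF
```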
<br />
##################################<br />
Virtual Environment Setup Complete<br />
##################################<br />
<br />
Access the undercloud by:<br />
<br />
ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud<br />
<br />
There are scripts in the home directory to continue the deploy:<br />
<br />
overcloud-deploy.sh will deploy the overcloud<br />
overcloud-deploy-post.sh will do any post-deploy configuration<br />
overcloud-validate.sh will run post-deploy validation<br />
<br />
Alternatively, you can ignore these scripts and follow the upstream docs,<br />
starting from the overcloud deploy section:<br />
<br />
http://ow.ly/1Vc1301iBlb<br />
<br />
##################################<br />
Virtual Environment Setup Complete<br />
##################################<br />
<br />
<span style="color: #b45f06;">[boris@fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud</span><br />
Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts.<br />
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.<br />
Last login: Wed Aug 24 12:13:16 2016 from gateway<br />
[stack@undercloud ~]$ sudo su<br />
<br />
<br />
[root@undercloud stack]# cd /etc/yum.repos.d<br />
[root@undercloud yum.repos.d]# ls -l<br />
total 40<br />
-rw-r--r--. 1 root root 1664 Dec 9 2015 CentOS-Base.repo<br />
-rw-r--r--. 1 root root 1057 Aug 24 02:58 CentOS-Ceph-Hammer.repo<br />
-rw-r--r--. 1 root root 1309 Dec 9 2015 CentOS-CR.repo<br />
-rw-r--r--. 1 root root 649 Dec 9 2015 CentOS-Debuginfo.repo<br />
-rw-r--r--. 1 root root 290 Dec 9 2015 CentOS-fasttrack.repo<br />
-rw-r--r--. 1 root root 630 Dec 9 2015 CentOS-Media.repo<br />
-rw-r--r--. 1 root root 1331 Dec 9 2015 CentOS-Sources.repo<br />
-rw-r--r--. 1 root root 1952 Dec 9 2015 CentOS-Vault.repo<br />
<span style="color: #b45f06;">-rw-r--r--. 1 root root 162 Aug 24 02:58 delorean-deps.repo<br />-rw-r--r--. 1 root root 220 Aug 24 02:58 delorean.repo</span><br />
<span style="color: #b45f06;"></span><br />
====================================================<br />
Delorean repo files have been installed via quickstart on the undercloud<br />
====================================================<br />
[root@undercloud yum.repos.d]# cat delorean-deps.repo<br />
[delorean-mitaka-testing]<br />
name=dlrn-mitaka-testing<br />
baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-mitaka/<br />
enabled=1<br />
gpgcheck=0<br />
priority=2<br />
<br />
[root@undercloud yum.repos.d]# cat delorean.repo<br />
[delorean]<br />
name=delorean-openstack-rally-3909299306233247d547bad265a1adb78adfb3d4<br />
baseurl=http://trunk.rdoproject.org/centos7-mitaka/39/09/3909299306233247d547bad265a1adb78adfb3d4_4e6dfa3c<br />
enabled=1<br />
gpgcheck=0<br />
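With two Delorean repos plus the CentOS cloud repo in play, it helps to summarize which stanzas are enabled and at what priority (yum-plugin-priorities treats a missing `priority` as 99). A sketch; the sample stanza is copied from delorean-deps.repo above, and feeding it `cat /etc/yum.repos.d/*.repo` covers them all:<br />

```shell
# Summarize repo id, enabled flag, and priority from yum .repo stanzas.
repo_summary() {
    awk -F= '
        function flush() {
            if (repo != "") print repo, "enabled=" en, "priority=" prio
        }
        /^\[/ { flush(); gsub(/[\[\]]/, ""); repo = $0; en = "?"; prio = 99 }
        $1 == "enabled"  { en = $2 }
        $1 == "priority" { prio = $2 }
        END { flush() }
    '
}

# Sample stanza from delorean-deps.repo above.
repo_summary <<'EOF'
[delorean-mitaka-testing]
name=dlrn-mitaka-testing
baseurl=http://buildlogs.centos.org/centos/7/cloud/$basearch/openstack-mitaka/
enabled=1
gpgcheck=0
priority=2
EOF
```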
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlg7tqvAa8uki7Yn1-HFfAvNOEreauhqPg0XfK2_OHGyMvEmBxVsKGZewefN-8kv1AII_4gSbZrEdFuLoOEO7tqjKIlHAoho5uBk_TadWRFGE1FTgWh9czuKcWc_HLJuQoEXzOZw/s1600/Screenshot+from+2016-08-24+16-52-08.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlg7tqvAa8uki7Yn1-HFfAvNOEreauhqPg0XfK2_OHGyMvEmBxVsKGZewefN-8kv1AII_4gSbZrEdFuLoOEO7tqjKIlHAoho5uBk_TadWRFGE1FTgWh9czuKcWc_HLJuQoEXzOZw/s640/Screenshot+from+2016-08-24+16-52-08.png" width="640" /></a></div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-37560348697917312922016-08-17T23:11:00.001-07:002016-09-03T14:34:55.828-07:00TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots ( newly polished ) <div dir="ltr" style="text-align: left;" trbidi="on">
<h4 style="text-align: left;">
UPDATE 09/03/2016</h4>
<br />
The undercloud VM now gets created with autostart enabled at boot.<br />
So just change the permissions and allow the services<br />
to start on the undercloud (5-7 min).<br />
<br />
Upon deployment completion:<br />
[stack@ServerTQS72 ~]$ virsh dominfo undercloud | grep -i autostart<br />
Autostart: enable<br />
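Whether libvirt will bring the undercloud back after a VIRTHOST reboot can be checked by parsing the same `virsh dominfo` output shown above; if it reports "disable", `virsh autostart undercloud` turns it on. A sketch, with the sample line taken from the transcript:<br />

```shell
# Extract the Autostart field from `virsh dominfo <domain>` output.
autostart_state() {
    awk -F': *' '$1 == "Autostart" { print $2 }'
}

# Sample line from the transcript; live use:
#   virsh dominfo undercloud | autostart_state
autostart_state <<'EOF'
Autostart:      enable
EOF
```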
<h4 style="text-align: left;">
<br />
END UPDATE </h4>
<h4 style="text-align: left;">
<br />
UPDATE 08/21/2016</h4>
<br />
If the virt tools (virsh, virt-manager) stop recognizing the running<br />
qemu-kvm process of the undercloud as a VM, issue `sudo shutdown -P now` over the connection<br />
`ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud`.<br />
This results in a graceful shutdown of the undercloud's qemu-kvm process on the VIRTHOST.<br />
<h4 style="text-align: left;">
<br />
END UPDATE </h4>
<br />
<br />
This post follows up <a href="http://lxer.com/module/newswire/view/230814/index.html" target="_blank">http://lxer.com/module/newswire/view/230814/index.html</a><br />
and may serve as a time saver, unless the status of undercloud.qcow2 per<br />
<a href="http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/" target="_blank">http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/</a><br />
requires a fresh installation from scratch.<br />
The current update automates the procedure via /etc/rc.d/rc.local and exports<br />
shell variables in stack's profile that allow starting virt-manager right away, presuming that xhost + was issued in root's shell.<br />
<br />
Thus, we intend to survive a VIRTHOST cold reboot (downtime), keep the
previous version of the undercloud VM, and be able to bring it up
without rebuilding via quickstart.sh, restarting the procedure by
logging into the undercloud and immediately running the overcloud
deployment. Proceed as follows :-<br />
<br />
1. Before system shutdown, cleanly delete the overcloud stack :-<br />
[stack@undercloud ~]$ openstack stack delete overcloud<br />
2. Login into VIRTHOST as stack and gracefully shutdown undercloud<br />
[stack@ServerCentOS72 ~]$ virsh shutdown undercloud<br />
<br />
<br />
=====================<br />
Make following updates<br />
=====================<br />
<br />
[root@ServerTQS72 ~]# cat /etc/rc.d/rc.local<br />
#!/bin/bash<br />
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure<br />
# that this script will be executed during boot.<br />
<span style="color: #b45f06;">mkdir -p /run/user/1001<br />chown -R stack /run/user/1001</span><br />
<span style="color: #b45f06;">if [ $? -ne 0 ]<br />then<br /> exit 0 <br />fi<br />chgrp -R stack /run/user/1001</span><br />
touch /var/lock/subsys/local<br />
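The rc.local fragment above hard-codes uid 1001. A parameterized version with error handling (the helper name is hypothetical); it recreates the XDG_RUNTIME_DIR that libvirt's qemu:///session connections expect:<br />

```shell
# Recreate a user runtime directory (normally /run/user/<uid>) and
# hand ownership to the given user, as the rc.local fragment does
# for uid 1001.
prepare_runtime_dir() {
    user="$1"
    dir="$2"
    mkdir -p "$dir" || return 1
    chown -R "$user" "$dir" || return 1
    chgrp -R "$(id -gn "$user")" "$dir" || return 1
    echo "prepared $dir for $user"
}

# Example (run as root on the VIRTHOST):
#   prepare_runtime_dir stack /run/user/1001
```

Remember that /etc/rc.d/rc.local must be executable (`chmod +x /etc/rc.d/rc.local`) for systemd to run it at boot, as the file's own comment notes.<br />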
<br />
========================<br />
In stack's .bashrc<br />
========================<br />
<br />
[stack@ServerTQS72 ~]$ cat .bashrc<br />
# .bashrc<br />
<br />
# Source global definitions<br />
if [ -f /etc/bashrc ]; then<br />
. /etc/bashrc<br />
fi<br />
<br />
# Uncomment the following line if you don't like systemctl's auto-paging feature:<br />
# export SYSTEMD_PAGER=<br />
<br />
# User specific aliases and functions<br />
# BEGIN ANSIBLE MANAGED BLOCK<br />
# Make sure XDG_RUNTIME_DIR is set (used by libvirt<br />
# for creating config and sockets for qemu:///session<br />
# connections)<br />
: ${XDG_RUNTIME_DIR:=/run/user/$(id -u)}<br />
export XDG_RUNTIME_DIR<br />
<span style="color: #b45f06;">export DISPLAY=:0.0<br />export NO_AT_BRIDGE=1</span><br />
# END ANSIBLE MANAGED BLOCK<br />
<br />
=================<br />
REBOOT VIRTHOST<br />
=================<br />
<br />
$ sudo su -<br />
# xhost +<br />
# su - stack<br />
<br />
[stack@ServerTQS72 ~]$ virt-manager --connect qemu:///session<br />
<br />
Start VM undercloud<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjPBqBKRlYVLtHBuPrPOuvNG8zNYf1A42mqCFB3P2DCP33LDiAQ2GLa2_ftjX1XY0tNR68zjCg0XgN1MIXpjpdGVCo-iKyIyGq2oehDFfbFw18cZ1BosL_878GqpuvBcB-lINtsg/s1600/Screenshot+from+2016-08-18+08-37-29.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjPBqBKRlYVLtHBuPrPOuvNG8zNYf1A42mqCFB3P2DCP33LDiAQ2GLa2_ftjX1XY0tNR68zjCg0XgN1MIXpjpdGVCo-iKyIyGq2oehDFfbFw18cZ1BosL_878GqpuvBcB-lINtsg/s640/Screenshot+from+2016-08-18+08-37-29.png" width="640" /></a><br />
<br />
Virt-tools misbehavior (UPDATE 08/21/16). Six qemu-kvm processes are up and running:<br />
<br />
1. Undercloud<br />
2. 3 Node HA Controller (Pacemaker/Corosync) cluster<br />
3. 2 Compute Nodes (nested KVM enabled )<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiffIAJTWGiKPuxBlpExkRXiN_x6xTGct7RO5UPr1tlXyQ3dFVj_ROodGxuTU6EKOD-olQIo9kqIk-vuBHNkp3Mwh7jg6es9t6NEQHt1bSmlI57NT5OLmuNL_qRYbU9ibkQPHeXPA/s1600/Screenshot+from+2016-08-21+16-41-35.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiffIAJTWGiKPuxBlpExkRXiN_x6xTGct7RO5UPr1tlXyQ3dFVj_ROodGxuTU6EKOD-olQIo9kqIk-vuBHNkp3Mwh7jg6es9t6NEQHt1bSmlI57NT5OLmuNL_qRYbU9ibkQPHeXPA/s640/Screenshot+from+2016-08-21+16-41-35.png" width="640" /></a></div>
<br />
=====================================<br />
Log into undercloud from Ansible Server via :-<br />
===================================== <br />
[boris@fedora24wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud<br />
<br />
Deploy overcloud using old overcloud-deploy.sh<br />
<br />
<pre># Deploy the overcloud!
openstack overcloud deploy \
--templates /usr/share/openstack-tripleo-heat-templates \
--libvirt-type qemu --control-flavor oooq_control --compute-flavor oooq_compute \
--ceph-storage-flavor oooq_ceph --timeout 90 -\
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network-environment.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
--control-scale 3 --compute-scale 2 \
--neutron-network-type vxlan --neutron-tunnel-types vxlan --ntp-server pool.ntp.org \
${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML} "$@" || true
# We don't always get a useful error code from the openstack deploy command,
# so check `heat stack-list` for a CREATE_FAILED status.
if heat stack-list | grep -q 'CREATE_FAILED'; then
# get the failures list
openstack stack failures list overcloud > failed_deployment_list.log || true
# get any puppet related errors
for failed in $(heat resource-list \
--nested-depth 5 overcloud | grep FAILED |
grep 'StructuredDeployment ' | cut -d '|' -f3)
do
echo "heat deployment-show out put for deployment: $failed" >> failed_deployments.log
echo "######################################################" >> failed_deployments.log
heat deployment-show $failed >> failed_deployments.log
echo "######################################################" >> failed_deployments.log
echo "puppet standard error for deployment: $failed" >> failed_deployments.log
echo "######################################################" >> failed_deployments.log
# the sed part removes color codes from the text
heat deployment-show $failed |
jq -r .output_values.deploy_stderr |
sed -r "s:\x1B\[[0-9;]*[mK]::g" >> failed_deployments.log
echo "######################################################" >> failed_deployments.log
done
fi</pre>
<br />
[stack@undercloud ~]$ . stackrc<br />
[stack@undercloud ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| b6f105e8-3854-4939-99d9-73c16cf233fd | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.23 |
| 30979d6e-773b-4d79-9446-1cd25bade373 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.20 |
| 256627a8-2202-4986-86b6-8cd6e46c21db | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.22 |
| 9dc029bf-b096-4be6-b5a3-14b39ac098a4 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.21 |
| 16d0e195-c6a2-4286-a368-6fe9851ccd82 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.19 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+</pre>
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-56974535818605044922016-08-14T03:55:00.000-07:002016-08-20T10:50:56.262-07:00Access to TripleO QuickStart overcloud via sshuttle running on F24 WorkStation<div dir="ltr" style="text-align: left;" trbidi="on">
Sshuttle may be installed on Fedora 24 via a straightforward `dnf -y install sshuttle` [<a href="https://lists.fedoraproject.org/pipermail/package-announce/2016-April/182490.html" target="_blank">Fedora 24 Update: sshuttle-0.78.0-2.fc24</a>]. So, when F24 has been set up as the workstation for a TripleO QuickStart deployment to VIRTHOST, there is no need to install the FoxyProxy add-on and tune it in Firefox, nor to connect from the ansible workstation to the undercloud via $ ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090<br />
<br />
What is sshuttle? It’s a Python app that uses SSH to create a
quick and dirty VPN between your Linux, BSD, or Mac OS X machine and a
remote system that has SSH access and Python. Licensed under the GPLv2, sshuttle is a transparent proxy server
that lets users fake a VPN with minimal hassle. <br />
<br />
========================================<br />
First install and start sshuttle on Fedora 24 :-<br />
========================================<br />
[boris@fedora24wks ~]$ <span style="color: #b45f06;">sudo dnf -y install sshuttle</span><br />
[root@fedora24wks ~]# <span style="color: #b45f06;">rpm -qa \*sshuttle\*</span><br />
<span style="color: #b45f06;">sshuttle-0.78.0-2.fc24.noarch</span><br />
<br />
======================================================== <br />
Now start sshuttle via ssh.config.ansible, where 10.0.0.0/24 is the<br />
external network for the overcloud already set up on VIRTHOST <br />
========================================================<br />
[boris@fedora24wks ~]$ <span style="color: #b45f06;">sshuttle -e "ssh -F $HOME/.quickstart/ssh.config.ansible" -r undercloud -v 10.0.0.0/24 &</span><br />
<span style="color: #b45f06;">[3] 16385</span><br />
[boris@fedora24wks ~]$ Starting sshuttle proxy.<br />
firewall manager: Starting firewall with Python version 3.5.1<br />
firewall manager: ready method name nat.<br />
IPv6 enabled: False<br />
UDP enabled: False<br />
DNS enabled: False<br />
TCP redirector listening on ('127.0.0.1', 12299).<br />
Starting client with Python version 3.5.1<br />
c : connecting to server...<br />
Warning: Permanently added '192.168.1.74' (ECDSA) to the list of known hosts.<br />
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.<br />
Starting server with Python version 2.7.5<br />
s: latency control setting = True<br />
s: available routes:<br />
s: 2/10.0.0.0/24<br />
s: 2/192.0.2.0/24<br />
s: 2/192.168.23.0/24<br />
s: 2/192.168.122.0/24<br />
c : Connected.<br />
firewall manager: setting up.<br />
>> iptables -t nat -N sshuttle-12299<br />
>> iptables -t nat -F sshuttle-12299<br />
>> iptables -t nat -I OUTPUT 1 -j sshuttle-12299<br />
>> iptables -t nat -I PREROUTING 1 -j sshuttle-12299<br />
>> iptables -t nat -A sshuttle-12299 -j REDIRECT --dest 10.0.0.0/24 -p tcp --to-ports 12299 -m ttl ! --ttl 42<br />
>> iptables -t nat -A sshuttle-12299 -j RETURN --dest 127.0.0.1/8 -p tcp<br />
c : Accept TCP: 192.168.1.13:53068 -> 10.0.0.4:80.<br />
c : warning: closed channel 1 got cmd=TCP_STOP_SENDING len=0<br />
c : Accept TCP: 192.168.1.13:53072 -> 10.0.0.4:80.<br />
s: SW'unknown':Mux#1: deleting (3 remain)<br />
s: SW#6:10.0.0.4:80: deleting (2 remain)<br />
c : warning: closed channel 2 got cmd=TCP_STOP_SENDING len=0<br />
c : Accept TCP: 192.168.1.13:53074 -> 10.0.0.4:80.<br />
s: SW'unknown':Mux#2: deleting (3 remain)<br />
s: SW#7:10.0.0.4:80: deleting (2 remain)<br />
c : Accept TCP: 192.168.1.13:58210 -> 10.0.0.4:6080.<br />
c : Accept TCP: 192.168.1.13:58212 -> 10.0.0.4:6080.<br />
c : SW'unknown':Mux#2: deleting (9 remain)<br />
c : SW#11:192.168.1.13:53072: deleting (8 remain)<br />
c : SW'unknown':Mux#1: deleting (7 remain)<br />
c : SW#9:192.168.1.13:53068: deleting (6 remain)<br />
c : Accept TCP: 192.168.1.13:58214 -> 10.0.0.4:6080.<br />
c : Accept TCP: 192.168.1.13:58216 -> 10.0.0.4:6080.<br />
c : warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0<br />
s: warning: closed channel 4 got cmd=TCP_STOP_SENDING len=0<br />
<br />
Complete log may be seen <a href="http://textuploader.com/58lyp" target="_blank">here</a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjM8BO9kM3uteqyO3LyqzQk8lqNK84R3TFIWljKp-puTOnHxPi98aNMZK-CtYcx0vWubNWEvXFN1k6n3yHG7kuNPlz0IAGxBSGzy0SrOaDcBBBVXzFwfQVT2KZMY0A-TcVhkO_yYA/s1600/Screenshot+from+2016-08-20+20-40-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjM8BO9kM3uteqyO3LyqzQk8lqNK84R3TFIWljKp-puTOnHxPi98aNMZK-CtYcx0vWubNWEvXFN1k6n3yHG7kuNPlz0IAGxBSGzy0SrOaDcBBBVXzFwfQVT2KZMY0A-TcVhkO_yYA/s640/Screenshot+from+2016-08-20+20-40-47.png" width="640" /></a></div>
<br />
<br />
This creates a transparent proxy server on your local machine for all IP addresses that match 10.0.0.0/24. Any TCP session you initiate to one of the proxied IP addresses will be captured by sshuttle and sent over an ssh session to the remote copy of sshuttle, which will then regenerate the connection on that end, and funnel the data back and forth through ssh. There is no need to install sshuttle on the remote server; the remote
server just needs to have python available. sshuttle will automatically
upload and run its source code to the remote python. <br />
<br />
So, disable/remove the FoxyProxy add-on from Firefox (if it has been set up) and interrupt any connection from the workstation to the undercloud via `ssh -F ~/.quickstart/ssh.config.ansible undercloud -D 9090`. Restart Firefox and browse to http://10.0.0.4/dashboard<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWjvT3gHDY4BwOBvIwvwNzDWRrdsq5mwxJm9kqibvQmd_1k9lPDnoY-dAP-xLwlDSemP75jKbAoDIeXxxsqGFBBRzqnzasDg13QHel8XAC7H1q_5zeLy7NfcMbqARwZWmD8KX3Dw/s1600/Screenshot+from+2016-08-20+20-47-15.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWjvT3gHDY4BwOBvIwvwNzDWRrdsq5mwxJm9kqibvQmd_1k9lPDnoY-dAP-xLwlDSemP75jKbAoDIeXxxsqGFBBRzqnzasDg13QHel8XAC7H1q_5zeLy7NfcMbqARwZWmD8KX3Dw/s640/Screenshot+from+2016-08-20+20-47-15.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXiePgi41DieTTNoba2eFAaafVc_XKZOlhLzLYkDRyj1YsJtiTSFKoGZPDbdcIYBvOyzAioBJDrOVb4MgAf0PdHj90fe9pWSIXqSCwNYps_3x10EZ1D-4qZFpdVLzN7slqwkV6KQ/s1600/Screenshot+from+2016-08-14+13-51-49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXiePgi41DieTTNoba2eFAaafVc_XKZOlhLzLYkDRyj1YsJtiTSFKoGZPDbdcIYBvOyzAioBJDrOVb4MgAf0PdHj90fe9pWSIXqSCwNYps_3x10EZ1D-4qZFpdVLzN7slqwkV6KQ/s640/Screenshot+from+2016-08-14+13-51-49.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgd2ZAV03fbGThaVJTypnrofj4a-8btH5j3EMmKMiUKWeNp49ZLqLY2Xag03HW16tAtW2Or9Fnf7f0IpZekF73orYKgmylxz2rOT6mOMtQaEF4o916L0FdDrkRhjSWnqEduoyzbA/s1600/Screenshot+from+2016-08-14+15-31-32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgd2ZAV03fbGThaVJTypnrofj4a-8btH5j3EMmKMiUKWeNp49ZLqLY2Xag03HW16tAtW2Or9Fnf7f0IpZekF73orYKgmylxz2rOT6mOMtQaEF4o916L0FdDrkRhjSWnqEduoyzbA/s640/Screenshot+from+2016-08-14+15-31-32.png" width="640" /></a></div>
<br />
References<br />
1. <a href="http://g33kinfo.com/info/archives/5388" target="_blank">http://g33kinfo.com/info/archives/5388</a></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-35513675114556530102016-07-31T01:29:00.000-07:002016-09-14T11:57:41.029-07:00Stable Mitaka HA instack-virt-setup on CentOS 7.2 VIRTHOST<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
UPDATE 09/13/2016<br />
As of now, the scheme below fails for some reason: during the<br />
`openstack overcloud image build --all` phase it requests<br />
python2-keystoneauth1 2.10, while the stable repo (priority 2) provides just 2.4.<br />
An attempt to build based on :-<br />
<span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-newton/current/delorean.repo</span><br />
<span style="color: #b45f06;"><span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-newton/delorean-deps.repo</span></span><br />
reaches the overcloud deployment phase; however, the attempt to proceed fails :-<br />
<pre class="highlight"><code>The files ('overcloud-without-mergepy.yaml', 'overcloud.yaml') not found
in the /usr/share/openstack-tripleo-heat-templates/ directory
</code></pre>
The last message is understandable, given the upcoming changes<br />
in the Newton release: <a href="https://marc.ttias.be/openstack-dev/2016-08/msg01920.php" target="_blank">https://marc.ttias.be/openstack-dev/2016-08/msg01920.php</a><br />
See also "CI broken: RDO picks stable/newton tripleoclient version in master"<br />
<a href="https://bugs.launchpad.net/tripleo/+bug/1622353" target="_blank">https://bugs.launchpad.net/tripleo/+bug/1622353</a><br />
I would expect the Newton Delorean trunks to start working in early October 2016. <br />
END UPDATE <br />
<br />
UPDATE 09/03/2016<br />
Note the most recent changes on the page <a href="http://tripleo.org/basic_deployment/basic_deployment_cli.html" target="_blank">http://tripleo.org/basic_deployment/basic_deployment_cli.html</a><br />
Tuning the stack user's environment as advised on that page<br />
<span style="color: #b45f06;"> export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"</span><br />
<span style="color: #b45f06;">export DIB_YUM_REPO_CONF=/etc/yum.repos.d/delorean* </span><br />
does not work for me: the overcloud deployment just hangs.<br />
END UPDATE <br />
<br />
Following is a step-by-step, self-sufficient set of instructions for performing<br />
a Mitaka HA instack-virt-setup on a CentOS 7.2 VIRTHOST, based on the delorean<br />
repos :-<br />
<span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo</span><br />
<span style="color: #b45f06;"><span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo</span></span><br />
It follows the official guidelines and updates the undercloud with an OVSIntPort vlan10<br />
on the br-ctlplane OVS bridge, making HA and/or Ceph overcloud deployments with "Network Isolation" enabled possible. See also the upstream commit <a href="https://review.openstack.org/#/c/329438/" target="_blank">https://review.openstack.org/#/c/329438/</a> made by <a class="gwt-InlineHyperlink" href="https://review.openstack.org/#/q/owner:marios%2540redhat.com+status:merged" title="Search for changes by this user">Marios Andreou</a> on 06/14/2016<br />
<br />
=========================================<br />
VIRTHOST - stack's .bashrc configuration<br />
=========================================<br />
<span style="color: #b45f06;"># curl -o /etc/yum.repos.d/delorean-mitaka.repo \</span><br />
<span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo</span><br />
<span style="color: #b45f06;"># curl -o /etc/yum.repos.d/delorean-deps-mitaka.repo \</span><br />
<span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo</span><br />
<span style="color: #b45f06;"># yum -y install yum-plugin-priorities</span><br />
<span style="color: #b45f06;"># yum -y install epel-release</span><br />
<br />
Add the following to the stack user's ~/.bashrc on the VIRTHOST:<br />
export NODE_MEM=6000<br />
export NODE_COUNT=5<br />
export UNDERCLOUD_NODE_CPU=2<br />
export NODE_CPU=2<br />
export NODE_DIST=centos7<br />
export UNDERCLOUD_NODE_MEM=7500<br />
export FS_TYPE=EXT4<br />
<br />
$ sudo yum install instack-undercloud<br />
$ instack-virt-setup<br />
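instack-virt-setup sizes the virtual nodes from the NODE_* environment variables above. A small guard (a sketch, assuming bash for the `${!v}` indirect expansion) catches a forgotten export before the long-running setup starts; the values mirror this post's configuration:

```shell
# Sketch: fail fast if any NODE_* knob is missing before instack-virt-setup.
# Values mirror the .bashrc settings above.
export NODE_MEM=6000 NODE_COUNT=5 NODE_CPU=2 NODE_DIST=centos7
export UNDERCLOUD_NODE_CPU=2 UNDERCLOUD_NODE_MEM=7500 FS_TYPE=EXT4
for v in NODE_MEM NODE_COUNT NODE_CPU NODE_DIST \
         UNDERCLOUD_NODE_CPU UNDERCLOUD_NODE_MEM FS_TYPE; do
    # ${!v} is bash indirect expansion; :? aborts with a message when unset
    : "${!v:?$v must be set before running instack-virt-setup}"
done
echo "virt setup: $NODE_COUNT nodes, ${NODE_CPU} vCPU / ${NODE_MEM} MB each"
```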
<br />
===========================<br />
INSTACK<br />
===========================<br />
<span style="color: #b45f06;">[stack@instack ~]$</span><span style="color: #b45f06;"> sudo curl -o /etc/yum.repos.d/delorean-mitaka.repo \</span><br />
<span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo</span><br />
<span style="color: #b45f06;"> [stack@instack ~]$ sudo curl -o /etc/yum.repos.d/delorean-deps-mitaka.repo \</span><br />
<span style="color: #b45f06;"> http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo</span><br />
<span style="color: #b45f06;">[stack@instack ~]$ sudo yum -y install yum-plugin-priorities</span><br />
[stack@instack ~]$ cat .bashrc<br />
# .bashrc<br />
# Source global definitions<br />
if [ -f /etc/bashrc ]; then<br />
. /etc/bashrc<br />
fi<br />
<br />
# Uncomment the following line if you don't like systemctl's auto-paging feature:<br />
# export SYSTEMD_PAGER=<br />
<span style="color: #b45f06;">export NODE_DIST=centos7</span><br />
<span style="color: #b45f06;"> export USE_DELOREAN_TRUNK=1</span><br />
<span style="color: #b45f06;"> export DELOREAN_TRUNK_REPO="http://trunk.rdoproject.org/centos7-mitaka/current/"</span><br />
<span style="color: #b45f06;"> export DELOREAN_REPO_FILE="delorean.repo"</span><br />
# User specific aliases and functions<br />
<br />
$ sudo yum install -y python-tripleoclient<br />
$ openstack undercloud install<br />
$ source stackrc<br />
$ env | grep DEL<br />
$ source stackrc<br />
$ openstack overcloud image build --all<br />
$ openstack overcloud image upload<br />
$ openstack baremetal import instackenv.json<br />
$ openstack baremetal configure boot<br />
$ openstack baremetal introspection bulk start<br />
$ neutron subnet-list<br />
$ neutron subnet-update 1b7d82e5-0bf1-4ba5-8008-4aa402598065 --dns-nameserver 8.8.8.8<br />
$ sudo ovs-vsctl show<br />
$ sudo vi /etc/sysconfig/network-scripts/ifcfg-vlan10<br />
DEVICE=vlan10<br />
ONBOOT=yes<br />
DEVICETYPE=ovs<br />
TYPE=OVSIntPort<br />
BOOTPROTO=static<br />
IPADDR=10.0.0.1<br />
NETMASK=255.255.255.0<br />
OVS_BRIDGE=br-ctlplane<br />
OVS_OPTIONS="tag=10"<br />
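The ifcfg file above can be staged and sanity-checked before bringing the port up. A minimal sketch, using an arbitrary temp path; the grep checks only the OVS-specific keys the RHEL/CentOS initscripts rely on:

```shell
# Sketch: stage the OVSIntPort config shown above and verify the keys the
# OVS initscripts act on (TYPE/OVS_BRIDGE); a missing OVS_BRIDGE would
# quietly yield a plain interface instead of an internal port on tag 10.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
DEVICE=vlan10
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0
OVS_BRIDGE=br-ctlplane
OVS_OPTIONS="tag=10"
EOF
grep -q '^TYPE=OVSIntPort' "$CFG" && grep -q '^OVS_BRIDGE=br-ctlplane' "$CFG" \
  && echo "vlan10 config looks sane"
# then, as root:
# cp "$CFG" /etc/sysconfig/network-scripts/ifcfg-vlan10 && ifup vlan10
```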
<br />
$ sudo ifup vlan10<br />
[stack@instack ~]$ <span style="color: #b45f06;">cat network_env.yaml</span><br />
<span style="color: #b45f06;"> {</span><br />
<span style="color: #b45f06;"> "parameter_defaults": {</span><br />
<span style="color: #b45f06;"> "ControlPlaneDefaultRoute": "192.0.2.1",</span><br />
<span style="color: #b45f06;"> "ControlPlaneSubnetCidr": "24",</span><br />
<span style="color: #b45f06;"> "DnsServers": [</span><br />
<span style="color: #b45f06;"> "192.168.122.43"</span><br />
<span style="color: #b45f06;"> ],</span><br />
<span style="color: #b45f06;"> "EC2MetadataIp": "192.0.2.1",</span><br />
<span style="color: #b45f06;"> "ExternalAllocationPools": [</span><br />
<span style="color: #b45f06;"> {</span><br />
<span style="color: #b45f06;"> "end": "10.0.0.250",</span><br />
<span style="color: #b45f06;"> "start": "10.0.0.4"</span><br />
<span style="color: #b45f06;"> }</span><br />
<span style="color: #b45f06;"> ],</span><br />
<span style="color: #b45f06;"> "ExternalNetCidr": "10.0.0.1/24",</span><br />
<span style="color: #b45f06;"> "NeutronExternalNetworkBridge": ""</span><br />
<span style="color: #b45f06;"> }</span><br />
<span style="color: #b45f06;"> }</span><br />
<br />
<span style="color: #b45f06;">Where 192.168.122.43 is instack VM Ip.</span><br />
<br />
$ sudo ifup ifcfg-vlan10<br />
$ sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat<br />
<br />
[stack@instack ~]$ sudo ovs-vsctl show<br />
<span style="color: #b45f06;">Bridge br-ctlplane</span><br />
<span style="color: #b45f06;"> Port "vlan10"</span><br />
<span style="color: #b45f06;"> tag: 10</span><br />
<span style="color: #b45f06;"> Interface "vlan10"</span><br />
type: internal<br />
Port "eth1"<br />
Interface "eth1"<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Bridge br-int<br />
fail_mode: secure<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Port "tape8042136-81"<br />
tag: 1<br />
Interface "tape8042136-81"<br />
type: internal<br />
ovs_version: "2.5.0"<br />
[stack@instack ~]$ ifconfig<br />
<pre class="highlight"><code>br-ctlplane: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::28c:73ff:fee7:a0c7  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:8c:73:e7:a0:c7  txqueuelen 0  (Ethernet)
        RX packets 969827  bytes 64170341 (61.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24905  bytes 1403706189 (1.3 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.168.122.175  netmask 255.255.255.0  broadcast 192.168.122.255
        inet6 fe80::5054:ff:fee0:88ae  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 52:54:00:e0:88:ae  txqueuelen 1000  (Ethernet)
        RX packets 792414  bytes 1147953477 (1.0 GiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 504124  bytes 41553838 (39.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet6 fe80::28c:73ff:fee7:a0c7  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:8c:73:e7:a0:c7  txqueuelen 1000  (Ethernet)
        RX packets 969821  bytes 64170033 (61.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24917  bytes 1403707133 (1.3 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 345762  bytes 4297080640 (4.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 345762  bytes 4297080640 (4.0 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::b0f8:92ff:feed:99bb  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether b2:f8:92:ed:99:bb  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 816 (816.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
</code></pre>
$ vi overcloud-deploy.sh<br />
<br />
<pre>#!/bin/bash -x
source /home/stack/stackrc
openstack overcloud deploy --templates --control-scale 3 \
--compute-scale 2 \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml</pre>
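An equivalent way to write the script above, sketched here assuming bash: holding the arguments in an array keeps the list of -e environment files readable and avoids broken trailing-backslash continuations. The command is only echoed, so it can be checked before running on the undercloud:

```shell
#!/bin/bash
# Sketch: same deploy invocation as overcloud-deploy.sh, with the argument
# list held in an array so adding/removing an -e file cannot break a line
# continuation. Replace the echo with the real call on the instack VM.
THT=/usr/share/openstack-tripleo-heat-templates
DEPLOY_ARGS=(
  --templates
  --control-scale 3
  --compute-scale 2
  --libvirt-type qemu
  --ntp-server pool.ntp.org
  -e "$THT/environments/puppet-pacemaker.yaml"
  -e "$THT/environments/network-isolation.yaml"
  -e "$THT/environments/net-single-nic-with-vlans.yaml"
  -e "$HOME/network_env.yaml"
)
echo openstack overcloud deploy "${DEPLOY_ARGS[@]}"
```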
$ chmod a+x overcloud-deploy.sh<br />
[stack@instack ~]$ ./overcloud-deploy.sh<br />
Deploying templates in the directory /usr/share/openstack-tripleo-heat-templates<br />
2016-07-31 05:39:20 [overcloud]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:20 [Networks]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [PcsdPassword]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [HeatAuthEncryptionKey]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [MysqlRootPassword]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [MysqlClusterUniquePart]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [RabbitCookie]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [VipConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [HorizonSecret]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [PcsdPassword]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:20 [HeatAuthEncryptionKey]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:20 [MysqlRootPassword]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:20 [MysqlClusterUniquePart]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:20 [overcloud-VipConfig-7kalyksojixl]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:20 [VipConfigImpl]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:20 [overcloud-Networks-rto6netaoodk]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:20 [ManagementNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [RabbitCookie]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:21 [HorizonSecret]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:21 [VipConfigImpl]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:21 [overcloud-VipConfig-7kalyksojixl]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:21 [ExternalNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [InternalNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [StorageNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [TenantNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [overcloud-Networks-rto6netaoodk-ManagementNetwork-fbn47jr7bptq]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:21 [overcloud-Networks-rto6netaoodk-ManagementNetwork-fbn47jr7bptq]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:21 [overcloud-Networks-rto6netaoodk-InternalNetwork-rwmlbfr5dzdk]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:21 [InternalApiNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [overcloud-Networks-rto6netaoodk-StorageMgmtNetwork-4e5xbdltuz7q]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:21 [StorageMgmtNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [overcloud-Networks-rto6netaoodk-StorageNetwork-e3tr6nhcwdss]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:21 [StorageNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [StorageNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:21 [StorageSubnet]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [overcloud-Networks-rto6netaoodk-ExternalNetwork-ge5f4ydo2yw6]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:21 [ExternalNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:21 [ExternalNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:21 [ExternalSubnet]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:22 [VipConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:22 [ManagementNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:22 [InternalApiNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:22 [InternalApiSubnet]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:22 [StorageMgmtNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:22 [StorageMgmtSubnet]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:22 [overcloud-Networks-rto6netaoodk-TenantNetwork-xpwjehudwk6n]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:22 [TenantNetwork]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:22 [TenantNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:22 [TenantSubnet]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:23 [InternalApiSubnet]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:23 [overcloud-Networks-rto6netaoodk-InternalNetwork-rwmlbfr5dzdk]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:23 [StorageMgmtSubnet]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:23 [overcloud-Networks-rto6netaoodk-StorageMgmtNetwork-4e5xbdltuz7q]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:23 [StorageSubnet]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:23 [overcloud-Networks-rto6netaoodk-StorageNetwork-e3tr6nhcwdss]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:23 [TenantSubnet]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:23 [overcloud-Networks-rto6netaoodk-TenantNetwork-xpwjehudwk6n]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:23 [ExternalSubnet]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:23 [overcloud-Networks-rto6netaoodk-ExternalNetwork-ge5f4ydo2yw6]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:24 [ExternalNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:24 [InternalNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:24 [StorageNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:24 [StorageMgmtNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:24 [TenantNetwork]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:24 [overcloud-Networks-rto6netaoodk]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:25 [Networks]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:25 [CephStorage]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:25 [ObjectStorage]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:25 [ControlVirtualIP]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:25 [overcloud-CephStorage-szz57nuogpix]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:25 [overcloud-CephStorage-szz57nuogpix]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:25 [overcloud-ObjectStorage-gtvrpkjraesp]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:25 [overcloud-ObjectStorage-gtvrpkjraesp]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:26 [ObjectStorage]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:26 [CephStorage]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:26 [ControlVirtualIP]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:26 [StorageMgmtVirtualIP]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:26 [InternalApiVirtualIP]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:26 [overcloud-StorageMgmtVirtualIP-livasae7af57]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:27 [RedisVirtualIP]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:27 [StorageVirtualIP]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:27 [PublicVirtualIP]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:27 [overcloud-RedisVirtualIP-t7iyxhta3pno]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:27 [VipPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:27 [overcloud-InternalApiVirtualIP-4pgssemrxleu]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:27 [InternalApiPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:27 [InternalApiPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:27 [overcloud-StorageVirtualIP-jzamyndluhjn]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:27 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:27 [StorageMgmtPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:27 [overcloud-StorageMgmtVirtualIP-livasae7af57]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:28 [VipPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:28 [overcloud-RedisVirtualIP-t7iyxhta3pno]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:28 [overcloud-InternalApiVirtualIP-4pgssemrxleu]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:28 [StoragePort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:28 [StoragePort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:28 [overcloud-StorageVirtualIP-jzamyndluhjn]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:28 [overcloud-PublicVirtualIP-fauhkcufypps]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:28 [ExternalPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:28 [ExternalPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:28 [overcloud-PublicVirtualIP-fauhkcufypps]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:29 [InternalApiVirtualIP]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:29 [RedisVirtualIP]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:29 [StorageMgmtVirtualIP]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:29 [StorageVirtualIP]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:29 [PublicVirtualIP]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:29 [VipMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:29 [overcloud-VipMap-zu6i4tigbvs6]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:29 [overcloud-VipMap-zu6i4tigbvs6]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:30 [VipMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:30 [EndpointMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:30 [overcloud-EndpointMap-up3wygzq76fu]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:30 [overcloud-EndpointMap-up3wygzq76fu]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:31 [EndpointMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:31 [Compute]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:33 [BlockStorage]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:33 [Controller]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:33 [overcloud-BlockStorage-bo6qcyunjb45]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:33 [overcloud-BlockStorage-bo6qcyunjb45]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:39:33 [overcloud-Compute-rbxr7ncefffr]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:33 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:34 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:35 [overcloud-Compute-rbxr7ncefffr-1-6piplzp6ryef]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:35 [NovaComputeConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:35 [NodeUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:35 [UpdateConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:36 [overcloud-Controller-ksfwi2vsa5cj]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:36 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:36 [NodeAdminUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:36 [NodeUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:36 [NovaComputeConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:36 [UpdateConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:36 [overcloud-Compute-rbxr7ncefffr-0-wyatkkjwlwtz]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:36 [NodeAdminUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:36 [UpdateConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:37 [BlockStorage]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:37 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:37 [overcloud-Controller-ksfwi2vsa5cj-1-4zb4t5yuz3ex]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:37 [NodeAdminUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:37 [NodeUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:37 [NovaComputeConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:37 [UpdateConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:37 [NovaComputeConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:38 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:38 [NodeUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:38 [UpdateConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:38 [NodeAdminUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:38 [UserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:38 [NodeUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:39 [NodeUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:39 [NodeAdminUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:39 [UpdateConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:39 [UserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:39 [overcloud-Controller-ksfwi2vsa5cj-0-rvldqd37mq4u]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:39 [NodeAdminUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:39 [UpdateConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:39 [NodeUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:39 [overcloud-Controller-ksfwi2vsa5cj-2-ohnilfuws65s]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:39:39 [UpdateConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:39 [UserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:39 [NovaCompute]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:39 [NodeAdminUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:39 [UserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:40 [UserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:40 [NodeUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:40 [NodeAdminUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:40 [UpdateConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:40 [UserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:40 [NodeAdminUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:40 [NodeUserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:40 [UpdateConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:40 [UserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:40 [NovaCompute]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:41 [Controller]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:41 [UserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:41 [NodeUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:42 [Controller]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:42 [NodeAdminUserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:42 [UserData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:39:44 [UserData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:39:44 [Controller]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:50:55 [NovaCompute]: CREATE_COMPLETE state changed<br />
2016-07-31 05:50:56 [Controller]: CREATE_COMPLETE state changed<br />
2016-07-31 05:50:56 [Controller]: CREATE_COMPLETE state changed<br />
2016-07-31 05:50:57 [InternalApiPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:50:57 [ExternalPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:50:58 [ManagementPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:50:59 [TenantPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:50:59 [ExternalPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:00 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:00 [Controller]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:00 [UpdateDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:00 [UpdateDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:02 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:03 [ExternalPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:03 [NovaCompute]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:03 [InternalApiPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:04 [ManagementPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:04 [InternalApiPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:05 [UpdateDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:05 [TenantPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:05 [TenantPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:05 [StoragePort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:06 [InternalApiPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:06 [ManagementPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:07 [InternalApiPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:07 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:09 [UpdateDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:09 [ExternalPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:09 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:09 [TenantPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:10 [ManagementPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:11 [StoragePort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:11 [TenantPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:12 [StoragePort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:12 [UpdateDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:12 [ManagementPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:14 [ExternalPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:14 [StoragePort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:15 [StorageMgmtPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:16 [InternalApiPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:16 [ExternalPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:16 [StorageMgmtPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:16 [StorageMgmtPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:16 [InternalApiPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:17 [ManagementPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:17 [StorageMgmtPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:17 [TenantPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:17 [InternalApiPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:17 [ExternalPort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:17 [ExternalPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:18 [TenantPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:18 [ManagementPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:18 [ManagementPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:19 [TenantPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:19 [TenantPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:21 [StoragePort]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:22 [StoragePort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:22 [StoragePort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:22 [InternalApiPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:22 [StoragePort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:22 [StoragePort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:23 [ManagementPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:23 [NetIpMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:24 [NetworkConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:24 [ExternalPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:24 [ExternalPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:25 [NetIpSubnetMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:25 [NetIpMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:25 [StorageMgmtPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:25 [TenantPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:25 [ManagementPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:25 [NetworkConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:26 [NetIpMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:26 [InternalApiPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:27 [NetIpMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:27 [NetIpSubnetMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:27 [StorageMgmtPort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:28 [NetworkConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:28 [NetworkConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:28 [NetIpMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:28 [NetworkConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:28 [NetworkDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:29 [NetworkConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:29 [NetIpMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:29 [NetIpSubnetMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:29 [StoragePort]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:30 [NetIpSubnetMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:30 [NetIpMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:30 [NetworkConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:30 [NetworkDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:30 [NetIpSubnetMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:30 [NetworkDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:30 [NetworkConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:30 [NetIpMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:31 [NetworkConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:31 [NetIpMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:31 [NetIpSubnetMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:32 [NetworkDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:51:32 [NetIpMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:32 [NetworkConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:51:32 [NetworkDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:23 [UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:24 [UpdateDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:26 [NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:26 [NetworkDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:26 [NovaComputeDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:30 [UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:30 [UpdateDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:31 [UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:32 [UpdateDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:32 [UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:32 [NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:33 [UpdateDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:33 [NetworkDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:33 [NodeTLSCAData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:33 [UpdateDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:33 [UpdateDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:34 [NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:34 [NetworkDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:34 [NodeTLSCAData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:35 [NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:35 [NodeTLSCAData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:35 [NodeTLSData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:35 [NetworkDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:36 [NetworkDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:36 [NodeTLSCAData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:36 [NetworkDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:36 [NovaComputeDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:37 [NovaComputeDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:37 [NovaComputeDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:37 [ComputeExtraConfigPre]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:37 [NodeTLSCAData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:37 [NodeTLSData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:37 [NodeTLSCAData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:38 [NodeTLSData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:38 [ControllerConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:38 [NodeTLSData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:38 [ControllerConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:38 [ControllerConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:38 [ControllerDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:38 [NodeTLSCAData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:38 [ComputeExtraConfigPre]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:38 [NodeExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:39 [ControllerConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:39 [ControllerDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:39 [NodeExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:39 [overcloud-Compute-rbxr7ncefffr-0-wyatkkjwlwtz]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:53:40 [ControllerConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:40 [ControllerDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:40 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:44 [NovaComputeDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:44 [NovaComputeDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:44 [ComputeExtraConfigPre]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:44 [NodeTLSCAData]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:46 [NodeTLSCAData]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:46 [ComputeExtraConfigPre]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:46 [NodeExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:47 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:47 [overcloud-Compute-rbxr7ncefffr]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:53:47 [NodeExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:47 [overcloud-Compute-rbxr7ncefffr-1-6piplzp6ryef]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:53:48 [Compute]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:56 [ControllerDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:56 [ControllerDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:56 [ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:57 [ControllerExtraConfigPre]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:57 [NodeExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:58 [ControllerDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:53:58 [ControllerDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:58 [ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:53:58 [NodeExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:59 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:53:59 [overcloud-Controller-ksfwi2vsa5cj-2-ohnilfuws65s]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:00 [ControllerDeployment]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:00 [ControllerExtraConfigPre]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:00 [NodeExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:01 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:01 [ControllerDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:01 [ControllerExtraConfigPre]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:01 [NodeExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:01 [overcloud-Controller-ksfwi2vsa5cj-0-rvldqd37mq4u]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:02 [ControllerExtraConfigPre]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:02 [NodeExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:03 [NodeExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:03 [overcloud-Controller-ksfwi2vsa5cj-1-4zb4t5yuz3ex]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:04 [Controller]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:04 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:04 [overcloud-Controller-ksfwi2vsa5cj]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:05 [UpdateWorkflow]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:05 [ControllerBootstrapNodeConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:05 [SwiftDevicesAndProxyConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:05 [VipDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:05 [overcloud-UpdateWorkflow-4n5h2wcxblb3]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:05 [overcloud-UpdateWorkflow-4n5h2wcxblb3]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:05 [overcloud-SwiftDevicesAndProxyConfig-4oamyzvbqcrb]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:05 [SwiftDevicesAndProxyConfigImpl]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:05 [SwiftDevicesAndProxyConfigImpl]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:05 [overcloud-SwiftDevicesAndProxyConfig-4oamyzvbqcrb]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:05 [overcloud-VipDeployment-wo6kjtzdoqzl]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:05 [overcloud-ControllerBootstrapNodeConfig-idpjh3rkfxkk]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:05 [BootstrapNodeConfigImpl]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:05 [BootstrapNodeConfigImpl]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:05 [overcloud-ControllerBootstrapNodeConfig-idpjh3rkfxkk]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:06 [AllNodesValidationConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:06 [ControllerClusterConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:06 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:06 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:06 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:06 [overcloud-AllNodesValidationConfig-uyaykmk5udka]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:06 [AllNodesValidationsImpl]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:06 [AllNodesValidationsImpl]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:06 [overcloud-AllNodesValidationConfig-uyaykmk5udka]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:07 [ControllerIpListMap]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:07 [overcloud-ControllerIpListMap-mx33gmf6yaml]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:07 [overcloud-ControllerIpListMap-mx33gmf6yaml]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:08 [UpdateWorkflow]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:08 [ControllerBootstrapNodeConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:08 [SwiftDevicesAndProxyConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:08 [AllNodesValidationConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:08 [ControllerClusterConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:08 [ControllerIpListMap]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:08 [ControllerBootstrapNodeDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:08 [ControllerSwiftDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:08 [overcloud-ControllerBootstrapNodeDeployment-rb5tuhata5zw]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:08 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:09 [ControllerClusterDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:09 [overcloud-ControllerSwiftDeployment-2fhzhyjsqtox]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:09 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:09 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:09 [overcloud-ControllerClusterDeployment-n47rsmrrjhm3]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:09 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:10 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:10 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:10 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:11 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:11 [CephClusterConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:11 [overcloud-allNodesConfig-dlvmzfcjczrv]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:11 [allNodesConfigImpl]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:12 [allNodesConfigImpl]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:12 [overcloud-allNodesConfig-dlvmzfcjczrv]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:12 [overcloud-CephClusterConfig-mas6ebaob76u]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:12 [CephClusterConfigImpl]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:12 [CephClusterConfigImpl]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:12 [overcloud-CephClusterConfig-mas6ebaob76u]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:13 [allNodesConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:13 [ObjectStorageSwiftDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:13 [CephClusterConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:13 [BlockStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:13 [ObjectStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:13 [CephStorageAllNodesDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:13 [overcloud-BlockStorageAllNodesDeployment-diu37hed5rd4]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:13 [overcloud-BlockStorageAllNodesDeployment-diu37hed5rd4]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:13 [overcloud-ObjectStorageAllNodesDeployment-tz2t245d276l]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:14 [ComputeAllNodesDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:14 [ControllerAllNodesDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:14 [CephStorageCephDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:14 [overcloud-CephStorageAllNodesDeployment-7evcwwskk2um]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:14 [overcloud-CephStorageAllNodesDeployment-7evcwwskk2um]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:14 [overcloud-ComputeAllNodesDeployment-nwmxao5py32t]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:14 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:14 [overcloud-ObjectStorageAllNodesDeployment-tz2t245d276l]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:15 [ControllerCephDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:15 [overcloud-CephStorageCephDeployment-z3cay5ank3lf]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:15 [overcloud-CephStorageCephDeployment-z3cay5ank3lf]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:15 [overcloud-ControllerAllNodesDeployment-mqgisqdiqojq]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:15 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:16 [ComputeCephDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:16 [overcloud-ComputeCephDeployment-56zf3zkhbhtq]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:16 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:16 [overcloud-ControllerCephDeployment-2t5ix3hth3ty]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:16 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:16 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:16 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:17 [BlockStorageAllNodesDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:17 [CephStorageCephDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:17 [ObjectStorageAllNodesDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:17 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:17 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:17 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:18 [CephStorageAllNodesDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:18 [ObjectStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:18 [BlockStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:18 [CephStorageAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:18 [overcloud-ObjectStorageAllNodesValidationDeployment-cinldvvf4rii]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:18 [overcloud-ObjectStorageAllNodesValidationDeployment-cinldvvf4rii]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:18 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:18 [overcloud-BlockStorageAllNodesValidationDeployment-mmwxw7p7w7ps]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:19 [overcloud-BlockStorageAllNodesValidationDeployment-mmwxw7p7w7ps]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:19 [overcloud-CephStorageAllNodesValidationDeployment-cwpp4jh7qzzc]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:19 [overcloud-CephStorageAllNodesValidationDeployment-cwpp4jh7qzzc]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:20 [ObjectStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:20 [BlockStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:20 [CephStorageAllNodesValidationDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:23 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:24 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:30 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:30 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:31 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:31 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:33 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:33 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:33 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:33 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:34 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:34 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:34 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:34 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:34 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:34 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:35 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:35 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:35 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:35 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:35 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:36 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:36 [overcloud-VipDeployment-wo6kjtzdoqzl]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:37 [VipDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:47 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:47 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:48 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:48 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:48 [overcloud-ControllerBootstrapNodeDeployment-rb5tuhata5zw]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:48 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:49 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:49 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:50 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:50 [overcloud-ControllerClusterDeployment-n47rsmrrjhm3]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:52 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:52 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:52 [overcloud-ComputeAllNodesDeployment-nwmxao5py32t]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:53 [ComputeAllNodesDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:53 [ComputeAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:53 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:53 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:53 [overcloud-ComputeCephDeployment-56zf3zkhbhtq]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:53 [overcloud-ComputeAllNodesValidationDeployment-p3xhswchrxq2]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:54:53 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:54 [ComputeCephDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:54 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:58 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:58 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:58 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:59 [ControllerAllNodesDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:59 [ControllerAllNodesValidationDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:54:59 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:59 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:59 [overcloud-ControllerCephDeployment-2t5ix3hth3ty]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:54:59 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:54:59 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:59 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:54:59 [overcloud-ControllerAllNodesDeployment-mqgisqdiqojq]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:00 [overcloud-ControllerAllNodesValidationDeployment-6ak7t5gcquby]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:00 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:00 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:01 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:05 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:55:06 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:16 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:55:17 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:17 [overcloud-ComputeAllNodesValidationDeployment-p3xhswchrxq2]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:18 [ComputeAllNodesValidationDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:25 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:55:26 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:26 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:55:27 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:30 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:55:31 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:31 [overcloud-ControllerAllNodesValidationDeployment-6ak7t5gcquby]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:32 [ControllerAllNodesValidationDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:32 [AllNodesExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:32 [overcloud-AllNodesExtraConfig-yejvpdp543x4]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:32 [overcloud-AllNodesExtraConfig-yejvpdp543x4]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:33 [AllNodesExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:33 [ComputeNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [ObjectStorageNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [ControllerNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:34 [ComputeArtifactsConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [ComputePuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [ComputePuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:34 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ComputeArtifactsConfig-g7rb4jq3zzo5]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:34 [DeployArtifacts]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [DeployArtifacts]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:34 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ComputeArtifactsConfig-g7rb4jq3zzo5]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:34 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:34 [StorageRingbuilderPuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [StorageArtifactsConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [StoragePuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [StorageRingbuilderPuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:34 [StoragePuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:34 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageArtifactsConfig-mqi2izssk2xf]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:34 [DeployArtifacts]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:34 [DeployArtifacts]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:34 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageArtifactsConfig-mqi2izssk2xf]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:35 [ComputeArtifactsConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:35 [ComputeArtifactsDeploy]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:35 [StorageArtifactsConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:36 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ComputeArtifactsDeploy-dtfanvkrdx3p]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:36 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:36 [StorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:36 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageArtifactsDeploy-fklf72gbmbum]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:36 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageArtifactsDeploy-fklf72gbmbum]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:37 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:37 [StorageArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:37 [StorageDeployment_Step1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:37 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageDeployment_Step1-oerm7md7ixbe]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:38 [StorageDeployment_Step1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:38 [StorageRingbuilderDeployment_Step2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:38 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageDeployment_Step1-oerm7md7ixbe]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:39 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageRingbuilderDeployment_Step2-pgyombae2wtv]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:39 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-StorageRingbuilderDeployment_Step2-pgyombae2wtv]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:40 [StorageRingbuilderDeployment_Step2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:40 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:55:40 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-ExtraConfig-mqgvwxccn6r5]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:55:40 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f-ExtraConfig-mqgvwxccn6r5]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:41 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:41 [overcloud-ObjectStorageNodesPostDeployment-64keujysbe5f]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:55:42 [ObjectStorageNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:55:50 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:55:50 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:04 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:56:04 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:56:05 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:05 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:05 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ComputeArtifactsDeploy-dtfanvkrdx3p]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:56:06 [ComputeArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:06 [ComputePuppetDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:56:07 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ComputePuppetDeployment-6iey3yhrflle]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:56:07 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:56:08 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:56:09 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:56:10 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:11 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:56:12 [ControllerArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:12 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:12 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerArtifactsDeploy-rng3m5gi7vkn]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:56:13 [ControllerPrePuppet]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:13 [ControllerLoadBalancerDeployment_Step1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:56:13 [ControllerPrePuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 05:56:13 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerPrePuppet-sswsnxvmedf6]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:56:14 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerLoadBalancerDeployment_Step1-emrhcpofvvcc]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:56:14 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:56:14 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:56:14 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:58:04 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:58:05 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 05:58:09 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:58:09 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:58:10 [ControllerLoadBalancerDeployment_Step1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:58:10 [ControllerServicesBaseDeployment_Step2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:58:10 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:58:10 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 05:58:10 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerLoadBalancerDeployment_Step1-emrhcpofvvcc]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 05:58:11 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerServicesBaseDeployment_Step2-ofxpiumqvk2t]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 05:58:11 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:58:11 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:58:11 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 05:58:42 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:58:43 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 05:58:43 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 05:58:44 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:00 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:00:01 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:01 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerServicesBaseDeployment_Step2-ofxpiumqvk2t]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:00:02 [ControllerServicesBaseDeployment_Step2]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:02 [ControllerRingbuilderDeployment_Step3]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:03 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerRingbuilderDeployment_Step3-sekrzrwigsll]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:00:03 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:03 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:04 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:35 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:00:35 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:49 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:00:50 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:50 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:00:51 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:51 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerRingbuilderDeployment_Step3-sekrzrwigsll]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:00:52 [ControllerRingbuilderDeployment_Step3]: CREATE_COMPLETE state changed<br />
2016-07-31 06:00:52 [ControllerOvercloudServicesDeployment_Step4]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:52 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerOvercloudServicesDeployment_Step4-xanhnrjlhlck]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:00:52 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:53 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:00:53 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:03:09 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:03:10 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 06:03:13 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:03:14 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:04:12 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:04:13 [ControllerOvercloudServicesDeployment_Step4]: CREATE_COMPLETE state changed<br />
2016-07-31 06:04:13 [ControllerOvercloudServicesDeployment_Step5]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:04:13 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 06:04:13 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerOvercloudServicesDeployment_Step4-xanhnrjlhlck]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:04:14 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerOvercloudServicesDeployment_Step5-hebgu4yslwh2]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:04:14 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:04:14 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:04:15 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:05:29 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:05:29 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:06:14 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:06:14 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 06:06:33 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:06:34 [ControllerOvercloudServicesDeployment_Step5]: CREATE_COMPLETE state changed<br />
2016-07-31 06:06:34 [ControllerOvercloudServicesDeployment_Step6]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:06:34 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 06:06:34 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerOvercloudServicesDeployment_Step5-hebgu4yslwh2]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:06:34 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerOvercloudServicesDeployment_Step6-2mzciuctoola]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:06:34 [1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:06:35 [0]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:06:35 [2]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:08:09 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:08:10 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:08:25 [2]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:08:25 [2]: CREATE_COMPLETE state changed<br />
2016-07-31 06:09:29 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:09:30 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 06:09:30 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerOvercloudServicesDeployment_Step6-2mzciuctoola]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:09:31 [ControllerOvercloudServicesDeployment_Step6]: CREATE_COMPLETE state changed<br />
2016-07-31 06:09:31 [ControllerPostPuppet]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:09:31 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerPostPuppet-bceb6374n3yu]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:09:31 [ControllerPostPuppetMaintenanceModeConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:09:31 [ControllerPostPuppetRestartConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:09:31 [ControllerPostPuppetMaintenanceModeConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:09:31 [ControllerPostPuppetRestartConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:09:31 [ControllerPostPuppetMaintenanceModeDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:10:36 [ControllerPostPuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:10:36 [ControllerPostPuppetRestartDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:10:54 [1]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:10:54 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-31 06:10:54 [0]: CREATE_COMPLETE state changed<br />
2016-07-31 06:10:59 [1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:00 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ComputePuppetDeployment-6iey3yhrflle]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:01 [ComputePuppetDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:01 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:02 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ExtraConfig-bq5rbyg2xnor]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:02 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4-ExtraConfig-bq5rbyg2xnor]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:03 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:03 [overcloud-ComputeNodesPostDeployment-unvzoeo2d6h4]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:04 [ComputeNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:26 [ControllerPostPuppetRestartDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:26 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ControllerPostPuppet-bceb6374n3yu]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:27 [ControllerPostPuppet]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:27 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:28 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ExtraConfig-jhzhhys3nwsd]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:28 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a-ExtraConfig-jhzhhys3nwsd]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:29 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:29 [overcloud-ControllerNodesPostDeployment-jd3rmihnzq6a]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:29 [ControllerNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:30 [CephStorageNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:30 [BlockStorageNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:30 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:30 [CephStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:31 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:31 [VolumeArtifactsConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:31 [CephStoragePuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:31 [CephStoragePuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:31 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-CephStorageArtifactsConfig-6v5avhacddmk]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:31 [DeployArtifacts]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:31 [DeployArtifacts]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:31 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-CephStorageArtifactsConfig-6v5avhacddmk]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:32 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-VolumeArtifactsConfig-ecc3vkpmyh5b]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:32 [DeployArtifacts]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:32 [DeployArtifacts]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:32 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-VolumeArtifactsConfig-ecc3vkpmyh5b]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:32 [CephStorageArtifactsConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:32 [CephStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:32 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-CephStorageArtifactsDeploy-agdpcpxpmmg5]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:32 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-CephStorageArtifactsDeploy-agdpcpxpmmg5]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:33 [VolumeArtifactsConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:33 [VolumeArtifactsDeploy]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:33 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-VolumeArtifactsDeploy-uaa5xotz2wxt]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:33 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-VolumeArtifactsDeploy-uaa5xotz2wxt]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:33 [CephStorageArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:33 [CephStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:34 [VolumeArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:34 [VolumePuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:34 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-CephStorageDeployment_Step1-s677wxrlxtvk]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:34 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-CephStorageDeployment_Step1-s677wxrlxtvk]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:35 [VolumePuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:35 [VolumeDeployment_Step1]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:35 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-VolumeDeployment_Step1-bympimwcluwg]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:35 [CephStorageDeployment_Step1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:35 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:35 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-ExtraConfig-ej46y7xnajub]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:35 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz-ExtraConfig-ej46y7xnajub]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:36 [VolumeDeployment_Step1]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:36 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-31 06:11:36 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-VolumeDeployment_Step1-bympimwcluwg]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:36 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:36 [overcloud-CephStorageNodesPostDeployment-g42ajs2fb2wz]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:37 [CephStorageNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:37 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-ExtraConfig-on2vflovtnsn]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-31 06:11:37 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox-ExtraConfig-on2vflovtnsn]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:38 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:38 [overcloud-BlockStorageNodesPostDeployment-dxs3ionfumox]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-31 06:11:39 [BlockStorageNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-31 06:11:39 [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully<br />
Stack overcloud CREATE_COMPLETE<br />
Skipping "horizon" postconfig because it wasn't found in the endpoint map output<br />
PKI initialization in init-keystone is deprecated and will be removed.<br />
Warning: Permanently added '192.0.2.6' (ECDSA) to the list of known hosts.<br />
The following cert files already exist, use --rebuild to remove the existing files before regenerating:<br />
/etc/keystone/ssl/certs/ca.pem already exists<br />
/etc/keystone/ssl/private/signing_key.pem already exists<br />
/etc/keystone/ssl/certs/signing_cert.pem already exists<br />
Connection to 192.0.2.6 closed.<br />
Overcloud Endpoint: http://10.0.0.4:5000/v2.0<br />
Overcloud Deployed<br />
[stack@instack ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 9055adbe-bbfa-4a4c-b7f9-5570b6da03a7 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| 4285ddee-0368-461a-859c-f80a1b29b9a1 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.8 |
| 579f7e9f-aa4d-4415-82ad-ec48c86553b9 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.10 |
| 39eb184e-8ff1-4267-82a8-b7f5f1ffa1e7 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.11 |
| d5084553-b32d-47ac-93b6-d3188d1c3bfb | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.7 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
[stack@instack ~]$ neutron net-list
+--------------------------------------+--------------+--------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------------+--------------------------------------------+
| 67212060-3712-4ac7-b321-7f11ef2f24c2 | tenant | 82b41d3a-25ef-4593-a0f8-dcc3f4a23036 |
| | | 172.16.0.0/24 |
| 72368be6-9bb1-4dbc-b823-4247599a29f2 | storage | 138fbd64-a741-452e-8e79-84690f7b4f1c |
| | | 172.16.1.0/24 |
| cdf13bb1-5c69-410d-b262-6f2251f0aa1b | storage_mgmt | adf3fc93-1f14-43f4-ac59-9643aa0e8854 |
| | | 172.16.3.0/24 |
| 4b80d4bf-f3b0-4d53-901d-f65bb089d335 | external | e58d6ef8-88ea-4737-9732-ae79b7b889ee |
| | | 10.0.0.0/24 |
| 600ad88d-efe4-432b-83e8-d85c4f9ade81 | internal_api | a85b388e-478f-4db6-93f6-6a0d7e3edca5 |
| | | 172.16.2.0/24 |
| f53a98b2-18f8-4e92-8047-779651615a49 | ctlplane | 3b7d83e5-0bf3-4ba5-8008-4aa401598065 |
| | | 192.0.2.0/24 |
+--------------------------------------+--------------+--------------------------------------------+</pre>
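When an event stream like the one above is captured to a file, a short awk tally gives a per-state summary, which makes it easy to spot a stuck deployment at a glance. This is only an illustrative sketch: the file path and the inlined sample lines are hypothetical stand-ins for the real log.<br />

```shell
# Tally Heat CREATE events per state from a captured deployment log.
# The sample lines below are a tiny hypothetical excerpt of such a log.
cat > /tmp/deploy.log <<'EOF'
2016-07-31 05:55:37 [StorageDeployment_Step1]: CREATE_IN_PROGRESS state changed
2016-07-31 05:55:38 [StorageDeployment_Step1]: CREATE_COMPLETE state changed
2016-07-31 06:11:45 [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
EOF

# Count every CREATE_* token and print "state count" pairs.
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^CREATE_/) c[$i]++ }
     END { for (s in c) print s, c[s] }' /tmp/deploy.log | sort
```

A state such as CREATE_FAILED would stand out immediately in the sorted summary.<br />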
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0NKov3h5J8qKm2xK2504sPawmcumjlgDv2VuEdfoEckTelrYim976aijJMUwA7kqXc6480vma_Tyz6l1vVtha4uzeh4mUuz5MlX6sRZc6U_h4_gJcyr264vpMLTuGVfZ6BTz8sA/s1600/Screenshot+from+2016-07-31+10-38-16.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0NKov3h5J8qKm2xK2504sPawmcumjlgDv2VuEdfoEckTelrYim976aijJMUwA7kqXc6480vma_Tyz6l1vVtha4uzeh4mUuz5MlX6sRZc6U_h4_gJcyr264vpMLTuGVfZ6BTz8sA/s640/Screenshot+from+2016-07-31+10-38-16.png" width="640" /> </a><br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBE7_2NarlEutf7dAGTtWjob_eDI45RWyG8InLYUTIHBjlfLmLVRpBWd0V9Lq-ceW7DQKfdU5xx0gnnsGvDHmRl4rIjpSyaENzE01TchHh2GgXwffqKdlzRBb0WtJt0cJ4gKIVRA/s1600/Screenshot+from+2016-08-01+10-48-12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgBE7_2NarlEutf7dAGTtWjob_eDI45RWyG8InLYUTIHBjlfLmLVRpBWd0V9Lq-ceW7DQKfdU5xx0gnnsGvDHmRl4rIjpSyaENzE01TchHh2GgXwffqKdlzRBb0WtJt0cJ4gKIVRA/s640/Screenshot+from+2016-08-01+10-48-12.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0xwRsrw4UIntzBVEjhEsP9ha6Y0V5JvPTX9S4Fc39shubsc6h96IyFAVgteHMtfuCGCDwmWzhvzMGL_peEYqhR2L89uSC_19Q3wvL6FcX4tFpz7tYKBst8kpWq6NgPDWKVLPmag/s1600/Screenshot+from+2016-08-01+13-44-14.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0xwRsrw4UIntzBVEjhEsP9ha6Y0V5JvPTX9S4Fc39shubsc6h96IyFAVgteHMtfuCGCDwmWzhvzMGL_peEYqhR2L89uSC_19Q3wvL6FcX4tFpz7tYKBst8kpWq6NgPDWKVLPmag/s640/Screenshot+from+2016-08-01+13-44-14.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLNPHhQLFNxt739F1BvlZAPcTxMSaGmkoe-Of-kQhAZZxcp4PkhombADTpbn9yQ7xZnQX_BFidMNF8t6Qnp4vA0SfGs_ziL-dobyjS0RBP6nqjIMUAi1tNWHzRegVnmFAVvQq8Pg/s1600/Screenshot+from+2016-08-01+10-47-00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLNPHhQLFNxt739F1BvlZAPcTxMSaGmkoe-Of-kQhAZZxcp4PkhombADTpbn9yQ7xZnQX_BFidMNF8t6Qnp4vA0SfGs_ziL-dobyjS0RBP6nqjIMUAi1tNWHzRegVnmFAVvQq8Pg/s640/Screenshot+from+2016-08-01+10-47-00.png" width="640" /> </a></div>
<br />
<div style="text-align: left;">
</div>
<div class="separator" style="clear: both;">
<br /></div>
<br />
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-33648553769863042042016-07-25T13:46:00.000-07:002016-08-04T10:58:07.943-07:00TripleO QuickStart vs Attempt of official Mitaka TripleO HA install via instack-virt-setup<div dir="ltr" style="text-align: left;" trbidi="on">
The final target of this post is to compare the undercloud configuration built by QuickStart with the undercloud configuration built per the official documentation<br />
for stable Mitaka; please see <a href="http://bderzhavets.blogspot.ru/2016/07/attempt-of-official-tripleo-ha-install.html" target="_blank">Attempt of official Mitaka TripleO HA install via instack-virt-setup</a><br />
<br />
======================== <br />
TripleO QuickStart case <br />
========================<br />
<br />
First of all, right before running `openstack overcloud deploy --templates .... `,<br />
run the following commands on the undercloud VM:<br />
<br />
[stack@undercloud ~]$ sudo ovs-vsctl show<br />
b8b5ecbc-dc8d-43b8-8f03-09896d1b08b3<br />
Bridge br-int<br />
fail_mode: secure<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Port "tapd7a65b7a-48"<br />
tag: 1<br />
Interface "tapd7a65b7a-48"<br />
type: internal<br />
Bridge br-ctlplane<br />
Port "vlan10"<br />
tag: 10<br />
Interface "vlan10"<br />
type: internal<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Port "eth1"<br />
Interface "eth1"<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
ovs_version: "2.5.0"<br />
<br />
=============================<br />
<br />
[root@undercloud ~]# ifconfig<br />
<pre>br-ctlplane: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::28e:5aff:fe16:9ba1  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:8e:5a:16:9b:a1  txqueuelen 0  (Ethernet)
        RX packets 3383615  bytes 264121585 (251.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4873995  bytes 23750747704 (22.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.168.23.10  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::28e:5aff:fe16:9b9f  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:8e:5a:16:9b:9f  txqueuelen 1000  (Ethernet)
        RX packets 48092  bytes 42203536 (40.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 35731  bytes 4188571 (3.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet6 fe80::28e:5aff:fe16:9ba1  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:8e:5a:16:9b:a1  txqueuelen 1000  (Ethernet)
        RX packets 3385562  bytes 264368815 (252.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4876692  bytes 23773014677 (22.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3065638  bytes 25610179577 (23.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3065638  bytes 25610179577 (23.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt;  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:eb:ef:39  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::5ce2:8eff:fed9:2f89  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 5e:e2:8e:d9:2f:89  txqueuelen 0  (Ethernet)
        RX packets 1154  bytes 176564 (172.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1759  bytes 22168381 (21.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0</pre><br />
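The interface listing above boils down to three addresses that matter for the undercloud: br-ctlplane (192.0.2.1, the provisioning network), vlan10 (10.0.0.1, the "external" network), and eth0 (192.168.23.10, host connectivity). Extracting such an interface-to-IPv4 map with awk can be sketched as follows; the file path and the inlined trimmed sample are stand-ins for the live output.<br />

```shell
# Map interface names to IPv4 addresses in ifconfig-style output.
# /tmp/ifconfig.txt holds a trimmed sample of the listing above.
cat > /tmp/ifconfig.txt <<'EOF'
br-ctlplane: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255
vlan10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
EOF

# Interface headers start in column 1; indented 'inet ' lines carry the IPv4.
awk '/^[a-zA-Z]/ { iface = $1; sub(/:$/, "", iface) }
     /inet /     { print iface, $2 }' /tmp/ifconfig.txt
```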
===============================================<br />
Analyze the code of undercloud-post-install.sh, which follows below<br />
===============================================<br />
[stack@undercloud ~]$ cat undercloud-post-install.sh<br />
#!/bin/bash<br />
<br />
# Prepare the undercloud for deploy<br />
<br />
set -eux<br />
<br />
# Source in undercloud credentials.<br />
source /home/stack/stackrc<br />
###################### <br />
# Set of standard commands<br />
######################<br />
<br />
# Upload images to glance.<br />
openstack overcloud image upload<br />
<br />
openstack baremetal import --json instackenv.json<br />
openstack baremetal configure boot<br />
<br />
# Perform introspection if requested.<br />
<br />
. . . . . . . .<br />
<br />
################################################<br />
# Here follows critical VM network configuration portion<br />
################################################<br />
<br />
<pre><span style="color: #b45f06;"># enable NAT for "external" network
RULE="-s 10.0.0.1/24 ! -d 10.0.0.1/24 -j MASQUERADE"

if ! sudo iptables -t nat -C BOOTSTACK_MASQ $RULE; then
    sudo iptables -t nat -A BOOTSTACK_MASQ $RULE
    sudo sh -c 'iptables-save &gt; /etc/sysconfig/iptables'
fi

sudo bash -c 'cat &lt;&lt;EOF &gt; /etc/sysconfig/network-scripts/ifcfg-vlan10
DEVICE=vlan10
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSIntPort
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0
OVS_BRIDGE=br-ctlplane
OVS_OPTIONS="tag=10"
EOF'

sudo ifup ifcfg-vlan10

# clone the t-h-t templates if needed</span></pre><br />
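The `iptables -t nat -C` / `-A` pair above is the standard idempotent "check, then append" idiom: re-running the script never duplicates the MASQUERADE rule. The same idea can be sketched with a plain file standing in for the netfilter chain (real iptables calls need root, so this is purely illustrative; the `add_rule` helper and the file path are made up for the sketch).<br />

```shell
# Idempotent append, mimicking `iptables -C` (check) + `iptables -A` (append).
# A plain file stands in for the BOOTSTACK_MASQ chain; no root needed.
RULES=/tmp/bootstack_masq.rules
: > "$RULES"    # start with an empty "chain"

add_rule() {
    # grep -qxF: quiet, whole-line, fixed-string match -- the -C check
    if ! grep -qxF "$1" "$RULES"; then
        echo "$1" >> "$RULES"               # the -A append
    fi
}

RULE='-s 10.0.0.1/24 ! -d 10.0.0.1/24 -j MASQUERADE'
add_rule "$RULE"
add_rule "$RULE"    # second call finds the rule and does nothing

wc -l < "$RULES"    # the rule is present exactly once
```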
<br />
=========================================<br />
So, finally, upon overcloud deployment completion:<br />
=========================================<br />
<br />
[root@undercloud ~]# ip netns<br />
qdhcp-74126965-fbac-483d-9d8d-1c2ff43a2bd2<br />
[root@undercloud ~]# ip netns exec qdhcp-74126965-fbac-483d-9d8d-1c2ff43a2bd2 ifconfig<br />
<pre>lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

<span style="color: #b45f06;">tapd7a65b7a-48</span>: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.5  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::f816:3eff:fe9d:1a65  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether fa:16:3e:9d:1a:65  txqueuelen 0  (Ethernet)
        RX packets 1109  bytes 103765 (101.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 943  bytes 91866 (89.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

[root@undercloud ~]# ip netns exec qdhcp-74126965-fbac-483d-9d8d-1c2ff43a2bd2 route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
<span style="color: #b45f06;">0.0.0.0         192.0.2.1       0.0.0.0         UG    0      0        0 tapd7a65b7a-48
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 tapd7a65b7a-48</span></pre><br />
<br />
=======================================================<br />
Get back to `ovs-vsctl show` on the undercloud generated by QuickStart<br />
=======================================================<br />
<br />
Focus on devices tapd7a65b7a-48 and vlan10<br />
<br />
[stack@undercloud ~]$ sudo ovs-vsctl show<br />
b8b5ecbc-dc8d-43b8-8f03-09896d1b08b3<br />
Bridge br-int<br />
fail_mode: secure<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}  &lt;=== patch ports connecting br-int and br-ctlplane<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
<span style="color: #b45f06;"> Port "tapd7a65b7a-48"</span><br />
<span style="color: #b45f06;"> tag: 1</span><br />
<span style="color: #b45f06;"> Interface "tapd7a65b7a-48"</span><br />
type: internal<br />
<span style="color: #b45f06;">Bridge br-ctlplane</span><br />
<span style="color: #b45f06;"> Port "vlan10"</span><br />
<span style="color: #b45f06;"> tag: 10</span><br />
<span style="color: #b45f06;"> Interface "vlan10"</span><br />
<span style="color: #b45f06;"> type: internal</span><br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Port "eth1"<br />
Interface "eth1"<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
            options: {peer=int-br-ctlplane}  <=== patch ports connecting br-int and br-ctlplane<br />
    ovs_version: "2.5.0"<br />
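The patch-port peering shown above can also be extracted programmatically. Below is a minimal sketch (my own illustration, not part of the original setup) that parses `ovs-vsctl show`-style text and maps each patch interface to its peer; on a live undercloud you would feed it the real output of `sudo ovs-vsctl show`.

```python
import re

def patch_peers(ovs_show_text):
    """Map each interface name to its patch peer, from `ovs-vsctl show` output."""
    peers = {}
    current = None
    for line in ovs_show_text.splitlines():
        m = re.search(r'Interface "?([\w.-]+)"?', line)
        if m:
            current = m.group(1)
        m = re.search(r'options: \{peer=([\w.-]+)\}', line)
        if m and current:
            peers[current] = m.group(1)
    return peers

# Sample taken from the `ovs-vsctl show` output quoted above
sample = '''
    Port int-br-ctlplane
        Interface int-br-ctlplane
            type: patch
            options: {peer=phy-br-ctlplane}
    Port phy-br-ctlplane
        Interface phy-br-ctlplane
            type: patch
            options: {peer=int-br-ctlplane}
'''
print(patch_peers(sample))
# {'int-br-ctlplane': 'phy-br-ctlplane', 'phy-br-ctlplane': 'int-br-ctlplane'}
```

Each patch interface names the other as its peer, which is exactly the cross-bridge link between br-int and br-ctlplane.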
<br />
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhv6xsC4OdhUuJUL0qzA-x4HwQYkzzgoa939Fw1D0FthJycyxNMkpwQHhrvNxmrTRXWYISnPv6YXjKGeATQswN8LLB14NoB-TFE1_5HAwze7wfNmAYqUtyRfhm2kmUFgUUZ0hWQ5w/s1600/QuickStart-VS-TrpileO-Official.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhv6xsC4OdhUuJUL0qzA-x4HwQYkzzgoa939Fw1D0FthJycyxNMkpwQHhrvNxmrTRXWYISnPv6YXjKGeATQswN8LLB14NoB-TFE1_5HAwze7wfNmAYqUtyRfhm2kmUFgUUZ0hWQ5w/s640/QuickStart-VS-TrpileO-Official.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
========================================================</div>
<div class="separator" style="clear: both; text-align: left;">
Now verify the instack VM has been built per <a href="http://bderzhavets.blogspot.ru/2016/07/attempt-of-official-tripleo-ha-install.html" target="_blank">Attempt of official Mitaka TripleO HA install via instack-virt-setup</a> </div>
<div class="separator" style="clear: both; text-align: left;">
======================================================== </div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
[stack@ServerCentOS72 ~]$ virsh list --all</div>
Id Name State<br />
----------------------------------------------------<br />
2 instack running<br />
- baremetalbrbm_0 shut off<br />
- baremetalbrbm_1 shut off<br />
- baremetalbrbm_2 shut off<br />
- baremetalbrbm_3 shut off<br />
<br />
[stack@ServerCentOS72 ~]$ ssh root@192.168.122.193<br />
Last login: Mon Jul 25 13:59:52 2016 from 192.168.122.1<br />
[root@instack ~]# su - stack<br />
Last login: Mon Jul 25 13:59:54 UTC 2016 on pts/5<br />
[stack@instack ~]$ . stackrc<br />
[stack@instack ~]$ sudo ovs-vsctl show<br />
bc1c13cd-3651-4f79-87df-bdaf4f5fec01<br />
Bridge br-ctlplane<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
Port "eth1"<br />
Interface "eth1"<br />
<span style="color: #b45f06;"> Port "vlan10"<br /> tag: 10<br /> Interface "vlan10"<br /> error: "could not open network device vlan10 (No such device)"</span><br />
Bridge br-int<br />
fail_mode: secure<br />
Port "tap41e6fddf-31"<br />
tag: 1<br />
Interface "tap41e6fddf-31"<br />
type: internal<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
ovs_version: "2.5.0"<br />
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
[stack@instack ~]$ ifconfig</div>
br-ctlplane: flags=4163<up> mtu 1500<br /> inet 192.0.2.1 netmask 255.255.255.0 broadcast 192.0.2.255<br /> inet6 fe80::297:fff:fe5c:c66c prefixlen 64 scopeid 0x20<link></link><br /> ether 00:97:0f:5c:c6:6c txqueuelen 0 (Ethernet)<br /> RX packets 13 bytes 1038 (1.0 KiB)<br /> RX errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 12 bytes 816 (816.0 B)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br />eth0: flags=4163<up> mtu 1500<br /> inet 192.168.122.193 netmask 255.255.255.0 broadcast 192.168.122.255<br /> inet6 fe80::5054:ff:fe6f:906a prefixlen 64 scopeid 0x20<link></link><br /> ether 52:54:00:6f:90:6a txqueuelen 1000 (Ethernet)<br /> RX packets 1674 bytes 213273 (208.2 KiB)<br /> RX errors 0 dropped 9 overruns 0 frame 0<br /> TX packets 1078 bytes 163033 (159.2 KiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br />eth1: flags=4163<up> mtu 1500<br /> inet6 fe80::297:fff:fe5c:c66c prefixlen 64 scopeid 0x20<link></link><br /> ether 00:97:0f:5c:c6:6c txqueuelen 1000 (Ethernet)<br /> RX packets 8 bytes 648 (648.0 B)<br /> RX errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 14 bytes 1108 (1.0 KiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br />lo: flags=73<up> mtu 65536<br /> inet 127.0.0.1 netmask 255.0.0.0<br /> inet6 ::1 prefixlen 128 scopeid 0x10<host><br /> loop txqueuelen 0 (Local Loopback)<br /> RX packets 31888 bytes 10276736 (9.8 MiB)<br /> RX errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 31888 bytes 10276736 (9.8 MiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</host></up></up></up></up><br />
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMNWHLAajh2z2be-y2MgAkVzWvPmktvlmcLXpBS47WbdrmV69QFtkBsz7u9EbeKNY2E5Lht82Qg3-148Av7-SCzGDenkGDIY92Z1eBwBsIwrTlwuTafOGOb4rwConGybmGZ1XTTg/s1600/Screenshot+from+2016-07-26+00-23-49.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiMNWHLAajh2z2be-y2MgAkVzWvPmktvlmcLXpBS47WbdrmV69QFtkBsz7u9EbeKNY2E5Lht82Qg3-148Av7-SCzGDenkGDIY92Z1eBwBsIwrTlwuTafOGOb4rwConGybmGZ1XTTg/s640/Screenshot+from+2016-07-26+00-23-49.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Interface eth0 (192.168.122.193)</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOOlImUxWl9PmQ1a29PZwUJzQwrfeC0muA-WGf9sQ1XwJNLYE7uKhOcHpc_XIVbIP-2-gpq6UWoiM3kSQthUKWqOH2qb-GcNVtHyurkzvoTwIy1sy_Zx7rNCT1ZNW7Q_jX2hm_hQ/s1600/Screenshot+from+2016-07-26+00-32-00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgOOlImUxWl9PmQ1a29PZwUJzQwrfeC0muA-WGf9sQ1XwJNLYE7uKhOcHpc_XIVbIP-2-gpq6UWoiM3kSQthUKWqOH2qb-GcNVtHyurkzvoTwIy1sy_Zx7rNCT1ZNW7Q_jX2hm_hQ/s640/Screenshot+from+2016-07-26+00-32-00.png" width="640" /></a></div>
<br />
Interface eth1, which is an OVS port of OVS bridge br-ctlplane<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6VQM6Hv-ZIKaynGd0VfJnjRU5XpW3XTEvkVf-lPJSqFxReyhrGoTRzh6SD1KuUHdkDnCOhamIRFxD0NT80vqq8cmI_2p1ODv2KI8s5wwJUBbh5alXmW8eS0qEViYljr4lIlpfMA/s1600/Screenshot+from+2016-07-26+00-32-23.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg6VQM6Hv-ZIKaynGd0VfJnjRU5XpW3XTEvkVf-lPJSqFxReyhrGoTRzh6SD1KuUHdkDnCOhamIRFxD0NT80vqq8cmI_2p1ODv2KI8s5wwJUBbh5alXmW8eS0qEViYljr4lIlpfMA/s640/Screenshot+from+2016-07-26+00-32-23.png" width="640" /></a></div>
<br />
<br />
Thus any attempt to activate "Network Isolation" with an External Network by running<br />
<br />
openstack overcloud deploy --templates --libvirt-type qemu \<br />
--control-scale 3 \<br />
--compute-scale 1 \<br />
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \<br />
--ntp-server pool.ntp.org<br />
<br />
after committing all instructions from <a href="http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html" target="_blank">http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html</a><br />
is bound to fail. The instack VM is missing the vlan10 device that is supposed to become the external interface, attached as an OVS port to br-ctlplane. I believe this device was created in <a href="http://mariosandreou.com/tripleo/2016/06/17/deploy-tripleo-stable-mitaka.html" target="_blank">http://mariosandreou.com/tripleo/2016/06/17/deploy-tripleo-stable-mitaka.html</a> without being explicitly mentioned. <br />
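The symptom is visible directly in the `ovs-vsctl show` output quoted earlier: the "vlan10" port reports `error: "could not open network device vlan10 (No such device)"`. As a small illustration (my own sketch, not part of the original procedure), such per-interface errors can be flagged automatically:

```python
import re

def find_port_errors(ovs_show_text):
    """Return {interface: error message} for OVS interfaces reporting errors."""
    errors = {}
    current = None
    for line in ovs_show_text.splitlines():
        m = re.search(r'Interface "?([\w.-]+)"?', line)
        if m:
            current = m.group(1)
        m = re.search(r'error: "(.+)"', line)
        if m and current:
            errors[current] = m.group(1)
    return errors

# Fragment copied from the failing `ovs-vsctl show` output above
sample = '''
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                error: "could not open network device vlan10 (No such device)"
'''
print(find_port_errors(sample))
# {'vlan10': 'could not open network device vlan10 (No such device)'}
```

An empty result from a live `sudo ovs-vsctl show` would mean every OVS port is backed by a real device.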
<br />
These guidelines may actually be found in <a href="http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html" target="_blank">http://docs.openstack.org/developer/tripleo-docs/advanced_deployment/network_isolation.html </a><br />
but they mostly concern bare-metal deployment and do not deal specifically<br />
with instack-virt-setup. As a matter of fact, they are already implemented in<br />
"TripleO QuickStart", which meanwhile works just fine with any of the configs available<br />
on a 32 GB VIRTHOST.<br />
<br />
=========================================================<br />
Getting back to the post mentioned in the header: we want the Mitaka TripleO deployment<br />
to run on the instack VM with "Network Isolation", setting up the External network and<br />
the network serving VXLAN tunnels. For overcloud deployments with Ceph nodes, "Network Isolation" is obviously extremely important.<br />
So there is no doubt that the vlan10 device has to be created. <br />
=========================================================<br />
<br />
<span style="color: #b45f06;">sudo bash -c 'cat > /etc/sysconfig/network-scripts/ifcfg-vlan10 << EOF</span><br />
<span style="color: #b45f06;">DEVICE=vlan10</span><br />
<span style="color: #b45f06;">ONBOOT=yes</span><br />
<span style="color: #b45f06;">DEVICETYPE=ovs</span><br />
<span style="color: #b45f06;">TYPE=OVSIntPort</span><br />
<span style="color: #b45f06;">BOOTPROTO=static</span><br />
<span style="color: #b45f06;">IPADDR=10.0.0.1</span><br />
<span style="color: #b45f06;">NETMASK=255.255.255.0</span><br />
<span style="color: #b45f06;">OVS_BRIDGE=br-ctlplane</span><br />
<span style="color: #b45f06;">OVS_OPTIONS="tag=10"</span><br />
<span style="color: #b45f06;">EOF'</span><br />
<br />
<span style="color: #b45f06;">sudo ifup ifcfg-vlan10</span> <br />
<br />
sudo iptables -A BOOTSTACK_MASQ -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE -t nat<br />
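This MASQUERADE rule NATs traffic that originates in 10.0.0.0/24 and is destined anywhere outside that subnet. The match logic can be illustrated with the stdlib `ipaddress` module (sample addresses below are hypothetical, chosen to exercise each branch):

```python
import ipaddress

# External network from the rule: -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE
EXTERNAL = ipaddress.ip_network("10.0.0.0/24")

def is_masqueraded(src, dst):
    """True when the iptables rule above would match a src -> dst packet."""
    return (ipaddress.ip_address(src) in EXTERNAL
            and ipaddress.ip_address(dst) not in EXTERNAL)

print(is_masqueraded("10.0.0.5", "8.8.8.8"))    # True: leaving the external net
print(is_masqueraded("10.0.0.5", "10.0.0.7"))   # False: stays inside 10.0.0.0/24
print(is_masqueraded("192.0.2.5", "8.8.8.8"))   # False: source not in 10.0.0.0/24
```

Traffic between two hosts inside 10.0.0.0/24 is left alone; only packets leaving the external network get their source address rewritten.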
<br />
=============================<br />
Make sure the updates took effect<br />
=============================<br />
<br />
[boris@ServerCentOS72 ~]$ sudo su -<br />
[sudo] password for boris: <br />
Last login: Tue Jul 26 03:58:20 MSK 2016 on pts/0<br />
[root@ServerCentOS72 ~]# su - stack<br />
Last login: Tue Jul 26 03:58:45 MSK 2016 on pts/0<br />
[stack@ServerCentOS72 ~]$ ssh root@192.168.122.193<br />
Last login: Tue Jul 26 01:01:49 2016<br />
[root@instack ~]# su - stack<br />
Last login: Tue Jul 26 01:01:34 UTC 2016 on pts/0<br />
[stack@instack ~]$ sudo ovs-vsctl show<br />
bc1c13cd-3651-4f79-87df-bdaf4f5fec01<br />
<span style="color: #b45f06;"> Bridge br-ctlplane</span><br />
<span style="color: #b45f06;"> Port "eth1"</span><br />
<span style="color: #b45f06;"> Interface "eth1"</span><br />
<span style="color: #b45f06;"> Port br-ctlplane</span><br />
<span style="color: #b45f06;"> Interface br-ctlplane</span><br />
<span style="color: #b45f06;"> type: internal</span><br />
<span style="color: #b45f06;"> Port phy-br-ctlplane</span><br />
<span style="color: #b45f06;"> Interface phy-br-ctlplane</span><br />
<span style="color: #b45f06;"> type: patch</span><br />
<span style="color: #b45f06;"> options: {peer=int-br-ctlplane}</span><br />
<span style="color: #b45f06;"> Port "vlan10"</span><br />
<span style="color: #b45f06;"> tag: 10</span><br />
<span style="color: #b45f06;"> Interface "vlan10"</span><br />
<span style="color: #b45f06;"> type: internal</span><br />
Bridge br-int<br />
fail_mode: secure<br />
Port "tap41e6fddf-31"<br />
tag: 1<br />
Interface "tap41e6fddf-31"<br />
type: internal<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
ovs_version: "2.5.0"<br />
[stack@instack ~]$ ifconfig<br />
br-ctlplane: flags=4163<up> mtu 1500<br /> inet 192.0.2.1 netmask 255.255.255.0 broadcast 192.0.2.255<br /> inet6 fe80::297:fff:fe5c:c66c prefixlen 64 scopeid 0x20<link></link><br /> ether 00:97:0f:5c:c6:6c txqueuelen 0 (Ethernet)<br /> RX packets 2751712 bytes 205714576 (196.1 MiB)<br /> RX errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 2711617 bytes 12789727774 (11.9 GiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br />eth0: flags=4163<up> mtu 1500<br /> inet 192.168.122.193 netmask 255.255.255.0 broadcast 192.168.122.255<br /> inet6 fe80::5054:ff:fe6f:906a prefixlen 64 scopeid 0x20<link></link><br /> ether 52:54:00:6f:90:6a txqueuelen 1000 (Ethernet)<br /> RX packets 4767 bytes 590862 (577.0 KiB)<br /> RX errors 0 dropped 9 overruns 0 frame 0<br /> TX packets 3138 bytes 488880 (477.4 KiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br />eth1: flags=4163<up> mtu 1500<br /> inet6 fe80::297:fff:fe5c:c66c prefixlen 64 scopeid 0x20<link></link><br /> ether 00:97:0f:5c:c6:6c txqueuelen 1000 (Ethernet)<br /> RX packets 2751684 bytes 205708317 (196.1 MiB)<br /> RX errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 2711674 bytes 12789742191 (11.9 GiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br />lo: flags=73<up> mtu 65536<br /> inet 127.0.0.1 netmask 255.0.0.0<br /> inet6 ::1 prefixlen 128 scopeid 0x10<host><br /> loop txqueuelen 0 (Local Loopback)<br /> RX packets 319388 bytes 1493930109 (1.3 GiB)<br /> RX errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 319388 bytes 1493930109 (1.3 GiB)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br /><br /><span style="color: #b45f06;">vlan10: flags=4163</span><up><span style="color: #b45f06;"> mtu 1500<br /> inet 10.0.0.1 netmask 255.255.255.0 broadcast 10.0.0.255<br /> inet6 fe80::1478:deff:fe20:7b86 prefixlen 64 scopeid 0x20<br /> ether 16:78:de:20:7b:86 txqueuelen 0 (Ethernet)<br /> RX packets 0 bytes 0 (0.0 B)<br /> RX 
errors 0 dropped 0 overruns 0 frame 0<br /> TX packets 12 bytes 816 (816.0 B)<br /> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0</span><link></link><br />============================================<br />Proceed as follows. Create the file:<br />============================================<br />[stack@instack ~]$ cat network_env.yaml<br />{<br /> "parameter_defaults": {<br /> "ControlPlaneDefaultRoute": "192.0.2.1",<br /> "ControlPlaneSubnetCidr": "24",<br /> "DnsServers": [<br /> "192.168.122.43"<br /> ],<br /> "EC2MetadataIp": "192.0.2.1",<br /> "ExternalAllocationPools": [<br /> {<br /> "end": "10.0.0.250",<br /> "start": "10.0.0.4"<br /> }<br /> ],<br /> "ExternalNetCidr": "10.0.0.1/24",<br /> "NeutronExternalNetworkBridge": ""<br /> }<br />}</up></host></up></up></up></up><br />
<up><up><up><up><host><up><br /></up></host></up></up></up></up>
<up><up><up><up><host><up>Where </up></host></up></up></up></up>192.168.122.43 is the DNS server for the overcloud (the DnsServers value above).<br />
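Before deploying, it is worth sanity-checking that the allocation pool actually fits inside the external CIDR. A small sketch using the stdlib `ipaddress` module (values copied from the file above; `strict=False` accepts the host-style `10.0.0.1/24` notation):

```python
import ipaddress

# parameter_defaults copied from network_env.yaml above
params = {
    "ExternalAllocationPools": [{"start": "10.0.0.4", "end": "10.0.0.250"}],
    "ExternalNetCidr": "10.0.0.1/24",
}

# strict=False tolerates a CIDR written with host bits set (10.0.0.1/24)
ext = ipaddress.ip_network(params["ExternalNetCidr"], strict=False)
for pool in params["ExternalAllocationPools"]:
    start = ipaddress.ip_address(pool["start"])
    end = ipaddress.ip_address(pool["end"])
    assert start in ext and end in ext and start < end, "pool outside external CIDR"
print("network_env.yaml external network parameters look consistent")
```

The same pattern extends to checking that EC2MetadataIp and ControlPlaneDefaultRoute fall inside the control-plane subnet.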
<up><up><up><up><host><up>======================<br />Then run:<br />======================<br />[stack@instack ~]$ source stackrc<br />
</up></host></up></up></up></up><br />
<pre>[stack@instack ~]$ openstack overcloud deploy --templates --control-scale 3 \
--compute-scale 1 \
--libvirt-type qemu \
--ntp-server pool.ntp.org \
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml \
-e $HOME/network_env.yaml
</pre>
<br />
. . . . <br />
<br />
2016-07-26 01:58:07 [overcloud-ControllerNodesPostDeployment-l5rjfq2f44f5-ControllerOvercloudServicesDeployment_Step6-xl7prpyio7tq]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:58:08 [ControllerOvercloudServicesDeployment_Step6]: CREATE_COMPLETE state changed<br />
2016-07-26 01:58:08 [ControllerPostPuppet]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:58:09 [overcloud-ControllerNodesPostDeployment-l5rjfq2f44f5-ControllerPostPuppet-syooepkjk5pr]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:58:09 [ControllerPostPuppetMaintenanceModeConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:58:09 [ControllerPostPuppetRestartConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:58:09 [ControllerPostPuppetMaintenanceModeConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 01:58:09 [ControllerPostPuppetRestartConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 01:58:09 [ControllerPostPuppetMaintenanceModeDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:07 [ControllerPostPuppetMaintenanceModeDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:07 [ControllerPostPuppetRestartDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:28 [0]: SIGNAL_IN_PROGRESS Signal: deployment succeeded<br />
2016-07-26 01:59:28 [0]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:29 [ComputePuppetDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:29 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:29 [overcloud-ComputeNodesPostDeployment-mfhiy6ynkcfc-ComputePuppetDeployment-kcd5ajm4snpd]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:30 [overcloud-ComputeNodesPostDeployment-mfhiy6ynkcfc-ExtraConfig-b7h73kicxfbn]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:59:30 [overcloud-ComputeNodesPostDeployment-mfhiy6ynkcfc-ExtraConfig-b7h73kicxfbn]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:31 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:32 [ComputeNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:32 [overcloud-ComputeNodesPostDeployment-mfhiy6ynkcfc]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:32 [ComputeNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:55 [ControllerPostPuppetRestartDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:56 [overcloud-ControllerNodesPostDeployment-l5rjfq2f44f5-ControllerPostPuppet-syooepkjk5pr]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:56 [ControllerPostPuppet]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:56 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:56 [overcloud-ControllerNodesPostDeployment-l5rjfq2f44f5-ExtraConfig-hubh2nqfitzf]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:59:56 [overcloud-ControllerNodesPostDeployment-l5rjfq2f44f5-ExtraConfig-hubh2nqfitzf]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:57 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:57 [overcloud-ControllerNodesPostDeployment-l5rjfq2f44f5]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:58 [ControllerNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:58 [BlockStorageNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:58 [CephStorageNodesPostDeployment]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:58 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:59:58 [VolumeArtifactsConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:58 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-VolumeArtifactsConfig-44x24fxyh2f4]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:59:58 [DeployArtifacts]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:58 [DeployArtifacts]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:58 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-VolumeArtifactsConfig-44x24fxyh2f4]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 01:59:58 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:59:58 [CephStorageArtifactsConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:59 [VolumeArtifactsConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:59 [VolumeArtifactsDeploy]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:59 [CephStoragePuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:59 [CephStoragePuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:59 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-CephStorageArtifactsConfig-hfqfkeddlhlp]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 01:59:59 [DeployArtifacts]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 01:59:59 [DeployArtifacts]: CREATE_COMPLETE state changed<br />
2016-07-26 01:59:59 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-CephStorageArtifactsConfig-hfqfkeddlhlp]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:00 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-VolumeArtifactsDeploy-xnjui4d7smte]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 02:00:00 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-VolumeArtifactsDeploy-xnjui4d7smte]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:00 [CephStorageArtifactsConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:00 [CephStorageArtifactsDeploy]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 02:00:00 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-CephStorageArtifactsDeploy-wyq7fd5t6aju]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 02:00:00 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-CephStorageArtifactsDeploy-wyq7fd5t6aju]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:01 [VolumeArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:01 [VolumePuppetConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 02:00:01 [CephStorageArtifactsDeploy]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:01 [CephStorageDeployment_Step1]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 02:00:01 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-CephStorageDeployment_Step1-6mvwabo37ksn]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 02:00:01 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-CephStorageDeployment_Step1-6mvwabo37ksn]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:02 [VolumePuppetConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:02 [VolumeDeployment_Step1]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 02:00:02 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-VolumeDeployment_Step1-elyo3xpsu6qu]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 02:00:02 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-VolumeDeployment_Step1-elyo3xpsu6qu]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:02 [CephStorageDeployment_Step1]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:02 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 02:00:03 [VolumeDeployment_Step1]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:03 [ExtraConfig]: CREATE_IN_PROGRESS state changed<br />
2016-07-26 02:00:03 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-ExtraConfig-w5wrecd3f33k]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 02:00:03 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6-ExtraConfig-w5wrecd3f33k]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:03 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-ExtraConfig-hanwj4izf6jd]: CREATE_IN_PROGRESS Stack CREATE started<br />
2016-07-26 02:00:03 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as-ExtraConfig-hanwj4izf6jd]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:04 [CephStorageNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:04 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:04 [overcloud-BlockStorageNodesPostDeployment-7lwxbt5vtwj6]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:04 [ExtraConfig]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:04 [overcloud-CephStorageNodesPostDeployment-xi3jsga2e4as]: CREATE_COMPLETE Stack CREATE completed successfully<br />
2016-07-26 02:00:05 [BlockStorageNodesPostDeployment]: CREATE_COMPLETE state changed<br />
2016-07-26 02:00:05 [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully<br />
Stack overcloud CREATE_COMPLETE<br />
/home/stack/.ssh/known_hosts updated.<br />
Original contents retained as /home/stack/.ssh/known_hosts.old<br />
Skipping "horizon" postconfig because it wasn't found in the endpoint map output<br />
PKI initialization in init-keystone is deprecated and will be removed.<br />
Warning: Permanently added '192.0.2.16' (ECDSA) to the list of known hosts.<br />
The following cert files already exist, use --rebuild to remove the existing files before regenerating:<br />
/etc/keystone/ssl/certs/ca.pem already exists<br />
/etc/keystone/ssl/private/signing_key.pem already exists<br />
/etc/keystone/ssl/certs/signing_cert.pem already exists<br />
Connection to 192.0.2.16 closed.<br />
Overcloud Endpoint: http://10.0.0.4:5000/v2.0<br />
Overcloud Deployed<br />
<br />
[stack@instack ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 068dcf61-1c07-49d3-97f9-66e0ff1896e4 | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.19 |
| 1083bc50-4e30-4a8d-8a02-d60c35bab0b7 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.18 |
| 3d88de4e-2c25-4a7e-ac05-580d5e4532f5 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.20 |
| f4589428-ba17-44f5-b73c-db38af7963e9 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.17 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG94URnbDkGuZvHY0cdJXK1Xv-9BF2hSIoF2NB-zCBt7TvvSwcB0R78WxUT3groINRAhxKoSCeSkFKlxrw2DcqrRGiIiEGup3HxtPWNJzXaJW28cdo_tvgOx20GWthMGBZbwQ9hQ/s1600/Screenshot+from+2016-07-26+06-10-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG94URnbDkGuZvHY0cdJXK1Xv-9BF2hSIoF2NB-zCBt7TvvSwcB0R78WxUT3groINRAhxKoSCeSkFKlxrw2DcqrRGiIiEGup3HxtPWNJzXaJW28cdo_tvgOx20GWthMGBZbwQ9hQ/s640/Screenshot+from+2016-07-26+06-10-47.png" width="640" /></a></div>
<br />
[stack@instack ~]$ neutron net-list<br />
<pre>+--------------------------------------+--------------+--------------------------------------------+
| id | name | subnets |
+--------------------------------------+--------------+--------------------------------------------+
| cc29c009-f2c8-457c-a92c-021acf650b78 | tenant | 3afaf44d-19b5-46ac-8534-fe1520a14a1c |
| | | 172.16.0.0/24 |
| e8e8d778-1992-4ee2-9b4e-ac349e8d7985 | external | 8ad5eeef-6860-4781-89e6-5132cf633013 |
| | | 10.0.0.0/24 |
| 00bbd0fb-94b0-406a-b5b1-aa60b5526898 | internal_api | 1b84e01e-deb9-458c-990a-94d92f69f668 |
| | | 172.16.2.0/24 |
| 65d426b0-0b02-4654-b598-1ba368a43d35 | storage | 849719fb-877c-49f7-a606-959e4720011d |
| | | 172.16.1.0/24 |
| 687769a2-5438-434d-8085-2988e592755b | storage_mgmt | fc36759d-25ee-4cbe-908b-819dad6a222d |
| | | 172.16.3.0/24 |
| 584468f0-d26e-4a47-89a1-bca5847404fb | ctlplane | 43d05014-098b-4eb5-8582-178404ff0e24 |
| | | 192.0.2.0/24 |
+--------------------------------------+--------------+--------------------------------------------+
</pre>
<br />
[stack@instack ~]$ neutron port-list<br />
<pre>+---------------------------+---------------------------+-------------------+---------------------------+
| id | name | mac_address | fixed_ips |
+---------------------------+---------------------------+-------------------+---------------------------+
| 5dbf529a-02e4-48a7-8989-f | | 00:88:e5:d6:ac:6b | {"subnet_id": "43d05014 |
| 5e325305cc6 | | | -098b- |
| | | | 4eb5-8582-178404ff0e24", |
| | | | "ip_address": |
| | | | "192.0.2.23"} |
| a3f5f685-7d2b- | | 00:3a:d3:58:f1:1e | {"subnet_id": "43d05014 |
| 4ba9-98d3-8f848dee70b3 | | | -098b- |
| | | | 4eb5-8582-178404ff0e24", |
| | | | "ip_address": |
| | | | "192.0.2.22"} |
| 1b91fff8-2425-4b69-a017-a | | 00:cb:46:93:d8:c8 | {"subnet_id": "43d05014 |
| 73ad358d4e8 | | | -098b- |
| | | | 4eb5-8582-178404ff0e24", |
| | | | "ip_address": |
| | | | "192.0.2.6"} |
| d1d3022d-c0f4-4632-9fe6-7 | | 00:16:36:c5:97:67 | {"subnet_id": "43d05014 |
| da36d075793 | | | -098b- |
| | | | 4eb5-8582-178404ff0e24", |
| | | | "ip_address": |
| | | | "192.0.2.24"} |
| 07ef7fb6-2dc4-4ebd-b21c- | | fa:16:3e:32:91:67 | {"subnet_id": "fc36759d- |
| b74571a48d68 | | | 25ee-4cbe-908b- |
| | | | 819dad6a222d", |
| | | | "ip_address": |
| | | | "172.16.3.5"} |
| 114b7dd0-a4b8-45bb-8180-a | | fa:16:3e:3c:8d:a2 | {"subnet_id": "3afaf44d-1 |
| 48b082ef2c5 | | | 9b5-46ac-8534-fe1520a14a1 |
| | | | c", "ip_address": |
| | | | "172.16.0.6"} |
| 2ee08468-7e92-4bb4-b0bb- | | fa:16:3e:9c:c0:6c | {"subnet_id": "1b84e01e- |
| e9568aabe958 | | | deb9-458c-990a- |
| | | | 94d92f69f668", |
| | | | "ip_address": |
| | | | "172.16.2.7"} |
| 41e6fddf-3183-4545-88b4-3 | | fa:16:3e:27:0f:c6 | {"subnet_id": "43d05014 |
| b5bb7e3db68 | | | -098b- |
| | | | 4eb5-8582-178404ff0e24", |
| | | | "ip_address": |
| | | | "192.0.2.5"} |
| 45aa155d-d179-484c- | | fa:16:3e:dd:86:7d | {"subnet_id": "849719fb- |
| 8cf7-9720eaeb0438 | | | 877c- |
| | | | 49f7-a606-959e4720011d", |
| | | | "ip_address": |
| | | | "172.16.1.8"} |
| 4d068805-a4a6-446d- | | fa:16:3e:08:5a:ed | {"subnet_id": "3afaf44d-1 |
| 95c0-8c826df6389d | | | 9b5-46ac-8534-fe1520a14a1 |
| | | | c", "ip_address": |
| | | | "172.16.0.7"} |
| 6ce71590-6887-49dd-b14e- | public_virtual_ip | fa:16:3e:b1:ec:6b | {"subnet_id": "8ad5eeef-6 |
| 2c9567f2fa62 | | | 860-4781-89e6-5132cf63301 |
| | | | 3", "ip_address": |
| | | | "10.0.0.4"} |
| 7a1645ea-a0c6-4550-872a- | | fa:16:3e:b3:26:6f | {"subnet_id": "8ad5eeef-6 |
| c0c62c2a6015 | | | 860-4781-89e6-5132cf63301 |
| | | | 3", "ip_address": |
| | | | "10.0.0.5"} |
| 7e515bc1-3dfb- | redis_virtual_ip | fa:16:3e:7c:c5:ad | {"subnet_id": "1b84e01e- |
| 4a6c-a429-d05db14e330f | | | deb9-458c-990a- |
| | | | 94d92f69f668", |
| | | | "ip_address": |
| | | | "172.16.2.4"} |
| 8946c126-1767-4e35-b843-c | control_virtual_ip | fa:16:3e:33:e3:ba | {"subnet_id": "43d05014 |
| 1d37c401ae6 | | | -098b- |
| | | | 4eb5-8582-178404ff0e24", |
| | | | "ip_address": |
| | | | "192.0.2.21"} |
| 89a16a00-79a0-4670-b109-6 | | fa:16:3e:02:dc:5d | {"subnet_id": "fc36759d- |
| cfe52f96907 | | | 25ee-4cbe-908b- |
| | | | 819dad6a222d", |
| | | | "ip_address": |
| | | | "172.16.3.7"} |
| 9950087d-f19b-4363-9187-9 | | fa:16:3e:a3:d1:38 | {"subnet_id": "849719fb- |
| eacfded5942 | | | 877c- |
| | | | 49f7-a606-959e4720011d", |
| | | | "ip_address": |
| | | | "172.16.1.7"} |
| 9d5ca5b8-77e0-4fef-b441-0 | | fa:16:3e:86:26:50 | {"subnet_id": "849719fb- |
| b9731fbbe94 | | | 877c- |
| | | | 49f7-a606-959e4720011d", |
| | | | "ip_address": |
| | | | "172.16.1.6"} |
| b41648b9-41d9-44fd-a990-1 | | fa:16:3e:f2:1d:79 | {"subnet_id": "1b84e01e- |
| 9adc8789a3b | | | deb9-458c-990a- |
| | | | 94d92f69f668", |
| | | | "ip_address": |
| | | | "172.16.2.6"} |
| b9376ba0-33b9-40f0-87a4-0 | | fa:16:3e:ba:f6:06 | {"subnet_id": "8ad5eeef-6 |
| 0f8559f814a | | | 860-4781-89e6-5132cf63301 |
| | | | 3", "ip_address": |
| | | | "10.0.0.6"} |
| b9bd0dc3-a46e-46bc- | | fa:16:3e:42:51:ac | {"subnet_id": "8ad5eeef-6 |
| beb9-c6fa415a1815 | | | 860-4781-89e6-5132cf63301 |
| | | | 3", "ip_address": |
| | | | "10.0.0.7"} |
| c860890b-e22f-4201-b0ff- | | fa:16:3e:86:1c:78 | {"subnet_id": "849719fb- |
| e8759de2a8d2 | | | 877c- |
| | | | 49f7-a606-959e4720011d", |
| | | | "ip_address": |
| | | | "172.16.1.5"} |
| cc59df47-0e26-4dd9-95ac- | | fa:16:3e:1e:2b:76 | {"subnet_id": "fc36759d- |
| 22d4c359be12 | | | 25ee-4cbe-908b- |
| | | | 819dad6a222d", |
| | | | "ip_address": |
| | | | "172.16.3.6"} |
| cef623b3-3702-44f4-8dfa- | storage_management_virtua | fa:16:3e:2b:df:8b | {"subnet_id": "fc36759d- |
| 61bdea8bdbb4 | l_ip | | 25ee-4cbe-908b- |
| | | | 819dad6a222d", |
| | | | "ip_address": |
| | | | "172.16.3.4"} |
| e261a891-3637-4fe5-bf8f- | | fa:16:3e:d1:09:c4 | {"subnet_id": "3afaf44d-1 |
| 19d0772f4268 | | | 9b5-46ac-8534-fe1520a14a1 |
| | | | c", "ip_address": |
| | | | "172.16.0.4"} |
| e269194b-1a4c-48de- | | fa:16:3e:c2:f2:f5 | {"subnet_id": "1b84e01e- |
| 9a28-91d1fe949b46 | | | deb9-458c-990a- |
| | | | 94d92f69f668", |
| | | | "ip_address": |
| | | | "172.16.2.9"} |
| e4e1f0be-de29-4f75-ac68-8 | internal_api_virtual_ip | fa:16:3e:e3:6a:b2 | {"subnet_id": "1b84e01e- |
| ea53c406849 | | | deb9-458c-990a- |
| | | | 94d92f69f668", |
| | | | "ip_address": |
| | | | "172.16.2.5"} |
| ec0c243b-669d-475e-9dfa- | | fa:16:3e:28:da:cd | {"subnet_id": "1b84e01e- |
| df66159e9a14 | | | deb9-458c-990a- |
| | | | 94d92f69f668", |
| | | | "ip_address": |
| | | | "172.16.2.8"} |
| f3e1242b-a29c-47fa- | storage_virtual_ip | fa:16:3e:ef:d0:26 | {"subnet_id": "849719fb- |
| 94e1-463607190165 | | | 877c- |
| | | | 49f7-a606-959e4720011d", |
| | | | "ip_address": |
| | | | "172.16.1.4"} |
| f998e578-dffd-4328-a15f- | | fa:16:3e:8d:05:22 | {"subnet_id": "3afaf44d-1 |
| 67c43d4d9ca4 | | | 9b5-46ac-8534-fe1520a14a1 |
| | | | c", "ip_address": |
| | | | "172.16.0.5"} |
+---------------------------+---------------------------+-------------------+---------------------------+
</pre>
<br />
[stack@instack ~]$ ssh heat-admin@192.0.2.19<br />
The authenticity of host '192.0.2.19 (192.0.2.19)' can't be established.<br />
ECDSA key fingerprint is 61:31:94:19:12:51:a3:df:be:22:f6:0a:e7:dc:a1:d7.<br />
Are you sure you want to continue connecting (yes/no)? yes<br />
Warning: Permanently added '192.0.2.19' (ECDSA) to the list of known hosts.<br />
Last login: Tue Jul 26 02:00:36 2016 from 192.0.2.1<br />
[heat-admin@overcloud-controller-0 ~]$ sudo su -<br />
[root@overcloud-controller-0 ~]# pcs status<br />
Cluster name: tripleo_cluster<br />
Last updated: Tue Jul 26 02:03:03 2016 Last change: Tue Jul 26 01:58:37 2016 by root via cibadmin on overcloud-controller-0<br />
Stack: corosync<br />
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum<br />
3 nodes and 127 resources configured<br />
<br />
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Full list of resources:<br />
<br />
ip-192.0.2.16 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1<br />
ip-172.16.3.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: haproxy-clone [haproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: galera-master [galera]<br />
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: memcached-clone [memcached]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
ip-10.0.0.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1<br />
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: rabbitmq-clone [rabbitmq]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-core-clone [openstack-core]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: redis-master [redis]<br />
Masters: [ overcloud-controller-0 ]<br />
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: mongod-clone [mongod]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0<br />
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-clone [openstack-heat-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-api-clone [openstack-glance-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-api-clone [openstack-nova-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: delay-clone [delay]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-server-clone [neutron-server]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: httpd-clone [httpd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled<br />
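With 127 resources configured across three controllers, a quick way to spot anything unhealthy is to filter the `pcs status` output for resources that are not started. A minimal sketch, run here against a hard-coded sample excerpt rather than a live cluster (on a controller you would pipe the real `pcs status` instead):

```shell
# Scan "pcs status"-style output for resources that are not Started.
# (Sample excerpt hard-coded below; on a controller you would run:
#   pcs status | grep -E 'Stopped|FAILED')
sample='ip-192.0.2.16 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
openstack-cinder-volume (systemd:openstack-cinder-volume): Stopped
rabbitmq-clone: Started overcloud-controller-1'

unhealthy=$(echo "$sample" | grep -E 'Stopped|FAILED')
echo "$unhealthy"
```

An empty result means every resource in the scanned output is running.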
<br />
=======================<br />
Verify `ovs-vsctl show`<br />
=======================<br />
<br />
[root@overcloud-controller-0 ~]# ovs-vsctl show<br />
b31c4b88-0b22-4753-bf5e-88a7b4844914<br />
Bridge br-int<br />
fail_mode: secure<br />
Port patch-tun<br />
Interface patch-tun<br />
type: patch<br />
options: {peer=patch-int}<br />
Port int-br-ex<br />
Interface int-br-ex<br />
type: patch<br />
options: {peer=phy-br-ex}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Bridge br-ex<br />
Port "vlan10"<br />
tag: 10<br />
Interface "vlan10"<br />
type: internal<br />
Port "eth0"<br />
Interface "eth0"<br />
Port phy-br-ex<br />
Interface phy-br-ex<br />
type: patch<br />
options: {peer=int-br-ex}<br />
Port "vlan50"<br />
tag: 50<br />
Interface "vlan50"<br />
type: internal<br />
Port br-ex<br />
Interface br-ex<br />
type: internal<br />
Port "vlan40"<br />
tag: 40<br />
Interface "vlan40"<br />
type: internal<br />
Port "vlan20"<br />
tag: 20<br />
Interface "vlan20"<br />
type: internal<br />
Port "vlan30"<br />
tag: 30<br />
Interface "vlan30"<br />
type: internal<br />
Bridge br-tun<br />
fail_mode: secure<br />
Port "vxlan-ac100007"<br />
Interface "vxlan-ac100007"<br />
type: vxlan<br />
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.7"}<br />
Port patch-int<br />
Interface patch-int<br />
type: patch<br />
options: {peer=patch-tun}<br />
Port br-tun<br />
Interface br-tun<br />
type: internal<br />
Port "vxlan-ac100005"<br />
Interface "vxlan-ac100005"<br />
type: vxlan<br />
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.5"}<br />
Port "vxlan-ac100004"<br />
Interface "vxlan-ac100004"<br />
type: vxlan<br />
options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.4"}<br />
ovs_version: "2.5.0"<br />
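The ML2/OVS agent names each tunnel port after the remote endpoint's IP address written in hex: `vxlan-ac100007` above carries `remote_ip="172.16.0.7"`. The decoding can be checked with a few lines of shell:

```shell
# Decode an OVS VXLAN port name such as vxlan-ac100007 back into the
# remote tunnel IP: ac 10 00 07 -> 172.16.0.7.
port=vxlan-ac100007
hex=${port#vxlan-}   # strip the "vxlan-" prefix
ip=$(printf '%d.%d.%d.%d' 0x${hex:0:2} 0x${hex:2:2} 0x${hex:4:2} 0x${hex:6:2})
echo "$ip"
```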
<br />
=================================<br />
Now log into the compute node<br />
=================================<br />
<br />
[stack@instack ~]$ ssh heat-admin@192.0.2.17<br />
Last login: Tue Jul 26 02:32:46 2016 from 192.0.2.1<br />
<br />
[heat-admin@overcloud-novacompute-0 ~]$ sudo su -<br />
Last login: Tue Jul 26 02:32:55 UTC 2016 on pts/0<br />
<br />
[root@overcloud-novacompute-0 ~]# openstack-service status<br />
MainPID=19664 Id=neutron-openvswitch-agent.service ActiveState=active<br />
MainPID=20292 Id=openstack-ceilometer-compute.service ActiveState=active<br />
MainPID=19693 Id=openstack-nova-compute.service ActiveState=active<br />
<br />
[root@overcloud-novacompute-0 ~]# ovs-vsctl show<br />
c9f526a2-1ae0-4745-9fe0-3ce76ed727cb<br />
Bridge br-tun<br />
fail_mode: secure<br />
Port patch-int<br />
Interface patch-int<br />
type: patch<br />
options: {peer=patch-tun}<br />
Port "vxlan-ac100005"<br />
Interface "vxlan-ac100005"<br />
type: vxlan<br />
options: {df_default="true", in_key=flow, local_ip="172.16.0.7", out_key=flow, remote_ip="172.16.0.5"}<br />
Port br-tun<br />
Interface br-tun<br />
type: internal<br />
Port "vxlan-ac100006"<br />
Interface "vxlan-ac100006"<br />
type: vxlan<br />
options: {df_default="true", in_key=flow, local_ip="172.16.0.7", out_key=flow, remote_ip="172.16.0.6"}<br />
Port "vxlan-ac100004"<br />
Interface "vxlan-ac100004"<br />
type: vxlan<br />
options: {df_default="true", in_key=flow, local_ip="172.16.0.7", out_key=flow, remote_ip="172.16.0.4"}<br />
Bridge br-ex<br />
Port br-ex<br />
Interface br-ex<br />
type: internal<br />
Port "vlan50"<br />
tag: 50<br />
Interface "vlan50"<br />
type: internal<br />
Port "vlan20"<br />
tag: 20<br />
Interface "vlan20"<br />
type: internal<br />
Port "eth0"<br />
Interface "eth0"<br />
Port phy-br-ex<br />
Interface phy-br-ex<br />
type: patch<br />
options: {peer=int-br-ex}<br />
Port "vlan30"<br />
tag: 30<br />
Interface "vlan30"<br />
type: internal<br />
Bridge br-int<br />
fail_mode: secure<br />
Port int-br-ex<br />
Interface int-br-ex<br />
type: patch<br />
options: {peer=phy-br-ex}<br />
Port patch-tun<br />
Interface patch-tun<br />
type: patch<br />
options: {peer=patch-int}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
ovs_version: "2.5.0"<br />
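Note that the compute node, like each controller, shows tunnel ports to every other overcloud node: the agents build a full VXLAN mesh, so with N nodes each node carries N-1 `vxlan-*` ports. That matches the three tunnels listed on both `overcloud-controller-0` and `overcloud-novacompute-0`:

```shell
# Expected vxlan-* port count per node in a full mesh of N overcloud nodes.
N=4                      # 3 controllers + 1 compute, as deployed above
tunnels=$((N - 1))
echo "$tunnels"          # vxlan-* ports per node
```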
<br />
===============================================<br />
Verify that Galera is in sync and run `rabbitmqctl cluster_status`<br />
===============================================<br />
<br />
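The screenshots below show the cluster state interactively; the same Galera check can be scripted from the `wsrep_%` status variables. A sketch with the values hard-coded (on a live controller they would come from `mysql -e "SHOW STATUS LIKE 'wsrep_%'"`, and RabbitMQ membership from `rabbitmqctl cluster_status`):

```shell
# Judge Galera health from wsrep status variables (values hard-coded here;
# a live check would read them via: mysql -e "SHOW STATUS LIKE 'wsrep_%'").
wsrep_cluster_size=3
wsrep_local_state_comment=Synced
wsrep_cluster_status=Primary

if [ "$wsrep_cluster_size" = "3" ] \
   && [ "$wsrep_local_state_comment" = "Synced" ] \
   && [ "$wsrep_cluster_status" = "Primary" ]; then
    galera_state="in sync"
else
    galera_state="NOT in sync"
fi
echo "Galera: $galera_state"
```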
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk3reSxtrzfqhjiaGgHhyshVafrkozGUxS6MxSzbn-Casp9VUH4Ld7WDmB_K1M46qcrjPYsbztgw8LgOoN1myHkbvXWpNUMqFCZfPxCIrx4ONrSNW_ZxAp3MuFJkWb_bgW0uMH-w/s1600/Screenshot+from+2016-07-27+16-12-56.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgk3reSxtrzfqhjiaGgHhyshVafrkozGUxS6MxSzbn-Casp9VUH4Ld7WDmB_K1M46qcrjPYsbztgw8LgOoN1myHkbvXWpNUMqFCZfPxCIrx4ONrSNW_ZxAp3MuFJkWb_bgW0uMH-w/s640/Screenshot+from+2016-07-27+16-12-56.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYt3ESargifHAo_UbiDbgiMWOjZWJXNLGRHnLcDg17W9nqnAihFYI8gmvFX1eItfA0rsfOY-3hpZHaVoDyZeLVocD3ZCXZ5FJhnKeqKPmIWL1i37Or0_STpBB2DxSmRo2Sj32T9Q/s1600/Screenshot+from+2016-07-26+12-28-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYt3ESargifHAo_UbiDbgiMWOjZWJXNLGRHnLcDg17W9nqnAihFYI8gmvFX1eItfA0rsfOY-3hpZHaVoDyZeLVocD3ZCXZ5FJhnKeqKPmIWL1i37Or0_STpBB2DxSmRo2Sj32T9Q/s640/Screenshot+from+2016-07-26+12-28-07.png" width="640" /> </a></div>
<div class="separator" style="clear: both; text-align: left;">
Swift has been set up as the Glance backend</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihc8XDTSPhdtqGLZwT8cP7FRq2Cr_UsmtLVUgYBA-iXda6u2PC7TrXVTD-JSPG65G1B7UyXp3G40W1f_Pappl-1bGoAOi3i9lFSVbs3Sp4Lnow55OlbJ4d5VXKF1n99HTYswG0yA/s1600/Screenshot+from+2016-07-26+12-36-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihc8XDTSPhdtqGLZwT8cP7FRq2Cr_UsmtLVUgYBA-iXda6u2PC7TrXVTD-JSPG65G1B7UyXp3G40W1f_Pappl-1bGoAOi3i9lFSVbs3Sp4Lnow55OlbJ4d5VXKF1n99HTYswG0yA/s640/Screenshot+from+2016-07-26+12-36-47.png" width="640" /> </a></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
References</div>
<div class="separator" style="clear: both; text-align: left;">
1. <a href="https://bugs.launchpad.net/tripleo/+bug/1593736" target="_blank">https://bugs.launchpad.net/tripleo/+bug/1593736</a></div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-85531448556635423912016-07-24T12:26:00.001-07:002016-07-24T12:42:01.120-07:00Attempt of official Mitaka TripleO HA install via instack-virt-setup<div dir="ltr" style="text-align: left;" trbidi="on">
*******************************************<br />
<div>
<div>
<div>
VIRTHOST REPO SETUP<br />
*******************************************<br />
Set up the Mitaka stable repos following [ <a href="http://mariosandreou.com/tripleo/2016/06/17/deploy-tripleo-stable-mitaka.html" target="_blank">1</a> ]<br />
<br />
sudo curl -o /etc/yum.repos.d/delorean-mitaka.repo <a href="http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo" rel="noopener" target="_blank">http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo</a><br />
sudo curl -o /etc/yum.repos.d/delorean-deps-mitaka.repo <a href="http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo" rel="noopener" target="_blank">http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo</a><br />
<br />
Follow <a href="http://docs.openstack.org/developer/tripleo-docs/environments/environments.html#virtual-environment" rel="noopener" target="_blank">http://docs.openstack.org/developer/tripleo-docs/environments/environments.html#virtual-environment</a><br />
<br />
[stack@ServerCentOS72 ~]$ env | grep NODE<br />
NODE_MEM=6000<br />
NODE_COUNT=4<br />
UNDERCLOUD_NODE_CPU=2<br />
NODE_CPU=2<br />
NODE_DIST=centos7<br />
UNDERCLOUD_NODE_MEM=7500<br />
<br />
<br />
<pre>[stack@ServerCentOS72 ~]$ instack-virt-setup</pre>
<br />
**************************************<br />
INSTACK (VM) REPO SETUP<br />
**************************************<br />
<br />
Set up the Mitaka stable repos<br />
<br />
sudo curl -o /etc/yum.repos.d/delorean-mitaka.repo <a href="http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo" rel="noopener" target="_blank">http://trunk.rdoproject.org/centos7-mitaka/current/delorean.repo</a><br />
sudo curl -o /etc/yum.repos.d/delorean-deps-mitaka.repo <a href="http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo" rel="noopener" target="_blank">http://trunk.rdoproject.org/centos7-mitaka/delorean-deps.repo</a><br />
<br />
Follow <a href="http://docs.openstack.org/developer/tripleo-docs/installation/installation.html" rel="noopener" target="_blank">http://docs.openstack.org/developer/tripleo-docs/installation/installation.html</a><br />
<br />
Then, when building images, enable the Mitaka repo as follows:<br />
<br />
export NODE_DIST=centos7<br />
export USE_DELOREAN_TRUNK=1<br />
export DELOREAN_TRUNK_REPO="<a href="http://trunk.rdoproject.org/centos7-mitaka/current/" rel="noopener" target="_blank">http://trunk.rdoproject.org/centos7-mitaka/current/</a>"<br />
export DELOREAN_REPO_FILE="delorean.repo"<br />
<br />
Follow <a href="http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#get-image" rel="noopener" target="_blank">http://docs.openstack.org/developer/tripleo-docs/basic_deployment/basic_deployment_cli.html#get-image</a><br />
<br />
************************************<br />
OVERCLOUD DEPLOYMENT<br />
************************************<br />
<br />
openstack overcloud deploy --templates --libvirt-type qemu \<br />
--control-scale 3 \<br />
--compute-scale 1 \<br />
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml \<br />
--ntp-server pool.ntp.org<br />
<br />
***************************************************<br />
HORIZON ACCESS TO OVERCLOUD [ <a href="http://www.anstack.com/blog/2016/07/02/ssh-multi-hop-tripleo.html" target="_blank">2</a> ]<br />
***************************************************<br />
<br />
[root@ServerCentOS72 ~]# su - stack<br />
Last login: Sun Jul 24 21:00:29 MSK 2016 on pts/1<br />
<span style="color: #b45f06;">[stack@ServerCentOS72 ~]$ undercloudIp=`sudo virsh domifaddr instack | grep $(tripleo get-vm-mac instack) | awk '{print $4}' | sed 's/\/.*$//'`</span><br />
[stack@ServerCentOS72 ~]$ echo $undercloudIp<br />
192.168.122.193<br />
<span style="color: #b45f06;">[stack@ServerCentOS72 ~]$ ssh -L 38080:localhost:38080 root@$undercloudIp</span><br />
<span style="color: #b45f06;">Last login: Sun Jul 24 16:26:45 2016 from 192.168.122.1</span><br />
<br />
[root@instack ~]# su - stack<br />
Last login: Sun Jul 24 16:26:53 UTC 2016 on pts/1<br />
[stack@instack ~]$ . stackrc<br />
[stack@instack ~]$ cat overcloudrc<br />
export OS_NO_CACHE=True<br />
export OS_CLOUDNAME=overcloud<br />
export OS_AUTH_URL=<a href="http://192.0.2.12:5000/v2.0" rel="noopener" target="_blank">http://192.0.2.12:5000/v2.0</a><br />
export NOVA_VERSION=1.1<br />
export COMPUTE_API_VERSION=1.1<br />
export OS_USERNAME=admin<br />
<span style="color: #b45f06;">export no_proxy=,192.0.2.12,192.0.2.12</span><br />
export OS_PASSWORD=HAUmgg6h46F6jT2TKBVuqGp8J<br />
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"<br />
export OS_TENANT_NAME=admin<br />
<br />
[stack@instack ~]$ export controllerIp=192.0.2.12<br />
<span style="color: #b45f06;">[stack@instack ~]$ echo $controllerIp</span><br />
<span style="color: #b45f06;">192.0.2.12</span><br />
<span style="color: #b45f06;">[stack@instack ~]$ ssh -L 38080:"$controllerIp":80 heat-admin@"$controllerIp"</span><br />
<span style="color: #b45f06;">Last login: Sun Jul 24 17:11:44 2016 from 192.0.2.1</span><br />
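The two hops above (workstation to undercloud, then undercloud to controller) can also be captured once in `~/.ssh/config`, so a single `ssh overcloud-horizon` opens the forward. A sketch using the addresses from this session; the host aliases are hypothetical:

```
# ~/.ssh/config sketch (host aliases are hypothetical)
Host undercloud
    HostName 192.168.122.193
    User root

Host overcloud-horizon
    HostName 192.0.2.12
    User heat-admin
    ProxyCommand ssh -W %h:%p undercloud
    LocalForward 38080 192.0.2.12:80
```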
<br />
*********************************</div>
<div>
Virtual Deployment status</div>
<div>
*********************************</div>
<div>
<br /></div>
<div>
[stack@instack ~]$ nova list<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| ID | Name | Status | Task State | Power State | Networks |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
| 7a9b7ed4-7c36-4715-b6b3-1cbb6ae7447f | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.17 |<br />
| 7edbb487-e3ed-468c-9d1b-27c4c2925ff1 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.15 |<br />
| 5cce8a90-b6d6-43cb-8419-339bd88647e2 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.16 |<br />
| c541133d-0f82-4c80-a3df-824db5f350a7 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.14 |<br />
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+<br />
[stack@instack ~]$ ssh heat-admin@192.0.2.17<br />
Last login: Sun Jul 24 18:39:57 2016 from 192.0.2.1<br />
[heat-admin@overcloud-controller-0 ~]$ sudo su -<br />
Last login: Sun Jul 24 18:25:32 UTC 2016 on pts/0<br />
[root@overcloud-controller-0 ~]# . keystonerc_admin<br />
[root@overcloud-controller-0 ~(keystone_admin)]# pcs status<br />
Cluster name: tripleo_cluster<br />
Last updated: Sun Jul 24 19:06:30 2016 Last change: Sun Jul 24 17:00:41 2016 by root via cibadmin on overcloud-controller-0<br />
Stack: corosync<br />
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum<br />
3 nodes and 123 resources configured<br />
<br />
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Full list of resources:<br />
<br />
ip-192.0.2.12 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
Clone Set: haproxy-clone [haproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: galera-master [galera]<br />
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: memcached-clone [memcached]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: rabbitmq-clone [rabbitmq]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-core-clone [openstack-core]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
ip-192.0.2.13 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1<br />
Master/Slave Set: redis-master [redis]<br />
Masters: [ overcloud-controller-0 ]<br />
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: mongod-clone [mongod]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2<br />
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-clone [openstack-heat-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-api-clone [openstack-glance-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-api-clone [openstack-nova-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: delay-clone [delay]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-server-clone [neutron-server]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: httpd-clone [httpd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Failed Actions:<br />
* galera_monitor_10000 on overcloud-controller-0 'not running' (7): call=22, status=complete, exitreason='none',<br />
last-rc-change='Sun Jul 24 18:05:27 2016', queued=0ms, exec=0ms<br />
* neutron-openvswitch-agent_monitor_60000 on overcloud-controller-0 'not running' (7): call=277, status=complete, exitreason='none',<br />
last-rc-change='Sun Jul 24 18:05:34 2016', queued=0ms, exec=0ms<br />
<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled</div>
<div>
<br /></div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOP7mUY_og8Jb8rEsrjL33A4z69rh0X3PpqQFjD4y9c75vxMz0KcL3tqPRrnazl3iYNsU4zLZMrlLa1wg21NWFcLFQGm21OTKooicqUgFnuIje618hHXwDwqGQfCkSYecCZ4e89g/s1600/Tripleo01.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOP7mUY_og8Jb8rEsrjL33A4z69rh0X3PpqQFjD4y9c75vxMz0KcL3tqPRrnazl3iYNsU4zLZMrlLa1wg21NWFcLFQGm21OTKooicqUgFnuIje618hHXwDwqGQfCkSYecCZ4e89g/s640/Tripleo01.png" width="640" /></a></div>
<div>
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmz7XSbBknXF582ud7pRVu0-wAj9p6D_zDwYVNYTq1yaijZjlHOqXx7RTDoOC9evh5NIyw5gx60lComnXsxAq2gAsGSvI_oIFvAEdDmCwi0fAkR4u2nf0137hyTaLFQWL9WRVk0Q/s1600/Screenshot+from+2016-07-24+21-54-50.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmz7XSbBknXF582ud7pRVu0-wAj9p6D_zDwYVNYTq1yaijZjlHOqXx7RTDoOC9evh5NIyw5gx60lComnXsxAq2gAsGSvI_oIFvAEdDmCwi0fAkR4u2nf0137hyTaLFQWL9WRVk0Q/s640/Screenshot+from+2016-07-24+21-54-50.png" width="640" /></a></div>
<div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQDVW4lXl_A7iVG0mHqFBn0MnOiGePGVI3vLy-jT-g8n9Ie-ANOBL576l9CLse6vYYouKsgrQsSAFZL1oioIJ8MAcEFmAFXd6QtVX8VGtnelpAonhWEefDBqhF79dbimuLYgaSIA/s1600/Screenshot+from+2016-07-24+20-39-41.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiQDVW4lXl_A7iVG0mHqFBn0MnOiGePGVI3vLy-jT-g8n9Ie-ANOBL576l9CLse6vYYouKsgrQsSAFZL1oioIJ8MAcEFmAFXd6QtVX8VGtnelpAonhWEefDBqhF79dbimuLYgaSIA/s640/Screenshot+from+2016-07-24+20-39-41.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiec876VHTxFwch2_fXfpv74WxEUUNEPhaPr0qkRTqOdOVB9mHhkLT9KO7xO6os-GMfPFfFnhuVc3xdZjzmWSHWPOQcMHHuDVu82kM7hnYMe0im3_4eiuViP3GEfZDBamSChMTRHA/s1600/Screenshot+from+2016-07-24+21-54-14.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiec876VHTxFwch2_fXfpv74WxEUUNEPhaPr0qkRTqOdOVB9mHhkLT9KO7xO6os-GMfPFfFnhuVc3xdZjzmWSHWPOQcMHHuDVu82kM7hnYMe0im3_4eiuViP3GEfZDBamSChMTRHA/s640/Screenshot+from+2016-07-24+21-54-14.png" width="640" /></a></div>
<br /></div>
</div>
</div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-22167048195549772062016-06-24T11:37:00.000-07:002016-09-11T06:04:24.612-07:00TripleO QuickStart HA Setup && Keeping undercloud persistent between cold reboots<div dir="ltr" style="text-align: left;" trbidi="on">
================ <br />
UPDATE 09/03/2016<br />
================<br />
The undercloud VM now gets created with autostart enabled at boot up.<br />
So just change the permissions and allow the services on the undercloud<br />
to start (about 5-7 minutes).<br />
<br />
Upon deployment completion:<br />
[stack@ServerTQS72 ~]$ virsh dominfo undercloud | grep -i autostart<br />
Autostart: enable<br />
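The check above just greps the `virsh dominfo` output. A small sketch of the same idea, using sample text in place of the live command (the domain name `undercloud` matches this deployment):

```shell
# Sample output standing in for `virsh dominfo undercloud` on the VIRTHOST;
# the awk line is the actual parsing step.
dominfo_output='Id:             2
Name:           undercloud
Autostart:      enable'
state=$(printf '%s\n' "$dominfo_output" | awk '/^Autostart:/ {print $2}')
echo "$state"    # enable
# If this printed "disable", autostart can be enabled with:
#   virsh autostart undercloud
```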
<br />
================ <br />
UPDATE 08/18/2016<br />
================ <br />
Make the following updates:<br />
<br />
[root@ServerTQS72 ~]# cat /etc/rc.d/rc.local<br />
#!/bin/bash<br />
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure<br />
# that this script will be executed during boot.<br />
<span style="color: #b45f06;">mkdir -p /run/user/1001<br />chown -R stack /run/user/1001<br />chgrp -R stack /run/user/1001</span><br />
touch /var/lock/subsys/local<br />
<br />
========================<br />
In stack's .bashrc<br />
========================<br />
<br />
[stack@ServerTQS72 ~]$ cat .bashrc<br />
# .bashrc<br />
<br />
# Source global definitions<br />
if [ -f /etc/bashrc ]; then<br />
. /etc/bashrc<br />
fi<br />
<br />
# Uncomment the following line if you don't like systemctl's auto-paging feature:<br />
# export SYSTEMD_PAGER=<br />
<br />
# User specific aliases and functions<br />
# BEGIN ANSIBLE MANAGED BLOCK<br />
# Make sure XDG_RUNTIME_DIR is set (used by libvirt<br />
# for creating config and sockets for qemu:///session<br />
# connections)<br />
: ${XDG_RUNTIME_DIR:=/run/user/$(id -u)}<br />
export XDG_RUNTIME_DIR<br />
<span style="color: #b45f06;">export DISPLAY=:0.0<br />export NO_AT_BRIDGE=1</span><br />
# END ANSIBLE MANAGED BLOCK<br />
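The `: ${XDG_RUNTIME_DIR:=...}` line in the managed block above is a standard parameter-expansion idiom; a quick demonstration of how it behaves:

```shell
# The ':=' expansion assigns the default only when the variable is unset
# or empty, leaving an inherited session value alone.
unset XDG_RUNTIME_DIR
: ${XDG_RUNTIME_DIR:=/run/user/$(id -u)}
echo "$XDG_RUNTIME_DIR"          # e.g. /run/user/1001 for uid 1001

XDG_RUNTIME_DIR=/tmp/custom      # an already-set value survives
: ${XDG_RUNTIME_DIR:=/run/user/$(id -u)}
echo "$XDG_RUNTIME_DIR"          # /tmp/custom
```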
<br />
===========================<br />
Reboot VIRTHOST<br />
===========================<br />
<br />
$ sudo su -<br />
# xhost +<br />
# su - stack<br />
<br />
[stack@ServerTQS72 ~]$ virt-manager --connect qemu:///session<br />
<br />
Start VM undercloud<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjPBqBKRlYVLtHBuPrPOuvNG8zNYf1A42mqCFB3P2DCP33LDiAQ2GLa2_ftjX1XY0tNR68zjCg0XgN1MIXpjpdGVCo-iKyIyGq2oehDFfbFw18cZ1BosL_878GqpuvBcB-lINtsg/s1600/Screenshot+from+2016-08-18+08-37-29.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgjPBqBKRlYVLtHBuPrPOuvNG8zNYf1A42mqCFB3P2DCP33LDiAQ2GLa2_ftjX1XY0tNR68zjCg0XgN1MIXpjpdGVCo-iKyIyGq2oehDFfbFw18cZ1BosL_878GqpuvBcB-lINtsg/s640/Screenshot+from+2016-08-18+08-37-29.png" width="640" /></a></div>
<br />
<br />
============= <br />
END UPDATE <br />
=============<br />
<br />
<br />
This post follows up <a href="http://lxer.com/module/newswire/view/230814/index.html" target="_blank">http://lxer.com/module/newswire/view/230814/index.html</a><br />
and may work as a time saver, unless the status of undercloud.qcow2 per<br />
<a href="http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/" target="_blank">http://artifacts.ci.centos.org/artifacts/rdo/images/mitaka/delorean/stable/</a><br />
requires a fresh installation to be done from scratch.<br />
So, the intent is to survive a VIRTHOST cold reboot (downtime) and keep the previous version of the undercloud VM, being able to bring it back up while avoiding a rebuild via quickstart.sh, then resume by logging into the undercloud and immediately running overcloud deployment. Proceed as follows:<br />
<br />
1. System shutdown<br />
Cleanly delete the overcloud stack:<br />
[stack@undercloud ~]$ openstack stack delete overcloud<br />
2. Log into the VIRTHOST as stack and gracefully shut down the undercloud:<br />
[stack@ServerCentOS72 ~]$ virsh shutdown undercloud<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLwaudEB9_JOx2tHBI9H2GsNhh5XlL9P9lR0sKyH6feairV66fEq_pmXuOsXsYrKs_KyTSUInFTIcsyzGEZfYuv12fAuxphLVeKfJw3EciaMgpT7ZmbygCKyFKz_KaXWOcOGzgbQ/s1600/Screenshot+from+2016-06-24+19-57-56.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLwaudEB9_JOx2tHBI9H2GsNhh5XlL9P9lR0sKyH6feairV66fEq_pmXuOsXsYrKs_KyTSUInFTIcsyzGEZfYuv12fAuxphLVeKfJw3EciaMgpT7ZmbygCKyFKz_KaXWOcOGzgbQ/s640/Screenshot+from+2016-06-24+19-57-56.png" width="640" /></a></div>
<br />
************************************** <br />
Shutdown and bring up VIRTHOST<br />
**************************************<br />
<br />
Login as root to VIRTHOST :-<br />
<br />
[boris@ServerCentOS72 ~]$ sudo su -<br />
[sudo] password for boris: <br />
Last login: Fri Jun 24 16:47:25 MSK 2016 on pts/0<br />
<br />
******************************************************************************** <br />
This is the core step: do not create /run/user/1001/libvirt as root<br />
with appropriate permissions set; only set the correct permissions<br />
on /run/user. This allows "stack" to issue `virsh list --all` and create<br />
/run/user/1001/libvirt himself. The rest then works fine.<br />
********************************************************************************<br />
<br />
[root@ServerCentOS72 ~]# <span style="color: #b45f06;">chown -R stack /run/user</span><br />
[root@ServerCentOS72 ~]# <span style="color: #b45f06;">chgrp -R stack /run/user</span><br />
<br />
[root@ServerCentOS72 ~]# ls -ld /run/user<br />
drwxr-xr-x. 3 stack stack 60 Jun 24 20:01 /run/user<br />
<br />
[root@ServerCentOS72 ~]# su - stack<br />
Last login: Fri Jun 24 16:48:09 MSK 2016 on pts/0<br />
<br />
[stack@ServerCentOS72 ~]$ virsh list --all<br />
Id Name State<br />
----------------------------------------------------<br />
- compute_0 shut off<br />
- compute_1 shut off<br />
- control_0 shut off<br />
- control_1 shut off<br />
- control_2 shut off<br />
- undercloud shut off<br />
<br />
********************** <br />
Make sure :-<br />
**********************<br />
<br />
[stack@ServerCentOS72 ~]$ ls -ld /run/user/1001/libvirt<br />
drwx------. 6 stack stack 160 Jun 24 21:38 /run/user/1001/libvirt<br />
<br />
<br />
<span style="color: #b45f06;">[stack@ServerCentOS72 ~]$ virsh start undercloud<br />Domain undercloud started</span><br />
[stack@ServerCentOS72 ~]$ virsh list --all<br />
Id Name State<br />
---------------------------------------------------------------<br />
2 undercloud running<br />
- compute_0 shut off<br />
- compute_1 shut off<br />
- control_0 shut off<br />
- control_1 shut off<br />
- control_2 shut off<br />
<br />
Wait about 5 minutes, then access the undercloud from the workstation:<br />
<br />
[boris@fedora22wks tripleo-quickstart]$<span style="color: #b45f06;"> ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud</span><br />
Warning: Permanently added '192.168.1.75' (ECDSA) to the list of known hosts.<br />
Warning: Permanently added 'undercloud' (ECDSA) to the list of known hosts.<br />
Last login: Fri Jun 24 15:34:40 2016 from gateway<br />
<br />
[stack@undercloud ~]$ ls -l<br />
total 1640244<br />
-rw-rw-r--. 1 stack stack 13287936 Jun 24 13:10 cirros.img<br />
-rw-rw-r--. 1 stack stack 3740163 Jun 24 13:10 cirros.initramfs<br />
-rw-rw-r--. 1 stack stack 4979632 Jun 24 13:10 cirros.kernel<br />
-rw-rw-r--. 1 1001 1001 21769 Jun 24 11:56 instackenv.json<br />
-rw-r--r--. 1 root root 385824684 Jun 24 03:28 ironic-python-agent.initramfs<br />
-rwxr-xr-x. 1 root root 5158704 Jun 24 03:28 ironic-python-agent.kernel<br />
-rwxr-xr-x. 1 stack stack 487 Jun 24 12:17 network-environment.yaml<br />
-rwxr-xr-x. 1 stack stack 792 Jun 24 12:17 overcloud-deploy-post.sh<br />
-rwxr-xr-x. 1 stack stack 2284 Jun 24 12:17 overcloud-deploy.sh<br />
-rw-rw-r--. 1 stack stack 4324 Jun 24 13:50 overcloud-env.json<br />
-rw-r--r--. 1 root root 36478203 Jun 24 03:28 overcloud-full.initrd<br />
-rw-r--r--. 1 root root 1224070144 Jun 24 03:29 overcloud-full.qcow2<br />
-rwxr-xr-x. 1 root root 5158704 Jun 24 03:29 overcloud-full.vmlinuz<br />
-rw-rw-r--. 1 stack stack 389 Jun 24 14:28 overcloudrc<br />
-rwxr-xr-x. 1 stack stack 3374 Jun 24 12:17 overcloud-validate.sh<br />
-rwxr-xr-x. 1 stack stack 284 Jun 24 12:17 run-tempest.sh<br />
-rw-r--r--. 1 stack stack 161 Jun 24 12:17 skipfile<br />
-rw-------. 1 stack stack 287 Jun 24 12:16 stackrc<br />
-rw-rw-r--. 1 stack stack 232 Jun 24 14:28 tempest-deployer-input.conf<br />
drwxrwxr-x. 9 stack stack 4096 Jun 24 15:23 tripleo-ci<br />
-rw-rw-r--. 1 stack stack 1123 Jun 24 14:28 tripleo-overcloud-passwords<br />
-rw-------. 1 stack stack 6559 Jun 24 11:59 undercloud.conf<br />
-rw-rw-r--. 1 stack stack 782405 Jun 24 12:16 undercloud_install.log<br />
-rwxr-xr-x. 1 stack stack 83 Jun 24 12:00 undercloud-install.sh<br />
-rw-rw-r--. 1 stack stack 1579 Jun 24 12:00 undercloud-passwords.conf<br />
-rw-rw-r--. 1 stack stack 7699 Jun 24 12:17 undercloud_post_install.log<br />
-rwxr-xr-x. 1 stack stack 2780 Jun 24 12:00 undercloud-post-install.sh<br />
<br />
<span style="color: #b45f06;">[stack@undercloud ~]$ ./overcloud-deploy.sh</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3qWKnjv-Mv14LNHtwGRY2NwTAKREUvsU6l5KzL-coYZ8T-i03JO1aCX0kHeBhJsYzmYbYE-vX8oADSqlEj96CZ7oxA0CaQrZHRA3opVihAM-eP0MXicLhaVWz1T8IxRfrP78VFQ/s1600/Screenshot+from+2016-06-24+21-47-09.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg3qWKnjv-Mv14LNHtwGRY2NwTAKREUvsU6l5KzL-coYZ8T-i03JO1aCX0kHeBhJsYzmYbYE-vX8oADSqlEj96CZ7oxA0CaQrZHRA3opVihAM-eP0MXicLhaVWz1T8IxRfrP78VFQ/s640/Screenshot+from+2016-06-24+21-47-09.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAKKaj2uwVinC3zqRZQR-QLAdu0aRydJbxjmGgUyd0ixJFsvgEHiT2JsZskA6M_NnhZxnYdqXSTv_lD7uCnS95hgVqqbHJd-T8EpmwsU3pvfBQ8_lVmAdjBZdDurjNohqcf3ni3A/s1600/Screenshot+from+2016-06-24+21-47-30.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjAKKaj2uwVinC3zqRZQR-QLAdu0aRydJbxjmGgUyd0ixJFsvgEHiT2JsZskA6M_NnhZxnYdqXSTv_lD7uCnS95hgVqqbHJd-T8EpmwsU3pvfBQ8_lVmAdjBZdDurjNohqcf3ni3A/s640/Screenshot+from+2016-06-24+21-47-30.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFkh6e13dHV0Eo_UCYDDSvsRfEsodAwep0SZI0xlgavVV9-Jw9HYgKcr85ABUXNAdm8ARNMypVH1zj2HkP6WAqx9j3bSIc5_JRJwtLisQtLc8ZmRIW2L3xm5cu9Nzic9lGy_0aJA/s1600/Screenshot+from+2016-06-24+21-48-25.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFkh6e13dHV0Eo_UCYDDSvsRfEsodAwep0SZI0xlgavVV9-Jw9HYgKcr85ABUXNAdm8ARNMypVH1zj2HkP6WAqx9j3bSIc5_JRJwtLisQtLc8ZmRIW2L3xm5cu9Nzic9lGy_0aJA/s640/Screenshot+from+2016-06-24+21-48-25.png" width="640" /></a></div>
<br />
Fourth redeployment based on the same undercloud VM. The starting point<br />
of the ctlplane DHCP pool is obviously increasing with each run.<br />
<br />
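The current ctlplane pool boundaries can be read from the undercloud's Neutron subnet (Mitaka-era client, run as stack with stackrc sourced). The sketch below parses a sample `allocation_pools` row in place of live output:

```shell
# A sample allocation_pools row standing in for real output of
# `neutron subnet-show <ctlplane-subnet-id>`; the grep/cut pipeline
# extracts the pool's current starting address.
sample='| allocation_pools | {"start": "192.0.2.5", "end": "192.0.2.30"} |'
start=$(printf '%s\n' "$sample" | grep -o '"start": "[0-9.]*"' | cut -d'"' -f4)
echo "$start"    # 192.0.2.5
```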
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixesC4k0pTwiUz4jzRWpJ9hEs7FR8jA8IjRQLBlXZGz3LrAu32ax6AP-guezJOgahDAfoHbcrBQJRY0AWc7BbFlK6vl9g3pHh-Wrd0OWcH75qtyKiFbnKZxkBlmYbJ5lA0AsDzCg/s1600/Screenshot+from+2016-06-25+12-35-59.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixesC4k0pTwiUz4jzRWpJ9hEs7FR8jA8IjRQLBlXZGz3LrAu32ax6AP-guezJOgahDAfoHbcrBQJRY0AWc7BbFlK6vl9g3pHh-Wrd0OWcH75qtyKiFbnKZxkBlmYbJ5lA0AsDzCg/s640/Screenshot+from+2016-06-25+12-35-59.png" width="640" /></a></div>
<br />
<pre> Libvirt pool && volumes configuration built by QuickStart
</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0ItQ0Jnv66tgDGWy4zUH4UAO6HvS_A5UFJTBgPrSWvxuIQ1DV5FpVCwmJxyqDi7djAHLHe3tODdHpftQXgrx56pp0f1Hc_HX-G_FnOaW2WSc81S377J8fR6_2aFAZq2MSjZkbFg/s1600/Screenshot+from+2016-06-25+14-39-06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0ItQ0Jnv66tgDGWy4zUH4UAO6HvS_A5UFJTBgPrSWvxuIQ1DV5FpVCwmJxyqDi7djAHLHe3tODdHpftQXgrx56pp0f1Hc_HX-G_FnOaW2WSc81S377J8fR6_2aFAZq2MSjZkbFg/s640/Screenshot+from+2016-06-25+14-39-06.png" width="640" /></a></div>
<br />
<pre>[stack@ServerCentOS72 ~]$ virsh pool-dumpxml oooq_pool
<pool type='dir'>
<name>oooq_pool</name>
<uuid>dcf7f52b-e7f7-46aa-aa67-591afe598804</uuid>
<capacity unit='bytes'>257572208640</capacity>
<allocation unit='bytes'>85467271168</allocation>
<available unit='bytes'>172104937472</available>
<source>
</source>
<target>
<path>/home/stack/.quickstart/pool</path>
<permissions>
<mode>0775</mode>
<owner>1001</owner>
<group>1001</group>
<label>unconfined_u:object_r:user_home_t:s0</label>
</permissions>
</target>
</pool></pre>
<pre> </pre>
<pre>***************************************************************************
A slightly different way to manage: log in as stack and invoke virt-manager
via `virt-manager --connect qemu:///session` once /run/user already has
the correct permissions.
***************************************************************************
</pre>
<pre>$ sudo su -
# chown -R stack /run/user
# chgrp -R stack /run/user
^D
</pre>
<pre>[stack@ServerCentOS72 ~]$ virsh list --all
Id Name State
----------------------------------------------------
- compute_0 shut off
- compute_1 shut off
- control_0 shut off
- control_1 shut off
- control_2 shut off
- undercloud shut off
[stack@ServerCentOS72 ~]$ virt-manager --connect qemu:///session
[stack@ServerCentOS72 ~]$ virsh list --all
Id Name State
----------------------------------------------------
2 undercloud running
- compute_0 shut off
- compute_1 shut off
- control_0 shut off
- control_1 shut off
- control_2 shut off
</pre>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXuputGd40nLXRf6ffdGqTZ7AM4CZxtgnUEkWjtRRLynlfITctfyCl-YVvQvWowv_yfe1ZJfl8axt9P37PdyhX7Qlo6YEwFUnH10TtMUe2qgQURO1sUDdPG5Rc1f6eK7J_D207pg/s1600/Screenshot+from+2016-06-25+16-26-55.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgXuputGd40nLXRf6ffdGqTZ7AM4CZxtgnUEkWjtRRLynlfITctfyCl-YVvQvWowv_yfe1ZJfl8axt9P37PdyhX7Qlo6YEwFUnH10TtMUe2qgQURO1sUDdPG5Rc1f6eK7J_D207pg/s640/Screenshot+from+2016-06-25+16-26-55.png" width="640" /></a></div>
<br />
To start virt-manager without warnings:<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-FT0woncciaCh9Mpf3_RG5SknKSlUTwQCb8sM1wEFB2tPmkeHW1pDSWILQbFwRaBO56cNO-5xKkNsnv6no-IIdXgzLZTmrxYz5yU1xcagH4cvO9qBRcZgsZsHOZo1uhU4h2lA_Q/s1600/Screenshot+from+2016-06-28+21-34-11.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-FT0woncciaCh9Mpf3_RG5SknKSlUTwQCb8sM1wEFB2tPmkeHW1pDSWILQbFwRaBO56cNO-5xKkNsnv6no-IIdXgzLZTmrxYz5yU1xcagH4cvO9qBRcZgsZsHOZo1uhU4h2lA_Q/s640/Screenshot+from+2016-06-28+21-34-11.png" width="640" /></a></div>
<br />
<br />
From the workstation, connect to the undercloud:
<br />
<pre>[boris@fedora22wks tripleo-quickstart]$ ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud
[stack@undercloud ~]$ ./overcloud-deploy.sh
In several minutes you will see</pre>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikKOO4X8njw_PiKE0lae2DQfBc2odHRDU2Ysa3oQRGTmfbePNfjo3XjkwAUehNJ3byhmmHQc64fCG-oIh1HXNU0Qn_jDKoYGKtXpM2OqbXQFDWQ_QO8fkfkewAZsl7mNndr3LXcg/s1600/Screenshot+from+2016-06-25+17-07-56.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikKOO4X8njw_PiKE0lae2DQfBc2odHRDU2Ysa3oQRGTmfbePNfjo3XjkwAUehNJ3byhmmHQc64fCG-oIh1HXNU0Qn_jDKoYGKtXpM2OqbXQFDWQ_QO8fkfkewAZsl7mNndr3LXcg/s640/Screenshot+from+2016-06-25+17-07-56.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyT70GgAa9zq1CkvSGz2NAuBL_Nzmsz4AZxhDatmWC44kKWh8Nj1NVQn2Ew2AoHXorPh4NpeNQXsKRTpPvufa65EnP1yp3_EFWTeVeutItwuSASB9ifTgGM0_vI9JVE3664_5Y-Q/s1600/Screenshot+from+2016-06-25+17-50-32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgyT70GgAa9zq1CkvSGz2NAuBL_Nzmsz4AZxhDatmWC44kKWh8Nj1NVQn2Ew2AoHXorPh4NpeNQXsKRTpPvufa65EnP1yp3_EFWTeVeutItwuSASB9ifTgGM0_vI9JVE3664_5Y-Q/s640/Screenshot+from+2016-06-25+17-50-32.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJdbqGSGvX4vP_-8n8SKI2cL_lQ-kAwujBvaoqWt3uxzGuuP2G-eKJN6I2tkNwtcR1v7E7hxRJ2sf41t98hu4hR-cyO_NWtAYiVNenwdOcTGScTFSTQ97tQRjND48XUnOsTYpWrg/s1600/Screenshot+from+2016-06-25+20-02-06.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgJdbqGSGvX4vP_-8n8SKI2cL_lQ-kAwujBvaoqWt3uxzGuuP2G-eKJN6I2tkNwtcR1v7E7hxRJ2sf41t98hu4hR-cyO_NWtAYiVNenwdOcTGScTFSTQ97tQRjND48XUnOsTYpWrg/s640/Screenshot+from+2016-06-25+20-02-06.png" width="640" /></a></div>
<br />
[stack@undercloud ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 40754e8a-461e-4328-b0c4-6740c71e9a0d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.27 |
| df272524-a0bd-4ed7-b95c-92ac779c0b96 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.26 |
| 22802ff4-c472-4500-94d7-415c429073ab | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.29 |
| e79a8967-5c81-4ce1-9037-4e07b298d779 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.25 |
| 27a7c6ac-a480-4945-b4d5-72e32b3c1886 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.28 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+</pre>
<br />
[stack@undercloud ~]$ ssh heat-admin@192.0.2.27<br />
Last login: Sat Jun 25 09:35:35 2016 from 192.0.2.1<br />
[heat-admin@overcloud-controller-0 ~]$ sudo su -<br />
Last login: Sat Jun 25 09:54:06 UTC 2016 on pts/0<br />
[root@overcloud-controller-0 ~]# . keystonerc_admin<br />
[root@overcloud-controller-0 ~(keystone_admin)]# pcs status<br />
Cluster name: tripleo_cluster<br />
Last updated: Sat Jun 25 10:04:32 2016 Last change: Sat Jun 25 09:21:21 2016 by root via cibadmin on overcloud-controller-0<br />
Stack: corosync<br />
<pre>Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum
3 nodes and 127 resources configured
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Full list of resources:
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.3.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: memcached-clone [memcached]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
ip-192.0.2.24 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
ip-10.0.0.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.2.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-core-clone [openstack-core]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-1 ]
Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
Clone Set: mongod-clone [mongod]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-clone [openstack-heat-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-glance-api-clone [openstack-glance-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-api-clone [openstack-nova-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: delay-clone [delay]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-server-clone [neutron-server]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: httpd-clone [httpd]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Failed Actions:
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=92, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:16:45 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=355, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 10:00:10 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=313, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:20:51 2016', queued=0ms, exec=2101ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=328, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:23:05 2016', queued=0ms, exec=2121ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=97, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:16:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=365, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 10:00:12 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=324, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:22:32 2016', queued=0ms, exec=2237ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=342, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:23:32 2016', queued=0ms, exec=2200ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=94, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:16:47 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=353, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 10:00:08 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=318, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:22:39 2016', queued=0ms, exec=2113ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=322, status=complete, exitreason='none',
last-rc-change='Sat Jun 25 09:22:48 2016', queued=0ms, exec=2123ms</pre>
<br />
<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-43401198819666024022016-06-18T08:53:00.000-07:002016-07-01T07:30:22.684-07:00RDO Triple0 QuickStart HA Setup - Work in progress<div dir="ltr" style="text-align: left;" trbidi="on">
This post follows up on <a href="https://www.linux.com/blog/rdo-triple0-quickstart-ha-setup-intel-core-i7-4790-desktop" target="_blank">https://www.linux.com/blog/rdo-triple0-quickstart-ha-setup-intel-core-i7-4790-desktop</a> <br />
In the meantime, undercloud-install and undercloud-post-install (openstack undercloud install, openstack overcloud image upload) are supposed to be performed
during the original run of `bash quickstart.sh --config /path-to/ha.yml $VIRTHOST`. Neutron network deployment on the undercloud and the HA servers' configuration have been significantly
rebuilt during the last weeks. I believe the current design is close to the one proposed in <a href="https://remote-lab.net/rdo-manager-ha-openstack-deployment" target="_blank">https://remote-lab.net/rdo-manager-ha-openstack-deployment</a><br />
However, an attempt to reproduce <a href="http://docs.openstack.org/developer/tripleo-docs/installation/installation.html" target="_blank">http://docs.openstack.org/developer/tripleo-docs/installation/installation.html</a><br />
results in a hang on `openstack undercloud install` when it attempts to start<br />
openstack-nova-compute on the undercloud. Nova-compute.log reports a failure<br />
to connect to 127.0.0.1:5672, while verification via `netstat -antp | grep 5672` shows<br />
port 5672 bound only to 192.0.2.1 (the ctlplane IP address).<br />
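The port-binding check above can be automated. Below is a minimal sketch, assuming plain `netstat -antp`-style output (the sample lines are hypothetical, not captured from the undercloud), that reports which local addresses a given TCP port is listening on:

```python
# Parse netstat/ss-style output and report which local addresses a
# given TCP port is bound to. The input format is an assumption
# modeled on `netstat -antp` output.
def bound_addresses(netstat_output, port):
    addrs = []
    for line in netstat_output.splitlines():
        fields = line.split()
        # Keep only TCP sockets in LISTEN state.
        if len(fields) < 6 or fields[0] != "tcp" or fields[5] != "LISTEN":
            continue
        local = fields[3]                    # e.g. "192.0.2.1:5672"
        host, _, p = local.rpartition(":")
        if p == str(port):
            addrs.append(host)
    return addrs

# Hypothetical sample mimicking the situation described above:
# rabbitmq bound only to the ctlplane IP, not to 127.0.0.1.
sample = """\
tcp 0 0 192.0.2.1:5672 0.0.0.0:* LISTEN 1234/beam.smp
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 987/sshd
"""
print(bound_addresses(sample, 5672))   # → ['192.0.2.1']
```

If 127.0.0.1 is absent from the result, any service configured to reach AMQP via localhost will fail exactly as nova-compute does here.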
<br />
See also <a href="https://www.redhat.com/archives/rdo-list/2016-March/msg00171.html" target="_blank">https://www.redhat.com/archives/rdo-list/2016-March/msg00171.html</a><br />
Quoting (the complaints are not mine):<br />
<span style="color: #b45f06;">By the way, I'd love to see and help to have an complete installation
guide for TripleO powered by RDO on the RDO site (the instack virt setup
without quickstart . . . . </span><br />
<br />
*****************************<br />
Start on workstation :-<br />
*****************************<br />
$ git clone https://github.com/openstack/tripleo-quickstart<br />
$ cd tripleo-quickstart<br />
$ sudo bash quickstart.sh --install-deps<br />
$ sudo yum -y install redhat-rpm-config<br />
$ export VIRTHOST=192.168.1.75 #put your own IP here<br />
$ ssh-keygen<br />
$ ssh-copy-id root@$VIRTHOST<br />
$ ssh root@$VIRTHOST uname -a # should log in without a password prompt<br />
<br />
<span style="color: #b45f06;">######################</span><br />
<span style="color: #b45f06;"># Template code</span><br />
<span style="color: #b45f06;">######################</span><br />
compute_memory: 6144<br />
compute_vcpu: 1<br />
<br />
undercloud_memory: 8192<br />
<br />
# Giving the undercloud additional CPUs can greatly improve heat's<br />
# performance (and result in a shorter deploy time).<br />
undercloud_vcpu: 4<br />
<br />
# Create three controller nodes and two compute nodes.<br />
overcloud_nodes:<br />
- name: control_0<br />
flavor: control<br />
- name: control_1<br />
flavor: control<br />
- name: control_2<br />
flavor: control<br />
<br />
- name: compute_0<br />
flavor: compute<br />
- name: compute_1<br />
flavor: compute<br />
<br />
# We don't need introspection in a virtual environment (because we are<br />
# creating all the "hardware" ourselves, we already know the necessary<br />
# information).<br />
introspect: false<br />
<br />
# Tell tripleo about our environment.<br />
<span style="color: #b45f06;">network_isolation: true</span><br />
<span style="color: #b45f06;">extra_args: >-</span><br />
<span style="color: #b45f06;"> --control-scale 3 --compute-scale 2 --neutron-network-type vxlan</span><br />
<span style="color: #b45f06;"> --neutron-tunnel-types vxlan</span><br />
<span style="color: #b45f06;"> -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml</span><br />
<span style="color: #b45f06;"> --ntp-server pool.ntp.org</span><br />
deploy_timeout: 75<br />
tempest: false<br />
pingtest: true<br />
<br />
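The node layout in the template above must agree with the `--control-scale 3 --compute-scale 2` arguments passed in extra_args. A minimal sketch of that sanity check, with the overcloud_nodes list mirrored as a Python literal so no YAML parser is required:

```python
from collections import Counter

# Mirror of the overcloud_nodes list from the ha.yml template above,
# embedded as a Python literal.
overcloud_nodes = [
    {"name": "control_0", "flavor": "control"},
    {"name": "control_1", "flavor": "control"},
    {"name": "control_2", "flavor": "control"},
    {"name": "compute_0", "flavor": "compute"},
    {"name": "compute_1", "flavor": "compute"},
]

counts = Counter(node["flavor"] for node in overcloud_nodes)
# These counts must match --control-scale 3 --compute-scale 2.
assert counts["control"] == 3 and counts["compute"] == 2
print(dict(counts))   # → {'control': 3, 'compute': 2}
```

A mismatch between the node list and the scale arguments is an easy way to make the overcloud deploy fail late, so catching it before running quickstart.sh saves a long iteration.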
***********************************************<br />
Then run from the tripleo-quickstart directory:<br />
***********************************************<br />
$ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST<br />
<br />
During this run, the most important milestone is reaching this point on VIRTHOST:<br />
<br />
<pre>[root@ServerCentOS72 ~]# cd /var/cache/tripleo-quickstart/images
[root@ServerCentOS72 images]# ls -l
total 2638232
-rw-rw-r--. 1 stack stack 2701548544 Jun 17 19:25 83e62624dd7bd637dada343bbf4fe8f1.qcow2
lrwxrwxrwx. 1 stack stack 75 Jun 17 19:25 latest-undercloud.qcow2 -> /var/cache/tripleo-quickstart/images/83e62624dd7bd637dada343bbf4fe8f1.qcow2</pre>
<br />
Saturday 18 June 2016 12:07:05 +0300 (0:00:00.124) 0:26:21.276 ********* <br />
<pre>===============================================================================
tripleo/undercloud : Install the undercloud -------------------------- 1155.95s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/install-undercloud.yml:1
setup/undercloud : Get undercloud vm ip address ------------------------ 81.26s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:173
setup/undercloud : Resize undercloud image (call virt-resize) ---------- 76.39s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:122
tripleo/undercloud : Prepare the undercloud for deploy ----------------- 70.15s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/post-install.yml:27
setup/undercloud : Upload undercloud volume to storage pool ------------ 53.20s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:142
setup/undercloud : Copy instackenv.json to appliance ------------------- 35.25s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:53
setup/undercloud : Get qcow2 image from cache -------------------------- 32.77s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:144
setup/undercloud : Inject undercloud ssh public key to appliance -------- 7.07s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:72
setup ------------------------------------------------------------------- 6.68s
None --------------------------------------------------------------------------
setup/undercloud : Perform selinux relabel on undercloud image ---------- 3.47s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:94
environment/teardown : Check if libvirt is available -------------------- 1.99s
/home/boris/tripleo-quickstart/roles/environment/teardown/tasks/main.yml:8 ----
setup ------------------------------------------------------------------- 1.92s
/home/boris/.quickstart/playbooks/provision.yml:29 ----------------------------
setup ------------------------------------------------------------------- 1.90s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.81s
None --------------------------------------------------------------------------
parts/libvirt : Install packages for libvirt ---------------------------- 1.78s
/home/boris/tripleo-quickstart/roles/parts/libvirt/tasks/main.yml:5 -----------
setup/overcloud : Create overcloud vm storage --------------------------- 1.57s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:55
setup/overcloud : Define overcloud vms ---------------------------------- 1.48s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:67
provision/teardown : Remove non-root user account ----------------------- 1.41s
/home/boris/tripleo-quickstart/roles/provision/teardown/tasks/main.yml:47 -----
provision/teardown : Wait for processes to exit ------------------------- 1.41s
/home/boris/tripleo-quickstart/roles/provision/teardown/tasks/main.yml:27 -----
environment/teardown : Stop libvirt networks ---------------------------- 1.35s
/home/boris/tripleo-quickstart/roles/environment/teardown/tasks/main.yml:29 ---</pre>
+ set +x<br />
##################################<br />
Virtual Environment Setup Complete<br />
##################################<br />
<br />
Access the undercloud by:<br />
<br />
ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud<br />
<br />
There are scripts in the home directory to continue the deploy:<br />
<br />
<span style="color: #b45f06;">overcloud-deploy.sh will deploy the overcloud</span><br />
<br />
The detailed syntax of `openstack overcloud deploy --templates ...` is<br />
captured by the snapshot below; compare with <a href="https://remote-lab.net/rdo-manager-ha-openstack-deployment" target="_blank">https://remote-lab.net/rdo-manager-ha-openstack-deployment</a><br />
<br />
$ openstack overcloud deploy --control-scale 3 --compute-scale 2 \<br />
--libvirt-type qemu --ntp-server pool.ntp.org --templates ~/the-cloud/ \<br />
-e ~/the-cloud/environments/puppet-pacemaker.yaml \<br />
-e ~/the-cloud/environments/network-isolation.yaml \<br />
-e ~/the-cloud/environments/net-single-nic-with-vlans.yaml \<br />
-e ~/the-cloud/environments/network-environment.yaml<br />
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTq2oaEM8i4Wt9-ucYWwZZ-fI0TMg17NzK5Q6EMmIFUfJQNn8-sxpJuXYkkZJCG5MYLlIodZtOuYzi4vj-RjvVErMtMvx8gLbFgqiA_VNBZlTWegTIIMOX-bcnoyo-FR2lgN-Wyw/s1600/Screenshot+from+2016-06-19+14-29-39.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiTq2oaEM8i4Wt9-ucYWwZZ-fI0TMg17NzK5Q6EMmIFUfJQNn8-sxpJuXYkkZJCG5MYLlIodZtOuYzi4vj-RjvVErMtMvx8gLbFgqiA_VNBZlTWegTIIMOX-bcnoyo-FR2lgN-Wyw/s640/Screenshot+from+2016-06-19+14-29-39.png" width="640" /></a></div>
<br />
<span style="color: #b45f06;"> overcloud-deploy-post.sh will do any post-deploy configuration</span><br />
<span style="color: #b45f06;"> overcloud-validate.sh will run post-deploy validation</span><br />
<br />
Alternatively, you can ignore these scripts and follow the upstream docs,<br />
starting from the overcloud deploy section:<br />
<br />
http://ow.ly/1Vc1301iBlb<br />
<br />
Then run the 3 scripts mentioned above:<br />
<br />
[stack@undercloud ~]$ . stackrc<br />
[stack@undercloud ~]$ heat stack-list<br />
<pre>+--------------------------------------+------------+-----------------+---------------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 356243b1-a071-45c8-8083-85b9a12532c6 | overcloud | CREATE_COMPLETE | 2016-06-18T09:09:40 | None |
+--------------------------------------+------------+-----------------+---------------------+--------------+</pre>
<br />
[stack@undercloud ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.7 |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.8 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+</pre>
<br />
[stack@undercloud ~]$ neutron net-list<br />
<pre>+--------------------------------------+--------------+----------------------------------------+
| id | name | subnets |
+--------------------------------------+--------------+----------------------------------------+
| cde382ae-a7fa-4ebb-bbdc-9e2af9c0df83 | external | 42fac214-7177-4b4f-8778-105015ed30da |
| | | 10.0.0.0/24 |
| 5fc97bca-fa67-4ede-b4d3-8234c0ace5e5 | storage_mgmt | 719f9a19-2f1d-4eed-914a-430468086f10 |
| | | 172.16.3.0/24 |
| 4236d358-b4cd-4fb9-a337-f8a421bb13cd | tenant | d6f1e772-c0a1-4869-a9bc-b551faf5be8e |
| | | 172.16.0.0/24 |
| a4155b70-a4d8-41bf-bbe6-a5f4e248c5ad | ctlplane | 199a8e99-d9c7-43f2-8ccd-6a59b8424362 |
| | | 192.0.2.0/24 |
| fae53fb0-c5da-427f-b473-bfaa0ab21877 | internal_api | 5f2ff369-1000-4361-8131-b0ae69821b9f |
| | | 172.16.2.0/24 |
| 41862220-b9e6-4000-8341-9fbdb34b47f5 | storage | d0cf1cac-f841-41dd-923d-47d164c07d0f |
| | | 172.16.1.0/24 |
+--------------------------------------+--------------+----------------------------------------+</pre>
<br />
[stack@undercloud ~]$ cat overcloudrc<br />
export OS_NO_CACHE=True<br />
export OS_CLOUDNAME=overcloud<br />
export OS_AUTH_URL=http://10.0.0.4:5000/v2.0<br />
export NOVA_VERSION=1.1<br />
export COMPUTE_API_VERSION=1.1<br />
export OS_USERNAME=admin<br />
export no_proxy=,10.0.0.4,192.0.2.6<br />
export OS_PASSWORD=gdjYmYMdB6aWX8PjBUWdCHkem<br />
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"<br />
export OS_TENANT_NAME=admin<br />
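Note that OS_AUTH_URL in overcloudrc points at 10.0.0.4, the pacemaker-managed VIP on the external network (visible as ip-10.0.0.4 in the pcs status output further down), not at any individual controller. A minimal sketch of parsing such an rc file into a dict (the sample is a hypothetical trimmed copy of the overcloudrc shown above):

```python
# Parse `export KEY=value` lines of an overcloudrc-style file
# into a plain dict.
def parse_rc(text):
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("export "):
            continue
        key, _, value = line[len("export "):].partition("=")
        env[key] = value.strip('"')
    return env

# Hypothetical trimmed copy of the overcloudrc shown above.
sample = """\
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://10.0.0.4:5000/v2.0
export OS_USERNAME=admin
"""
env = parse_rc(sample)
print(env["OS_AUTH_URL"])   # → http://10.0.0.4:5000/v2.0
```

Because the auth URL targets the HA VIP, the same credentials keep working when individual controllers are taken down, which is the point of the pacemaker setup below.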
[stack@undercloud ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| dbb233ab-9108-4a22-b0dd-44c6ef9a481a | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.11 |
| 1a91083e-e1ba-43c3-8ad2-78500f6b3ecb | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.7 |
| 0b3f6ec8-0a13-4f40-b9e3-4557f1b8c7a3 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| 97a8a546-72a0-4431-8065-c1f81103ee25 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 |
| e87a79db-75f8-437f-8ed7-f29aacfe7339 | overcloud-novacompute-1 | ACTIVE | - | Running | ctlplane=192.0.2.8 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihTLDrdNaFfJ66duop1BPWKtwa3ia8iDK1Rz0ax9hJBpruIO8wQ_hWMrF8FZD6MtivVJ4MdqX2GZnDhY7Z1fo1DwoTBa1MKtasf9ADqZ07ra1YcNkFdM_KfiEsaqr2A403Yi8TWQ/s1600/Screenshot+from+2016-06-18+19-24-17.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihTLDrdNaFfJ66duop1BPWKtwa3ia8iDK1Rz0ax9hJBpruIO8wQ_hWMrF8FZD6MtivVJ4MdqX2GZnDhY7Z1fo1DwoTBa1MKtasf9ADqZ07ra1YcNkFdM_KfiEsaqr2A403Yi8TWQ/s640/Screenshot+from+2016-06-18+19-24-17.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBItFmFHCqPGlCc3uPIA_N89jnV4IAVDs7V-JmklX95cPmWeajxsLQ2h5Fb7t5SHiaFwIjNLt1_nrfGgpJO3aBzPT00hcjW4rK4j-rJD-uBcFKj9rwMwtYUrI3dKn8BCKppnIfUA/s1600/Screenshot+from+2016-06-18+19-26-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBItFmFHCqPGlCc3uPIA_N89jnV4IAVDs7V-JmklX95cPmWeajxsLQ2h5Fb7t5SHiaFwIjNLt1_nrfGgpJO3aBzPT00hcjW4rK4j-rJD-uBcFKj9rwMwtYUrI3dKn8BCKppnIfUA/s640/Screenshot+from+2016-06-18+19-26-03.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2JGw1yMLnhshK7_AAugcfa4-Xf3skFqyKQzs3YuWLT3-FYWjQ5Avf8oaNqLzr-ui1zmlIByWSVTFGdwTeGBNrat4dMosvnUZFWHUIit5tTXnUgwS8peDYrQVHaJn9Yos0GRXSgA/s1600/Screenshot+from+2016-06-18+20-17-52.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2JGw1yMLnhshK7_AAugcfa4-Xf3skFqyKQzs3YuWLT3-FYWjQ5Avf8oaNqLzr-ui1zmlIByWSVTFGdwTeGBNrat4dMosvnUZFWHUIit5tTXnUgwS8peDYrQVHaJn9Yos0GRXSgA/s640/Screenshot+from+2016-06-18+20-17-52.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLUS9bZkXow3STD8fi59S05RK81M8DvLZFMqH67_ztw1kCvGN3Rv3XBGS1AL6sUx_mVc6BehDp9z_xiKcV3wrjnFz7FYm-hRhn-WQjqNsCREUa25lcm7UHwGLyN6LHzgp0VKr43Q/s1600/Screenshot+from+2016-07-01+17-27-55.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgLUS9bZkXow3STD8fi59S05RK81M8DvLZFMqH67_ztw1kCvGN3Rv3XBGS1AL6sUx_mVc6BehDp9z_xiKcV3wrjnFz7FYm-hRhn-WQjqNsCREUa25lcm7UHwGLyN6LHzgp0VKr43Q/s640/Screenshot+from+2016-07-01+17-27-55.png" width="640" /></a></div>
<br />
<br />
[stack@undercloud ~]$ ssh heat-admin@192.0.2.11<br />
The authenticity of host '192.0.2.11 (192.0.2.11)' can't be established.<br />
ECDSA key fingerprint is 74:99:da:b1:c8:ac:58:e6:65:c1:51:45:64:e4:e9:ed.<br />
Are you sure you want to continue connecting (yes/no)? yes<br />
Warning: Permanently added '192.0.2.11' (ECDSA) to the list of known hosts.<br />
Last login: Sat Jun 18 09:52:37 2016 from 192.0.2.1<br />
[heat-admin@overcloud-controller-0 ~]$ sudo su -<br />
[root@overcloud-controller-0 ~]# vi keystonerc_admin<br />
[root@overcloud-controller-0 ~]# . keystonerc_admin<br />
[root@overcloud-controller-0 ~(keystone_admin)]# psc status<br />
-bash: psc: command not found<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ~(keystone_admin)]# pcs status</span><br />
Cluster name: tripleo_cluster<br />
Last updated: Sat Jun 18 10:01:58 2016 Last change: Sat Jun 18 09:49:22 2016 by root via cibadmin on overcloud-controller-0<br />
Stack: corosync<br />
Current DC: overcloud-controller-1 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum<br />
3 nodes and 127 resources configured<br />
<br />
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Full list of resources:
<br />
<pre> ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.3.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: haproxy-clone [haproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: galera-master [galera]
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: memcached-clone [memcached]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
ip-10.0.0.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0
ip-172.16.2.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2
Clone Set: rabbitmq-clone [rabbitmq]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-core-clone [openstack-core]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Master/Slave Set: redis-master [redis]
Masters: [ overcloud-controller-1 ]
Slaves: [ overcloud-controller-0 overcloud-controller-2 ]
Clone Set: mongod-clone [mongod]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-clone [openstack-heat-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-glance-api-clone [openstack-glance-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-api-clone [openstack-nova-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: delay-clone [delay]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: neutron-server-clone [neutron-server]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: httpd-clone [httpd]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]
</pre>
<br />
Failed Actions:<br />
<pre>* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=95, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:44:43 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=331, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:56:44 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=335, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:50:53 2016', queued=0ms, exec=2099ms
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=339, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:17 2016', queued=0ms, exec=2117ms
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=96, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:44:40 2016', queued=0ms, exec=0ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=332, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:56:42 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=339, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:13 2016', queued=0ms, exec=2145ms
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=341, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:28 2016', queued=0ms, exec=2147ms
* openstack-aodh-evaluator_start_0 on overcloud-controller-2 'not running' (7): call=368, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:53:18 2016', queued=0ms, exec=2107ms
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=321, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:56:46 2016', queued=0ms, exec=0ms
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=326, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:51:06 2016', queued=0ms, exec=2185ms
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=378, status=complete, exitreason='none',
last-rc-change='Sat Jun 18 09:54:14 2016', queued=1ms, exec=2116ms
</pre>
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled<br />
<br />
[root@overcloud-controller-0 ~(keystone_admin)]# ovs-vsctl show<br />
8fea5ee4-62cf-4767-96c8-d9867cab9972<br />
Bridge br-tun<br />
fail_mode: secure<br />
Port br-tun<br />
Interface br-tun<br />
type: internal<br />
Port "vxlan-ac100004"<br />
Interface "vxlan-ac100004"<br />
type: vxlan<br />
<span style="color: #b45f06;"> options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.4"}</span><br />
Port "vxlan-ac100005"<br />
Interface "vxlan-ac100005"<br />
type: vxlan<br />
<span style="color: #b45f06;">options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.5"}</span><br />
Port "vxlan-ac100008"<br />
Interface "vxlan-ac100008"<br />
type: vxlan<br />
<span style="color: #b45f06;"> options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.8"}</span><br />
Port patch-int<br />
Interface patch-int<br />
type: patch<br />
options: {peer=patch-tun}<br />
Port "vxlan-ac100007"<br />
Interface "vxlan-ac100007"<br />
type: vxlan<br />
<span style="color: #b45f06;">options: {df_default="true", in_key=flow, local_ip="172.16.0.6", out_key=flow, remote_ip="172.16.0.7"}</span><br />
Bridge br-int<br />
fail_mode: secure<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Port patch-tun<br />
Interface patch-tun<br />
type: patch<br />
options: {peer=patch-int}<br />
Port int-br-ex<br />
Interface int-br-ex<br />
type: patch<br />
options: {peer=phy-br-ex}<br />
<span style="color: #b45f06;"> Bridge br-ex</span><br />
<span style="color: #b45f06;"> Port br-ex</span><br />
<span style="color: #b45f06;"> Interface br-ex</span><br />
<span style="color: #b45f06;"> type: internal</span><br />
Port "vlan20"<br />
tag: 20<br />
Interface "vlan20"<br />
type: internal<br />
<span style="color: #b45f06;"> Port "eth0"</span><br />
<span style="color: #b45f06;"> Interface "eth0"</span><br />
Port phy-br-ex<br />
Interface phy-br-ex<br />
type: patch<br />
options: {peer=int-br-ex}<br />
Port "vlan40"<br />
tag: 40<br />
Interface "vlan40"<br />
type: internal<br />
Port "vlan50"<br />
tag: 50<br />
Interface "vlan50"<br />
type: internal<br />
<span style="color: #b45f06;">Port "vlan10"</span><br />
<span style="color: #b45f06;"> tag: 10</span><br />
<span style="color: #b45f06;"> Interface "vlan10"</span><br />
type: internal<br />
Port "vlan30"<br />
tag: 30<br />
Interface "vlan30"<br />
type: internal<br />
ovs_version: "2.5.0"<br />
<br />
[root@overcloud-controller-0 ~(keystone_admin)]# ifconfig<br />
<pre>br-ex: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.11  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:50:dc:cf:b7:d5  txqueuelen 0  (Ethernet)
        RX packets 15254  bytes 29305270 (27.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 15111  bytes 2037368 (1.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet6 fe80::250:dcff:fecf:b7d5  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:50:dc:cf:b7:d5  txqueuelen 1000  (Ethernet)
        RX packets 554865  bytes 314056269 (299.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 537763  bytes 196316938 (187.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 128951  bytes 42842317 (40.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 128951  bytes 42842317 (40.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.0.6  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::2cf7:9cff:fe98:df2e  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 2e:f7:9c:98:df:2e  txqueuelen 0  (Ethernet)
        RX packets 1563  bytes 22172141 (21.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 935  bytes 339459 (331.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan20: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.2.9  netmask 255.255.255.0  broadcast 172.16.2.255
        inet6 fe80::9c4a:96ff:fe42:f562  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 9e:4a:96:42:f5:62  txqueuelen 0  (Ethernet)
        RX packets 515281  bytes 202417994 (193.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 498334  bytes 112312907 (107.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan30: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.1.5  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::8cbe:80ff:fe80:7945  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 8e:be:80:80:79:45  txqueuelen 0  (Ethernet)
        RX packets 20275  bytes 45196003 (43.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20405  bytes 52618634 (50.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan40: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.3.6  netmask 255.255.255.0  broadcast 172.16.3.255
        inet6 fe80::8c06:98ff:fe7a:5b7  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 8e:06:98:7a:05:b7  txqueuelen 0  (Ethernet)
        RX packets 2299  bytes 12722091 (12.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2557  bytes 26854977 (25.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan50: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.0.6  netmask 255.255.255.0  broadcast 172.16.0.255
        inet6 fe80::6454:dff:fe41:90e9  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 66:54:0d:41:90:e9  txqueuelen 0  (Ethernet)
        RX packets 107  bytes 9834 (9.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 121  bytes 12394 (12.1 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0</pre>
[root@overcloud-controller-0 ~(keystone_admin)]# route -n<br />
<pre>Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
<span style="color: #b45f06;">0.0.0.0         10.0.0.1        0.0.0.0         UG    0      0        0 vlan10</span>
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
169.254.169.254 192.0.2.1       255.255.255.255 UGH   0      0        0 br-ex
172.16.0.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan50
172.16.1.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan30
172.16.2.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan20
172.16.3.0      0.0.0.0         255.255.255.0   U     0      0        0 vlan40
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ex</pre>
<pre> </pre>
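As a side note, the longest-prefix selection the kernel applies to a table like this can be sketched with Python's stdlib `ipaddress` module. The route entries below are copied from the `route -n` output above; the `lookup` helper is my own illustration, not kernel code:

```python
import ipaddress

# Routes copied from the `route -n` output above: (destination, genmask, iface)
ROUTES = [
    ("0.0.0.0",         "0.0.0.0",         "vlan10"),
    ("10.0.0.0",        "255.255.255.0",   "vlan10"),
    ("169.254.169.254", "255.255.255.255", "br-ex"),
    ("172.16.0.0",      "255.255.255.0",   "vlan50"),
    ("172.16.1.0",      "255.255.255.0",   "vlan30"),
    ("172.16.2.0",      "255.255.255.0",   "vlan20"),
    ("172.16.3.0",      "255.255.255.0",   "vlan40"),
    ("192.0.2.0",       "255.255.255.0",   "br-ex"),
]

def lookup(ip):
    """Pick the egress interface by longest-prefix match, like the kernel does."""
    addr = ipaddress.ip_address(ip)
    best_len, best_iface = -1, None
    for dest, mask, iface in ROUTES:
        net = ipaddress.ip_network("{}/{}".format(dest, mask))
        # A more specific (longer) prefix wins over the default 0.0.0.0/0 route.
        if addr in net and net.prefixlen > best_len:
            best_len, best_iface = net.prefixlen, iface
    return best_iface
```

For example, `lookup("172.16.2.9")` hits the internal_api route on vlan20, while any address outside the listed subnets falls through to the default route on vlan10.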
<pre>[root@overcloud-controller-0 ~]# cat /etc/os-net-config/config.json | jq '.[]'
[
{
"addresses": [
{
"ip_netmask": "192.0.2.11/24"
}
],
"type": "ovs_bridge",
"use_dhcp": false,
"routes": [
{
"next_hop": "192.0.2.1",
"ip_netmask": "169.254.169.254/32"
}
],
"members": [
{
"primary": true,
"name": "nic1",
"type": "interface"
},
{
"vlan_id": 10,
"addresses": [
{
"ip_netmask": "10.0.0.6/24"
}
],
"type": "vlan",
"routes": [
{
"next_hop": "10.0.0.1",
"default": true
}
]
},
{
"vlan_id": 20,
"addresses": [
{
"ip_netmask": "172.16.2.9/24"
}
],
"type": "vlan"
},
{
"vlan_id": 30,
"addresses": [
{
"ip_netmask": "172.16.1.5/24"
}
],
"type": "vlan"
},
{
"vlan_id": 40,
"addresses": [
{
"ip_netmask": "172.16.3.6/24"
}
],
"type": "vlan"
},
{
"vlan_id": 50,
"addresses": [
{
"ip_netmask": "172.16.0.6/24"
}
],
"type": "vlan"
}
],
"name": "br-ex",
"dns_servers": [
"8.8.8.8",
"8.8.4.4"
]
}
]
</pre>
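To sanity-check a config like this without deploying, the VLAN members can be pulled out with a few lines of Python. This is only a sketch against a trimmed-down sample that mirrors the br-ex entry above (the `vlan_map` helper is mine, not part of os-net-config):

```python
import json

# Trimmed-down sample mirroring the br-ex entry in config.json above;
# field names follow the os-net-config schema shown in the post.
CONFIG = """
[
  {
    "type": "ovs_bridge",
    "name": "br-ex",
    "addresses": [{"ip_netmask": "192.0.2.11/24"}],
    "members": [
      {"type": "interface", "name": "nic1", "primary": true},
      {"type": "vlan", "vlan_id": 10,
       "addresses": [{"ip_netmask": "10.0.0.6/24"}]},
      {"type": "vlan", "vlan_id": 20,
       "addresses": [{"ip_netmask": "172.16.2.9/24"}]}
    ]
  }
]
"""

def vlan_map(config):
    """Map vlan_id -> ip_netmask for every vlan member of every bridge."""
    out = {}
    for dev in json.loads(config):
        for member in dev.get("members", []):
            if member.get("type") == "vlan":
                out[member["vlan_id"]] = member["addresses"][0]["ip_netmask"]
    return out
```

Run against the full config.json this would list all six VLAN interfaces (10, 20, 30, 40, 50) with the addresses that `ifconfig` reports above.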
<pre>************************
On undercloud
************************
</pre>
<pre>[stack@undercloud ~]$ sudo su -
Last login: Sat Jun 18 10:47:31 UTC 2016 on pts/1
[root@undercloud ~]# ovs-vsctl show
7fb4d9b7-4704-410f-845f-6f3f0a1b65cd
<span style="color: #b45f06;">    Bridge br-ctlplane
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal</span>
        Port br-ctlplane
            Interface br-ctlplane
                type: internal
        Port phy-br-ctlplane
            Interface phy-br-ctlplane
                type: patch
                options: {peer=int-br-ctlplane}
        Port "eth1"
            Interface "eth1"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap41a7c72c-39"
            tag: 1
            Interface "tap41a7c72c-39"
                type: internal
        Port int-br-ctlplane
            Interface int-br-ctlplane
                type: patch
                options: {peer=phy-br-ctlplane}
    ovs_version: "2.5.0"</pre>
<pre>[root@undercloud ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.23.1    0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ctlplane
192.168.23.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
</pre>
<pre>[root@undercloud ~]# ifconfig
<span style="color: #b45f06;">br-ctlplane: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255</span>
        inet6 fe80::2ad:c4ff:fe6f:778a  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:ad:c4:6f:77:8a  txqueuelen 0  (Ethernet)
        RX packets 4743446  bytes 382457275 (364.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6573214  bytes 31299066406 (29.1 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.168.23.46  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::2ad:c4ff:fe6f:7788  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:ad:c4:6f:77:88  txqueuelen 1000  (Ethernet)
        RX packets 402911  bytes 1166354846 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 286351  bytes 63608008 (60.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet6 fe80::2ad:c4ff:fe6f:778a  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:ad:c4:6f:77:8a  txqueuelen 1000  (Ethernet)
        RX packets 4793675  bytes 390579748 (372.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 6627325  bytes 32167819071 (29.9 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 5342779  bytes 31375282714 (29.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5342779  bytes 31375282714 (29.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt;  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:b7:65:c0  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::c4d1:81ff:fec1:6006  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether c6:d1:81:c1:60:06  txqueuelen 0  (Ethernet)
        RX packets 49362  bytes 7857042 (7.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 52980  bytes 868430005 (828.1 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0</pre>
<pre> </pre>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-28024676854984921052016-06-11T12:29:00.000-07:002016-06-11T14:21:50.036-07:00RDO Mitaka Virtual Deployment having real physical network as External<div dir="ltr" style="text-align: left;" trbidi="on">
The Nova-Docker driver is installed on the Compute node, which is supposed to run several Java EE servers as lightweight Nova-Docker containers
(instances) with floating IPs on an external flat network (actually the real office network 192.168.1.0/24). General setup: RDO Mitaka <a href="http://textuploader.com/5b4ow" target="_blank">ML2&OVS&VLAN 3 Nodes.</a> VLAN tenant segregation was selected for the RDO landscape to avoid a DVR configuration on the Controller && Compute cluster.<br />
Details here: <a href="http://bderzhavets.blogspot.ru/2016/04/setup-docker-hypervisor-on-multi-node.html" target="_blank">Setup Docker Hypervisor on Multi Node DVR Cluster RDO Mitaka</a> <br />
<br />
Configuration RDO Mitaka :-<br />
<br />
<span style="color: #b45f06;">Controller/Network (VM) 192.169.142.127 (eth0 - mgmt, eth1 - vlan</span><br />
<span style="color: #b45f06;"> vm/data, eth2 - external)</span><br />
<span style="color: #b45f06;"> Compute (VM) 192.169.142.137 (eth0 - mgmt, eth1 - vlan vm/data)</span><br />
<span style="color: #b45f06;"> Storage (VM ) 192.169.142.147 (eth0 -mgmt)</span><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIeLlfsnX0YT5SCXih0G4YP9EaW7dP3E8V_Z_IceqIy-y-dnnOZklvIYTeaSdnEmg7b8A9CrJtjAzqd9I18gmetF0ApNVZwq1sMh7olsaik_XMtIttU59IxdyxxX_DiOvil4uszw/s1600/Screenshot+from+2016-06-12+00-20-47.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIeLlfsnX0YT5SCXih0G4YP9EaW7dP3E8V_Z_IceqIy-y-dnnOZklvIYTeaSdnEmg7b8A9CrJtjAzqd9I18gmetF0ApNVZwq1sMh7olsaik_XMtIttU59IxdyxxX_DiOvil4uszw/s640/Screenshot+from+2016-06-12+00-20-47.png" width="640" /></a></div>
<br />
********************************************************************************************<br />
The office LAN 192.168.1.0/24 is supposed to serve as the external network (configured via the flat network provider) for the deployed system of VMs. VIRTHOST (F23) is based on the Linux bridge br0, with the original interface enp3s0 as its source interface<br />
********************************************************************************************<br />
[root@fedora23wks network-scripts]# cat ifcfg-br0 <br />
DEVICE=br0<br />
TYPE=Bridge<br />
BOOTPROTO=static<br />
DNS1=192.168.1.1<br />
DNS2=83.221.202.254<br />
GATEWAY=192.168.1.1<br />
IPADDR=192.168.1.57<br />
NETMASK=255.255.255.0<br />
ONBOOT=yes<br />
<br />
[root@fedora23wks network-scripts]# cat ifcfg-enp3s0<br />
DEVICE=enp3s0<br />
HWADDR=78:24:af:43:1b:53<br />
ONBOOT=yes<br />
TYPE=Ethernet<br />
IPV6INIT=no<br />
USERCTL=no<br />
BRIDGE=br0<br />
<br />
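The address math in ifcfg-br0 can be double-checked with Python's stdlib `ipaddress` module (just a verification aid, not part of the setup):

```python
import ipaddress

# IPADDR/NETMASK taken from ifcfg-br0 above
iface = ipaddress.ip_interface("192.168.1.57/255.255.255.0")

network = iface.network                # the office LAN, 192.168.1.0/24
broadcast = network.broadcast_address  # what ifconfig would report for br0
# GATEWAY from ifcfg-br0 must fall inside the same subnet
gateway_in_lan = ipaddress.ip_address("192.168.1.1") in network
```

This confirms br0 sits on 192.168.1.0/24 with broadcast 192.168.1.255, and that the gateway 192.168.1.1 is reachable on-link.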
*************************** <br />
Then run the script<br />
***************************<br />
<span style="color: #b45f06;">#!/bin/bash -x <br />
chkconfig network on<br />
systemctl stop NetworkManager<br />
systemctl disable NetworkManager <br />
service network restart<br />
</span><br />
Reboot the node<br />
[root@fedora23wks network-scripts]# brctl show<br />
bridge name bridge id STP enabled interfaces<br />
br0 8000.7824af431b53 no enp3s0<br />
vnet2<br />
********************************************************************************************<br />
Creating the external network via the flat external network provider on the Controller,<br />
matching the CIDR of the office LAN. 192.168.1.1 is the IP of the external physical router<br />
device.<br />
********************************************************************************************<br />
<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGWFL_xi6dSGH7waWmaJyF1H5Wha4E8dXu1odvs4tHPOXPlsPrMam2rBUpsB5JMPnzRjXs0eKCDiiqI8QNqQX9L_6cj4oSrGczub2VLnN5HsEMe-08V-m6QBRJT9OYQO9h9RkVQQ/s1600/Screenshot+from+2016-06-11+21-45-13.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGWFL_xi6dSGH7waWmaJyF1H5Wha4E8dXu1odvs4tHPOXPlsPrMam2rBUpsB5JMPnzRjXs0eKCDiiqI8QNqQX9L_6cj4oSrGczub2VLnN5HsEMe-08V-m6QBRJT9OYQO9h9RkVQQ/s640/Screenshot+from+2016-06-11+21-45-13.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrM7GykNUb-TNI9Gpc10oevCEP71UK1Qza4_yav7CkYBzzfuZgBJbF3q7g7dIbO51exQDdI2fadEyXK_4mWLX6PiLYIAe0I1jqJL58VrM9JjgRTpu05nnhee_hRoBBjdhT2tvuRw/s1600/Screenshot+from+2016-06-11+21-49-32.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrM7GykNUb-TNI9Gpc10oevCEP71UK1Qza4_yav7CkYBzzfuZgBJbF3q7g7dIbO51exQDdI2fadEyXK_4mWLX6PiLYIAe0I1jqJL58VrM9JjgRTpu05nnhee_hRoBBjdhT2tvuRw/s640/Screenshot+from+2016-06-11+21-49-32.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_0t6oxRaN73RTAl0gVx7NZzA2Gy8DUxpW0gzsRwE8albiKvIXmleElJkPPu8bAJK1-l7-IctuPbh9tSaDYM0tDy_3FdNwa0g2ZmOD77w0aBDG8f5C1C6VaiRQowakIjgWHA0GEw/s1600/Screenshot+from+2016-06-11+21-50-44.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_0t6oxRaN73RTAl0gVx7NZzA2Gy8DUxpW0gzsRwE8albiKvIXmleElJkPPu8bAJK1-l7-IctuPbh9tSaDYM0tDy_3FdNwa0g2ZmOD77w0aBDG8f5C1C6VaiRQowakIjgWHA0GEw/s640/Screenshot+from+2016-06-11+21-50-44.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
********************************<br />
Controller Configuration<br />
********************************<br />
<br />
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^$|grep -v ^#<br />
[DEFAULT]<br />
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver<br />
agent_mode = legacy<br />
gateway_external_network_id =<br />
<span style="color: #b45f06;">external_network_bridge = </span><br />
debug = False<br />
<br />
[AGENT]<br />
[root@ip-192-169-142-127 neutron(keystone_admin)]# cd plugins/ml2<br />
[root@ip-192-169-142-127 ml2(keystone_admin)]# cat ml2_conf.ini<br />
[DEFAULT]<br />
[ml2]<br />
type_drivers = vlan,flat<br />
tenant_network_types = vlan<br />
mechanism_drivers =openvswitch<br />
path_mtu = 0<br />
[ml2_type_flat]<br />
flat_networks = *<br />
[ml2_type_geneve]<br />
[ml2_type_gre]<br />
[ml2_type_vlan]<br />
<span style="color: #b45f06;">network_vlan_ranges =physnet1:100:200,physnet2</span><br />
[ml2_type_vxlan]<br />
[securitygroup]<br />
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br />
enable_security_group = True<br />
<br />
[root@ip-192-169-142-127 ml2(keystone_admin)]# cat openvswitch_agent.ini<br />
[DEFAULT]<br />
[agent]<br />
l2_population = False<br />
drop_flows_on_start = False<br />
[ovs]<br />
integration_bridge = br-int<br />
<span style="color: #b45f06;">bridge_mappings =physnet1:br-eth1,physnet2:br-eth2</span><br />
enable_tunneling=False<br />
local_ip=192.169.142.127<br />
[securitygroup]<br />
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver<br />
<br />
<br />
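To make the relationship between the two highlighted options explicit: each physical network name in `network_vlan_ranges` must resolve to an OVS bridge via `bridge_mappings` (here physnet1 carries tenant VLANs 100-200 on br-eth1, and physnet2 carries the flat external network on br-eth2). The sketch below parses both values from the configs above; it illustrates the option syntax and is not Neutron's actual parser:

```python
def parse_vlan_ranges(value):
    """Parse 'physnet1:100:200,physnet2' into {physnet: (low, high) or None}."""
    ranges = {}
    for entry in value.split(","):
        parts = entry.strip().split(":")
        # A bare physnet name (no range) means the physnet is usable for
        # flat/provider networks rather than a tenant VLAN pool.
        ranges[parts[0]] = (int(parts[1]), int(parts[2])) if len(parts) == 3 else None
    return ranges

def parse_bridge_mappings(value):
    """Parse 'physnet1:br-eth1,physnet2:br-eth2' into {physnet: bridge}."""
    return dict(entry.strip().split(":", 1) for entry in value.split(","))

# Values copied from ml2_conf.ini and openvswitch_agent.ini above
vlan_ranges = parse_vlan_ranges("physnet1:100:200,physnet2")
bridges = parse_bridge_mappings("physnet1:br-eth1,physnet2:br-eth2")
```

Every key in `vlan_ranges` has a matching bridge in `bridges`, which is the consistency the two config files must maintain.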
[root@ip-192-169-142-127 ~(keystone_admin)]# ovs-vsctl show<br />
d12e6a7a-f589-42cd-91b3-96156ad9ed59<br />
Bridge br-int<br />
fail_mode: secure<br />
Port "tap4118e71e-a4"<br />
tag: 2<br />
Interface "tap4118e71e-a4"<br />
type: internal<br />
Port "qr-41a1a0fa-ec"<br />
tag: 1<br />
Interface "qr-41a1a0fa-ec"<br />
type: internal<br />
Port "tap390b9bc5-b9"<br />
tag: 1<br />
Interface "tap390b9bc5-b9"<br />
type: internal<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Port "int-br-eth1"<br />
Interface "int-br-eth1"<br />
type: patch<br />
options: {peer="phy-br-eth1"}<br />
Port "qg-65a69bdf-c7"<br />
tag: 2<br />
Interface "qg-65a69bdf-c7"<br />
type: internal<br />
Port "int-br-eth2"<br />
Interface "int-br-eth2"<br />
type: patch<br />
options: {peer="phy-br-eth2"}<br />
Bridge "br-eth2" <span style="color: #b45f06;"><=== external bridge for non-bridged networking</span><br />
Port "phy-br-eth2"<br />
Interface "phy-br-eth2"<br />
type: patch<br />
options: {peer="int-br-eth2"}<br />
Port "br-eth2"<br />
Interface "br-eth2"<br />
type: internal<br />
Port "eth2"<br />
Interface "eth2"<br />
Bridge br-ex<br />
Port br-ex<br />
Interface br-ex<br />
type: internal<br />
Port "eth0"<br />
Interface "eth0"<br />
Bridge "br-eth1" <span style="color: #b45f06;"><=== internal VLAN vm/data network bridge</span><br />
Port "phy-br-eth1"<br />
Interface "phy-br-eth1"<br />
type: patch<br />
options: {peer="int-br-eth1"}<br />
Port "eth1"<br />
Interface "eth1"<br />
Port "br-eth1"<br />
Interface "br-eth1"<br />
type: internal<br />
ovs_version: "2.4.0"<br />
<br />
****************************************************************************************<br />
Dashboard Console ( Controller VM on VIRTHOST 192.168.1.57 )<br />
**************************************************************************************** <br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTx9QiwwLsLA48Ys7qjAhLsi6DUSwhuwR3wMc_coDczoypgJwF9rVtap-6_I2h2GE5CnYAZrbqCSfPhFEzzbq0RNPisdoHvvjj46wqrEGAR3U217T8BR1_L82PnFPc-E8fwn1CkQ/s1600/Screenshot+from+2016-06-11+23-08-50.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgTx9QiwwLsLA48Ys7qjAhLsi6DUSwhuwR3wMc_coDczoypgJwF9rVtap-6_I2h2GE5CnYAZrbqCSfPhFEzzbq0RNPisdoHvvjj46wqrEGAR3U217T8BR1_L82PnFPc-E8fwn1CkQ/s640/Screenshot+from+2016-06-11+23-08-50.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCmeQeIL7DAfMlElPIjNwVh8xpGBtw6LjYixj0J-hTnwW5tD-5k8OZFOIFJidogrwAInH-U1235XarvFZWaq4tn4rfkr8DKGhVLYmqNRmaBeCG6C6AHKnQkqOxDly6vdtrN-TWVw/s1600/Screenshot+from+2016-06-11+23-01-26.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhCmeQeIL7DAfMlElPIjNwVh8xpGBtw6LjYixj0J-hTnwW5tD-5k8OZFOIFJidogrwAInH-U1235XarvFZWaq4tn4rfkr8DKGhVLYmqNRmaBeCG6C6AHKnQkqOxDly6vdtrN-TWVw/s640/Screenshot+from+2016-06-11+23-01-26.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
Connect to the GF 4.1 server from a remote workstation<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjynD4_l7VVU_8VsaWvO-C6ddzQ9wAUWZU1NGLvzneOR4YPX84W_OZ9DavKZBPunCfdoP1MGzoQsOflCODT7dtmJxAVsvq91ltJyhLRgdHI7P2SCLGfg_Yp02MiA_0X3uRqWL8EiA/s1600/Screenshot+from+2016-06-11+23-25-59.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjynD4_l7VVU_8VsaWvO-C6ddzQ9wAUWZU1NGLvzneOR4YPX84W_OZ9DavKZBPunCfdoP1MGzoQsOflCODT7dtmJxAVsvq91ltJyhLRgdHI7P2SCLGfg_Yp02MiA_0X3uRqWL8EiA/s640/Screenshot+from+2016-06-11+23-25-59.png" width="640" /></a></div>
<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVz-Cbu7JdpcNmMVT2d1uF6x669soQGuob2FYUbeXgogP81SbYlWynd3d1utMDED_xSihCkB3atpjyicNhm9VZxQZbIf7aB2wDUSfkjWvjA6eGpxvRtFod94iLgvRCRiosy1kCQA/s1600/Screenshot+from+2016-06-11+23-05-19.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVz-Cbu7JdpcNmMVT2d1uF6x669soQGuob2FYUbeXgogP81SbYlWynd3d1utMDED_xSihCkB3atpjyicNhm9VZxQZbIf7aB2wDUSfkjWvjA6eGpxvRtFod94iLgvRCRiosy1kCQA/s640/Screenshot+from+2016-06-11+23-05-19.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgO-W8F79ULxzx_sBI5r2u1wHHQBHNDU7dDi3jin6fsMh4gfJiN9sbL9Wj8skmhS78J4AFlxcrhvwiU4sGqR5nGzLuPIIeSBLEigB_KFquMQMcU0KSzj7K2x71SNL6IhcwQwX3mWA/s1600/Screenshot+from+2016-06-11+23-22-07.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgO-W8F79ULxzx_sBI5r2u1wHHQBHNDU7dDi3jin6fsMh4gfJiN9sbL9Wj8skmhS78J4AFlxcrhvwiU4sGqR5nGzLuPIIeSBLEigB_KFquMQMcU0KSzj7K2x71SNL6IhcwQwX3mWA/s640/Screenshot+from+2016-06-11+23-22-07.png" width="640" /></a></div>
<br />
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-62996968859012361082016-06-03T05:44:00.000-07:002016-06-21T07:11:26.125-07:00RDO Triple0 QuickStart HA Setup on Intel Core i7-4790 Desktop<div dir="ltr" style="text-align: left;" trbidi="on">
<div dir="ltr" style="text-align: left;" trbidi="on">
################# <br />
UPDATE 06/17/2016<br />
#################<br />
<br />
In the meantime, undercloud-install and undercloud-post-install (openstack
undercloud install, openstack overcloud image upload) are supposed to
be performed during the original run of `bash quickstart.sh --config /path-to/ha.yml
$VIRTHOST`. The Neutron networks deployment on the undercloud and the HA servers'
configuration have been significantly
rebuilt during the last weeks. I believe the current design is close to
the one proposed in <a href="https://remote-lab.net/rdo-manager-ha-openstack-deployment" target="_blank">https://remote-lab.net/rdo-manager-ha-openstack-deployment</a><br />
However, an attempt to reproduce <a href="http://docs.openstack.org/developer/tripleo-docs/installation/installation.html" target="_blank">http://docs.openstack.org/developer/tripleo-docs/installation/installation.html</a><br />
results in a hang on `openstack undercloud install`, when it attempts to start<br />
openstack-nova-compute on the undercloud. Nova-compute.log reports a failure<br />
to connect to 127.0.0.1:5672. Verification via `netstat -antp | grep 5672` shows<br />
port 5672 bound only to 192.0.2.1 (the ctlplane IP address).<br />
<br />
See also <a href="https://www.redhat.com/archives/rdo-list/2016-March/msg00171.html" target="_blank">https://www.redhat.com/archives/rdo-list/2016-March/msg00171.html</a><br />
Quoting ( complaints are not mine) :-<br />
<span style="color: #b45f06;">By the way, I'd love to see and help to have an complete installation
guide for TripleO powered by RDO on the RDO site (the instack virt setup
without quickstart . . . . </span><br />
<br />
Then start on the workstation:<br />
<br />
$ git clone https://github.com/openstack/tripleo-quickstart<br />
$ cd tripleo-quickstart<br />
$ sudo bash quickstart.sh --install-deps<br />
$ sudo yum -y install redhat-rpm-config<br />
<br />
<br />
$ export VIRTHOST=192.168.1.75 #put your own IP here<br />
$ ssh-keygen <br />
$ ssh-copy-id root@$VIRTHOST<br />
$ ssh root@$VIRTHOST uname -a # no root login prompt<br />
<br />
Then run under tripleo-quickstart<br />
<br />
<span style="color: #b45f06;">$ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST</span><br />
During this run, the most important thing is to reach this point on VIRTHOST<br />
<pre>[root@ServerCentOS72 ~]# cd /var/cache/tripleo-quickstart/images
[root@ServerCentOS72 images]# ls -l
total 2638232
-rw-rw-r--. 1 stack stack 2701548544 Jun 17 19:25 83e62624dd7bd637dada343bbf4fe8f1.qcow2
lrwxrwxrwx. 1 stack stack 75 Jun 17 19:25 latest-undercloud.qcow2 -> /var/cache/tripleo-quickstart/images/83e62624dd7bd637dada343bbf4fe8f1.qcow2
</pre>
If everything went well, you will be brought back to the command prompt and see this message <code><span class="com"> </span></code><br />
<code><span class="com"><br /></span></code>
<code><span class="com">PLAY RECAP
</span></code><br />
<pre><code><span class="com">*********************************************************************
192.168.1.75 : ok=97 changed=50 unreachable=0 failed=0
localhost : ok=10 changed=4 unreachable=0 failed=0
undercloud : ok=24 changed=15 unreachable=0 failed=0
Friday 17 June 2016 19:48:21 +0300 (0:00:00.122) 0:25:38.417 ***********
===============================================================================
tripleo/undercloud : Install the undercloud --------------------------- 997.81s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/install-undercloud.yml:1
setup/undercloud : Get image ------------------------------------------- 83.00s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:81
setup/undercloud : Get undercloud vm ip address ------------------------ 81.33s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:173
setup/undercloud : Resize undercloud image (call virt-resize) ---------- 77.90s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:122
setup/undercloud : Copy instackenv.json to appliance ------------------- 71.66s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:53
tripleo/undercloud : Prepare the undercloud for deploy ----------------- 64.63s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/post-install.yml:27
setup/undercloud : Upload undercloud volume to storage pool ------------ 54.75s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:142
teardown/nodes : Check overcloud vms ----------------------------------- 36.14s
/home/boris/tripleo-quickstart/roles/libvirt/teardown/nodes/tasks/main.yml:21 -
setup/undercloud : Inject undercloud ssh public key to appliance -------- 7.68s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:72
setup/undercloud : Get actual md5 checksum of image --------------------- 6.03s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:92
setup/undercloud : Perform selinux relabel on undercloud image ---------- 3.59s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/main.yml:94
tripleo/undercloud : Create undercloud configuration -------------------- 1.92s
/home/boris/tripleo-quickstart/roles/tripleo/undercloud/tasks/create-scripts.yml:3
setup ------------------------------------------------------------------- 1.82s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.73s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.65s
/home/boris/.quickstart/playbooks/provision.yml:29 ----------------------------
setup ------------------------------------------------------------------- 1.64s
None --------------------------------------------------------------------------
setup ------------------------------------------------------------------- 1.22s
None --------------------------------------------------------------------------
setup/overcloud : Define overcloud vms ---------------------------------- 1.19s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:67
setup/overcloud : Create overcloud vm storage --------------------------- 1.18s
/home/boris/tripleo-quickstart/roles/libvirt/setup/overcloud/tasks/main.yml:55
setup/undercloud : Get qcow2 image from cache --------------------------- 1.16s
/home/boris/tripleo-quickstart/roles/libvirt/setup/undercloud/tasks/fetch_image.yml:144
+ set +x
##################################
Virtual Environment Setup Complete
##################################
</span></code></pre>
<code><span class="com">
<br /><br />Access the undercloud by:<br /><br /> ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud<br /><br />There are scripts in the home directory to continue the deploy:<br /><br /> <span style="color: #b45f06;"> overcloud-deploy.sh will deploy the overcloud<br /> overcloud-deploy-post.sh will do any post-deploy configuration<br /> overcloud-validate.sh will run post-deploy validation</span><br /><br />Alternatively, you can ignore these scripts and follow the upstream docs,<br />starting from the overcloud deploy section:<br /><br /> http://ow.ly/1Vc1301iBlb</span></code><br />
<code><span class="com"><br /></span></code>
Then run the 3 scripts mentioned above<br />
<br />
[stack@undercloud ~]$ neutron net-list<br />
<pre>+--------------------------------------+--------------+----------------------------------------+
| id                                   | name         | subnets                                |
+--------------------------------------+--------------+----------------------------------------+
| 9f0b6b5e-4859-4ecb-9870-a0704330ba3b | internal_api | 233ee2b9-84a3-4c78-bbd3-f9e2bbca37dd   |
|                                      |              | 172.16.2.0/24                          |
| b7122e93-0a04-41c5-8638-d011910d9dd5 | external     | 775b0c70-521f-4313-9010-404b136bf863   |
|                                      |              | 10.0.0.0/24                            |
| be6df0b9-d75e-4c92-ac1c-326fa60d5815 | tenant       | 5b5e7299-90dc-46ff-860b-3bb8324cd650   |
|                                      |              | 172.16.0.0/24                          |
| 4cf94755-4a87-4a81-9454-e8757928860f | storage_mgmt | 86068f21-37d6-4439-93b7-58982018a60c   |
|                                      |              | 172.16.3.0/24                          |
| e3bca056-be41-4330-9dc3-262f4a54d3b2 | storage      | 335e91d4-91f9-4c37-a129-3c23cf77b8e3   |
|                                      |              | 172.16.1.0/24                          |
| 6fada30d-71cb-435a-b06c-76932a12bc96 | ctlplane     | 372a173e-1aed-4df8-83ca-55f4f272d910   |
|                                      |              | 192.0.2.0/24                           |
+--------------------------------------+--------------+----------------------------------------+
</pre>
<br />
[stack@undercloud ~]$ heat stack-list<br />
<pre>+--------------------------------------+------------+-----------------+---------------------+--------------+
| id                                   | stack_name | stack_status    | creation_time       | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| cad1cbe8-5790-4665-9512-9add40cea4e8 | overcloud  | CREATE_COMPLETE | 2016-06-17T16:53:29 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+
</pre>
<br />
[stack@undercloud ~]$ nova list<br />
<pre>+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID                                   | Name                    | Status | Task State | Power State | Networks            |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 74ad2828-978c-4c05-a7d7-24e3d769f09d | overcloud-controller-0  | ACTIVE | -          | Running     | ctlplane=192.0.2.9  |
| 2d8d0321-f93b-42e7-857c-a7199ee89e27 | overcloud-controller-1  | ACTIVE | -          | Running     | ctlplane=192.0.2.7  |
| ba130214-385f-4d32-948b-6ec522705bf3 | overcloud-controller-2  | ACTIVE | -          | Running     | ctlplane=192.0.2.10 |
| 726a4273-9970-4601-8405-0d5e9a096691 | overcloud-novacompute-0 | ACTIVE | -          | Running     | ctlplane=192.0.2.8  |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
</pre>
[root@undercloud ~]# route -n<br />
Kernel IP routing table<br />
<pre>Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.23.1    0.0.0.0         UG    0      0        0 eth0
10.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 vlan10
192.0.2.0       0.0.0.0         255.255.255.0   U     0      0        0 br-ctlplane
192.168.23.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
</pre>
<br />
<br />
[root@undercloud ~]# ovs-vsctl show<br />
cc957f71-47e7-4fdd-a32b-26b31de42cd0<br />
Bridge br-ctlplane<br />
Port "vlan10"<br />
tag: 10<br />
Interface "vlan10"<br />
type: internal<br />
Port phy-br-ctlplane<br />
Interface phy-br-ctlplane<br />
type: patch<br />
options: {peer=int-br-ctlplane}<br />
Port br-ctlplane<br />
Interface br-ctlplane<br />
type: internal<br />
Port "eth1"<br />
Interface "eth1"<br />
Bridge br-int<br />
fail_mode: secure<br />
Port "tap2138f24c-cf"<br />
tag: 1<br />
Interface "tap2138f24c-cf"<br />
type: internal<br />
Port int-br-ctlplane<br />
Interface int-br-ctlplane<br />
type: patch<br />
options: {peer=phy-br-ctlplane}<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
ovs_version: "2.5.0"<br />
<br />
[root@undercloud ~]# ifconfig<br />
<pre>br-ctlplane: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.1  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::251:8eff:fed1:cae1  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:51:8e:d1:ca:e1  txqueuelen 0  (Ethernet)
        RX packets 3525063  bytes 282216789 (269.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5194629  bytes 24689319446 (22.9 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.168.23.24  netmask 255.255.255.0  broadcast 192.168.23.255
        inet6 fe80::251:8eff:fed1:cadf  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:51:8e:d1:ca:df  txqueuelen 1000  (Ethernet)
        RX packets 317765  bytes 583156188 (556.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 207058  bytes 40922620 (39.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

eth1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet6 fe80::251:8eff:fed1:cae1  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:51:8e:d1:ca:e1  txqueuelen 1000  (Ethernet)
        RX packets 3546320  bytes 289792462 (276.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5219521  bytes 24981243189 (23.2 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 3891442  bytes 26647179103 (24.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3891442  bytes 26647179103 (24.8 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099&lt;UP,BROADCAST,MULTICAST&gt;  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:60:59:f7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.0.1  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::3049:b4ff:fe89:f348  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 32:49:b4:89:f3:48  txqueuelen 0  (Ethernet)
        RX packets 20613  bytes 7441258 (7.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24083  bytes 291745696 (278.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0</pre>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFRJIwp0DfAzxXmnFTn7uKMSLKHzsTBWRs18We3-5XrGCQKCd86SzS6S_f-bXuhDFFd1k6KZibbIX4C5GlA92tC4pN9ZGIoSIFY9ULTMj7teITT_RlbwRaFPlxPGeRlaDi7BTsog/s1600/Screenshot+from+2016-06-18+00-56-27.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFRJIwp0DfAzxXmnFTn7uKMSLKHzsTBWRs18We3-5XrGCQKCd86SzS6S_f-bXuhDFFd1k6KZibbIX4C5GlA92tC4pN9ZGIoSIFY9ULTMj7teITT_RlbwRaFPlxPGeRlaDi7BTsog/s640/Screenshot+from+2016-06-18+00-56-27.png" width="640" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvKVHEc5OECwLovKW9oew4LjJcKIHcXYuv-qIpx6bGQzUtBIn7ozmqVq2Mn99jXyhPO4p8bFhspQsgSsyDk-kqXrbomRBq0JUKZgAAQFrN-4P9p15OmRPOkkpgCySlQBkOyoTTjA/s1600/Screenshot+from+2016-06-18+00-57-14.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjvKVHEc5OECwLovKW9oew4LjJcKIHcXYuv-qIpx6bGQzUtBIn7ozmqVq2Mn99jXyhPO4p8bFhspQsgSsyDk-kqXrbomRBq0JUKZgAAQFrN-4P9p15OmRPOkkpgCySlQBkOyoTTjA/s640/Screenshot+from+2016-06-18+00-57-14.png" width="640" /></a></div>
[root@overcloud-controller-0 ~(keystone_admin)]# nova-manage version<br />
Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".<br />
Option "notification_topics" from group "DEFAULT" is deprecated. Use option "topics" from group "oslo_messaging_notifications".<br />
13.0.1-0.20160611000828.c8ec9eb.el7.centos<br />
<br />
<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ~(keystone_admin)]# pcs status</span><br />
Cluster name: tripleo_cluster<br />
Last updated: Fri Jun 17 18:19:25 2016 Last change: Fri Jun 17 17:24:54 2016 by root via cibadmin on overcloud-controller-0<br />
Stack: corosync<br />
Current DC: overcloud-controller-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum<br />
3 nodes and 127 resources configured<br />
<br />
Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Full list of resources:<br />
<br />
ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.5 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1<br />
ip-172.16.3.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: haproxy-clone [haproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: galera-master [galera]<br />
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: memcached-clone [memcached]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
ip-10.0.0.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
ip-172.16.2.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1<br />
ip-172.16.1.4 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2<br />
Clone Set: rabbitmq-clone [rabbitmq]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-core-clone [openstack-core]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: redis-master [redis]<br />
Masters: [ overcloud-controller-0 ]<br />
Slaves: [ overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: mongod-clone [mongod]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0<br />
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-clone [openstack-heat-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-api-clone [openstack-glance-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-api-clone [openstack-nova-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: delay-clone [delay]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-server-clone [neutron-server]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: httpd-clone [httpd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Failed Actions:<br />
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=96, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:21:27 2016', queued=0ms, exec=0ms<br />
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-0 'not running' (7): call=364, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 18:16:34 2016', queued=0ms, exec=0ms<br />
* openstack-gnocchi-statsd_start_0 on overcloud-controller-0 'not running' (7): call=262, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:22:36 2016', queued=0ms, exec=2216ms<br />
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=325, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:25:56 2016', queued=0ms, exec=2088ms<br />
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=90, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:21:32 2016', queued=0ms, exec=0ms<br />
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-2 'not running' (7): call=345, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 18:16:30 2016', queued=0ms, exec=0ms<br />
* openstack-gnocchi-statsd_start_0 on overcloud-controller-2 'not running' (7): call=302, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:24:27 2016', queued=0ms, exec=2203ms<br />
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=304, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:24:32 2016', queued=0ms, exec=2102ms<br />
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=95, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:21:29 2016', queued=0ms, exec=0ms<br />
* openstack-gnocchi-metricd_monitor_60000 on overcloud-controller-1 'not running' (7): call=350, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 18:16:32 2016', queued=0ms, exec=0ms<br />
* openstack-gnocchi-statsd_start_0 on overcloud-controller-1 'not running' (7): call=309, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:24:37 2016', queued=0ms, exec=2206ms<br />
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=287, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 17 17:24:07 2016', queued=0ms, exec=2126ms<br />
<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled<br />
<br />
<br />
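The only stopped resources in the cluster report above are the telemetry ones (aodh, gnocchi, ceilometer-central), matching the Failed Actions list. Once their underlying problem is addressed, the failure history can be cleared so pacemaker retries them; a hedged sketch (resource names taken from the output above, commands printed rather than executed):<br />

```shell
# Print a pcs cleanup command for each failed telemetry resource;
# drop the 'echo' to actually run them on a controller node.
failed="openstack-aodh-evaluator openstack-gnocchi-metricd openstack-gnocchi-statsd openstack-ceilometer-central"
for rsc in $failed; do
  echo "pcs resource cleanup $rsc"
done
```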
<span style="color: #b45f06;">[heat-admin@overcloud-controller-0 ~]$ sudo cat /etc/os-net-config/config.json | jq '.[]'</span><br />
[<br />
{<br />
"dns_servers": [<br />
"8.8.8.8",<br />
"8.8.4.4"<br />
],<br />
"name": "br-ex",<br />
"members": [<br />
{<br />
"type": "interface",<br />
"name": "nic1",<br />
"primary": true<br />
},<br />
{<br />
"routes": [<br />
{<br />
"default": true,<br />
"next_hop": "10.0.0.1"<br />
}<br />
],<br />
"type": "vlan",<br />
"addresses": [<br />
{<br />
"ip_netmask": "10.0.0.7/24"<br />
}<br />
],<br />
"vlan_id": 10<br />
},<br />
{<br />
"type": "vlan",<br />
"addresses": [<br />
{<br />
"ip_netmask": "172.16.2.8/24"<br />
}<br />
],<br />
"vlan_id": 20<br />
},<br />
{<br />
"type": "vlan",<br />
"addresses": [<br />
{<br />
"ip_netmask": "172.16.1.8/24"<br />
}<br />
],<br />
"vlan_id": 30<br />
},<br />
{<br />
"type": "vlan",<br />
"addresses": [<br />
{<br />
"ip_netmask": "172.16.3.6/24"<br />
}<br />
],<br />
"vlan_id": 40<br />
},<br />
{<br />
"type": "vlan",<br />
"addresses": [<br />
{<br />
"ip_netmask": "172.16.0.7/24"<br />
}<br />
],<br />
"vlan_id": 50<br />
}<br />
],<br />
"routes": [<br />
{<br />
"ip_netmask": "169.254.169.254/32",<br />
"next_hop": "192.0.2.1"<br />
}<br />
],<br />
"use_dhcp": false,<br />
"type": "ovs_bridge",<br />
"addresses": [<br />
{<br />
"ip_netmask": "192.0.2.9/24"<br />
}<br />
]<br />
}<br />
]<br />
<br />
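Building on the jq call above, the VLAN layout can be pulled out on its own; `CFG` is a hypothetical override so the filter can also be tried against a copy of the file:<br />

```shell
# List only the vlan members' IDs from the os-net-config JSON
CFG=${CFG:-/etc/os-net-config/config.json}
vlan_filter='[.[].members[]? | select(.type=="vlan") | .vlan_id]'
if [ -r "$CFG" ]; then
  jq "$vlan_filter" "$CFG"   # on this node that is the five IDs 10, 20, 30, 40, 50
else
  echo "skipping: $CFG not readable"
fi
```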
[heat-admin@overcloud-controller-0 ~]$ sudo route -n<br />
<pre>Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.1 0.0.0.0 UG 0 0 0 vlan10
10.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan10
169.254.169.254 192.0.2.1 255.255.255.255 UGH 0 0 0 br-ex
172.16.0.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan50
172.16.1.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan30
172.16.2.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan20
172.16.3.0 0.0.0.0 255.255.255.0 U 0 0 0 vlan40
192.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-ex
</pre>
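Each 172.16.x.0/24 row above maps one VLAN interface to a role network; a quick reachability sketch against the other nodes' tunnel endpoints (the peer list is an assumption taken from this deployment's VXLAN `remote_ip` values shown further below; adjust for yours):<br />

```shell
# Ping each overlay peer once over the tenant-tunnel (vlan50) network
peers="172.16.0.4 172.16.0.5 172.16.0.6"
for p in $peers; do
  ping -c1 -W1 "$p" >/dev/null 2>&1 && echo "$p reachable" || echo "$p unreachable"
done
```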
<br />
[root@overcloud-controller-0 ~(keystone_admin)]# ifconfig<br />
<pre><span style="color: #b45f06;">br-ex: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 192.0.2.9  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::292:beff:fe94:32f9  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:92:be:94:32:f9  txqueuelen 0  (Ethernet)
        RX packets 32540  bytes 74708595 (71.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32034  bytes 3733716 (3.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0</span>

eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet6 fe80::292:beff:fe94:32f9  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 00:92:be:94:32:f9  txqueuelen 1000  (Ethernet)
        RX packets 1252373  bytes 973500960 (928.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1226276  bytes 584049729 (556.9 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt;  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10&lt;host&gt;
        loop  txqueuelen 0  (Local Loopback)
        RX packets 342429  bytes 576234034 (549.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 342429  bytes 576234034 (549.5 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan10: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 10.0.0.7  netmask 255.255.255.0  broadcast 10.0.0.255
        inet6 fe80::fcb9:82ff:fe2b:4785  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether fe:b9:82:2b:47:85  txqueuelen 0  (Ethernet)
        RX packets 18161  bytes 284359113 (271.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16451  bytes 5011950 (4.7 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan20: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.2.8  netmask 255.255.255.0  broadcast 172.16.2.255
        inet6 fe80::345b:85ff:fec9:1a58  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 36:5b:85:c9:1a:58  txqueuelen 0  (Ethernet)
        RX packets 1130946  bytes 290484989 (277.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1112747  bytes 214163892 (204.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan30: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.1.8  netmask 255.255.255.0  broadcast 172.16.1.255
        inet6 fe80::1892:70ff:febe:6fa5  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 1a:92:70:be:6f:a5  txqueuelen 0  (Ethernet)
        RX packets 51203  bytes 51062473 (48.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 49854  bytes 311860707 (297.4 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

vlan40: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.3.6  netmask 255.255.255.0  broadcast 172.16.3.255
        inet6 fe80::4858:c5ff:fe85:dca5  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 4a:58:c5:85:dc:a5  txqueuelen 0  (Ethernet)
        RX packets 18746  bytes 267835013 (255.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 14631  bytes 44417807 (42.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

<span style="color: #b45f06;">vlan50: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt;  mtu 1500
        inet 172.16.0.7  netmask 255.255.255.0  broadcast 172.16.0.255</span>
        inet6 fe80::80d1:c1ff:fe06:a095  prefixlen 64  scopeid 0x20&lt;link&gt;
        ether 82:d1:c1:06:a0:95  txqueuelen 0  (Ethernet)
        RX packets 621  bytes 62554 (61.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 146  bytes 12262 (11.9 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
</pre>
[root@overcloud-controller-0 ~(keystone_admin)]# ovs-vsctl show<br />
<pre>765a651a-f908-4ae7-9dab-1712de0f8ed2
    <span style="color: #b45f06;">Bridge br-ex</span>
        Port "vlan50"
            tag: 50
            Interface "vlan50"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        <span style="color: #b45f06;">Port "eth0"
            Interface "eth0"</span>
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "vlan40"
            tag: 40
            Interface "vlan40"
                type: internal
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port "qg-d116056c-ab"
            Interface "qg-d116056c-ab"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port "qr-02a6b269-22"
            tag: 4
            Interface "qr-02a6b269-22"
                type: internal
        Port "ha-2043a0a0-79"
            tag: 3
            Interface "ha-2043a0a0-79"
                type: internal
        Port "tap8d7afb39-38"
            tag: 4
            Interface "tap8d7afb39-38"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    <span style="color: #b45f06;">Bridge br-tun
        fail_mode: secure
        Port "vxlan-ac100005"
            Interface "vxlan-ac100005"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.7", out_key=flow, remote_ip="172.16.0.5"}
        Port "vxlan-ac100004"
            Interface "vxlan-ac100004"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.7", out_key=flow, remote_ip="172.16.0.4"}
        Port "vxlan-ac100006"
            Interface "vxlan-ac100006"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="172.16.0.7", out_key=flow, remote_ip="172.16.0.6"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int</span>
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.5.0"
</pre><br />
[root@overcloud-controller-0 ~(keystone_admin)]# cd /etc/neutron<br />
<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 neutron(keystone_admin)]# cat l3_agent.ini | grep -v ^#|grep -v ^$</span><br />
<pre>[DEFAULT]
ovs_use_veth = False
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
agent_mode = legacy
debug = False
[AGENT]
</pre><br />
[root@overcloud-controller-0 neutron(keystone_admin)]# cd plugins/ml2<br />
<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ml2(keystone_admin)]# cat ml2_conf.ini | grep -v ^#|grep -v ^$</span><br />
<pre>[DEFAULT]
[ml2]
type_drivers = vxlan,vlan,flat,gre
tenant_network_types = vxlan
mechanism_drivers =openvswitch
extension_drivers =qos,port_security
path_mtu = 0
[ml2_type_flat]
flat_networks = datacentre
[ml2_type_geneve]
[ml2_type_gre]
tunnel_id_ranges =1:4094
[ml2_type_vlan]
network_vlan_ranges =datacentre:1:1000
[ml2_type_vxlan]
vni_ranges =1:4094
vxlan_group = 224.0.0.1
[securitygroup]
</pre><br />
<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ml2(keystone_admin)]# cat openvswitch_agent.ini | grep -v ^#|grep -v ^$</span><br />
<pre>[DEFAULT]
[agent]
tunnel_types =vxlan
vxlan_udp_port = 4789
l2_population = False
drop_flows_on_start = False
extensions=qos
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 172.16.0.7
bridge_mappings =datacentre:br-ex
enable_tunneling=True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
</pre><br />
###########<br />END UPDATE<br />
###########<br />
<br />
This post follows up <a href="https://simplenfv.wordpress.com/2016/05/16/deploying-openstack-on-just-one-hosted-server/" target="_blank">Deploying OpenStack on just one hosted server</a>, but focuses on utilizing i7 4790/4770 CPUs with inexpensive boards such as the ASUS Z97-P carrying 32 GB RAM. A C2D E8400 with 8 GB RAM served as the remote workstation. Both PCs ran CentOS 7.2 (Release 1604). This environment yields a stable working configuration based on the ha.yml template in about a couple of hours:<br />
<br />
<span style="color: #b45f06;">######################</span><br />
<span style="color: #b45f06;"># Template code</span><br />
<span style="color: #b45f06;">######################</span><br />
compute_memory: 6144<br />
compute_vcpu: 1<br />
<br />
undercloud_memory: 8192<br />
<br />
# Giving the undercloud additional CPUs can greatly improve heat's<br />
# performance (and result in a shorter deploy time).<br />
undercloud_vcpu: 4<br />
<br />
# Create three controller nodes and one compute node.<br />
overcloud_nodes:<br />
- name: control_0<br />
flavor: control<br />
- name: control_1<br />
flavor: control<br />
- name: control_2<br />
flavor: control<br />
<br />
- name: compute_0<br />
flavor: compute<br />
<br />
# We don't need introspection in a virtual environment (because we are<br />
# creating all the "hardware" we really know the necessary<br />
# information).<br />
introspect: false<br />
<br />
# Tell tripleo about our environment.<br />
network_isolation: true<br />
extra_args: >-<br />
--control-scale 3 --neutron-network-type vxlan<br />
--neutron-tunnel-types vxlan<br />
-e /usr/share/openstack-tripleo-heat-templates/environments/puppet-pacemaker.yaml<br />
--ntp-server pool.ntp.org<br />
deploy_timeout: 75<br />
tempest: false<br />
pingtest: true<br />
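These memory figures are chosen to fit the 32 GB host exactly; a quick check of the arithmetic:<br />

```shell
# 3 controllers + 1 compute at 6144 MB each, plus an 8192 MB undercloud
controllers=3 computes=1
node_mb=6144 undercloud_mb=8192
total=$(( (controllers + computes) * node_mb + undercloud_mb ))
echo "$total MB"   # 32768 MB, i.e. the full 32 GB of the host board
```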
<br />
**********************************************************************************<br />
First, fix the bugs mentioned in the link above on the server's desktop,<br />
then run `yum groupinstall "Virtualization Host"`<br />
********************************************************************************** <br />
<br />
Then start on workstation :-<br />
<br />
$ git clone https://github.com/openstack/tripleo-quickstart<br />
$ cd tripleo-quickstart<br />
$ sudo bash quickstart.sh --install-deps<br />
$ sudo yum -y install redhat-rpm-config<br />
<br />
<br />
$ export VIRTHOST=192.168.1.75 #put your own IP here<br />
$ ssh-keygen <br />
$ ssh-copy-id root@$VIRTHOST<br />
$ ssh root@$VIRTHOST uname -a # should log in without a password prompt<br />
<br />
Then, from the tripleo-quickstart directory, run:<br />
<br />
<span style="color: #b45f06;">$ bash quickstart.sh --config ./config/general_config/ha.yml $VIRTHOST</span><br />
<br />
If everything went well, you will be returned to the command prompt and see the message:<br />
<br />
##################################<br />
Virtual Environment Setup Complete<br />
##################################<br />
<br />
Access the undercloud by:<br />
<br />
ssh -F /home/boris/.quickstart/ssh.config.ansible undercloud <br />
<br />
<br />
There are scripts in the home directory to continue the deploy:<br />
<br />
undercloud-install.sh will run the undercloud install<br />
undercloud-post-install.sh will perform all pre-deploy steps<br />
overcloud-deploy.sh will deploy the overcloud<br />
overcloud-deploy-post.sh will do any post-deploy configuration<br />
overcloud-validate.sh will run post-deploy validation<br />
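The five scripts above can also be chained non-interactively; a sketch, assuming the same ssh.config.ansible path and that each script lives in the stack user's home on the undercloud:<br />

```shell
# Run each deploy stage in order over ssh, stopping at the first failure
steps="undercloud-install.sh undercloud-post-install.sh overcloud-deploy.sh overcloud-deploy-post.sh overcloud-validate.sh"
for s in $steps; do
  ssh -F "$HOME/.quickstart/ssh.config.ansible" undercloud "./$s" || { echo "$s failed"; break; }
done
```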
<br />
<br />
During overcloud deployment, open a remote SSH session to the server and run top.<br />
You will see that the memory allocation matches your ha.yml.<br />
When done, run su - stack on the server:<br />
<br />
[root@ServerCentOS72 ~]# su - stack<br />
Last login: Fri Jun 3 10:47:22 MSK 2016 from 192.168.1.54 on pts/0<br />
[stack@ServerCentOS72 ~]$ virsh list<br />
Id Name State<br />
----------------------------------------------------<br />
2 undercloud running<br />
7 compute_0 running<br />
8 control_0 running<br />
9 control_1 running<br />
10 control_2 running<br />
<br />
[stack@ServerCentOS72 ~]$ virsh dumpxml undercloud | grep cpu<br />
<vcpu placement='static'>4</vcpu><br />
[stack@ServerCentOS72 ~]$ virsh dumpxml undercloud | grep memory<br />
<memory unit='KiB'>8388608</memory><br />
[stack@ServerCentOS72 ~]$ virsh dumpxml control_0 | grep memory<br />
<memory unit='KiB'>6291456</memory><br />
[stack@ServerCentOS72 ~]$ virsh dumpxml control_0 | grep cpu<br />
<vcpu placement='static'>1</vcpu><br />
<cpu mode='host-passthrough'/><br />
[stack@ServerCentOS72 ~]$ virsh dumpxml compute_0 | grep cpu<br />
<vcpu placement='static'>1</vcpu><br />
<cpu mode='host-passthrough'/><br />
[stack@ServerCentOS72 ~]$ virsh dumpxml compute_0 | grep memory<br />
<memory unit='KiB'>6291456</memory><br />
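The per-domain checks above can be collapsed into one loop; a sketch assuming the same libvirt domain names:<br />

```shell
# Show vCPU and memory settings for every quickstart VM
for vm in undercloud control_0 control_1 control_2 compute_0; do
  echo "== $vm =="
  virsh dumpxml "$vm" 2>/dev/null | grep -E 'vcpu placement|memory unit' \
    || echo "(domain $vm not defined here)"
done
```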
<br />
***************************************<br />
Upon completion of the last script, run:<br />
***************************************<br />
<br />
[stack@undercloud ~]$ . stackrc<br />
<br />
[stack@undercloud ~]$ heat stack-list<br />
<br />
<pre>+--------------------------------------+------------+-----------------+---------------------+--------------+
| id | stack_name | stack_status | creation_time | updated_time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 0c6b8205-be86-4a24-be36-fd4ece956c6d | overcloud | CREATE_COMPLETE | 2016-06-03T08:14:19 | None |
+--------------------------------------+------------+-----------------+---------------------+--------------+
</pre>
<pre>[stack@undercloud ~]$ nova list
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
| 6a38b7be-3743-4339-970b-6121e687741d | overcloud-controller-0 | ACTIVE | - | Running | ctlplane=192.0.2.10 |
| 9222dc1b-5974-495b-8b98-b8176ac742f4 | overcloud-controller-1 | ACTIVE | - | Running | ctlplane=192.0.2.9 |
| 76adbb27-220f-42ef-9691-94729ee28749 | overcloud-controller-2 | ACTIVE | - | Running | ctlplane=192.0.2.11 |
| 8f57f7b6-a2d8-4b7b-b435-1c675e63ea84 | overcloud-novacompute-0 | ACTIVE | - | Running | ctlplane=192.0.2.8 |
+--------------------------------------+-------------------------+--------+------------+-------------+---------------------+
</pre>
<br />
[stack@undercloud ~]$ ssh heat-admin@192.0.2.10<br />
Last login: Fri Jun 3 10:01:44 2016 from gateway<br />
[heat-admin@overcloud-controller-0 ~]$ sudo su -<br />
Last login: Fri Jun 3 10:01:49 UTC 2016 on pts/0<br />
<br />
[root@overcloud-controller-0 ~]# . keystonerc_admin   # a copy of /etc/stack/overcloudrc<br />
<span style="color: #b45f06;">[root@overcloud-controller-0 ~]# pcs status</span><br />
Cluster name: tripleo_cluster<br />
Last updated: Fri Jun 3 10:07:22 2016 Last change: Fri Jun 3 08:50:59 2016 by root via cibadmin on overcloud-controller-0<br />
Stack: corosync<br />
Current DC: overcloud-controller-0 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum<br />
3 nodes and 123 resources configured<br />
<br />
<span style="color: #b45f06;">Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]</span><br />
<br />
Full list of resources:<br />
<br />
ip-192.0.2.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0<br />
Clone Set: haproxy-clone [haproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
ip-192.0.2.7 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1<br />
Master/Slave Set: galera-master [galera]<br />
Masters: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: memcached-clone [memcached]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: rabbitmq-clone [rabbitmq]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-core-clone [openstack-core]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Master/Slave Set: redis-master [redis]<br />
Masters: [ overcloud-controller-1 ]<br />
Slaves: [ overcloud-controller-0 overcloud-controller-2 ]<br />
Clone Set: mongod-clone [mongod]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-evaluator-clone [openstack-aodh-evaluator]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-scheduler-clone [openstack-nova-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-l3-agent-clone [neutron-l3-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-netns-cleanup-clone [neutron-netns-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-ovs-cleanup-clone [neutron-ovs-cleanup]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-2<br />
Clone Set: openstack-heat-engine-clone [openstack-heat-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-api-clone [openstack-ceilometer-api]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-listener-clone [openstack-aodh-listener]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-metadata-agent-clone [neutron-metadata-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-metricd-clone [openstack-gnocchi-metricd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-aodh-notifier-clone [openstack-aodh-notifier]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-clone [openstack-heat-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-collector-clone [openstack-ceilometer-collector]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-api-clone [openstack-glance-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-scheduler-clone [openstack-cinder-scheduler]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-api-clone [openstack-nova-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-consoleauth-clone [openstack-nova-consoleauth]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-api-clone [openstack-sahara-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cloudwatch-clone [openstack-heat-api-cloudwatch]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-sahara-engine-clone [openstack-sahara-engine]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-glance-registry-clone [openstack-glance-registry]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-gnocchi-statsd-clone [openstack-gnocchi-statsd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-notification-clone [openstack-ceilometer-notification]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-cinder-api-clone [openstack-cinder-api]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-dhcp-agent-clone [neutron-dhcp-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-openvswitch-agent-clone [neutron-openvswitch-agent]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-novncproxy-clone [openstack-nova-novncproxy]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: delay-clone [delay]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: neutron-server-clone [neutron-server]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-ceilometer-central-clone [openstack-ceilometer-central]<br />
Stopped: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: httpd-clone [httpd]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-heat-api-cfn-clone [openstack-heat-api-cfn]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
Clone Set: openstack-nova-conductor-clone [openstack-nova-conductor]<br />
Started: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ]<br />
<br />
Failed Actions:<br />
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-1 'not running' (7): call=76, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 3 08:47:22 2016', queued=0ms, exec=0ms<br />
* openstack-ceilometer-central_start_0 on overcloud-controller-1 'not running' (7): call=290, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 3 08:51:18 2016', queued=0ms, exec=2132ms<br />
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-2 'not running' (7): call=76, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 3 08:47:16 2016', queued=0ms, exec=0ms<br />
* openstack-ceilometer-central_start_0 on overcloud-controller-2 'not running' (7): call=292, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 3 08:51:31 2016', queued=0ms, exec=2102ms<br />
* openstack-aodh-evaluator_monitor_60000 on overcloud-controller-0 'not running' (7): call=77, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 3 08:47:19 2016', queued=0ms, exec=0ms<br />
* openstack-ceilometer-central_start_0 on overcloud-controller-0 'not running' (7): call=270, status=complete, exitreason='none',<br />
last-rc-change='Fri Jun 3 08:50:02 2016', queued=0ms, exec=2199ms<br />
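Once the underlying issue behind the failed actions above is addressed, the recorded failures can be cleared so Pacemaker retries the resources. A minimal sketch with standard pcs commands (resource names taken from the status output above):

```shell
# Clear the failure history so Pacemaker re-probes and restarts the clones
pcs resource cleanup openstack-aodh-evaluator
pcs resource cleanup openstack-ceilometer-central

# Re-check whether the clone sets came back up on all three controllers
pcs status | grep -A 1 -E 'aodh-evaluator|ceilometer-central'
```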
<br />
<br />
PCSD Status:<br />
overcloud-controller-0: Online<br />
overcloud-controller-1: Online<br />
overcloud-controller-2: Online<br />
<br />
Daemon Status:<br />
corosync: active/enabled<br />
pacemaker: active/enabled<br />
pcsd: active/enabled<br />
<br />
<br />
<br />
Daemons running on Controller-0 <br />
<span style="color: #b45f06;"><br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFZIxddkrOi8lN4VB0ktOJcOrMeASxU-0H3Vuv0Puu7eWVjDk9mE-DduTsWOkPXh8-aesT0E8SOHLdapplNZBi9huHcEIU76OUp7i8hAq6vxazig1j4YqGAnW_u24rBDrL1CAlbg/s1600/Screenshot+from+2016-06-03+16-30-00.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFZIxddkrOi8lN4VB0ktOJcOrMeASxU-0H3Vuv0Puu7eWVjDk9mE-DduTsWOkPXh8-aesT0E8SOHLdapplNZBi9huHcEIU76OUp7i8hAq6vxazig1j4YqGAnW_u24rBDrL1CAlbg/s640/Screenshot+from+2016-06-03+16-30-00.png" width="640" /></a></span></div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: left;">
Neutron reports on Controller-0</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiK2SpYrsEUMU7-AD5DOBQPFJAEFyISmSNtW3c5kanJmQrEb20gEwGIsoWIreVy5wllvgg4sJAVkuy7BYDyaIW0iASZ6fdTOEzkPHoltwqzHdzSmZDm11agrU0gxUoUiag47EHZlw/s1600/Screenshot+from+2016-06-03+17-41-24.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiK2SpYrsEUMU7-AD5DOBQPFJAEFyISmSNtW3c5kanJmQrEb20gEwGIsoWIreVy5wllvgg4sJAVkuy7BYDyaIW0iASZ6fdTOEzkPHoltwqzHdzSmZDm11agrU0gxUoUiag47EHZlw/s640/Screenshot+from+2016-06-03+17-41-24.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<br />
[root@overcloud-controller-0 ~]# neutron l3-agent-list-hosting-router RouterDSA<br />
+--------------------------------------+------------------------+----------------+-------+----------+<br />
| id | host | admin_state_up | alive | ha_state |<br />
+--------------------------------------+------------------------+----------------+-------+----------+<br />
| 3ffad1c0-da80-4ab0-b165-1f555f1190e4 | overcloud-controller-0 | True | :-) | active |<br />
| ec70ba18-9cc3-4409-a671-33b21f9a586f | overcloud-controller-1 | True | :-) | standby |<br />
| bd409fca-52a5-4bca-bb68-a8bd57632dfa | overcloud-controller-2 | True | :-) | standby |<br />
+--------------------------------------+------------------------+----------------+-------+----------+<br />
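The active/standby placement shown above (keepalived-driven HA router, active on overcloud-controller-0) can be cross-checked in the router's network namespace on each controller; a sketch, where the qrouter UUID is illustrative and would come from `ip netns list`:

```shell
# Find the namespace for RouterDSA on this controller
ip netns list | grep qrouter

# On the active node the qg-/qr- interfaces hold the router's IPs;
# on standby nodes the same interfaces carry no addresses
ip netns exec qrouter-<router-id> ip addr show
```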
<br />
System information<br />
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxhrEMvGucphh_9m_sIGdPkfaZezu_TM4IwpU5ChsahhkMgm9ZReBXsNVsRVp3iyvxCuXFSxAKsRmIPnSlvvTR-v7siA_IX9BVPTo-muuCD_sFliwIF-tcnW752eIdZzXalhv_UA/s1600/Screenshot+from+2016-06-03+16-02-48.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxhrEMvGucphh_9m_sIGdPkfaZezu_TM4IwpU5ChsahhkMgm9ZReBXsNVsRVp3iyvxCuXFSxAKsRmIPnSlvvTR-v7siA_IX9BVPTo-muuCD_sFliwIF-tcnW752eIdZzXalhv_UA/s640/Screenshot+from+2016-06-03+16-02-48.png" width="640" /></a><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8knwY9kJgYk-81bSfZg2AL2Dty2dwS9aQDif265WgTVnLGr68GDCV4r71FWbaI6qquEQVeWUd8aVYACiRxSzIjKpezRt5rnjCIjwiha4gkWtWTRlz7pJ9Zeao5bnCMKsYHBI8dg/s1600/Screenshot+from+2016-06-03+16-03-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8knwY9kJgYk-81bSfZg2AL2Dty2dwS9aQDif265WgTVnLGr68GDCV4r71FWbaI6qquEQVeWUd8aVYACiRxSzIjKpezRt5rnjCIjwiha4gkWtWTRlz7pJ9Zeao5bnCMKsYHBI8dg/s640/Screenshot+from+2016-06-03+16-03-03.png" width="640" /></a></div>
<br />
Instances running<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqcQITCZq4pq7TppORd3vEOZPvD2-erphzrRlrS-3ga6cRuOrHQrXX8MNzPx1NWpzaKg60Q0B-DE-sXhyphenhyphen2E0yiVxlcq21R31G8g8e4_vMXSHObg8BHYcDPlmT8hp78DCCpgNEE5A/s1600/Screenshot+from+2016-06-03+15-53-03.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjqcQITCZq4pq7TppORd3vEOZPvD2-erphzrRlrS-3ga6cRuOrHQrXX8MNzPx1NWpzaKg60Q0B-DE-sXhyphenhyphen2E0yiVxlcq21R31G8g8e4_vMXSHObg8BHYcDPlmT8hp78DCCpgNEE5A/s640/Screenshot+from+2016-06-03+15-53-03.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCuapuOUmZvP70vleAi8rG8xaQt2eQixna9wL7zxbciO3I3GpkqsfhnJkTTSyS4K4YaWX9OvL2T1S0TSwLQJH-x7AKkZcO_jIPbALKC1te98IrH79DbBXI_utTE7Is6t7-fn0naQ/s1600/Screenshot+from+2016-06-03+16-03-17.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCuapuOUmZvP70vleAi8rG8xaQt2eQixna9wL7zxbciO3I3GpkqsfhnJkTTSyS4K4YaWX9OvL2T1S0TSwLQJH-x7AKkZcO_jIPbALKC1te98IrH79DbBXI_utTE7Is6t7-fn0naQ/s640/Screenshot+from+2016-06-03+16-03-17.png" width="640" /></a></div>
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSJ2kyuIjprserN7ftItQEncm3EL7W6SQ4PP3S3Rs10fpSA0IMiyHrGTlQ3NBpdCCX0u8ehbQ3_9KJ9Qk9MacgtxztwSc4PSVLSjDgoYFQ0h0Jg-mTNZPr4Icy0iNIto2M3kfs-g/s1600/Screenshot+from+2016-06-03+16-03-38.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiSJ2kyuIjprserN7ftItQEncm3EL7W6SQ4PP3S3Rs10fpSA0IMiyHrGTlQ3NBpdCCX0u8ehbQ3_9KJ9Qk9MacgtxztwSc4PSVLSjDgoYFQ0h0Jg-mTNZPr4Icy0iNIto2M3kfs-g/s640/Screenshot+from+2016-06-03+16-03-38.png" width="640" /></a></div>
<br />
Snapshots from the undercloud<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB_9jIPUhV_CgKSIEU1Sm_o9MjdZQLa0NUP8hDPOVXipMziI1qQnia4e4I3IsQtR2uhTvFLnpZq06bl0CxyhlUBbog1oKB5I1JBS3f_E41ToVdU_TLViThWRWRlUpyfSPKsAB3Nw/s1600/Screenshot+from+2016-06-03+15-56-12.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiB_9jIPUhV_CgKSIEU1Sm_o9MjdZQLa0NUP8hDPOVXipMziI1qQnia4e4I3IsQtR2uhTvFLnpZq06bl0CxyhlUBbog1oKB5I1JBS3f_E41ToVdU_TLViThWRWRlUpyfSPKsAB3Nw/s640/Screenshot+from+2016-06-03+15-56-12.png" width="640" /></a></div>
<br />
Connection to VMs running in overcloud from undercloud (VM)<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGwNNQ5LRCpUDsp28twMY2QbO7s97rYelhQVG-i9aacoNL_fX35a-nF9AJ8sMowYAuiK_LTM5oZiohLsf9zD7eUJ9-FLHc1Q9KMZl2IN_bCwcXCgxn-e5W-DWnZeLm5d_WxvshFA/s1600/Screenshot+from+2016-06-03+16-15-22.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGwNNQ5LRCpUDsp28twMY2QbO7s97rYelhQVG-i9aacoNL_fX35a-nF9AJ8sMowYAuiK_LTM5oZiohLsf9zD7eUJ9-FLHc1Q9KMZl2IN_bCwcXCgxn-e5W-DWnZeLm5d_WxvshFA/s640/Screenshot+from+2016-06-03+16-15-22.png" width="640" /></a></div>
<div class="composeBoxWrapper OJTUNIC-M-e">
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: left;">
Neutron port list on undercloud</div>
<div class="separator" style="clear: both; text-align: left;">
</div>
<div class="separator" style="clear: both; text-align: left;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDjV8csdwDoM37_nDEY9aOztJ6oVKU4csuMIRXeIDHuOr8cSgYkmHScbJbVUG9u3MleZDowW5s-_5eWfhss52Aujg010QnwD21nJmJUKm3gRi8aqD9IFCGtMnc3BbHF0h2-uMl5Q/s1600/Screenshot+from+2016-06-04+00-01-08.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDjV8csdwDoM37_nDEY9aOztJ6oVKU4csuMIRXeIDHuOr8cSgYkmHScbJbVUG9u3MleZDowW5s-_5eWfhss52Aujg010QnwD21nJmJUKm3gRi8aqD9IFCGtMnc3BbHF0h2-uMl5Q/s640/Screenshot+from+2016-06-04+00-01-08.png" width="640" /></a></div>
<br />
<div style="text-align: left;">
<br />
Regarding details of the overcloud server's configuration:<br />
<br />
[root@overcloud-controller-0 network-scripts]# cat ifcfg-br-ex<br />
# This file is autogenerated by os-net-config<br />
DEVICE=br-ex<br />
ONBOOT=yes<br />
HOTPLUG=no<br />
NM_CONTROLLED=no<br />
DEVICETYPE=ovs<br />
TYPE=OVSBridge<br />
OVSBOOTPROTO=dhcp<br />
OVSDHCPINTERFACES="eth0"<br />
OVS_EXTRA="set bridge br-ex other-config:hwaddr=<span style="color: #b45f06;">00:83:94:4b:f4:bf</span>"<br />
<br />
[root@overcloud-controller-0 network-scripts]# ifconfig<br />
br-ex: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500<br />
inet 192.0.2.10 netmask 255.255.255.0 broadcast 192.0.2.255<br />
inet6 fe80::283:94ff:fe4b:f4bf prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether 00:83:94:4b:f4:bf txqueuelen 0 (Ethernet)<br />
RX packets 1524142 bytes 482079467 (459.7 MiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 1479958 bytes 289821172 (276.3 MiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500<br />
inet6 fe80::283:94ff:fe4b:f4bf prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether <span style="color: #b45f06;">00:83:94:4b:f4:bf</span> txqueuelen 1000 (Ethernet)<br />
RX packets 1524492 bytes 482222219 (459.8 MiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 1480362 bytes 289890148 (276.4 MiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
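Since br-ex clones eth0's MAC address via the OVS_EXTRA line written by os-net-config (both interfaces show 00:83:94:4b:f4:bf above), the setting can be confirmed directly against ovsdb; a quick check:

```shell
# eth0 should be enslaved as a port of the external bridge
ovs-vsctl list-ports br-ex

# Confirm the hwaddr override that os-net-config wrote into ovsdb
ovs-vsctl get bridge br-ex other-config:hwaddr
```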
<br />
<br />
<br />
Watch <a href="https://www.youtube.com/watch?v=8zFQG5mKwPk&feature=autoshare">https://www.youtube.com/watch?v=8zFQG5mKwPk&feature=autoshare</a>
</div>
</div>
<br />
<br /></div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.comtag:blogger.com,1999:blog-17067101.post-18654247184031031272016-06-01T04:51:00.002-07:002016-06-02T10:07:37.940-07:00RDO Mitaka AIO Setup with external bridge in DHCP mode per Lars Kellogg Stedman<div dir="ltr" style="text-align: left;" trbidi="on">
I have recently been watching<br />
<a href="https://www.youtube.com/watch?v=8zFQG5mKwPk&feature=autoshare" target="_blank"> https://www.youtube.com/watch?v=8zFQG5mKwPk&feature=autoshare </a><br />
Although an external bridge br-ex normally does not run in DHCP mode, on a system where the management and external interfaces are backed by different<br />
NICs (say eth0 and eth1) it makes sense to switch br-ex to DHCP mode, separating the DHCP pool of the server serving the physical external network<br />
from the allocation pool of floating IPs, which belongs to the virtual external network.<br />
Lars Kellogg-Stedman's video was made for RDO IceHouse, and I wanted to<br />
make sure that the explicit update to ovsdb via the br-ex syntax works as expected on RDO Mitaka, and to see for myself that it works exactly as proposed a while ago.<br />
<br />
<br />
Create a DHCP pool on the external (libvirt NAT) network like this:<br />
<br />
<br />
[root@fedora23wks ~]# virsh net-dumpxml external3<br />
<network connections='1'><br />
<name>external3</name><br />
<uuid>d0e9964b-e91a-40c0-b769-a609aee41bf2</uuid><br />
<forward mode='nat'><br />
<nat><br />
<port start='1024' end='65535'/><br />
</nat><br />
</forward><br />
<bridge name='virbr7' stp='on' delay='0'/><br />
<mac address='52:54:00:60:f8:6d'/><br />
<ip address='192.179.143.1' netmask='255.255.255.0'><br />
<dhcp><br />
<span style="color: #b45f06;"> <range start='192.179.143.2' end='192.179.143.100'/></span><br />
</dhcp><br />
</ip><br />
</network><br />
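A libvirt network like external3 can be created from its XML definition with the standard virsh workflow; a minimal sketch, assuming the XML above is saved as external3.xml (the file name is illustrative):

```shell
# Define the NAT network, start it, and bring it up on host boot
virsh net-define external3.xml
virsh net-start external3
virsh net-autostart external3

# Confirm the DHCP range is active on virbr7
virsh net-dumpxml external3 | grep range
```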
<br />
The system has two VNICs: eth0 for management (static IP) and eth1 for the external network.<br />
<br />
************ <br />
Then :-<br />
************ <br />
<pre class="highlight plaintext"><code># yum install -y centos-release-openstack-mitaka
# yum update -y
# sudo yum install -y openstack-packstack
# packstack --allinone</code></pre>
<br />
In this setup packstack binds the AIO instance to a static IP on 192.169.142.0/24, while interface eth1 obtains its IP via DHCP<br />
from the NAT libvirt network external3 defined above, in the range<br />
( <span style="color: #b45f06;">192.179.143.2 , 192.179.143.100</span> ) <br />
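Keeping the libvirt DHCP pool (192.179.143.2-100, handed out by dnsmasq on virbr7) separate from the floating IP allocation pool prevents the two ranges from colliding. A sketch of the corresponding Mitaka-era neutron commands that would produce the split shown later in the subnet list (network and subnet names are illustrative):

```shell
# External flat network mapped onto br-ex
neutron net-create public --router:external

# Floating IP pool deliberately placed above the libvirt DHCP range,
# with the libvirt gateway and neutron's own DHCP disabled
neutron subnet-create public 192.179.143.0/24 --name sub_public \
  --disable-dhcp --gateway 192.179.143.1 \
  --allocation-pool start=192.179.143.150,end=192.179.143.254
```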
<br />
*************************************************************************************<br />
The ifcfg-br-ex and ifcfg-eth1 configurations follow <a href="https://www.youtube.com/watch?v=8zFQG5mKwPk&feature=autoshare" target="_blank">https://www.youtube.com/watch?v=8zFQG5mKwPk&feature=autoshare </a> <br />
*************************************************************************************<br />
<br />
[root@CentOS72DHV network-scripts(keystone_admin)]# cat ifcfg-br-ex<br />
<span style="color: #b45f06;">DEVICE=br-ex</span><br />
<span style="color: #b45f06;">DEVICETYPE=ovs</span><br />
<span style="color: #b45f06;">TYPE=OVSBridge</span><br />
<span style="color: #b45f06;">ONBOOT=yes</span><br />
<span style="color: #b45f06;"># MAC address of eth1</span><br />
<span style="color: #b45f06;">MACADDR=52:54:00:d6:d8:a0</span><br />
<span style="color: #b45f06;">OVS_EXTRA="set bridge $DEVICE other-config:hwaddr=$MACADDR"</span><br />
<span style="color: #b45f06;">OVSBOOTPROTO=dhcp</span><br />
<span style="color: #b45f06;">OVSDHCPINTERFACES=eth1</span><br />
<span style="color: #b45f06;"><br /></span>
<span style="color: #b45f06;">[root@CentOS72DHV network-scripts(keystone_admin)]# cat ifcfg-eth1</span><br />
<span style="color: #b45f06;">DEVICE="eth1"</span><br />
<span style="color: #b45f06;">BOOTPROTO="none"</span><br />
<span style="color: #b45f06;">ONBOOT="yes"</span><br />
<span style="color: #b45f06;">DEVICETYPE=ovs</span><br />
<span style="color: #b45f06;">TYPE=OVSPort</span><br />
<span style="color: #b45f06;">OVS_BRIDGE=br-ex</span><br />
<br />
<br />
*************************** <br />
Then run the following script<br />
***************************<br />
<span style="color: #b45f06;">#!/bin/bash -x <br />
chkconfig network on<br />
systemctl stop NetworkManager<br />
systemctl disable NetworkManager <br />
service network restart</span><br />
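After switching from NetworkManager to the legacy network service, it is worth confirming that br-ex actually picked up a DHCP lease from external3; a quick check:

```shell
# The legacy service should be active and br-ex should hold
# an address from the 192.179.143.0/24 DHCP range
systemctl is-active network
ip addr show br-ex

# eth1 should be attached to br-ex as an OVS port
ovs-vsctl list-ports br-ex
```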
<br />
<br />
****************************************<br />
Network and OVS Configuration<br />
**************************************** <br />
<br />
[root@CentOS72DHV network-scripts(keystone_admin)]# ovs-vsctl show<br />
7e37d142-9b04-4d1d-a94f-c1571bf3e72d<br />
<span style="color: #b45f06;"> Bridge br-ex</span><br />
<span style="color: #b45f06;"> Port "qg-3c158a8b-f2"</span><br />
<span style="color: #b45f06;"> Interface "qg-3c158a8b-f2"</span><br />
<span style="color: #b45f06;"> type: internal</span><br />
<span style="color: #b45f06;"> Port "eth1"</span><br />
<span style="color: #b45f06;"> Interface "eth1"</span><br />
<span style="color: #b45f06;"> Port br-ex</span><br />
<span style="color: #b45f06;"> Interface br-ex</span><br />
<span style="color: #b45f06;"> type: internal</span><br />
Bridge br-tun<br />
fail_mode: secure<br />
Port br-tun<br />
Interface br-tun<br />
type: internal<br />
Port patch-int<br />
Interface patch-int<br />
type: patch<br />
options: {peer=patch-tun}<br />
Bridge br-int<br />
fail_mode: secure<br />
Port patch-tun<br />
Interface patch-tun<br />
type: patch<br />
options: {peer=patch-int}<br />
Port "qvodb9910dc-eb"<br />
tag: 2<br />
Interface "qvodb9910dc-eb"<br />
Port "tap19245275-18"<br />
tag: 1<br />
Interface "tap19245275-18"<br />
type: internal<br />
Port br-int<br />
Interface br-int<br />
type: internal<br />
Port "tapec314038-5e"<br />
tag: 2<br />
Interface "tapec314038-5e"<br />
type: internal<br />
Port "qr-c5e01f38-65"<br />
tag: 2<br />
Interface "qr-c5e01f38-65"<br />
type: internal<br />
ovs_version: "2.4.0"<br />
<br />
[root@CentOS72DHV network-scripts(keystone_admin)]# ifconfig<br />
<span style="color: #b45f06;">br-ex: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 &lt;=== external bridge</span><br />
<span style="color: #b45f06;"> inet 192.179.143.7 netmask 255.255.255.0 broadcast 192.179.143.255</span><br />
inet6 fe80::5054:ff:fed6:d8a0 prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether 52:54:00:d6:d8:a0 txqueuelen 0 (Ethernet)<br />
RX packets 317 bytes 27040 (26.4 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 304 bytes 25442 (24.8 KiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
<span style="color: #b45f06;">eth0: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500 &lt;=== management interface</span><br />
<span style="color: #b45f06;"> inet 192.169.142.50 netmask 255.255.255.0 broadcast 192.169.142.255</span><br />
inet6 fe80::5054:ff:fe22:d9a2 prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether 52:54:00:22:d9:a2 txqueuelen 1000 (Ethernet)<br />
RX packets 3136 bytes 1034328 (1010.0 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 2369 bytes 6386578 (6.0 MiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
eth1: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1500<br />
inet6 fe80::5054:ff:fed6:d8a0 prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether 52:54:00:d6:d8:a0 txqueuelen 1000 (Ethernet)<br />
RX packets 1083 bytes 126189 (123.2 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 494 bytes 96540 (94.2 KiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
lo: flags=73&lt;UP,LOOPBACK,RUNNING&gt; mtu 65536<br />
inet 127.0.0.1 netmask 255.0.0.0<br />
inet6 ::1 prefixlen 128 scopeid 0x10&lt;host&gt;<br />
loop txqueuelen 0 (Local Loopback)<br />
RX packets 310689 bytes 67699696 (64.5 MiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 310689 bytes 67699696 (64.5 MiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
qbrdb9910dc-eb: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450<br />
ether 02:b8:f4:eb:86:ca txqueuelen 0 (Ethernet)<br />
RX packets 15 bytes 1444 (1.4 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 0 bytes 0 (0.0 B)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
qvbdb9910dc-eb: flags=4419&lt;UP,BROADCAST,RUNNING,PROMISC,MULTICAST&gt; mtu 1450<br />
inet6 fe80::b8:f4ff:feeb:86ca prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether 02:b8:f4:eb:86:ca txqueuelen 1000 (Ethernet)<br />
RX packets 271 bytes 78660 (76.8 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 262 bytes 79210 (77.3 KiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
qvodb9910dc-eb: flags=4419&lt;UP,BROADCAST,RUNNING,PROMISC,MULTICAST&gt; mtu 1450<br />
inet6 fe80::502b:aaff:fea3:bd34 prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether 52:2b:aa:a3:bd:34 txqueuelen 1000 (Ethernet)<br />
RX packets 262 bytes 79210 (77.3 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 271 bytes 78660 (76.8 KiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
tapdb9910dc-eb: flags=4163&lt;UP,BROADCAST,RUNNING,MULTICAST&gt; mtu 1450<br />
inet6 fe80::fc16:3eff:fef4:568c prefixlen 64 scopeid 0x20&lt;link&gt;<br />
ether fe:16:3e:f4:56:8c txqueuelen 500 (Ethernet)<br />
RX packets 254 bytes 78562 (76.7 KiB)<br />
RX errors 0 dropped 0 overruns 0 frame 0<br />
TX packets 272 bytes 78738 (76.8 KiB)<br />
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br />
<br />
*******************************************<br />
Neutron reports<br />
*******************************************<br />
<br />
[root@CentOS72DHV ~(keystone_admin)]# neutron net-list<br />
+--------------------------------------+--------------+--------------------------------------------+<br />
| id | name | subnets |<br />
+--------------------------------------+--------------+--------------------------------------------+<br />
| 2855a852-4c0a-49a4-8ba0-f4663d78d680 | private | 72411e45-85f1-4d71-8924-fe2e2ad7aca9 |<br />
| | | 10.0.0.0/24 |<br />
| <span style="color: #b45f06;">b388c993-ab9f-4c36-a9c4-98b9008bd5c7 | public | 6a144f83-e878-4bb3-92a6-dfce114b5d87 |</span><br />
<span style="color: #b45f06;">| | | 192.179.143.0/24 </span> |<br />
| 985d0b1a-fab9-40d6-a53c-8ea9d6e1970b | demo_network | de8523c9-1a0c-4970-b1e7-4df8a335ad34 |<br />
| | | 50.0.0.0/24 |<br />
+--------------------------------------+--------------+--------------------------------------------+<br />
<br />
<br />
[root@CentOS72DHV ~(keystone_admin)]# neutron subnet-list<br />
+--------------------------------+------------------+------------------+--------------------------------+<br />
| id | name | cidr | allocation_pools |<br />
+--------------------------------+------------------+------------------+--------------------------------+<br />
| <span style="color: #b45f06;">6a144f83-e878-4bb3-92a6-dfce11 | sub_public | 192.179.143.0/24 | {"start": "192.179.143.150"</span>, |<br />
| <span style="color: #b45f06;">4b5d87 | | | "end": "192.179.143.254"} </span> |<br />
| 72411e45-85f1-4d71-8924-fe2e2a | private_subnet | 10.0.0.0/24 | {"start": "10.0.0.2", "end": |<br />
| d7aca9 | | | "10.0.0.254"} |<br />
| de8523c9-1a0c-4970-b1e7-4df8a3 | sub_demo_network | 50.0.0.0/24 | {"start": "50.0.0.10", "end": |<br />
| 35ad34 | | | "50.0.0.254"} |<br />
+--------------------------------+------------------+------------------+--------------------------------+<br />
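With sub_public in place, floating IPs are drawn from the 192.179.143.150-254 pool shown above. A brief sketch of allocating one and attaching it to an instance (the floating IP and port IDs are illustrative and would come from the commands' own output):

```shell
# Allocate a floating IP from the public pool
neutron floatingip-create public

# Associate it with a VM's neutron port (port ID from `neutron port-list`)
neutron floatingip-associate <floatingip-id> <port-id>
```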
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb9DChh44nDFqpbfGA7MZi7oI9ExEPGMyiEewNYOu9c3VFTdfp2lRjw7nFrlS6QNm1h7YmS-uYYBNJLuBHuf7hKmIZCMen-r2IU-Y_B_dnuonlj1aCiz_NyD6pVWcEWauD_ZJkKA/s1600/Screenshot+from+2016-06-01+21-24-53.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="512" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgb9DChh44nDFqpbfGA7MZi7oI9ExEPGMyiEewNYOu9c3VFTdfp2lRjw7nFrlS6QNm1h7YmS-uYYBNJLuBHuf7hKmIZCMen-r2IU-Y_B_dnuonlj1aCiz_NyD6pVWcEWauD_ZJkKA/s640/Screenshot+from+2016-06-01+21-24-53.png" width="640" /></a></div>
</div>
Boris Derzhavetshttp://www.blogger.com/profile/08011293873468656575noreply@blogger.com