Saturday, March 19, 2016

Attempt to set up RDO Mitaka at any given time (Delorean trunks)

Quoting the official Delorean documentation:

"The RDO project has a continuous integration pipeline that consists of multiple jobs that deploy and test OpenStack as accomplished by different installers. This vast test coverage attempts to ensure that there are no known issues either in packaging, in code or in the installers themselves.Once a Delorean consistent repository has undergone these tests successfully, it will be promoted to current-passed-ci. Current-passed-ci represents the latest and greatest
version of RDO trunk packages that were tested together successfully"

Set up the current-passed-ci repositories on all deployment nodes (Controller, Storage, Compute). This might not really be needed (if packstack copies the repositories from the Controller to the other nodes at run time), but it won't hurt anyway.

# yum -y install yum-plugin-priorities
# cd /etc/yum.repos.d
# curl -O
# curl -O
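The two truncated `curl -O` lines above fetch the Delorean repo files. A minimal sketch of the full sequence is below; the URLs are assumptions based on the RDO trunk layout of that period and change over time, so verify them against the current-passed-ci link before use.

```shell
# Priorities plugin lets the Delorean repos take precedence over base CentOS
yum -y install yum-plugin-priorities
cd /etc/yum.repos.d
# Assumed URLs -- check the current-passed-ci symlink of the day
curl -O https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
curl -O https://trunk.rdoproject.org/centos7/delorean-deps.repo
```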

On Controller

# yum -y install openstack-packstack

[root@SeverMitaka01 ~]# rpm -qa \*openstack-packstack\*

The answer file for testing the 3 node deployment mentioned above is here
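Since the answer file itself is only linked, here is a hedged sketch of how such a file is typically generated and edited for three nodes. The IP addresses are placeholders, and CONFIG_UNSUPPORTED=y is the setting that allows placing storage services on a host other than the Controller.

```shell
# Generate a default answer file, then adjust the host settings
packstack --gen-answer-file=answer3node.txt

# Hypothetical three node layout (replace the IPs with your own):
#   CONFIG_CONTROLLER_HOST=192.168.1.127
#   CONFIG_COMPUTE_HOSTS=192.168.1.137
#   CONFIG_STORAGE_HOST=192.168.1.147
#   CONFIG_UNSUPPORTED=y
sed -i 's/^CONFIG_UNSUPPORTED=.*/CONFIG_UNSUPPORTED=y/' answer3node.txt

# Run the deployment in a single pass
packstack --answer-file=answer3node.txt
```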

The two deployments done below intentionally test the ability to add a Storage Node
and to hack the keystone endpoint table to switch over to the new Swift server.

The first test is the simplest: a two node cluster (Controller, Compute) install, which completed successfully.

   The final configuration was obtained after adding the Storage Node using
   . . . . . . .
   . . . . . . .
   . . . . . . .
   and updating the keystone endpoint for swift-proxy directly inside the
   keystone database, pointing it to the IP of the added server instead of
   the Controller that had been used for the simplest two node cluster test.
   Running packstack with the answer file posted at the link "here" is
   supposed to create the three node deployment in a single run and to
   create correct endpoints for all storage services (glance, cinder, swift)
   pointing to
   So, CONFIG_UNSUPPORTED=y seems to work for the upcoming RDO Mitaka.

The second test runs packstack again to add a Storage Node, using EXCLUDE_SERVERS=Controller-IP,Compute-IP
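A sketch of that second run, assuming the same answer file as before; the IPs are placeholders for the already deployed Controller and Compute nodes, which EXCLUDE_SERVERS tells packstack to skip.

```shell
# Skip the nodes that are already deployed (placeholder IPs)
sed -i 's/^EXCLUDE_SERVERS=.*/EXCLUDE_SERVERS=192.168.1.127,192.168.1.137/' \
    answer3node.txt
# Point the storage role at the new node and re-run
sed -i 's/^CONFIG_STORAGE_HOST=.*/CONFIG_STORAGE_HOST=192.168.1.147/' \
    answer3node.txt
packstack --answer-file=answer3node.txt
```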


   Now hack the endpoint tables of the keystone database to set the new IP
   for the glance and swift records, followed by `service httpd restart` on
   the Controller.
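The endpoint hack can be sketched in SQL against the keystone MySQL database. The IPs are placeholders, and the join through the service table's `type` column matches the Mitaka-era schema, so verify the table and column names on your build before running anything like this.

```sql
USE keystone;

-- Inspect the current object-store endpoints
SELECT e.interface, e.url
  FROM endpoint e
  JOIN service s ON e.service_id = s.id
 WHERE s.type = 'object-store';

-- Repoint swift at the new Storage Node (placeholder IPs)
UPDATE endpoint e
  JOIN service s ON e.service_id = s.id
   SET e.url = REPLACE(e.url, '192.168.1.127', '192.168.1.147')
 WHERE s.type = 'object-store';
```

The same REPLACE pattern, with `s.type = 'image'`, would cover the glance records mentioned above.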


   All is set to use the newly added Swift Node, with three 10 GB XFS drives
   involved in swift's replication.

   Controller Node

   Storage Node. Swift is configured with three replica drives and serves
   as the glance back end.

   Compute Node