Thursday, August 14, 2014

Setup Gluster 3.5.2 on Two Node (Controller+Compute) Neutron ML2&VXLAN&OVS CentOS 7 Cluster

    This post is an update for the previous one - RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&VXLAN Cluster
on CentOS 7 (http://bderzhavets.blogspot.com/2014/07/rdo-setup-two-real-node_29.html). It focuses on the Gluster 3.5.2 implementation, including tuning of the /etc/sysconfig/iptables files on the CentOS 7 Controller and Compute nodes.
    It covers copying the ssh key from the master node to the compute node, step-by-step verification of gluster volume "replica 2" functionality, and switching the RDO IceHouse cinder services to the newly created gluster volume, which stores the instances' bootable cinder volumes for better performance. Of course, creating gluster bricks under "/" is not recommended; each node should have a separate mount point with an "xfs" filesystem to store the gluster bricks, as sketched below.
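For illustration only, a dedicated xfs brick mount could be prepared on each node like this (the device /dev/sdc1 is an assumption, not part of this particular setup; adjust it to your disk layout):

# mkfs.xfs -i size=512 /dev/sdc1
# mkdir -p /GLSD/Volumes
# echo "/dev/sdc1 /GLSD/Volumes xfs defaults 0 0" >> /etc/fstab
# mount -a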


- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and VXLAN tunneling)
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

icehouse1.localdomain   -  Controller (192.168.1.127)
icehouse2.localdomain   -  Compute   (192.168.1.137)
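
Gluster peers and the cinder share are addressed by hostname below, so each node must resolve the other's name. If local DNS does not already handle this, entries like these in /etc/hosts on both nodes (matching the addresses above) will do:

192.168.1.127   icehouse1.localdomain icehouse1
192.168.1.137   icehouse2.localdomain icehouse2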

Download from http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/EPEL.repo/epel-7/SRPMS/
glusterfs-3.5.2-1.el7.src.rpm

$ rpm -iv glusterfs-3.5.2-1.el7.src.rpm

$ sudo yum install bison flex gcc automake libtool ncurses-devel readline-devel libxml2-devel openssl-devel libaio-devel lvm2-devel glib2-devel libattr-devel libibverbs-devel librdmacm-devel fuse-devel

$ cd ~/rpmbuild/SPECS
$ rpmbuild -bb glusterfs.spec
. . . . . . . . . . . . . . . . . . . . . . .

Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-libs-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-cli-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-rdma-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-geo-replication-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-fuse-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-server-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-api-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-extra-xlators-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/noarch/glusterfs-resource-agents-3.5.2-1.el7.centos.noarch.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-devel-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-api-devel-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-regression-tests-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-debuginfo-3.5.2-1.el7.centos.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.Sigc7l
+ umask 022
+ cd /home/boris/rpmbuild/BUILD
+ cd glusterfs-3.5.2
+ rm -rf /home/boris/rpmbuild/BUILDROOT/glusterfs-3.5.2-1.el7.centos.x86_64
+ exit 0

[boris@icehouse1 x86_64]$ cat install
sudo yum install glusterfs-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-api-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-api-devel-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-cli-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-devel-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-extra-xlators-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-fuse-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-geo-replication-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-libs-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-rdma-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-server-3.5.2-1.el7.centos.x86_64.rpm

$ sudo service glusterd start
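
To have glusterd come up after reboot as well, enable it (the el7 glusterfs-server package ships a glusterd service unit):

$ sudo systemctl enable glusterd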

1. The first step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (the firewalld service should be disabled, as shown below) :-
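
To actually use /etc/sysconfig/iptables on CentOS 7, the classic iptables service has to replace firewalld (standard commands, run on both nodes):

# systemctl stop firewalld
# systemctl disable firewalld
# yum -y install iptables-services
# systemctl enable iptables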

Update /etc/sysconfig/iptables on both nodes:-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out the lines below, ignoring the file's notice that manual customization is not recommended:

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited

Restart the iptables service on both nodes :-

# service iptables restart

2. Second step is setting up password-less ssh and bringing gluster up on both nodes :-


On icehouse1, run the following commands :

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@icehouse2
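
A quick way to verify that password-less access works:

# ssh root@icehouse2 hostname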

On both nodes run :-

# ./install
# service glusterd start

On icehouse1

# gluster peer probe icehouse2.localdomain

It should return "peer probe: success".

[root@icehouse1 ~(keystone_admin)]# gluster peer status
Number of Peers: 1

Hostname: icehouse2.localdomain
Uuid: 3ca6490b-c44a-4601-ac13-51fec99e9caf
State: Peer in Cluster (Connected)

(The volume info and status below were captured after cinder-volumes09 had been created and started, as described in the "Switching Cinder to Gluster volume" section further down.)

[root@icehouse1 ~(keystone_admin)]# gluster volume info

Volume Name: cinder-volumes09
Type: Replicate
Volume ID: 83b645a0-532e-46df-93e2-ed1f95f081cd
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: icehouse1.localdomain:/GLSD/Volumes
Brick2: icehouse2.localdomain:/GLSD/Volumes
Options Reconfigured:
auth.allow: 192.168.1.*

[root@icehouse1 ~(keystone_admin)]# gluster volume status
Status of volume: cinder-volumes09
Gluster process                               Port    Online  Pid
------------------------------------------------------------------------------
Brick icehouse1.localdomain:/GLSD/Volumes     49152   Y       5453
Brick icehouse2.localdomain:/GLSD/Volumes     49152   Y       3009
NFS Server on localhost                       2049    Y       5458
Self-heal Daemon on localhost                 N/A     Y       5462
NFS Server on icehouse2.localdomain           2049    Y       3965
Self-heal Daemon on icehouse2.localdomain     N/A     Y       3964

Task Status of Volume cinder-volumes09
------------------------------------------------------------------------------
There are no active volume tasks


[root@icehouse1 ~(keystone_admin)]# ssh 192.168.1.137
Last login: Thu Aug 14 17:53:41 2014
[root@icehouse2 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.1.127
Uuid: 051e7528-8c2b-46e1-abb6-6d84b2f2e45b
State: Peer in Cluster (Connected)


*************************************************************************
On Controller (192.168.1.127) and on Compute (192.168.1.137)
*************************************************************************

Verify port availability; the listeners below correspond to the iptables rules opened in step 1 :-

[root@icehouse1 ~(keystone_admin)]# netstat -lntp | grep gluster
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      5453/glusterfsd
tcp        0      0 0.0.0.0:2049            0.0.0.0:*               LISTEN      5458/glusterfs
tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      5458/glusterfs
tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      5458/glusterfs
tcp        0      0 0.0.0.0:38468           0.0.0.0:*               LISTEN      5458/glusterfs
tcp        0      0 0.0.0.0:38469           0.0.0.0:*               LISTEN      5458/glusterfs
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      2667/glusterd
tcp        0      0 0.0.0.0:978             0.0.0.0:*               LISTEN      5458/glusterfs

************************************
Switching Cinder to Gluster volume
************************************
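
The brick directory has to exist on both nodes before the volume is created; in this setup the bricks sit under "/", which is why the create command below needs the "force" flag. On both icehouse1 and icehouse2:

# mkdir -p /GLSD/Volumes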

# gluster volume create cinder-volumes09  replica 2 icehouse1.localdomain:/GLSD/Volumes   icehouse2.localdomain:/GLSD/Volumes  force

# gluster volume start cinder-volumes09

# gluster volume set cinder-volumes09  auth.allow 192.168.1.*


# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf

# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
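
The three openstack-config commands above simply write these keys into the [DEFAULT] section of /etc/cinder/cinder.conf:

[DEFAULT]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes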

# vi /etc/cinder/shares.conf
    192.168.1.127:/cinder-volumes09

:wq

The following configuration changes are necessary for the 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:

1. Allow connections from non-privileged client ports on the volume:

    gluster volume set cinder-volumes09 server.allow-insecure on

2. Restart the volume for the setting to take effect:

    gluster volume stop cinder-volumes09
    gluster volume start cinder-volumes09

3. Edit /etc/glusterfs/glusterd.vol to have the line:

     option rpc-auth-allow-insecure on

4. Restart glusterd:

     service glusterd restart
   

nova.conf on the Compute Node should have the entry :-

qemu_allowed_storage_drivers = gluster
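
It can be set with the same openstack-config tool and is picked up after a nova-compute restart:

# openstack-config --set /etc/nova/nova.conf DEFAULT qemu_allowed_storage_drivers gluster
# service openstack-nova-compute restart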



Make sure all previously created (thin LVM) cinder volumes have been deleted; check with `cinder list` and, if any remain, delete them before restarting the cinder services.

[root@icehouse1 ~(keystone_admin)]$ for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done

Restarting the cinder services mounts the gluster share, adding a row to the `df -h` output :

[root@icehouse1 ~(keystone_admin)]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/centos01-root        147G   15G  132G  10% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G   13M  3.9G   1% /dev/shm
tmpfs                            3.9G   18M  3.9G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sdb6                        477M  191M  257M  43% /boot
192.168.1.127:/cinder-volumes09  147G   18G  130G  12% /var/lib/cinder/volumes/5c5ae2460f1962d6f046ca5859584996
tmpfs                            3.9G   18M  3.9G   1% /run/netns



The same check taken a bit later, after the two 5 GB volumes listed below had been created:

[root@icehouse1 ~(keystone_admin)]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/centos01-root        147G   17G  131G  12% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G   19M  3.9G   1% /dev/shm
tmpfs                            3.9G   42M  3.8G   2% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/sdb6                        477M  191M  257M  43% /boot
192.168.1.127:/cinder-volumes09  147G   18G  129G  13% /var/lib/cinder/volumes/5c5ae2460f1962d6f046ca5859584996
tmpfs                            3.9G   42M  3.8G   2% /run/netns

[root@icehouse1 ~(keystone_admin)]# ls -l /var/lib/cinder/volumes/5c5ae2460f1962d6f046ca5859584996
total 5739092
-rw-rw-rw-. 1 root root 5368709120 Aug 14 21:58 volume-2f20aefb-b1ab-4b3f-bb23-10a1cbe9b946
-rw-rw-rw-. 1 root root 5368709120 Aug 14 22:06 volume-d8b0d31c-6f3a-44a1-86a4-bc4575697c29

[root@icehouse1 ~(keystone_admin)]# cinder list --all-tenants
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |  Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
| 2f20aefb-b1ab-4b3f-bb23-10a1cbe9b946 | in-use | UbuntuLVG0814 |  5   |     None    |   true   | ead0fe1b-923a-4a12-978c-ad33b9ea245c |
| d8b0d31c-6f3a-44a1-86a4-bc4575697c29 | in-use |  VF20VLG0814  |  5   |     None    |   true   | 7343807e-5bd1-4c7f-8b4a-e5efb1ce8c2e |
+--------------------------------------+--------+---------------+------+-------------+----------+--------------------------------------+
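
With qemu_allowed_storage_drivers = gluster in place, qemu on the Compute node should attach these volumes via libgfapi rather than through the fuse mount. A rough way to confirm is to look for a gluster:// drive URI in the running qemu command line:

# ps -ef | grep qemu | grep 'gluster://'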