Friday, November 28, 2014

VXLAN tenant subnets and "VLAN tags" on Juno


  In the case of GRE or VXLAN tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and in the output of "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving the physical interface. They are only used by openvswitch to separate traffic on br-int, so that the tap interfaces belonging to different neutron subnets do not see each other's traffic. Since these tags are not the 12-bit 802.1Q IDs that go on the wire, the 4096 VLAN limit is not relevant here.
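This is easy to confirm on the wire (a minimal sketch, assuming the default VXLAN UDP port 4789 and that enp5s1 is the interface carrying the tunnel endpoint 192.168.0.127 shown as local_ip in the br-tun output below): tenant frames leave VXLAN-encapsulated, while a capture filtered on 802.1Q tags stays silent.

# tenant traffic shows up as VXLAN-encapsulated UDP datagrams
tcpdump -n -i enp5s1 udp port 4789 -c 5
# no 802.1Q tags should be seen for the same traffic
tcpdump -n -i enp5s1 vlan -c 5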

 
**************************
ON CONTROLLER
**************************

[root@juno1 ~(keystone_admin)]# neutron net-list
+--------------------------------------+-----------------+-----------------------------------------------------+
| id                                   | name            | subnets                                             |
+--------------------------------------+-----------------+-----------------------------------------------------+
| 90b574e2-f51a-423e-aef9-c201f6f68b76 | kashyap_private | 5fe9e3cc-feee-4f51-bed8-f4891bd8aafe 40.0.0.0/24    |
| 65cbd354-daae-41bb-9d3c-e58b1062be19 | public          | 147d5ecd-fe39-489e-8901-3b20a2c50148 192.168.1.0/24 |
| 8b2de478-de3f-448e-8ec1-8f973a762daf | boris_network   | 4142d2c6-220c-4b47-8147-cf512f7a753b 15.0.0.0/24    |
| 3fdb2eb7-fff8-4633-824b-1da4c38ccbd5 | kashyap_network | 8bee3a8d-7fac-4d05-84e3-bbe52a601084 70.0.0.0/24    |
| 951715cc-2a19-470e-9987-25b3b7906756 | demo_network    | d512bb49-29f0-4fe2-886a-10d880cc83fc 10.0.0.0/24    |
+--------------------------------------+-----------------+-----------------------------------------------------+
[root@juno1 ~(keystone_admin)]# ip netns exec qdhcp-90b574e2-f51a-423e-aef9-c201f6f68b76 ifconfig
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1  bytes 576 (576.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 576 (576.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap5532b72d-8c: flags=4163  mtu 1500
        inet 40.0.0.11  netmask 255.255.255.0  broadcast 40.0.0.255
        inet6 fe80::f816:3eff:fe6e:91a7  prefixlen 64  scopeid 0x20
        ether fa:16:3e:6e:91:a7  txqueuelen 0  (Ethernet)
        RX packets 27  bytes 2982 (2.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 2112 (2.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@juno1 ~(keystone_admin)]# ovs-vsctl show
f2113bd0-c4ca-4c4b-af16-928ff03e53da
    Bridge br-int
        fail_mode: secure
        Port "tap37cfc1fc-09"
            tag: 1
            Interface "tap37cfc1fc-09"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-7ff72517-f3"
            tag: 4
            Interface "qr-7ff72517-f3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap2d26c49a-37"
            tag: 5
            Interface "tap2d26c49a-37"
                type: internal
        Port "tap5532b72d-8c"
            tag: 4    <==== tag 4 on Controller Node
            Interface "tap5532b72d-8c"

                type: internal
        Port "qr-bd92408c-b4"
            tag: 5
            Interface "qr-bd92408c-b4"
                type: internal
        Port "qr-a494fcc8-e5"
            tag: 2
            Interface "qr-a494fcc8-e5"
                type: internal
        Port "qr-4162d98e-5c"
            tag: 1
            Interface "qr-4162d98e-5c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap305cff35-34"
            tag: 2
            Interface "tap305cff35-34"
                type: internal
    Bridge br-ex
        Port "qg-88b1ef62-2c"
            Interface "qg-88b1ef62-2c"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "qg-d3e929c6-ba"
            Interface "qg-d3e929c6-ba"
                type: internal
        Port "enp2s0"
            Interface "enp2s0"
        Port "qg-5c6dd032-a8"
            Interface "qg-5c6dd032-a8"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-7b037650-10"
            Interface "qg-7b037650-10"
                type: internal
    Bridge br-tun
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.1.3"


************************************************************************
So the kashyap_private network corresponds to tag 4 on the Controller
************************************************************************
**********************
ON COMPUTE
**********************
[root@juno2 ~(keystone_kashyap)]# nova list
+--------------------------------------+------------+--------+------------+-------------+------------------------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                                 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------------------+
| 1359ac92-8092-47bc-b7d6-ee474b641355 | CirrOS321  | ACTIVE | -          | Running     | kashyap_private=40.0.0.12, 192.168.1.155 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------------------+

The VM is running on the kashyap_private network (40.0.0.0/24)

[root@juno2 ~(keystone_kashyap)]# nova show 1359ac92-8092-47bc-b7d6-ee474b641355
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-11-28T15:21:59.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-11-28T15:21:48Z                                     |
| flavor                               | m1.tiny (1)                                              |
| hostId                               | 7339f21099a5d4d918b8e69302c1dab2d5f7af63babf36d02057177d |
| id                                   | 1359ac92-8092-47bc-b7d6-ee474b641355                     |
| image                                | cirros (6f7d1877-9b6b-4530-b868-9fe42a71bca9)            |
| kashyap_private network              | 40.0.0.12, 192.168.1.155                                 |
| key_name                             | oskey37                                                  |
| metadata                             | {}                                                       |
| name                                 | CirrOS321                                                |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 2a8be6536a864dd0b73782fd0fb2faff                         |
| updated                              | 2014-11-28T15:21:59Z                                     |
| user_id                              | bce3ed8aaa97447b8edfe3cf734b0793                         |
+--------------------------------------+----------------------------------------------------------+

****************************
Identifying qvo* port
****************************

[root@juno2 ~(keystone_kashyap)]# brctl show

bridge name    bridge id        STP enabled    interfaces
qbr23b93632-0d        8000.1e955c2e4f2e    no        qvb23b93632-0d
                            tap23b93632-0d

qbr37f299e0-de        8000.1a32e0906384    no        qvb37f299e0-de
qbra0fa9687-39        8000.f6fd5e4399e7    no        qvba0fa9687-39
qbrff853471-05        8000.9ef67b07ba76    no        qvbff853471-05

[root@juno2 ~(keystone_admin)]# virsh dumpxml 1359ac92-8092-47bc-b7d6-ee474b641355 | grep 23b93632-0d
     
      <source bridge='qbr23b93632-0d'/>
      <target dev='tap23b93632-0d'/>
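The tap device plugs into the linux bridge qbr23b93632-0d; its qvb/qvo counterparts are the two ends of a veth pair, which can be confirmed before looking at OVS (a sketch; for veth devices ethtool reports the ifindex of the peer):

# the qvb end sits on qbr23b93632-0d, the qvo end plugs into br-int
ethtool -S qvb23b93632-0d | grep peer_ifindex
# the reported index should be the one "ip link" shows for qvo23b93632-0d
ip -o link | grep qvo23b93632-0d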


***********************************************************
Tracking veth-pair ( qvb23b93632-0d, qvo23b93632-0d )
***********************************************************
[root@juno2 ~(keystone_kashyap)]# ovs-vsctl show

79e82e7f-9040-4789-b8c8-d7d397ec230b
    Bridge br-tun
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qvoff853471-05"
            tag: 2
            Interface "qvoff853471-05"
        Port "qvoa0fa9687-39"
            tag: 1
            Interface "qvoa0fa9687-39"
        Port "qvod282b303-3a"
            tag: 5
            Interface "qvod282b303-3a"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo23b93632-0d"
            tag: 6  <=== tag 6 on Compute Node
            Interface "qvo23b93632-0d"

        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvo37f299e0-de"
            tag: 1
            Interface "qvo37f299e0-de"
    ovs_version: "2.1.3"

****************************************************************************
So the kashyap_private network corresponds to tag 6 on the Compute Node
****************************************************************************








Wednesday, November 26, 2014

ovs-ofctl dump-flows br-tun & VXLAN

Three VXLAN tenant networks created on the Controller

#########################################
 Controller&&Network Node: ovs-ofctl dump-flows br-tun
#########################################
cookie=0x0, duration=11839.724s, table=4, n_packets=17158, n_bytes=1355764, idle_age=1506, priority=1,tun_id=0x3ee actions=mod_vlan_vid:5,resubmit(,10)
cookie=0x0, duration=43283.552s, table=4, n_packets=131115, n_bytes=9306495, idle_age=327, priority=1,tun_id=0x3ec actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=43280.638s, table=4, n_packets=60742, n_bytes=4530221, idle_age=5242, priority=1,tun_id=0x3ea actions=mod_vlan_vid:3,resubmit(,10)

In the case of GRE (VXLAN) tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and in the output of "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving the physical interface. They are only used by openvswitch to separate traffic going from br-tun to br-int, so that the tap interfaces belonging to different neutron subnets do not see each other's traffic.

########################################
Compute Node : ovs-ofctl dump-flows br-tun
########################################
cookie=0x0, duration=11239.277s, table=4, n_packets=28289, n_bytes=40742145, idle_age=1670, priority=1,tun_id=0x3ee actions=mod_vlan_vid:6,resubmit(,10)
cookie=0x0, duration=43497.709s, table=4, n_packets=188677, n_bytes=281310140, idle_age=491, priority=1,tun_id=0x3ec actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=17757.690s, table=4, n_packets=107542, n_bytes=155828433, idle_age=5406, priority=1,tun_id=0x3ea actions=mod_vlan_vid:4,resubmit(,10)

The VLAN tags here correspond to the qvo* interfaces (tap interfaces) of the nova instances running on the Compute node. The flows shown by `ovs-ofctl dump-flows br-tun` rewrite the VXLAN tunnel ID (tun_id) into the local VLAN tag as traffic passes from br-tun to br-int, one tag per tenant network.
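The tun_id values are the VXLAN VNIs that actually travel between the nodes. A minimal cross-check (sketch; admin credentials assumed, <network-name> is a placeholder for one of the tenant networks listed in the previous post):

# 0x3ee in decimal is 1006; it should match the VNI of one tenant network
printf "%d\n" 0x3ee
# provider:segmentation_id is the VNI Neutron assigned to that network
neutron net-show <network-name> | grep provider:segmentation_id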

Monday, November 24, 2014

How VMs access metadata via qrouter-namespace in Juno

    It is actually an update of http://techbackground.blogspot.ie/2013/06/metadata-via-quantum-router.html for Neutron on Juno (the original blog covers the Quantum implementation on Grizzly). From my standpoint, understanding how VMs launched via nova reach the nova-api metadata service (and get a proper response from nova-api) causes a lot of problems, due to a lack of understanding of the core Neutron concepts involved.

Neutron proxies metadata requests to Nova adding HTTP headers which Nova uses to identify the source instance. Neutron actually uses two proxies to do this: a namespace proxy and a metadata agent.
 This post shows how a metadata request gets from an instance to the Nova metadata service via a namespace proxy running in a Neutron router.


  

         
1. Instance makes request
[fedora@vf20rsx2211 ~]$ curl http://169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups[fedora@vf20rsx2211 ~]$

[fedora@vf20rsx2211 ~]$ ip -4 address show dev eth0
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 50.0.0.44/24 brd 50.0.0.255 scope global dynamic eth0
       valid_lft 70518sec preferred_lft 70518sec

[fedora@vf20rsx2211 ~]$ ip route
default via 50.0.0.1 dev eth0  proto static  metric 1024
50.0.0.0/24 dev eth0  proto kernel  scope link  src 50.0.0.44

[fedora@vf20rsx2211 ~]$ ip route get 169.254.169.254
169.254.169.254 via 50.0.0.1 dev eth0  src 50.0.0.44
    cache

2. Namespace proxy receives request
The default gateway 50.0.0.1 exists within a Neutron router namespace on the network node. The neutron-l3-agent started a namespace proxy in this namespace and added some iptables rules to redirect metadata requests to it.
There are no special routes, so the request goes out the default gateway. Of course, a Neutron router needs to have an interface on the subnet.




[root@juno1 ~(keystone_admin)]# ip netns exec qdhcp-45577666-657d-4f75-a3ab-9bc232f15203 route -n


Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         50.0.0.1        0.0.0.0         UG    0      0        0 tap7a12f9b0-a4
50.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 tap7a12f9b0-a4

[root@juno1 ~(keystone_admin)]# neutron router-list
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name    | external_gateway_info                                                                                                                                                                     | distributed | ha    |
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

| 1cf08ea2-959f-4206-b2f1-a9b4708399c1 | router4 | {"network_id": "65cbd354-daae-41bb-9d3c-e58b1062be19", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "147d5ecd-fe39-489e-8901-3b20a2c50148", "ip_address": "192.168.1.173"}]} | False       | False |
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 ifconfig

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-7b037650-10: flags=4163  mtu 1500
        inet 192.168.1.173  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::f816:3eff:fee5:de97  prefixlen 64  scopeid 0x20
        ether fa:16:3e:e5:de:97  txqueuelen 0  (Ethernet)
        RX packets 63929  bytes 87480425 (83.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36523  bytes 5286278 (5.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-17ddee14-9f: flags=4163  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe6f:a8e7  prefixlen 64  scopeid 0x20
        ether fa:16:3e:6f:a8:e7  txqueuelen 0  (Ethernet)
        RX packets 36643  bytes 5304620 (5.0 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 62940  bytes 87350558 (83.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 ip -4 address show dev qr-17ddee14-9f
 

16: qr-17ddee14-9f: mtu 1500 qdisc noqueue state UNKNOWN
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-17ddee14-9f
       valid_lft forever preferred_lft forever

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 iptables-save| grep 9697


-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      6755/python        

[root@juno1 ~(keystone_admin)]# ps -f --pid 6755 | fold -s -w 82

UID        PID  PPID  C STIME TTY          TIME CMD
root      6755     1  0 08:01 ?        00:00:00 /usr/bin/python
/bin/neutron-ns-metadata-proxy

--pid_file=/var/lib/neutron/external/pids/1cf08ea2-959f-4206-b2f1-a9b4708399c1.pid
 --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--router_id=1cf08ea2-959f-4206-b2f1-a9b4708399c1 --state_path=/var/lib/neutron
--metadata_port=9697 --verbose
--log-file=neutron-ns-metadata-proxy-1cf08ea2-959f-4206-b2f1-a9b4708399c1.log
--log-dir=/var/log/neutron

The namespace proxy adds two HTTP headers to the request:
    X-Forwarded-For: the instance's IP address
    X-Neutron-Router-ID: the uuid of the Neutron router
and proxies it to a Unix domain socket named
/var/lib/neutron/metadata_proxy.


 3. Metadata agent receives request and queries the Neutron service
The metadata agent listens on this Unix socket. It is a normal Linux service that runs in the main operating system IP namespace, and so it is able to reach the Neutron  and Nova metadata services. Its configuration file has all the information required to do so.

[root@juno1 ~(keystone_admin)]# netstat -lxp | grep metadata
unix  2      [ ACC ]     STREAM     LISTENING     36027    1589/python          /var/lib/neutron/metadata_proxy

[root@juno1 ~(keystone_admin)]#  lsof /var/lib/neutron/metadata_proxy
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
COMMAND    PID    USER   FD   TYPE             DEVICE SIZE/OFF  NODE NAME
neutron-m 1589 neutron    5u  unix 0xffff8800c269a580      0t0 36027 /var/lib/neutron/metadata_proxy
neutron-m 3412 neutron    5u  unix 0xffff8800c269a580      0t0 36027 /var/lib/neutron/metadata_proxy
neutron-m 3413 neutron    5u  unix 0xffff8800c269a580      0t0 36027 /var/lib/neutron/metadata_proxy

[root@juno1 ~(keystone_admin)]# ps -f --pid 1589 | fold -w 80 -s
UID        PID  PPID  C STIME TTY          TIME CMD
neutron   1589     1  0 07:59 ?        00:00:03 /usr/bin/python
/usr/bin/neutron-metadata-agent --config-file
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/metadata_agent.ini --log-file
/var/log/neutron/metadata-agent.log


[root@juno1 neutron(keystone_admin)]#  grep -v '^#\|^\s*$' /etc/neutron/metadata_agent.ini

[DEFAULT]
debug = False
auth_url = http://192.168.1.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.168.1.127
nova_metadata_port = 8775
metadata_proxy_shared_secret =a965cd23ed2f4502

metadata_workers =2
metadata_backlog = 4096

It reads the X-Forwarded-For and X-Neutron-Router-ID headers in the request and queries the Neutron service to find the ID of the instance that created the request.
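That lookup can roughly be reproduced by hand (a sketch, not the agent's exact API calls; the IP is the instance address from step 1, and device_id on the matching port is the instance UUID):

neutron port-list --fixed-ips ip_address=50.0.0.44 -c id -c fixed_ips -c device_id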


 4. Metadata agent proxies request to Nova metadata service
It then adds these headers:
    X-Instance-ID: the instance ID returned from Neutron 
    X-Instance-ID-Signature: instance ID signed with the shared-secret
    X-Forwarded-For: the instance's IP address
and proxies the request to the Nova metadata service.
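The signature can be reproduced by hand (a minimal sketch, assuming it is an HMAC-SHA256 of the instance UUID keyed with metadata_proxy_shared_secret; <instance-uuid> is a placeholder for the ID returned from Neutron):

# shared secret taken from /etc/neutron/metadata_agent.ini shown above
echo -n <instance-uuid> | openssl dgst -sha256 -hmac a965cd23ed2f4502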

5. Nova metadata service receives request
The metadata service was started by nova-api. The handler checks the X-Instance-ID-Signature with the shared key, looks up the data and returns the response which travels back via the two proxies to the instance.


[root@juno1 nova(keystone_admin)]# grep metadata /etc/nova/nova.conf | grep -v ^# | grep -v ^$
enabled_apis=ec2,osapi_compute,metadata
metadata_listen=0.0.0.0
metadata_workers=2
metadata_host=192.168.1.127
neutron_metadata_proxy_shared_secret=a965cd23ed2f4502
service_neutron_metadata_proxy=True


[root@juno1 nova(keystone_admin)]# grep metadata /var/log/nova/nova-api.log | tail -15
2014-11-24 08:20:39.208 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/public-keys/ HTTP/1.1" status: 200 len: 125 time: 0.0013790
2014-11-24 08:20:39.217 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/ami-id HTTP/1.1" status: 200 len: 120 time: 0.0014508
2014-11-24 08:20:39.227 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/kernel-id HTTP/1.1" status: 200 len: 120 time: 0.0014200
2014-11-24 08:20:39.237 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/instance-action HTTP/1.1" status: 200 len: 120 time: 0.0013640
2014-11-24 08:20:39.247 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 130 time: 0.0014000
2014-11-24 08:20:39.256 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/ HTTP/1.1" status: 200 len: 130 time: 0.0013840
2014-11-24 08:20:39.265 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/ami-manifest-path HTTP/1.1" status: 200 len: 121 time: 0.0013070
2014-11-24 08:20:39.275 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/security-groups HTTP/1.1" status: 200 len: 116 time: 0.0013120
2014-11-24 08:20:39.285 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" status: 200 len: 124 time: 0.0013220
2014-11-24 08:20:39.294 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 127 time: 0.0012989
2014-11-24 08:20:39.304 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 120 time: 0.0013518
2014-11-24 08:20:39.313 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/public-keys/0/openssh-key HTTP/1.1" status: 200 len: 517 time: 0.0013201
2014-11-24 08:20:39.323 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200 len: 119 time: 0.0013349
2014-11-24 08:20:39.333 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/ebs0 HTTP/1.1" status: 200 len: 124 time: 0.0013509
2014-11-24 08:20:39.342 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200 len: 124 time: 0.0013192

Thursday, November 20, 2014

OVS Setup on Juno network node


[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 ifconfig
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 12  bytes 1008 (1008.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 1008 (1008.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-7b037650-10: flags=4163  mtu 1500
        inet 192.168.1.173  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::f816:3eff:fee5:de97  prefixlen 64  scopeid 0x20
        ether fa:16:3e:e5:de:97  txqueuelen 0  (Ethernet)
        RX packets 45149  bytes 46211483 (44.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 21438  bytes 4059759 (3.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-17ddee14-9f: flags=4163  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe6f:a8e7  prefixlen 64  scopeid 0x20
        ether fa:16:3e:6f:a8:e7  txqueuelen 0  (Ethernet)
        RX packets 30107  bytes 4574015 (4.3 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 38725  bytes 44984619 (42.9 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@juno1 ~(keystone_admin)]# ip netns exec qdhcp-45577666-657d-4f75-a3ab-9bc232f15203 ifconfig
lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 16270  bytes 781242 (762.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16270  bytes 781242 (762.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap7a12f9b0-a4: flags=4163  mtu 1500
        inet 50.0.0.11  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe29:fef1  prefixlen 64  scopeid 0x20
        ether fa:16:3e:29:fe:f1  txqueuelen 0  (Ethernet)
        RX packets 4664  bytes 267057 (260.7 KiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 21948  bytes 1352385 (1.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
 


################################
Getting ifconfig
################################

[root@juno1 ~(keystone_admin)]# ifconfig
br-ex: flags=4163  mtu 1500
        inet 192.168.1.127  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
        ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
        RX packets 3411331  bytes 548241709 (522.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3171333  bytes 1172191351 (1.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp2s0: flags=4163  mtu 1500
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20
        ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
        RX packets 3448839  bytes 593192446 (565.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3192798  bytes 1176251503 (1.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17 

enp5s1: flags=4163  mtu 1500
        inet 192.168.0.127  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2e0:53ff:fe13:174c  prefixlen 64  scopeid 0x20
        ether 00:e0:53:13:17:4c  txqueuelen 1000  (Ethernet)
        RX packets 22472  bytes 5191240 (4.9 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 59792  bytes 48604605 (46.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 0  (Local Loopback)
        RX packets 5627133  bytes 1136824718 (1.0 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5627133  bytes 1136824718 (1.0 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:30:a6:39  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

################################
 Now verifying OVS configuration :
################################

[root@juno1 ~(keystone_admin)]# ovs-vsctl show
f2113bd0-c4ca-4c4b-af16-928ff03e53da
    Bridge br-int
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap3f570ba8-a1"
            tag: 2
            Interface "tap3f570ba8-a1"
                type: internal
        Port "tapba3a2dd7-73"
            tag: 3
            Interface "tapba3a2dd7-73"
                type: internal
        Port "qr-00d5c709-9a"
            tag: 3
            Interface "qr-00d5c709-9a"
                type: internal
        Port "tap7a12f9b0-a4"   <=====  port of br-int ( tap-interface of qdhcp-
            tag: 1                                                                          -namespce )
            Interface "tap7a12f9b0-a4"
                type: internal
        Port "tapb593041a-c7"
            tag: 4095
            Interface "tapb593041a-c7"
                type: internal
        Port "qr-17ddee14-9f"    <====== port of br-int
            tag: 1
            Interface "qr-17ddee14-9f"
                type: internal
        Port "qr-5bbf9169-4b"
            tag: 4
            Interface "qr-5bbf9169-4b"
                type: internal
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port "qg-d3e929c6-ba"
            Interface "qg-d3e929c6-ba"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "qg-7b037650-10"   <====== port of br-ex
            Interface "qg-7b037650-10"
                type: internal
        Port "enp2s0" <=========== port of br-ex
            Interface "enp2s0"
        Port "qg-fd2baf63-9e"
            Interface "qg-fd2baf63-9e"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-38b0f41d-21"
            Interface "qg-38b0f41d-21"
                type: internal
    ovs_version: "2.1.3"



Tuesday, November 11, 2014

Tuning RDO Juno CentOS 7 TwoNode Gluster 3.5.2 Cluster for Qemu integration with libgfapi to work seamlessly

This post is focused on tuning a replica 2 gluster volume when building an RDO Juno Gluster Cluster on CentOS 7. The steps undertaken come from the Gluster 3.5.2 Release Notes (http://blog.nixpanic.net/2014_07_01_archive.html) and make the Qemu (1.5.3) && libgfapi integration really work.

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin  )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)

juno1.localdomain   -  Controller (192.168.1.127)
juno2.localdomain   -  Compute   (192.168.1.137)

Download from http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/EPEL.repo/epel-7/SRPMS/
glusterfs-3.5.2-1.el7.src.rpm

$ rpm -iv glusterfs-3.5.2-1.el7.src.rpm

$ sudo yum install bison flex gcc automake libtool ncurses-devel readline-devel libxml2-devel openssl-devel libaio-devel lvm2-devel glib2-devel libattr-devel libibverbs-devel librdmacm-devel fuse-devel

$ rpmbuild -bb glusterfs.spec
. . . . . . . . . . . . . . . . . . . . . . .

Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-libs-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-cli-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-rdma-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-geo-replication-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-fuse-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-server-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-api-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-extra-xlators-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/noarch/glusterfs-resource-agents-3.5.2-1.el7.centos.noarch.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-devel-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-api-devel-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-regression-tests-3.5.2-1.el7.centos.x86_64.rpm
Wrote: /home/boris/rpmbuild/RPMS/x86_64/glusterfs-debuginfo-3.5.2-1.el7.centos.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.Sigc7l
+ umask 022
+ cd /home/boris/rpmbuild/BUILD
+ cd glusterfs-3.5.2
+ rm -rf /home/boris/rpmbuild/BUILDROOT/glusterfs-3.5.2-1.el7.centos.x86_64
+ exit 0

[boris@juno1 x86_64]$ cat install
sudo yum install glusterfs-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-api-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-api-devel-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-cli-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-devel-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-extra-xlators-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-fuse-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-geo-replication-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-libs-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-rdma-3.5.2-1.el7.centos.x86_64.rpm \
glusterfs-server-3.5.2-1.el7.centos.x86_64.rpm

$ sudo service glusterd start

1. The first step is tuning /etc/sysconfig/iptables for the IPv4 iptables firewall (the firewalld service should be disabled) :-

Update /etc/sysconfig/iptables on both nodes:-

-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT

Comment out the lines below, ignoring the instruction in the file :

# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited

 Restart service iptables on both nodes

2. Second step:-


On juno1, run the following commands :

# ssh-keygen (Hit Enter to accept all of the defaults)
# ssh-copy-id -i ~/.ssh/id_rsa.pub  root@juno2

On both nodes run :-

# ./install
# service glusterd start

On juno1

# gluster peer probe juno2.localdomain

Should return "success"

[root@juno1 ~(keystone_admin)]# gluster peer status
Number of Peers: 1

Hostname: juno2.localdomain
Uuid: 3ca6490b-c44a-4601-ac13-51fec99e9caf
State: Peer in Cluster (Connected)

[root@juno1 ~(keystone_admin)]# ssh 192.168.1.137
Last login: Thu Aug 14 17:53:41 2014
[root@juno2 ~]# gluster peer status
Number of Peers: 1

Hostname: 192.168.1.127
Uuid: 051e7528-8c2b-46e1-abb6-6d84b2f2e45b
State: Peer in Cluster (Connected)


*************************************************************************
On Controller (192.168.1.127) and on Compute (192.168.1.137)
*************************************************************************

Verify ports availability:-

[root@juno1 ~(keystone_admin)]# netstat -lntp | grep gluster
tcp        0      0 0.0.0.0:49152           0.0.0.0:*               LISTEN      5453/glusterfsd 
tcp        0      0 0.0.0.0:2049             0.0.0.0:*               LISTEN      5458/glusterfs   
tcp        0      0 0.0.0.0:38465           0.0.0.0:*               LISTEN      5458/glusterfs   
tcp        0      0 0.0.0.0:38466           0.0.0.0:*               LISTEN      5458/glusterfs   
tcp        0      0 0.0.0.0:38468           0.0.0.0:*               LISTEN      5458/glusterfs   
tcp        0      0 0.0.0.0:38469           0.0.0.0:*               LISTEN      5458/glusterfs   
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      2667/glusterd   
tcp        0      0 0.0.0.0:978               0.0.0.0:*               LISTEN      5458/glusterfs

************************************
Switching Cinder to Gluster volume
************************************

# gluster volume create cinder-volumes57 \
replica 2 juno1.localdomain:/data5/data-volumes   juno2.localdomain:/data5/data-volumes 

# gluster volume start cinder-volumes57

# gluster volume set cinder-volumes57  auth.allow 192.168.1.*

The following configuration changes are necessary for 'qemu' and 'samba vfs plugin' integration with libgfapi to work seamlessly:

1. First step

       gluster volume set cinder-volumes57 server.allow-insecure on

2. Restarting is required
   
    gluster volume stop cinder-volumes57
    gluster volume start cinder-volumes57

3. Edit /etc/glusterfs/glusterd.vol   to have a line :
    
     option rpc-auth-allow-insecure on

4. Restarting glusterd is required :

     service glusterd restart
  

nova.conf (on the Compute Node) should have the entry :-

qemu_allowed_storage_drivers = gluster
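One quick way to check whether the installed qemu-kvm is actually linked against libgfapi (a sketch; without that library qemu cannot open gluster:// disk paths):

ldd /usr/libexec/qemu-kvm | grep -i gfapi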

[root@juno1 ~]# gluster volume info
Volume Name: cinder-volumes57
Type: Replicate
Volume ID: c1f2e1d2-0b11-426e-af3d-7af0d1d24d5e
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: juno1.localdomain:/data5/data-volumes
Brick2: juno2.localdomain:/data5/data-volumes
Options Reconfigured:
auth.allow: 192.168.1.*
server.allow-insecure: on

[root@juno1 ~]# gluster volume status
Status of volume: cinder-volumes57
Gluster process                        Port    Online    Pid
------------------------------------------------------------------------------
Brick juno1.localdomain:/data5/data-volumes        49152    Y    3346
Brick juno2.localdomain:/data5/data-volumes        49152    Y    3113
NFS Server on localhost                    2049    Y    3380
Self-heal Daemon on localhost                N/A    Y    3387
NFS Server on juno2.localdomain                2049    Y    3911
Self-heal Daemon on juno2.localdomain            N/A    Y    3916

Task Status of Volume cinder-volumes57
------------------------------------------------------------------------------
There are no active volume tasks


##############################
Create entries  in /etc/cinder/cinder.conf
############################## 

enabled_backends=gluster

[gluster]
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/shares.conf
glusterfs_mount_point_base = /var/lib/cinder/volumes
volume_backend_name=GLUSTER


# vi /etc/cinder/shares.conf
    192.168.1.127:/cinder-volumes57
:wq


[root@juno1 ~(keystone_admin)]# cinder type-create gluster
+--------------------------------------+---------+
|                  ID                  |   Name  |
+--------------------------------------+---------+
| 29917269-d73f-4c28-b295-59bfbda5d044 | gluster |
+--------------------------------------+---------+

[root@juno1 ~(keystone_admin)]# cinder type-key gluster  set volume_backend_name=GLUSTER

The next step is a cinder services restart :-

[root@juno1 ~(keystone_demo)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done
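After the restart the gluster backend should show up as its own cinder-volume service (sketch; with multiple backends each one is listed as host@backend):

# expect a cinder-volume entry for the gluster backend (host@gluster) in state "up"
cinder service-list | grep volume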

[root@juno1 ~(keystone_admin)]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/centos01-root00      147G   43G  105G  29% /
devtmpfs                         3.9G     0  3.9G   0% /dev
tmpfs                            3.9G  152K  3.9G   1% /dev/shm
tmpfs                            3.9G   26M  3.8G   1% /run
tmpfs                            3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0                       1.9G  6.0M  1.7G   1% /srv/node/swift_loopback
/dev/sda3                        477M  146M  302M  33% /boot
/dev/mapper/centos01-data5        98G   15G   83G  16% /data5
192.168.1.127:/cinder-volumes57   98G   15G   83G  16% /var/lib/cinder/volumes/8478b56ad61cf67ab9839fb0a5296965
tmpfs                            3.9G   26M  3.8G   1% /run/netns


###################################################
How to verify implementation success: boot a nova instance
(with instance-id, say, 00000049) based on a cinder volume.
###################################################

On the Compute Node, grep /var/log/libvirt/qemu/instance-00000049.log for a
"gluster" entry. You should find a string like the one below (note the gluster:// disk path).

# cd /var/log/libvirt/qemu
[root@juno2 qemu]# cat instance-00000049.log | grep gluster
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name instance-00000049 -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 92151b16-c7b4-48d1-b49f-1e310e005c80 -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2014.2-2.el7.centos,serial=5dff0de4-c27d-453d-85b4-b2d9af514fcd,uuid=92151b16-c7b4-48d1-b49f-1e310e005c80 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000049.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://192.168.1.127:24007/cinder-volumes57/volume-179b9782-d2b7-4891-ba89-5198b71c6188,if=none,id=drive-virtio-disk0,format=raw,serial=179b9782-d2b7-4891-ba89-5198b71c6188,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:8b:9f:6c,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/92151b16-c7b4-48d1-b49f-1e310e005c80/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


At the same time, issue the following commands on the Controller :-

[root@juno1 ~(keystone_boris)]# cinder list
+--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |   Display Name  | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+
| 179b9782-d2b7-4891-ba89-5198b71c6188 | in-use | Win2012GLSVOL01 |  20  |   gluster   |   true   | 92151b16-c7b4-48d1-b49f-1e310e005c80 |
| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711  |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |
+--------------------------------------+--------+-----------------+------+-------------+----------+--------------------------------------+

[root@juno1 ~(keystone_boris)]# nova list
+--------------------------------------+--------------+-----------+------------+-------------+----------------------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                               |
+--------------------------------------+--------------+-----------+------------+-------------+----------------------------------------+
| 51a20959-0a0c-4ef6-81ec-2edeab6e3588 | VF20RX520711 | SUSPENDED | -          | Shutdown    | private_boris=50.0.0.12, 192.168.1.175 |
| 92151b16-c7b4-48d1-b49f-1e310e005c80 | Win2012SRV05 | SUSPENDED | -          | Shutdown    | private_boris=50.0.0.25, 192.168.1.179 |
+--------------------------------------+--------------+-----------+------------+-------------+----------------------------------------+

[root@juno1 ~(keystone_boris)]# nova show 92151b16-c7b4-48d1-b49f-1e310e005c80 | grep 179b9782-d2b7-4891-ba89-5198b71c6188
| os-extended-volumes:volumes_attached | [{"id": "179b9782-d2b7-4891-ba89-5198b71c6188"}]         |



##############################################
Another way of verification - run on Compute Node:-
##############################################

[root@juno1 ~(keystone_boris)]# ssh 192.168.1.137
Last login: Tue Nov 11 17:12:04 2014 from juno1.localdomain

[root@juno2 ~]# . keystonerc_boris

[root@juno2 ~(keystone_boris)]# nova list
+--------------------------------------+----------------+-----------+------------+-------------+----------------------------------------+
| ID                                   | Name           | Status    | Task State | Power State | Networks                               |
+--------------------------------------+----------------+-----------+------------+-------------+----------------------------------------+
| 57640068-3ab7-466a-8eae-cf132359b233 | UbuntuUTRX1211 | ACTIVE    | -          | Running     | private_boris=50.0.0.26, 192.168.1.174 |
| 51a20959-0a0c-4ef6-81ec-2edeab6e3588 | VF20RX520711   | SUSPENDED | -          | Shutdown    | private_boris=50.0.0.12, 192.168.1.175 |
| 92151b16-c7b4-48d1-b49f-1e310e005c80 | Win2012SRV05   | SUSPENDED | -          | Shutdown    | private_boris=50.0.0.25, 192.168.1.179 |
+--------------------------------------+----------------+-----------+------------+-------------+----------------------------------------+

[root@juno2 ~(keystone_boris)]# virsh dumpxml 57640068-3ab7-466a-8eae-cf132359b233 | grep -E 'source (file|protocol)'

  <source protocol='gluster' name='cinder-volumes57/volume-bf448475-50c8-4491-92aa-77d36666f296'>

[root@juno2 ~(keystone_boris)]# nova show 57640068-3ab7-466a-8eae-cf132359b233 | grep bf448475-50c8-4491-92aa-77d36666f296
| os-extended-volumes:volumes_attached | [{"id": "bf448475-50c8-4491-92aa-77d36666f296"}]         |



Saturday, November 08, 2014

LVMiSCSI cinder backend for RDO Juno on CentOS 7

The current post follows up http://lxer.com/module/newswire/view/207415/index.html
RDO Juno has been installed on Controller and Compute nodes via packstack as described in the link @lxer.com. The iSCSI target (server) implementation on CentOS 7 differs significantly from CentOS 6.5 and is based on the CLI utility targetcli and the target service.
   With Enterprise Linux 7, both Red Hat and CentOS, there is a big change in the management of iSCSI targets: the software runs as part of the standard systemd structure. Consequently there are significant changes in the multi back end cinder architecture of RDO Juno running on CentOS 7 or Fedora 21 utilizing LVM-based iSCSI targets.

  Create the following entries in /etc/cinder/cinder.conf on the Controller (which, in the case of a two node cluster, works as the Storage node as well).

The first entry is in the [DEFAULT] section

#######################
enabled_backends=lvm51,lvm52
#######################

At the bottom of file

[lvm51]
iscsi_helper=lioadm
volume_group=cinder-volumes51
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI51


[lvm52]
iscsi_helper=lioadm
volume_group=cinder-volumes52
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
iscsi_ip_address=192.168.1.127
volume_backend_name=LVM_iSCSI52
 

VGs cinder-volumes52 and cinder-volumes51 are created on /dev/sda6 and /dev/sdb1 correspondingly :-

# pvcreate /dev/sda6
# pvcreate /dev/sdb1
# vgcreate cinder-volumes52 /dev/sda6
# vgcreate cinder-volumes51  /dev/sdb1
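A quick sanity check before pointing cinder at them (sketch):

# both volume groups should be listed with the expected sizes
vgs cinder-volumes51 cinder-volumes52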

Then issue :-

[root@juno1 ~(keystone_admin)]# cinder type-create lvms
+--------------------------------------+------+
|                  ID                  | Name |
+--------------------------------------+------+
| 64414f3a-7770-4958-b422-8db0c3e2f433 | lvms  |
+--------------------------------------+------+


[root@juno1 ~(keystone_admin)]# cinder type-create lvmz
+--------------------------------------+---------+
|                  ID                  |   Name  |
+--------------------------------------+---------+
| 29917269-d73f-4c28-b295-59bfbda5d044 |  lvmz   |
+--------------------------------------+---------+

[root@juno1 ~(keystone_admin)]# cinder type-list
+--------------------------------------+---------+
|                  ID                  |   Name  |
+--------------------------------------+---------+
| 29917269-d73f-4c28-b295-59bfbda5d044 |  lvmz   |
| 64414f3a-7770-4958-b422-8db0c3e2f433 |  lvms   |
+--------------------------------------+---------+


[root@juno1 ~(keystone_admin)]# cinder type-key lvmz set volume_backend_name=LVM_iSCSI51

[root@juno1 ~(keystone_admin)]# cinder type-key lvms set volume_backend_name=LVM_iSCSI52
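For the new enabled_backends entries to take effect, restart the cinder services (the same loop used in the Gluster post above):

[root@juno1 ~(keystone_admin)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done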



Then enable and start service target:-

   [root@juno1 ~(keystone_admin)]#   service target enable
   [root@juno1 ~(keystone_admin)]#   service target start

[root@juno1 ~(keystone_admin)]# service target status
Redirecting to /bin/systemctl status  target.service
target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Wed 2014-11-05 13:23:09 MSK; 44min ago
  Process: 1611 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1611 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service


Nov 05 13:23:07 juno1.localdomain systemd[1]: Starting Restore LIO kernel target configuration...
Nov 05 13:23:09 juno1.localdomain systemd[1]: Started Restore LIO kernel target configuration.

Now all changes made by creating cinder volumes of types lvms and lvmz (via
dashboard - Volume Create with the Volume Type dropdown menu - or via the cinder CLI)
will be persistent in the targetcli> ls output between reboots.
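targetcli can also be run one-shot to dump that tree (sketch; each cinder volume appears as an iqn.2010-10.org.openstack:volume-<id> target, matching the iscsiadm discovery shown further below):

# one-shot listing of the LIO configuration; count the openstack volume targets
targetcli ls | grep -c iqn.2010-10.org.openstack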

[root@juno1 ~(keystone_boris)]# cinder list
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
|                  ID                  | Status |   Display Name   | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+
| 3a4f6878-530a-4a28-87bb-92ee256f63ea | in-use | UbuntuUTLV510851 |  5   |     lvmz    |   true   | efb1762e-6782-4895-bf2b-564f14105b5b |
| 51528876-405d-4a15-abc2-61ad72fc7d7e | in-use |   CentOS7LVG51   |  10  |     lvmz    |   true   | ba3e87fa-ee81-42fc-baed-c59ca6c8a100 |
| ca0694ae-7e8d-4c84-aad8-3f178416dec6 | in-use |  VF20LVG520711   |  7   |     lvms    |   true   | 51a20959-0a0c-4ef6-81ec-2edeab6e3588 |
| dc9e31f0-b27f-4400-a666-688365126f67 | in-use | UbuntuUTLV520711 |  7   |     lvms    |   true   | 1fe7d2c3-58ae-4ee8-8f5f-baf334195a59 |
+--------------------------------------+--------+------------------+------+-------------+----------+--------------------------------------+


    Compare the volume ids above with the targetcli> ls output.

   The next snapshot demonstrates lvms && lvmz volumes attached to the corresponding
   nova instances utilizing the LVMiSCSI cinder backend.

On the Compute Node the iscsiadm output will look as follows :-

[root@juno2 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-3a4f6878-530a-4a28-87bb-92ee256f63ea
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-ca0694ae-7e8d-4c84-aad8-3f178416dec6
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-dc9e31f0-b27f-4400-a666-688365126f67
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-51528876-405d-4a15-abc2-61ad72fc7d7e


References
1.  https://www.centos.org/forums/viewtopic.php?f=47&t=48591