Sunday, December 14, 2014

Running Nova-Docker on OpenStack RDO Juno (CentOS 7)


  Recently Filip Krikava forked nova-docker on GitHub and created a Juno branch on top of the latest commit, adding "Fix the problem when an image is not located in the local docker image registry".

 Master https://github.com/stackforge/nova-docker.git targets the latest Nova (Kilo release); the forked branch is supposed to work with Juno, reasonably including the commits after "Merge oslo.i18n". The posting below tests the Juno branch https://github.com/fikovnik/nova-docker.git
 


Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.
 


Install the packages required for the nova-docker driver per https://wiki.openstack.org/wiki/Docker

*****************************************************
As of 11/12/2014 the third line below may instead use the official fork
https://github.com/stackforge/nova-docker/tree/stable/juno
# git clone https://github.com/stackforge/nova-docker
*****************************************************
 
***************************
Initial docker setup
***************************

# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a

* master                1ed1820 A note no firewall drivers.
  remotes/origin/HEAD   -> origin/master
  remotes/origin/juno   1a08ea5 Fix the problem when an image
                        is not located in the local docker image registry.
  remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d


******************************
Update nova.conf
******************************
vi /etc/nova/nova.conf
set "compute_driver = novadocker.virt.docker.DockerDriver"
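For reference, the relevant fragment of /etc/nova/nova.conf would look like this (a minimal sketch; everything else stays as deployed by RDO):

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver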

************************************************
Next, create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert the following lines:

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
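For this filter file to be picked up, filters_path in /etc/nova/rootwrap.conf must include the new directory; on RDO the default usually already does (shown here as an assumption, verify against your own file):

# grep filters_path /etc/nova/rootwrap.conf
filters_path=/etc/nova/rootwrap.d,/usr/share/nova/rootwrap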

*****************************************
Add this line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

************************
Restart Services
************************
usermod -G docker nova
systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api
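
A quick sanity check that nova-compute came back up with the Docker driver (assuming admin credentials in keystonerc_admin):

# . keystonerc_admin
# nova service-list | grep nova-compute
# nova hypervisor-list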

******************************
Verification docker install
******************************

[root@juno ~]# docker run -i -t fedora /bin/bash
Unable to find image 'fedora' locally
fedora:latest: The image you are pulling has been verified

00a0c78eeb6d: Pull complete
2f6ab0c1646e: Pull complete
511136ea3c5a: Already exists
Status: Downloaded newer image for fedora:latest
bash-4.3# cat /etc/issue
Fedora release 21 (Twenty One)
Kernel \r on an \m (\l)

[root@juno ~]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                        PORTS               NAMES
738e54f9efd4        fedora:latest            "/bin/bash"         3 minutes ago       Exited (127) 25 seconds ago                       stoic_lumiere                              
14fd0cbba76d        ubuntu:latest            "/bin/bash"         3 minutes ago       Exited (0) 3 minutes ago                          prickly_hypatia                            
ef1a726d1cd4        fedora:latest            "/bin/bash"         5 minutes ago       Exited (0) 3 minutes ago                          drunk_shockley                             
0a2da90a269f        ubuntu:latest            "/bin/bash"         11 hours ago        Exited (0) 11 hours ago                           thirsty_kowalevski                         
5a3288ce0e8e        ubuntu:latest            "/bin/bash"         11 hours ago        Exited (0) 11 hours ago                           happy_leakey                               
21e84951eabd        tutum/wordpress:latest   "/run.sh"           16 hours ago        Up About an hour                                  nova-bf5f7eb9-900d-48bf-a230-275d65813b0f  

*******************
Setup WordPress
*******************


# docker pull tutum/wordpress
# . keystonerc_admin
# docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress


[root@juno ~(keystone_admin)]# glance image-list
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size      | Status |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+
| c6d01e60-56c2-443f-bf87-15a0372bc2d9 | cirros          | qcow2       | bare             | 13200896  | active |
| 9d59e7ad-35b4-4c3f-9103-68f85916f36e | tutum/wordpress | raw         | docker           | 517639680 | active |
+--------------------------------------+-----------------+-------------+------------------+-----------+--------+

********************
Start container
********************
$ . keystonerc_demo

[root@juno ~(keystone_demo)]# neutron net-list
+--------------------------------------+--------------+-------------------------------------------------------+
| id                                   | name         | subnets                                               |
+--------------------------------------+--------------+-------------------------------------------------------+
| ccfc4bb1-696d-4381-91d7-28ce7c9cb009 | private      | 6c0a34ab-e3f1-458c-b24a-96f5a2149878 10.0.0.0/24      |
| 32c14896-8d47-4a56-b3c6-0dd823f03089 | public       | b1799aef-3f69-429c-9881-f81c74d83060 192.169.142.0/24 |
| a65bff8f-e397-491b-aa97-955864bec2f9 | demo_private | 69012862-f72e-4cd2-a4fc-4106d431cf2f 70.0.0.0/24      |
+--------------------------------------+--------------+-------------------------------------------------------+
$ nova boot --image "tutum/wordpress" --flavor m1.tiny --key-name  osxkey --nic net-id=a65bff8f-e397-491b-aa97-955864bec2f9 WordPress

[root@juno ~(keystone_demo)]# nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name      | Status | Task State | Power State | Networks                                |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+
| bf5f7eb9-900d-48bf-a230-275d65813b0f | WordPress | ACTIVE | -          | Running     | demo_private=70.0.0.16, 192.169.142.153 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------------------+

[root@juno ~(keystone_demo)]# docker ps -a

CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS                   PORTS               NAMES
21e84951eabd        tutum/wordpress:latest   "/run.sh"           About an hour ago   Up 11 minutes                                nova-bf5f7eb9-900d-48bf-a230-275d65813b0f 


**************************
Starting WordPress
**************************

Immediately after the VM starts (on the non-default libvirt subnet 192.169.142.0/24) the status of WordPress is SHUTOFF, so we start WordPress (with a browser launched to the Juno VM 192.169.142.45 from the KVM hypervisor server) :-
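
A minimal way to bring it back up from SHUTOFF (with the demo credentials still sourced):

# nova start WordPress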


   Browser launched to WordPress container 192.169.142.153  from KVM  Hypervisor Server
 


**********************************************************************************
The floating IP assigned to the WordPress container is used to launch the browser :-
**********************************************************************************

     



*******************************************************************************************
Another sample demonstrating nova-docker container functionality: a browser launched to the WordPress nova-docker container (192.169.142.155) from the KVM hypervisor server hosting libvirt's subnet (192.169.142.0/24)
*******************************************************************************************


 



  

*****************
MySQL Setup
*****************

# docker pull tutum/mysql
# . keystonerc_admin

*****************************
Creating Glance Image
*****************************
# docker save tutum/mysql:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/mysql:latest

****************************************
Starting Nova-Docker container
****************************************
# . keystonerc_demo
# nova boot --image "tutum/mysql:latest" --flavor m1.tiny --key-name osxkey --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 mysql
 

 [root@ip-192-169-142-45 ~(keystone_demo)]# nova list
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------------------+
| ID                                   | Name          | Status | Task State | Power State | Networks                                |
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------------------+
| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress     | ACTIVE | -          | Running     | demo_network=70.0.0.16, 192.169.142.153 |
| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql         | ACTIVE | -          | Running     | demo_network=70.0.0.19, 192.169.142.155 |
| 626bd8e0-cf1a-4891-aafc-620c464e8a94 | tutum/hipache | ACTIVE | -          | Running     | demo_network=70.0.0.18, 192.169.142.154 |
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------------------+

[root@ip-192-169-142-45 ~(keystone_demo)]# docker ps -a
CONTAINER ID        IMAGE                          COMMAND               CREATED             STATUS                         PORTS               NAMES
3da1e94892aa        tutum/mysql:latest             "/run.sh"             25 seconds ago      Up 23 seconds                                      nova-39eef361-1329-44d9-b05a-f6b4b8693aa3 
77538873a273        tutum/hipache:latest           "/run.sh"             30 minutes ago                                                         condescending_leakey                      
844c75ca5a0e        tutum/hipache:latest           "/run.sh"             31 minutes ago                                                         condescending_turing                      
f477605840d0        tutum/hipache:latest           "/run.sh"             42 minutes ago      Up 31 minutes                                      nova-626bd8e0-cf1a-4891-aafc-620c464e8a94 
3e2fe064d822        rastasheep/ubuntu-sshd:14.04   "/usr/sbin/sshd -D"   About an hour ago   Exited (0) About an hour ago                       test_sshd                                 
8e79f9d8e357        fedora:latest                  "/bin/bash"           About an hour ago   Exited (0) About an hour ago                       evil_colden                               
9531ab33db8d        ubuntu:latest                  "/bin/bash"           About an hour ago   Exited (0) About an hour ago                       angry_bardeen                             
df6f3c9007a7        tutum/wordpress:latest         "/run.sh"             2 hours ago         Up About an hour                                   nova-3dbf981f-f28c-4abe-8fd1-09b8b8cad930

 
[root@ip-192-169-142-45 ~(keystone_demo)]# docker logs 3da1e94892aa
=> An empty or uninitialized MySQL volume is detected in /var/lib/mysql
=> Installing MySQL ...
=> Done!
=> Creating admin user ...
=> Waiting for confirmation of MySQL service startup, trying 0/13 ...
=> Creating MySQL user admin with random password
=> Done!
========================================================================
You can now connect to this MySQL Server using:

    mysql -uadmin -pfXs5UarEYaow -h -P

Please remember to change the above password as soon as possible!
MySQL user 'root' has no password but only allows local connections
========================================================================
141218 20:45:31 mysqld_safe Can't log to error log and syslog at the same time.
Remove all --log-error configuration options for --syslog to take effect.

141218 20:45:31 mysqld_safe Logging to '/var/log/mysql/error.log'.
141218 20:45:31 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql

[root@ip-192-169-142-45 ~(keystone_demo)]# mysql -uadmin -pfXs5UarEYaow -h 192.169.142.155  -P 3306
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.40-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2014, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases ;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.01 sec)

MySQL [(none)]>


*******************************************
Setup Ubuntu 14.04 with SSH access
*******************************************

# docker pull rastasheep/ubuntu-sshd:14.04

# . keystonerc_admin
# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04

# . keystonerc_demo
# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny --key-name  osxkey    --nic net-id=5fcd01ac-bc8e-450d-be67-f0c274edd041 ubuntuTrusty

***********************************************************
Login to dashboard && assign floating IP via dashboard:-
***********************************************************




  [root@ip-192-169-142-45 ~(keystone_demo)]# nova list
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------------+
| ID                                   | Name         | Status  | Task State | Power State | Networks                                |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------------+
| 3dbf981f-f28c-4abe-8fd1-09b8b8cad930 | WordPress    | SHUTOFF | -          | Shutdown    | demo_network=70.0.0.16, 192.169.142.153 |
| 7bbf887f-167c-461e-9ee0-dd4d43605c9e | lamp         | ACTIVE  | -          | Running     | demo_network=70.0.0.20, 192.169.142.156 |
| 39eef361-1329-44d9-b05a-f6b4b8693aa3 | mysql        | SHUTOFF | -          | Shutdown    | demo_network=70.0.0.19, 192.169.142.155 |
| f21dc265-958e-4ed0-9251-31c4bbab35f4 | ubuntuTrusty | ACTIVE  | -          | Running     | demo_network=70.0.0.21, 192.169.142.157 |
+--------------------------------------+--------------+---------+------------+-------------+-----------------------------------------+
[root@ip-192-169-142-45 ~(keystone_demo)]# ssh root@192.169.142.157
root@192.169.142.157's password:
Last login: Fri Dec 19 09:19:40 2014 from ip-192-169-142-45.ip.secureserver.net
root@instance-0000000d:~# cat /etc/issue
Ubuntu 14.04.1 LTS \n \l

root@instance-0000000d:~# ifconfig
lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

nse49711e9-93 Link encap:Ethernet  HWaddr fa:16:3e:32:5e:d8 
          inet addr:70.0.0.21  Bcast:70.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe32:5ed8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2574 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1653 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2257920 (2.2 MB)  TX bytes:255582 (255.5 KB)


root@instance-0000000d:~# df -h
Filesystem                                                                                         Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:1-4600578-76893e146987bf4b58b42ff6ed80892df938ffba108f22c7a4591b18990e0438  9.8G  302M  9.0G   4% /
tmpfs                                                                                              1.9G     0  1.9G   0% /dev
shm                                                                                                 64M     0   64M   0% /dev/shm
/dev/mapper/centos-root                                                                             36G  9.8G   26G  28% /etc/hosts
tmpfs                                                                                              1.9G     0  1.9G   0% /run/secrets
tmpfs                                                                                              1.9G     0  1.9G   0% /proc/kcore


  

************************************************************
Set up VNC with Ubuntu SSH nova-docker container
************************************************************

root@instance-0000000e:~# cat /etc/issue
Ubuntu 14.04.1 LTS \n \l

root@instance-0000000e:~# apt-get install xorg fluxbox vnc4server -y
root@instance-0000000e:~# echo "exec fluxbox">> ~/.xinitrc
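
The ps output below shows Xvnc4 already listening on display :1; a plausible way to have started it (not captured in the original session) is:

root@instance-0000000e:~# vncserver :1 -geometry 1024x768 -depth 16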


root@instance-0000000e:~# ifconfig                            
lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

ns6006a533-77 Link encap:Ethernet  HWaddr fa:16:3e:26:64:d2 
          inet addr:70.0.0.22  Bcast:70.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::f816:3eff:fe26:64d2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4284 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5771 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:706698 (706.6 KB)  TX bytes:2866607 (2.8 MB)

root@instance-0000000e:~# ps -ef | grep vnc
root        33     1  0 15:48 pts/0    00:00:02 Xvnc4 :1 -desktop instance-0000000e:1 (root) -auth /root/.Xauthority -geometry 1024x768 -depth 16 -rfbwait 30000 -rfbauth /root/.vnc/passwd -rfbport 5901 -pn -fp /usr/X11R6/lib/X11/fonts/Type1/,/usr/X11R6/lib/X11/fonts/Speedo/,/usr/X11R6/lib/X11/fonts/misc/,/usr/X11R6/lib/X11/fonts/75dpi/,/usr/X11R6/lib/X11/fonts/100dpi/,/usr/share/fonts/X11/misc/,/usr/share/fonts/X11/Type1/,/usr/share/fonts/X11/75dpi/,/usr/share/fonts/X11/100dpi/ -co /etc/X11/rgb
root        39     1  0 15:48 pts/0    00:00:00 vncconfig -iconic
root       148     9  0 15:59 pts/0    00:00:00 grep --color=auto vnc



  

 
**************************************************
Testing Tomcat in a Nova-Docker Container
**************************************************

[root@junodocker ~]# docker pull tutum/tomcat
Pulling repository tutum/tomcat
02e84f04100e: Download complete
511136ea3c5a: Download complete
1c9383292a8f: Download complete
9942dd43ff21: Download complete
d92c3c92fa73: Download complete
0ea0d582fd90: Download complete
cc58e55aa5a5: Download complete
c4ff7513909d: Download complete
3e9890d3403b: Download complete
b9192a10c580: Download complete
28e5e6a80860: Download complete
2128079f9b8f: Download complete
1a9cd5ad5ba2: Download complete
a4432dd90a00: Download complete
9e76a879f59b: Download complete
1cd5071591ec: Download complete
abc71ecb910e: Download complete
4f2b619579e3: Download complete
7907e64b4ca0: Download complete
80446f8c6fc0: Download complete
747d7f2e49a2: Download complete
c6e054e6696d: Download complete
Status: Downloaded newer image for tutum/tomcat:latest

[root@junodocker ~]# . keystonerc_admin

[root@junodocker ~(keystone_admin)]#  docker save tutum/tomcat:latest | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/tomcat:latest
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | a96192a982d17a1b63d81ff28eda07fe     |
| container_format | docker                               |
| created_at       | 2014-12-25T13:20:44                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | raw                                  |
| id               | a18ef105-74af-4b05-a5e7-76425de7b9fb |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | tutum/tomcat:latest                  |
| owner            | 304f8ad83c414cb79c2af21d2c89880d     |
| protected        | False                                |
| size             | 552029184                            |
| status           | active                               |
| updated_at       | 2014-12-25T13:21:29                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

***************************
Deployed via dashboard
***************************


[root@junodocker ~(keystone_admin)]# docker ps -a
CONTAINER ID        IMAGE                                      COMMAND                CREATED             STATUS                  PORTS               NAMES
bc6b30f07717        tutum/tomcat:latest                        "/run.sh"              2 minutes ago       Up 2 minutes                                nova-bc5ed111-aa9e-44e7-a5ba-d650d46bfcd7  
3527cc82e982        eugeneware/docker-wordpress-nginx:latest   "/bin/bash /start.sh   2 days ago          Up 2 hours                                  nova-d55a0876-acf5-4af7-9240-4e44810ebb21  
009d1cadcab5        rastasheep/ubuntu-sshd:14.04               "/usr/sbin/sshd -D"    2 days ago          Up About an hour                            nova-716e0421-8e56-4b19-a447-b39e5aedbc6b  
b23f9d238a2a        rastasheep/ubuntu-sshd:14.04               "/usr/sbin/sshd -D"    5 days ago          Exited (0) 2 days ago                       nova-ae40aae3-c148-4def-8a4b-0adf69e38f8d  
[root@junodocker ~(keystone_admin)]# docker logs bc6b30f07717
=> Creating and admin user with a random password in Tomcat
=> Done!
===============================================
You can now configure to this Tomcat server using:

    admin:8VMttCo2lE5O

===============================================


  

  


Another sample to test is kumarpraveen/fedora-sshd.
See https://registry.hub.docker.com/u/kumarpraveen/fedora-sshd/

 References

Thursday, December 11, 2014

Testing Juno on CentOS 7 with SELinux enforced

AVC denial during packstack run :-


  

****************************************************************************
Attempt to restart neutron-dhcp-agent.service with dnsmasq enabled
****************************************************************************

[root@juno ~]# systemctl status  neutron-dhcp-agent.service -l
neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; enabled)
   Active: active (running) since Thu 2014-12-11 08:17:45 EST; 23s ago
 Main PID: 10255 (neutron-dhcp-ag)
   CGroup: /system.slice/neutron-dhcp-agent.service
           └─10255 /usr/bin/python /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/dhcp_agent.ini --log-file /var/log/neutron/dhcp-agent.log

Dec 11 08:17:45 juno.localdomain systemd[1]: Starting OpenStack Neutron DHCP Agent...
Dec 11 08:17:45 juno.localdomain systemd[1]: Started OpenStack Neutron DHCP Agent.
Dec 11 08:17:47 juno.localdomain sudo[10266]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-ccfc4bb1-696d-4381-91d7-28ce7c9cb009 ip link set tap6d7e5854-58 up
Dec 11 08:17:48 juno.localdomain sudo[10269]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-ccfc4bb1-696d-4381-91d7-28ce7c9cb009 ip addr show tap6d7e5854-58 permanent scope global
Dec 11 08:17:48 juno.localdomain sudo[10276]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-ccfc4bb1-696d-4381-91d7-28ce7c9cb009 ip route list dev tap6d7e5854-58 scope link
Dec 11 08:17:48 juno.localdomain sudo[10285]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-ccfc4bb1-696d-4381-91d7-28ce7c9cb009 ip route list dev tap6d7e5854-58
Dec 11 08:17:48 juno.localdomain sudo[10289]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-rootwrap /etc/neutron/rootwrap.conf ip netns exec qdhcp-ccfc4bb1-696d-4381-91d7-28ce7c9cb009 env NEUTRON_NETWORK_ID=ccfc4bb1-696d-4381-91d7-28ce7c9cb009 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap6d7e5854-58 --except-interface=lo --pid-file=/var/lib/neutron/dhcp/ccfc4bb1-696d-4381-91d7-28ce7c9cb009/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/ccfc4bb1-696d-4381-91d7-28ce7c9cb009/host --addn-hosts=/var/lib/neutron/dhcp/ccfc4bb1-696d-4381-91d7-28ce7c9cb009/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/ccfc4bb1-696d-4381-91d7-28ce7c9cb009/opts --leasefile-ro --dhcp-range=set:tag0,10.0.0.0,static,86400s --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --domain=openstacklocal
Dec 11 08:17:49 juno.localdomain dnsmasq[10291]: cannot open log /var/log/neutron/dnsmasq.log: Permission denied
Dec 11 08:17:49 juno.localdomain dnsmasq[10291]: FAILED to start up

************************************************
cat /var/log/audit/audit.log | grep -i avc
************************************************

type=USER_AVC msg=audit(1418301241.093:22175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='avc:  received setenforce notice (enforcing=0)  exe="/usr/lib/systemd/systemd" sauid=0 hostname=? addr=? terminal=?'
type=AVC msg=audit(1418302563.923:26020): avc:  denied  { signal } for  pid=11874 comm="keystone-all" scontext=system_u:system_r:keystone_t:s0 tcontext=system_u:system_r:keystone_t:s0 tclass=process
type=AVC msg=audit(1418302655.746:250): avc:  denied  { search } for  pid=4126 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302655.754:252): avc:  denied  { search } for  pid=4126 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302655.754:253): avc:  denied  { search } for  pid=4126 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302655.754:254): avc:  denied  { search } for  pid=4126 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302655.754:255): avc:  denied  { search } for  pid=4126 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302655.968:260): avc:  denied  { search } for  pid=4128 comm="nova-rootwrap" name=".local" dev="dm-1" ino=138303468 scontext=system_u:system_r:nova_api_t:s0 tcontext=unconfined_u:object_r:gconf_home_t:s0 tclass=dir
type=AVC msg=audit(1418302656.488:272): avc:  denied  { search } for  pid=4138 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302656.496:274): avc:  denied  { search } for  pid=4138 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302656.496:275): avc:  denied  { search } for  pid=4138 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302656.496:276): avc:  denied  { search } for  pid=4138 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302656.496:277): avc:  denied  { search } for  pid=4138 comm="sudo" name="sss" dev="dm-1" ino=136223202 scontext=system_u:system_r:nova_api_t:s0 tcontext=system_u:object_r:sssd_var_lib_t:s0 tclass=dir
type=AVC msg=audit(1418302656.543:280): avc:  denied  { search } for  pid=4139 comm="nova-rootwrap" name=".local" dev="dm-1" ino=138303468 scontext=system_u:system_r:nova_api_t:s0 tcontext=unconfined_u:object_r:gconf_home_t:s0 tclass=dir
type=AVC msg=audit(1418303504.663:3261): avc:  denied  { search } for  pid=8861 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303535.870:3369): avc:  denied  { search } for  pid=8986 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303567.129:3480): avc:  denied  { search } for  pid=9086 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303598.400:3593): avc:  denied  { search } for  pid=9217 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303629.511:3716): avc:  denied  { search } for  pid=9368 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303660.889:3834): avc:  denied  { search } for  pid=9503 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303692.339:3938): avc:  denied  { search } for  pid=9626 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303723.502:4053): avc:  denied  { search } for  pid=9767 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303754.847:4159): avc:  denied  { search } for  pid=9875 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303786.015:4272): avc:  denied  { search } for  pid=9984 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303817.131:4378): avc:  denied  { search } for  pid=10089 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303848.215:4491): avc:  denied  { search } for  pid=10198 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303869.080:4574): avc:  denied  { search } for  pid=10293 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303900.477:4690): avc:  denied  { search } for  pid=10425 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303931.657:4793): avc:  denied  { search } for  pid=10542 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303962.808:4911): avc:  denied  { search } for  pid=10669 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418303993.921:5012): avc:  denied  { search } for  pid=10782 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir
type=AVC msg=audit(1418304025.148:5137): avc:  denied  { search } for  pid=10916 comm="dnsmasq" name="neutron" dev="dm-1" ino=204294423 scontext=system_u:system_r:dnsmasq_t:s0 tcontext=system_u:object_r:neutron_log_t:s0 tclass=dir

*******************
Disabling SELinux
*******************
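
A minimal way to drop SELinux into permissive mode (standard CentOS 7 tooling; building a targeted policy module with audit2allow would be the less drastic alternative):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config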

[root@juno ~]# systemctl status  neutron-dhcp-agent.service
neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; enabled)
   Active: active (running) since Thu 2014-12-11 08:25:52 EST; 14s ago
 Main PID: 12515 (neutron-dhcp-ag)
   CGroup: /system.slice/neutron-dhcp-agent.service
           ├─12515 /usr/bin/python /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutro...
           └─12542 dnsmasq --no-hosts --no-resolv --strict-order --bind-interfaces --interface=tap6d7e...

Dec 11 08:25:52 juno.localdomain systemd[1]: Starting OpenStack Neutron DHCP Agent...
Dec 11 08:25:52 juno.localdomain systemd[1]: Started OpenStack Neutron DHCP Agent.
Dec 11 08:25:53 juno.localdomain sudo[12526]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/... up
Dec 11 08:25:53 juno.localdomain sudo[12529]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/...bal
Dec 11 08:25:54 juno.localdomain sudo[12532]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/...ink
Dec 11 08:25:54 juno.localdomain sudo[12535]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/...-58
Dec 11 08:25:54 juno.localdomain sudo[12538]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/...ace
Dec 11 08:25:56 juno.localdomain python[12544]: SELinux is preventing /usr/sbin/dnsmasq from write ...y .
                                               
                                                *****  Plugin catchall (100. confidence) suggests   **...
Dec 11 08:25:56 juno.localdomain python[12544]: SELinux is preventing /usr/sbin/dnsmasq from write ...y .
                                               
                                                *****  Plugin catchall (100. confidence) suggests   **...
Dec 11 08:25:56 juno.localdomain python[12544]: SELinux is preventing /usr/sbin/dnsmasq from write ...y .
                                               
                                                *****  Plugin catchall (100. confidence) suggests   **...
Dec 11 08:25:56 juno.localdomain python[12544]: SELinux is preventing /usr/sbin/dnsmasq from write ...y .
                                               
                                                *****  Plugin catchall (100. confidence) suggests   **...
Dec 11 08:25:56 juno.localdomain python[12544]: SELinux is preventing /usr/sbin/dnsmasq from setatt...e .
                                               
                                                *****  Plugin catchall (100. confidence) suggests   **...
Hint: Some lines were ellipsized, use -l to show in full.

Virt-install W7 (Evaluation copy) with virtio-win-0.1-81.iso

**************
Virt-install
**************

qemu-img create -f qcow2 win7.img 15G
 
virt-install --connect qemu:///system \
    --name WIN7KVM --ram 2048 --vcpus=2 --cpuset=auto \
    --disk path=/var/lib/libvirt/images/win7.img,bus=virtio \
    --network=network=openstackvms,model=virtio,mac=RANDOM  \
    --graphics vnc,port=5903 \
    --disk device=cdrom,path=/home/boris/isos/virtio-win-0.1-81.iso  \
    --disk device=cdrom,path=/home/boris/isos/Win7.iso \
    --os-type=windows --os-variant=win7 --boot cdrom,hd


[boris@juno2 ~]$ virt-install --connect qemu:///system \
>     --name WIN7KVM --ram 2048 --vcpus=2 --cpuset=auto \
>     --disk path=/var/lib/libvirt/images/win7.img,bus=virtio \
>     --network=network=openstackvms,model=virtio,mac=RANDOM  \
>     --graphics vnc,port=5904 \
>     --disk device=cdrom,path=/home/boris/isos/virtio-win-0.1-81.iso  \
>     --disk device=cdrom,path=/home/boris/isos/Win7.iso \
>     --os-type=windows --os-variant=win7 --boot cdrom,hd

Starting install...
Creating domain...                                                                |    0 B  00:00:00    

(virt-viewer:4713): Gtk-WARNING **: Attempting to add a widget with type VncDisplay to a container of type VirtViewerDisplayVnc, but the widget is already inside a container of type VirtViewerDisplayVnc, please use gtk_widget_reparent()

Domain creation completed. You can restart your domain by running:
  virsh --connect qemu:///system start WIN7KVM


  
************************
Virt-Manager
************************

Conversion raw => qcow2

qemu-img convert -f raw -O qcow2 Windows7.img  Windows7.qcow2
virsh edit Windows7 => /var/lib/libvirt/images/Windows7.qcow2
Update Virt-manager page Virtio Disk1=>Advanced Options=>Storage format=>qcow2
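The converted image can be verified before booting (a quick check, not part of the original session):

# qemu-img info /var/lib/libvirt/images/Windows7.qcow2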
   

Friday, November 28, 2014

VXLAN tenant subnets and "VLAN tags" on Juno


  In the case of GRE or VXLAN tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and in the output of "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving the physical interface. They are only used by Open vSwitch to separate traffic on br-int, so that the different tap interfaces corresponding to different neutron subnets do not see each other's traffic. Since these tags are not 12-bit 802.1Q fields, the 4096 limit is not relevant here.
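
The tunnel-id to local-tag mapping can be listed directly on any node; mod_vlan_vid in the flow actions shows the locally chosen tag:

# ovs-ofctl dump-flows br-tun | grep mod_vlan_vid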

 
**************************
ON CONTROLLER
**************************

[root@juno1 ~(keystone_admin)]# neutron net-list
+--------------------------------------+-----------------+-----------------------------------------------------+
| id                                   | name            | subnets                                             |
+--------------------------------------+-----------------+-----------------------------------------------------+
| 90b574e2-f51a-423e-aef9-c201f6f68b76 | kashyap_private | 5fe9e3cc-feee-4f51-bed8-f4891bd8aafe 40.0.0.0/24    |
| 65cbd354-daae-41bb-9d3c-e58b1062be19 | public          | 147d5ecd-fe39-489e-8901-3b20a2c50148 192.168.1.0/24 |
| 8b2de478-de3f-448e-8ec1-8f973a762daf | boris_network   | 4142d2c6-220c-4b47-8147-cf512f7a753b 15.0.0.0/24    |
| 3fdb2eb7-fff8-4633-824b-1da4c38ccbd5 | kashyap_network | 8bee3a8d-7fac-4d05-84e3-bbe52a601084 70.0.0.0/24    |
| 951715cc-2a19-470e-9987-25b3b7906756 | demo_network    | d512bb49-29f0-4fe2-886a-10d880cc83fc 10.0.0.0/24    |
+--------------------------------------+-----------------+-----------------------------------------------------+
[root@juno1 ~(keystone_admin)]# ip netns exec qdhcp-90b574e2-f51a-423e-aef9-c201f6f68b76 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1  bytes 576 (576.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 576 (576.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tap5532b72d-8c: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 40.0.0.11  netmask 255.255.255.0  broadcast 40.0.0.255
        inet6 fe80::f816:3eff:fe6e:91a7  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:6e:91:a7  txqueuelen 0  (Ethernet)
        RX packets 27  bytes 2982 (2.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 12  bytes 2112 (2.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@juno1 ~(keystone_admin)]# ovs-vsctl show
f2113bd0-c4ca-4c4b-af16-928ff03e53da
    Bridge br-int
        fail_mode: secure
        Port "tap37cfc1fc-09"
            tag: 1
            Interface "tap37cfc1fc-09"
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qr-7ff72517-f3"
            tag: 4
            Interface "qr-7ff72517-f3"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap2d26c49a-37"
            tag: 5
            Interface "tap2d26c49a-37"
                type: internal
        Port "tap5532b72d-8c"
            tag: 4    <==== tag 4 on Controller Node
            Interface "tap5532b72d-8c"

                type: internal
        Port "qr-bd92408c-b4"
            tag: 5
            Interface "qr-bd92408c-b4"
                type: internal
        Port "qr-a494fcc8-e5"
            tag: 2
            Interface "qr-a494fcc8-e5"
                type: internal
        Port "qr-4162d98e-5c"
            tag: 1
            Interface "qr-4162d98e-5c"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap305cff35-34"
            tag: 2
            Interface "tap305cff35-34"
                type: internal
    Bridge br-ex
        Port "qg-88b1ef62-2c"
            Interface "qg-88b1ef62-2c"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "qg-d3e929c6-ba"
            Interface "qg-d3e929c6-ba"
                type: internal
        Port "enp2s0"
            Interface "enp2s0"
        Port "qg-5c6dd032-a8"
            Interface "qg-5c6dd032-a8"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port "qg-7b037650-10"
            Interface "qg-7b037650-10"
                type: internal
    Bridge br-tun
        Port "vxlan-c0a80089"
            Interface "vxlan-c0a80089"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.127", out_key=flow, remote_ip="192.168.0.137"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.1.3"


************************************************************************
So the kashyap_private network corresponds to tag 4 on the Controller
************************************************************************
**********************
ON COMPUTE
**********************
[root@juno2 ~(keystone_kashyap)]# nova list
+--------------------------------------+------------+--------+------------+-------------+------------------------------------------+
| ID                                   | Name       | Status | Task State | Power State | Networks                                 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------------------+
| 1359ac92-8092-47bc-b7d6-ee474b641355 | CirrOS321  | ACTIVE | -          | Running     | kashyap_private=40.0.0.12, 192.168.1.155 |
+--------------------------------------+------------+--------+------------+-------------+------------------------------------------+

VM is running on kashyap_private network (40.0.0.0/24)

[root@juno2 ~(keystone_kashyap)]# nova show 1359ac92-8092-47bc-b7d6-ee474b641355
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2014-11-28T15:21:59.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2014-11-28T15:21:48Z                                     |
| flavor                               | m1.tiny (1)                                              |
| hostId                               | 7339f21099a5d4d918b8e69302c1dab2d5f7af63babf36d02057177d |
| id                                   | 1359ac92-8092-47bc-b7d6-ee474b641355                     |
| image                                | cirros (6f7d1877-9b6b-4530-b868-9fe42a71bca9)            |
| kashyap_private network              | 40.0.0.12, 192.168.1.155                                 |
| key_name                             | oskey37                                                  |
| metadata                             | {}                                                       |
| name                                 | CirrOS321                                                |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 2a8be6536a864dd0b73782fd0fb2faff                         |
| updated                              | 2014-11-28T15:21:59Z                                     |
| user_id                              | bce3ed8aaa97447b8edfe3cf734b0793                         |
+--------------------------------------+----------------------------------------------------------+

****************************
Identifying qvo* port
****************************

[root@juno2 ~(keystone_kashyap)]# brctl show

bridge name    bridge id        STP enabled    interfaces
qbr23b93632-0d        8000.1e955c2e4f2e    no        qvb23b93632-0d
                            tap23b93632-0d

qbr37f299e0-de        8000.1a32e0906384    no        qvb37f299e0-de
qbra0fa9687-39        8000.f6fd5e4399e7    no        qvba0fa9687-39
qbrff853471-05        8000.9ef67b07ba76    no        qvbff853471-05

[root@juno2 ~(keystone_admin)]# virsh dumpxml 1359ac92-8092-47bc-b7d6-ee474b641355 | grep 23b93632-0d
     
      <source bridge='qbr23b93632-0d'/>
      <target dev='tap23b93632-0d'/>


***********************************************************
Tracking veth-pair ( qvb23b93632-0d, qvo23b93632-0d )
***********************************************************
[root@juno2 ~(keystone_kashyap)]# ovs-vsctl show

79e82e7f-9040-4789-b8c8-d7d397ec230b
    Bridge br-tun
        Port "vxlan-c0a8007f"
            Interface "vxlan-c0a8007f"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.0.137", out_key=flow, remote_ip="192.168.0.127"}
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qvoff853471-05"
            tag: 2
            Interface "qvoff853471-05"
        Port "qvoa0fa9687-39"
            tag: 1
            Interface "qvoa0fa9687-39"
        Port "qvod282b303-3a"
            tag: 5
            Interface "qvod282b303-3a"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo23b93632-0d"
            tag: 6  <=== tag 6 on Compute Node
            Interface "qvo23b93632-0d"

        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvo37f299e0-de"
            tag: 1
            Interface "qvo37f299e0-de"
    ovs_version: "2.1.3"

****************************************************************************
So the kashyap_private network corresponds to tag 6 on the Compute Node
****************************************************************************

Wednesday, November 26, 2014

ovs-ofctl dump-flows br-tun & VXLAN

Three VXLAN tenant networks created on the Controller

#########################################
Controller && Network Node: ovs-ofctl dump-flows br-tun
#########################################
cookie=0x0, duration=11839.724s, table=4, n_packets=17158, n_bytes=1355764, idle_age=1506, priority=1,tun_id=0x3ee actions=mod_vlan_vid:5,resubmit(,10)
cookie=0x0, duration=43283.552s, table=4, n_packets=131115, n_bytes=9306495, idle_age=327, priority=1,tun_id=0x3ec actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=43280.638s, table=4, n_packets=60742, n_bytes=4530221, idle_age=5242, priority=1,tun_id=0x3ea actions=mod_vlan_vid:3,resubmit(,10)

In the case of GRE (VXLAN) tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and in the output of "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving the physical interface. They are only used by Open vSwitch to separate traffic going from br-tun to br-int, so that the different tap interfaces corresponding to different neutron subnets do not see each other's traffic.

########################################
Compute Node : ovs-ofctl dump-flows br-tun
########################################
cookie=0x0, duration=11239.277s, table=4, n_packets=28289, n_bytes=40742145, idle_age=1670, priority=1,tun_id=0x3ee actions=mod_vlan_vid:6,resubmit(,10)
cookie=0x0, duration=43497.709s, table=4, n_packets=188677, n_bytes=281310140, idle_age=491, priority=1,tun_id=0x3ec actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=17757.690s, table=4, n_packets=107542, n_bytes=155828433, idle_age=5406, priority=1,tun_id=0x3ea actions=mod_vlan_vid:4,resubmit(,10)

The VLAN tags here correspond to the qvo* interfaces (tap interfaces) of the nova instances running on the Compute Node. As the `ovs-ofctl dump-flows br-tun` output above shows, each tag is used when transferring data from br-tun to br-int for its corresponding VM.
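
The reverse direction can be checked the same way: on egress the local tag is stripped and the VNI re-applied by a set_tunnel action, so the mapping shows up with:

# ovs-ofctl dump-flows br-tun | grep set_tunnel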

Monday, November 24, 2014

How VMs access metadata via qrouter-namespace in Juno

    It is actually an update of http://techbackground.blogspot.ie/2013/06/metadata-via-quantum-router.html for Neutron on Juno (the original blog considers the Quantum implementation on Grizzly). From my standpoint, the flow by which VMs launched via nova reach the nova-api metadata service (and get a proper response from nova-api) causes a lot of problems, due to a lack of understanding of the core architecture of Neutron.

Neutron proxies metadata requests to Nova adding HTTP headers which Nova uses to identify the source instance. Neutron actually uses two proxies to do this: a namespace proxy and a metadata agent.
 This post shows how a metadata request gets from an instance to the Nova metadata service via a namespace proxy running in a Neutron router.


  

         
1. Instance makes request
[fedora@vf20rsx2211 ~]$ curl http://169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
security-groups
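
Individual keys resolve the same way, e.g. the instance id (an extra request, not part of the original capture):

[fedora@vf20rsx2211 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id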

[fedora@vf20rsx2211 ~]$ ip -4 address show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 50.0.0.44/24 brd 50.0.0.255 scope global dynamic eth0
       valid_lft 70518sec preferred_lft 70518sec

[fedora@vf20rsx2211 ~]$ ip route
default via 50.0.0.1 dev eth0  proto static  metric 1024
50.0.0.0/24 dev eth0  proto kernel  scope link  src 50.0.0.44

[fedora@vf20rsx2211 ~]$ ip route get 169.254.169.254
169.254.169.254 via 50.0.0.1 dev eth0  src 50.0.0.44
    cache

2. Namespace proxy receives request
The default gateway 50.0.0.1 exists within a Neutron router namespace on the network node. The neutron-l3-agent started a namespace proxy in this namespace and added some iptables rules to redirect metadata requests to it. There are no special routes, so the request goes out the default gateway; of course, the Neutron router needs to have an interface on the subnet.




[root@juno1 ~(keystone_admin)]# ip netns exec qdhcp-45577666-657d-4f75-a3ab-9bc232f15203 route -n


Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         50.0.0.1        0.0.0.0         UG    0      0        0 tap7a12f9b0-a4
50.0.0.0        0.0.0.0         255.255.255.0   U     0      0        0 tap7a12f9b0-a4

[root@juno1 ~(keystone_admin)]# neutron router-list
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+
| id                                   | name    | external_gateway_info                                                                                                                                                                     | distributed | ha    |
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

| 1cf08ea2-959f-4206-b2f1-a9b4708399c1 | router4 | {"network_id": "65cbd354-daae-41bb-9d3c-e58b1062be19", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "147d5ecd-fe39-489e-8901-3b20a2c50148", "ip_address": "192.168.1.173"}]} | False       | False |
+--------------------------------------+---------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+-------+

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 ifconfig

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qg-7b037650-10: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.173  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::f816:3eff:fee5:de97  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:e5:de:97  txqueuelen 0  (Ethernet)
        RX packets 63929  bytes 87480425 (83.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 36523  bytes 5286278 (5.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

qr-17ddee14-9f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 50.0.0.1  netmask 255.255.255.0  broadcast 50.0.0.255
        inet6 fe80::f816:3eff:fe6f:a8e7  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:6f:a8:e7  txqueuelen 0  (Ethernet)
        RX packets 36643  bytes 5304620 (5.0 MiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 62940  bytes 87350558 (83.3 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 ip -4 address show dev qr-17ddee14-9f
 

16: qr-17ddee14-9f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    inet 50.0.0.1/24 brd 50.0.0.255 scope global qr-17ddee14-9f
       valid_lft forever preferred_lft forever

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 iptables-save| grep 9697


-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 9697 -j ACCEPT

[root@juno1 ~(keystone_admin)]# ip netns exec qrouter-1cf08ea2-959f-4206-b2f1-a9b4708399c1 netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      6755/python        

[root@juno1 ~(keystone_admin)]# ps -f --pid 6755 | fold -s -w 82

UID        PID  PPID  C STIME TTY          TIME CMD
root      6755     1  0 08:01 ?        00:00:00 /usr/bin/python
/bin/neutron-ns-metadata-proxy
--pid_file=/var/lib/neutron/external/pids/1cf08ea2-959f-4206-b2f1-a9b4708399c1.pid
 --metadata_proxy_socket=/var/lib/neutron/metadata_proxy
--router_id=1cf08ea2-959f-4206-b2f1-a9b4708399c1 --state_path=/var/lib/neutron
--metadata_port=9697 --verbose
--log-file=neutron-ns-metadata-proxy-1cf08ea2-959f-4206-b2f1-a9b4708399c1.log
--log-dir=/var/log/neutron

The namespace proxy adds two HTTP headers to the request:
    X-Forwarded-For: the instance's IP address
    X-Neutron-Router-ID: the UUID of the Neutron router
and proxies it to a Unix domain socket named
/var/lib/neutron/metadata_proxy.
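
A minimal sketch of what the namespace proxy does, in illustrative Python (this is not the actual neutron-ns-metadata-proxy source): take the instance's GET, add the two headers, and relay it over the agent's Unix domain socket.

import socket

def forward_metadata_request(path, instance_ip, router_id,
                             agent_socket='/var/lib/neutron/metadata_proxy'):
    # Build the proxied HTTP request carrying the two identifying headers.
    request = ('GET %s HTTP/1.0\r\n'
               'Host: 169.254.169.254\r\n'
               'X-Forwarded-For: %s\r\n'
               'X-Neutron-Router-ID: %s\r\n'
               '\r\n' % (path, instance_ip, router_id))
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect(agent_socket)
    sock.sendall(request.encode())
    # HTTP/1.0: the agent closes the connection when the response is done.
    chunks = []
    while True:
        data = sock.recv(4096)
        if not data:
            break
        chunks.append(data)
    sock.close()
    return b''.join(chunks)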


3. Metadata agent receives request and queries the Neutron service
The metadata agent listens on this Unix socket. It is an ordinary Linux service that runs in the root network namespace, so it can reach both the Neutron service and the Nova metadata service; its configuration file contains all the information it needs to do so.

[root@juno1 ~(keystone_admin)]# netstat -lxp | grep metadata
unix  2      [ ACC ]     STREAM     LISTENING     36027    1589/python          /var/lib/neutron/metadata_proxy

[root@juno1 ~(keystone_admin)]#  lsof /var/lib/neutron/metadata_proxy
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
      Output information may be incomplete.
COMMAND    PID    USER   FD   TYPE             DEVICE SIZE/OFF  NODE NAME
neutron-m 1589 neutron    5u  unix 0xffff8800c269a580      0t0 36027 /var/lib/neutron/metadata_proxy
neutron-m 3412 neutron    5u  unix 0xffff8800c269a580      0t0 36027 /var/lib/neutron/metadata_proxy
neutron-m 3413 neutron    5u  unix 0xffff8800c269a580      0t0 36027 /var/lib/neutron/metadata_proxy

[root@juno1 ~(keystone_admin)]# ps -f --pid 1589 | fold -w 80 -s
UID        PID  PPID  C STIME TTY          TIME CMD
neutron   1589     1  0 07:59 ?        00:00:03 /usr/bin/python
/usr/bin/neutron-metadata-agent --config-file
/usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/metadata_agent.ini --log-file
/var/log/neutron/metadata-agent.log


[root@juno1 neutron(keystone_admin)]#  grep -v '^#\|^\s*$' /etc/neutron/metadata_agent.ini

[DEFAULT]
debug = False
auth_url = http://192.168.1.127:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 808e36e154bd4cee
nova_metadata_ip = 192.168.1.127
nova_metadata_port = 8775
metadata_proxy_shared_secret = a965cd23ed2f4502
metadata_workers = 2
metadata_backlog = 4096

It reads the X-Forwarded-For and X-Neutron-Router-ID headers from the request and queries the Neutron service for the port whose fixed IP matches X-Forwarded-For on a subnet attached to that router; the port's device_id is the UUID of the instance that sent the request.
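
Roughly the same lookup can be reproduced with python-neutronclient, using the credentials from metadata_agent.ini above (a sketch of the idea, not the agent's actual code):

from neutronclient.v2_0 import client

neutron = client.Client(username='neutron',
                        password='808e36e154bd4cee',
                        tenant_name='services',
                        auth_url='http://192.168.1.127:35357/v2.0')

# Find the port carrying the instance's fixed IP (from X-Forwarded-For);
# the real agent additionally filters by the router's internal networks.
ports = neutron.list_ports(fixed_ips=['ip_address=50.0.0.44'])['ports']
instance_id = ports[0]['device_id']   # the Nova instance UUID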


4. Metadata agent proxies request to Nova metadata service
It then adds these headers:
    X-Instance-ID: the instance ID returned from Neutron
    X-Instance-ID-Signature: the instance ID signed with the shared secret
    X-Forwarded-For: the instance's IP address
and proxies the request to the Nova metadata service.
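
The signature is an HMAC-SHA256 of the instance UUID keyed with metadata_proxy_shared_secret. A short illustration (the UUID below is made up):

import hmac
import hashlib

shared_secret = 'a965cd23ed2f4502'   # metadata_proxy_shared_secret above
instance_id = '0b678c2d-0000-0000-0000-000000000000'   # hypothetical UUID

# This hex digest is what goes into the X-Instance-ID-Signature header.
signature = hmac.new(shared_secret.encode(), instance_id.encode(),
                     hashlib.sha256).hexdigest()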

5. Nova metadata service receives request
The metadata service is started by nova-api (note the metadata entry in enabled_apis below). The handler recomputes the signature of X-Instance-ID with its copy of the shared secret, compares it to X-Instance-ID-Signature, looks up the requested data, and returns a response that travels back through the two proxies to the instance.
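
On the Nova side the check mirrors the signing step; conceptually it looks something like the following (illustrative names, not Nova's actual functions):

import hmac
import hashlib

def signature_is_valid(instance_id, received_signature,
                       shared_secret='a965cd23ed2f4502'):
    expected = hmac.new(shared_secret.encode(), instance_id.encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison; on a mismatch Nova answers 403 Forbidden.
    return hmac.compare_digest(expected, received_signature)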


[root@juno1 nova(keystone_admin)]# grep metadata /etc/nova/nova.conf | grep -v ^# | grep -v ^$
enabled_apis=ec2,osapi_compute,metadata
metadata_listen=0.0.0.0
metadata_workers=2
metadata_host=192.168.1.127
neutron_metadata_proxy_shared_secret=a965cd23ed2f4502
service_neutron_metadata_proxy=True


[root@juno1 nova(keystone_admin)]# grep metadata /var/log/nova/nova-api.log | tail -15
2014-11-24 08:20:39.208 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/public-keys/ HTTP/1.1" status: 200 len: 125 time: 0.0013790
2014-11-24 08:20:39.217 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/ami-id HTTP/1.1" status: 200 len: 120 time: 0.0014508
2014-11-24 08:20:39.227 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/kernel-id HTTP/1.1" status: 200 len: 120 time: 0.0014200
2014-11-24 08:20:39.237 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/instance-action HTTP/1.1" status: 200 len: 120 time: 0.0013640
2014-11-24 08:20:39.247 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/public-ipv4 HTTP/1.1" status: 200 len: 130 time: 0.0014000
2014-11-24 08:20:39.256 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/ HTTP/1.1" status: 200 len: 130 time: 0.0013840
2014-11-24 08:20:39.265 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/ami-manifest-path HTTP/1.1" status: 200 len: 121 time: 0.0013070
2014-11-24 08:20:39.275 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/security-groups HTTP/1.1" status: 200 len: 116 time: 0.0013120
2014-11-24 08:20:39.285 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/instance-type HTTP/1.1" status: 200 len: 124 time: 0.0013220
2014-11-24 08:20:39.294 4972 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/instance-id HTTP/1.1" status: 200 len: 127 time: 0.0012989
2014-11-24 08:20:39.304 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/placement/availability-zone HTTP/1.1" status: 200 len: 120 time: 0.0013518
2014-11-24 08:20:39.313 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/public-keys/0/openssh-key HTTP/1.1" status: 200 len: 517 time: 0.0013201
2014-11-24 08:20:39.323 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/ami HTTP/1.1" status: 200 len: 119 time: 0.0013349
2014-11-24 08:20:39.333 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/ebs0 HTTP/1.1" status: 200 len: 124 time: 0.0013509
2014-11-24 08:20:39.342 4970 INFO nova.metadata.wsgi.server [-] 50.0.0.44,192.168.1.127 "GET /2009-04-04/meta-data/block-device-mapping/root HTTP/1.1" status: 200 len: 124 time: 0.0013192