**********************************************************
UPDATE on 03/16/2014
**********************************************************
1. I have added the Dashboard to this setup: http://bderzhavets.blogspot.com/2014/03/setting-up-dashboard-on-two-node.html
2. As an alternative to turning on the Gluster 3.4.2 backend for cinder, the thin LVM option may be considered. However, the thin LVM schema works for me just as a POC. View http://bderzhavets.blogspot.com/2014/03/up-to-date-procedure-of-creating.html Please be advised that thin LVM is a proof of concept only; it is unacceptable in a production environment.
3. To create a new instance I must have no more than 4 entries in `nova list`. Then I can sequentially restart the qpidd and openstack-nova-scheduler services (actually, it's not always necessary) and I will be able to create one new instance for sure. This has been tested on two "Two Node Neutron GRE+OVS+Gluster Backend for Cinder" clusters.
It is related to the limits reported by `nova quota-show` (a sketch of checking and raising the quota follows the link below).
All testing details are here: http://bderzhavets.blogspot.com/2014/02/next-attempt-to-set-up-two-node-neutron.html
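A quick way to check the current limits and, if needed, raise the per-tenant instances quota is sketched below (TENANT_ID is a placeholder for the tenant id from `keystone tenant-list`; the value 20 is arbitrary):
[root@dfw02 ~(keystone_admin)]$ nova quota-show --tenant TENANT_ID
[root@dfw02 ~(keystone_admin)]$ nova quota-update --instances 20 TENANT_ID
[root@dfw02 ~(keystone_admin)]$ for i in qpidd openstack-nova-scheduler ; do service $i restart ; done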
4. Updated dhcp_agent.ini and dnsmasq.conf so that an instance being created is assigned an internal IP with MTU 1454 at the first boot up, as follows :-
[root@dfw02 neutron(keystone_admin)]$ cat dhcp_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
handle_internal_only_routers = TRUE
external_network_bridge = br-ex
ovs_use_veth = True
use_namespaces = True
# Line fixed. There was a typo in the original file.
dnsmasq_config_file = /etc/neutron/dnsmasq.conf
[root@dfw02 neutron(keystone_admin)]$ cat dnsmasq.conf
log-facility = /var/log/neutron/dnsmasq.log
log-dhcp
# Line added
dhcp-option=26,1454
Then restarted dnsmasq and the neutron-dhcp-agent service.
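For reference, the restart plus a quick check that dnsmasq picked up the custom config and that a freshly booted instance really got MTU 1454 might look like this (these commands are just a sanity check, not part of the original setup):
[root@dfw02 neutron(keystone_admin)]$ service neutron-dhcp-agent restart
[root@dfw02 neutron(keystone_admin)]$ ps -ef | grep dnsmasq | grep conf-file
# inside a freshly booted instance :
$ ip link show eth0 | grep mtu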
**********************************************************
1. F19 and F20 have been installed via volumes based on GlusterFS and show good performance on the Compute node. Yum works reliably on F19 and a bit slower on F20.
2. CentOS 6.5 was installed only via a glance image (cinder shows ERROR status for the volume); network operations are slower than on the Fedoras.
3. Ubuntu 13.10 Server was installed via a volume based on GlusterFS and was able to obtain internal and floating IPs. Network speed is close to Fedora 19.
4. Turning on the Gluster backend for Cinder on the F20 Two-Node Neutron GRE Cluster (Controller+Compute) improves performance significantly. Due to a known F20 bug the glusterfs filesystem was ext4.
5. On any cloud instance the MTU should be set to 1454 for proper communication over the GRE tunnel.
The post below follows up on the two Fedora 20 VMs setup described in :-
http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt
http://kashyapc.fedorapeople.org/virt/openstack/neutron-configs-GRE-OVS-two-node.txt
Both cases have been tested above: default and non-default libvirt networks.
In the meantime I believe that using libvirt's networks for creating the Controller and Compute nodes as F20 VMs is not essential. The configs allow metadata to be sent from Controller to Compute on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling for the manual RDO Havana on Fedora 20 setup.
Current status: F20 requires manual intervention in the MariaDB (MySQL) database regarding the root & nova passwords' presence at the FQDN of the Controller host. I was also never able to start neutron-server via the account suggested by Kashyap, only as root:password@Controller (FQDN). The Neutron openvswitch agent and Neutron L3 agent don't start at the point described in the first manual, only when the Neutron metadata agent is up and running. Notice also that in the meantime the openstack-nova-conductor & openstack-nova-scheduler services won't start if the mysql.user table is not ready with the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.
The manuals mentioned above require some editing, in the author's opinion, as well.
See also http://bderzhavets.blogspot.ru/2014/02/mysql-credentials-for-root-nova-in-two.html on how to set up MySQL credentials for root and nova to be able to connect remotely to the Controller.
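A minimal sketch of the kind of grants involved (dfw02.localdomain is the Controller's FQDN in this setup; ROOT_PASSWORD and NOVA_PASSWORD are placeholders, pick your own):
# mysql -u root -p
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'dfw02.localdomain' IDENTIFIED BY 'ROOT_PASSWORD' WITH GRANT OPTION;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw02.localdomain' IDENTIFIED BY 'NOVA_PASSWORD';
MariaDB [(none)]> FLUSH PRIVILEGES;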
dfw02.localdomain - Controller (192.168.1.127)
dfw01.localdomain - Compute (192.168.1.137)
Originally, two instances were running on Compute (dfw01) :-
VF19RS instance has 192.168.1.102 - floating ip
CirrOS 3.1 instance has 192.168.1.101 - floating ip
Cloud instances running on Compute perform commands like nslookup and traceroute. `yum install` and `yum -y update` work on the Fedora 19 instance; however, in the meantime the network on VF19 is stable but relatively slow. It might be that the Realtek 8169 integrated on board is not good enough for GRE and that it's a problem of my hardware (dfw01 is built with a Q9550, ASUS P5Q3, 8 GB DDR3, SATA 2 Seagate 500 GB). CentOS 6.5 with "RDO Havana+Glusterfs+Neutron VLAN" works much faster on the same box (dual booting with F20).
[root@dfw02 ~(keystone_admin)]$ openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: inactive (disabled on boot)
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-volume: inactive (disabled on boot)
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: active
neutron-l3-agent: active
neutron-metadata-agent: active
neutron-lbaas-agent: inactive (disabled on boot)
neutron-openvswitch-agent: active
neutron-linuxbridge-agent: inactive (disabled on boot)
neutron-ryu-agent: inactive (disabled on boot)
neutron-nec-agent: inactive (disabled on boot)
neutron-mlnx-agent: inactive (disabled on boot)
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
== Ceilometer services ==
openstack-ceilometer-api: inactive (disabled on boot)
openstack-ceilometer-central: inactive (disabled on boot)
openstack-ceilometer-compute: active
openstack-ceilometer-collector: inactive (disabled on boot)
openstack-ceilometer-alarm-notifier: inactive (disabled on boot)
openstack-ceilometer-alarm-evaluator: inactive (disabled on boot)
== Support services ==
mysqld: inactive (disabled on boot)
libvirtd: active
openvswitch: active
dbus: active
tgtd: active
qpidd: active
== Keystone users ==
+----------------------------------+---------+---------+-------+
| id | name | enabled | email |
+----------------------------------+---------+---------+-------+
| 970ed56ef7bc41d59c54f5ed8a1690dc | admin | True | |
| 1beeaa4b20454048bf23f7d63a065137 | cinder | True | |
| 006c2728df9146bd82fab04232444abf | glance | True | |
| 5922aa93016344d5a5d49c0a2dab458c | neutron | True | |
| af2f251586564b46a4f60cdf5ff6cf4f | nova | True | |
+----------------------------------+---------+---------+-------+
== Glance images ==
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31 | qcow2 | bare | 13147648 | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | qcow2 | bare | 237371392 | active |
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler | dfw02.localdomain | internal | enabled | up | 2014-01-23T22:36:15.000000 | None |
| nova-conductor | dfw02.localdomain | internal | enabled | up | 2014-01-23T22:36:11.000000 | None |
| nova-compute | dfw01.localdomain | nova | enabled | up | 2014-01-23T22:36:10.000000 | None |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+-------+------+
| ID | Label | Cidr |
+--------------------------------------+-------+------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext | None |
+--------------------------------------+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary Host Zone Status State Updated_At
nova-scheduler dfw02.localdomain internal enabled :-) 2014-01-23 22:39:05
nova-conductor dfw02.localdomain internal enabled :-) 2014-01-23 22:39:11
nova-compute dfw01.localdomain nova enabled :-) 2014-01-23 22:39:10
[root@dfw02 ~(keystone_admin)]$ ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tapf933e768-42"
tag: 1
Interface "tapf933e768-42"
Port "tap40dd712c-e4"
tag: 1
Interface "tap40dd712c-e4"
Bridge br-ex
Port "p37p1"
Interface "p37p1"
Port br-ex
Interface br-ex
type: internal
Port "tap54e34740-87"
Interface "tap54e34740-87"
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port "gre-2"
Interface "gre-2"
type: gre
options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.0.0"
[root@dfw02 ~(keystone_admin)]$ neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24 |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron net-show 1eea88bb-4952-4aa4-9148-18b61c22d5b7
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 1eea88bb-4952-4aa4-9148-18b61c22d5b7 |
| name | int |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 2 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | fa930cea-3d51-4cbe-a305-579f12aa53c0 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron net-show 780ce2f3-2e6e-4881-bbac-857813f9a8e0
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| name | ext |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | f30e5a16-a055-4388-a6ea-91ee142efc3d |
| tenant_id | 04ebe929a2a34557af21b6a735986278 |
+---------------------------+--------------------------------------+
Running instances on dfw01.localdomain :
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary Host Zone Status State Updated_At
nova-scheduler dfw02.localdomain internal enabled :-) 2014-01-23 22:25:45
nova-conductor dfw02.localdomain internal enabled :-) 2014-01-23 22:25:41
nova-compute dfw01.localdomain nova enabled :-) 2014-01-23 22:25:50
[root@dfw02 ~(keystone_admin)]$ neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------+-------+----------------+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent | dfw02.localdomain | :-) | True |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-) | True |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent | dfw02.localdomain | :-) | True |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-) | True |
+--------------------------------------+--------------------+-------------------+-------+----------------+
Fedora 19 instance loaded via :
[root@dfw02 ~(keystone_admin)]$ nova image-list
+--------------------------------------+------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31 | ACTIVE | |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE | |
+--------------------------------------+------------------+--------+--------+
[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 03c9ad20-b0a3-4b71-aa08-2728ecb66210 VF19RS
where
[root@dfw02 ~(keystone_admin)]$ cat ./myfile.txt
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-create ext
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.103 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 |
| port_id | |
| router_id | |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL | ACTIVE | None | Running |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS | SUSPENDED | None | Shutdown | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 1d10dc02-c0f2-4225-ae61-db281f3af69c | | fa:16:3e:00:d0:c5 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.5"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 1d10dc02-c0f2-4225-ae61-db281f3af69c
IP 192.168.1.103 assigned to new instance VF19GL
Snapshots were done on the dfw01 host with VNC consoles opened via virt-manager.
To test the Internet browsing ability of instances set up on the Compute node, the following steps were attempted :-
Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and just picked them up during a KDE environment installation via yum, which I actually don't need at all on a cloud instance of Fedora):
# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm
# yum install feh xcompmgr lxappearance xscreensaver dmenu
View http://blog.bodhizazen.net/linux/a-5-minute-guide-to-fluxbox/ for details.
$ mkdir ~/.fluxbox/backgrounds
Add to ~/.fluxbox/menu file
[submenu] (Wallpapers)
[wallpapers] (~/.fluxbox/backgrounds) {feh --bg-scale}
[end]
to be able to set wallpapers.
Install some fonts :-
# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts
Regarding surfing the Internet, set MTU to 1454 on the cloud instances only :
# ifconfig eth0 mtu 1454 up
Otherwise, you would have problems with the GRE tunnels.
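To avoid re-running ifconfig on every boot, the MTU may also be made persistent inside a Fedora instance (a sketch, assuming the usual ifcfg-eth0 layout; the dhcp-option=26,1454 trick from the UPDATE above achieves the same automatically):
# echo "MTU=1454" >> /etc/sysconfig/network-scripts/ifcfg-eth0
# reboot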
We are ready to go :-
# echo "exec fluxbox" > ~/.xinitrc
# startx
[root@dfw02 ~(keystone_admin)]$ nova list | grep LXW
| 492af969-72c0-4235-ac4e-d75d3778fd0a | VF20LXW | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.106 |
[root@dfw02 ~(keystone_admin)]$ nova show 492af969-72c0-4235-ac4e-d75d3778fd0a
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-02-06T09:38:52Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | dfw01.localdomain |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| int network | 10.0.0.4, 192.168.1.106 |
| hostId | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000021 |
| OS-SRV-USG:launched_at | 2014-02-05T17:47:38.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | dfw01.localdomain |
| flavor | m1.small (2) |
| id | 492af969-72c0-4235-ac4e-d75d3778fd0a |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| name | VF20LXW |
| created | 2014-02-05T17:47:33Z |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'd0c5706d-4193-4925-9140-29dea801b447'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
Switching to a Spice session improves X server behaviour on the F20 cloud instance.
# ssh -L 5900:localhost:5900 -N -f root@192.168.1.137    ( Compute IP-address )
# ssh -L 5901:localhost:5901 -N -f root@192.168.1.137    ( Compute IP-address )
# ssh -L 5902:localhost:5902 -N -f root@192.168.1.137    ( Compute IP-address )
# spicy -h localhost -p 590(X)
The same command, `ifconfig eth0 mtu 1454 up`, makes ssh work from the Controller and Compute nodes.
[root@dfw02 nova(keystone_admin)]$ nova list
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5 | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 14c49bfe-f99c-4f31-918e-dcf0fd42b49d | VF19RST | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL | SUSPENDED | None | Shutdown | int=10.0.0.6, 192.168.1.104 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM | ACTIVE | None | Running | int=10.0.0.7, 192.168.1.109 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
[root@dfw02 nova(keystone_admin)]$ ssh fedora@192.168.1.109
fedora@192.168.1.109's password:
Last login: Thu Jan 30 15:54:04 2014 from 192.168.1.127
[fedora@vf20kvm ~]$ ifconfig
eth0: flags=4163 mtu 1454
inet 10.0.0.7 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::f816:3eff:fec6:e89a prefixlen 64 scopeid 0x20
ether fa:16:3e:c6:e8:9a txqueuelen 1000 (Ethernet)
RX packets 630779 bytes 877092770 (836.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 166603 bytes 14706620 (14.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 2 bytes 140 (140.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 140 (140.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
So, loading a cloud instance via `nova boot --user-data=./myfile.txt ....` gives access to the command line to set the MTU for eth0 to 1454; this makes the instance available for ssh connections from the Controller and Compute nodes and also makes Internet surfing possible on the Fedora 19, Fedora 20 and Ubuntu 13.10 Server instances.
The lightweight X Windows setup has been used for all cloud instances mentioned above.
[root@dfw02 ~(keystone_admin)]$ ip netns list
qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: qr-f933e768-42: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:6a:d3:f0 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-f933e768-42
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe6a:d3f0/64 scope link
valid_lft forever preferred_lft forever
3: qg-54e34740-87: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:00:9a:0d brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet 192.168.1.101/32 brd 192.168.1.101 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet 192.168.1.102/32 brd 192.168.1.102 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe00:9a0d/64 scope link
valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7 ip a
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ns-40dd712c-e4: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:93:44:f8 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global ns-40dd712c-e4
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe93:44f8/64 scope link
valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip r
default via 192.168.1.1 dev qg-54e34740-87
10.0.0.0/24 dev qr-f933e768-42 proto kernel scope link src 10.0.0.1
192.168.1.0/24 dev qg-54e34740-87 proto kernel scope link src 192.168.1.100
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 \
> iptables -L -t nat | grep 169
REDIRECT   tcp  --  anywhere   169.254.169.254   tcp dpt:http redir ports 8700
[root@dfw02 ~(keystone_admin)]$ neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24 |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron subnet-list
+--------------------------------------+------+----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------+----------------+----------------------------------------------------+
| fa930cea-3d51-4cbe-a305-579f12aa53c0 | | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| f30e5a16-a055-4388-a6ea-91ee142efc3d | | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.200"} |
+--------------------------------------+------+----------------+----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2 | 192.168.1.101 | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4 | 192.168.1.102 | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show af9c6ba6-e0ca-498e-8f67-b9327f75d93f
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.0.0.4 |
| floating_ip_address | 192.168.1.102 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | af9c6ba6-e0ca-498e-8f67-b9327f75d93f |
| port_id | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| router_id | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show 9d15609c-9465-4254-bdcb-43f072b6c7d4
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.0.0.2 |
| floating_ip_address | 192.168.1.101 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | 9d15609c-9465-4254-bdcb-43f072b6c7d4 |
| port_id | e4cb68c4-b932-4c83-86cd-72c75289114a |
| router_id | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
Snapshot :-
*********************************************************************************
Configuring Cinder to add GlusterFS; view also "Gluster 3.4.2 backend for cinder". The last link provides much more detailed information than you will find below, in particular regarding the gluster 3.4.2 two-node setup itself, IPv4 iptables firewall tuning, setting up required packages, and the initial steps on Fedora 20.
*********************************************************************************
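Before the volume can be created, the two nodes have to form one trusted storage pool (details are in the link above; a minimal sketch, run on dfw02):
# gluster peer probe dfw01.localdomain
# gluster peer status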
# gluster volume create cinder-volumes05 replica 2 dfw02.localdomain:/data1/cinder5 dfw01.localdomain:/data1/cinder5
# gluster volume start cinder-volumes05
# gluster volume set cinder-volumes05 auth.allow 192.168.1.*
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.127:/cinder-volumes05
:wq
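A quick sanity check that the driver options actually landed in /etc/cinder/cinder.conf (nothing more than a grep):
# grep -E "glusterfs|volume_driver" /etc/cinder/cinder.conf | grep -v ^#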
Update /etc/sysconfig/iptables:-
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT
Comment out the following lines :-
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited
# service iptables restart
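A quick check that the new rules are active after the restart (just grepping the live rule set):
# iptables -L -n | grep -E "24007|38465|111"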
*************************************************************************
On Controller (192.168.1.127) and on Compute (192.168.1.137)
*************************************************************************
Verify ports availability:-
[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp 0 0 0.0.0.0:655 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 2524/glusterfsd
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38465 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38466 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:49155 0.0.0.0:* LISTEN 2525/glusterfsd
tcp 0 0 0.0.0.0:38468 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38469 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 2380/glusterd
To mount the gluster volume for the cinder backend in the current setup :-
# losetup -fv /cinder-volumes
# cinder list    (gives the id-number below)
# cinder delete a94b97f5-120b-40bd-b59e-8962a5cb6296
The lines above delete testvol1, which had been created per Kashyap's schema to test cinder.
Ignoring this step would cause the openstack-cinder-volume service restart to fail in this particular situation.
# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done
Verification of service status :-
[root@dfw02 cinder(keystone_admin)]$ service openstack-cinder-volume status -l
Redirecting to /bin/systemctl status -l openstack-cinder-volume.service
openstack-cinder-volume.service - OpenStack Cinder Volume Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
Active: active (running) since Sat 2014-01-25 07:43:10 MSK; 6s ago
Main PID: 21727 (cinder-volume)
CGroup: /system.slice/openstack-cinder-volume.service
├─21727 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
├─21736 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
└─21793 /usr/sbin/glusterfs --volfile-id=cinder-volumes05 --volfile-server=192.168.1.127 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:10 dfw02.localdomain systemd[1]: Started OpenStack Cinder Volume Server.
Jan 25 07:43:11 dfw02.localdomain cinder-volume[21727]: 2014-01-25 07:43:11.402 21736 WARNING cinder.volume.manager [req-69c0060b-b5bf-4bce-8a8e-f2218dec7638 None None] Unable to update stats, driver is uninitialized
Jan 25 07:43:11 dfw02.localdomain sudo[21754]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.1.127:cinder-volumes05 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:11 dfw02.localdomain sudo[21803]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
[root@dfw02 cinder(keystone_admin)]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora00-root 96G 7.4G 84G 9% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 152K 3.9G 1% /dev/shm
tmpfs 3.9G 1.2M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 184K 3.9G 1% /tmp
/dev/sda5 477M 101M 347M 23% /boot
/dev/mapper/fedora00-data1 77G 53M 73G 1% /data1
tmpfs 3.9G 1.2M 3.9G 1% /run/netns
192.168.1.127:/cinder-volumes05 77G 52M 73G 1% /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
At runtime on Compute Node :-
[root@dfw01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora-root 96G 54G 38G 59% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 484K 3.9G 1% /dev/shm
tmpfs 3.9G 1.3M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 36K 3.9G 1% /tmp
/dev/sda5 477M 121M 327M 27% /boot
/dev/mapper/fedora-data1 77G 6.7G 67G 10% /data1
192.168.1.127:/cinder-volumes05 77G 6.7G 67G 10% /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a
[root@dfw02 ~(keystone_admin)]$ nova image-list
+--------------------------------------+------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31 | ACTIVE | |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE | |
+--------------------------------------+------------------+--------+--------+
[root@dfw02 ~(keystone_admin)]$ cinder create --image-id 03c9ad20-b0a3-4b71-aa08-2728ecb66210 \
> --display-name Fedora19VLG 7
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-01-25T03:45:21.124690 |
| display_description | None |
| display_name | Fedora19VLG |
| id | 5f0f096b-192a-435b-bdbc-5063ed5c6366 |
| image_id | 03c9ad20-b0a3-4b71-aa08-2728ecb66210 |
| metadata | {} |
| size | 7 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@dfw02 cinder5(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5f0f096b-192a-435b-bdbc-5063ed5c6366 | available | Fedora19VLG | 7 | None | true | |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------
Loading an instance via the created volume on GlusterFS
**********************************************************************************
UPDATE on 03/09/2014. In the meantime I am able to load an instance via a glusterfs cinder volume only via the following command :-
**********************************************************************************
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012
***********************************************************************************
Update on 03/11/2014.
***********************************************************************************
The standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. The schema described in the previous UPDATE of 03/09/14, on the contrary, stopped working smoothly on glusterfs-based cinder volumes.
However, even when the boot ends up with an "Error" status, it creates a glusterfs cinder volume (with a system_id) which is quite healthy and may be utilized for building a new instance of F20 or Ubuntu 14.04, whatever the original image was, via CLI (a sketch follows below) or Dashboard. It looks like a kind of bug in Nova & Neutron interprocess communication, I would say synchronization at boot up.
Please view :-
"Provide an API for external services to send defined events to the compute service for synchronization. This includes immediate needs for nova-neutron interaction around boot timing and network info updates"
https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api
and bug report :-
https://bugs.launchpad.net/nova/+bug/1280357
As far as I can see the target milestone is "Icehouse-rc1".
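So, when such a boot ends up in "Error", the instance may be deleted and the already created (and healthy) glusterfs volume reused with the standard schema. A sketch, where VOLUME_ID stands for the id reported by `cinder list` and VF20RS013 is just an example name:
[root@dallas1 ~(keystone_boris)]$ nova delete VF20RS012
[root@dallas1 ~(keystone_boris)]$ cinder list
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 VF20RS013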
***********************************************************************************
View also Launching Instance via image and creating simultaneously bootable cinder volume on Two Node GRE+OVS+Gluster F20 Cluster
[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=5f0f096b-192a-435b-bdbc-5063ed5c6366:::0 VF19VLGL
+--------------------------------------+----------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------+
| OS-EXT-STS:task_state | scheduling |
| image | Attempt to boot from volume - no image supplied |
| OS-EXT-STS:vm_state | building |
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
| OS-SRV-USG:launched_at | None |
| flavor | m1.small |
| id | 5aa903c5-624d-4dde-9e3c-49996d4a5edc |
| security_groups | [{u'name': u'default'}] |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
| status | BUILD |
| updated | 2014-01-25T03:59:12Z |
| hostId | |
| OS-EXT-SRV-ATTR:host | None |
| OS-SRV-USG:terminated_at | None |
| key_name | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| name | VF19VLGL |
| adminPass | Aq4LBKP9rBGF |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| created | 2014-01-25T03:59:12Z |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| metadata | {} |
+--------------------------------------+----------------------------------------------------+
In just a second the new instance will be booted via the created volume on GlusterFS (Fedora 20: Qemu 1.6, Libvirt 1.1.3).
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL | ACTIVE | None | Running | int=10.0.0.6 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 5aa903c5-624d-4dde-9e3c-49996d4a5edc
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 7196be1f-9216-4bfd-ac8b-9903780936d9 | | fa:16:3e:4b:97:90 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.6"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 | 10.0.0.5 | 192.168.1.103 | 1d10dc02-c0f2-4225-ae61-db281f3af69c |
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2 | 192.168.1.101 | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4 | 192.168.1.102 | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e | | 192.168.1.104 | |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e 7196be1f-9216-4bfd-ac8b-9903780936d9
Associated floatingip c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.0.0.6 |
| floating_ip_address | 192.168.1.104 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |
| port_id | 7196be1f-9216-4bfd-ac8b-9903780936d9 |
| router_id | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=4.19 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=1.32 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.06 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=1.11 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=1.13 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=1.02 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=1.05 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=1.08 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.974 ms
64 bytes from 192.168.1.104: icmp_seq=10 ttl=63 time=1.03 ms
The I/O speed improvement is noticeable at boot up and in disk operations.
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+--------------------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 02ef842e-b86f-4545-a018-33835c5350f8 | UbuntuSalanaderVLG | SUSPENDED | None | Shutdown | int=10.0.0.7, 192.168.1.105 |
| 58f8f449-f109-42cf-92e2-d5f8b194d814 | VF19DFW | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.109 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL | ACTIVE | None | Running | int=10.0.0.6, 192.168.1.104 |
+--------------------------------------+--------------------+-----------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova show 5aa903c5-624d-4dde-9e3c-49996d4a5edc
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-01-25T20:13:54Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | dfw01.localdomain |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| int network | 10.0.0.6, 192.168.1.104 |
| hostId | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
| OS-SRV-USG:launched_at | 2014-01-25T03:59:17.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | dfw01.localdomain |
| flavor | m1.small (2) |
| id | 5aa903c5-624d-4dde-9e3c-49996d4a5edc |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| name | VF19VLGL |
| created | 2014-01-25T03:59:12Z |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dfw01]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status -l openstack-nova-compute.service
openstack-nova-compute.service - OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
Active: active (running) since Tue 2014-01-28 20:28:06 MSK; 10min ago
Main PID: 3969 (nova-compute)
CGroup: /system.slice/openstack-nova-compute.service
├─3969 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
└─5440 /usr/sbin/glusterfs --volfile-id=cinder-volumes05 --volfile-server=192.168.1.127 /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a
Jan 28 20:35:02 dfw01.localdomain sudo[5515]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link add qvb3465c1f6-6f type veth peer name qvo3465c1f6-6f
Jan 28 20:35:02 dfw01.localdomain sudo[5522]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb3465c1f6-6f up
Jan 28 20:35:02 dfw01.localdomain sudo[5525]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb3465c1f6-6f promisc on
Jan 28 20:35:02 dfw01.localdomain sudo[5528]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo3465c1f6-6f up
Jan 28 20:35:02 dfw01.localdomain sudo[5531]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo3465c1f6-6f promisc on
Jan 28 20:35:02 dfw01.localdomain sudo[5534]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbr3465c1f6-6f up
Jan 28 20:35:02 dfw01.localdomain sudo[5537]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbr3465c1f6-6f qvb3465c1f6-6f
Jan 28 20:35:02 dfw01.localdomain sudo[5540]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvo3465c1f6-6f -- set Interface qvo3465c1f6-6f external-ids:iface-id=3465c1f6-6f58-46c4-b0cf-049d89603e5f external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d6:ef:b2 external-ids:vm-uuid=14c49bfe-f99c-4f31-918e-dcf0fd42b49d
Jan 28 20:35:02 dfw01.localdomain ovs-vsctl[5542]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl -- --may-exist add-port br-int qvo3465c1f6-6f -- set Interface qvo3465c1f6-6f external-ids:iface-id=3465c1f6-6f58-46c4-b0cf-049d89603e5f external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d6:ef:b2 external-ids:vm-uuid=14c49bfe-f99c-4f31-918e-dcf0fd42b49d
Jan 28 20:35:03 dfw01.localdomain sudo[5557]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tap3465c1f6-6f/brport/hairpin_mode
************************************************************************************
The CentOS 6.5 instance was able to start its own X server in a VNC session from F20, in other words it acted as a client of the F20 host's X server (?).
************************************************************************************
[root@dfw02 ~(keystone_admin)]$ nova list | grep UbuntuSalamander
| 812d369d-e351-469e-8820-a2d0d8740716 | UbuntuSalamander | ACTIVE | None | Running | int=10.0.0.8, 192.168.1.110 |
[root@dfw02 ~(keystone_admin)]$ nova show 812d369d-e351-469e-8820-a2d0d8740716
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-01-31T04:46:30Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | dfw01.localdomain |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| int network | 10.0.0.8, 192.168.1.110 |
| hostId | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000016 |
| OS-SRV-USG:launched_at | 2014-01-31T04:46:30.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | dfw01.localdomain |
| flavor | m1.small (2) |
| id | 812d369d-e351-469e-8820-a2d0d8740716 |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| name | UbuntuSalamander |
| created | 2014-01-31T04:46:25Z |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'34bdf9d9-5bcc-4b62-8140-919c00fe07df'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ ssh ubuntu@192.168.1.110
ubuntu@192.168.1.110's password:
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Jan 31 05:13:19 UTC 2014
System load: 0.08 Processes: 73
Usage of /: 11.4% of 6.86GB Users logged in: 1
Memory usage: 3% IP address for eth0: 10.0.0.8
Swap usage: 0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Fri Jan 31 05:13:25 2014 from 192.168.1.127
ubuntu@ubuntusalamander:~$ ifconfig
eth0 Link encap:Ethernet HWaddr fa:16:3e:1e:16:35
inet addr:10.0.0.8 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe1e:1635/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1454 Metric:1
RX packets:854 errors:0 dropped:0 overruns:0 frame:0
TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85929 (85.9 KB) TX bytes:81060 (81.0 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
Setting up a lightweight X environment on the Ubuntu instance
$ sudo apt-get install xorg fluxbox firefox gnome-terminal
Reboot
$ startx
Right mouse click on desktop opens X-terminal
$ /usr/bin/firefox &
Testing the tenant's network (kashyap)
[root@dallas1 ~]# . keystonerc_kashyap
[root@dallas1 ~(keystone_kashyap)]$ neutron net-list
+--------------------------------------+------+---------------------------------------+
| id | name | subnets |
+--------------------------------------+------+---------------------------------------+
| 082249a5-08f4-478f-b176-effad0ef6843 | ext | 7dd9ee7e-3c1e-4850-a78e-375c7268019f |
+--------------------------------------+------+---------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron router-create router02
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 65e6de75-c7ec-40a7-9a7b-bd37e133cb1c |
| name | router02 |
| status | ACTIVE |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+-----------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron router-gateway-set router02 ext
Set gateway for router router02
[root@dallas1 ~(keystone_kashyap)]$ neutron net-create int01
Created a new network:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| admin_state_up | True |
| id | 388c5557-1c53-4195-aed1-726a4fe7af55 |
| name | int01 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+----------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron subnet-create int01 30.0.0.0/24 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr | 30.0.0.0/24 |
| dns_nameservers | 83.221.202.254 |
| enable_dhcp | True |
| gateway_ip | 30.0.0.1 |
| host_routes | |
| id | 3e3b07fd-53b0-4186-8fd6-859a4dd422f8 |
| ip_version | 4 |
| name | |
| network_id | 388c5557-1c53-4195-aed1-726a4fe7af55 |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+------------------+--------------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron router-interface-add router02 3e3b07fd-53b0-4186-8fd6-859a4dd422f8
Added interface 5e69cdcc-3764-45c4-925c-ae53a5500b26 to router router02.
[root@dallas1 ~(keystone_kashyap)]$ neutron subnet-list
+--------------------------------------+------+-------------+--------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------+-------------+--------------------------------------------+
| 3e3b07fd-53b0-4186-8fd6-859a4dd422f8 | | 30.0.0.0/24 | {"start": "30.0.0.2", "end": "30.0.0.254"} |
+--------------------------------------+------+-------------+--------------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ glance image-list
+--------------------------------------+--------------------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------------------------+-------------+------------------+-----------+--------+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31 | qcow2 | bare | 13147648 | active |
| fd1cd492-d7d8-4fc3-961a-0b43f9aa148d | Fedora 20 Image | qcow2 | bare | 214106112 | active |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64 | qcow2 | bare | 214106112 | active |
| 1def8fdc-9fe9-400d-944a-707d1352b6da | New Fedora 20 image | qcow2 | bare | 214106112 | active |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | qcow2 | bare | 244711424 | active |
| 14cf6e7b-9aed-40c6-8185-366eb0c4c397 | Ubuntu Salamander Server | qcow2 | bare | 244711424 | active |
| b94f3144-0337-4b0c-8c2b-18bbb18be6c8 | Ubuntu Saucy | qcow2 | bare | 244711424 | active |
+--------------------------------------+--------------------------+-------------+------------------+-----------+--------+
[root@dallas1 ~(keystone_kashyap)]$ nova boot --flavor 2 --user-data=./myfile.txt --image fd1cd492-d7d8-4fc3-961a-0b43f9aa148d VF20RSX
+--------------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------+
| status | BUILD |
| updated | 2014-02-20T15:42:28Z |
| OS-EXT-STS:task_state | scheduling |
| key_name | None |
| image | Fedora 20 Image |
| hostId | |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| flavor | m1.small |
| id | 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | abb1fa95b0ec448ea8da3cc99d61d301 |
| name | VF20RSX |
| adminPass | eHCQZ5fD2MpR |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
| created | 2014-02-20T15:42:27Z |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+----------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | BUILD | spawning | NOSTATE | |
+--------------------------------------+---------+--------+------------+-------------+----------+
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+----------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+----------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2 |
+--------------------------------------+---------+--------+------------+-------------+----------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron floatingip-create ext
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.108 |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id | b2b428c4-71bc-4391-a2f0-592abf6990c8 |
| port_id | |
| router_id | |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+---------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron port-list --device-id 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| ce1e02fe-cfd8-4802-85d0-b628beb56bff | | fa:16:3e:39:d6:38 | {"subnet_id": "3e3b07fd-53b0-4186-8fd6-859a4dd422f8", "ip_address": "30.0.0.2"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron floatingip-associate b2b428c4-71bc-4391-a2f0-592abf6990c8 ce1e02fe-cfd8-4802-85d0-b628beb56bff
Associated floatingip b2b428c4-71bc-4391-a2f0-592abf6990c8
[root@dallas1 ~(keystone_kashyap)]$ neutron floatingip-show b2b428c4-71bc-4391-a2f0-592abf6990c8
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 30.0.0.2 |
| floating_ip_address | 192.168.1.108 |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id | b2b428c4-71bc-4391-a2f0-592abf6990c8 |
| port_id | ce1e02fe-cfd8-4802-85d0-b628beb56bff |
| router_id | 65e6de75-c7ec-40a7-9a7b-bd37e133cb1c |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+---------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron security-group-list
+--------------------------------------+---------+-------------+
| id | name | description |
+--------------------------------------+---------+-------------+
| 378e5257-dfe4-4101-b6f5-047591681e27 | default | default |
+--------------------------------------+---------+-------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron security-group-rule-create --protocol icmp \
> --direction ingress --remote-ip-prefix 0.0.0.0/0 378e5257-dfe4-4101-b6f5-047591681e27
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | 829463b5-cd24-48b6-ba80-cc0c3ad2ab3e |
| port_range_max | |
| port_range_min | |
| protocol | icmp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 378e5257-dfe4-4101-b6f5-047591681e27 |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+-------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron security-group-rule-create --protocol tcp \
> --port-range-min 22 --port-range-max 22 \
> --direction ingress --remote-ip-prefix 0.0.0.0/0 378e5257-dfe4-4101-b6f5-047591681e27
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | fee6ad64-238e-4628-8457-4c19d198182f |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 378e5257-dfe4-4101-b6f5-047591681e27 |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+-------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ ping 192.168.1.108
PING 192.168.1.108 (192.168.1.108) 56(84) bytes of data.
64 bytes from 192.168.1.108: icmp_seq=1 ttl=63 time=4.06 ms
64 bytes from 192.168.1.108: icmp_seq=2 ttl=63 time=0.688 ms
64 bytes from 192.168.1.108: icmp_seq=3 ttl=63 time=0.853 ms
64 bytes from 192.168.1.108: icmp_seq=4 ttl=63 time=0.631 ms
64 bytes from 192.168.1.108: icmp_seq=5 ttl=63 time=0.762 ms
^C
--- 192.168.1.108 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.631/1.398/4.060/1.333 ms
# ssh-keygen
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.140 (Compute)
The following block is included in /etc/rc.d/rc.local :-
ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.140
ssh -L 5901:localhost:5901 -N -f -l root 192.168.1.140
ssh -L 5902:localhost:5902 -N -f -l root 192.168.1.140
ssh -L 5903:localhost:5903 -N -f -l root 192.168.1.140
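(As a small aside, the four tunnels above can also be opened in a loop; the sketch below is just a compact equivalent of the block already placed in /etc/rc.d/rc.local, nothing new is forwarded.)
for port in 5900 5901 5902 5903 ; do
    ssh -L ${port}:localhost:${port} -N -f -l root 192.168.1.140
done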
[root@dallas1 ~(keystone_kashyap)]$ vncviewer localhost:0
TigerVNC Viewer 64-bit v1.3.0 (20140121)
Built on Jan 21 2014 at 09:40:20
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.
Thu Feb 20 19:48:32 2014
CConn: connected to host localhost port 5900
CConnection: Server supports RFB protocol version 3.8
CConnection: Using RFB protocol version 3.8
PlatformPixelBuffer: Using default colormap and visual, TrueColor, depth 24.
DesktopWindow: Adjusting window size to avoid accidental full screen request
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888
CConn: Using Tight encoding
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova reboot VF20RSX
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | REBOOT | rebooting | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ ping 192.168.1.108
PING 192.168.1.108 (192.168.1.108) 56(84) bytes of data.
64 bytes from 192.168.1.108: icmp_seq=1 ttl=63 time=5.75 ms
64 bytes from 192.168.1.108: icmp_seq=2 ttl=63 time=1.00 ms
64 bytes from 192.168.1.108: icmp_seq=3 ttl=63 time=0.749 ms
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova show 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-02-20T16:56:41Z |
| OS-EXT-STS:task_state | None |
| key_name | None |
| image | Fedora 20 Image (fd1cd492-d7d8-4fc3-961a-0b43f9aa148d) |
| hostId | 684566c890e07a7c31cb0265f3ba21a9e009391b12e0bbf1822ad75c |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-02-20T15:42:39.000000 |
| flavor | m1.small (2) |
| id | 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | abb1fa95b0ec448ea8da3cc99d61d301 |
| name | VF20RSX |
| created | 2014-02-20T15:42:27Z |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| int01 network | 30.0.0.2, 192.168.1.108 |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dallas1 ~(keystone_admin)]$ keystone user-list
+----------------------------------+---------+---------+-------+
| id | name | enabled | email |
+----------------------------------+---------+---------+-------+
| 974006673310455e8893e692f1d9350b | admin | True | |
| fbba3a8646dc44e28e5200381d77493b | cinder | True | |
| 0214c6ae6ebc4d6ebeb3e68d825a1188 | glance | True | |
| abb1fa95b0ec448ea8da3cc99d61d301 | kashyap | True | |
| 329b3ca03a894b319420b3a166d461b5 | neutron | True | |
| 89b3f7d54dd04648b0519f8860bd0f2a | nova | True | |
+----------------------------------+---------+---------+-------+
Check tenant :-
[root@dfw02 ~(keystone_boris)]$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+------------------------------+
| 5fcd83c3-1d4e-4b11-bfe5-061a03b73174 | UbuntuRSX | ACTIVE | None | Running | int1=40.0.0.5, 192.168.1.120 |
| 4028b4a7-de0c-4226-89ac-1543fb9382d7 | VF19RSX | ACTIVE | None | Running | int1=40.0.0.2, 192.168.1.118 |
| 99a7e40c-896f-42c9-a18d-4a1368de49e9 | VF20RSX | ACTIVE | None | Running | int1=40.0.0.4, 192.168.1.119 |
+--------------------------------------+-----------+--------+------------+-------------+------------------------------+
[root@dfw02 ~(keystone_boris)]$ nova show VF20RSX
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-02-26T16:19:31Z |
| OS-EXT-STS:task_state | None |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| hostId | 73ee4f5bd4da8ad7b39d768d0b167a03ac0471ea50d9ded6c6190fb1 |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-02-26T16:19:31.000000 |
| flavor | m1.small (2) |
| id | 99a7e40c-896f-42c9-a18d-4a1368de49e9 |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 162021e787c54cac906ab3296a386006 |
| name | VF20RSX |
| created | 2014-02-26T16:19:26Z |
| tenant_id | 4dacfff9e72c4245a48d648ee23468d5 |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'0322b452-8fbe-470f-acf1-2e60740ba3f2'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| int1 network | 40.0.0.4, 192.168.1.119 |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dfw02 ~(keystone_boris)]$ exit
logout
[boris@dfw02 ~]$ sudo su -
Last login: Wed Feb 26 21:42:10 MSK 2014 on pts/4
[root@dfw02 ~]# . keystonerc_admin
[root@dfw02 ~(keystone_admin)]$ keystone tenant-list
+----------------------------------+----------+---------+
| id | name | enabled |
+----------------------------------+----------+---------+
| d0a0acfdb62b4cc8a2bfa8d6a08bb62f | admin | True |
| 4dacfff9e72c4245a48d648ee23468d5 | ostenant | True |
| 04ebe929a2a34557af21b6a735986278 | services | True |
+----------------------------------+----------+---------+
The original text of the documents was posted on fedoraproject.org by Kashyap.
The attached ones are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the preventive MySQL updates currently required for the openstack-nova-compute and neutron-openvswitch-agent remote connections to the Controller node to succeed. /etc/sysconfig/iptables was updated on the Controller and Compute nodes; the lines below were commented out :-
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited
This is needed to be able to set up the Gluster 3.4.2 cluster and use a gluster replica 2 volume as storage for Cinder.
The MySQL part is mine. All attached *.conf and *.ini files have been updated for my network as well.
In the meantime I am quite sure that using libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when GRE tunnelling is used for an RDO Havana on Fedora 20 manual setup.
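For reference, the metadata wiring boils down to a few options; the lines below are only a sketch with placeholder values (the shared secret is an assumption, the IP is the Controller of this setup), not a copy of the attached configs:
# /etc/nova/nova.conf on the Controller
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = SHARED_SECRET
# /etc/neutron/metadata_agent.ini on the Controller
nova_metadata_ip = 192.168.1.127
metadata_proxy_shared_secret = SHARED_SECRET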
References
1. http://textuploader.com/1hin
2. http://textuploader.com/1hey
Note that the openstack-nova-conductor and openstack-nova-scheduler services will not start if the mysql.users table is not ready with the nova account password at the Controller's FQDN. All these updates are reflected in the reference links attached as text docs.
The manuals mentioned above require some editing as well, in the author's opinion.
See also http://bderzhavets.blogspot.ru/2014/02/mysql-credentials-for-root-nova-in-two.html on setting up MySQL credentials for root and nova to be able to connect remotely to the Controller.
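A sketch of that kind of manual intervention (the passwords here are placeholders, the FQDN is the Controller's):
# mysql -u root -p
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw02.localdomain' IDENTIFIED BY 'NOVA_DB_PASSWORD';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'dfw02.localdomain' IDENTIFIED BY 'ROOT_DB_PASSWORD' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;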
Manual Setup
- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin and GRE tunneling )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)
dfw02.localdomain - Controller (192.168.1.127)
dfw01.localdomain - Compute (192.168.1.137)
Originally two instances were running on Compute (dfw01) :-
VF19RS instance has 192.168.1.102 - floating ip
CirrOS 3.1 instance has 192.168.1.101 - floating ip
Cloud instances running on Compute can perform commands like nslookup and traceroute. `yum install` and `yum -y update` work on the Fedora 19 instance; in the meantime the network on VF19 is stable but relatively slow. It might be that the Realtek 8169 integrated on board is not good enough for GRE and it is a problem of my hardware (dfw01 is built with a Q9550, ASUS P5Q3, 8 GB DDR3, SATA 2 Seagate 500 GB). CentOS 6.5 with "RDO Havana+Glusterfs+Neutron VLAN" works on the same box (dual booting with F20) much faster.
[root@dfw02 ~(keystone_admin)]$ openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: inactive (disabled on boot)
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-volume: inactive (disabled on boot)
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: active
neutron-l3-agent: active
neutron-metadata-agent: active
neutron-lbaas-agent: inactive (disabled on boot)
neutron-openvswitch-agent: active
neutron-linuxbridge-agent: inactive (disabled on boot)
neutron-ryu-agent: inactive (disabled on boot)
neutron-nec-agent: inactive (disabled on boot)
neutron-mlnx-agent: inactive (disabled on boot)
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
== Ceilometer services ==
openstack-ceilometer-api: inactive (disabled on boot)
openstack-ceilometer-central: inactive (disabled on boot)
openstack-ceilometer-compute: active
openstack-ceilometer-collector: inactive (disabled on boot)
openstack-ceilometer-alarm-notifier: inactive (disabled on boot)
openstack-ceilometer-alarm-evaluator: inactive (disabled on boot)
== Support services ==
mysqld: inactive (disabled on boot)
libvirtd: active
openvswitch: active
dbus: active
tgtd: active
qpidd: active
== Keystone users ==
+----------------------------------+---------+---------+-------+
| id | name | enabled | email |
+----------------------------------+---------+---------+-------+
| 970ed56ef7bc41d59c54f5ed8a1690dc | admin | True | |
| 1beeaa4b20454048bf23f7d63a065137 | cinder | True | |
| 006c2728df9146bd82fab04232444abf | glance | True | |
| 5922aa93016344d5a5d49c0a2dab458c | neutron | True | |
| af2f251586564b46a4f60cdf5ff6cf4f | nova | True | |
+----------------------------------+---------+---------+-------+
== Glance images ==
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31 | qcow2 | bare | 13147648 | active |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | qcow2 | bare | 237371392 | active |
+--------------------------------------+------------------+-------------+------------------+-----------+--------+
== Nova managed services ==
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
| nova-scheduler | dfw02.localdomain | internal | enabled | up | 2014-01-23T22:36:15.000000 | None |
| nova-conductor | dfw02.localdomain | internal | enabled | up | 2014-01-23T22:36:11.000000 | None |
| nova-compute | dfw01.localdomain | nova | enabled | up | 2014-01-23T22:36:10.000000 | None |
+----------------+-------------------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+-------+------+
| ID | Label | Cidr |
+--------------------------------------+-------+------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int | None |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext | None |
+--------------------------------------+-------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary Host Zone Status State Updated_At
nova-scheduler dfw02.localdomain internal enabled :-) 2014-01-23 22:39:05
nova-conductor dfw02.localdomain internal enabled :-) 2014-01-23 22:39:11
nova-compute dfw01.localdomain nova enabled :-) 2014-01-23 22:39:10
[root@dfw02 ~(keystone_admin)]$ ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
Bridge br-int
Port br-int
Interface br-int
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "tapf933e768-42"
tag: 1
Interface "tapf933e768-42"
Port "tap40dd712c-e4"
tag: 1
Interface "tap40dd712c-e4"
Bridge br-ex
Port "p37p1"
Interface "p37p1"
Port br-ex
Interface br-ex
type: internal
Port "tap54e34740-87"
Interface "tap54e34740-87"
Bridge br-tun
Port br-tun
Interface br-tun
type: internal
Port "gre-2"
Interface "gre-2"
type: gre
options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
ovs_version: "2.0.0"
[root@dfw02 ~(keystone_admin)]$ neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24 |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron net-show 1eea88bb-4952-4aa4-9148-18b61c22d5b7
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 1eea88bb-4952-4aa4-9148-18b61c22d5b7 |
| name | int |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 2 |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | fa930cea-3d51-4cbe-a305-579f12aa53c0 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron net-show 780ce2f3-2e6e-4881-bbac-857813f9a8e0
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| name | ext |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| router:external | True |
| shared | False |
| status | ACTIVE |
| subnets | f30e5a16-a055-4388-a6ea-91ee142efc3d |
| tenant_id | 04ebe929a2a34557af21b6a735986278 |
+---------------------------+--------------------------------------+
Running instances on dfw01.localdomain :
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.101 |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+--------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova-manage service list
Binary Host Zone Status State Updated_At
nova-scheduler dfw02.localdomain internal enabled :-) 2014-01-23 22:25:45
nova-conductor dfw02.localdomain internal enabled :-) 2014-01-23 22:25:41
nova-compute dfw01.localdomain nova enabled :-) 2014-01-23 22:25:50
[root@dfw02 ~(keystone_admin)]$ neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-------------------+-------+----------------+
| 037b985d-7a7d-455b-8536-76eed40b0722 | L3 agent | dfw02.localdomain | :-) | True |
| 22438ee9-b4ea-4316-9492-eb295288f61a | Open vSwitch agent | dfw02.localdomain | :-) | True |
| 76ed02e2-978f-40d0-879e-1a2c6d1f7915 | DHCP agent | dfw02.localdomain | :-) | True |
| 951632a3-9744-4ff4-a835-c9f53957c617 | Open vSwitch agent | dfw01.localdomain | :-) | True |
+--------------------------------------+--------------------+-------------------+-------+----------------+
Fedora 19 instance loaded via :
[root@dfw02 ~(keystone_admin)]$ nova image-list
+--------------------------------------+------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31 | ACTIVE | |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE | |
+--------------------------------------+------------------+--------+--------+
[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --image 03c9ad20-b0a3-4b71-aa08-2728ecb66210 VF19RS
where
[root@dfw02 ~(keystone_admin)]$ cat ./myfile.txt
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
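As a sketch only (not part of the original myfile.txt), the same user-data mechanism could also set the GRE-friendly MTU at the first boot via the standard cloud-init runcmd directive:
#cloud-config
password: mysecret
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
 - ifconfig eth0 mtu 1454 up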
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-create ext
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.103 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 |
| port_id | |
| router_id | |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL    | ACTIVE    | None       | Running     | int=10.0.0.5                |
| 82dfc826-46cd-4b4c-a0f6-bac5f7132dec | VF19RS | SUSPENDED | None | Shutdown | int=10.0.0.4, 192.168.1.102 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 1d10dc02-c0f2-4225-ae61-db281f3af69c | | fa:16:3e:00:d0:c5 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.5"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 1d10dc02-c0f2-4225-ae61-db281f3af69c
IP 192.168.1.103 assigned to new instance VF19GL
Snapshots done on dfw01 host with VNC consoles opened via virt-manager :-
Snapshots done on dfw02 host via virt-manager connection to dfw01 :-
Setup of a lightweight X Windows environment on a Fedora 20 cloud instance, and running the F20 cloud instance in VNC and Spice sessions via virt-manager or spicy:
http://bderzhavets.blogspot.com/2014/02/setup-light-weight-x-windows_2.html
Next we install X Windows on F20 to run fluxbox (by the way, after hours of googling I was unable to find the required set of packages and just picked them up during a KDE environment installation via yum, which I actually don't need at all on a cloud instance of Fedora):
# yum install xorg-x11-server-Xorg xorg-x11-xdm fluxbox \
xorg-x11-drv-ati xorg-x11-drv-evdev xorg-x11-drv-fbdev \
xorg-x11-drv-intel xorg-x11-drv-mga xorg-x11-drv-nouveau \
xorg-x11-drv-openchrome xorg-x11-drv-qxl xorg-x11-drv-synaptics \
xorg-x11-drv-vesa xorg-x11-drv-vmmouse xorg-x11-drv-vmware \
xorg-x11-drv-wacom xorg-x11-font-utils xorg-x11-drv-modesetting \
xorg-x11-glamor xorg-x11-utils xterm
# yum install feh xcompmgr lxappearance xscreensaver dmenu
For details view http://blog.bodhizazen.net/linux/a-5-minute-guide-to-fluxbox/
$ mkdir .fluxbox/backgrounds
Add to the ~/.fluxbox/menu file
[submenu] (Wallpapers)
[wallpapers] (~/.fluxbox/backgrounds) {feh --bg-scale}
[end]
to be able to set wallpapers.
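With that submenu in place, any image dropped into .fluxbox/backgrounds becomes selectable; for example (the file name below is just an illustration):
$ cp some-wallpaper.jpg ~/.fluxbox/backgrounds/
$ feh --bg-scale ~/.fluxbox/backgrounds/some-wallpaper.jpg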
Install some fonts :-
# yum install dejavu-fonts-common \
dejavu-sans-fonts \
dejavu-sans-mono-fonts \
dejavu-serif-fonts
Regarding surfing the Internet, set MTU to 1454 (only on cloud instances):
# ifconfig eth0 mtu 1454 up
Otherwise you would have problems with the GRE tunnels.
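To keep the setting across reboots of the instance, one sketch (assuming the standard Fedora ifcfg layout, not taken from the original post) is to persist the MTU in the interface config and then restart networking or reboot:
# echo "MTU=1454" >> /etc/sysconfig/network-scripts/ifcfg-eth0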
We are ready to go :-
# echo "exec fluxbox" > ~/.xinitrc
# startx
[root@dfw02 ~(keystone_admin)]$ nova list | grep LXW
| 492af969-72c0-4235-ac4e-d75d3778fd0a | VF20LXW | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.106 |
[root@dfw02 ~(keystone_admin)]$ nova show 492af969-72c0-4235-ac4e-d75d3778fd0a
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-02-06T09:38:52Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | dfw01.localdomain |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| int network | 10.0.0.4, 192.168.1.106 |
| hostId | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000021 |
| OS-SRV-USG:launched_at | 2014-02-05T17:47:38.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | dfw01.localdomain |
| flavor | m1.small (2) |
| id | 492af969-72c0-4235-ac4e-d75d3778fd0a |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| name | VF20LXW |
| created | 2014-02-05T17:47:33Z |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'd0c5706d-4193-4925-9140-29dea801b447'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
Switching to a Spice session improves X server behaviour on the F20 cloud instance.
# ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.137 ( Compute IP-address )
# ssh -L 5901:localhost:5901 -N -f -l root 192.168.1.137 ( Compute IP-address )
# ssh -L 5902:localhost:5902 -N -f -l root 192.168.1.137 ( Compute IP-address )
# spicy -h localhost -p 590(X)
The same command, `ifconfig eth0 mtu 1454 up`, will make ssh work from the Controller and Compute nodes.
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 964fd0b0-b331-4b0c-a1d5-118bf8a40abf | CentOS6.5 | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.105 |
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 14c49bfe-f99c-4f31-918e-dcf0fd42b49d | VF19RST | ACTIVE | None | Running | int=10.0.0.4, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL | SUSPENDED | None | Shutdown | int=10.0.0.6, 192.168.1.104 |
| 0e0b4f69-4cff-4423-ba9d-71c8eb53af16 | VF20KVM | ACTIVE | None | Running | int=10.0.0.7, 192.168.1.109 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
[root@dfw02 nova(keystone_admin)]$ ssh fedora@192.168.1.109
fedora@192.168.1.109's password:
Last login: Thu Jan 30 15:54:04 2014 from 192.168.1.127
[fedora@vf20kvm ~]$ ifconfig
eth0: flags=4163
inet 10.0.0.7 netmask 255.255.255.0 broadcast 10.0.0.255
inet6 fe80::f816:3eff:fec6:e89a prefixlen 64 scopeid 0x20
ether fa:16:3e:c6:e8:9a txqueuelen 1000 (Ethernet)
RX packets 630779 bytes 877092770 (836.4 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 166603 bytes 14706620 (14.0 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10
loop txqueuelen 0 (Local Loopback)
RX packets 2 bytes 140 (140.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 140 (140.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
So, loading a cloud instance via `nova boot --user-data=./myfile.txt ....` gives access to the command line, where the MTU for eth0 can be set to 1454; this makes the instance available for ssh connections from the Controller and Compute nodes and also makes Internet surfing possible on the Fedora 19, Fedora 20 and Ubuntu 13.10 Server instances.
The lightweight X Windows setup has been used for all cloud instances mentioned above.
[root@dfw02 ~(keystone_admin)]$ ip netns list
qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: qr-f933e768-42:
link/ether fa:16:3e:6a:d3:f0 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-f933e768-42
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe6a:d3f0/64 scope link
valid_lft forever preferred_lft forever
3: qg-54e34740-87:
link/ether fa:16:3e:00:9a:0d brd ff:ff:ff:ff:ff:ff
inet 192.168.1.100/24 brd 192.168.1.255 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet 192.168.1.101/32 brd 192.168.1.101 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet 192.168.1.102/32 brd 192.168.1.102 scope global qg-54e34740-87
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe00:9a0d/64 scope link
valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7 ip a
1: lo:
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ns-40dd712c-e4:
link/ether fa:16:3e:93:44:f8 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.3/24 brd 10.0.0.255 scope global ns-40dd712c-e4
valid_lft forever preferred_lft forever
inet6 fe80::f816:3eff:fe93:44f8/64 scope link
valid_lft forever preferred_lft forever
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 ip r
default via 192.168.1.1 dev qg-54e34740-87
10.0.0.0/24 dev qr-f933e768-42 proto kernel scope link src 10.0.0.1
192.168.1.0/24 dev qg-54e34740-87 proto kernel scope link src 192.168.1.100
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 \
> iptables -L -t nat | grep 169
REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http redir
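That REDIRECT rule is what lets instances reach the Nova metadata service; a quick check from inside any cloud instance (not part of the original transcript) would be:
$ curl http://169.254.169.254/latest/meta-data/instance-id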
[root@dfw02 ~(keystone_admin)]$ neutron net-list
+--------------------------------------+------+-----------------------------------------------------+
| id | name | subnets |
+--------------------------------------+------+-----------------------------------------------------+
| 1eea88bb-4952-4aa4-9148-18b61c22d5b7 | int | fa930cea-3d51-4cbe-a305-579f12aa53c0 10.0.0.0/24 |
| 780ce2f3-2e6e-4881-bbac-857813f9a8e0 | ext | f30e5a16-a055-4388-a6ea-91ee142efc3d 192.168.1.0/24 |
+--------------------------------------+------+-----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron subnet-list
+--------------------------------------+------+----------------+----------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------+----------------+----------------------------------------------------+
| fa930cea-3d51-4cbe-a305-579f12aa53c0 | | 10.0.0.0/24 | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| f30e5a16-a055-4388-a6ea-91ee142efc3d | | 192.168.1.0/24 | {"start": "192.168.1.100", "end": "192.168.1.200"} |
+--------------------------------------+------+----------------+----------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2 | 192.168.1.101 | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4 | 192.168.1.102 | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show af9c6ba6-e0ca-498e-8f67-b9327f75d93f
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.0.0.4 |
| floating_ip_address | 192.168.1.102 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | af9c6ba6-e0ca-498e-8f67-b9327f75d93f |
| port_id | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| router_id | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show 9d15609c-9465-4254-bdcb-43f072b6c7d4
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.0.0.2 |
| floating_ip_address | 192.168.1.101 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | 9d15609c-9465-4254-bdcb-43f072b6c7d4 |
| port_id | e4cb68c4-b932-4c83-86cd-72c75289114a |
| router_id | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
Snapshot :-
*********************************************************************************
Configuring Cinder to add GlusterFS. View also "Gluster 3.4.2 backend for Cinder"; that link provides much more detailed information than you will find below, in particular regarding the Gluster 3.4.2 two-node setup itself, IPv4 iptables firewall tuning, and setting up the required packages and initial steps on Fedora 20.
*********************************************************************************
# gluster volume create cinder-volumes05 replica 2 dfw02.localdomain:/data1/cinder5 dfw01.localdomain:/data1/cinder5
# gluster volume start cinder-volumes05
# gluster volume set cinder-volumes05 auth.allow 192.168.1.*
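Before pointing Cinder at the volume it is worth confirming that the replica 2 layout and the auth.allow setting took effect; typical checks (output omitted) are:
# gluster volume info cinder-volumes05
# gluster peer status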
# openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.glusterfs.GlusterfsDriver
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_shares_config /etc/cinder/shares.conf
# openstack-config --set /etc/cinder/cinder.conf DEFAULT glusterfs_mount_point_base /var/lib/cinder/volumes
# vi /etc/cinder/shares.conf
192.168.1.127:/cinder-volumes05
:wq
Update /etc/sysconfig/iptables:-
-A INPUT -p tcp -m multiport --dport 24007:24047 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT
Comment Out
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A INPUT -j REJECT --reject-with icmp-host-prohibited
# service iptables restart
*************************************************************************
On Controller (192.168.1.127) and on Compute (192.168.1.137)
*************************************************************************
Verify ports availability:-
[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep gluster
tcp 0 0 0.0.0.0:655 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN 2524/glusterfsd
tcp 0 0 0.0.0.0:2049 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38465 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38466 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:49155 0.0.0.0:* LISTEN 2525/glusterfsd
tcp 0 0 0.0.0.0:38468 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:38469 0.0.0.0:* LISTEN 2591/glusterfs
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN 2380/glusterd
To mount the gluster volume for the cinder backend in the current setup :-
# losetup -fv /cinder-volumes
# cinder list   (gives the id used below)
# cinder delete a94b97f5-120b-40bd-b59e-8962a5cb6296
The lines above delete testvol1, which had been created per Kashyap's schema to test cinder.
Ignoring this step would, in this particular situation, cause the restart of the openstack-cinder-volume service to fail.
# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done
Verification of service status :-
[root@dfw02 cinder(keystone_admin)]$ service openstack-cinder-volume status -l
Redirecting to /bin/systemctl status -l openstack-cinder-volume.service
openstack-cinder-volume.service - OpenStack Cinder Volume Server
Loaded: loaded (/usr/lib/systemd/system/openstack-cinder-volume.service; enabled)
Active: active (running) since Sat 2014-01-25 07:43:10 MSK; 6s ago
Main PID: 21727 (cinder-volume)
CGroup: /system.slice/openstack-cinder-volume.service
├─21727 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
├─21736 /usr/bin/python /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/volume.log
└─21793 /usr/sbin/glusterfs --volfile-id=cinder-volumes05 --volfile-server=192.168.1.127 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:10 dfw02.localdomain systemd[1]: Started OpenStack Cinder Volume Server.
Jan 25 07:43:11 dfw02.localdomain cinder-volume[21727]: 2014-01-25 07:43:11.402 21736 WARNING cinder.volume.manager [req-69c0060b-b5bf-4bce-8a8e-f2218dec7638 None None] Unable to update stats, driver is uninitialized
Jan 25 07:43:11 dfw02.localdomain sudo[21754]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 192.168.1.127:cinder-volumes05 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
Jan 25 07:43:11 dfw02.localdomain sudo[21803]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
[root@dfw02 cinder(keystone_admin)]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora00-root 96G 7.4G 84G 9% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 152K 3.9G 1% /dev/shm
tmpfs 3.9G 1.2M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 184K 3.9G 1% /tmp
/dev/sda5 477M 101M 347M 23% /boot
/dev/mapper/fedora00-data1 77G 53M 73G 1% /data1
tmpfs 3.9G 1.2M 3.9G 1% /run/netns
192.168.1.127:/cinder-volumes05 77G 52M 73G 1% /var/lib/cinder/volumes/62f75cf6996a8a6bcc0d343be378c10a
At runtime on Compute Node :-
[root@dfw01 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora-root 96G 54G 38G 59% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 484K 3.9G 1% /dev/shm
tmpfs 3.9G 1.3M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 36K 3.9G 1% /tmp
/dev/sda5 477M 121M 327M 27% /boot
/dev/mapper/fedora-data1 77G 6.7G 67G 10% /data1
192.168.1.127:/cinder-volumes05 77G 6.7G 67G 10% /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a
[root@dfw02 ~(keystone_admin)]$ nova image-list
+--------------------------------------+------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+------------------+--------+--------+
| dc992799-7831-4933-b6ee-7b81868f808b | CirrOS31 | ACTIVE | |
| 03c9ad20-b0a3-4b71-aa08-2728ecb66210 | Fedora 19 x86_64 | ACTIVE | |
+--------------------------------------+------------------+--------+--------+
[root@dfw02 ~(keystone_admin)]$ cinder create --image-id 03c9ad20-b0a3-4b71-aa08-2728ecb66210 \
> --display-name Fedora19VLG 7
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-01-25T03:45:21.124690 |
| display_description | None |
| display_name | Fedora19VLG |
| id | 5f0f096b-192a-435b-bdbc-5063ed5c6366 |
| image_id | 03c9ad20-b0a3-4b71-aa08-2728ecb66210 |
| metadata | {} |
| size | 7 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
[root@dfw02 cinder5(keystone_admin)]$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 5f0f096b-192a-435b-bdbc-5063ed5c6366 | available | Fedora19VLG | 7 | None | true | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Loading an instance via the created volume on GlusterFS
**********************************************************************************
UPDATE on 03/09/2014. In the meantime I am able to load an instance via a glusterfs-based cinder volume only via the command :-
**********************************************************************************
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block-device source=image,id=d0e90250-5814-4685-9b8d-65ec9daa7117,dest=volume,size=5,shutdown=preserve,bootindex=0 VF20RS012
***********************************************************************************
Update on 03/11/2014.
***********************************************************************************
The standard schema via `cinder create --image-id IMAGE_ID --display_name VOL_NAME SIZE` && `nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 INSTANCE_NAME` started to work fine. The schema described in the previous UPDATE of 03/09/14, on the contrary, stopped working smoothly on glusterfs-based cinder volumes.
However, even when the instance ends up in "Error" status, that schema creates a glusterfs cinder volume (with a system id) which is quite healthy and may be utilized for building a new instance of F20 or Ubuntu 14.04, whatever the original image was, via the CLI or the Dashboard. It looks like a kind of bug in Nova and Neutron interprocess communication, I would say synchronization at boot-up.
Please view :-
"Provide an API for external services to send defined events to the compute service for synchronization. This includes immediate needs for nova-neutron interaction around boot timing and network info updates"
https://blueprints.launchpad.net/nova/+spec/admin-event-callback-api
and bug report :-
https://bugs.launchpad.net/nova/+bug/1280357
As far as I can see target milestone is "Icehouse-rc1"
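A sketch of how such an orphaned-but-healthy volume can be reused (the instance names are placeholders and VOLUME_ID stands for the id reported by `cinder list`):
[root@dallas1 ~(keystone_boris)]$ nova delete VF20RS012
[root@dallas1 ~(keystone_boris)]$ cinder list
[root@dallas1 ~(keystone_boris)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=VOLUME_ID:::0 VF20RS013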
***********************************************************************************
View also "Launching Instance via image and creating simultaneously bootable cinder volume on Two Node GRE+OVS+Gluster F20 Cluster".
[root@dfw02 ~(keystone_admin)]$ nova boot --flavor 2 --user-data=./myfile.txt --block_device_mapping vda=5f0f096b-192a-435b-bdbc-5063ed5c6366:::0 VF19VLGL
+--------------------------------------+----------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------+
| OS-EXT-STS:task_state | scheduling |
| image | Attempt to boot from volume - no image supplied |
| OS-EXT-STS:vm_state | building |
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
| OS-SRV-USG:launched_at | None |
| flavor | m1.small |
| id | 5aa903c5-624d-4dde-9e3c-49996d4a5edc |
| security_groups | [{u'name': u'default'}] |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| OS-DCF:diskConfig | MANUAL |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
| status | BUILD |
| updated | 2014-01-25T03:59:12Z |
| hostId | |
| OS-EXT-SRV-ATTR:host | None |
| OS-SRV-USG:terminated_at | None |
| key_name | None |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| name | VF19VLGL |
| adminPass | Aq4LBKP9rBGF |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| created | 2014-01-25T03:59:12Z |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| metadata | {} |
+--------------------------------------+----------------------------------------------------+
In just a second the new instance will be booted via the created volume on GlusterFS (Fedora 20: QEMU 1.6, Libvirt 1.1.3).
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| aaeada4a-6a83-4cbc-ac8b-96a8b1fa81ad | VF19GL | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.103 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL | ACTIVE | None | Running | int=10.0.0.6 |
+--------------------------------------+-----------+-----------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron port-list --device-id 5aa903c5-624d-4dde-9e3c-49996d4a5edc
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| 7196be1f-9216-4bfd-ac8b-9903780936d9 | | fa:16:3e:4b:97:90 | {"subnet_id": "fa930cea-3d51-4cbe-a305-579f12aa53c0", "ip_address": "10.0.0.6"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-list
+--------------------------------------+------------------+---------------------+--------------------------------------+
| id | fixed_ip_address | floating_ip_address | port_id |
+--------------------------------------+------------------+---------------------+--------------------------------------+
| 04ccafab-1878-44f6-b5ab-a1e2ea1faa97 | 10.0.0.5 | 192.168.1.103 | 1d10dc02-c0f2-4225-ae61-db281f3af69c |
| 9d15609c-9465-4254-bdcb-43f072b6c7d4 | 10.0.0.2 | 192.168.1.101 | e4cb68c4-b932-4c83-86cd-72c75289114a |
| af9c6ba6-e0ca-498e-8f67-b9327f75d93f | 10.0.0.4 | 192.168.1.102 | c6914ab9-6c83-4cb6-bf18-f5b0798ec513 |
| c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e | | 192.168.1.104 | |
+--------------------------------------+------------------+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-associate c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e 7196be1f-9216-4bfd-ac8b-9903780936d9
Associated floatingip c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e
[root@dfw02 ~(keystone_admin)]$ neutron floatingip-show c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 10.0.0.6 |
| floating_ip_address | 192.168.1.104 |
| floating_network_id | 780ce2f3-2e6e-4881-bbac-857813f9a8e0 |
| id | c8bb83a0-73cd-4cc1-9dc1-5f1f6c74e86e |
| port_id | 7196be1f-9216-4bfd-ac8b-9903780936d9 |
| router_id | bf360d81-79fb-4636-8241-0a843f228fc8 |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
+---------------------+--------------------------------------+
[root@dfw02 ~(keystone_admin)]$ ping 192.168.1.104
PING 192.168.1.104 (192.168.1.104) 56(84) bytes of data.
64 bytes from 192.168.1.104: icmp_seq=1 ttl=63 time=4.19 ms
64 bytes from 192.168.1.104: icmp_seq=2 ttl=63 time=1.32 ms
64 bytes from 192.168.1.104: icmp_seq=3 ttl=63 time=1.06 ms
64 bytes from 192.168.1.104: icmp_seq=4 ttl=63 time=1.11 ms
64 bytes from 192.168.1.104: icmp_seq=5 ttl=63 time=1.13 ms
64 bytes from 192.168.1.104: icmp_seq=6 ttl=63 time=1.02 ms
64 bytes from 192.168.1.104: icmp_seq=7 ttl=63 time=1.05 ms
64 bytes from 192.168.1.104: icmp_seq=8 ttl=63 time=1.08 ms
64 bytes from 192.168.1.104: icmp_seq=9 ttl=63 time=0.974 ms
64 bytes from 192.168.1.104: icmp_seq=10 ttl=63 time=1.03 ms
The I/O speed improvement is noticeable at boot time and during disk operations inside the instance.
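A rough way to compare disk throughput inside a running instance is a direct-I/O write test (illustrative only; the file path is arbitrary and the numbers depend on the hardware, the network and the Gluster volume layout):
$ dd if=/dev/zero of=/home/fedora/testfile bs=1M count=1024 oflag=direct
$ rm -f /home/fedora/testfile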
[root@dfw02 ~(keystone_admin)]$ nova list
+--------------------------------------+--------------------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------------+-----------+------------+-------------+-----------------------------+
| 3f2db906-567c-48b0-967e-799b2bffe277 | Cirros312 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.101 |
| 02ef842e-b86f-4545-a018-33835c5350f8 | UbuntuSalanaderVLG | SUSPENDED | None | Shutdown | int=10.0.0.7, 192.168.1.105 |
| 58f8f449-f109-42cf-92e2-d5f8b194d814 | VF19DFW | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.109 |
| 5aa903c5-624d-4dde-9e3c-49996d4a5edc | VF19VLGL | ACTIVE | None | Running | int=10.0.0.6, 192.168.1.104 |
+--------------------------------------+--------------------+-----------+------------+-------------+-----------------------------+
[root@dfw02 ~(keystone_admin)]$ nova show 5aa903c5-624d-4dde-9e3c-49996d4a5edc
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-01-25T20:13:54Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | dfw01.localdomain |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| int network | 10.0.0.6, 192.168.1.104 |
| hostId | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000005 |
| OS-SRV-USG:launched_at | 2014-01-25T03:59:17.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | dfw01.localdomain |
| flavor | m1.small (2) |
| id | 5aa903c5-624d-4dde-9e3c-49996d4a5edc |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| name | VF19VLGL |
| created | 2014-01-25T03:59:12Z |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'5f0f096b-192a-435b-bdbc-5063ed5c6366'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
On Compute Node :-
[root@dfw01]# service openstack-nova-compute status -l
Redirecting to /bin/systemctl status -l openstack-nova-compute.service
openstack-nova-compute.service - OpenStack Nova Compute Server
Loaded: loaded (/usr/lib/systemd/system/openstack-nova-compute.service; enabled)
Active: active (running) since Tue 2014-01-28 20:28:06 MSK; 10min ago
Main PID: 3969 (nova-compute)
CGroup: /system.slice/openstack-nova-compute.service
├─3969 /usr/bin/python /usr/bin/nova-compute --logfile /var/log/nova/compute.log
└─5440 /usr/sbin/glusterfs --volfile-id=cinder-volumes05 --volfile-server=192.168.1.127 /var/lib/nova/mnt/62f75cf6996a8a6bcc0d343be378c10a
Jan 28 20:35:02 dfw01.localdomain sudo[5515]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link add qvb3465c1f6-6f type veth peer name qvo3465c1f6-6f
Jan 28 20:35:02 dfw01.localdomain sudo[5522]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb3465c1f6-6f up
Jan 28 20:35:02 dfw01.localdomain sudo[5525]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvb3465c1f6-6f promisc on
Jan 28 20:35:02 dfw01.localdomain sudo[5528]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo3465c1f6-6f up
Jan 28 20:35:02 dfw01.localdomain sudo[5531]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qvo3465c1f6-6f promisc on
Jan 28 20:35:02 dfw01.localdomain sudo[5534]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ip link set qbr3465c1f6-6f up
Jan 28 20:35:02 dfw01.localdomain sudo[5537]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf brctl addif qbr3465c1f6-6f qvb3465c1f6-6f
Jan 28 20:35:02 dfw01.localdomain sudo[5540]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf ovs-vsctl -- --may-exist add-port br-int qvo3465c1f6-6f -- set Interface qvo3465c1f6-6f external-ids:iface-id=3465c1f6-6f58-46c4-b0cf-049d89603e5f external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d6:ef:b2 external-ids:vm-uuid=14c49bfe-f99c-4f31-918e-dcf0fd42b49d
Jan 28 20:35:02 dfw01.localdomain ovs-vsctl[5542]: ovs|00001|vsctl|INFO|Called as /bin/ovs-vsctl -- --may-exist add-port br-int qvo3465c1f6-6f -- set Interface qvo3465c1f6-6f external-ids:iface-id=3465c1f6-6f58-46c4-b0cf-049d89603e5f external-ids:iface-status=active external-ids:attached-mac=fa:16:3e:d6:ef:b2 external-ids:vm-uuid=14c49bfe-f99c-4f31-918e-dcf0fd42b49d
Jan 28 20:35:03 dfw01.localdomain sudo[5557]: nova : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/nova-rootwrap /etc/nova/rootwrap.conf tee /sys/class/net/tap3465c1f6-6f/brport/hairpin_mode
************************************************************************************
A CentOS 6.5 instance was able to start its own X Server in a VNC session from F20, in other words it acted as a client of the F20 host's X Server (?).
************************************************************************************
Setting up an Ubuntu 13.10 cloud instance; its `nova list` entry:
| 812d369d-e351-469e-8820-a2d0d8740716 | UbuntuSalamander | ACTIVE | None | Running | int=10.0.0.8, 192.168.1.110 |
[root@dfw02 ~(keystone_admin)]$ nova show 812d369d-e351-469e-8820-a2d0d8740716
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-01-31T04:46:30Z |
| OS-EXT-STS:task_state | None |
| OS-EXT-SRV-ATTR:host | dfw01.localdomain |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| int network | 10.0.0.8, 192.168.1.110 |
| hostId | b67c11ccfa8ac8c9ed019934fa650d307f91e7fe7bbf8cd6874f3e01 |
| OS-EXT-STS:vm_state | active |
| OS-EXT-SRV-ATTR:instance_name | instance-00000016 |
| OS-SRV-USG:launched_at | 2014-01-31T04:46:30.000000 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | dfw01.localdomain |
| flavor | m1.small (2) |
| id | 812d369d-e351-469e-8820-a2d0d8740716 |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 970ed56ef7bc41d59c54f5ed8a1690dc |
| name | UbuntuSalamander |
| created | 2014-01-31T04:46:25Z |
| tenant_id | d0a0acfdb62b4cc8a2bfa8d6a08bb62f |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'34bdf9d9-5bcc-4b62-8140-919c00fe07df'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dfw02 ~(keystone_admin)]$ ssh ubuntu@192.168.1.110
ubuntu@192.168.1.110's password:
Welcome to Ubuntu 13.10 (GNU/Linux 3.11.0-15-generic x86_64)
* Documentation: https://help.ubuntu.com/
System information as of Fri Jan 31 05:13:19 UTC 2014
System load: 0.08 Processes: 73
Usage of /: 11.4% of 6.86GB Users logged in: 1
Memory usage: 3% IP address for eth0: 10.0.0.8
Swap usage: 0%
Graph this data and manage this system at:
https://landscape.canonical.com/
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
Last login: Fri Jan 31 05:13:25 2014 from 192.168.1.127
ubuntu@ubuntusalamander:~$ ifconfig
eth0 Link encap:Ethernet HWaddr fa:16:3e:1e:16:35
inet addr:10.0.0.8 Bcast:10.0.0.255 Mask:255.255.255.0
inet6 addr: fe80::f816:3eff:fe1e:1635/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1454 Metric:1
RX packets:854 errors:0 dropped:0 overruns:0 frame:0
TX packets:788 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:85929 (85.9 KB) TX bytes:81060 (81.0 KB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
$ sudo apt-get install xorg fluxbox firefox gnome-terminal
Reboot
$ startx
A right mouse click on the desktop opens an X terminal:
$ /usr/bin/firefox &
Testing the tenant's network (kashyap)
[root@dallas1 ~]# . keystonerc_kashyap
[root@dallas1 ~(keystone_kashyap)]$ neutron net-list
+--------------------------------------+------+---------------------------------------+
| id | name | subnets |
+--------------------------------------+------+---------------------------------------+
| 082249a5-08f4-478f-b176-effad0ef6843 | ext | 7dd9ee7e-3c1e-4850-a78e-375c7268019f |
+--------------------------------------+------+---------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron router-create router02
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | 65e6de75-c7ec-40a7-9a7b-bd37e133cb1c |
| name | router02 |
| status | ACTIVE |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+-----------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron router-gateway-set router02 ext
Set gateway for router router02
[root@dallas1 ~(keystone_kashyap)]$ neutron net-create int01
Created a new network:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| admin_state_up | True |
| id | 388c5557-1c53-4195-aed1-726a4fe7af55 |
| name | int01 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+----------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron subnet-create int01 30.0.0.0/24 --dns_nameservers list=true 83.221.202.254
Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "30.0.0.2", "end": "30.0.0.254"} |
| cidr | 30.0.0.0/24 |
| dns_nameservers | 83.221.202.254 |
| enable_dhcp | True |
| gateway_ip | 30.0.0.1 |
| host_routes | |
| id | 3e3b07fd-53b0-4186-8fd6-859a4dd422f8 |
| ip_version | 4 |
| name | |
| network_id | 388c5557-1c53-4195-aed1-726a4fe7af55 |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+------------------+--------------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron router-interface-add router02 3e3b07fd-53b0-4186-8fd6-859a4dd422f8
Added interface 5e69cdcc-3764-45c4-925c-ae53a5500b26 to router router02.
[root@dallas1 ~(keystone_kashyap)]$ neutron subnet-list
+--------------------------------------+------+-------------+--------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------+-------------+--------------------------------------------+
| 3e3b07fd-53b0-4186-8fd6-859a4dd422f8 | | 30.0.0.0/24 | {"start": "30.0.0.2", "end": "30.0.0.254"} |
+--------------------------------------+------+-------------+--------------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ glance image-list
+--------------------------------------+--------------------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------------------------+-------------+------------------+-----------+--------+
| 2cada436-30a7-425b-9e75-ce56764cdd13 | Cirros31 | qcow2 | bare | 13147648 | active |
| fd1cd492-d7d8-4fc3-961a-0b43f9aa148d | Fedora 20 Image | qcow2 | bare | 214106112 | active |
| c0b90f9e-fd47-46da-b98b-1144a41a6c08 | Fedora 20 x86_64 | qcow2 | bare | 214106112 | active |
| 1def8fdc-9fe9-400d-944a-707d1352b6da | New Fedora 20 image | qcow2 | bare | 214106112 | active |
| 2dcc95ad-ebef-43f1-ae14-8d28e6f8194b | Ubuntu 13.10 Server | qcow2 | bare | 244711424 | active |
| 14cf6e7b-9aed-40c6-8185-366eb0c4c397 | Ubuntu Salamander Server | qcow2 | bare | 244711424 | active |
| b94f3144-0337-4b0c-8c2b-18bbb18be6c8 | Ubuntu Saucy | qcow2 | bare | 244711424 | active |
+--------------------------------------+--------------------------+-------------+------------------+-----------+--------+
[root@dallas1 ~(keystone_kashyap)]$ nova boot --flavor 2 --user-data=./myfile.txt --image fd1cd492-d7d8-4fc3-961a-0b43f9aa148d VF20RSX
+--------------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------------+--------------------------------------+
| status | BUILD |
| updated | 2014-02-20T15:42:28Z |
| OS-EXT-STS:task_state | scheduling |
| key_name | None |
| image | Fedora 20 Image |
| hostId | |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | None |
| flavor | m1.small |
| id | 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | abb1fa95b0ec448ea8da3cc99d61d301 |
| name | VF20RSX |
| adminPass | eHCQZ5fD2MpR |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
| created | 2014-02-20T15:42:27Z |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-AZ:availability_zone | nova |
| config_drive | |
+--------------------------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+----------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+----------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | BUILD | spawning | NOSTATE | |
+--------------------------------------+---------+--------+------------+-------------+----------+
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+----------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+----------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2 |
+--------------------------------------+---------+--------+------------+-------------+----------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron floatingip-create ext
Created a new floatingip:
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | |
| floating_ip_address | 192.168.1.108 |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id | b2b428c4-71bc-4391-a2f0-592abf6990c8 |
| port_id | |
| router_id | |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+---------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron port-list --device-id 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
| ce1e02fe-cfd8-4802-85d0-b628beb56bff | | fa:16:3e:39:d6:38 | {"subnet_id": "3e3b07fd-53b0-4186-8fd6-859a4dd422f8", "ip_address": "30.0.0.2"} |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron floatingip-associate b2b428c4-71bc-4391-a2f0-592abf6990c8 ce1e02fe-cfd8-4802-85d0-b628beb56bff
Associated floatingip b2b428c4-71bc-4391-a2f0-592abf6990c8
[root@dallas1 ~(keystone_kashyap)]$ neutron floatingip-show b2b428c4-71bc-4391-a2f0-592abf6990c8
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| fixed_ip_address | 30.0.0.2 |
| floating_ip_address | 192.168.1.108 |
| floating_network_id | 082249a5-08f4-478f-b176-effad0ef6843 |
| id | b2b428c4-71bc-4391-a2f0-592abf6990c8 |
| port_id | ce1e02fe-cfd8-4802-85d0-b628beb56bff |
| router_id | 65e6de75-c7ec-40a7-9a7b-bd37e133cb1c |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+---------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron security-group-list
+--------------------------------------+---------+-------------+
| id | name | description |
+--------------------------------------+---------+-------------+
| 378e5257-dfe4-4101-b6f5-047591681e27 | default | default |
+--------------------------------------+---------+-------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron security-group-rule-create --protocol icmp \
> --direction ingress --remote-ip-prefix 0.0.0.0/0 378e5257-dfe4-4101-b6f5-047591681e27
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | 829463b5-cd24-48b6-ba80-cc0c3ad2ab3e |
| port_range_max | |
| port_range_min | |
| protocol | icmp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 378e5257-dfe4-4101-b6f5-047591681e27 |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+-------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ neutron security-group-rule-create --protocol tcp \
> --port-range-min 22 --port-range-max 22 \
> --direction ingress --remote-ip-prefix 0.0.0.0/0 378e5257-dfe4-4101-b6f5-047591681e27
Created a new security_group_rule:
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| direction | ingress |
| ethertype | IPv4 |
| id | fee6ad64-238e-4628-8457-4c19d198182f |
| port_range_max | 22 |
| port_range_min | 22 |
| protocol | tcp |
| remote_group_id | |
| remote_ip_prefix | 0.0.0.0/0 |
| security_group_id | 378e5257-dfe4-4101-b6f5-047591681e27 |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
+-------------------+--------------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ ping 192.168.1.108
PING 192.168.1.108 (192.168.1.108) 56(84) bytes of data.
64 bytes from 192.168.1.108: icmp_seq=1 ttl=63 time=4.06 ms
64 bytes from 192.168.1.108: icmp_seq=2 ttl=63 time=0.688 ms
64 bytes from 192.168.1.108: icmp_seq=3 ttl=63 time=0.853 ms
64 bytes from 192.168.1.108: icmp_seq=4 ttl=63 time=0.631 ms
64 bytes from 192.168.1.108: icmp_seq=5 ttl=63 time=0.762 ms
^C
--- 192.168.1.108 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.631/1.398/4.060/1.333 ms
# ssh-keygen
# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.140 (Compute)
The following block is included in /etc/rc.d/rc.local to forward the VNC console ports from the Compute node (192.168.1.140):-
ssh -L 5900:localhost:5900 -N -f -l root 192.168.1.140
ssh -L 5901:localhost:5901 -N -f -l root 192.168.1.140
ssh -L 5902:localhost:5902 -N -f -l root 192.168.1.140
ssh -L 5903:localhost:5903 -N -f -l root 192.168.1.140
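Each line forwards a local VNC display to the corresponding qemu-kvm VNC port on the Compute node, so `vncviewer localhost:0` below attaches to the console listening on port 5900. To check which display a given instance uses, the libvirt domain name reported by `nova show` (OS-EXT-SRV-ATTR:instance_name) may be queried on the Compute node; a sketch with an illustrative domain name:
# virsh vncdisplay instance-00000005
:0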
[root@dallas1 ~(keystone_kashyap)]$ vncviewer localhost:0
TigerVNC Viewer 64-bit v1.3.0 (20140121)
Built on Jan 21 2014 at 09:40:20
Copyright (C) 1999-2011 TigerVNC Team and many others (see README.txt)
See http://www.tigervnc.org for information on TigerVNC.
Thu Feb 20 19:48:32 2014
CConn: connected to host localhost port 5900
CConnection: Server supports RFB protocol version 3.8
CConnection: Using RFB protocol version 3.8
PlatformPixelBuffer: Using default colormap and visual, TrueColor, depth 24.
DesktopWindow: Adjusting window size to avoid accidental full screen request
CConn: Using pixel format depth 24 (32bpp) little-endian rgb888
CConn: Using Tight encoding
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova reboot VF20RSX
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | REBOOT | rebooting | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ ping 192.168.1.108
PING 192.168.1.108 (192.168.1.108) 56(84) bytes of data.
64 bytes from 192.168.1.108: icmp_seq=1 ttl=63 time=5.75 ms
64 bytes from 192.168.1.108: icmp_seq=2 ttl=63 time=1.00 ms
64 bytes from 192.168.1.108: icmp_seq=3 ttl=63 time=0.749 ms
[root@dallas1 ~(keystone_kashyap)]$ nova list
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
| 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f | VF20RSX | ACTIVE | None | Running | int01=30.0.0.2, 192.168.1.108 |
+--------------------------------------+---------+--------+------------+-------------+-------------------------------+
[root@dallas1 ~(keystone_kashyap)]$ nova show 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-02-20T16:56:41Z |
| OS-EXT-STS:task_state | None |
| key_name | None |
| image | Fedora 20 Image (fd1cd492-d7d8-4fc3-961a-0b43f9aa148d) |
| hostId | 684566c890e07a7c31cb0265f3ba21a9e009391b12e0bbf1822ad75c |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-02-20T15:42:39.000000 |
| flavor | m1.small (2) |
| id | 0d0bd2ec-90f0-4203-b1bc-b946f7f0a91f |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | abb1fa95b0ec448ea8da3cc99d61d301 |
| name | VF20RSX |
| created | 2014-02-20T15:42:27Z |
| tenant_id | ab1cd5ee334a4caeafdb2df90540359a |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| int01 network | 30.0.0.2, 192.168.1.108 |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dallas1 ~(keystone_admin)]$ keystone user-list
+----------------------------------+---------+---------+-------+
| id | name | enabled | email |
+----------------------------------+---------+---------+-------+
| 974006673310455e8893e692f1d9350b | admin | True | |
| fbba3a8646dc44e28e5200381d77493b | cinder | True | |
| 0214c6ae6ebc4d6ebeb3e68d825a1188 | glance | True | |
| abb1fa95b0ec448ea8da3cc99d61d301 | kashyap | True | |
| 329b3ca03a894b319420b3a166d461b5 | neutron | True | |
| 89b3f7d54dd04648b0519f8860bd0f2a | nova | True | |
+----------------------------------+---------+---------+-------+
Check the tenant :-
[root@dfw02 ~(keystone_boris)]$ nova list
+--------------------------------------+-----------+--------+------------+-------------+------------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+-----------+--------+------------+-------------+------------------------------+
| 5fcd83c3-1d4e-4b11-bfe5-061a03b73174 | UbuntuRSX | ACTIVE | None | Running | int1=40.0.0.5, 192.168.1.120 |
| 4028b4a7-de0c-4226-89ac-1543fb9382d7 | VF19RSX | ACTIVE | None | Running | int1=40.0.0.2, 192.168.1.118 |
| 99a7e40c-896f-42c9-a18d-4a1368de49e9 | VF20RSX | ACTIVE | None | Running | int1=40.0.0.4, 192.168.1.119 |
+--------------------------------------+-----------+--------+------------+-------------+------------------------------+
[root@dfw02 ~(keystone_boris)]$ nova show VF20RSX
+--------------------------------------+----------------------------------------------------------+
| Property | Value |
+--------------------------------------+----------------------------------------------------------+
| status | ACTIVE |
| updated | 2014-02-26T16:19:31Z |
| OS-EXT-STS:task_state | None |
| key_name | None |
| image | Attempt to boot from volume - no image supplied |
| hostId | 73ee4f5bd4da8ad7b39d768d0b167a03ac0471ea50d9ded6c6190fb1 |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2014-02-26T16:19:31.000000 |
| flavor | m1.small (2) |
| id | 99a7e40c-896f-42c9-a18d-4a1368de49e9 |
| security_groups | [{u'name': u'default'}] |
| OS-SRV-USG:terminated_at | None |
| user_id | 162021e787c54cac906ab3296a386006 |
| name | VF20RSX |
| created | 2014-02-26T16:19:26Z |
| tenant_id | 4dacfff9e72c4245a48d648ee23468d5 |
| OS-DCF:diskConfig | MANUAL |
| metadata | {} |
| os-extended-volumes:volumes_attached | [{u'id': u'0322b452-8fbe-470f-acf1-2e60740ba3f2'}] |
| accessIPv4 | |
| accessIPv6 | |
| progress | 0 |
| OS-EXT-STS:power_state | 1 |
| OS-EXT-AZ:availability_zone | nova |
| int1 network | 40.0.0.4, 192.168.1.119 |
| config_drive | |
+--------------------------------------+----------------------------------------------------------+
[root@dfw02 ~(keystone_boris)]$ exit
logout
[boris@dfw02 ~]$ sudo su -
Last login: Wed Feb 26 21:42:10 MSK 2014 on pts/4
[root@dfw02 ~]# . keystonerc_admin
[root@dfw02 ~(keystone_admin)]$ keystone tenant-list
+----------------------------------+----------+---------+
| id | name | enabled |
+----------------------------------+----------+---------+
| d0a0acfdb62b4cc8a2bfa8d6a08bb62f | admin | True |
| 4dacfff9e72c4245a48d648ee23468d5 | ostenant | True |
| 04ebe929a2a34557af21b6a735986278 | services | True |
+----------------------------------+----------+---------+
The original text of these documents was posted on fedoraproject.org by Kashyap.
The attached ones are tuned for the new IPs and should no longer contain the typos of the original version. They also contain the MySQL preventive updates currently required for the openstack-nova-compute & neutron-openvswitch-agent remote connections to the Controller Node to succeed. /etc/sysconfig/iptables was updated on the Controller and Compute Nodes; the lines below were commented out :-
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
# -A INPUT -j REJECT --reject-with icmp-host-prohibited
This was needed to be able to set up the Gluster 3.4.2 cluster and use a gluster replica 2 volume as storage for Cinder.
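After commenting those lines out the firewall has to be reloaded on both nodes; a minimal sketch (assuming the iptables.service unit from iptables-services manages /etc/sysconfig/iptables, as is usual on Fedora 20):
[root@dfw02 ~]# systemctl restart iptables.service
[root@dfw01 ~]# systemctl restart iptables.service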
The MySQL stuff is mine. All attached *.conf & *.ini files have been updated for my network as well.
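A hedged sketch of the kind of MySQL update meant here, run on the Controller (dfw02.localdomain is its FQDN; 'nova_db_pass' is a placeholder for the password actually set in nova.conf, and an analogous grant may be needed for the neutron database user):
[root@dfw02 ~]# mysql -u root -p -e "GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'dfw02.localdomain' IDENTIFIED BY 'nova_db_pass'; FLUSH PRIVILEGES;"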
In the meantime I am quite sure that using Libvirt's default and non-default networks for creating the Controller and Compute nodes as F20 VMs is not important. The configs allow metadata to be sent from the Controller to the Compute node on real physical boxes. Just one Ethernet controller per box should be required when using GRE tunnelling for an RDO Havana manual setup on Fedora 20.
References
1. http://textuploader.com/1hin
2. http://textuploader.com/1hey