************************
UPDATE 08/08/2015
************************
After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9 the procedure of RDO Kilo install on F22 has changed significantly. Details follow below.
******************************************************************************
Action to be undertaken on Controller before deployment:
******************************************************************************
As a pre-install step apply the patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located under
/usr/lib/python2.7/site-packages/packstack/puppet/templates.
Another option is to rebuild openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc23.src.rpm on Fedora 22
with the patch 0002-Avoid-running-neutron-db-manage-twice:
place the patch in SOURCES and update the spec file correspondingly (a sketch follows).
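A minimal sketch of the SRPM rebuild route; the .patch file extension, the spec file name and the exact Patch/%patch tags are assumptions, adjust them to your tree:
# rpm -ivh openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc23.src.rpm
# cp 0002-Avoid-running-neutron-db-manage-twice.patch ~/rpmbuild/SOURCES/
# vi ~/rpmbuild/SPECS/openstack-packstack.spec
  (add "Patch2: 0002-Avoid-running-neutron-db-manage-twice.patch" next to the other Patch lines
   and a matching "%patch2 -p1" in %prep)
# rpmbuild -ba ~/rpmbuild/SPECS/openstack-packstack.spec
# yum localinstall ~/rpmbuild/RPMS/noarch/openstack-packstack-2015.1-0.10*.noarch.rpm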
You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042 ;
the workaround is in comments 6 and 11.
*******************************************************************************
I also commented out the second line in /etc/httpd/conf.d/mod_dnssd.conf
*******************************************************************************
SELINUX is switched to permissive mode on all deployment nodes
# packstack --answer-file=./answer3Node.txt
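For reference, a sketch of the answer-file keys that matter most for this three node layout; the host addresses are the management IPs used further below in this post, while the tunnel interface name is an assumption:
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_KEYSTONE_SERVICE_NAME=httpd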
****************
END UPDATE
****************
Following below is a brief instruction for a three node deployment test
(Controller && Network && Compute) across Fedora 22 VMs for RDO Kilo, performed on a Fedora 22 host with
QEMU/KVM/Libvirt Hypervisor
(16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P ).
Three VMs (4 GB RAM, 4 VCPUS) have been set up: the Controller VM with one VNIC (management subnet),
the Network Node VM with three VNICS (management, VTEP's and external subnets),
and the Compute Node VM with two VNICS (management and VTEP's subnets).
I avoid using the default libvirt subnet 192.168.122.0/24 for any purpose related to the VMs
serving as RDO Kilo nodes; for some reason it causes network congestion when forwarding packets
to the Internet and vice versa.
# virsh net-list
Name State Autostart Persistent
--------------------------------------------------------------------------
default active yes yes
openstackvms active yes yes
public active yes yes
vteps active yes yes
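The "openstackvms" definition is shown in the two node section further below; for "vteps" a minimal isolated network is enough. A sketch, assuming a 10.0.0.0/24 subnet (the VXLAN local_ip/remote_ip values seen later suggest it) and an arbitrary bridge name:
<network>
  <name>vteps</name>
  <bridge name='virbr4' stp='on' delay='0'/>
  <ip address='10.0.0.1' netmask='255.255.255.0'/>
</network>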
*********************************************************************************
1. First Libvirt subnet "openstackvms" serves as management network.
All three VMs are attached to this subnet.
**********************************************************************************
2. Second Libvirt subnet "public" serves to simulate the external network.
The Network Node is attached to "public"; later on its "eth2" interface (which belongs to
"public") is converted into an OVS port of br-ex on the Network Node.
Via the bridge virbr2 (172.24.4.225) this Libvirt subnet provides the VMs
running on the Compute Node with access to the Internet, because it matches the external
network 172.24.4.224/28 created by the packstack installation.
*************************************************
On Hypervisor Host ( Fedora 22)
*************************************************
# iptables -S -t nat
. . . . . .
-A POSTROUTING -s 172.24.4.224/28 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -j MASQUERADE
. . . . . .
***********************************************************************************
3. Third Libvirt subnet "vteps" serves for VTEP endpoint simulation.
The Network and Compute Node VMs are attached to this subnet.
********************************************************************************
**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer (a sketch is shown right below).
**********************************************************************************
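A sketch of such files, assuming br-ex takes a free address in 172.24.4.224/28 (172.24.4.230 here, picked arbitrarily), virbr2 (172.24.4.225) as the default gateway and a public DNS server:

# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=172.24.4.230
NETMASK=255.255.255.240
GATEWAY=172.24.4.225
DNS1=8.8.8.8
ONBOOT=yes
NM_CONTROLLED=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no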
*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# reboot
*************************************************
General Three node RDO Kilo system layout
*************************************************
1) to VM 192.169.142.127
$ spicy -h localhost -p 5902
2) to VM 192.169.142.147
$ spicy -h localhost -p 5901
3) to VM 192.169.142.137
$ spicy -h localhost -p 5900
From my standpoint, the way VMs launched via nova reach the nova-api metadata service (and get a proper response back from it) is the part of the core Neutron architecture that causes the most problems, simply due to a lack of understanding of the concepts involved.
Neutron proxies metadata requests to Nova adding HTTP headers which Nova uses to identify the source instance. Neutron actually uses two proxies to do this: a namespace proxy and a metadata agent. This post shows how a metadata request gets from an instance to the Nova metadata service via a namespace proxy running in a Neutron router.
Here both services openstack-nova-api && neutron-server are running on Controller 192.169.142.127.
[root@ip-192-169-142-127 ~(keystone_admin)]# systemctl | grep nova-api
openstack-nova-api.service loaded active running OpenStack Nova API Server
[root@ip-192-169-142-127 ~(keystone_admin)]# systemctl | grep neutron-server
neutron-server.service loaded active running OpenStack Neutron Server
[root@vf22rls ~]# ip -4 address show dev eth0
2: eth0: mtu 1400 qdisc fq_codel state UP group default qlen 1000
    inet 50.0.0.15/24 brd 50.0.0.255 scope global dynamic eth0
       valid_lft 85770sec preferred_lft 85770sec
[root@vf22rls ~]# ip route
default via 50.0.0.1 dev eth0 proto static metric 100
50.0.0.0/24 dev eth0 proto kernel scope link src 50.0.0.15 metric 100
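1. For context, the chain starts when the instance sends an HTTP request to the well-known metadata address 169.254.169.254; cloud-init does exactly this at boot, and it can be reproduced from inside the VM:
[root@vf22rls ~]# curl http://169.254.169.254/latest/meta-data/
which should list the available metadata keys (instance-id, local-ipv4 and so on).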
******************************************************************************
2. Namespace proxy receives the request. The default gateway 50.0.0.1 exists within a Neutron router namespace on the Network Node. The neutron-l3-agent started a namespace proxy in this namespace and added some iptables rules to redirect metadata requests to it. There are no special routes, so the request goes out via the default gateway; of course, a Neutron router needs to have an interface on the subnet.
*******************************************************************************
Network Node 192.169.142.147
**********************************
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
qdhcp-1bd1f3b8-8e4e-4193-8af0-023f0be4a0fb
qrouter-79801567-a0b5-4780-bfae-ac00e185a148
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qdhcp-1bd1f3b8-8e4e-4193-8af0-023f0be4a0fb route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 50.0.0.1 0.0.0.0 UG 0 0 0 tapd6da9bb8-0e
50.0.0.0 0.0.0.0 255.255.255.0 U 0 0 0 tapd6da9bb8-0e
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148 iptables-save | grep 9697
-A neutron-l3-agent-INPUT -p tcp -m tcp --dport 9697 -j DROP
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-79801567-a0b5-4780-bfae-ac00e185a148 netstat -anpt
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      3210/python2
The namespace proxy adds two HTTP headers to the request:
X-Forwarded-For: the instance's IP address
X-Neutron-Router-ID: the UUID of the Neutron router
and proxies it to a Unix domain socket named /var/lib/neutron/metadata_proxy.
***********************************************************************************
3. Metadata agent receives the request and queries the Neutron service. The metadata agent listens on this Unix socket. It is a normal Linux service that runs in the main operating system IP namespace, and so it is able to reach the Neutron and Nova metadata services. Its configuration file has all the information required to do so.
***********************************************************************************
It reads the X-Forwarded-For and X-Neutron-Router-ID headers in the request and queries the Neutron service to find the ID of the instance that created the request.
***********************************************************************************
4. Metadata agent proxies the request to the Nova metadata service. It adds these headers:
X-Instance-ID: the instance ID returned from Neutron
X-Instance-ID-Signature: the instance ID signed with the shared secret
X-Forwarded-For: the instance's IP address
and proxies the request to the Nova metadata service.
***********************************************************************************
5. Nova metadata service receives the request. The metadata service was started by nova-api. The handler checks the X-Instance-ID-Signature with the shared key, looks up the data and returns the response, which travels back via the two proxies to the instance.
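This hop can be reproduced by hand; a sketch, assuming the shared secret from /etc/neutron/metadata_agent.ini and a placeholder instance UUID (Nova's metadata API listens on port 8775 on the Controller):
# SECRET='value of metadata_proxy_shared_secret'
# INSTANCE_ID='<instance uuid>'
# SIG=$(echo -n "$INSTANCE_ID" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')
# curl -H "X-Instance-ID: $INSTANCE_ID" \
       -H "X-Instance-ID-Signature: $SIG" \
       -H "X-Forwarded-For: 50.0.0.15" \
       http://192.169.142.127:8775/latest/meta-data/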
************************************************************************************
******************************
Update nova.conf
******************************
vi /etc/nova/nova.conf
set "compute_driver = novadocker.virt.docker.DockerDriver"
************************************************
Next, create the docker.filters file:
************************************************
$ vi /etc/nova/rootwrap.d/docker.filters
Insert the lines:

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
*****************************************
Add the line below to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
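The docker image itself goes into glance with the usual nova-docker flow (restart openstack-glance-api after the config change above); a sketch, using the same image that is booted below:
# systemctl restart openstack-glance-api
# docker pull rastasheep/ubuntu-sshd:14.04
# . keystonerc_admin
# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create --is-public=True \
    --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04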
******************************************************************
Launch a new instance via the uploaded image :-
******************************************************************
# . keystonerc_demo
# nova boot --image "rastasheep/ubuntu-sshd:14.04" --flavor m1.tiny
--nic net-id=private-net-id UbuntuDocker
or do the same via the dashboard.
*****************************************************
Update /etc/rc.d/rc.local as follows before reboot :-
*****************************************************
[root@fedora21wks ~(keystone_admin)]# cat /etc/rc.d/rc.local
#!/bin/bash
chmod 666 /var/run/docker.sock ; systemctl restart openstack-nova-compute
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/01_start-sshd.sh...
No SSH host key available. Generating one...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenerated by Boris just in case !
SSHD started !
*** Running /etc/my_init.d/database.sh...
Derby database started !
*** Running /etc/my_init.d/run.sh...
Bad Network Configuration. DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000009: instance-00000009: unknown error
Waiting for domain1 to start ..............
Successfully started the domain : domain1
domain Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name> admin
Enter admin password for user "admin">
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:
admin:0f2HOP1vCiDd
Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false
Bad Network Configuration. DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000009: instance-00000009: unknown error
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c5c4594da13d boris/docker-glassfish41:latest "/sbin/my_init" 26 minutes ago Up 26 minutes nova-d751e04c-8f9b-4171-988a-cd57fb37574c
a58781eba98b tutum/tomcat:latest "/run.sh" 4 hours ago Up 4 hours nova-3024f190-8dbb-4faf-b2b0-e627d6faba97
cd1418845931 eugeneware/docker-wordpress-nginx:latest "/bin/bash /start.sh 5 hours ago Up 5 hours nova-c0211200-eee9-431e-aa64-db5cdcadad66
700fe66add76 rastasheep/ubuntu-sshd:14.04 "/usr/sbin/sshd -D" 7 hours ago Up 7 hours nova-9d0ebc1d-5bfa-44d7-990d-957d7fec5ea2
Following below is a brief instruction for a two node deployment test
(Controller&&Network + Compute Node) for RDO Kilo, performed on a Fedora 21 host with
KVM/Libvirt Hypervisor. Two VMs (4 GB RAM, 2 VCPUS) have been set up:
the Controller&&Network VM with two VNICs (management subnet, VTEP's subnet)
and the Compute Node VM with two VNICS (management and VTEP's subnets).
The management network is finally converted into the public one. SELINUX should be set to permissive mode
(unlike packstack deployments on CentOS 7.1).
*********************************
Two Libvirt networks created
*********************************
# cat openstackvms.xml
<network>
<name>openstackvms</name>
<uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr2' stp='on' delay='0' />
<mac address='52:54:00:60:f8:6d'/>
<ip address='192.169.142.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.169.142.2' end='192.169.142.254' />
</dhcp>
</ip>
</network>
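The network is defined and activated on the host as follows:
# virsh net-define openstackvms.xml
# virsh net-start openstackvms
# virsh net-autostart openstackvms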
**********************************************************************
Libvirt's default network 192.168.122.0/24 was used as the VTEPs' subnet
**********************************************************************
Follow https://www.rdoproject.org/Quickstart until packstack startup.
You might have to switch to rdo-testing.repo manually (/etc/yum.repos.d)
by just updating "enabled=1 or 0" in the corresponding *.repo files. In any case make
sure that the release and testing repos are in the expected state,
to avoid unpredictable consequences (see the sketch below).
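A sketch of the switch, assuming the stock file names rdo-release.repo and rdo-testing.repo (check which sections each file actually contains before flipping the flags):
# cd /etc/yum.repos.d
# sed -i 's/^enabled=1/enabled=0/' rdo-release.repo
# sed -i 's/^enabled=0/enabled=1/' rdo-testing.repo
# yum clean all ; yum repolist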
******************
Then run :-
******************
# packstack --answer-file=./answerTwoNode.txt
**********************************************************************************
Upon packstack completion, create the following files on the Controller Node,
designed to convert the mgmt network into the external one (a sketch follows).
**********************************************************************************
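Same pattern as the br-ex/eth2 sketch in the three node section above, but with the management addressing; a sketch, assuming eth0 is the management NIC and br-ex simply takes over the node's own management IP (192.169.142.127 here as a placeholder), with virbr2 192.169.142.1 as the gateway:

# cat /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.169.142.127
NETMASK=255.255.255.0
GATEWAY=192.169.142.1
DNS1=8.8.8.8
ONBOOT=yes
NM_CONTROLLED=no

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no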
Following below is a brief instruction for a traditional three node deployment test (Controller && Network && Compute) for the oncoming RDO Kilo, performed on a Fedora 21 host with KVM/Libvirt Hypervisor
(16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P ). Three VMs (4 GB RAM, 2 VCPUS) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICS (management, VTEP's and external subnets), and the Compute Node VM with two VNICS (management and VTEP's subnets).
SELINUX stays in enforcing mode.
I avoid using the default libvirt subnet 192.168.122.0/24 for any purpose related to the VMs serving as RDO Kilo nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.
[root@junoJVC01 ~]# virsh net-list
Name State Autostart Persistent
--------------------------------------------------------------------------
default active yes yes
openstackvms active yes yes
public active yes yes
vteps active yes yes
*********************************************************************************
1. First Libvirt subnet "openstackvms" serves as management network.
All three VMs are attached to this subnet.
**********************************************************************************
2. Second Libvirt subnet "public" serves to simulate the external network. The Network Node is attached to "public"; later on its "eth3" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. Via the bridge virbr3 (172.24.4.225) this Libvirt subnet provides the VMs running on the Compute Node with access to the Internet, because it matches the external network 172.24.4.224/28 created by the packstack installation.
*************************************************
On Hypervisor Host ( Fedora 21)
*************************************************
[root@junoJVC01 ~] # iptables -S -t nat
. . . . . .
-A POSTROUTING -s 172.24.4.224/28 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -j MASQUERADE
. . . . . .
[root@junoJVC01 ~]# virsh net-info public
Name: public
UUID: d0e9965b-f92c-40c1-b749-b609aed42cf2
Active: yes
Persistent: yes
Autostart: yes
Bridge: virbr3
***********************************************************************************
3. Third Libvirt subnet "vteps" serves for VTEPs endpoint simulation. Network and Compute Node VMs are attached to this subnet.
***********************************************************************************
**************************************
At this point run on Controller:-
**************************************
Keep SELINUX=enforcing ( RDO Kilo is supposed to handle this)
**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer (same pattern as the br-ex/eth2 sketch shown earlier).
**********************************************************************************
*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
The OVS port should be eth2 (the third Ethernet interface on the Network Node). The Libvirt bridge virbr2 in a real deployment would be your router to the external network. The OVS bridge br-ex should have an IP belonging to the external network.
In case CONFIG_KEYSTONE_SERVICE_NAME=httpd on Controller :-
[root@ip-192-169-142-147 ~(keystone_admin)]# ovs-vsctl show
d9a60201-a2c2-4c6a-ad9d-63cc2ae296b3
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth3"
Interface "eth3"
Port br-ex
Interface br-ex
type: internal
Port "eth2"
Interface "eth2"
Port "qg-d433fa46-e2"
Interface "qg-d433fa46-e2"
type: internal
Bridge br-tun
fail_mode: secure
Port "vxlan-0a000089"
Interface "vxlan-0a000089"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.0.0.147", out_key=flow, remote_ip="10.0.0.137"}
Port br-tun
Interface br-tun
type: internal
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port br-int
Interface br-int
type: internal
Port "tap70da94fb-c1"
tag: 1
Interface "tap70da94fb-c1"
type: internal
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port "qr-0737c492-f6"
tag: 1
Interface "qr-0737c492-f6"
type: internal
ovs_version: "2.3.1"
**********************************************************
Following bellow is Network Node status verification
**********************************************************
[root@ip-192-169-142-147 ~(keystone_admin)]# openstack-status
== neutron services ==
neutron-server: inactive (disabled on boot)
neutron-dhcp-agent: active
neutron-l3-agent: active
neutron-metadata-agent: active
neutron-openvswitch-agent: active
== Support services ==
libvirtd: active
openvswitch: active
dbus: active
[root@ip-192-169-142-147 ~(keystone_admin)]# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+------------------------------------------------------+
| 7ecdfc27-57cf-410d-9a76-8e9eb76582cb | public | 5fc0118a-f710-448d-af67-17dbfe01d5fc 172.24.4.224/28 |
| 98dd1928-96e8-47fb-a2fe-49292ae092ba | demo_net | ba2cded7-5546-4a64-aa49-7ef4d077dee3 50.0.0.0/24 |
+--------------------------------------+----------+------------------------------------------------------+
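For reference, a tenant network and router like the ones listed above can be created with the Kilo-era CLI (the names are placeholders; "public" is the external network created by packstack):
# . keystonerc_demo
# neutron net-create demo_net
# neutron subnet-create demo_net 50.0.0.0/24 --name demo_subnet --dns-nameserver 8.8.8.8
# neutron router-create RouterDemo
# neutron router-gateway-set RouterDemo public
# neutron router-interface-add RouterDemo demo_subnet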
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns
qdhcp-98dd1928-96e8-47fb-a2fe-49292ae092ba
qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 172.24.4.231/32 -j DNAT --to-destination 50.0.0.14
-A neutron-l3-agent-OUTPUT -d 172.24.4.235/32 -j DNAT --to-destination 50.0.0.18
-A neutron-l3-agent-OUTPUT -d 172.24.4.228/32 -j DNAT --to-destination 50.0.0.19
-A neutron-l3-agent-POSTROUTING ! -i qg-d433fa46-e2 ! -o qg-d433fa46-e2 -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-PREROUTING -d 172.24.4.231/32 -j DNAT --to-destination 50.0.0.14
-A neutron-l3-agent-PREROUTING -d 172.24.4.235/32 -j DNAT --to-destination 50.0.0.18
-A neutron-l3-agent-PREROUTING -d 172.24.4.228/32 -j DNAT --to-destination 50.0.0.19
-A neutron-l3-agent-float-snat -s 50.0.0.14/32 -j SNAT --to-source 172.24.4.231
-A neutron-l3-agent-float-snat -s 50.0.0.18/32 -j SNAT --to-source 172.24.4.235
-A neutron-l3-agent-float-snat -s 50.0.0.19/32 -j SNAT --to-source 172.24.4.228
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-d433fa46-e2 -j SNAT --to-source 172.24.4.229
-A neutron-l3-agent-snat -m mark ! --mark 0x2 -m conntrack --ctstate DNAT -j SNAT --to-source 172.24.4.229
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat
[root@ip-192-169-142-147 ~(keystone_admin)]# ip netns exec qrouter-d63ca3f3-5b71-4540-bb5c-01b44ce3081b netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9697 0.0.0.0:* LISTEN 3525/python2
[root@ip-192-169-142-137 ~]# ovs-vsctl show
a0cb406e-b028-4b09-8849-e6e2869ab051
Bridge br-tun
fail_mode: secure
Port "vxlan-0a000093"
Interface "vxlan-0a000093"
type: vxlan
options: {df_default="true", in_key=flow, local_ip="10.0.0.137", out_key=flow, remote_ip="10.0.0.147"}
Port patch-int
Interface patch-int
type: patch
options: {peer=patch-tun}
Port br-tun
Interface br-tun
type: internal
Bridge br-int
fail_mode: secure
Port "qvoe90ef79b-80"
tag: 1
Interface "qvoe90ef79b-80"
Port br-int
Interface br-int
type: internal
Port "qvobf1c441c-ad"
tag: 1
Interface "qvobf1c441c-ad"
Port patch-tun
Interface patch-tun
type: patch
options: {peer=patch-int}
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Port "qvo6597428d-5b"
tag: 1
Interface "qvo6597428d-5b"
Bridge br-ex
Port br-ex
Interface br-ex
type: internal
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
ovs_version: "2.3.1"
[root@ip-192-169-142-137 ~]# brctl show
bridge name bridge id STP enabled interfaces
qbr6597428d-5b 8000.1a483dd02cee no qvb6597428d-5b
tap6597428d-5b
qbrbf1c441c-ad 8000.ca2f911ff649 no qvbbf1c441c-ad
qbre90ef79b-80 8000.16342824f4ba no qvbe90ef79b-80
tape90ef79b-80
**************************************************
Controller Node status verification
**************************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: active
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== Horizon service ==
openstack-dashboard: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: inactive (disabled on boot)
neutron-l3-agent: inactive (disabled on boot)
neutron-metadata-agent: inactive (disabled on boot)
== Swift services ==
openstack-swift-proxy: active
openstack-swift-account: active
openstack-swift-container: active
openstack-swift-object: active
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
openstack-cinder-backup: active
== Ceilometer services ==
openstack-ceilometer-api: active
openstack-ceilometer-central: active
openstack-ceilometer-compute: inactive (disabled on boot)
openstack-ceilometer-collector: active
openstack-ceilometer-alarm-notifier: active
openstack-ceilometer-alarm-evaluator: active
openstack-ceilometer-notification: active
== Support services ==
mysqld: inactive (disabled on boot)
libvirtd: active
dbus: active
target: active
rabbitmq-server: active
memcached: active
== Keystone users ==
/usr/lib/python2.7/site-packages/keystoneclient/shell.py:65: DeprecationWarning: The keystone CLI is deprecated in favor of python-openstackclient. For a Python library, continue using python-keystoneclient.
'python-keystoneclient.', DeprecationWarning)
+----------------------------------+------------+---------+----------------------+
| id | name | enabled | email |
+----------------------------------+------------+---------+----------------------+
| 4e1008fd31944fecbb18cdc215af23ec | admin | True | root@localhost |
| 621b84dd4b904760b8aa0cc7b897c95c | ceilometer | True | ceilometer@localhost |
| 4d6cdea3b7bc49948890457808c0f6f8 | cinder | True | cinder@localhost |
| 8393bb4de49a44b798af8b118b9f0eb6 | demo | True | |
| f9be6eaa789e4b3c8771372fffb00230 | glance | True | glance@localhost |
| a518b95a92044ad9a4b04f0be90e385f | neutron | True | neutron@localhost |
| 40dddef540fb4fa5a69fb7baa03de657 | nova | True | nova@localhost |
| 5fbb2b97ab9d4192a3f38f090e54ffb1 | swift | True | swift@localhost |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+--------------+-------------+------------------+-----------+--------+
| 1b4a6b08-d63c-4d8d-91da-16f6ba177009 | cirros | qcow2 | bare | 13200896 | active |
| cb05124d-0d30-43a7-a033-0b7ff0ea1d47 | Fedor21image | qcow2 | bare | 158443520 | active |