Quoting http://blog.oddbit.com/2014/01/14/direct-access-to-nova-metadata/
In an environment running Neutron, a request from your instance must traverse a number of steps:
From the instance to a router,
Through a NAT rule in the router namespace,
To an instance of the neutron-ns-metadata-proxy,
To the actual Nova metadata service
When there are problems accessing the metadata, it can be helpful to verify that the metadata service itself is configured correctly and returning meaningful information.
End of quote. The checks below reproduce this on the controller of a Two Node Neutron GRE+OVS+Gluster Fedora 20 cluster.
[root@dallas1 ~(keystone_admin)]$ ip netns list
qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f
Check the NAT rules in the cloud controller's router namespace; traffic to 169.254.169.254 on port 80 should be redirected to port 8700 on the host:
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169
REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http redir ports 8700
Check the routing table inside the router namespace:
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r
default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34 proto kernel scope link src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d proto kernel scope link src 192.168.1.100
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 10.0.0.3:53 0.0.0.0:* LISTEN
tcp6 0 0 fe80::f816:3eff:feef:53 :::* LISTEN
udp 0 0 10.0.0.3:53 0.0.0.0:*
udp 0 0 0.0.0.0:67 0.0.0.0:*
udp6 0 0 fe80::f816:3eff:feef:53 :::*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT
[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN 2830/python
[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
nova 2830 1 0 09:41 ? 00:00:57 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2856 2830 0 09:41 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2874 2830 0 09:41 ? 00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2875 2830 0 09:41 ? 00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
With the checks done, follow http://blog.oddbit.com/2014/01/14/direct-access-to-nova-metadata/
[root@dallas1 ~]# grep shared_secret /etc/nova/nova.conf
neutron_metadata_proxy_shared_secret = fedora
[root@dallas1 ~]# . keystonerc_boris
[root@dallas1 ~(keystone_boris)]$ nova list
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
| 8543e339-724c-438e-80be-8259906ccf6d | UbuntuTRS005 | ACTIVE | None | Running | int=10.0.0.6, 192.168.1.116 |
| 8bb32603-c27b-4665-a025-859f1a5bc04e | UbuntuTRS031 | SUSPENDED | None | Shutdown | int=10.0.0.5, 192.168.1.113 |
| 177ab5b8-c86b-44d8-aa50-b4b09cc46274 | VF20RS007 | SUSPENDED | None | Shutdown | int=10.0.0.4, 192.168.1.112 |
| a34ece35-afd2-466e-b591-93b269c8e41a | VF20RS017 | ACTIVE | None | Running | int=10.0.0.7, 192.168.1.114 |
| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037 | ACTIVE | None | Running | int=10.0.0.2, 192.168.1.115 |
+--------------------------------------+--------------+-----------+------------+-------------+-----------------------------+
[root@dallas1 ~(keystone_boris)]$ python
Python 2.7.5 (default, Feb 19 2014, 13:47:28)
[GCC 4.8.2 20131212 (Red Hat 4.8.2-7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import hmac
>>> import hashlib
>>> hmac.new('fedora','8543e339-724c-438e-80be-8259906ccf6d',hashlib.sha256).hexdigest()
'c31469feb2b865d76285612331d009bf2b1109674bf4cb745954f1e482c62e7f'
>>>
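The interactive session above can be wrapped in a small helper. This is a sketch: the shared secret `fedora` and the instance UUID are taken from the environment shown above, and the function name `sign_instance_id` is just illustrative.

```python
import hashlib
import hmac

def sign_instance_id(shared_secret, instance_id):
    """Compute the x-instance-id-signature header value:
    HMAC-SHA256 of the instance UUID, keyed with
    neutron_metadata_proxy_shared_secret from nova.conf."""
    return hmac.new(shared_secret.encode('utf-8'),
                    instance_id.encode('utf-8'),
                    hashlib.sha256).hexdigest()

# Matches the digest obtained in the interactive session above
print(sign_instance_id('fedora', '8543e339-724c-438e-80be-8259906ccf6d'))
```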
# exit
# . keystonerc_admin
[root@dallas1 ~(keystone_admin)]$ keystone tenant-list
+----------------------------------+----------+---------+
| id | name | enabled |
+----------------------------------+----------+---------+
| 28d7e48acf74466e84fbb3cbd53c1ccb | admin | True |
| e896be65e94a4893b870bc29ba86d7eb | ostenant | True |
| 2c28cccb99fd4939a5af03548089ab07 | services | True |
+----------------------------------+----------+---------+
exit
# sudo su -
[root@dallas1 ~]# . keystonerc_boris
[root@dallas1 ~(keystone_boris)]$ curl \
-H 'x-instance-id: 8543e339-724c-438e-80be-8259906ccf6d' \
-H 'x-tenant-id: e896be65e94a4893b870bc29ba86d7eb' \
-H 'x-instance-id-signature: c31469feb2b865d76285612331d009bf2b1109674bf4cb745954f1e482c62e7f' \
http://localhost:8700/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
hostname
instance-action
instance-id
instance-type
kernel-id
local-hostname
local-ipv4
placement/
public-hostname
public-ipv4
public-keys/
ramdisk-id
reservation-id
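The curl invocation above can also be expressed in Python, computing the signature and sending the three headers in one step. This is a sketch assuming the same shared secret, instance ID, tenant ID, and metadata endpoint (localhost:8700) shown in the transcript; run it on the controller itself.

```python
import hashlib
import hmac
try:
    from urllib.request import Request, urlopen  # Python 3
except ImportError:
    from urllib2 import Request, urlopen         # Python 2

SECRET = 'fedora'  # neutron_metadata_proxy_shared_secret from nova.conf
INSTANCE_ID = '8543e339-724c-438e-80be-8259906ccf6d'
TENANT_ID = 'e896be65e94a4893b870bc29ba86d7eb'

# Sign the instance UUID with the shared secret, as the metadata proxy would
signature = hmac.new(SECRET.encode('utf-8'),
                     INSTANCE_ID.encode('utf-8'),
                     hashlib.sha256).hexdigest()

req = Request('http://localhost:8700/latest/meta-data',
              headers={'x-instance-id': INSTANCE_ID,
                       'x-tenant-id': TENANT_ID,
                       'x-instance-id-signature': signature})

# Uncomment on the controller to query the metadata service:
# print(urlopen(req).read().decode())
```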
Snapshots with different VMs involved :-