OpenStack Networking concepts
The OpenStack Networking components are deployed on the Controller, Compute, and Network nodes in the following configuration:
Controller node: may host the Neutron server service, which provides the networking API and communicates with and tracks the agents.
DHCP agent: spawns and controls dnsmasq processes to provide leases to instances. This agent also spawns neutron-ns-metadata-proxy processes as part of the metadata system.
Metadata agent: Provides a metadata proxy to the nova-api-metadata service. The neutron-ns-metadata-proxy processes direct the traffic they receive in their namespaces to the metadata agent.
OVS plugin agent: Controls OVS network bridges and routes between them via patch, tunnel, or tap without requiring an external OpenFlow controller.
L3 agent: performs L3 forwarding and NAT.
Otherwise, a separate Network node hosts the Neutron server and all of the services mentioned above.
Compute node: runs an OVS plugin agent and the openstack-nova-compute service.
Namespaces (see also Identifying and Troubleshooting Neutron Namespaces)
For each network you create, the Network node (or Controller node, if combined) will have a unique network namespace (netns) created by the DHCP and Metadata agents. The netns hosts an interface and IP addresses for dnsmasq and the neutron-ns-metadata-proxy. You can view the namespaces with the `ip netns list` command, and interact with them with the `ip netns exec <namespace> <command>` command. As mentioned in Direct_access_to_Nova_metadata, in an environment running Neutron, a request from your instance must traverse a number of steps:
1. From the instance to a router,
2. Through a NAT rule in the router namespace,
3. To an instance of the neutron-ns-metadata-proxy,
4. To the actual Nova metadata service.
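Along the way each hop annotates the request: the neutron-ns-metadata-proxy adds X-Neutron-Router-ID and X-Forwarded-For headers, and the metadata agent resolves the instance behind the request and adds X-Instance-ID plus an HMAC-SHA256 X-Instance-ID-Signature computed from metadata_proxy_shared_secret before handing the request to nova-api. A minimal sketch of recomputing such a signature by hand, assuming a hypothetical shared secret "gamma" (not taken from this setup):
# instance UUID as X-Instance-ID, "gamma" as the hypothetical shared secret
$ echo -n "10142280-44a2-4830-acce-f12f3849cb32" | openssl dgst -sha256 -hmac "gamma"
Nova recomputes the same HMAC on its side and rejects the request if the signatures do not match.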
Reproducing Direct_access_to_Nova_metadata, I was able to get the list of available EC2 metadata, but not their values. However, my major concern was getting the values of the metadata described in that post, and also those under the /openstack location. The latter seem to me no less important than the entries in the EC2 list, and not all of the /openstack metadata are exposed through the EC2 list.
The commands run below verify that the Nova and Neutron setup has been performed successfully; otherwise, one of the four steps above will fail and force you to analyse the corresponding log files (see References). It doesn't matter whether you set up the RDO Havana cloud environment manually or via packstack.
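Before inspecting namespaces one by one, a quick end-to-end smoke test can be run from inside any instance; a minimal sketch (it only checks that the whole chain of steps 1-4 answers):
# run inside a guest; an HTTP code of 200 means the full metadata path works
$ curl -s -o /dev/null -w "%{http_code}\n" http://169.254.169.254/latest/meta-data/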
Run on Controller Node :-
[root@dallas1 ~(keystone_admin)]$ ip netns list
qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d
qdhcp-166d9651-d299-47df-a5a1-b368e87b612f
Check the NAT rules in the cloud controller's router namespace; they should show that port 80 traffic for 169.254.169.254 is redirected to port 8700:
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d iptables -L -t nat | grep 169
REDIRECT tcp -- anywhere 169.254.169.254 tcp dpt:http redir ports 8700
Check routing table inside the router namespace:
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip r
default via 192.168.1.1 dev qg-8fbb6202-3d
10.0.0.0/24 dev qr-2dd1ba70-34 proto kernel scope link src 10.0.0.1
192.168.1.0/24 dev qg-8fbb6202-3d proto kernel scope link src 192.168.1.100
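Here qg-8fbb6202-3d is the router's gateway port on the external network and qr-2dd1ba70-34 is its port on the tenant subnet. To see the interfaces behind this routing table, a sketch (interface names and IDs will differ per deployment):
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d ip a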
[root@dallas1 ~(keystone_admin)]$ ip netns exec qrouter-cb80b040-f13f-4a67-a39e-353b1c873a0d netstat -na
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
[root@dallas1 ~(keystone_admin)]$ ip netns exec qdhcp-166d9651-d299-47df-a5a1-b368e87b612f netstat -na
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 10.0.0.3:53 0.0.0.0:* LISTEN
tcp6 0 0 fe80::f816:3eff:feef:53 :::* LISTEN
udp 0 0 10.0.0.3:53 0.0.0.0:*
udp 0 0 0.0.0.0:67 0.0.0.0:*
udp6 0 0 fe80::f816:3eff:feef:53 :::*
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
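The listeners above belong to dnsmasq: port 53 (TCP and UDP) serves DNS and port 67/UDP serves DHCP for the tenant subnet. To confirm which dnsmasq process owns this namespace, a sketch matching on the network ID (the DHCP agent embeds it in the dnsmasq config paths):
[root@dallas1 ~(keystone_admin)]$ ps -ef | grep dnsmasq | grep 166d9651-d299-47df-a5a1-b368e87b612f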
[root@dallas1 ~(keystone_admin)]$ iptables-save | grep 8700
-A INPUT -p tcp -m multiport --dports 8700 -m comment --comment "001 metadata incoming" -j ACCEPT
[root@dallas1 ~(keystone_admin)]$ netstat -lntp | grep 8700
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN 2830/python
[root@dallas1 ~(keystone_admin)]$ ps -ef | grep 2830
nova 2830 1 0 09:41 ? 00:00:57 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2856 2830 0 09:41 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2874 2830 0 09:41 ? 00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2875 2830 0 09:41 ? 00:00:01 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
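Port 8700 is not a stock default (nova-api normally listens on 8775 and the namespace proxies on 9697), so it has to be set consistently across the Nova and Neutron configs. A sketch of the relevant Havana-era options, with placeholder values assumed rather than copied from this setup:
# /etc/nova/nova.conf -- metadata listener and proxy trust
metadata_listen_port=8700
service_neutron_metadata_proxy=True
neutron_metadata_proxy_shared_secret=<secret>
# /etc/neutron/metadata_agent.ini -- where the metadata agent forwards requests
nova_metadata_ip=<controller-ip>
nova_metadata_port=8700
metadata_proxy_shared_secret=<secret>
# /etc/neutron/l3_agent.ini -- port the per-router neutron-ns-metadata-proxy listens on
metadata_port = 8700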
On another cluster :-
[root@dfw02 ~(keystone_admin)]$ ip netns list
qrouter-86b3008c-297f-4301-9bdc-766b839785f1
qrouter-bf360d81-79fb-4636-8241-0a843f228fc8
qdhcp-426bb226-0ab9-440d-ba14-05634a17fb2b
qdhcp-1eea88bb-4952-4aa4-9148-18b61c22d5b7
[root@dfw02 ~(keystone_admin)]$ netstat -lntp | grep 8700
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN 2746/python
[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 2746
nova 2746 1 0 08:57 ? 00:02:31 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2830 2746 0 08:57 ? 00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2851 2746 0 08:57 ? 00:00:10 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova 2858 2746 0 08:57 ? 00:00:02 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
root 9976 11489 0 16:31 pts/3 00:00:00 grep --color=auto 2746
Inside the router namespaces, the output looks like this :-
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-86b3008c-297f-4301-9bdc-766b839785f1 netstat -lntp | grep 8700
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN 4946/python
[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4946
root 4946 1 0 08:58 ? 00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/86b3008c-297f-4301-9bdc-766b839785f1.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=86b3008c-297f-4301-9bdc-766b839785f1 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-86b3008c-297f-4301-9bdc-766b839785f1.log --log-dir=/var/log/neutron
root 10396 11489 0 16:33 pts/3 00:00:00 grep --color=auto 4946
[root@dfw02 ~(keystone_admin)]$ ip netns exec qrouter-bf360d81-79fb-4636-8241-0a843f228fc8 netstat -lntp | grep 8700
tcp 0 0 0.0.0.0:8700 0.0.0.0:* LISTEN 4746/python
[root@dfw02 ~(keystone_admin)]$ ps -ef | grep 4746
root 4746 1 0 08:58 ? 00:00:00 /usr/bin/python /bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/bf360d81-79fb-4636-8241-0a843f228fc8.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=bf360d81-79fb-4636-8241-0a843f228fc8 --state_path=/var/lib/neutron --metadata_port=8700 --verbose --log-file=neutron-ns-metadata-proxy-bf360d81-79fb-4636-8241-0a843f228fc8.log --log-dir=/var/log/neutron
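Note the --metadata_proxy_socket flag above: each neutron-ns-metadata-proxy forwards the requests it receives over that Unix domain socket to the neutron-metadata-agent on the host, which in turn relays them to nova-api on port 8700. A quick sketch to confirm the socket exists and has a listener:
[root@dfw02 ~(keystone_admin)]$ ls -l /var/lib/neutron/metadata_proxy
[root@dfw02 ~(keystone_admin)]$ netstat -lxp | grep metadata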
1. At this point you should be able, inside any running Havana instance, to point a browser (the text-mode "links" at least, if no lightweight X environment is available) at
http://169.254.169.254/openstack/latest (not EC2)
The response will be : meta_data.json password vendor_data.json
What is cURL : http://curl.haxx.se/docs/faq.html#What_is_cURL
Now you should be able to run on the F20 instance :-
[root@vf20rs0404 ~] # curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1286 100 1286 0 0 1109 0 0:00:01 0:00:01 --:--:-- 1127
. . . . . . . .
"uuid": "10142280-44a2-4830-acce-f12f3849cb32",
"availability_zone": "nova",
"hostname": "vf20rs0404.novalocal",
"launch_index": 0,
"public_keys": {"key2": "ssh-rsa . . . . . Generated by Nova\n"},
"name": "VF20RS0404"
On another instance (in my case Ubuntu 14.04) :-
root@ubuntutrs0407:~# curl http://169.254.169.254/openstack/latest/meta_data.json | tee meta_data.json
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1292 100 1292 0 0 444 0 0:00:02 0:00:02 --:--:-- 446
{"random_seed": "...",
"uuid": "8c79e60c-4f1d-44e5-8446-b42b4d94c4fc",
"availability_zone": "nova",
"hostname": "ubuntutrs0407.novalocal",
"launch_index": 0,
"public_keys": {"key2": "ssh-rsa .... Generated by Nova\n"},
"name": "UbuntuTRS0407"}
Running VMs on Compute node:-
[root@dallas1 ~(keystone_boris)]$ nova list
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
| d0f947b1-ff6a-4ff0-b858-b63a3d07cca3 | UbuntuTRS0405 | SUSPENDED | None | Shutdown | int=10.0.0.7, 192.168.1.106 |
| 8c79e60c-4f1d-44e5-8446-b42b4d94c4fc | UbuntuTRS0407 | ACTIVE | None | Running | int=10.0.0.6, 192.168.1.107 |
| 8775924c-dbbd-4fbb-afb8-7e38d9ac7615 | VF20RS037 | SUSPENDED | None | Shutdown | int=10.0.0.2, 192.168.1.115 |
| d22a2376-33da-4a0e-a066-d334bd2e511d | VF20RS0402 | SUSPENDED | None | Shutdown | int=10.0.0.4, 192.168.1.103 |
| 10142280-44a2-4830-acce-f12f3849cb32 | VF20RS0404 | ACTIVE | None | Running | int=10.0.0.5, 192.168.1.105 |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------+
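The "uuid" values returned by meta_data.json match the IDs in this listing, which gives a simple cross-check; a sketch:
[root@dallas1 ~(keystone_boris)]$ nova list | grep 10142280-44a2-4830-acce-f12f3849cb32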
Launching a browser to http://169.254.169.254/openstack/latest/meta_data.json on another two-node Neutron GRE+OVS F20 cluster sends the output directly to the browser.
2. I have provided some information about the OpenStack metadata API, which is available at
/openstack
, but if you are concerned about the EC2 metadata API, the browser should be launched to
http://169.254.169.254/latest/meta-data/
which allows you to get any of the displayed parameters.
For instance, in the browser or via the CLI :-
ubuntu@ubuntutrs0407:~$ curl http://169.254.169.254/latest/meta-data/instance-id
i-000000a4
ubuntu@ubuntutrs0407:~$ curl http://169.254.169.254/latest/meta-data/public-hostname
ubuntutrs0407.novalocal
ubuntu@ubuntutrs0407:~$ curl http://169.254.169.254/latest/meta-data/public-ipv4
192.168.1.107
To verify the instance-id, launch virt-manager connected to the Compute node; it shows the same value, "000000a4".
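The EC2 tree can also be walked in one pass; a minimal sketch that dumps the top-level keys and their values (entries ending in "/" are subtrees and would need recursion):
ubuntu@ubuntutrs0407:~$ for key in $(curl -s http://169.254.169.254/latest/meta-data/); do
> echo "$key: $(curl -s http://169.254.169.254/latest/meta-data/$key)"; done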
Another option in text mode is the "links" browser :-
$ ssh -l ubuntu -i key2.pem 192.168.1.109
Inside the Ubuntu 14.04 instance :-
# apt-get -y install links
# links
Press ESC to get to menu:-
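links also accepts a URL directly on the command line, so the metadata can be fetched without navigating the menu; a sketch:
# links http://169.254.169.254/openstack/latest/meta_data.json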
References