Sunday, January 25, 2015

Set up Two Node RDO Juno ML2&OVS&VXLAN Cluster running Docker Hypervisor on Controller and KVM on Compute (CentOS 7, Fedora 21)

****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280
download systemd-218-3.fc22.src.rpm, build the 218-3 rpms and upgrade systemd.

First, install the packages required for rpmbuild :-

 $ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
    dbus-devel docbook-style-xsl elfutils-devel  \
    glib2-devel  gnutls-devel  gobject-introspection-devel \
    gperf     gtk-doc intltool kmod-devel libacl-devel \
    libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
    libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
    libselinux-devel libtool pam-devel python3-devel python3-lxml \
    qrencode-devel  python2-devel  xz-devel

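Before the build, the src.rpm has to be installed to populate the ~/rpmbuild tree; a minimal sketch, assuming the file was downloaded to the current directory:

 $ rpm -ivh systemd-218-3.fc22.src.rpm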
Second:-

 $ cd rpmbuild/SPECS
 $ rpmbuild -bb systemd.spec
 $ cd ../RPMS/x86_64

Third:-

$ sudo yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm

.  .  .  .  .  .  .  .  .  .

Dependencies Resolved

=================================================================================================
 Package                  Arch    Version      Repository                                   Size
=================================================================================================
Installing:
 libgudev1-devel          x86_64  218-3.fc21   /libgudev1-devel-218-3.fc21.x86_64          281 k
 systemd-debuginfo        x86_64  218-3.fc21   /systemd-debuginfo-218-3.fc21.x86_64         69 M
 systemd-journal-gateway  x86_64  218-3.fc21   /systemd-journal-gateway-218-3.fc21.x86_64  571 k
Updating:
 libgudev1                x86_64  218-3.fc21   /libgudev1-218-3.fc21.x86_64                 51 k
 systemd                  x86_64  218-3.fc21   /systemd-218-3.fc21.x86_64                   22 M
 systemd-compat-libs      x86_64  218-3.fc21   /systemd-compat-libs-218-3.fc21.x86_64      237 k
 systemd-devel            x86_64  218-3.fc21   /systemd-devel-218-3.fc21.x86_64            349 k
 systemd-libs             x86_64  218-3.fc21   /systemd-libs-218-3.fc21.x86_64             1.0 M
 systemd-python           x86_64  218-3.fc21   /systemd-python-218-3.fc21.x86_64           185 k
 systemd-python3          x86_64  218-3.fc21   /systemd-python3-218-3.fc21.x86_64          191 k

Transaction Summary
=================================================================================================
Install  3 Packages
Upgrade  7 Packages

Total size: 94 M
Is this ok [y/d/N]: y
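After the transaction completes (and a reboot), the running version can be verified; the expected output below is inferred from the package names above:

# rpm -q systemd
systemd-218-3.fc21.x86_64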

  See also  https://ask.openstack.org/en/question/59789/attempt-to-install-nova-docker-driver-on-fedora-21/
*************************************************************************************** 
As a final result of the configuration below, the Juno dashboard will automatically spawn, launch and run Nova-Docker containers on the Controller, while usual Nova instances run on the KVM hypervisor (libvirt driver) on the Compute Node.

Set up initial configuration via RDO Juno packstack run

- Controller node: Nova, Keystone, Cinder, Glance, Neutron (using Open vSwitch plugin && VXLAN )
- Compute node: Nova (nova-compute), Neutron (openvswitch-agent)


juno1dev.localdomain   -  Controller (192.168.1.127)
juno2dev.localdomain   -  Compute   (192.168.1.137)

Management && Public network is 192.168.1.0/24
VXLAN tunnel endpoints are 192.168.0.127 and 192.168.0.137


Answer File :-

[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.168.1.127
CONFIG_COMPUTE_HOSTS=192.168.1.137
CONFIG_NETWORK_HOSTS=192.168.1.127
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.168.1.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.168.1.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.168.1.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789

CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.168.1.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4
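Once the answer file is saved (here as answer-2node.txt; the file name is arbitrary), run the deployment from the Controller:

# packstack --answer-file=./answer-2node.txt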

Updates on the Controller only :-
[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="192.168.1.127"
NETMASK="255.255.255.0"
DNS1="83.221.202.254"
BROADCAST="192.168.1.255"
GATEWAY="192.168.1.1"
NM_CONTROLLED="no"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no
ONBOOT="yes"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex

DEVICETYPE="ovs"

[root@juno1 network-scripts(keystone_admin)]# cat ifcfg-enp2s0
DEVICE="enp2s0"
# HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

************************
On Controller :-
************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# service network restart
# reboot
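After the reboot it is worth confirming that enp2s0 was actually attached to br-ex as an OVS port (a sanity check, not part of the original run):

# ovs-vsctl show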

[root@juno1dev ~(keystone_admin)]# ifconfig

br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.127  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
        ether 00:22:15:63:e4:e2  txqueuelen 0  (Ethernet)
        RX packets 516087  bytes 305856360 (291.6 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 474282  bytes 62485754 (59.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80::222:15ff:fe63:e4e2  prefixlen 64  scopeid 0x20<link>
        ether 00:22:15:63:e4:e2  txqueuelen 1000  (Ethernet)
        RX packets 1121900  bytes 1194013198 (1.1 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 768667  bytes 82497428 (78.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 17

enp5s1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.127  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::2e0:53ff:fe13:174c  prefixlen 64  scopeid 0x20<link>
        ether 00:e0:53:13:17:4c  txqueuelen 1000  (Ethernet)
        RX packets 376087  bytes 49012215 (46.7 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1136402  bytes 944635587 (900.8 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 1381792  bytes 250829475 (239.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1381792  bytes 250829475 (239.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0



After packstack completes, switch both nodes to the IPv4 iptables firewall
*********************************************************************************
As of 01/25/2015 dnsmasq fails to serve private subnets unless the following
lines are commented out
*********************************************************************************

# -A INPUT -j REJECT --reject-with icmp-host-prohibited
# -A FORWARD -j REJECT --reject-with icmp-host-prohibited
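A minimal way to apply this on both nodes, assuming the rules live in /etc/sysconfig/iptables (the iptables-services default):

# vi /etc/sysconfig/iptables   ( comment out the two REJECT lines above )
# systemctl restart iptables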
 
Set up Nova-Docker on Controller&&Network Node
***************************
Initial docker setup
***************************
# yum install python-pbr

# yum install docker-io -y
# yum install -y python-pip git
 
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
#  mkdir /etc/nova/rootwrap.d


************************************************************************************
On Fedora 21, even running systemd 218-3, you should expect
six.__version__ to be downgraded to 1.2 right after `python setup.py install`.

Then run:-

# pip install --upgrade six

Downloading/unpacking six from https://pypi.python.org/packages/3.3/s/six/six-1.9.0-py2.py3-none-any.whl#md5=9ac7e129a80f72d6fc1f0216f6e9627b
  Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six
  Found existing installation: six 1.7.3
    Uninstalling six:
      Successfully uninstalled six
Successfully installed six
Cleaning up...
***************************************************************************************

Proceed as normal.

************************************************
Create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add a line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

*************************************
Restart Service glance-api
*************************************
usermod -aG docker nova
systemctl restart openstack-glance-api

********************************************************************************
  Creating the openstack-nova-docker service per http://blog.oddbit.com/2015/01/17/running-novalibvirt-and-novadocker-on-the-same-host/
Due to the answer-file configuration, in our case /etc/nova/nova.conf on the Controller doesn't set any compute_driver at all, while the Compute node runs the libvirt driver.
*********************************************************************************

Create new file /etc/nova/nova-docker.conf


[DEFAULT]
 host=juno1dev.localdomain
 compute_driver=novadocker.virt.docker.DockerDriver
 log_file=/var/log/nova/nova-docker.log
 state_path=/var/lib/nova-docker
 
Create an openstack-nova-compute.service unit on the system and save it as
/etc/systemd/system/openstack-nova-docker.service
 
[Unit]
Description=OpenStack Nova Compute Server (Docker)
After=syslog.target network.target

[Service]
Environment=LIBGUESTFS_ATTACH_METHOD=appliance
Type=notify
Restart=always
User=nova
ExecStart=/usr/bin/nova-compute --config-file /etc/nova/nova.conf \
          --config-file /etc/nova/nova-docker.conf

[Install]
WantedBy=multi-user.target

 
SCP /usr/bin/nova-compute from the Compute node to the Controller (sketched below) and run :-
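A hypothetical copy command, assuming root ssh access between the nodes:

# scp root@192.168.1.137:/usr/bin/nova-compute /usr/bin/nova-compute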
 
# systemctl enable openstack-nova-docker
# systemctl start openstack-nova-docker
 
Update /etc/nova/nova.conf on Compute Node

vif_plugging_is_fatal=False
vif_plugging_timeout=0
# systemctl restart openstack-nova-compute 
 

********************************************************************************
On Fedora 21 keep these entries as is (no changes); however, to launch a new
instance on Compute you have to stop the openstack-nova-docker service
on the Controller, just for the 2-3 min the instance takes to go from spawn => active,
then restart openstack-nova-docker on the Controller.
********************************************************************************

As a final result, the dashboard will automatically spawn, launch and run Nova-Docker containers on the Controller, while usual Nova instances run on the KVM hypervisor (libvirt driver) on the Compute Node.
 
[root@juno1dev ~(keystone_admin)]# nova service-list
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                 | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 2  | nova-scheduler   | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 3  | nova-conductor   | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:24.000000 | -               |
| 4  | nova-cert        | juno1dev.localdomain | internal | enabled | up    | 2015-01-26T06:42:16.000000 | -               |
| 5  | nova-compute     | juno2dev.localdomain | nova     | enabled | up    | 2015-01-26T06:42:23.000000 | -               |
| 6  | nova-compute     | juno1dev.localdomain | nova     | enabled | up    | 2015-01-26T06:42:24.000000 | -               |
+----+------------------+----------------------+----------+---------+-------+----------------------------+-----------------+

[root@juno1dev ~(keystone_admin)]# systemctl | grep nova

openstack-nova-api.service          loaded active running   OpenStack Nova API Server
openstack-nova-cert.service         loaded active running   OpenStack Nova Cert Server
openstack-nova-conductor.service    loaded active running   OpenStack Nova Conductor Server
openstack-nova-consoleauth.service  loaded active running   OpenStack Nova VNC console auth Server
openstack-nova-docker.service       loaded active running   OpenStack Nova Compute Server (Docker)
openstack-nova-novncproxy.service   loaded active running   OpenStack Nova NoVNC Proxy Server
openstack-nova-scheduler.service    loaded active running   OpenStack Nova Scheduler Server
 
******************************************* 
Tuning VNC Console in dashboard :-
*******************************************
 
Controller - 192.168.1.127 


Running: nova-consoleauth, nova-novncproxy. In nova.conf:

novncproxy_host=0.0.0.0 
novncproxy_port=6080 
novncproxy_base_url=http://192.168.1.127:6080/vnc_auto.html 


Compute - 192.168.1.137 

Running: nova-compute. In nova.conf:
 
vnc_enabled=True
novncproxy_base_url=http://192.168.1.137:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.1.137

References
 
https://ask.openstack.org/en/question/520/vnc-console-in-dashboard-fails-to-connect-ot-server-code-1006/

Saturday, January 17, 2015

Set up LVMiSCSI cinder backend for RDO Juno on Fedora 21 for Two Node Cluster (Controller&&Network and Compute)

UPDATE 11/17/2015
See Storage Node (LVMiSCSI) deployment for RDO Liberty on CentOS 7.1
END UPDATE

During RDO Juno set up on Fedora 21 Workstation the service target is deactivated
on boot up and tgtd is started (versus the CentOS 7 installation procedure), which
requires some additional effort to tune the LVMiSCSI cinder back end on the newest Fedora release. Actually, RDO Juno packstack multi node setup follows the procedure posted here: http://lxer.com/module/newswire/view/207415/index.html

Service tgtd should be stopped and disabled on Controller
Service target should be enabled and started on Controller
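A sketch of the corresponding systemctl commands on the Controller:

# systemctl stop tgtd
# systemctl disable tgtd
# systemctl enable target
# systemctl start target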

[root@juno1f21 ~(keystone_admin)]# service target status
Redirecting to /bin/systemctl status  target.service
● target.service - Restore LIO kernel target configuration
   Loaded: loaded (/usr/lib/systemd/system/target.service; enabled)
   Active: active (exited) since Sat 2015-01-17 15:45:44 MSK; 12min ago
  Process: 1512 ExecStart=/usr/bin/targetctl restore (code=exited, status=0/SUCCESS)
 Main PID: 1512 (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/target.service

In general, here is a summary of the iSCSI fabric objects hierarchy (see also the underlying configFS layout). View http://linux-iscsi.org/wiki/Targetcli :-


+-targetcli
  |
  +-Targets
    | Identified by their WWNs or IQN (for iSCSI).
    | Targets identify a group of Endpoints.
    |
    +-TPGs (Target Portal Groups, iSCSI only)
      | The TPG is identified by its numerical Tag, starting at 1. It
      | groups several Network Portals, and caps LUNs and Node ACLs.
      | For fabrics other than iSCSI, targetcli masks the TPG level.
      |
      +-Network Portals (iSCSI only)
      |   A Network Portal adds an IP address and a port. Without at
      |   least one Network Portal, the Target remains disabled.
      |
      +-LUNs
      |   LUNs point at the Storage Objects, and are numbered 0-255.
      |
      +-ACLs
        | Identified by initiator WWNs/IQNs, ACLs group permissions
        | for that specific initiator. If ACLs are enabled, one
        | NodeACL is required per authorized initiator.
        |
        + Mapped LUNs
            Determine which LUNs an initiator will see. E.g., if
            Mapped LUN 1 points at LUN 0, the initiator referenced
            by the NodeACL will see LUN 0 as LUN 1.


In the targetcli environment, follow the procedure described here http://www.server-world.info/en/note?os=Fedora_21&p=iscsi
and create the ACL iqn.1994-05.com.redhat:28205be4fa2c, exactly matching the InitiatorName
in the file /etc/iscsi/initiatorname.iscsi on the Compute Node.
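A hypothetical targetcli session for that step, following the linked guide (the target IQN is a placeholder for one listed under /iscsi; commands are typed at the targetcli shell):

# targetcli
cd /iscsi/<target-iqn>/tpg1/acls
create iqn.1994-05.com.redhat:28205be4fa2c
cd iqn.1994-05.com.redhat:28205be4fa2c
set auth userid=username password=password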


   On the Compute Node follow http://www.server-world.info/en/note?os=Fedora_21&p=iscsi&f=2

*************************************************
Update /etc/iscsi/iscsid.conf  to match :-
*************************************************

  node.session.auth.username = username
  node.session.auth.password = password

  assigned in the targetcli setup on the Controller, then run on the Compute node

  # systemctl  start iscsid
  # systemctl  enable iscsid

*****************************************************************************
 Update /etc/cinder/cinder.conf on the Controller as follows in the [DEFAULT] section
******************************************************************************

  enabled_backends = lvm001

  Then place at the bottom of cinder.conf:

   [lvm001]
   iscsi_helper=lioadm
   volume_group=cinder-volumes001
   iscsi_ip_address=192.168.1.127
   volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
   volume_backend_name=LVM_iSCSI001

   ****************
   Now run :-
   ****************

   [root@juno1f21 ~(keystone_admin)]#  cinder type-create lvms

   [root@juno1f21 ~(keystone_admin)]#  cinder type-key lvms set    volume_backend_name=LVM_iSCSI001

   [root@juno1f21 ~(keystone_admin)]# for i in api scheduler volume ; do service openstack-cinder-${i} restart ; done
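To double-check that the volume type is bound to the backend (a quick verification, not part of the original run):

   [root@juno1f21 ~(keystone_admin)]# cinder extra-specs-list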

*******************************************************************************
  Via the "volume type" drop-down menu create VF21LVMS01 with the lvms type :-
*******************************************************************************



 
 and launch an instance of Fedora 21 via the LVMiSCSI volume created
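From the CLI the equivalent launch looks roughly like this (a sketch; the image and volume ids are placeholders, the names and flavor are taken from the listings below):

 [root@juno1f21 ~(keystone_admin)]# cinder create --image-id <fedora21-image-id> \
       --volume-type lvms --display-name VF21LVMS01 5
 [root@juno1f21 ~(keystone_admin)]# nova boot --flavor m1.small --boot-volume <volume-id> \
       --key-name oskey57 --nic net-id=<demo_network-id> VF21LVX001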

 Compute node will report :-

[root@juno2f21 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752


[root@juno2f21 ~]# service iscsid status

Redirecting to /bin/systemctl status  iscsid.service
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled)
   Active: active (running) since Sat 2015-01-17 15:24:12 MSK; 1h 4min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 27674 ExecStop=/sbin/iscsiadm -k 0 2 (code=exited, status=0/SUCCESS)
  Process: 27680 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 27682 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─27681 /usr/sbin/iscsid
           └─27682 /usr/sbin/iscsid

Jan 17 15:44:57 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:03 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:39 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:43 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (Co...d)
Jan 17 15:45:46 juno2f21.localdomain iscsid[27681]: connection1:0 is operational after recov...s)
Hint: Some lines were ellipsized, use -l to show in full.

**********************************************************************
 Verify the volume-id shown in the targetcli> ls report :
**********************************************************************

[root@juno1f21 ~(keystone_admin)]# nova list --all-tenants
+--------------------------------------+------------------+-----------+------------+-------------+---------------------------------------+
| ID                                   | Name             | Status    | Task State | Power State | Networks                              |
+--------------------------------------+------------------+-----------+------------+-------------+---------------------------------------+
| 3f06cb34-797d-45d1-989e-cba14e902b6c | UbuntuUtopicRX01 | SUSPENDED | -          | Shutdown    | demo_network=40.0.0.17, 192.168.1.154 |
| 7fcbcf6f-67a7-4603-9c09-6e725d403a04 | VF21GLX01        | SUSPENDED | -          | Shutdown    | demo_network=40.0.0.16, 192.168.1.153 |
| a731443e-1355-44c0-811b-97cf9eab987e | VF21LVX001       | ACTIVE    | -          | Running     | demo_network=40.0.0.18, 192.168.1.155 |
+--------------------------------------+------------------+-----------+------------+-------------+---------------------------------------+
[root@juno1f21 ~(keystone_admin)]# nova show a731443e-1355-44c0-811b-97cf9eab987e
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | AUTO                                                     |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-SRV-ATTR:host                 | juno2f21.localdomain                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | juno2f21.localdomain                                     |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000008                                        |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2015-01-17T12:33:50.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2015-01-17T12:33:38Z                                     |
| demo_network network                 | 40.0.0.18, 192.168.1.155                                 |
| flavor                               | m1.small (2)                                             |
| hostId                               | 40dba45d18a87067afdd4187c4467eed967a11c3b59df8b921f6b16e |
| id                                   | a731443e-1355-44c0-811b-97cf9eab987e                     |
| image                                | Attempt to boot from volume - no image supplied          |
| key_name                             | oskey57                                                  |
| metadata                             | {}                                                       |
| name                                 | VF21LVX001                                               |
| os-extended-volumes:volumes_attached | [{"id": "d96a5ad7-bd0b-438a-8ffb-4cb631ed8752"}]         |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 25f74c1d135c4727b1406cb35f9df70a                         |
| updated                              | 2015-01-17T12:52:06Z                                     |
| user_id                              | 0025c17969f64708a886d4bb1fa354cc                         |
+--------------------------------------+----------------------------------------------------------+

[root@juno1f21 ~(keystone_admin)]# targetcli
targetcli shell version 2.1.fb38
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> ls
o- / ...................................................................................... [...]
  o- backstores ........................................................................... [...]
  | o- block ............................................................... [Storage Objects: 1]
  | | o- iqn.2010-10.org.openstack:volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752  [/dev/cinder-volumes001/volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752 (5.0GiB) write-thru activated]


Creating volume for Ubuntu Utopic



Reporting from Compute side :-

[root@juno1f21 ~(keystone_admin)]# ssh 192.168.1.137
Last login: Sat Jan 17 16:24:44 2015 from juno1f21.localdomain
[root@juno2f21 ~]# service iscsid status
Redirecting to /bin/systemctl status  iscsid.service
● iscsid.service - Open-iSCSI
   Loaded: loaded (/usr/lib/systemd/system/iscsid.service; enabled)
   Active: active (running) since Sat 2015-01-17 15:24:12 MSK; 1h 53min ago
     Docs: man:iscsid(8)
           man:iscsiadm(8)
  Process: 27674 ExecStop=/sbin/iscsiadm -k 0 2 (code=exited, status=0/SUCCESS)
  Process: 27680 ExecStart=/usr/sbin/iscsid (code=exited, status=0/SUCCESS)
 Main PID: 27682 (iscsid)
   CGroup: /system.slice/iscsid.service
           ├─27681 /usr/sbin/iscsid
           └─27682 /usr/sbin/iscsid


Jan 17 15:45:03 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:09 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (No...t)
Jan 17 15:45:43 juno2f21.localdomain iscsid[27681]: connect to 192.168.1.127:3260 failed (Co...d)
Jan 17 15:45:46 juno2f21.localdomain iscsid[27681]: connection1:0 is operational after recov...s)
Jan 17 17:05:38 juno2f21.localdomain iscsid[27681]: Connection2:0 to [target: iqn.2010-10.or...ow

Hint: Some lines were ellipsized, use -l to show in full.

[root@juno2f21 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.1.127

192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-d96a5ad7-bd0b-438a-8ffb-4cb631ed8752
192.168.1.127:3260,1 iqn.2010-10.org.openstack:volume-e87a3ee8-fa04-4bab-aedc-31bd2f4d4c02


   

Sunday, January 11, 2015

Set up Nova-Docker on OpenStack RDO Juno on top of Fedora 21

****************************************************************************************
UPDATE as of 01/31/2015 to get Docker && Nova-Docker working on Fedora 21
****************************************************************************************
Per https://github.com/docker/docker/issues/10280
download systemd-218-3.fc22.src.rpm, build the 218-3 rpms and upgrade systemd.

First, set up the packages for rpmbuild :-

 $ sudo yum install audit-libs-devel autoconf  automake cryptsetup-devel \
    dbus-devel docbook-style-xsl elfutils-devel  \
    glib2-devel  gnutls-devel  gobject-introspection-devel \
    gperf     gtk-doc intltool kmod-devel libacl-devel \
    libblkid-devel     libcap-devel libcurl-devel libgcrypt-devel \
    libidn-devel libmicrohttpd-devel libmount-devel libseccomp-devel \
    libselinux-devel libtool pam-devel python3-devel python3-lxml \
    qrencode-devel  python2-devel  xz-devel 

Second :-

 $ cd rpmbuild/SPECS
 $ rpmbuild -bb systemd.spec
 $ cd ../RPMS/x86_64

Third :-

$ yum install libgudev1-218-3.fc21.x86_64.rpm \
libgudev1-devel-218-3.fc21.x86_64.rpm \
systemd-218-3.fc21.x86_64.rpm \
systemd-compat-libs-218-3.fc21.x86_64.rpm \
systemd-debuginfo-218-3.fc21.x86_64.rpm \
systemd-devel-218-3.fc21.x86_64.rpm \
systemd-journal-gateway-218-3.fc21.x86_64.rpm \
systemd-libs-218-3.fc21.x86_64.rpm \
systemd-python-218-3.fc21.x86_64.rpm \
systemd-python3-218-3.fc21.x86_64.rpm

  See also  https://ask.openstack.org/en/question/59789/attempt-to-install-nova-docker-driver-on-fedora-21/

***************************************************************************************
  Recently Filip Krikava made a fork on github and created a Juno branch using
the latest commit + "Fix the problem when an image is not located in the local docker image registry".

 Master https://github.com/stackforge/nova-docker.git is targeting the latest Nova (Kilo release); the forked branch is supposed to work for Juno, reasonably including commits after "Merge oslo.i18n". The posting below is supposed to test the Juno branch https://github.com/fikovnik/nova-docker.git


Quote ([2]) :-

The Docker driver is a hypervisor driver for Openstack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.
 


*****************************************************
As of 11/12/2014 the third line may instead use the official fork
https://github.com/stackforge/nova-docker/tree/stable/juno
# git clone https://github.com/stackforge/nova-docker
*****************************************************


***************************
Initial docker setup
***************************

# yum install docker-io -y
# yum install -y python-pip git
# git clone https://github.com/fikovnik/nova-docker.git
# cd nova-docker
# git branch -v -a

   master                1ed1820 A note no firewall drivers.
  remotes/origin/HEAD   -> origin/master
  remotes/origin/juno   1a08ea5 Fix the problem when an image
            is not located in the local docker image registry.
  remotes/origin/master 1ed1820 A note no firewall drivers.
# git checkout -b juno origin/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d


Just after `python setup.py install` the six version might drop to 1.2;
in this case run `pip install --upgrade six` and 1.7.3 will be back
again.

Setup via official branch

***************************
Initial docker setup
***************************
# yum install python-pbr

# yum install docker-io -y
# yum install -y python-pip git
 
# git clone https://github.com/stackforge/nova-docker
# cd nova-docker
# git checkout stable/juno
# python setup.py install
# systemctl start docker
# systemctl enable docker
# chmod 660  /var/run/docker.sock
# pip install pbr
#  mkdir /etc/nova/rootwrap.d


**************************************************************
On Fedora 21, even running systemd 218-3, you should expect
six.__version__ to be downgraded to 1.2 right after `python setup.py install`.

Then run:-

# pip install --upgrade six

Downloading/unpacking six from https://pypi.python.org/packages/3.3/s/six/six-1.9.0-py2.py3-none-any.whl#md5=9ac7e129a80f72d6fc1f0216f6e9627b
  Downloading six-1.9.0-py2.py3-none-any.whl
Installing collected packages: six
  Found existing installation: six 1.7.3
    Uninstalling six:
      Successfully uninstalled six
Successfully installed six
Cleaning up...
**************************************************************

Proceed as normal.


******************************
Update nova.conf
******************************
vi /etc/nova/nova.conf
set "compute_driver = novadocker.virt.docker.DockerDriver"

************************************************
Next, create the docker.filters file:
************************************************

vi /etc/nova/rootwrap.d/docker.filters

Insert Lines

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

*****************************************
Add a line to /etc/glance/glance-api.conf
*****************************************
container_formats=ami,ari,aki,bare,ovf,ova,docker
:wq

************************
Restart Services
************************
usermod -aG docker nova
systemctl restart openstack-nova-compute
systemctl status openstack-nova-compute
systemctl restart openstack-glance-api

*******************************************************************************
Verification of the nova-docker driver built on Fedora 21
*******************************************************************************

The build below extends phusion/baseimage to start several daemons at a time
when launching a nova-docker container. It has been tested on Nova-Docker RDO
Juno on top of CentOS 7 (view "Set up GlassFish 4.1 Nova-Docker Container via phusion/baseimage on RDO Juno"). Here it is reproduced on Nova-Docker RDO Juno on top of Fedora 21, right after a `packstack --allinone` Juno installation on Fedora 21, and runs pretty smoothly.

*******************************************************************************
To bring sshd back to life, create the script 01_sshd_start.sh in the build folder
*******************************************************************************
#!/bin/bash

if [[ ! -e /etc/ssh/ssh_host_rsa_key ]]; then
    echo "No SSH host key available. Generating one..."
    export LC_ALL=C
    export DEBIAN_FRONTEND=noninteractive
    dpkg-reconfigure openssh-server
    echo "SSH KEYS REDONE !"
fi
/usr/sbin/sshd > log &

and insert in Dockerfile:-

ADD 01_sshd_start.sh /etc/my_init.d/

*********************************************************************************
 Following below is the Dockerfile used to build the image for the GlassFish 4.1 nova-docker container, extending phusion/baseimage and starting three daemons
at a time when the nova-docker instance is launched; the image is prepared to be used by the Nova-Docker driver on Juno.
**********************************************************************************

FROM phusion/baseimage

MAINTAINER Boris Derzhavets

RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config

RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp  jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH


RUN apt-get update && \
    apt-get install -y wget unzip pwgen expect net-tools vim && \
    wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip && \
    unzip glassfish-4.1.zip -d /opt && \
    rm glassfish-4.1.zip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV PATH /opt/glassfish4/bin:$PATH

ADD 01_sshd_start.sh  /etc/my_init.d
ADD run.sh /etc/my_init.d/
ADD database.sh  /etc/my_init.d/
ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)
EXPOSE 22  4848 8080 8181 9009

CMD ["/sbin/my_init"]


********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno on top of Fedora 21 (view http://lxer.com/module/newswire/view/209277/index.html ).
********************************************************************************
# cat database.sh

#!/bin/bash

set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true > log &

echo "Derby database started !"

The important change is binding dbhost to 127.0.0.1, which is not required for loading the docker container directly. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************

[root@junolxc docker-glassfish41]# ls -l
total 44
-rw-r--r--. 1 root root   217 Jan  7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root   833 Jan  7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root   473 Jan  7 00:27 circle.yml
-rw-r--r--. 1 root root    44 Jan  7 00:27 database.sh
-rw-r--r--. 1 root root  1287 Jan  7 00:27 Dockerfile
-rw-r--r--. 1 root root   167 Jan  7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan  7 00:27 LICENSE
-rw-r--r--. 1 root root  2123 Jan  7 00:27 README.md
-rw-r--r--. 1 root root   354 Jan  7 00:27 run.sh

[root@junolxc docker-glassfish41]# docker build -t derby/docker-glassfish41 .

*************************
Upload image to glance
*************************

# . keystonerc_admin
# docker save derby/docker-glassfish41:latest | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name derby/docker-glassfish41:latest

**********************
Launch instance
**********************
# .  keystonerc_demo
# nova boot --image "derby/docker-glassfish41:latest" --flavor m1.small --key-name  oskey57    --nic net-id=demo_network-id DerbyGlassfish41
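Once the instance goes ACTIVE, the same container should show up on the docker side under the name nova-<instance-uuid> (compare the `docker ps` listing further down):

# nova list
# docker ps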


[root@fedora21 docker-glassfish41(keystone_admin)]# docker logs 4eb390cf155d
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
No SSH host key available. Generating one...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.
*** Running /etc/my_init.d/database.sh...
Derby database started !
*** Running /etc/my_init.d/run.sh...
Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-0000000d: instance-0000000d: unknown error
Waiting for domain1 to start ...........
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name>  admin
Enter admin password for user "admin">
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:

     admin:H5V0x7fVKKWu

Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false
Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-0000000d: instance-0000000d: unknown error



Thursday, January 08, 2015

Set up GlassFish 4.1 Nova-Docker Container via docker's phusion/baseimage on RDO Juno

The problem here is that phusion/baseimage, per https://github.com/phusion/baseimage-docker, should provide ssh access to the container; however, it doesn't.
On 01/21/2015 I had to completely rewrite this blog post; the reasons for
this step clearly come up from comparing the two `docker logs container-id` outputs,
as of 01/08/2015 and as of 01/21/2015. I kept the original version untouched at
bderzhavets.wordpress.com.

*******************************************************************************
To bring sshd back to life, create the script 01_sshd_start.sh in the build folder
*******************************************************************************
#!/bin/bash


if [[ ! -e /etc/ssh/ssh_host_rsa_key ]]; then
    echo "No SSH host key available. Generating one..."
    export LC_ALL=C
    dpkg-reconfigure openssh-server
    echo "SSH KEYS regenerated by Boris just in case !"
fi
/usr/sbin/sshd > log &
echo "SSHD started !"

and insert in Dockerfile:-

ADD 01_sshd_start.sh /etc/my_init.d/ 

 *******************************************************************
 Then what? Finally, this appears to be the case (01/21/2015) :-
*******************************************************************

CONTAINER ID        IMAGE                               COMMAND             CREATED             STATUS              PORTS               NAMES

4ef00f2fa5b9        dbahack/docker-glassfish41:latest   "/sbin/my_init"     36 seconds ago      Up 34 seconds                           nova-246d094b-bcd1-49c3-a490-0f74a7609d9a

1ee743c3cf3c        dba57/docker-glassfish41:latest     "/sbin/my_init"     18 minutes ago      Up 18 minutes                           nova-0c8b4b9d-14e1-43b5-8a55-dff8aa50fca1  

[root@junoDocker01 docker-glassfish41(keystone_admin)]# docker logs 4ef00f2fa5b9

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/01_start-sshd.sh...
No SSH host key available. Generating one...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.
SSH KEYS regenarated by Boris just in case !
SSHD started !




*** Running /etc/my_init.d/database.sh...
Derby database started !
*** Running /etc/my_init.d/run.sh...
Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000011: instance-00000011: unknown error
Waiting for domain1 to start .......
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name>  admin
Enter admin password for user "admin">
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:

     admin:lHX30b8Kip0F

Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false
Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000011: instance-00000011: unknown error

*********************************************************************************
Another option: follow https://github.com/phusion/baseimage-docker/commit/2640bc7b036f216a149d6c8e284008f26bbb41f9
*********************************************************************************
Add to Dockerfile 

RUN rm -f /etc/service/sshd/down
RUN /etc/my_init.d/00_regen_ssh_host_keys.sh


But in this case you would have to run :-
# docker exec <container-id> /usr/sbin/sshd -D
to activate SSH login to the container.


*********************************************************************************
 Following below is the Dockerfile used to build the image for the GlassFish 4.1 nova-docker container, extending phusion/baseimage and starting three daemons
at a time when the nova-docker instance is launched; the image is prepared to be used by the Nova-Docker driver on Juno.
**********************************************************************************


FROM phusion/baseimage

MAINTAINER Boris Derzhavets

ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN echo 'root:root' |chpasswd
RUN sed -ri 's/^PermitRootLogin\s+.*/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -ri 's/UsePAM yes/#UsePAM yes/g' /etc/ssh/sshd_config
RUN apt-get update && apt-get install -y wget
RUN wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz
RUN cp  jdk-8u25-linux-x64.tar.gz /opt
RUN cd /opt; tar -zxvf jdk-8u25-linux-x64.tar.gz
ENV PATH /opt/jdk1.8.0_25/bin:$PATH


RUN apt-get update && \
    apt-get install -y wget unzip pwgen expect net-tools vim && \
    wget http://download.java.net/glassfish/4.1/release/glassfish-4.1.zip && \
    unzip glassfish-4.1.zip -d /opt && \
    rm glassfish-4.1.zip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV PATH /opt/glassfish4/bin:$PATH

ADD 01_sshd_start.sh  /etc/my_init.d
ADD run.sh /etc/my_init.d/
ADD database.sh  /etc/my_init.d/
ADD change_admin_password.sh /change_admin_password.sh
ADD change_admin_password_func.sh /change_admin_password_func.sh
ADD enable_secure_admin.sh /enable_secure_admin.sh
RUN chmod +x /*.sh /etc/my_init.d/*.sh

# 4848 (administration), 8080 (HTTP listener), 8181 (HTTPS listener), 9009 (JPDA debug port)
EXPOSE 22  4848 8080 8181 9009

CMD ["/sbin/my_init"]



********************************************************************************
I had to update the database.sh script as follows to make the nova-docker container
start on RDO Juno
********************************************************************************
# cat database.sh

#!/bin/bash

set -e
asadmin start-database --dbhost 127.0.0.1 --terse=true >log &

echo "Derby database started !"

The important change is binding dbhost to 127.0.0.1, which is not required for loading the docker container directly. The Nova-Docker driver ( http://www.linux.com/community/blogs/133-general-linux/799569-running-nova-docker-on-openstack-rdo-juno-centos-7 ) seems to be more picky about the --dbhost key value of the Derby database.

*********************
Build image
*********************

[root@junolxc docker-glassfish41]# ls -l
total 44
-rw-r--r--. 1 root root   217 Jan  7 00:27 change_admin_password_func.sh
-rw-r--r--. 1 root root   833 Jan  7 00:27 change_admin_password.sh
-rw-r--r--. 1 root root   473 Jan  7 00:27 circle.yml
-rw-r--r--. 1 root root    44 Jan  7 00:27 database.sh
-rw-r--r--. 1 root root  1287 Jan  7 00:27 Dockerfile
-rw-r--r--. 1 root root   167 Jan  7 00:27 enable_secure_admin.sh
-rw-r--r--. 1 root root 11323 Jan  7 00:27 LICENSE
-rw-r--r--. 1 root root  2123 Jan  7 00:27 README.md
-rw-r--r--. 1 root root   354 Jan  7 00:27 run.sh

[root@junolxc docker-glassfish41]# docker build -t boris/docker-glassfish41 .

*************************
Upload image to glance
*************************

# . keystonerc_admin
# docker save boris/docker-glassfish41:latest | glance image-create --is-public=True   --container-format=docker --disk-format=raw --name boris/docker-glassfish41:latest

**********************
Launch instance
**********************
# .  keystonerc_demo
# nova boot --image "boris/docker-glassfish41:latest" --flavor m1.small --key-name  osxkey    --nic net-id=demo_network-id OracleGlassfish41


[root@junodocker (keystone_admin)]# ssh root@192.168.1.175
root@192.168.1.175's password:
Last login: Fri Jan  9 10:09:50 2015 from 192.168.1.57

root@instance-00000045:~# ps -ef
 UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 10:15 ?        00:00:00 /usr/bin/python3 -u /sbin/my_init
root        12     1  0 10:15 ?        00:00:00 /usr/sbin/sshd

root        46     1  0 10:15 ?        00:00:08 /opt/jdk1.8.0_25/bin/java -Djava.library.path=/opt/glassfish4/glassfish/lib -cp /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar com.sun.enterprise.admin.cli.optional.DerbyControl start 127.0.0.1 1527 true /opt/glassfish4/glassfish/databases

root       137     1  0 10:15 ?        00:00:00 /bin/bash /etc/my_init.d/run.sh
root       358   137  0 10:15 ?        00:00:05 java -jar /opt/glassfish4/bin/../glassfish/lib/client/appserver-cli.jar start-domain --debug=false -w

root       375   358  0 10:15 ?        00:02:59 /opt/jdk1.8.0_25/bin/java -cp /opt/glassfish4/glassfish/modules/glassfish.jar -XX:+UnlockDiagnosticVMOptions -XX:NewRatio=2 -XX:MaxPermSize=192m -Xmx512m -client -javaagent:/opt/glassfish4/glassfish/lib/monitor/flashlight-agent.jar -Djavax.xml.accessExternalSchema=all -Djavax.net.ssl.trustStore=/opt/glassfish4/glassfish/domains/domain1/config/cacerts.jks -Djdk.corba.allowOutputStreamSubclass=true -Dfelix.fileinstall.dir=/opt/glassfish4/glassfish/modules/autostart/ -Dorg.glassfish.additionalOSGiBundlesToStart=org.apache.felix.shell,org.apache.felix.gogo.runtime,org.apache.felix.gogo.shell,org.apache.felix.gogo.command,org.apache.felix.shell.remote,org.apache.felix.fileinstall -Dcom.sun.aas.installRoot=/opt/glassfish4/glassfish -Dfelix.fileinstall.poll=5000 -Djava.endorsed.dirs=/opt/glassfish4/glassfish/modules/endorsed:/opt/glassfish4/glassfish/lib/endorsed -Djava.security.policy=/opt/glassfish4/glassfish/domains/domain1/config/server.policy -Dosgi.shell.telnet.maxconn=1 -Dfelix.fileinstall.bundles.startTransient=true -Dcom.sun.enterprise.config.config_environment_factory_class=com.sun.enterprise.config.serverbeans.AppserverConfigEnvironmentFactory -Dfelix.fileinstall.log.level=2 -Djavax.net.ssl.keyStore=/opt/glassfish4/glassfish/domains/domain1/config/keystore.jks -Djava.security.auth.login.config=/opt/glassfish4/glassfish/domains/domain1/config/login.conf -Dfelix.fileinstall.disableConfigSave=false -Dfelix.fileinstall.bundles.new.start=true -Dcom.sun.aas.instanceRoot=/opt/glassfish4/glassfish/domains/domain1 -Dosgi.shell.telnet.port=6666 -Dgosh.args=--nointeractive -Dcom.sun.enterprise.security.httpsOutboundKeyAlias=s1as -Dosgi.shell.telnet.ip=127.0.0.1 -DANTLR_USE_DIRECT_CLASS_LOADING=true -Djava.awt.headless=true -Dcom.ctc.wstx.returnNullForDefaultNamespace=true -Djava.ext.dirs=/opt/jdk1.8.0_25/lib/ext:/opt/jdk1.8.0_25/jre/lib/ext:/opt/glassfish4/glassfish/domains/domain1/lib/ext -Djdbc.drivers=org.apache.derby.jdbc.ClientDriver -Djava.library.path=/opt/glassfish4/glassfish/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib com.sun.enterprise.glassfish.bootstrap.ASMain -upgrade false -domaindir /opt/glassfish4/glassfish/domains/domain1 -read-stdin true -asadmin-args --host,,,localhost,,,--port,,,4848,,,--secure=false,,,--terse=false,,,--echo=false,,,--interactive=false,,,start-domain,,,--verbose=false,,,--watchdog=true,,,--debug=false,,,--domaindir,,,/opt/glassfish4/glassfish/domains,,,domain1 -domainname domain1 -instancename server -type DAS -verbose false -asadmin-classpath /opt/glassfish4/glassfish/lib/client/appserver-cli.jar -debug false -asadmin-classname com.sun.enterprise.admin.cli.AdminMain

root      1186    12  0 14:02 ?        00:00:00 sshd: root@pts/0
root      1188  1186  0 14:02 pts/0    00:00:00 -bash
root      1226  1188  0 15:45 pts/0    00:00:00 ps -ef




 
  
    The original idea of using the ./run.sh script comes from
    https://registry.hub.docker.com/u/bonelli/glassfish-4.1/

   

*********************************
This is the log I got around 01/08/2015
*********************************


[root@junodocker ~(keystone_admin)]# docker logs 65a3f4cf1994

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
No SSH host key available. Generating one...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.

*** Running /etc/my_init.d/database.sh...
Starting database in Network Server mode on host 127.0.0.1 and port 1527.


--------- Derby Network Server Information --------
Version: CSS10100/10.10.2.0 - (1582446)  Build: 1582446  DRDA Product Id: CSS10100
-- listing properties --
derby.drda.traceDirectory=/opt/glassfish4/glassfish/databases
derby.drda.maxThreads=0
derby.drda.sslMode=off
derby.drda.keepAlive=true
derby.drda.minThreads=0
derby.drda.portNumber=1527
derby.drda.logConnections=false
derby.drda.timeSlice=0
derby.drda.startNetworkServer=false
derby.drda.host=127.0.0.1
derby.drda.traceAll=false
------------------ Java Information ------------------
Java Version:    1.8.0_25
Java Vendor:     Oracle Corporation
Java home:       /opt/jdk1.8.0_25/jre
Java classpath:  /opt/glassfish4/glassfish/lib/asadmin/cli-optional.jar:/opt/glassfish4/javadb/lib/derby.jar:/opt/glassfish4/javadb/lib/derbytools.jar:/opt/glassfish4/javadb/lib/derbynet.jar:/opt/glassfish4/javadb/lib/derbyclient.jar
OS name:         Linux
OS architecture: amd64
OS version:      3.10.0-123.el7.x86_64
Java user name:  root
Java user home:  /root
Java user dir:   /
java.specification.name: Java Platform API Specification
java.specification.version: 1.8
java.runtime.version: 1.8.0_25-b17
--------- Derby Information --------
[/opt/glassfish4/javadb/lib/derby.jar] 10.10.2.0 - (1582446)
[/opt/glassfish4/javadb/lib/derbytools.jar] 10.10.2.0 - (1582446)
[/opt/glassfish4/javadb/lib/derbynet.jar] 10.10.2.0 - (1582446)
[/opt/glassfish4/javadb/lib/derbyclient.jar] 10.10.2.0 - (1582446)
------------------------------------------------------
----------------- Locale Information -----------------
Current Locale :  [English/United States [en_US]]
Found support for locale: [cs]
     version: 10.10.2.0 - (1582446)
Found support for locale: [de_DE]
     version: 10.10.2.0 - (1582446)
Found support for locale: [es]
     version: 10.10.2.0 - (1582446)
Found support for locale: [fr]
     version: 10.10.2.0 - (1582446)
Found support for locale: [hu]
     version: 10.10.2.0 - (1582446)
Found support for locale: [it]
     version: 10.10.2.0 - (1582446)
Found support for locale: [ja_JP]
     version: 10.10.2.0 - (1582446)
Found support for locale: [ko_KR]
     version: 10.10.2.0 - (1582446)
Found support for locale: [pl]
     version: 10.10.2.0 - (1582446)
Found support for locale: [pt_BR]
     version: 10.10.2.0 - (1582446)
Found support for locale: [ru]
     version: 10.10.2.0 - (1582446)
Found support for locale: [zh_CN]
     version: 10.10.2.0 - (1582446)
Found support for locale: [zh_TW]
     version: 10.10.2.0 - (1582446)
------------------------------------------------------
------------------------------------------------------

Starting database in the background.
Log redirected to /opt/glassfish4/glassfish/databases/derby.log.
Command start-database executed successfully.

*** Running /etc/my_init.d/run.sh...
Bad Network Configuration.  DNS can not resolve the hostname:
java.net.UnknownHostException: instance-00000045: instance-00000045: unknown error
Waiting for domain1 to start .......
Successfully started the domain : domain1
domain  Location: /opt/glassfish4/glassfish/domains/domain1
Log File: /opt/glassfish4/glassfish/domains/domain1/logs/server.log
Admin Port: 4848
Command start-domain executed successfully.
=> Modifying password of admin to random in Glassfish
spawn asadmin --user admin change-admin-password
Enter the admin password>
Enter the new admin password>
Enter the new admin password again>
Command change-admin-password executed successfully.
=> Enabling secure admin login
spawn asadmin enable-secure-admin
Enter admin user name>  admin
Enter admin password for user "admin">
You must restart all running servers for the change in secure admin to take effect.
Command enable-secure-admin executed successfully.
=> Done!
========================================================================
You can now connect to this Glassfish server using:

     admin:fCZNVP80JiyI

Please remember to change the above password as soon as possible!
========================================================================
=> Restarting Glassfish server
Waiting for the domain to stop .
Command stop-domain executed successfully.
=> Starting and running Glassfish server
=> Debug mode is set to: false