My final target was to build a two-node oVirt 3.3.2 cluster running virtual machines on replicated GlusterFS 3.4.1 volumes backed by XFS-formatted partitions. Choosing the IPv4 firewall with iptables for tuning the cluster environment
and synchronization is my personal preference. I also now know that PostgreSQL requires sufficient shared memory allocation, just like Informix or Oracle (I was an Informix DBA at Verizon for about 5 years; it was a nice time).
oVirt is an open source alternative to VMware vSphere, and provides an awesome KVM management interface for multi-node virtualization.
A clean install of oVirt 3.3.2 was performed as follows:
1. Created the ovirtmgmt bridge under /etc/sysconfig/network-scripts
[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.1.137
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=83.221.202.254
NM_CONTROLLED="no"
In particular (my box), the physical interface attached to the bridge:
[root@ovirt1 network-scripts]# cat ifcfg-enp2s0
BOOTPROTO=none
TYPE="Ethernet"
ONBOOT="yes"
NAME="enp2s0"
BRIDGE="ovirtmgmt"
HWADDR=00:22:15:63:e4:e2
2. Fixed bug with NFS Server
https://bugzilla.redhat.com/show_bug.cgi?id=970595
3. Set up IPv4 firewall with iptables
4. Disabled NetworkManager and enabled network service
5. To be able to perform the current 3.3.2 install on F19, set kernel.shmmax per
http://postgresql.1045698.n5.nabble.com/How-to-install-latest-stable-postgresql-on-Debian-td5005417.html
# sysctl -w kernel.shmmax=419430400
kernel.shmmax = 419430400
# sysctl -n kernel.shmmax
419430400
Otherwise, setup fails at the Misc Configuration stage: systemctl status postgresql.service reports a server crash during setup.
This appears to be a known issue per the release notes ( http://www.ovirt.org/OVirt_3.3.2_release_notes ):
On Fedora 19 with recent versions of PostgreSQL it may be necessary to manually change kernel.shmmax settings (BZ 1039616)
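The sysctl -w change above is lost on reboot. A minimal sketch of making it persistent via a drop-in file in /etc/sysctl.d (the file name is my own choice, the value matches the one used above):

```shell
# Persist the shared-memory limit so PostgreSQL survives reboots
# (same 400 MB value as set interactively above).
cat > /etc/sysctl.d/10-shmmax.conf <<'EOF'
kernel.shmmax = 419430400
EOF

# Apply the file now and verify the running value
sysctl -p /etc/sysctl.d/10-shmmax.conf
sysctl -n kernel.shmmax
```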
Runtime shared memory mapping:
[root@ovirt1 ~]# systemctl list-units | grep postgres
postgresql.service loaded active running PostgreSQL database server
[root@ovirt1 ~]# ipcs -a
------ Message Queues --------
key msqid owner perms used-bytes messages
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 0 root 644 80 2
0x00000000 32769 root 644 16384 2
0x00000000 65538 root 644 280 2
0x00000000 163843 boris 600 4194304 2 dest
0x0052e2c1 360452 postgres 600 43753472 8
0x00000000 294917 boris 600 2097152 2 dest
0x0112e4a1 393222 root 600 1000 11
0x00000000 425991 boris 600 393216 2 dest
0x00000000 557065 boris 600 1048576 2 dest
------ Semaphore Arrays --------
key semid owner perms nsems
0x000000a7 65536 root 600 1
0x0052e2c1 458753 postgres 600 17
0x0052e2c2 491522 postgres 600 17
0x0052e2c3 524291 postgres 600 17
0x0052e2c4 557060 postgres 600 17
0x0052e2c5 589829 postgres 600 17
0x0052e2c6 622598 postgres 600 17
0x0052e2c7 655367 postgres 600 17
0x0052e2c8 688136 postgres 600 17
0x0052e2c9 720905 postgres 600 17
0x0052e2ca 753674 postgres 600 17
After creating the replicated Gluster volume ovirt-data02 via the Web Admin UI, I manually ran:
gluster volume set ovirt-data02 auth.allow 192.168.1.* ;
gluster volume set ovirt-data02 group virt ;
gluster volume set ovirt-data02 cluster.quorum-type auto ;
gluster volume set ovirt-data02 performance.cache-size 1GB ;
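The loop below is just a compact equivalent of the four commands above, followed by a check that the options were recorded; note that "group virt" expands into a set of virtualization-friendly settings shipped with Gluster:

```shell
# Apply the same volume options in one pass
VOL=ovirt-data02
while read -r opt val; do
    gluster volume set "$VOL" "$opt" "$val"
done <<'EOF'
auth.allow 192.168.1.*
group virt
cluster.quorum-type auto
performance.cache-size 1GB
EOF

# Verify the options now show up on the volume
gluster volume info "$VOL" | grep -E 'auth.allow|quorum-type|cache-size'
```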
At the time of writing, apache-sshd is 0.9.0-3 ( https://bugzilla.redhat.com/show_bug.cgi?id=1021273 ).
Adding a new host works fine; /etc/sysconfig/iptables on the master server just needs:
-A INPUT -p tcp -m multiport --dports 24007:24108 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 38465:38485 -j ACCEPT
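After editing the rule file, the firewall has to be reloaded for the rules to take effect; a minimal sketch (assuming iptables.service manages the firewall, not firewalld, as in this setup):

```shell
# Reload /etc/sysconfig/iptables
systemctl restart iptables.service

# Confirm the Gluster and portmapper ports are now open
iptables -L INPUT -n | grep -E '24007|38465|111'
```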
Personally, I experienced one issue during the second host's deployment: a "service vdsmd restart" on the second host was required to let the system bring it up at the end of installation. Both installs behaved exactly the same:
[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:40:40 MSK; 50s ago
Process: 2896 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 3166 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3166 /usr/bin/python /usr/share/vdsm/vdsm
Dec 24 15:40:41 hv02.localdomain python[3192]: detected unhandled Python exception in '/usr/bin/vdsm-tool'
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: [427B blob data]
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: vdsm vds WARNING Unable to load the json rpc server module. Ple...led.
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 make_client_response()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 3
[root@hv02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart vdsmd.service
[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:41:42 MSK; 2s ago
Process: 3355 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 3358 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 3418 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3418 /usr/bin/python /usr/share/vdsm/vdsm
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: vdsm: Running test_conflicting_conf
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: SUCCESS: ssl configured to true. No conflicts
Dec 24 15:41:42 hv02.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 15:41:43 hv02.localdomain vdsm[3418]: vdsm vds WARNING Unable to load the json rpc server module. Ple...led.
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 make_client_response()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 3
Moreover, if the same report comes up during the core install on the first server, while setup is waiting for the host to become VDSM operational, the install hangs for a while and finally fails to bring up the master server. The workaround is the same. Once again, this is my personal experience: it is a random error during the core "all in one" install.
A successful install looks like:
[root@hv01 ~]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131224125431.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== PACKAGES ==--
[ INFO ] Checking for product updates...
[ INFO ] No product updates found
--== ALL IN ONE CONFIGURATION ==--
Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:
--== NETWORK CONFIGURATION ==--
Host fully qualified DNS name of this server [hv01.localdomain]:
[WARNING] Failed to resolve hv01.localdomain using DNS, it can be resolved only locally
iptables was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:
--== DATABASE CONFIGURATION ==--
Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== OVIRT ENGINE CONFIGURATION ==--
Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:
--== PKI CONFIGURATION ==--
Organization name for certificate [localdomain]:
--== APACHE CONFIGURATION ==--
Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== SYSTEM CONFIGURATION ==--
Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available
--== CONFIGURATION PREVIEW ==--
Database name : engine
Database secured connection : False
Database host : localhost
Database user name : engine
Database host name validation : False
Datbase port : 5432
NFS setup : True
PKI organization : localdomain
NFS mount point : /var/lib/exports/iso
Application mode : both
Firewall manager : iptables
Configure WebSocket Proxy : True
Host FQDN : hv01.localdomain
Datacenter storage type : nfs
Configure local database : True
Set application as default page : True
Configure Apache SSL : True
Configure VDSM on this host : True
Local storage domain directory : /var/lib/images
Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Initializing PostgreSQL
[ INFO ] Creating PostgreSQL database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating database schema
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
--== SUMMARY ==--
[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: C9:58:51:74:73:1E:DB:DC:05:8E:82:65:42:98:70:12
Internal CA ED:E6:6B:E9:F8:80:11:B2:52:C9:3E:93:4C:41:6A:44:4C:F8:94:B1
Web access is enabled at:
http://hv01.localdomain:80/ovirt-engine
https://hv01.localdomain:443/ovirt-engine
Please use the user "admin" and password specified in order to login into oVirt Engine
--== END OF SUMMARY ==--
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Waiting for VDSM host to become operational. This may take several minutes...
[ INFO ] The VDSM Host is now operational
[ INFO ] Restarting nfs services
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20131224125906-setup.conf'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131224125431.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
While waiting for the master server to become operational, "service vdsmd status" should report:
[root@hv01 ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 13:18:14 MSK; 14s ago
Process: 13200 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 13203 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 13262 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─13262 /usr/bin/python /usr/share/vdsm/vdsm
Dec 24 13:18:14 hv01.localdomain vdsmd_init_common.sh[13203]: SUCCESS: ssl configured to true. No conflicts
Dec 24 13:18:14 hv01.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 client step 2
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 parse_server_challenge()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 ask_user_info()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 client step 2
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 ask_user_info()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 make_client_response()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 client step 3
Dec 24 13:18:15 hv01.localdomain vdsm[13262]: vdsm vds WARNING Unable to load the json rpc server module. Pl...led.
[root@ovirt1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT icmp -- anywhere anywhere icmp any
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:postgres
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
ACCEPT tcp -- anywhere anywhere state NEW tcp dpts:xprtld:6166
ACCEPT tcp -- anywhere anywhere state NEW tcp dpts:49152:49216
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:synchronet-db
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:sunrpc
ACCEPT udp -- anywhere anywhere state NEW udp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:pftp
ACCEPT udp -- anywhere anywhere state NEW udp dpt:pftp
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:rquotad
ACCEPT udp -- anywhere anywhere state NEW udp dpt:rquotad
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:892
ACCEPT udp -- anywhere anywhere state NEW udp dpt:892
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:nfs
ACCEPT udp -- anywhere anywhere state NEW udp dpt:filenet-rpc
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:32803
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
ACCEPT tcp -- anywhere anywhere multiport dports 24007:24108
ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc
ACCEPT udp -- anywhere anywhere udp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere multiport dports 38465:38485
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@ovirt1 ~]# ssh ovirt2
Last login: Sat Dec 21 23:17:05 2013 from ovirt1.localdomain
[root@ovirt2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp dpt:54321
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT udp -- anywhere anywhere udp dpt:snmp
ACCEPT tcp -- anywhere anywhere tcp dpt:16514
ACCEPT tcp -- anywhere anywhere multiport dports xprtld:6166
ACCEPT tcp -- anywhere anywhere multiport dports 49152:49216
ACCEPT tcp -- anywhere anywhere tcp dpt:24007
ACCEPT tcp -- anywhere anywhere tcp dpt:webcache
ACCEPT udp -- anywhere anywhere udp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere tcp dpt:38465
ACCEPT tcp -- anywhere anywhere tcp dpt:38466
ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere tcp dpt:38467
ACCEPT tcp -- anywhere anywhere tcp dpt:nfs
ACCEPT tcp -- anywhere anywhere tcp dpt:38469
ACCEPT tcp -- anywhere anywhere tcp dpt:39543
ACCEPT tcp -- anywhere anywhere tcp dpt:55863
ACCEPT tcp -- anywhere anywhere tcp dpt:38468
ACCEPT udp -- anywhere anywhere udp dpt:963
ACCEPT tcp -- anywhere anywhere tcp dpt:965
ACCEPT tcp -- anywhere anywhere tcp dpt:ctdb
ACCEPT tcp -- anywhere anywhere tcp dpt:netbios-ssn
ACCEPT tcp -- anywhere anywhere tcp dpt:microsoft-ds
ACCEPT tcp -- anywhere anywhere tcp dpts:24007:24108
ACCEPT tcp -- anywhere anywhere tcp dpts:49152:49251
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@ovirt1 ~]# vgcreate vg_virt /dev/sda3
[root@ovirt1 ~]# lvcreate -L 91000M -n lv_gluster vg_virt /dev/sda3
Logical volume "lv_gluster" created
[root@ovirt1 ~]# lvscan
ACTIVE '/dev/fedora00/root' [170.90 GiB] inherit
ACTIVE '/dev/fedora00/swap' [7.89 GiB] inherit
ACTIVE '/dev/vg_virt/lv_gluster' [88.87 GiB] inherit
[root@ovirt1 ~]# mkfs.xfs -f -i size=512 /dev/mapper/vg_virt-lv_gluster
meta-data=/dev/mapper/vg_virt-lv_gluster isize=512 agcount=16, agsize=1456000 blks
= sectsz=4096 attr=2, projid32bit=0
data = bsize=4096 blocks=23296000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=11375, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ovirt1 ~]# mkdir /data1
[root@ovirt1 ~]# chown -R 36:36 /data1
[root@ovirt1 ~]# echo "/dev/mapper/vg_virt-lv_gluster /data1 xfs defaults 1 2" >> /etc/fstab
[root@ovirt1 ~]# mount -a
The last line of the df -h output below corresponds to the ovirt-data05 replicated Gluster volume, based on the XFS-formatted LVM partition
/dev/mapper/vg_virt-lv_gluster mounted via /etc/fstab (set up the same way on both peers).
[root@ovirt1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora00-root 169G 35G 125G 22% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 152K 3.9G 1% /dev/shm
tmpfs 3.9G 988K 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 80K 3.9G 1% /tmp
/dev/sda1 477M 87M 361M 20% /boot
ovirt1.localdomain:ovirt-data02 169G 35G 125G 22% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02
192.168.1.137:/var/lib/exports/export 169G 35G 125G 22% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:/var/lib/exports/iso 169G 35G 125G 22% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
/dev/mapper/vg_virt-lv_gluster 89G 36M 89G 1% /data1
ovirt1.localdomain:ovirt-data05 89G 36M 89G 1% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
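For completeness, with /data1 mounted on both peers, the ovirt-data05 volume on top of it would be created roughly as follows (I created mine via the Web Admin UI; the brick sub-directory name here is my own assumption):

```shell
# On each peer: a brick directory on the XFS mount, owned by vdsm:kvm
# (uid/gid 36) so VDSM can write to it
mkdir /data1/brick
chown -R 36:36 /data1/brick

# On one peer: create and start the 2-way replicated volume
gluster volume create ovirt-data05 replica 2 \
    ovirt1.localdomain:/data1/brick \
    ovirt2.localdomain:/data1/brick
gluster volume start ovirt-data05
```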
Fedora 20 KVM installation on XFS Gluster domain
and synchronization is my personal preference. Now I also know that postgres requires enough shared memory allocation like Informix or Oracle ( i was Informix DBA@Verizon for about 5 years , it was nice time ).
oVirt is an open source alternative to VMware vSphere, and provides an awesome KVM management interface for multi-node virtualization.
oVirt 3.3.2 clean install was performed as follows :-
1. Created ovirtmgmt bridge under /etc/sysconfig/network-interfaces
[root@ovirt1 network-scripts]# cat ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.1.137
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=83.221.202.254
NM_CONTROLLED=”no”
DEVICE=ovirtmgmt
TYPE=Bridge
ONBOOT=yes
DELAY=0
BOOTPROTO=static
IPADDR=192.168.1.137
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=83.221.202.254
NM_CONTROLLED=”no”
In particular (my box) :
[root@ovirt1 network-scripts]# cat ifcfg-enp2s0
BOOTPROTO=none
TYPE=”Ethernet”
ONBOOT=”yes”
NAME=”enp2s0″
BRIDGE=”ovirtmgmt”
HWADDR=00:22:15:63:e4:e2
BOOTPROTO=none
TYPE=”Ethernet”
ONBOOT=”yes”
NAME=”enp2s0″
BRIDGE=”ovirtmgmt”
HWADDR=00:22:15:63:e4:e2
2. Fixed bug with NFS Server
https://bugzilla.redhat.com/show_bug.cgi?id=970595
3. Set up IPv4 firewall with iptables
4. Disabled NetworkManager and enabled network service
5.To be able perform current 3.3.2 install on F19 , set up per
http://postgresql.1045698.n5.nabble.com/How-to-install-latest-stable-postgresql-on-Debian-td5005417.html
# sysctl -w kernel.shmmax=419430400
kernel.shmmax = 419430400
# sysctl -n kernel.shmmax
419430400
Otherwise, setup fails to perform Misc Configuration. Systemctl status postgresql.service reports a servers crash during setup.
Appears to be known issue http://www.ovirt.org/OVirt_3.3.2_release_notes
On Fedora 19 with recent versions of PostgreSQL it may be necessary to manually change kernel.shmmax settings (BZ 1039616)
Runtime shared memory mapping :-
[root@ovirt1 ~]# systemctl list-units | grep postgres
postgresql.service loaded active running PostgreSQL database server
[root@ovirt1 ~]# ipcs -a
------ Message Queues --------
key msqid owner perms used-bytes messages
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 0 root 644 80 2
0x00000000 32769 root 644 16384 2
0x00000000 65538 root 644 280 2
0x00000000 163843 boris 600 4194304 2 dest
0x0052e2c1 360452 postgres 600 43753472 8
0x00000000 294917 boris 600 2097152 2 dest
0x0112e4a1 393222 root 600 1000 11
0x00000000 425991 boris 600 393216 2 dest
0x00000000 557065 boris 600 1048576 2 dest
------ Semaphore Arrays --------
key semid owner perms nsems
0x000000a7 65536 root 600 1
0x0052e2c1 458753 postgres 600 17
0x0052e2c2 491522 postgres 600 17
0x0052e2c3 524291 postgres 600 17
0x0052e2c4 557060 postgres 600 17
0x0052e2c5 589829 postgres 600 17
0x0052e2c6 622598 postgres 600 17
0x0052e2c7 655367 postgres 600 17
0x0052e2c8 688136 postgres 600 17
0x0052e2c9 720905 postgres 600 17
0x0052e2ca 753674 postgres 600 17
After creating replication gluster volume ovirt-data02 via Web Admin I ran manually :
gluster volume set ovirt-data02 auth.allow 192.168.1.* ;
gluster volume set ovirt-data02 group virt ;
gluster volume set ovirt-data02 cluster.quorum-type auto ;
gluster volume set ovirt-data02 performance.cache-size 1GB ;
Currently apache-sshd is 0.9.0-3 . https://bugzilla.redhat.com/show_bug.cgi?id=1021273
Adding new host works fine , just /etc/sysconfig/iptables on master server should
have :
-A INPUT -p tcp -m multiport --dport 24007:24108 -j ACCEPT
-A INPUT -p tcp --dport 111 -j ACCEPT
-A INPUT -p udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m multiport --dport 38465:38485 -j ACCEPT
Personally i was experiencing one issue during second host deployment, which required service vdsmd restart on second host to allow system bring it up at the end of installation. Two installs behaved absolutely similar
[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:40:40 MSK; 50s ago
Process: 2896 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 3166 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3166 /usr/bin/python /usr/share/vdsm/vdsm
Dec 24 15:40:41 hv02.localdomain python[3192]: detected unhandled Python exception in '/usr/bin/vdsm-tool'
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: [427B blob data]
Dec 24 15:40:41 hv02.localdomain vdsm[3166]: vdsm vds WARNING Unable to load the json rpc server module. Ple...led.
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 2
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 ask_user_info()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 make_client_response()
Dec 24 15:40:41 hv02.localdomain python[3166]: DIGEST-MD5 client step 3
[root@hv02 ~]# service vdsmd restart
Redirecting to /bin/systemctl restart vdsmd.service
[root@hv02 ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 15:41:42 MSK; 2s ago
Process: 3355 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 3358 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 3418 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─3418 /usr/bin/python /usr/share/vdsm/vdsm
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: vdsm: Running test_conflicting_conf
Dec 24 15:41:42 hv02.localdomain vdsmd_init_common.sh[3358]: SUCCESS: ssl configured to true. No conflicts
Dec 24 15:41:42 hv02.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 15:41:43 hv02.localdomain vdsm[3418]: vdsm vds WARNING Unable to load the json rpc server module. Ple...led.
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 parse_server_challenge()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 2
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 ask_user_info()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 make_client_response()
Dec 24 15:41:43 hv02.localdomain python[3418]: DIGEST-MD5 client step 3
Moreover if during core install on first server same report comes up during
awaiting host to become VDSM operational install will hang for a while and
finally won't bring up master server. Workaround is the same. Once again it's my personal experience. It's random error during core "all in one" install.
Successfull install looks like :
[root@hv01 ~]# engine-setup
[ INFO ] Stage: Initializing
[ INFO ] Stage: Environment setup
Configuration files: ['/etc/ovirt-engine-setup.conf.d/10-packaging-aio.conf', '/etc/ovirt-engine-setup.conf.d/10-packaging.conf']
Log file: /var/log/ovirt-engine/setup/ovirt-engine-setup-20131224125431.log
Version: otopi-1.1.2 (otopi-1.1.2-1.fc19)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ INFO ] Stage: Environment customization
--== PACKAGES ==--
[ INFO ] Checking for product updates...
[ INFO ] No product updates found
--== ALL IN ONE CONFIGURATION ==--
Configure VDSM on this host? (Yes, No) [No]: Yes
Local storage domain path [/var/lib/images]:
Local storage domain name [local_storage]:
--== NETWORK CONFIGURATION ==--
Host fully qualified DNS name of this server [hv01.localdomain]:
[WARNING] Failed to resolve hv01.localdomain using DNS, it can be resolved only locally
iptables was detected on your computer. Do you wish Setup to configure it? (yes, no) [yes]:
--== DATABASE CONFIGURATION ==--
Where is the database located? (Local, Remote) [Local]:
Setup can configure the local postgresql server automatically for the engine to run. This may conflict with existing applications.
Would you like Setup to automatically configure postgresql, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== OVIRT ENGINE CONFIGURATION ==--
Engine admin password:
Confirm engine admin password:
Application mode (Both, Virt, Gluster) [Both]:
Default storage type: (NFS, FC, ISCSI, POSIXFS) [NFS]:
--== PKI CONFIGURATION ==--
Organization name for certificate [localdomain]:
--== APACHE CONFIGURATION ==--
Setup can configure the default page of the web server to present the application home page. This may conflict with existing applications.
Do you wish to set the application as the default page of the web server? (Yes, No) [Yes]:
Setup can configure apache to use SSL using a certificate issued from the internal CA.
Do you wish Setup to configure that, or prefer to perform that manually? (Automatic, Manual) [Automatic]:
--== SYSTEM CONFIGURATION ==--
Configure an NFS share on this server to be used as an ISO Domain? (Yes, No) [Yes]:
Local ISO domain path [/var/lib/exports/iso]:
Local ISO domain name [ISO_DOMAIN]:
Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:
--== END OF CONFIGURATION ==--
[ INFO ] Stage: Setup validation
[WARNING] Less than 16384MB of memory is available
--== CONFIGURATION PREVIEW ==--
Database name : engine
Database secured connection : False
Database host : localhost
Database user name : engine
Database host name validation : False
Datbase port : 5432
NFS setup : True
PKI organization : localdomain
NFS mount point : /var/lib/exports/iso
Application mode : both
Firewall manager : iptables
Configure WebSocket Proxy : True
Host FQDN : hv01.localdomain
Datacenter storage type : nfs
Configure local database : True
Set application as default page : True
Configure Apache SSL : True
Configure VDSM on this host : True
Local storage domain directory : /var/lib/images
Please confirm installation settings (OK, Cancel) [OK]:
[ INFO ] Stage: Transaction setup
[ INFO ] Stopping engine service
[ INFO ] Stopping websocket-proxy service
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Initializing PostgreSQL
[ INFO ] Creating PostgreSQL database
[ INFO ] Configuring PostgreSQL
[ INFO ] Creating database schema
[ INFO ] Creating CA
[ INFO ] Configuring WebSocket Proxy
[ INFO ] Generating post install configuration file '/etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf'
[ INFO ] Stage: Transaction commit
[ INFO ] Stage: Closing up
--== SUMMARY ==--
[WARNING] Less than 16384MB of memory is available
An ISO NFS share has been created on this host.
If IP based access restrictions are required, edit:
entry /var/lib/exports/iso in /etc/exports.d/ovirt-engine-iso-domain.exports
SSH fingerprint: C9:58:51:74:73:1E:DB:DC:05:8E:82:65:42:98:70:12
Internal CA ED:E6:6B:E9:F8:80:11:B2:52:C9:3E:93:4C:41:6A:44:4C:F8:94:B1
Web access is enabled at:
http://hv01.localdomain:80/ovirt-engine
https://hv01.localdomain:443/ovirt-engine
Please use the user "admin" and password specified in order to login into oVirt Engine
--== END OF SUMMARY ==--
[ INFO ] Starting engine service
[ INFO ] Restarting httpd
[ INFO ] Waiting for VDSM host to become operational. This may take several minutes...
[ INFO ] The VDSM Host is now operational
[ INFO ] Restarting nfs services
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20131224125906-setup.conf'
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20131224125431.log
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ INFO ] Execution of setup completed successfully
Service vdsmd should report during awaiting master server to become operational:
[root@hv01 ~]# service vdsmd status
Redirecting to /bin/systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
Active: active (running) since Tue 2013-12-24 13:18:14 MSK; 14s ago
Process: 13200 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS)
Process: 13203 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh --pre-start (code=exited, status=0/SUCCESS)
Main PID: 13262 (vdsm)
CGroup: name=systemd:/system/vdsmd.service
└─13262 /usr/bin/python /usr/share/vdsm/vdsm
Dec 24 13:18:14 hv01.localdomain vdsmd_init_common.sh[13203]: SUCCESS: ssl configured to true. No conflicts
Dec 24 13:18:14 hv01.localdomain systemd[1]: Started Virtual Desktop Server Manager.
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 client step 2
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 parse_server_challenge()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 ask_user_info()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 client step 2
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 ask_user_info()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 make_client_response()
Dec 24 13:18:15 hv01.localdomain python[13262]: DIGEST-MD5 client step 3
Dec 24 13:18:15 hv01.localdomain vdsm[13262]: vdsm vds WARNING Unable to load the json rpc server module. Pl...led.
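Beyond systemctl, it is worth checking that vdsm actually answers requests, not just that the process is running. A minimal sketch using the vdsClient tool that ships with vdsm 3.x (run on the host itself; -s matches the "ssl configured to true" line above):

```shell
# Query vdsm over SSL for the host capability dictionary.
# A healthy vdsm prints CPU, network and storage capabilities;
# a hung one times out or refuses the connection.
vdsClient -s 0 getVdsCaps
```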
[root@ovirt1 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT icmp -- anywhere anywhere icmp any
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:postgres
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
ACCEPT tcp -- anywhere anywhere state NEW tcp dpts:xprtld:6166
ACCEPT tcp -- anywhere anywhere state NEW tcp dpts:49152:49216
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:synchronet-db
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:sunrpc
ACCEPT udp -- anywhere anywhere state NEW udp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:pftp
ACCEPT udp -- anywhere anywhere state NEW udp dpt:pftp
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:rquotad
ACCEPT udp -- anywhere anywhere state NEW udp dpt:rquotad
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:892
ACCEPT udp -- anywhere anywhere state NEW udp dpt:892
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:nfs
ACCEPT udp -- anywhere anywhere state NEW udp dpt:filenet-rpc
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:32803
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:http
ACCEPT tcp -- anywhere anywhere multiport dports 24007:24108
ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc
ACCEPT udp -- anywhere anywhere udp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere multiport dports 38465:38485
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
[root@ovirt2 ~]# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere tcp dpt:54321
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
ACCEPT udp -- anywhere anywhere udp dpt:snmp
ACCEPT tcp -- anywhere anywhere tcp dpt:16514
ACCEPT tcp -- anywhere anywhere multiport dports xprtld:6166
ACCEPT tcp -- anywhere anywhere multiport dports 49152:49216
ACCEPT tcp -- anywhere anywhere tcp dpt:24007
ACCEPT tcp -- anywhere anywhere tcp dpt:webcache
ACCEPT udp -- anywhere anywhere udp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere tcp dpt:38465
ACCEPT tcp -- anywhere anywhere tcp dpt:38466
ACCEPT tcp -- anywhere anywhere tcp dpt:sunrpc
ACCEPT tcp -- anywhere anywhere tcp dpt:38467
ACCEPT tcp -- anywhere anywhere tcp dpt:nfs
ACCEPT tcp -- anywhere anywhere tcp dpt:38469
ACCEPT tcp -- anywhere anywhere tcp dpt:39543
ACCEPT tcp -- anywhere anywhere tcp dpt:55863
ACCEPT tcp -- anywhere anywhere tcp dpt:38468
ACCEPT udp -- anywhere anywhere udp dpt:963
ACCEPT tcp -- anywhere anywhere tcp dpt:965
ACCEPT tcp -- anywhere anywhere tcp dpt:ctdb
ACCEPT tcp -- anywhere anywhere tcp dpt:netbios-ssn
ACCEPT tcp -- anywhere anywhere tcp dpt:microsoft-ds
ACCEPT tcp -- anywhere anywhere tcp dpts:24007:24108
ACCEPT tcp -- anywhere anywhere tcp dpts:49152:49251
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
REJECT all -- anywhere anywhere PHYSDEV match ! --physdev-is-bridged reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
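Since NetworkManager is disabled and the hosts rely on the classic network and iptables services rather than firewalld, the rulesets above only survive a reboot if they are written back to disk. A minimal sketch, assuming the iptables-services package is installed:

```shell
# Dump the live ruleset into the file iptables.service restores at boot
iptables-save > /etc/sysconfig/iptables
# Ensure the restoring service itself is enabled
systemctl enable iptables.service
```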
Creating XFS-based replicated Gluster storage
[root@ovirt1 ~]# pvcreate /dev/sda3
[root@ovirt1 ~]# vgcreate vg_virt /dev/sda3
[root@ovirt1 ~]# lvcreate -L 91000M -n lv_gluster vg_virt /dev/sda3
Logical volume "lv_gluster" created
[root@ovirt1 ~]# lvscan
ACTIVE '/dev/fedora00/root' [170.90 GiB] inherit
ACTIVE '/dev/fedora00/swap' [7.89 GiB] inherit
ACTIVE '/dev/vg_virt/lv_gluster' [88.87 GiB] inherit
[root@ovirt1 ~]# mkfs.xfs -f -i size=512 /dev/mapper/vg_virt-lv_gluster
meta-data=/dev/mapper/vg_virt-lv_gluster isize=512 agcount=16, agsize=1456000 blks
= sectsz=4096 attr=2, projid32bit=0
data = bsize=4096 blocks=23296000, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=11375, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ovirt1 ~]# mkdir /data1
[root@ovirt1 ~]# chown -R 36:36 /data1
[root@ovirt1 ~]# echo "/dev/mapper/vg_virt-lv_gluster /data1 xfs defaults 1 2" >> /etc/fstab
[root@ovirt1 ~]# mount -a
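The -i size=512 option passed to mkfs.xfs above enlarges the XFS inodes so that GlusterFS extended attributes fit inline, which is the commonly recommended setting for Gluster bricks. After mounting, this can be verified (a sketch, using the /data1 mount point from above):

```shell
# Confirm the mounted filesystem really carries 512-byte inodes
xfs_info /data1 | grep isize
```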
Creating a replicated gluster volume based on the XFS LVM partition via the Web Admin Console
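For reference, the Web Admin Console step roughly corresponds to the following CLI sketch. The brick path /data1/brick1 is an assumption; the volume name ovirt-data05 and the 36:36 ownership (vdsm:kvm) come from the setup above:

```shell
# Create a 2-way replicated volume across both peers (hypothetical brick paths)
gluster volume create ovirt-data05 replica 2 \
    ovirt1.localdomain:/data1/brick1 ovirt2.localdomain:/data1/brick1
# oVirt storage domains must be owned by vdsm:kvm (uid/gid 36)
gluster volume set ovirt-data05 storage.owner-uid 36
gluster volume set ovirt-data05 storage.owner-gid 36
gluster volume start ovirt-data05
```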
The last line of the df output below corresponds to the ovirt-data05 replicated gluster volume, backed by the XFS-formatted LVM partition /dev/mapper/vg_virt-lv_gluster mounted via /etc/fstab (the layout is similar on both peers):
[root@ovirt1 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora00-root 169G 35G 125G 22% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 152K 3.9G 1% /dev/shm
tmpfs 3.9G 988K 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 80K 3.9G 1% /tmp
/dev/sda1 477M 87M 361M 20% /boot
ovirt1.localdomain:ovirt-data02 169G 35G 125G 22% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data02
192.168.1.137:/var/lib/exports/export 169G 35G 125G 22% /rhev/data-center/mnt/192.168.1.137:_var_lib_exports_export
ovirt1.localdomain:/var/lib/exports/iso 169G 35G 125G 22% /rhev/data-center/mnt/ovirt1.localdomain:_var_lib_exports_iso
/dev/mapper/vg_virt-lv_gluster 89G 36M 89G 1% /data1
ovirt1.localdomain:ovirt-data05 89G 36M 89G 1% /rhev/data-center/mnt/glusterSD/ovirt1.localdomain:ovirt-data05
Fedora 20 KVM installation on XFS Gluster domain