Saturday, August 08, 2015

Once again RDO Kilo Set up for 3 Fedora 22 Nodes Controller+Network+Compute (ML2&OVS&VXLAN) as of 08/10/2015

 After the upgrade to the upstream version of openstack-puppet-modules-2015.1.9, the procedure for installing RDO Kilo on F22 changed significantly. Details follow below.

*********************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*********************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack

******************************************************************************
Action to be undertaken on Controller before deployment:
******************************************************************************
You might be hit by bug https://bugzilla.redhat.com/show_bug.cgi?id=1249482,
which at the time of writing has status "MODIFIED".
As a pre-install step, apply the patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
 /usr/lib/python2.7/site-packages/packstack/puppet/templates.
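A minimal sketch of applying that review to the installed templates (the patch file name here is hypothetical; adjust it and the -p level to the diff you actually download from the review):

# cd /usr/lib/python2.7/site-packages/packstack/puppet/templates
# cp neutron_api.pp neutron_api.pp.orig   # keep a backup of the shipped template
# patch -p0 < /root/0002-Avoid-running-neutron-db-manage-twice.patch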

Another option is to rebuild openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc23.src.rpm on Fedora 22
with the patch 0002-Avoid-running-neutron-db-manage-twice.
Place the patch in SOURCES and update the spec file accordingly.
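A rough sketch of those steps (the patch tag number and %prep line are assumptions; align them with the entries already present in openstack-packstack.spec):

$ rpm -ivh openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc23.src.rpm
$ cp 0002-Avoid-running-neutron-db-manage-twice.patch ~/rpmbuild/SOURCES/
$ cd ~/rpmbuild/SPECS
  # add next to the existing Source/Patch tags:
  #   Patch2: 0002-Avoid-running-neutron-db-manage-twice.patch
  # and in %prep, after %setup:
  #   %patch2 -p1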

$ rpmbuild -bb openstack-packstack.spec
$ cd ../RPMS/noarch
$ dnf install openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm
openstack-packstack-doc-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm
openstack-packstack-puppet-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm

I confirm that the patch above works for RDO Kilo multinode packstack deployment
on Fedora 22; it was merged into the stable Kilo branch on 08/10/2015.
Please, view :- https://review.openstack.org/#/c/209032/

You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
The workaround is in comments 6 and 11.
*******************************************************************************
I also commented out the second line in  /etc/httpd/conf.d/mod_dnssd.conf
*******************************************************************************
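A one-liner that does the same (inspect the file first; the exact content of line 2 differs between httpd builds):

# sed -n '1,5p' /etc/httpd/conf.d/mod_dnssd.conf     # look at the file
# sed -i '2s/^/#/' /etc/httpd/conf.d/mod_dnssd.conf  # prepend '#' to line 2 only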

SELinux is switched to permissive mode on all deployment nodes.
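A minimal sketch of switching SELinux to permissive on each node (the config edit keeps it permissive across reboots):

# setenforce 0
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# getenforce     # should now report Permissive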

# packstack --answer-file=./answer3Node.txt

   Following below is a brief instruction for a three node deployment test Controller&&Network&&Compute across Fedora 22 VMs for RDO Kilo, which was performed on a Fedora 22 host with a QEMU/KVM/Libvirt hypervisor (16 GB RAM, Intel Core i7-4790 Haswell CPU, ASUS Z97-P).
    Three VMs (4 GB RAM, 4 VCPUs) have been set up: the Controller VM with one VNIC (management subnet), the Network Node VM with three VNICs (management, VTEP and external subnets), and the Compute Node VM with two VNICs (management and VTEP subnets).

I avoid using the default libvirt subnet 192.168.122.0/24 for any purpose related
to the VMs serving as RDO Kilo nodes; for some reason it causes network congestion when forwarding packets to the Internet and vice versa.
 

Three Libvirt networks have been created

# cat openstackvms.xml
<network>
   <name>openstackvms</name>
   <uuid>d0e9964a-f91a-40c0-b769-a609aee41bf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr1' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='192.169.142.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='192.169.142.2' end='192.169.142.254' />
     </dhcp>
   </ip>
 </network>

# cat public.xml
<network>
   <name>public</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr2' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='172.24.4.225' netmask='255.255.255.240'>
     <dhcp>
       <range start='172.24.4.226' end='172.24.4.238' />
     </dhcp>
   </ip>
 </network>

# cat vteps.xml
<network>
   <name>vteps</name>
   <uuid>d0e9965b-f92c-40c1-b749-b609aed42cf2</uuid>
   <forward mode='nat'>
     <nat>
       <port start='1024' end='65535'/>
     </nat>
   </forward>
   <bridge name='virbr3' stp='on' delay='0' />
   <mac address='52:54:00:60:f8:6d'/>
   <ip address='10.0.0.1' netmask='255.255.255.0'>
     <dhcp>
       <range start='10.0.0.1' end='10.0.0.254' />
     </dhcp>
   </ip>
 </network>
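
A minimal sketch of defining and starting the three networks from the XML files above. If libvirt rejects a duplicate <uuid>, drop the uuid line and let it generate a new one; likewise the vteps DHCP range may need to start at 10.0.0.2 so it does not include the bridge address:

# virsh net-define openstackvms.xml && virsh net-autostart openstackvms && virsh net-start openstackvms
# virsh net-define public.xml && virsh net-autostart public && virsh net-start public
# virsh net-define vteps.xml && virsh net-autostart vteps && virsh net-start vteps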

# virsh net-list
 Name              State     Autostart   Persistent
----------------------------------------------------
 default           active    yes         yes
 openstackvms      active    yes         yes
 public            active    yes         yes
 vteps             active    yes         yes


*********************************************************************************
1. The first Libvirt subnet, "openstackvms", serves as the management network.
All 3 VMs are attached to this subnet.
**********************************************************************************
2. The second Libvirt subnet, "public", serves to simulate the external network. The Network Node is attached to "public"; later on, the "eth2" interface (which belongs to "public") is converted into an OVS port of br-ex on the Network Node. This Libvirt subnet, via bridge virbr2 (172.24.4.225), provides the VMs running on the Compute Node with access to the Internet, because it matches the external network 172.24.4.224/28 created by the packstack installation.
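
For reference only, the external network provisioned by packstack (CONFIG_PROVISION_DEMO=y with CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28 in the answer file below) is roughly equivalent to creating it by hand as admin; the names used here are assumptions:

# neutron net-create public --router:external=True
# neutron subnet-create public 172.24.4.224/28 --name public_subnet --disable-dhcp \
    --gateway 172.24.4.225 --allocation-pool start=172.24.4.226,end=172.24.4.238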

  


*************************************************
On Hypervisor Host ( Fedora 22)
*************************************************
# iptables -S -t nat 
. . . . . .
-A POSTROUTING -s 172.24.4.224/28 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 172.24.4.224/28 ! -d 172.24.4.224/28 -j MASQUERADE
. . . . . .
***********************************************************************************
3. The third Libvirt subnet, "vteps", serves to simulate the VTEP endpoints. The Network and Compute Node VMs are attached to this subnet.
********************************************************************************


************************************
Answer-file - answer3Node.txt
************************************
[root@ip-192-169-142-127 ~(keystone_admin)]# cat answer3Node.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_DEFAULT_PASSWORD=
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=192.169.142.127
CONFIG_COMPUTE_HOSTS=192.169.142.137
CONFIG_NETWORK_HOSTS=192.169.142.147
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=192.169.142.127
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=192.169.142.127
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=PW_PLACEHOLDER
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=PW_PLACEHOLDER
CONFIG_MARIADB_HOST=192.169.142.127
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=7207ae344ed04957
CONFIG_KEYSTONE_DB_PW=abcae16b785245c3
CONFIG_KEYSTONE_REGION=RegionOne
CONFIG_KEYSTONE_ADMIN_TOKEN=3ad2de159f9649afb0c342ba57e637d9
CONFIG_KEYSTONE_ADMIN_PW=7049f834927e4468
CONFIG_KEYSTONE_DEMO_PW=bf737b785cfa4398
CONFIG_KEYSTONE_TOKEN_FORMAT=UUID
# Two options are available here
CONFIG_KEYSTONE_SERVICE_NAME=httpd
# CONFIG_KEYSTONE_SERVICE_NAME=keystone
CONFIG_GLANCE_DB_PW=41264fc52ffd4fe8
CONFIG_GLANCE_KS_PW=f6a9398960534797
CONFIG_GLANCE_BACKEND=file
CONFIG_CINDER_DB_PW=5ac08c6d09ba4b69
CONFIG_CINDER_KS_PW=c8cb1ecb8c2b4f6f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=10G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_CINDER_NETAPP_LOGIN=
CONFIG_CINDER_NETAPP_PASSWORD=
CONFIG_CINDER_NETAPP_HOSTNAME=
CONFIG_CINDER_NETAPP_SERVER_PORT=80
CONFIG_CINDER_NETAPP_STORAGE_FAMILY=ontap_cluster
CONFIG_CINDER_NETAPP_TRANSPORT_TYPE=http
CONFIG_CINDER_NETAPP_STORAGE_PROTOCOL=nfs
CONFIG_CINDER_NETAPP_SIZE_MULTIPLIER=1.0
CONFIG_CINDER_NETAPP_EXPIRY_THRES_MINUTES=720
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_START=20
CONFIG_CINDER_NETAPP_THRES_AVL_SIZE_PERC_STOP=60
CONFIG_CINDER_NETAPP_NFS_SHARES_CONFIG=
CONFIG_CINDER_NETAPP_VOLUME_LIST=
CONFIG_CINDER_NETAPP_VFILER=
CONFIG_CINDER_NETAPP_VSERVER=
CONFIG_CINDER_NETAPP_CONTROLLER_IPS=
CONFIG_CINDER_NETAPP_SA_PASSWORD=
CONFIG_CINDER_NETAPP_WEBSERVICE_PATH=/devmgr/v2
CONFIG_CINDER_NETAPP_STORAGE_POOLS=
CONFIG_NOVA_DB_PW=1e1b5aeeeaf342a8
CONFIG_NOVA_KS_PW=d9583177a2444f06
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eth1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eth0
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=808e36e154bd4cee
CONFIG_NEUTRON_DB_PW=0e2b927a21b44737
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=a965cd23ed2f4502
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000
CONFIG_NEUTRON_ML2_VXLAN_GROUP=239.1.1.2
CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1001:2000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=8f75bfd461234c30
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=a60aacbedde7429a
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=y
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=44faa4ebc3da4459
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=PW_PLACEHOLDER
CONFIG_HEAT_AUTH_ENC_KEY=fc3fb7fee61e46b0
CONFIG_HEAT_KS_PW=PW_PLACEHOLDER
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=PW_PLACEHOLDER
CONFIG_CEILOMETER_SECRET=19ae0e7430174349
CONFIG_CEILOMETER_KS_PW=337b08d4b3a44753
CONFIG_MONGODB_HOST=192.169.142.127
CONFIG_NAGIOS_PW=02f168ee8edd44e4


**********************************************************************************
Upon packstack completion, create the following files on the Network Node,
designed to match the external network created by the installer
**********************************************************************************

[root@ip-192-169-142-147 network-scripts]# cat ifcfg-br-ex
DEVICE="br-ex"
BOOTPROTO="static"
IPADDR="172.24.4.232"
NETMASK="255.255.255.240"
DNS1="83.221.202.254"
BROADCAST="172.24.4.239"
GATEWAY="172.24.4.225"
NM_CONTROLLED="no"
TYPE="OVSIntPort"
OVS_BRIDGE=br-ex
DEVICETYPE="ovs"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="yes"
IPV6INIT=no


[root@ip-192-169-142-147 network-scripts]# cat ifcfg-eth2
DEVICE="eth2"
HWADDR=00:22:15:63:E4:E2
ONBOOT="yes"
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE=br-ex
NM_CONTROLLED=no
IPV6INIT=no

*************************************************
Next steps to be performed on the Network Node :-
*************************************************
# chkconfig network on
# systemctl stop NetworkManager
# systemctl disable NetworkManager
#reboot
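
After the reboot it is worth checking that eth2 really became an OVS port of br-ex and that br-ex carries the external address and the default route (a quick sanity check, output omitted):

# ovs-vsctl show
# ip addr show br-ex
# ip route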

*************************************************
General three-node RDO Kilo system layout
*************************************************



***********************
 Controller Node
***********************
[root@ip-192-169-142-127 neutron(keystone_admin)]# cat /etc/neutron/plugins/ml2/ml2_conf.ini| grep -v ^# | grep -v ^$
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers =openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =1001:2000
vxlan_group =239.1.1.2
[securitygroup]
enable_security_group = True

   




*********************
Network Node
*********************
[root@ip-192-169-142-147 openvswitch(keystone_admin)]# cat ovs_neutron_plugin.ini | grep -v ^$| grep -v ^#
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.147
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver



********************
Compute Node
*******************
[root@ip-192-169-142-137 openvswitch(keystone_admin)]# cat ovs_neutron_plugin.ini | grep -v ^$| grep -v ^#
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =10.0.0.137
bridge_mappings =physnet1:br-ex
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
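
With all three nodes configured, a quick sanity check from the Controller (as admin) is that the openvswitch, L3, DHCP and metadata agents report alive and the nova services are up:

[root@ip-192-169-142-127 ~(keystone_admin)]# neutron agent-list
[root@ip-192-169-142-127 ~(keystone_admin)]# nova service-list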

   


   For some reason virt-manager doesn't allow a remote connection to the Spice
   session running locally on the F22 virtualization host 192.168.1.95.

   So, from a remote Fedora host, run :-
    
  # ssh -L 5900:127.0.0.1:5900 -N -f -l root 192.168.1.95
  # ssh -L 5901:127.0.0.1:5901 -N -f -l root 192.168.1.95
  # ssh -L 5902:127.0.0.1:5902 -N -f -l root 192.168.1.95

  Then spicy, installed on the remote host, connects as follows :-

   1)  to VM 192.169.142.127
        $ spicy -h localhost -p 5902  
   2)  to VM 192.169.142.147
        $ spicy -h localhost -p 5901
   3) to VM 192.169.142.137
        $ spicy -h localhost -p 5900
   


   Dashboard snapshots

  
  
  


Friday, July 31, 2015

CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22

The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo installed on Fedora 22. After the upgrade to the upstream version
of openstack-puppet-modules-2015.1.9, the procedure for installing RDO Kilo on F22
changed significantly. Details follow below :-

*****************************************************************************************
RDO Kilo set up on Fedora ( openstack-puppet-modules-2015.1.9-4.fc23.noarch)
*****************************************************************************************
# dnf install -y https://rdoproject.org/repos/rdo-release.rpm
# dnf install -y openstack-packstack

Generate the answer file and update it :-
# packstack  --gen-answer-file answer-file-aio.txt
   and set CONFIG_KEYSTONE_SERVICE_NAME=httpd
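
A minimal sketch of that update (plain sed is enough):

# sed -i 's/^CONFIG_KEYSTONE_SERVICE_NAME=.*/CONFIG_KEYSTONE_SERVICE_NAME=httpd/' answer-file-aio.txt
# grep CONFIG_KEYSTONE_SERVICE_NAME answer-file-aio.txt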

****************************************************************************
I also commented out the second line in  /etc/httpd/conf.d/mod_dnssd.conf
****************************************************************************
As a pre-install step, apply the patch https://review.openstack.org/#/c/209032/
to fix neutron_api.pp. The puppet templates are located in
 /usr/lib/python2.7/site-packages/packstack/puppet/templates.

Another option is to rebuild openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc23.src.rpm on Fedora 22
with the patch 0002-Avoid-running-neutron-db-manage-twice.
Place the patch in SOURCES and update the spec file accordingly.

$ rpmbuild -bb openstack-packstack.spec
$ cd ../RPMS/noarch
$ dnf install openstack-packstack-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm
openstack-packstack-doc-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm
openstack-packstack-puppet-2015.1-0.10.dev1608.g6447ff7.fc22.noarch.rpm

You might also be hit by https://bugzilla.redhat.com/show_bug.cgi?id=1234042
The workaround is in comments 6 and 11.
****************
Then run :-
****************

# packstack  --answer-file=./answer-file-aio.txt

If the swift puppet manifest generates an error :-

192.168.1.57_swift.pp:                            [ ERROR ]              
Applying Puppet manifests                         [ ERROR ]

ERROR : Error appeared during Puppet run: 192.168.1.57_swift.pp
Error: Could not get latest version: undefined method `[]' for nil:NilClass


Then run `dnf check-update` and replace the obsoleted packages.
For instance :-

[root@fedora22wks ~]# yum check-update
Yum command has been deprecated, redirecting to '/usr/bin/dnf check-update'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Last metadata expiration check performed 0:21:19 ago on Fri Aug  7 12:25:41 2015.
Obsoleting Packages
python-mysql.x86_64                      1.3.6-4.fc22                              updates      
    MySQL-python.x86_64                  1.3.6-3.fc22                              @System    
  
[root@fedora22wks ~]# yum install python-mysql
Yum command has been deprecated, redirecting to '/usr/bin/dnf install python-mysql'.
See 'man dnf' and 'man yum2dnf' for more information.
To transfer transaction metadata from yum to DNF, run:
'dnf install python-dnf-plugins-extras-migrate && dnf-2 migrate'

Last metadata expiration check performed 0:21:47 ago on Fri Aug  7 12:25:41 2015.
Dependencies resolved.
=================================================================================================
 Package                  Arch               Version                   Repository           Size
=================================================================================================
Installing:
 python-mysql             x86_64             1.3.6-4.fc22              updates              98 k
     replacing  MySQL-python.x86_64 1.3.6-3.fc22

Transaction Summary
=================================================================================================
Install  1 Package

Total download size: 98 k
Installed size: 265 k
Is this ok [y/N]: y
Downloading Packages:
python-mysql-1.3.6-4.fc22.x86_64.rpm                              62 kB/s |  98 kB     00:01   
-------------------------------------------------------------------------------------------------
Total                                                             35 kB/s |  98 kB     00:02    
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Installing  : python-mysql-1.3.6-4.fc22.x86_64                                             1/2
  Obsoleting  : MySQL-python-1.3.6-3.fc22.x86_64                                             2/2
  Verifying   : python-mysql-1.3.6-4.fc22.x86_64                                             1/2
  Verifying   : MySQL-python-1.3.6-3.fc22.x86_64                                             2/2

Installed:
  python-mysql.x86_64 1.3.6-4.fc22                                                              

Complete!
***************************
Rerun packstack.
***************************

The final target is to reproduce the mentioned article on an i7-4790 Haswell CPU box and launch a nova instance with CPU pinning.

 [root@fedora22server ~(keystone_admin)]# uname -a
Linux fedora22server.localdomain 4.1.3-200.fc22.x86_64 #1 SMP Wed Jul 22 19:51:58 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

[root@fedora22server ~(keystone_admin)]# rpm -qa \*qemu\*
qemu-system-x86-2.3.0-6.fc22.x86_64
qemu-img-2.3.0-6.fc22.x86_64
qemu-guest-agent-2.3.0-6.fc22.x86_64
qemu-kvm-2.3.0-6.fc22.x86_64
ipxe-roms-qemu-20150407-1.gitdc795b9f.fc22.noarch
qemu-common-2.3.0-6.fc22.x86_64
libvirt-daemon-driver-qemu-1.2.13.1-2.fc22.x86_64


[root@fedora22server ~(keystone_admin)]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 15991 MB
node 0 free: 4399 MB
node distances:
node   0
  0:  10

[root@fedora22server ~(keystone_admin)]# virsh capabilities
<capabilities>
<host>
    <uuid>00fd5d2c-dad7-dd11-ad7e-7824af431b53</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>Haswell-noTSX</model>
      <vendor>Intel</vendor>
      <topology sockets='1' cores='4' threads='2'/>
      <feature name='invtsc'/>
      <feature name='abm'/>
      <feature name='pdpe1gb'/>
      <feature name='rdrand'/>
      <feature name='f16c'/>
      <feature name='osxsave'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='smx'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='monitor'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
      <feature name='vme'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
      <suspend_hybrid/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
        <uri_transport>rdma</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>16374824</memory>
          <pages unit='KiB' size='4'>4093706</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
            <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
            <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
            <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
            <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
            <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
          </cpus>
        </cell>
      </cells>
    </topology>

On each Compute node on which pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications (a sketch of the resulting fragment follows the restart command below):
  • Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
  • vcpu_pin_set=2,3,6,7
  • Set reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing, the default of 512 MB is used.
  • reserved_host_memory_mb=512
# systemctl restart openstack-nova-compute.service
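
A minimal sketch of the resulting /etc/nova/nova.conf fragment on the Compute node (only the two keys discussed above are shown; both live in the [DEFAULT] section):

[DEFAULT]
. . . . . .
vcpu_pin_set=2,3,6,7
reserved_host_memory_mb=512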

************************************
SCHEDULER CONFIGURATION
************************************
Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service


 At this point, if you create a guest, you may see some changes appear in the XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>

Add the following at the end of the vmlinuz grub2 command line:
isolcpus=2,3,6,7
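
A minimal sketch of making that change persistent with grub2 (the grub.cfg path assumes a BIOS, non-UEFI Fedora 22 install):

# sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 isolcpus=2,3,6,7"/' /etc/default/grub
# grub2-mkconfig -o /boot/grub2/grub.cfg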

***************
REBOOT
***************

[root@fedora22server ~(keystone_admin)]# nova aggregate-create performance
+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

[root@fedora22server ~(keystone_admin)]# nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+

[root@fedora22server ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated

[root@fedora22server ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true

[root@fedora22server ~(keystone_admin)]# hostname
fedora22server.localdomain

[root@fedora22server ~(keystone_admin)]# nova aggregate-add-host 1 fedora22server.localdomain
Host fedora22server.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                        | Metadata      |
+----+-------------+-------------------+------------------------------+---------------+
| 1  | performance | -                 | 'fedora22server.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------------+---------------+

[root@fedora22server ~(keystone_admin)]# . keystonerc_demo
[root@fedora22server ~(keystone_demo)]# glance image-list
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| ID                                   | Name                            | Disk Format | Container Format | Size        | Status |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+
| bf6f5272-ae26-49ae-b0f9-3c4fcba350f6 | CentOS71Image                   | qcow2       | bare             | 1004994560  | active |
| 05ac955e-3503-4bcf-8413-6a1b3c98aefa | cirros                          | qcow2       | bare             | 13200896    | active |
| 7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 | VF22Image                       | qcow2       | bare             | 228599296   | active |
| c695e7fa-a69f-4220-abd8-2269b75af827 | Windows Server 2012 R2 Std Eval | qcow2       | bare             | 17182752768 | active |
+--------------------------------------+---------------------------------+-------------+------------------+-------------+--------+

[root@fedora22server ~(keystone_demo)]#neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| 0daa3a02-c598-4c46-b1ac-368da5542927 | public   | 8303b2f3-2de2-44c2-bd5e-fc0966daec53 192.168.1.0/24 |
| c85a4215-1558-4a95-886d-a2f75500e052 | demo_net | 0cab6cbc-dd80-42c6-8512-74d7b2cbf730 50.0.0.0/24    |
+--------------------------------------+----------+-----------------------------------------------------+

*************************************************************************
At this point attempt to launch F22 Cloud instance with created flavor
m1.small.performance
*************************************************************************

[root@fedora22server ~(keystone_demo)]# nova boot --image  7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52 --key-name oskeydev --flavor  m1.small.performance --nic net-id=c85a4215-1558-4a95-886d-a2f75500e052 vf22-instance

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | XsGr87ZLGX8P                                     |
| config_drive                         |                                                  |
| created                              | 2015-07-31T08:03:49Z                             |
| flavor                               | m1.small.performance (6)                         |
| hostId                               |                                                  |
| id                                   | 4b99f3cf-3126-48f3-9e00-94787f040e43             |
| image                                | VF22Image (7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52) |
| key_name                             | oskeydev                                         |
| metadata                             | {}                                               |
| name                                 | vf22-instance                                    |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 14f736e6952644b584b2006353ca51be                 |
| updated                              | 2015-07-31T08:03:50Z                             |
| user_id                              | 4ece2385b17a4490b6fc5a01ff53350c                 |
+--------------------------------------+--------------------------------------------------+
[root@fedora22server ~(keystone_demo)]#nova list
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| ID                                   | Name          | Status  | Task State | Power State | Networks                          |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
| 93906a61-ec0b-481d-b964-2bb99d095646 | CentOS71RLX   | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.21, 192.168.1.159 |
| ac7e9be5-d2dc-4ec0-b0a1-4096b552e578 | VF22Devpin    | ACTIVE  | -          | Running     | demo_net=50.0.0.22                |
| b93c9526-ded5-4b7a-ae3a-106b34317744 | VF22Devs      | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.19, 192.168.1.157 |
| bef20a1e-3faa-4726-a301-73ca49666fa6 | WinSrv2012    | SHUTOFF | -          | Shutdown    | demo_net=50.0.0.16                |
| 4b99f3cf-3126-48f3-9e00-94787f040e43 | vf22-instance | ACTIVE  | -          | Running     | demo_net=50.0.0.23, 192.168.1.160 |
+--------------------------------------+---------------+---------+------------+-------------+-----------------------------------+
[root@fedora22server ~(keystone_demo)]#virsh list
 Id    Name                           State
----------------------------------------------------
 2     instance-0000000c              running
 3     instance-0000000d              running

Please, see http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
for a detailed explanation of the highlighted blocks, keeping in mind that pinning is done to logical CPU cores (not physical ones, since this is a 4-core CPU with HT enabled). Multiple cells are also absent, due to limitations of the i7-47XX Haswell CPU architecture.

[root@fedora22server ~(keystone_demo)]#virsh dumpxml instance-0000000d > vf22-instance.xml
<domain type='kvm' id='3'>
  <name>instance-0000000d</name>
  <uuid>4b99f3cf-3126-48f3-9e00-94787f040e43</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.fc23"/>
      <nova:name>vf22-instance</nova:name>
      <nova:creationTime>2015-07-31 08:03:54</nova:creationTime>
      <nova:flavor name="m1.small.performance">
        <nova:memory>4096</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>4</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="4ece2385b17a4490b6fc5a01ff53350c">demo</nova:user>
        <nova:project uuid="14f736e6952644b584b2006353ca51be">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="7b2085b8-4fe7-4d32-a5c9-5eadaf8bfc52"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <emulatorpin cpuset='2-3,6-7'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
  </numatune>

  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>Fedora Project</entry>
      <entry name='product'>OpenStack Nova</entry>
      <entry name='version'>2015.1.0-3.fc23</entry>
      <entry name='serial'>f1b336b1-6abf-4180-865a-b6be5670352e</entry>
      <entry name='uuid'>4b99f3cf-3126-48f3-9e00-94787f040e43</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='2'/>
    <numa>
      <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>

  <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/disk'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/lib/nova/instances/_base/6c60a5ed1b3037bbdb2bed198dac944f4c0d09cb'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:4f:25:03'/>
      <source bridge='qbr567b21fe-52'/>
      <target dev='tap567b21fe-52'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <serial type='pty'>
      <source path='/dev/pts/2'/>
      <target port='1'/>
      <alias name='serial1'/>
    </serial>
    <console type='file'>
      <source path='/var/lib/nova/instances/4b99f3cf-3126-48f3-9e00-94787f040e43/console.log'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c359,c706</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c359,c706</imagelabel>
  </seclabel>
</domain>

 

Tuesday, July 28, 2015

CPU Pinning and NUMA Topology on RDO Kilo && Hypervisor Upgrade up to qemu-kvm-ev-2.1.2-23.el7.1 on CentOS 7.1

The posting below follows up http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
on RDO Kilo upgraded via qemu-kvm-ev-2.1.2-23.el7.1 on CentOS 7.1.
The recent CentOS 7.X build qemu-kvm-ev-2.1.2-23.el7.1
enables CPU Pinning and NUMA Topology for RDO Kilo on CentOS 7.1.
The qemu-kvm upgrade is supposed to be done as a post-installation procedure,
i.e. after RDO Kilo deployment on the system. The final target is to reproduce the mentioned article on an i7-4790 Haswell CPU box and launch a nova instance with CPU pinning.

See also CPU Pinning and NUMA Topology on RDO Kilo on Fedora Server 22
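
A minimal sketch of verifying the qemu-kvm-ev upgrade took effect on the Compute host (restarting libvirtd and nova-compute makes sure newly launched guests use the upgraded emulator):

# rpm -qa | grep qemu-kvm
# systemctl restart libvirtd openstack-nova-compute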

[root@Centos71 x86_64]# numactl --hardware
available: 1 nodes (0)
node 0 cpus: 0 1 2 3 4 5 6 7
node 0 size: 16326 MB
node 0 free: 4695 MB
node distances:
node   0
0:  10

# virsh capabilities
   .  .  .  .  .

 <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>16718464</memory>
          <pages unit='KiB' size='4'>4179616</pages>
          <pages unit='KiB' size='2048'>0</pages>
          <distances>
            <sibling id='0' value='10'/>
          </distances>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0,4'/>
            <cpu id='1' socket_id='0' core_id='1' siblings='1,5'/>
            <cpu id='2' socket_id='0' core_id='2' siblings='2,6'/>
            <cpu id='3' socket_id='0' core_id='3' siblings='3,7'/>
            <cpu id='4' socket_id='0' core_id='0' siblings='0,4'/>
            <cpu id='5' socket_id='0' core_id='1' siblings='1,5'/>
            <cpu id='6' socket_id='0' core_id='2' siblings='2,6'/>
            <cpu id='7' socket_id='0' core_id='3' siblings='3,7'/>
          </cpus>
        </cell>
      </cells>
</topology>



On each Compute node on which pinning of virtual machines is to be permitted, open the /etc/nova/nova.conf file and make the following modifications:
  • Set the vcpu_pin_set value to a list or range of logical CPU cores to reserve for virtual machine processes. OpenStack Compute will ensure guest virtual machine instances are pinned to these virtual CPU cores.
  • vcpu_pin_set=2,3,6,7
  • Set reserved_host_memory_mb to reserve RAM for host processes. For the purposes of testing, the default of 512 MB is used.
  • reserved_host_memory_mb=512
# systemctl restart openstack-nova-compute.service

************************************
SCHEDULER CONFIGURATION
************************************
Update /etc/nova/nova.conf

scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,
ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,
NUMATopologyFilter,AggregateInstanceExtraSpecsFilter

# systemctl restart openstack-nova-scheduler.service


 At this point, if you create a guest, you may see some changes appear in the XML, pinning the guest vCPU(s) to the cores listed in vcpu_pin_set:

<vcpu placement='static' cpuset='2-3,6-7'>1</vcpu>

Add the following at the end of the vmlinuz grub2 command line:

isolcpus=2,3,6,7

***************************
REBOOT SYSTEM
***************************

[root@Centos71 ~(keystone_admin)]# nova aggregate-create performance
+----+-------------+-------------------+-------+----------+
| Id | Name        | Availability Zone | Hosts | Metadata |
+----+-------------+-------------------+-------+----------+
| 1  | performance | -                 |       |          |
+----+-------------+-------------------+-------+----------+

[root@Centos71 ~(keystone_admin)]# nova aggregate-set-metadata 1 pinned=true
Metadata has been successfully updated for aggregate 1.
+----+-------------+-------------------+-------+---------------+
| Id | Name        | Availability Zone | Hosts | Metadata      |
+----+-------------+-------------------+-------+---------------+
| 1  | performance | -                 |       | 'pinned=true' |
+----+-------------+-------------------+-------+---------------+


Create a new flavor for performance-intensive instances. Here the m1.small.performance flavor is created, based on the values used in the existing m1.small flavor. The differences in behaviour between the two will be the result of the metadata added to the new flavor.


[root@Centos71 ~(keystone_admin)]# nova flavor-create m1.small.performance 6 4096 20 4
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name                 | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+
| 6  | m1.small.performance | 4096      | 20   | 0         |      | 4     | 1.0         | True      |
+----+----------------------+-----------+------+-----------+------+-------+-------------+-----------+

[root@Centos71 ~(keystone_admin)]# nova flavor-key 6 set hw:cpu_policy=dedicated

[root@Centos71 ~(keystone_admin)]# nova flavor-key 6 set aggregate_instance_extra_specs:pinned=true


[root@Centos71 ~(keystone_admin)]# nova aggregate-add-host 1 Centos71.localdomain

Host Centos71.localdomain has been successfully added for aggregate 1
+----+-------------+-------------------+------------------------+---------------+
| Id | Name        | Availability Zone | Hosts                  | Metadata      |
+----+-------------+-------------------+------------------------+---------------+
| 1  | performance | -                 | 'Centos71.localdomain' | 'pinned=true' |
+----+-------------+-------------------+------------------------+---------------+



[root@Centos71 ~(keystone_admin)]# .   keystonerc_demo

[root@Centos71 ~(keystone_demo)]# glance image-list
+--------------------------------------+-----------------+-------------+------------------+------------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size       | Status |
+--------------------------------------+-----------------+-------------+------------------+------------+--------+
| 4a2d708c-7624-439f-9e7e-6e133062e23a | CentOS71Image   | qcow2       | bare             | 1004994560 | active |
| fae94d4b-e810-46a9-8a8f-94dfb812e098 | cirros          | qcow2       | bare             | 13200896   | active |
| f823f0a0-bcdf-416d-915a-8d7cc0278ed7 | Fedora20image   | qcow2       | bare             | 210829312  | active |
| 2198786d-e77f-47ec-959f-fcf7435d5e78 | Fedora21image   | qcow2       | bare             | 158443520  | active |
| 5f1ca33e-d5cc-43fb-9d88-5a7ef0f75959 | VF22Image       | qcow2       | bare             | 228599296  | active |
+--------------------------------------+-----------------+-------------+------------------+------------+--------+

[root@Centos71 ~(keystone_demo)]#  neutron net-list
+--------------------------------------+----------+-----------------------------------------------------+
| id                                   | name     | subnets                                             |
+--------------------------------------+----------+-----------------------------------------------------+
| ab4bd4f8-22b7-43c3-ac60-c1a917a230d7 | public   | 4cbcf377-3742-4385-8362-c071f499ad9c 192.168.1.0/24 |
| 93e0b4be-7900-4a34-adef-758578f75774 | demo_net | 17a5e4f7-fd1b-45fa-b84f-c3af1378c42c 50.0.0.0/24    |
+--------------------------------------+----------+-----------------------------------------------------+

[root@Centos71 ~(keystone_demo)]# nova boot --image 5f1ca33e-d5cc-43fb-9d88-5a7ef0f75959 --key-name oskeydev --flavor m1.small.performance --nic net-id=93e0b4be-7900-4a34-adef-758578f75774 vf22-instance

+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | trBdDQWd75ck                                     |
| config_drive                         |                                                  |
| created                              | 2015-07-29T07:59:23Z                             |
| flavor                               | m1.small.performance (6)                         |
| hostId                               |                                                  |
| id                                   | d7fda7ca-7124-4c8b-a085-1da784d57348             |
| image                                | VF22Image (5f1ca33e-d5cc-43fb-9d88-5a7ef0f75959) |
| key_name                             | oskeydev                                         |
| metadata                             | {}                                               |
| name                                 | vf22-instance                                    |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | 8c9defac20a74633af4bb4773e45f11e                 |
| updated                              | 2015-07-29T07:59:23Z                             |
| user_id                              | da79d2c66db747eab942bdbe20bb3f44                 |
+--------------------------------------+--------------------------------------------------+

[root@Centos71 ~(keystone_demo)]# nova list
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------------+
| ID                                   | Name          | Status    | Task State | Power State | Networks                          |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------------+
| 455877f2-7070-48a7-bb24-e0702be2fbc5 | CentOS7RSX05  | SUSPENDED | -          | Shutdown    | demo_net=50.0.0.13, 192.168.1.153 |
| 44645495-a158-4b99-b96b-26f8178fa28f | VF22Devs      | ACTIVE    | -          | Running     | demo_net=50.0.0.23, 192.168.1.163 |
| d7fda7ca-7124-4c8b-a085-1da784d57348 | vf22-instance | ACTIVE    | -          | Running     | demo_net=50.0.0.24, 192.168.1.164 |
+--------------------------------------+---------------+-----------+------------+-------------+-----------------------------------+

[root@Centos71 ~(keystone_demo)]# nova show d7fda7ca-7124-4c8b-a085-1da784d57348
+--------------------------------------+----------------------------------------------------------+
| Property                             | Value                                                    |
+--------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                   |
| OS-EXT-AZ:availability_zone          | nova                                                     |
| OS-EXT-STS:power_state               | 1                                                        |
| OS-EXT-STS:task_state                | -                                                        |
| OS-EXT-STS:vm_state                  | active                                                   |
| OS-SRV-USG:launched_at               | 2015-07-29T08:00:12.000000                               |
| OS-SRV-USG:terminated_at             | -                                                        |
| accessIPv4                           |                                                          |
| accessIPv6                           |                                                          |
| config_drive                         |                                                          |
| created                              | 2015-07-29T07:59:23Z                                     |
| demo_net network                     | 50.0.0.24, 192.168.1.164                                 |
| flavor                               | m1.small.performance (6)                                 |
| hostId                               | de84a0c94e3271ef0f4620e113814fa69132fea1be65e7ad33edde7d |
| id                                   | d7fda7ca-7124-4c8b-a085-1da784d57348                     |
| image                                | VF22Image (5f1ca33e-d5cc-43fb-9d88-5a7ef0f75959)         |
| key_name                             | oskeydev                                                 |
| metadata                             | {}                                                       |
| name                                 | vf22-instance                                            |
| os-extended-volumes:volumes_attached | []                                                       |
| progress                             | 0                                                        |
| security_groups                      | default                                                  |
| status                               | ACTIVE                                                   |
| tenant_id                            | 8c9defac20a74633af4bb4773e45f11e                         |
| updated                              | 2015-07-29T08:00:12Z                                     |
| user_id                              | da79d2c66db747eab942bdbe20bb3f44                         |
+--------------------------------------+----------------------------------------------------------+


[root@Centos71 x86_64]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     instance-0000000d              running
 3     instance-0000000e              running

[root@Centos71 x86_64]# virsh dumpxml instance-0000000e > vf22-02.xml

Please, see http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
for a detailed explanation of the highlighted blocks, keeping in mind that pinning is done to logical CPU cores (not physical ones, since this is a 4-core CPU with HT enabled). Multiple cells are also absent, due to limitations of the i7-47XX Haswell CPU architecture.

[root@Centos71 x86_64]# cat  vf22-02.xml

<domain type='kvm' id='3'>
  <name>instance-0000000e</name>
  <uuid>d7fda7ca-7124-4c8b-a085-1da784d57348</uuid>
  <metadata>
    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
      <nova:package version="2015.1.0-3.el7"/>
      <nova:name>vf22-instance</nova:name>
      <nova:creationTime>2015-07-29 08:00:06</nova:creationTime>
      <nova:flavor name="m1.small.performance">
        <nova:memory>4096</nova:memory>
        <nova:disk>20</nova:disk>
        <nova:swap>0</nova:swap>
        <nova:ephemeral>0</nova:ephemeral>
        <nova:vcpus>4</nova:vcpus>
      </nova:flavor>
      <nova:owner>
        <nova:user uuid="da79d2c66db747eab942bdbe20bb3f44">demo</nova:user>
        <nova:project uuid="8c9defac20a74633af4bb4773e45f11e">demo</nova:project>
      </nova:owner>
      <nova:root type="image" uuid="5f1ca33e-d5cc-43fb-9d88-5a7ef0f75959"/>
    </nova:instance>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <shares>4096</shares>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='6'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='7'/>
    <emulatorpin cpuset='2-3,6-7'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='0'/>
    <memnode cellid='0' mode='strict' nodeset='0'/>
  </numatune>

  <resource>
    <partition>/machine</partition>
  </resource>
    <sysinfo type='smbios'>
      <system>
        <entry name='manufacturer'>Fedora Project</entry>
        <entry name='product'>OpenStack Nova</entry>
        <entry name='version'>2015.1.0-3.el7</entry>
        <entry name='serial'>b3fae7c3-10bd-455b-88b7-95e586342203</entry>
        <entry name='uuid'>d7fda7ca-7124-4c8b-a085-1da784d57348</entry>
      </system>
    </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.1.0'>hvm</type>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
    <topology sockets='2' cores='1' threads='2'/>
    <numa>
      <cell id='0' cpus='0-3' memory='4194304'/>
    </numa>
  </cpu>

 <clock offset='utc'>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/nova/instances/d7fda7ca-7124-4c8b-a085-1da784d57348/disk'/>
      <backingStore type='file' index='1'>
        <format type='raw'/>
        <source file='/var/lib/nova/instances/_base/99f1a80be14fc2563a2af39e944ee1c305ed8c34'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='fa:16:3e:94:77:fd'/>
      <source bridge='qbr2d20d535-5c'/>
      <target dev='tap2d20d535-5c'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='file'>
      <source path='/var/lib/nova/instances/d7fda7ca-7124-4c8b-a085-1da784d57348/console.log'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target port='1'/>
      <alias name='serial1'/>
    </serial>
    <console type='file'>
      <source path='/var/lib/nova/instances/d7fda7ca-7124-4c8b-a085-1da784d57348/console.log'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0 ( only  Compute )' keymap='en-us'>
      <listen type='address' address='0.0.0.0 ( only  Compute )'/>
    </graphics>
    <sound model='ich6'>
      <alias name='sound0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </sound>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      <stats period='10'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c670,c918</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c670,c918</imagelabel>
  </seclabel>
</domain>