xen-users

[Xen-users] How To Pin domU VCPU To Specific CPU During Instance Creation

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] How To Pin domU VCPU To Specific CPU During Instance Creation
From: Adrian Turcu <adriant@xxxxxxxxxx>
Date: Tue, 08 Jul 2008 11:54:22 +0100
Delivery-date: Tue, 08 Jul 2008 03:54:59 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Organization: Newbay Software Ltd
Reply-to: adriant@xxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.12 (X11/20070719)
Hi all

I was browsing the archives for a solution to my "problem", but with no luck.
Here is the scenario:

Host:
Hardware: Dell PE 1950, 4 x dual-core CPUs, 16GB RAM
OS: FC8, kernel 2.6.21-2950.fc8xen
Xen version: 3.1.0-rc7-2950.fc8

Guests:
OS: FC8, kernel 2.6.21-2950.fc8xen

I want to be able, during guest instance creation, to pin each of the VCPUs to
a specific CPU core.
I can do that after the instance is up by using the "xm vcpu-pin" command, but I
would love to be able to do it straight from the config file.
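
For reference, this is roughly what I run by hand today once a guest is up (the
domain name and core numbers below are just the ones from my own setup):

# pin VCPU 0 of shared-db4 to physical core 4 and VCPU 1 to core 5
xm vcpu-pin shared-db4 0 4
xm vcpu-pin shared-db4 1 5
# check the resulting affinity
xm vcpu-list shared-db4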

Here are two of the config files:

### shared-db4
kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
name = "shared-db4"
memory = 8192
cpus = "4,5"
vcpus = 2
vif = [ 'mac=00:16:3E:13:02:01, bridge=br162',
        'mac=00:16:3E:13:04:01, bridge=br164' ]
disk = [ 'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r',
         'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-0-part1,hdb1,w',
         'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-1-part1,hdc1,w',
         'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-2-part1,hdd1,w' ]
root = "/dev/hda1 ro"
extra = "3 selinux=0 enforcing=0"
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'



### shared-smq6
kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
name = "shared-smq6"
memory = 2560
cpus = "1,2"
vcpus = 2
vif = [ 'mac=00:16:3E:13:03:03, bridge=br163' ]
disk = [ 'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r',
         'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-0-part1,hdb1,w',
         'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-1-part1,hdc1,w',
         'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-2-part1,hdd1,w' ]
root = "/dev/hda1 ro"
extra = "3 selinux=0 enforcing=0"
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'restart'


"xm vcpu-list" output:
Name                              ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                           0     0     0   r--  118567.5 any cpu
Domain-0                           0     1     -   --p       2.9 any cpu
Domain-0                           0     2     -   --p      30.4 any cpu
Domain-0                           0     3     -   --p       2.2 any cpu
Domain-0                           0     4     -   --p       3.2 any cpu
Domain-0                           0     5     -   --p       2.0 any cpu
Domain-0                           0     6     -   --p       2.0 any cpu
Domain-0                           0     7     -   --p       3.8 any cpu
shared-db4                         6     0     4   r--  446383.3 4
shared-db4                         6     1     5   -b-   89830.3 5
shared-smq4                        2     0     6   -b-   53710.6 6-7
shared-smq4                        2     1     6   -b-   87263.8 6-7
shared-smq6                        5     0     1   -b-   21681.7 1-2
shared-smq6                        5     1     1   -b-   31198.6 1-2

shared-db4 was altered after instance creation by running "xm vcpu-pin shared-db4
0 4 ; xm vcpu-pin shared-db4 1 5".
The rest of the guests are as they were created with the "xm create <config file>"
command or automatically started at host reboot (from the /etc/xen/auto folder).

I don't know if this has an impact or not, but I am using the sedf scheduler and I
have a cron job which sets weight=1 for all newly created instances:
#!/bin/bash

# change weight to 1 for any domain that still has weight 0
# (in "xm sched-sedf" output, field 1 is the name and field 7 is the weight)
/usr/sbin/xm sched-sedf | grep -v Name | tr -s ' ' | cut -d' ' -f1,7 | \
while read name weight ; do
    if [ "$weight" -eq 0 ] ; then /usr/sbin/xm sched-sedf "$name" -w1 ; fi
done
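
By hand, the same thing for one domain would be along these lines (the -w flag is
the weight, as in the script above):

# list current sedf parameters for all domains
/usr/sbin/xm sched-sedf
# set the weight for shared-db4 to 1
/usr/sbin/xm sched-sedf shared-db4 -w1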


The reason:

I can see in the guest domains a large percentage of time spent in the "CPU steal"
column when the systems are under heavy CPU pressure.
Pinning the CPU affinity of each VCPU seems to keep "CPU steal" in the guests at
almost 0 under similar system loads.

I also came across this old article (maybe still valid):

http://virt.kernelnewbies.org/ParavirtBenefits

which in particular states:

"The time spent waiting for a physical CPU is never billed against a process,
allowing for accurate performance measurement even when there is CPU time 
contention between *multiple virtual machines*.

The amount of time the virtual machine slowed down due to such CPU time 
contention is split out as so-called "steal time"
in /proc/stat and properly displayed in tools like vmstat(1), top(1) and 
sar(1)."

Is this because the CPU affinity is shared with Domain-0?
Maybe I am mixing things up here; nevertheless, I'd like to be able to pin each
VCPU to a physical CPU core (if that makes sense).


Thank you in advance,
Adrian


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users