WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-users

Re: [Xen-users] How To Pin domU VCPU To Specific CPU During InstanceCrea

Hi Adrian,

FYI: if you can try the latest xen-unstable, it already supports what you are asking for. With a list-valued cpus option, each VCPU is pinned to the corresponding entry:

# cat /etc/xen/vm1 | grep cpu
vcpus = 2
cpus = ["0", "1"]
# xm create vm1
Using config file "/etc/xen/vm1".
Started domain vm1
# cat /etc/xen/vm2 | grep cpu
vcpus = 2
cpus = ["1", "0"]
# xm create vm2
Using config file "/etc/xen/vm2".
Started domain vm2
# xm vcpu-list vm1 vm2
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
vm1                                  1     0     0   -b-      18.4 0
vm1                                  1     1     1   -b-      17.6 1
vm2                                  2     0     1   -b-      18.6 1
vm2                                  2     1     0   -b-      17.4 0

Best regards,
 Kan
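As an aside for readers of the archive: a minimal Python sketch (hypothetical helper names, not part of xm; the parsing assumes the `xm vcpu-list` column layout shown above) that checks whether every VCPU in such a listing is pinned to a single CPU:

```python
def parse_vcpu_list(text):
    """Parse `xm vcpu-list` style output into (name, vcpu, cpu, affinity) tuples."""
    rows = []
    for line in text.strip().splitlines()[1:]:  # skip the header line
        parts = line.split()
        # Columns: Name  ID  VCPU  CPU  State  Time(s)  CPU Affinity
        rows.append((parts[0], int(parts[2]), parts[3], parts[-1]))
    return rows

def fully_pinned(rows):
    """True if every VCPU's affinity is a single CPU number.

    A range like '1-2' or an 'any cpu' affinity fails isdigit(), as intended."""
    return all(affinity.isdigit() for _, _, _, affinity in rows)

sample = """\
Name  ID  VCPU  CPU State  Time(s) CPU Affinity
vm1    1     0    0   -b-     18.4 0
vm1    1     1    1   -b-     17.6 1
vm2    2     0    1   -b-     18.6 1
vm2    2     1    0   -b-     17.4 0
"""

print(fully_pinned(parse_vcpu_list(sample)))  # True: each VCPU pinned to one CPU
```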

Tue, 08 Jul 2008 15:58:12 +0100, Adrian Turcu wrote:

>No problem. I'll give it some more time here and then follow-up with the 
>devel list.
>
>Thank you,
>Adrian
>
>Todd Deshane wrote:
>> 
>> 
>> On Tue, Jul 8, 2008 at 10:47 AM, Adrian Turcu <adriant@xxxxxxxxxx> wrote:
>> 
>>     Thanks for the quick reply Todd, but I guess my problem is not to
>>     exclude certain CPUs from being used by the guests,
>>     but to pin down VCPUs to specific CPUs when using a list.
>>     Take this one for example on my config:
>> 
>>     ### shared-smq6
>>     cpus = "1,2"
>>     vcpus = 2
>> 
>>     That means I use a circular list of CPU 1 and CPU 2, and 2 VCPUs
>>     which can pick any CPU from the list.
>>     This is true as per output of "xm vcpu-list shared-smq6" command:
>> 
>>     Name                              ID  VCPU   CPU State   Time(s) CPU Affinity
>>     shared-smq6                        5     0     1   -b-   21713.0 1-2
>>     shared-smq6                        5     1     1   -b-   31214.3 1-2
>> 
>>     What I would like is to be able to say directly in the config file,
>>     e.g. "use CPU 1 for VCPU 0 and CPU 2 for VCPU 1".
>>     At the moment I can do that only by using the "xm vcpu-pin" command.
>> 
>>     If that is already in those threads, I cannot see it, to be honest.
>>     Could you just send the kind of config you envisage using ^ ?
>> 
>> 
>> I actually don't have a lot of personal experience with vcpu pinning.
>> 
>> That thread I gave you was the first time I saw the syntax for it.
>> 
>> Any thoughts or experiences from others?
>> 
>> If after a day or two on the users list there is no response or
>> solution, feel free to post a fresh message to xen-devel with all the
>> details of what you have tried, what works, etc.
>> 
>> If it was me, I would try to read through the source code to find the
>> answer. I can't commit to helping you with that today due to time
>> constraints.
>> 
>> Good luck.
>> 
>> Best Regards,
>> Todd
>>  
>> 
>> 
>>     Thank you,
>>     Adrian
>> 
>> 
>>     Todd Deshane wrote:
>>     >
>>     >
>>     > On Tue, Jul 8, 2008 at 6:54 AM, Adrian Turcu <adriant@xxxxxxxxxx> wrote:
>>     >
>>     >     Hi all
>>     >
>>     >     I was browsing the archives to find a solution to my "problem"
>>     >     but with no luck.
>>     >     Here is the scenario:
>>     >
>>     >     Host:
>>     >     Hardware: Dell PE 1950, 4 x dual core CPU, 16GB RAM
>>     >     OS: FC8, kernel 2.6.21-2950.fc8xen
>>     >     Xen version: 3.1.0-rc7-2950.fc8
>>     >
>>     >     Guests:
>>     >     OS: FC8, kernel 2.6.21-2950.fc8xen
>>     >
>>     >     I want to be able during guest instance creation to pin down
>>     >     each of the VCPUs to specific CPU cores.
>>     >     I can do that after the instance is up by using the "xm vcpu-pin"
>>     >     command, but I would love to be able to do it straight from the
>>     >     config file.
>>     >
>>     >
>>     >
>>     > I would suggest this thread:
>>     >
>>     > http://markmail.org/search/?q=xen-devel+ian+pratt+cpu+pin+syntax#query:xen-devel%20ian%20pratt%20cpu%20pin%20syntax+page:1+mid:2vlhnty3zemednba+state:results
>>     >
>>     > Take a look at the syntax with the ^
>>     >
>>     > Hope that helps,
>>     > Todd
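For readers of the archive: the `^` syntax Todd refers to negates an entry in the `cpus` string (per the xmdomain.cfg documentation). A hypothetical fragment, not taken from the linked thread, would be:

```
# VCPUs may run on CPUs 0-3, excluding CPU 1
cpus = "0-3,^1"
vcpus = 2
```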
>>     >
>>     >
>>     >
>>     >
>>     >     two config files:
>>     >
>>     >     ### shared-db4
>>     >     kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
>>     >     ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
>>     >     name = "shared-db4"
>>     >     memory = 8192
>>     >     cpus = "4,5"
>>     >     vcpus = 2
>>     >     vif = [ 'mac=00:16:3E:13:02:01, bridge=br162', 'mac=00:16:3E:13:04:01, bridge=br164' ]
>>     >     disk = [
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r',
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-0-part1,hdb1,w',
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-1-part1,hdc1,w',
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130001-lun-2-part1,hdd1,w'
>>     >     ]
>>     >     root = "/dev/hda1 ro"
>>     >     extra = "3 selinux=0 enforcing=0"
>>     >     on_poweroff = 'destroy'
>>     >     on_reboot   = 'restart'
>>     >     on_crash    = 'restart'
>>     >
>>     >
>>     >
>>     >     ### shared-smq6
>>     >     kernel = "/boot/vmlinuz-2.6.21-2950.fc8xen"
>>     >     ramdisk = "/boot/initrd-2.6.21-2950.fc8xen-domU.img"
>>     >     name = "shared-smq6"
>>     >     memory = 2560
>>     >     cpus = "1,2"
>>     >     vcpus = 2
>>     >     vif = [ 'mac=00:16:3E:13:03:03, bridge=br163' ]
>>     >     disk = [
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-root-lun-0-part1,hda1,r',
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-0-part1,hdb1,w',
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-1-part1,hdc1,w',
>>     >     'phy:disk/by-path/ip-nas01-681:3260-iscsi-iqn.2008-02.com.newbay.celerra.domu-00163e130003-lun-2-part1,hdd1,w'
>>     >     ]
>>     >     root = "/dev/hda1 ro"
>>     >     extra = "3 selinux=0 enforcing=0"
>>     >     on_poweroff = 'destroy'
>>     >     on_reboot   = 'restart'
>>     >     on_crash    = 'restart'
>>     >
>>     >
>>     >     "xm vcpu-list" output:
>>     >     Name                              ID  VCPU   CPU State  
>>     Time(s) CPU
>>     >     Affinity
>>     >     Domain-0                           0     0     0   r--
>>      118567.5 any cpu
>>     >     Domain-0                           0     1     -   --p      
>>     2.9 any cpu
>>     >     Domain-0                           0     2     -   --p    
>>      30.4 any cpu
>>     >     Domain-0                           0     3     -   --p      
>>     2.2 any cpu
>>     >     Domain-0                           0     4     -   --p      
>>     3.2 any cpu
>>     >     Domain-0                           0     5     -   --p      
>>     2.0 any cpu
>>     >     Domain-0                           0     6     -   --p      
>>     2.0 any cpu
>>     >     Domain-0                           0     7     -   --p      
>>     3.8 any cpu
>>     >     shared-db4                         6     0     4   r--  446383.
>> 3 4
>>     >     shared-db4                         6     1     5   -b-   89830.
>> 3 5
>>     >     shared-smq4                        2     0     6   -b-  
>>     53710.6 6-7
>>     >     shared-smq4                        2     1     6   -b-  
>>     87263.8 6-7
>>     >     shared-smq6                        5     0     1   -b-  
>>     21681.7 1-2
>>     >     shared-smq6                        5     1     1   -b-  
>>     31198.6 1-2
>>     >
>>     >     shared-db4 was altered after instance creation by using
>>     >     "xm vcpu-pin shared-db4 0 4 ; xm vcpu-pin shared-db4 1 5";
>>     >     the rest of the guests are as they were created using the
>>     >     "xm create <config file>" command or automatically started at
>>     >     host reboot (/etc/xen/auto folder).
>>     >
>>     >     Don't know if this has an impact or not, but I am using the sedf
>>     >     scheduler and I have a cron job which sets weight=1 for all newly
>>     >     created instances:
>>     >     #!/bin/bash
>>     >
>>     >     # change weight to 1
>>     >     /usr/sbin/xm sched-sedf | grep -v Name | tr -s ' ' | cut -d\  -f7,1 | while read a b ; do if [ $b -eq 0 ] ; then /usr/sbin/xm sched-sedf $a -w1 ; fi ; done
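The pipeline above can be sketched in Python for readability. This is a hypothetical rewrite, not an existing tool; it assumes, as the shell one-liner implies, that `xm sched-sedf` prints the domain name in column 1 and the weight in column 7 (note that `cut` emits selected fields in their original order, so `read a b` receives name then weight):

```python
import subprocess

def domains_with_zero_weight(listing):
    """Return names of domains whose weight column (field 7) is 0."""
    names = []
    for line in listing.strip().splitlines():
        if line.startswith("Name"):
            continue  # skip the header row, as `grep -v Name` did
        parts = line.split()
        if len(parts) >= 7 and parts[6] == "0":
            names.append(parts[0])
    return names

def reset_weights():
    """Set weight=1 for every domain currently at weight 0 (shells out to xm)."""
    listing = subprocess.run(["/usr/sbin/xm", "sched-sedf"],
                             capture_output=True, text=True).stdout
    for name in domains_with_zero_weight(listing):
        subprocess.run(["/usr/sbin/xm", "sched-sedf", name, "-w1"])
```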
>>     >
>>     >
>>     >     The reason:
>>     >
>>     >     I can see a high percentage in the "CPU steal" column in the
>>     >     guest domains when the systems are under heavy CPU pressure.
>>     >     Changing the CPU affinity on each VCPU seems to keep "CPU steal"
>>     >     in the guests at almost 0 during similar system loads.
>>     >
>>     >     I also came across this old article (maybe still valid):
>>     >
>>     >     http://virt.kernelnewbies.org/ParavirtBenefits
>>     >
>>     >     which in particular states:
>>     >
>>     >     "The time spent waiting for a physical CPU is never billed
>>     against a
>>     >     process,
>>     >     allowing for accurate performance measurement even when there
>>     is CPU
>>     >     time contention between *multiple virtual machines*.
>>     >
>>     >     The amount of time the virtual machine slowed down due to such 
>> CPU
>>     >     time contention is split out as so-called "steal time"
>>     >     in /proc/stat and properly displayed in tools like vmstat(1),
>>     top(1)
>>     >     and sar(1)."
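Steal time can also be inspected programmatically. A small sketch (hypothetical helper; it relies on the standard /proc/stat layout, where the eighth value after the `cpu` label is steal time in clock ticks):

```python
def steal_ticks(stat_text):
    """Return aggregate steal time (clock ticks) from /proc/stat contents."""
    for line in stat_text.splitlines():
        if line.startswith("cpu "):  # aggregate line, not cpu0/cpu1/...
            fields = line.split()[1:]
            # user nice system idle iowait irq softirq steal guest guest_nice
            return int(fields[7]) if len(fields) > 7 else 0
    raise ValueError("no aggregate cpu line in input")

# On a live guest one could do:
# with open("/proc/stat") as f:
#     print(steal_ticks(f.read()))
```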
>>     >
>>     >     Is this because the CPU affinity is shared with Domain-0?
>>     >     Maybe I am mixing things up here; nevertheless, I'd like to be
>>     >     able to pin each VCPU to a physical CPU core (if that makes sense).
>>     >
>>     >
>>     >     Thank you in advance,
>>     >     Adrian
>>     >
>>     >
>>     >     _______________________________________________
>>     >     Xen-users mailing list
>>     >     Xen-users@xxxxxxxxxxxxxxxxxxx
>>     >     http://lists.xensource.com/xen-users
>>     >
>>     >
>>     >
>>     >
>>     > --
>>     > Todd Deshane
>>     > http://todddeshane.net
>>     > check out our book: http://runningxen.com
>> 
>> 
>> 
>> 
>> 
>
>
>
>

