Hello Dustin,
Thanks a lot for the detailed explanation. It has indeed clarified my understanding of VCPUs/CPUs.
What I have understood, in a nutshell (please correct me if it's wrong), is that "what matters is the physical CPU core and the CPU affinity a VCPU is using, rather than just the VCPU number; thus all the domains could have the same VCPU number, say 0, but as long as they are pinned to a particular core, they are restricted to only that explicit core".
It's a valid argument that, to improve resource/CPU utilization, one should not worry too much about which core is being used by which domU. This is particularly important in situations where performance is the key criterion. But in my project, and for LHC grid computing, it is a policy decision that each grid job will be allowed to use only one core per CPU (not for performance reasons but rather for resource-accounting reasons). In a non-virtualized environment this is handled by the batch system configuration, but if the job is executed on a virtual machine, which is what I am researching, then the question of core utilization for a VM arises. That is how I stumbled upon this issue of tinkering with vcpu-set/vcpu-pin.
To achieve this, I first modified /etc/xen/xend-config and restricted dom0 to use only one CPU. Now my dom0 is using only one CPU, while all of its other VCPUs are in the --p (paused) state with no CPU allocated to them. Then I launched my VM with the modified config file, which had vcpus=1, cpus="0". xm vcpu-list shows the following:
[root@~]# xm vcpu-list
Name                              ID  VCPUs   CPU State   Time(s) CPU Affinity
CernVM                             1      0     0   -b-      11.3 0
Domain-0                           0      0     3   r--     111.5 any cpu
Domain-0                           0      1     -   --p      12.1 any cpu
Domain-0                           0      2     -   --p       5.3 any cpu
Domain-0                           0      3     -   --p       2.7 any cpu
I had arrived at the same state earlier by using vcpu-pin/vcpu-set, but following the above process (as you advised too) is much simpler and cleaner. Interestingly, I observed that before launching the domU, dom0 was using CPU 2, and later on it switched to CPU 3. But I guess that's OK, as it's not using CPU 0.
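For reference, the runtime commands that should be equivalent to the config-file approach above (reconstructed from memory, so the exact sequence may have differed) were something like:

```
# shrink dom0 to a single active VCPU (the rest go into --p state)
xm vcpu-set Domain-0 1
# pin the domU's single VCPU 0 to physical core 0 (matches vcpus=1, cpus="0")
xm vcpu-pin CernVM 0 0
```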
So the VCPU/CPU question is sorted out; what about the scheduling of these CPUs, using either sedf or the credit scheduler? Once a domU is restricted to one core, I would like to further optimize its performance by modifying its weight via the credit scheduler, as the application to be run in the domU is memory/CPU intensive.
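Concretely, what I had in mind (domain name and weight here are just example values) is something along these lines:

```
# double the domU's weight relative to the default of 256 (example values)
xm sched-credit -d CernVM -w 512
# inspect the current weight/cap for the domain
xm sched-credit -d CernVM
```

As I understand it, the weight only matters when domains compete for the same physical core, so with each domain pinned to its own core it may have little effect; please correct me if that is wrong.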
Thanks,
Omer
On Mon, Oct 6, 2008 at 2:51 PM, Dustin Henning
<Dustin.Henning@xxxxxxxxxxx> wrote:
Hi,
I have a dual-core SMP machine (4 cores in total). I have been trying to restrict the vcpus/cpus for my domU to one core/one VCPU, but it has not fully worked. There are two commands, "xm vcpu-set" and "xm vcpu-pin", and by using them I have observed that the sequence in which they are used plays a role. E.g., I have the following state in the beginning:
[root@lxb ~]# xm vcpu-list
Name                              ID  VCPUs   CPU State   Time(s) CPU Affinity
Domain-0                           0      0     3   r--    5593.4 any cpu
Domain-0                           0      1     1   -b-   15361.9 any cpu
Domain-0                           0      2     0   -b-   10137.5 any cpu
Domain-0                           0      3     -   --p      78.9 any cpu
test_lxb                          20      0     2   -b-   21169.0 any cpu
What I want to achieve is that my domU (test_lxb) uses one VCPU pinned to one CPU. In the above state, both my domU and dom0 are using VCPU 0 (which is running on either CPU 3 or CPU 2). After a few runs of "vcpu-set" and "vcpu-pin", I reach the following state, where dom0 is pinned to CPU 3 and the domU (test_lxb) is pinned to CPU 2:
[root@lxb ~]# xm vcpu-list
Name                              ID  VCPUs   CPU State   Time(s) CPU Affinity
Domain-0                           0      0     3   r--    5600.4 3
Domain-0                           0      1     3   -b-   15372.5 3
Domain-0                           0      2     3   -b-   10140.0 3
Domain-0                           0      3     -   --p      78.9 3
test_lxb                          20      0     2   -b-   21169.5 2
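(Roughly, the commands I used to get here, if I remember the order correctly, were:)

```
# pin each of dom0's four VCPUs to core 3
xm vcpu-pin Domain-0 0 3
xm vcpu-pin Domain-0 1 3
xm vcpu-pin Domain-0 2 3
xm vcpu-pin Domain-0 3 3
# pin the domU's VCPU 0 to core 2
xm vcpu-pin test_lxb 0 2
```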
But the domU is still using VCPU 0, which is also in use by dom0; now I would like to restrict VCPU 0 to CPU 2 for the domU only... I am wondering how to cover this last mile?
Any ideas? Thanks for your help in advance!
Regards
--
Omer
-------------------------------------------------------
CERN -- European Organization for Nuclear
Research, IT Department, CH-1211,
Geneva 23, Switzerland
You have misinterpreted the meaning of VCPU numbers. VCPU 0 is the first virtual CPU for any domain, VCPU 1 is the second virtual CPU for any domain, et cetera. Additional single-VCPU domUs will have a VCPU 0 as well. Each VCPU 0 is actually a separate VCPU; each one is identified as CPU 0 to a different domain, and the VCPU number just tells you what the domU sees it as (minus the V). CPU indicates which physical CPU/core a VCPU is currently using, and CPU Affinity indicates which ones it is allowed to use. Furthermore, for performance reasons, if you want dom0 to use only one CPU/core, you should assign it only one VCPU (which will be 0), so for what you are trying to do, you probably ultimately want output more like this:
Name                              ID  VCPUs   CPU State   Time(s) CPU Affinity
Domain-0                           0      0     0   r--    5600.4 0
test_lxb                           1      0     1   -b-   21169.5 1
test_abc                           2      0     2   -b-   21169.5 2
test_def                           3      0     3   -b-   21169.5 3
Obviously, state and time will be variable. Additionally, which core/CPU is used for which domain shouldn't matter much.

Regarding getting to this state, the number of VCPUs dom0 has initially (and which CPUs/cores they use) is configurable (probably /etc/xen/xend-config). The same is true for domUs. That said, see the example configs in /etc/xen for more info on how to do this, but you should be able to make each domU start up on the CPU/core you want it to use, and then you won't really need to use vcpu-set or vcpu-pin at all.

Finally, if I don't bring it up, someone else probably will: the idea behind virtualization is to make better use of available processing power. With that in mind, your domUs may not each need their own full CPU/core. (For instance, I have a quad-core machine with four HVMs that have one VCPU each, where each uses a separate core, and my dom0 has four VCPUs, where each also uses a separate core; even this isn't by any means fully utilizing the hardware, but I am more concerned with maintaining optimal performance of my HVMs.)

Good luck with your project,
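As a sketch, config entries along these lines should produce that layout (paths and exact syntax may differ between Xen versions, so check the example configs in /etc/xen):

```
# /etc/xen/xend-config.sxp -- limit dom0 to one VCPU
(dom0-cpus 1)

# /etc/xen/test_lxb (and similarly test_abc, test_def) -- one VCPU, pinned to its own core
vcpus = 1
cpus  = "1"    # use "2" for test_abc, "3" for test_def
```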
Dustin
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
--
Omer
-------------------------------------------------------
CERN -- European Organization for Nuclear
Research, IT Department, CH-1211,
Geneva 23, Switzerland
Phone: +41 (0) 22 767 2224
Fax: +41 (0) 22 766 8683
E-mail : Omer.Khalid@xxxxxxx
Homepage:
http://cern.ch/Omer.Khalid