
RE: [Xen-devel] Odd CPU Scheduling Behavior



This behavior of 2+ vcpus sharing the same pcpu is good only if there is data 
sharing between the vcpus, since the cache performance would then be better. But 
in the case of apps (like SPECjbb, for instance) where there is absolutely no 
sharing, it may be beneficial to have the vcpus run in parallel on different 
pcpus.
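
For the no-sharing case, explicit pinning should force the parallel layout 
today, i.e. one vcpu per pcpu via "xm vcpu-pin". A rough sketch of what I mean, 
run as Python from dom0 (the domain name and cpu numbers are made-up examples):

#!/usr/bin/env python
# Pin each vcpu of a guest to its own pcpu, so the two vcpus
# cannot end up stacked on one core. The domain name and cpu
# numbers are only examples -- adjust them to the real layout.
import subprocess

DOMAIN = "vm1"                  # example guest name
PLACEMENT = {0: "2", 1: "3"}    # vcpu 0 -> pcpu 2, vcpu 1 -> pcpu 3

for vcpu in sorted(PLACEMENT):
    # equivalent to running: xm vcpu-pin vm1 0 2   (and so on)
    subprocess.call(["xm", "vcpu-pin", DOMAIN, str(vcpu), PLACEMENT[vcpu]])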


On another note, is there a way to modify the scheduler such that, with no 
affinity assigned, I can get the vcpu/VMid and pcpu combination every few 
seconds during a run?
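
As a stopgap that needs no scheduler change, polling "xm vcpu-list" from dom0 
should give the same information, since it reports the pcpu each vcpu was last 
running on. A rough sketch (the column layout -- Name, ID, VCPU, CPU, State, 
Time(s), CPU Affinity -- is assumed from the 3.0.x tools, so treat it as 
untested):

#!/usr/bin/env python
# Sample "xm vcpu-list" every few seconds and log which pcpu each
# vcpu was last seen on. The column layout is assumed, see above.
import subprocess
import time

INTERVAL = 5  # seconds between samples

while True:
    pipe = subprocess.Popen(["xm", "vcpu-list"], stdout=subprocess.PIPE)
    output = pipe.communicate()[0].decode("ascii", "replace")
    stamp = time.strftime("%H:%M:%S")
    for line in output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if len(fields) < 4:                # skip malformed lines
            continue
        name, domid, vcpu, pcpu = fields[:4]
        print("%s dom=%s id=%s vcpu=%s pcpu=%s"
              % (stamp, name, domid, vcpu, pcpu))
    time.sleep(INTERVAL)

The obvious limitation is that a poll can miss migrations between samples, 
which is why something inside the scheduler would still be nicer.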

Thanks
- Padma

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Petersson, Mats
Sent: Thursday, March 29, 2007 9:04 AM
To: Carb, Brian A; Emmanuel Ackaouy
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] Odd CPU Scheduling Behavior

 

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of 
> Carb, Brian A
> Sent: 29 March 2007 16:58
> To: Emmanuel Ackaouy
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-devel] Odd CPU Scheduling Behavior
> 
> Emmanuel,
> 
> Yes - both vcpus are progressing, but the load gets pushed to 
> one cpu. If I run top in interactive mode in each vm while 
> the test is running, and monitor cpu usage (set delay to 1 
> and show separate cpu states), each of the vm's cpus is 
> getting equally loaded on average.
> 
> There are a few more oddities: 
> 
> First, I see this behavior almost all the time when I run the 
> test. However, occasionally, I do not see this behavior at 
> all, and the load stays spread out on both cpus for the 
> duration of the test (2 minutes).

Whilst I have no idea as to the answer to the original question, I would like 
to point out that with two CPU-bound threads per guest and four VCPUs competing 
for two cores, having both VCPUs of the same domain share the same core is 
probably better for the cache-hit rate than spreading each domain's VCPUs 
evenly over the cores, and if I had a say in the design, I would aim to keep it 
that way. [This may not be trivial to achieve, but if what you're saying is 
correct, then it's a GOOD THING(tm)].


--
Mats
> 
> Second, if I boot my ES7000/one to use only 4 CPUs (2 
> dual-core sockets), the load always stays evenly distributed 
> on both cpus.
> 
> brian carb
> unisys corporation - malvern, pa
> 
> -----Original Message-----
> From: Emmanuel Ackaouy [mailto:ackaouy@xxxxxxxxx] 
> Sent: Thursday, March 29, 2007 11:42 AM
> To: Carb, Brian A
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Odd CPU Scheduling Behavior
> 
> There is no gang scheduling in Xen, so what you see is not unexpected.
> Both VCPUs of the same VM are as likely to run on the same 
> physical CPU as not. For each VM, though, both its VCPUs 
> should get equal CPU time if they are runnable, even if they 
> alternately run on the same physical CPU.
> 
> I have seen some multithreaded applications/libraries back 
> off from using execution vehicles (processes) to schedule a 
> runnable thread when it doesn't seem to make forward 
> progress, probably because some code somewhere assumes 
> another process is hogging the CPU and that it's therefore 
> better to lower the number of execution vehicles. In this 
> case, multithreaded apps running in a 2-CPU guest on Xen 
> sometimes only schedule work on 1 CPU when there is another 
> VM competing for the physical CPU resources.
> 
> Are both VCPUs of each VM making forward progress during your test?
> 
> On Mar 29, 2007, at 16:58, Carb, Brian A wrote:
> 
> > We're seeing a cpu scheduling behavior in Xen and we're 
> wondering if 
> > anyone can explain it.
> >  
> > We're running XEN 3.0.4 on a Unisys ES7000/one with 8 CPUs (4 
> > dual-core sockets) and 32GB memory. XEN is built on SLES10, and the 
> > system is booted with dom0_mem=512mb. We have 2 
> para-virtual machines, 
> > each booted with 2 vcpus and 2GB memory, and each running SLES10 and
> > Apache2 with worker multi-processing modules.
> >  
> > The vcpus of dom0, vm1 and vm2 are pinned as follows:
> >  
> > dom0 is relegated to 2 vcpus (xm vcpu-set 0 2) and these 
> are pinned to 
> > cpus 0-1
> > vm1 uses 2 vcpus pinned to cpus 2-3
> > vm2 uses 2 vcpus pinned to cpus 2-3
> >  
> > The cpus 4 through 7 are left unused.
> >  
> > Our test runs http_load against the Apache2 web servers in 
> the 2 vms. 
> > Since Apache2 is using worker multi-processing modules, we 
> expect that 
> > each vm will spread its load over the 2 vcpus, and during 
> the test we 
> > have verified this using top and sar inside a vm console.
> >  
> > The odd behavior occurs when we monitor cpu usage using xenmon in 
> > interactive mode. By pressing "c", we can observe the load 
> on each of 
> > the cpus. When we examine cpus 2 and 3 initially, each is 
> used equally 
> > by vm1 and vm2. However, shortly after we start our 
> testing, cpu2 runs
> > vm1 exclusively 100% of the time, and cpu3 runs vm2 100% of 
> the time. 
> > When the test completes, CPUs 2 and 3 go back to sharing the load of
> > vm1 and vm2.
> >  
> > Is this the expected behavior?
> >
> > brian carb
> > unisys corporation - malvern, pa
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxx
> > http://lists.xensource.com/xen-devel
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
