
[Xen-devel] RE: The calculation of the credit in credit_scheduler




>-----Original Message-----
>From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
>Sent: Tuesday, November 09, 2010 10:27 PM
>To: Jiang, Yunhong
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
>Subject: Re: The calculation of the credit in credit_scheduler
>
>On 05/11/10 07:06, Jiang, Yunhong wrote:
>> The reason is how the credit is calculated. Although the 3 HVM domains are
>> pinned to 2 PCPUs and share those 2 CPUs, they will all get 2*300 credits
>> at credit accounting. That means the I/O-intensive HVM domain will never be
>> under credit, so it will preempt the CPU-intensive domains whenever it is
>> boosted (i.e. after an I/O access to QEMU); it is set to TS_UNDER only at
>> tick time, and then it is boosted again.
>
>I suspect that the real reason you're having trouble is that pinning and
>the credit mechanism don't work very well together.  Instead of pinning,
>have you tried using the cpupools interface to make a 2-cpu pool to put
>the VMs into?  That should allow the credit to be divided appropriately.

I had a quick look at the code, and it seems cpupools should not help in this 
situation: a cpupool only restricts which CPUs a domain can be scheduled on, 
but it does not change the credit calculation.
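
To make the arithmetic concrete, here is a rough sketch of the accounting in 
question. It is not the real sched_credit.c code: the 300 credits per 
accounting period, the min(weight share, vcpus * 300) cap, and the assumption 
that a cpupool's credit total is computed only over the pool's own CPUs are 
simplifications for illustration (the last one is exactly what the experiment 
is meant to check).

/*
 * A rough sketch of the accounting arithmetic in question.  It is NOT the
 * real sched_credit.c code: constants, names and the min(weight share,
 * vcpus * 300) rule are simplifying assumptions made for illustration.
 */
#include <stdio.h>

#define CREDITS_PER_ACCT 300   /* assumed credits handed out per pCPU per
                                  accounting period (~100/tick * 3 ticks) */

static int dom_grant(int weight, int weight_total, int nr_vcpus, int pool_cpus)
{
    /* Fair share of the pool's total credit, split by weight only ...    */
    int fair = pool_cpus * CREDITS_PER_ACCT * weight / weight_total;
    /* ... capped at what the domain's own vCPUs could possibly consume.  */
    int peak = nr_vcpus * CREDITS_PER_ACCT;
    return fair < peak ? fair : peak;
}

int main(void)
{
    int weight = 256, nr_doms = 3, nr_vcpus = 2;

    /* Pinning: the 3 domains run on only 2 pCPUs, but the credit total is
     * still computed over all 64 online pCPUs, so each domain is granted
     * the full 2*300 = 600 mentioned above.                              */
    int grant_pinned = dom_grant(weight, nr_doms * weight, nr_vcpus, 64);

    /* Cpupool of 2 pCPUs: only the pool's 2 CPUs contribute credit, so
     * 2*300 = 600 is divided between the 3 domains.                      */
    int grant_pool = dom_grant(weight, nr_doms * weight, nr_vcpus, 2);

    /* What the 2 shared pCPUs can actually hand out per period in total. */
    int burnable_total = 2 * CREDITS_PER_ACCT;

    printf("pinned : %d credits granted per domain, %d burnable in total\n",
           grant_pinned, burnable_total);
    printf("cpupool: %d credits granted per domain, %d burnable in total\n",
           grant_pool, burnable_total);

    /* pinned : 600 granted each, but only ~200 each can really be burned,
     *          so nobody ever goes UNDER and BOOST always preempts.
     * cpupool: 200 granted each, roughly what each can burn, so the
     *          CPU-bound domains do run out of credit.                   */
    return 0;
}

If the cpupool case really behaves like the second line, George's suggestion 
should make the CPU-bound domains go UNDER and break the boost/preempt cycle.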

Will do the experiment later.
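
For the record, the experiment should look roughly like this with the cpupool 
interface (xl command names from the current toolstack are assumed, the xm 
syntax may differ, and hvm1/hvm2/hvm3 are placeholder domain names):

  # /etc/xen/testpool.cfg (hypothetical path), roughly:
  #   name  = "testpool"
  #   sched = "credit"
  #   cpus  = ["2", "3"]

  xl cpupool-cpu-remove Pool-0 2      # free two pCPUs from the default pool
  xl cpupool-cpu-remove Pool-0 3
  xl cpupool-create testpool.cfg      # 2-CPU pool running the credit scheduler
  xl cpupool-migrate hvm1 testpool    # move the three HVM guests into it
  xl cpupool-migrate hvm2 testpool
  xl cpupool-migrate hvm3 testpool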

Thanks
--jyh

>
>> I didn't try credit2, so no idea if this will happen to credit2 also.
>
>Credit2 may do better at dividing credit.  However, it doesn't implement
>pinning (it just ignores it).  So you couldn't do your test unless you used
>cpupools, or limited Xen to 2 cpus (cpus=2 on its command line).
>
>Also, credit2 isn't yet designed to handle 64 cpus, so it may not
>work very well on a system with 64 cores.
>
>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

