This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] RE: The calculation of the credit in credit_scheduler

To: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Subject: [Xen-devel] RE: The calculation of the credit in credit_scheduler
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Wed, 10 Nov 2010 10:39:16 +0800
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Delivery-date: Tue, 09 Nov 2010 18:41:51 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4CD95A22.2090902@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <789F9655DD1B8F43B48D77C5D30659732FD0A5C9@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4CD95A22.2090902@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcuAGiP87pmZEFniTfKjbbcUcdeSkQAZeFSg
Thread-topic: The calculation of the credit in credit_scheduler

>-----Original Message-----
>From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
>Sent: Tuesday, November 09, 2010 10:27 PM
>To: Jiang, Yunhong
>Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
>Subject: Re: The calculation of the credit in credit_scheduler
>On 05/11/10 07:06, Jiang, Yunhong wrote:
>> The reason is how the credit is calculated. Although the 3 HVM domains are
>> pinned to 2 PCPUs and share the 2 CPUs, they will all get 2*300 credit when
>> credit is calculated. That means the I/O-intensive HVM domain will never be
>> under credit; thus it preempts the CPU-intensive one whenever it is boosted
>> (i.e. after I/O access to QEMU), is set to TS_UNDER only at the tick time,
>> and is then boosted again.
>I suspect that the real reason you're having trouble is that pinning and
>the credit mechanism don't work very well together.  Instead of pinning,
>have you tried using the cpupools interface to make a 2-cpu pool to put
>the VMs into?  That should allow the credit to be divided appropriately.
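For reference, the imbalance described in the quoted mail can be put in numbers. This is only a hedged sketch, not Xen source: it assumes, per the figures above, that each accounting period hands every domain 2*300 credits, while the three pinned domains can only burn two pcpus' worth of time between them.

```python
# Hedged numeric sketch (not Xen code): why the pinned I/O-intensive domain
# never goes under credit. All figures are taken from the mail above.

CREDITS_PER_PCPU = 300   # credit worth of one pcpu per accounting period
PCPUS = 2                # the two pcpus the domains are pinned to
DOMAINS = 3              # the three HVM domains sharing them

earned_each = PCPUS * CREDITS_PER_PCPU     # each domain is granted 2*300 = 600
burnable_total = PCPUS * CREDITS_PER_PCPU  # only 600 credits of cpu time exist
burned_each = burnable_total // DOMAINS    # ~200 actually consumed per domain

surplus = earned_each - burned_each
print(earned_each, burned_each, surplus)   # 600 200 400
```

Since every domain earns roughly three times what it can consume, its credit stays positive, so a boosted vcpu is never demoted for running out of credit, only by the periodic tick.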

I had a quick look at the code, and it seems the cpupool will not help in this
situation. The cpupool only controls which CPUs a domain can be scheduled on,
not how the credit is calculated.

Will do the experiment later.
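For reference, the cpupool experiment George suggests might look roughly like the following with the xl toolstack. This is a sketch only: the pool name, cpu numbers, and domain names are made up, and the exact cpupool syntax should be checked against the toolstack version in use.

```shell
# testpool.cfg (illustrative contents):
#   name  = "testpool"
#   sched = "credit"
#   cpus  = ["2", "3"]

# Create a 2-cpu pool running its own credit scheduler instance.
xl cpupool-create testpool.cfg

# Move the three HVM domains into the pool; credit accounting then
# happens over the pool's 2 cpus only, not the whole host.
xl cpupool-migrate hvm1 testpool
xl cpupool-migrate hvm2 testpool
xl cpupool-migrate hvm3 testpool
```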


>> I didn't try credit2, so no idea if this will happen to credit2 also.
>Credit2 may do better at dividing credit.  However, it doesn't implement
>pinning (it just ignores it).  So you couldn't do your test unless you used
>cpupools, or limited Xen to 2 cpus on its command line.
>Also, credit2 isn't yet designed to handle 64 cpus, so it may not
>work very well on a system with 64 cores.
>  -George

Xen-devel mailing list