xen-devel

Re: [Xen-devel] RE: The calculation of the credit in credit_scheduler

To: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Subject: Re: [Xen-devel] RE: The calculation of the credit in credit_scheduler
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Wed, 10 Nov 2010 07:03:49 +0100
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Delivery-date: Tue, 09 Nov 2010 22:04:54 -0800
In-reply-to: <789F9655DD1B8F43B48D77C5D30659732FD7E0EC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Fujitsu Technology Solutions
References: <789F9655DD1B8F43B48D77C5D30659732FD0A5C9@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4CD95A22.2090902@xxxxxxxxxxxxx> <789F9655DD1B8F43B48D77C5D30659732FD7DF70@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4CDA31A8.4050308@xxxxxxxxxxxxxx> <789F9655DD1B8F43B48D77C5D30659732FD7E0EC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.15) Gecko/20101030 Iceowl/1.0b1 Icedove/3.0.10
On 11/10/10 06:55, Jiang, Yunhong wrote:


-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Juergen Gross
Sent: Wednesday, November 10, 2010 1:46 PM
To: Jiang, Yunhong
Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
Subject: Re: [Xen-devel] RE: The calculation of the credit in credit_scheduler

On 11/10/10 03:39, Jiang, Yunhong wrote:


-----Original Message-----
From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
Sent: Tuesday, November 09, 2010 10:27 PM
To: Jiang, Yunhong
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
Subject: Re: The calculation of the credit in credit_scheduler

On 05/11/10 07:06, Jiang, Yunhong wrote:
The reason is how the credit is calculated. Although the 3 HVM domains are pinned
to 2 PCPUs and share those 2 CPUs, they will all get 2 * 300 credits at credit
accounting time. That means the I/O-intensive HVM domain will never run out of
credit, so it preempts the CPU-intensive domains whenever it is boosted (i.e.
after an I/O access to QEMU); it is set back to TS_UNDER only at tick time, and
is then boosted again.
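As background for the boost/TS_UNDER cycle described above, here is a short
annotated sketch; the priority definitions are taken from common/sched_credit.c,
while the numbered sequence is only an illustration of the reported behaviour,
not a trace:

    /* Time-share priority levels in common/sched_credit.c */
    #define CSCHED_PRI_TS_BOOST      0    /* just woken up (boosted)  */
    #define CSCHED_PRI_TS_UNDER     -1    /* still has credit         */
    #define CSCHED_PRI_TS_OVER      -2    /* has run out of credit    */

    /*
     * Cycle of the I/O-intensive domain described above:
     *  1. Its vcpu blocks on I/O; when QEMU completes the request the
     *     vcpu wakes -> TS_BOOST, preempting the TS_UNDER cpu-bound vcpus.
     *  2. At the next scheduler tick it is demoted -> TS_UNDER.
     *  3. Accounting keeps granting it fresh credit (see the formula
     *     quoted below), so it never drops to TS_OVER; step 1 repeats.
     */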

I suspect that the real reason you're having trouble is that pinning and
the credit mechanism don't work very well together.  Instead of pinning,
have you tried using the cpupools interface to make a 2-cpu pool to put
the VMs into?  That should allow the credit to be divided appropriately.

I had a quick look at the code, and it seems the cpupool would not help in this
situation. The cpupool only controls which CPUs a domain can be scheduled on,
not the credit calculation.

With cpupools you avoid the pinning. This will result in a better credit
calculation.

My system is busy with testing, so I can't do the experiment right now, but I'm
not sure the cpupool will help the credit calculation.

From the code in csched_acct() in "common/sched_credit.c", credit_fair is
calculated as follows, and its input credit_total is calculated by summing the
credit of all pcpus, without considering the cpupool.

         credit_fair = ( ( credit_total
                           * sdom->weight
                           * sdom->active_vcpu_count )
                         + (weight_total - 1)
                       ) / weight_total;

Or did I miss anything?
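For illustration, a minimal standalone sketch that evaluates the expression
quoted above for a hypothetical configuration; the numbers (a 4-pcpu host,
300 credits per pcpu per accounting period, three single-vcpu domains of
weight 256 pinned to 2 pcpus) are assumptions for the example, not taken from
the original report:

    #include <stdio.h>

    /* Assumed credit issued per pcpu per accounting period
     * (CSCHED_CREDITS_PER_ACCT in the sources of that era). */
    #define CREDITS_PER_ACCT 300

    int main(void)
    {
        int host_pcpus   = 4;    /* assumed: pcpus in the whole host */
        int pinned_pcpus = 2;    /* pcpus the three domains run on   */
        int ndoms = 3, weight = 256, active_vcpus = 1;

        /* credit_total is summed over ALL pcpus, ignoring the pinning */
        int credit_total = host_pcpus * CREDITS_PER_ACCT;
        int weight_total = ndoms * weight * active_vcpus;

        /* the expression quoted from csched_acct() above */
        int credit_fair = (credit_total * weight * active_vcpus
                           + (weight_total - 1)) / weight_total;

        /* credit the pinned pcpus can actually burn per period */
        int burnable = pinned_pcpus * CREDITS_PER_ACCT;

        printf("credit_fair per domain: %d\n", credit_fair);          /* 400 */
        printf("burnable by all %d domains: %d\n", ndoms, burnable);  /* 600 */
        return 0;
    }

Each domain is granted 400 credits per period while all three together can only
burn 600 on the two pinned pcpus, so credit accumulates and the I/O-bound domain
never goes over, which is consistent with the behaviour reported at the top of
the thread.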

The scheduler sees only the pcpus and domains in its pool, as each scheduler
instance is cpupool-specific.
BTW: the credit scheduler's problem with cpu pinning was the main reason for
introducing cpupools.
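To make that concrete with the same assumed numbers as in the sketch above:
a dedicated 2-cpu pool gives the pool's scheduler instance only those 2 pcpus
and the 3 domains, so

    credit_total = 2 * 300 = 600
    credit_fair  = (600 * 256 * 1 + 767) / 768 = 200   per domain

and the 600 credits granted now match the 600 credits the pool's pcpus can
actually burn per accounting period, so a domain that over-consumes really
does go over credit.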


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel