
Re: [Xen-devel] RE: The calculation of the credit in credit_scheduler


  • To: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
  • Date: Wed, 10 Nov 2010 07:03:49 +0100
  • Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>, "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
  • Delivery-date: Tue, 09 Nov 2010 22:04:54 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On 11/10/10 06:55, Jiang, Yunhong wrote:


-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Juergen Gross
Sent: Wednesday, November 10, 2010 1:46 PM
To: Jiang, Yunhong
Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
Subject: Re: [Xen-devel] RE: The calculation of the credit in credit_scheduler

On 11/10/10 03:39, Jiang, Yunhong wrote:


-----Original Message-----
From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
Sent: Tuesday, November 09, 2010 10:27 PM
To: Jiang, Yunhong
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
Subject: Re: The calculation of the credit in credit_scheduler

On 05/11/10 07:06, Jiang, Yunhong wrote:
The reason is how the credit is calculated. Although the 3 HVM domains are
pinned to 2 PCPUs and share those 2 CPUs, they will all get 2*300 credits at
each credit accounting. That means the I/O intensive HVM domain will never run
out of credit, so it will preempt the CPU intensive ones whenever it is boosted
(i.e. after an I/O access to QEMU); it is set back to TS_UNDER only at tick
time, and then boosted again.
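To illustrate that wake/tick cycle, here is a tiny standalone sketch (not Xen
code; only the priority ordering BOOST > UNDER > OVER mirrors the
CSCHED_PRI_TS_* values in sched_credit.c, everything else, including the
credit value, is an assumption chosen for illustration):

    /* Minimal sketch of the boost/under cycle described above (not Xen
     * code; the priority ordering BOOST > UNDER > OVER mirrors the
     * CSCHED_PRI_TS_* values in sched_credit.c, the rest is simplified). */
    #include <stdio.h>

    enum prio { PRIO_OVER = -2, PRIO_UNDER = -1, PRIO_BOOST = 0 };

    /* On wakeup (e.g. after QEMU completes an I/O request), a vcpu that
     * still has credit left is boosted so it can preempt running vcpus. */
    static enum prio on_wake(int credit)
    {
        return credit >= 0 ? PRIO_BOOST : PRIO_OVER;
    }

    /* At the next scheduler tick the boost is dropped again. */
    static enum prio on_tick(enum prio p)
    {
        return p == PRIO_BOOST ? PRIO_UNDER : p;
    }

    int main(void)
    {
        /* Because accounting hands the I/O-bound domain more credit than
         * it can burn on the 2 pinned pcpus, its credit never goes
         * negative, so every wakeup ends in BOOST. */
        int credit = 500;                /* assumed, stays positive here */
        enum prio p = PRIO_UNDER;

        for (int io_event = 0; io_event < 3; io_event++) {
            p = on_wake(credit);         /* I/O completion -> BOOST */
            printf("wake: prio=%d (preempts the CPU-bound vcpus)\n", p);
            p = on_tick(p);              /* tick           -> UNDER */
            printf("tick: prio=%d (until the next wakeup)\n", p);
        }
        return 0;
    }

Since the accounting never drives the I/O-bound domain's credit negative,
every wakeup takes the BOOST branch, which is exactly the preemption pattern
described above.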

I suspect that the real reason you're having trouble is that pinning and
the credit mechanism don't work very well together.  Instead of pinning,
have you tried using the cpupools interface to make a 2-cpu pool to put
the VMs into?  That should allow the credit to be divided appropriately.

I had a quick look at the code, and it seems the cpu pool would not help in
such a situation. The cpu pool only cares about which CPUs a domain can be
scheduled on, not about the credit calculation.

With cpupools you avoid the pinning. This will result in a better credit
calculation.

My system is busy with testing, so I can't do the experiment now, but I'm not
sure the cpupool will help the credit calculation.

From the code in csched_acct() in "common/sched_credit.c", credit_fair is
calculated as follows, and credit_fair's original value is calculated by
summing all pcpus' credits, without considering the cpu pool.

         credit_fair = ( ( credit_total
                           * sdom->weight
                           * sdom->active_vcpu_count )
                         + (weight_total - 1)
                       ) / weight_total;

Or did I miss anything?
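For concreteness, here is a small standalone sketch (not Xen code; the 8-pcpu
host, the 300 credits per pcpu per accounting period suggested by the 2*300
figure above, and the weight of 256 are all assumptions) that plugs
hypothetical numbers into the quoted expression, once with credit_total summed
over every host pcpu as with pinning, and once over only the 2 pcpus of a
dedicated cpupool:

    /* Sketch only: evaluates the credit_fair expression quoted above with
     * hypothetical inputs.  Assumes 300 credits per pcpu per accounting
     * period, an 8-pcpu host, and 3 single-vcpu domains of weight 256. */
    #include <stdio.h>

    #define CREDITS_PER_PCPU_PER_ACCT 300u   /* assumption */

    static unsigned int credit_fair(unsigned int credit_total,
                                    unsigned int weight,
                                    unsigned int active_vcpu_count,
                                    unsigned int weight_total)
    {
        /* Same shape as the expression from csched_acct() quoted above. */
        return (credit_total * weight * active_vcpu_count
                + (weight_total - 1)) / weight_total;
    }

    int main(void)
    {
        unsigned int weight = 256, vcpus = 1, ndoms = 3;
        unsigned int weight_total = ndoms * weight * vcpus;

        /* Pinning: credit_total is still summed over all 8 host pcpus,
         * so each domain is handed far more credit per period than a
         * single vcpu can burn on the 2 pcpus it is allowed to run on. */
        unsigned int total_pinned = 8 * CREDITS_PER_PCPU_PER_ACCT;
        printf("pinned to 2 of 8 pcpus: credit_fair = %u (burnable <= %u)\n",
               credit_fair(total_pinned, weight, vcpus, weight_total),
               CREDITS_PER_PCPU_PER_ACCT);

        /* 2-cpu cpupool: the scheduler instance covers only the pool's
         * pcpus, so the fair share matches what they can actually deliver. */
        unsigned int total_pool = 2 * CREDITS_PER_PCPU_PER_ACCT;
        printf("2-cpu cpupool:          credit_fair = %u\n",
               credit_fair(total_pool, weight, vcpus, weight_total));

        return 0;
    }

With these assumed numbers each pinned domain is handed roughly 800 credits
per period but can burn at most about 300 on the 2 pcpus it may run on, so it
never runs out; inside a 2-cpu pool the fair share drops to about 200, which
matches what the pool's pcpus can actually deliver.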

The scheduler sees only the pcpus and domains in the pool, as each scheduler
instance is cpupool-specific.
BTW: the credit scheduler's problem with cpu pinning was the main reason for
introducing cpupools.


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

