
RE: [Xen-devel] RE: The calculation of the credit in credit_scheduler



Yes, this works. Thanks very much!

--jyh

>-----Original Message-----
>From: Juergen Gross [mailto:juergen.gross@xxxxxxxxxxxxxx]
>Sent: Wednesday, November 10, 2010 2:04 PM
>To: Jiang, Yunhong
>Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
>Subject: Re: [Xen-devel] RE: The calculation of the credit in credit_scheduler
>
>On 11/10/10 06:55, Jiang, Yunhong wrote:
>>
>>
>>> -----Original Message-----
>>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Juergen Gross
>>> Sent: Wednesday, November 10, 2010 1:46 PM
>>> To: Jiang, Yunhong
>>> Cc: George Dunlap; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, 
>>> Xiantao
>>> Subject: Re: [Xen-devel] RE: The calculation of the credit in 
>>> credit_scheduler
>>>
>>> On 11/10/10 03:39, Jiang, Yunhong wrote:
>>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: George Dunlap [mailto:George.Dunlap@xxxxxxxxxxxxx]
>>>>> Sent: Tuesday, November 09, 2010 10:27 PM
>>>>> To: Jiang, Yunhong
>>>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Zhang, Xiantao
>>>>> Subject: Re: The calculation of the credit in credit_scheduler
>>>>>
>>>>> On 05/11/10 07:06, Jiang, Yunhong wrote:
>>>>>> The reason is how the credit is calculated. Although the 3 HVM domains
>>>>>> are pinned to 2 PCPUs and share those 2 CPUs, they will all still get
>>>>>> 2*300 credits when credit accounting runs. That means the I/O-intensive
>>>>>> HVM domain will never be under credit, so it will preempt the
>>>>>> CPU-intensive domains whenever it is boosted (i.e. after an I/O access
>>>>>> to QEMU); it is set to TS_UNDER only at tick time, and then boosted
>>>>>> again.
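(For scale, using the per-pcpu figure that comes up later in the thread and
treating the numbers as purely illustrative: the two shared pcpus can only
supply about 2 * 300 = 600 credits' worth of CPU time per accounting period
between the three domains, yet each domain is credited as if it had its full
fair share of the whole machine.)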
>>>>>
>>>>> I suspect that the real reason you're having trouble is that pinning and
>>>>> the credit mechanism don't work very well together.  Instead of pinning,
>>>>> have you tried using the cpupools interface to make a 2-cpu pool to put
>>>>> the VMs into?  That should allow the credit to be divided appropriately.
>>>>
>>>> I had a quick look at the code, and it seems the cpupool should not help
>>>> in such a situation. The cpupool only cares about which CPUs a domain can
>>>> be scheduled on, not about the credit calculation.
>>>
>>> With cpupools you avoid the pinning. This will result in a better credit
>>> calculation.
>>
>> My system is busy with testing, so I can't do the experiment right now, but
>> I'm not sure the cpupool will help the credit calculation.
>>
>> From the code in csched_acct() in "common/sched_credit.c", credit_fair is
>> calculated as follows, and its input credit_total is calculated by summing
>> the credit of all pcpus, without considering the cpupool.
>>
>>          credit_fair = ( ( credit_total
>>                            * sdom->weight
>>                            * sdom->active_vcpu_count )
>>                          + (weight_total - 1)
>>                        ) / weight_total;
>>
>> Or did I miss anything?
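To make this concrete, here is a minimal, self-contained sketch (not the real
csched_acct() code) that plugs purely illustrative numbers into the formula
quoted above. The 8-pcpu machine, the default weight of 256, the figure of
300 credits per pcpu per accounting period, and taking weight_total as the
sum of weight * active_vcpu_count over the three domains are all assumptions
made only for this illustration:

    /*
     * Minimal sketch, not the actual csched_acct() code: plug
     * illustrative numbers into the quoted credit_fair formula to show
     * why pinning keeps every domain above zero credit.
     */
    #include <stdio.h>

    int main(void)
    {
        int credit_total      = 8 * 300;     /* machine-wide credit: 2400    */
        int weight            = 256;         /* per-domain weight (assumed)  */
        int active_vcpu_count = 1;           /* active vcpus of this domain  */
        int weight_total      = 3 * 256 * 1; /* three such domains (assumed) */

        /* Same rounding-up division as the formula quoted above. */
        int credit_fair = ( ( credit_total
                              * weight
                              * active_vcpu_count )
                            + (weight_total - 1)
                          ) / weight_total;

        printf("credit_fair = %d\n", credit_fair);  /* prints 800 */

        /*
         * The three domains are pinned to 2 pcpus, which together supply
         * only 2 * 300 = 600 credits' worth of CPU time per period, yet
         * each domain is granted 800.  None of them can ever go under
         * credit, so the boosted I/O domain keeps preempting the others.
         */
        return 0;
    }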
>
>The scheduler sees only the pcpus and domains in its pool, as the scheduler
>is cpupool-specific.
>BTW: the credit scheduler's problem with cpu pinning was the main reason for
>introducing cpupools.
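
With the same purely illustrative numbers as in the sketch above: the credit
scheduler instance of a 2-cpu pool only distributes 2 * 300 = 600 credits per
accounting period, so each of the three equally weighted domains gets roughly
200. A domain that burns more than that really does go under credit, so the
boosted I/O domain can no longer preempt the CPU-bound domains indefinitely.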
>
>
>Juergen
>
>--
>Juergen Gross                 Principal Developer Operating Systems
>TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
>Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
>Domagkstr. 28                           Internet: ts.fujitsu.com
>D-80807 Muenchen                 Company details:
>ts.fujitsu.com/imprint.html
