xen-devel

RE: [Xen-devel] [RFC] Scheduler work, part 1: High-level goals and interface.

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC] Scheduler work, part 1: High-level goals and interface.
From: "Tian, Kevin" <kevin.tian@xxxxxxxxx>
Date: Sat, 11 Apr 2009 18:00:48 +0800
Accept-language: en-US
Acceptlanguage: en-US
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sat, 11 Apr 2009 03:01:16 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49DF7FBF.9060209@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <de76405a0904090858g145f07cja3bd7ccbd6b30ce9@xxxxxxxxxxxxxx> <49DE415F.3060002@xxxxxxxx> <0A882F4D99BBF6449D58E61AAFD7EDD61036A60D@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <49DF708F.6070102@xxxxxxxx> <4FA716B1526C7C4DB0375C6DADBC4EA34172EC1C9C@xxxxxxxxxxxxxxxxxxxxxxxxx> <49DF7FBF.9060209@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acm6AJsiWH7BuvOZSe2hrHCvJrBdoAAitM4Q
Thread-topic: [Xen-devel] [RFC] Scheduler work, part 1: High-level goals and interface.
>From: Jeremy Fitzhardinge [mailto:jeremy@xxxxxxxx] 
>Sent: April 11, 2009 1:20
>Ian Pratt wrote:
>>> I don't know what the performance characteristics of modern-HT is,
>>> but in P4-HT the throughput of a given thread was very dependent on
>>> what the other thread was doing. If its competing with some other
>>> arbitrary domain, then its hard to make any estimates about what the
>>> throughput of a given vcpu's thread is.
>>>
>>
>> The original Northwood P4's were fairly horrible as regards
>> performance predictability, but things got considerably better with
>> later steppings. Nehalem has some interesting features that ought to
>> make it better yet.
>>
>> Presenting sibling pairs to guests is probably preferable (it avoids
>> any worries about side channel crypto attacks), but I certainly
>> wouldn't restrict it to just that: server hosted desktop workloads
>> often involve large numbers of single VCPU guests, and you want every
>> logical processor available.
>>
>> Scaling the accounting if two threads share a core is a good way of
>> ensuring things tend toward longer term fairness.
>>
>> Possibly having two modes of operation would be good thing:
>>
>>  1. explicitly present HT to guests and gang schedule threads
>>
>>  2. normal free-for-all with HT aware accounting.
>>
>> Of course, #1 isn't optimal if guests may migrate between HT and
>> non-HT systems.
>>
>
>This can probably be extended to Intel's hyper-dynamic flux mode (that
>may not be the real marketing name), where it can overclock one core if
>the other is idle.
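
On the HT-aware accounting Ian mentions, here is a rough sketch of the
kind of per-tick scaling that could work (purely illustrative; the
structures and the 70% busy-sibling factor below are assumptions, not
existing Xen credit-scheduler code):

    /* Illustrative sketch only: charge a vcpu less credit per
     * accounting tick when its hyperthread sibling was also busy, so
     * throughput lost to HT contention is paid back over time.  The
     * types and the 70% factor are assumptions, not Xen code. */

    #define CREDITS_PER_TICK     100
    #define SIBLING_BUSY_PCT      70  /* assumed: ~70% throughput when sharing */

    struct ht_vcpu_acct {
        int credit;             /* remaining scheduling credit           */
        int sibling_was_busy;   /* sampled each tick: did the sibling
                                   thread run another vcpu this period?  */
    };

    static void ht_acct_tick(struct ht_vcpu_acct *v)
    {
        int charge = CREDITS_PER_TICK;

        /* A thread that had to share the core did less useful work,
         * so charge it proportionally less. */
        if (v->sibling_was_busy)
            charge = charge * SIBLING_BUSY_PCT / 100;

        v->credit -= charge;
    }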

The usual name for that mode is Turbo Boost. However, it would be
difficult for software to account for the extra cycles gained from
overclocking, because whether a boost actually happens, and how many
cycles are gained, is controlled entirely by the hardware. There is a
feedback mechanism, though, which lets software derive the average
frequency over an elapsed interval. Currently, however, the cpufreq
governor runs in a purely time-based fashion, with no connection to
the scheduler. That is one area we could further enhance.
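
As a rough illustration of that feedback (the MSR numbers are the
documented IA32_MPERF/IA32_APERF pair; rdmsr64() and the rest are
made-up placeholders, not a real Xen interface), the average effective
frequency over an interval could be derived like this:

    /* Sketch: IA32_MPERF counts at a fixed reference rate while the
     * core is active, IA32_APERF at the actual rate, so
     * delta_APERF / delta_MPERF gives the average boost ratio over an
     * interval.  rdmsr64() is a placeholder accessor, and the caller
     * is assumed to sample often enough that the multiplication below
     * does not overflow. */

    #include <stdint.h>

    #define MSR_IA32_MPERF  0xE7
    #define MSR_IA32_APERF  0xE8

    extern uint64_t rdmsr64(uint32_t msr);    /* placeholder MSR read */

    struct freq_feedback {
        uint64_t last_aperf;
        uint64_t last_mperf;
    };

    /* Average effective frequency (kHz) since the previous call,
     * given the nominal non-boosted frequency in kHz. */
    static uint64_t avg_freq_khz(struct freq_feedback *f,
                                 uint64_t nominal_khz)
    {
        uint64_t aperf = rdmsr64(MSR_IA32_APERF);
        uint64_t mperf = rdmsr64(MSR_IA32_MPERF);
        uint64_t da = aperf - f->last_aperf;
        uint64_t dm = mperf - f->last_mperf;

        f->last_aperf = aperf;
        f->last_mperf = mperf;

        if (dm == 0)
            return nominal_khz;   /* no reference ticks; assume nominal */

        /* Ratio > 1 means the core ran above nominal (Turbo Boost). */
        return nominal_khz * da / dm;
    }

A governor or scheduler hook could feed that ratio back into the
accounting so boosted cycles are charged more accurately.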

Thanks,
Kevin
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel