Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part

To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Fri, 16 Jan 2009 08:17:14 +0000
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <4970344D.7030009@xxxxxxxxxxxxxxxxxxx>
On 16/01/2009 07:16, "Juergen Gross" <juergen.gross@xxxxxxxxxxxxxxxxxxx>
wrote:

>> Something like that would be better. Of course you'd need to measure work
>> done in the domUs as well, as one of the critical factors for this patch
>> would be how it affects fairness. It's one reason I'm leery of this patch --
>> our scheduler is unpredictable enough as it is without giving domains
>> another lever to pull!
> 
> Keir, is the data I posted recently okay?
> I think my approach requires fewer changes than the "yield after spin" variant,
> which needed more patches in the hypervisor and didn't seem to be settled.
> Having my patches in the hypervisor at least would make life much easier for
> our BS2000 system...
> I would add some code to ensure a domain isn't misusing the new interface.
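
(Sketch of the guard Juergen mentions: the hypervisor would honour a
domain's "do not preempt me" hint only for a bounded interval, so the new
interface cannot be used to monopolise a CPU. Every identifier here, such
as v->no_preempt and NO_PREEMPT_MAX_NS, is hypothetical rather than taken
from the actual patch.)

    /* Hypothetical misuse guard: honour a guest's no-preempt hint only
     * briefly, then preempt as usual and account the overrun. */
    static bool_t honour_no_preempt_hint(struct vcpu *v, s_time_t now)
    {
        if ( !v->no_preempt )
            return 0;

        if ( now - v->no_preempt_start > NO_PREEMPT_MAX_NS )
        {
            v->no_preempt = 0;           /* hint expired: preempt normally */
            v->no_preempt_overruns++;    /* per-domain misuse accounting */
            return 0;
        }

        return 1;                        /* defer the preemption briefly */
    }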

It didn't sound like there was much average difference between the two
approaches. Also, George's patches may be going in anyway for general
scheduling-stability reasons, and any other observed hiccups may simply
point to limitations of the scheduler implementation, which George may look
at further.

Do you have an explanation for why shell commands behave differently with
your patch, or alternatively why they can be delayed so long with the yield
approach?

The approach taken in Linux is not merely 'yield on spinlock', by the way;
it is 'block on event channel on spinlock', essentially turning a contended
spinlock into a sleeping mutex. I think that is quite different behaviour
from merely yielding and expecting the scheduler to do something sensible
with your yield request.
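
(For reference, a simplified sketch of that slow path, modelled loosely on
the Linux pv-ops spinlock code; helper names such as spinning_lock() and
lock_kicker_irq follow the upstream code only approximately.)

    static void xen_spin_lock_slow(struct raw_spinlock *lock)
    {
        /* Per-VCPU event channel, bound as an IRQ and used only for
         * lock kicks. */
        int irq = __get_cpu_var(lock_kicker_irq);

        /* Advertise which lock this VCPU is waiting on, so the
         * unlocker knows whom to kick. */
        spinning_lock(lock);

        do {
            /* Clear any stale event, then re-check the lock so a
             * kick cannot be lost between the test and the block. */
            xen_clear_irq_pending(irq);
            if (xen_spin_trylock(lock))
                break;

            /* Block in the hypervisor (a SCHEDOP_poll hypercall)
             * until the unlocker notifies our event channel.  The
             * VCPU is genuinely descheduled, which is what turns the
             * contended spinlock into a sleeping mutex rather than a
             * busy yield. */
            xen_poll_irq(irq);
        } while (1);

        unspinning_lock(lock);
    }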

Overall I think George should consider your patch as part of his wider
scheduler refurbishment work. I personally remain unconvinced that the
reactive approach cannot get predictable performance close to that of your
approach, and without needing new hypervisor interfaces.

 -- Keir


