WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part

To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part
From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
Date: Fri, 19 Dec 2008 15:15:09 +0000
Cc: Juergen Gross <juergen.gross@xxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 19 Dec 2008 07:15:40 -0800
In-reply-to: <C5712033.206AD%keir.fraser@xxxxxxxxxxxxx>
References: <494B7892.76E4.0078.0@xxxxxxxxxx> <C5712033.206AD%keir.fraser@xxxxxxxxxxxxx>
The general idea seems interesting.  I think we've kicked it around
internally before, but ended up sticking with a "yield after spinning
for a while" strategy just for simplicity.  However, as Juergen says,
this flag could, in principle, save all of the "spin for a while"
time-wasting in the first place.

As for misuse: if we do things right, a guest shouldn't be able to
gain an advantage by setting the flag when it doesn't need to.  If we
add the ability to preempt it after 1ms, and deduct the extra credits
from the VM for the extra time run, then it will only run a little
longer, and then have to wait longer to be scheduled again.  (I
think the more accurate credit-accounting part of Naoki's patches is
sure to be included in the scheduler revision.)  If it doesn't yield
after the critical section is over, it risks being preempted at the
next critical section.

The thing to test would be concurrent kernel builds and dbench, with
multiple domains, each domain's vCPU count equal to the number of pCPUs.
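A dry-run sketch of that setup is below.  Domain names and config paths are made up, and the commands are echoed rather than executed, since xm needs a real Xen host; the point is just the shape of the experiment (overcommitted pCPUs, lock-heavy workloads):

```shell
# Hypothetical setup: two domUs, each with as many vCPUs as the host
# has pCPUs, so spinlock holders can actually get descheduled.
PCPUS=$(getconf _NPROCESSORS_ONLN)
for d in dom1 dom2; do
  echo "xm create /etc/xen/$d.cfg  # vcpus = $PCPUS"
done
# Inside each guest, run the contending workloads concurrently:
echo "guest: make -j$PCPUS bzImage    # kernel build"
echo "guest: dbench $PCPUS            # dbench"
```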

Would you mind coding up a yield-after-spinning-a-while patch, and
comparing the results to your "don't-deschedule-me" patch, for the
kernel build at least, and possibly dbench?  I'm including some
patches which should be applied when testing the "yield after spinning
a while" patch; otherwise nothing interesting will happen.  They're a
bit hackish, but seem to work pretty well for their purpose.

 -George


On Fri, Dec 19, 2008 at 9:56 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> On 19/12/2008 09:33, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>>>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 19.12.08 10:10 >>>
>>> I haven't seen any win on any real world setup. So I remain unconvinced, and
>>> it'll need more than you alone championing the patch to get it in. There
>>> have been no other general comments so far (Jan's have been about specific
>>> details).
>>
>> I think I'd generally welcome a change like this, but I'm not certain how far
>> I feel convinced that the submission meets one very basic criterion:
>> avoidance of misuse of the feature by a domain (simply stating that a vCPU
>> will be de-scheduled after 1ms anyway doesn't seem sufficient to me). This
>> might need to include ways to differentiate between Dom0/DomU and/or
>> CPU- vs. IO-bound vCPUs.
>
> The most likely person to comment on that in the coming weeks would be
> George, who's kindly signed up to do some design work on the scheduler.
>
>  -- Keir
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>

Attachment: scheduler.cpu_pick-avoids-redundancy.patch
Description: Text Data

Attachment: scheduler.push-redundant-vcpus.patch
Description: Text Data

Attachment: scheduler.yield-reduces-priority.patch
Description: Text Data
