
Re: [Xen-devel] [Patch 2 of 2]: PV-domain SMP performance Linux-part


  • To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>
  • From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Fri, 19 Dec 2008 15:15:09 +0000
  • Cc: Juergen Gross <juergen.gross@xxxxxxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 19 Dec 2008 07:15:40 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

The general idea seems interesting.  I think we've kicked it around
internally before, but ended up sticking with a "yield after spinning
for a while" strategy just for simplicity.  However, as Juergen says,
this flag could, in principle, avoid all of the "spin for a while"
time-wasting in the first place.
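
Just to make the comparison concrete, here is roughly the kind of
lock slow path I have in mind.  This is only a sketch, not Juergen's
patch and not the real Linux lock code; SPIN_THRESHOLD and the lock
layout are made up for illustration.

/*
 * Sketch: spin briefly, and if the lock still hasn't come free
 * (the holder is probably descheduled), yield the vCPU back to Xen
 * instead of burning the rest of the timeslice.
 */
#define SPIN_THRESHOLD 1024

static void spin_lock_then_yield(volatile int *lock)
{
    for (;;) {
        int i;

        for (i = 0; i < SPIN_THRESHOLD; i++) {
            if (*lock == 0 && __sync_lock_test_and_set(lock, 1) == 0)
                return;                 /* acquired while spinning */
            cpu_relax();                /* pause hint while we poll */
        }

        /* Holder is likely off-CPU: give our pCPU to someone else. */
        HYPERVISOR_sched_op(SCHEDOP_yield, NULL);
    }
}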

As for misuse: if we do things right, a guest shouldn't be able to
gain an advantage from setting the flag when it doesn't need to.  If
we add the ability to preempt it after 1ms, and deduct the extra
credits from the VM for the extra time run, then it will only run a
little longer, and then have to wait longer to be scheduled again.
(I think the more accurate credit accounting part of Naoki's patches
is sure to be included in the scheduler revision.)  If the guest
doesn't yield after the critical section is over, it risks being
pre-empted at the next critical section.
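
To make the accounting idea concrete, something along these lines on
the hypervisor side would do it.  Again, purely a sketch: none of
these identifiers come from the actual credit scheduler, and the
constants are arbitrary.

#include <stdint.h>

/*
 * Sketch with invented names: a flagged vCPU may overrun its slice
 * by at most ~1ms, and the overrun is billed against its credits
 * afterwards, so setting the flag gratuitously only means waiting
 * longer to be scheduled again.
 */
#define NO_DESCHED_GRACE_NS 1000000ULL      /* 1ms cap on the overrun */

struct sched_vcpu {
    int      no_desched;    /* guest-set "in critical section" flag */
    int      credits;       /* remaining scheduling credits */
    uint64_t overrun_ns;    /* time run past the normal slice end */
};

/* Arbitrary ns-to-credits conversion, just for the sketch. */
static int credits_for_ns(uint64_t ns)
{
    return (int)(ns / 10000);
}

/* Scheduler tick: the vCPU's timeslice has expired -- preempt it? */
static int should_preempt(struct sched_vcpu *v, uint64_t ns_past_slice)
{
    if (!v->no_desched)
        return 1;                           /* normal case: preempt now */

    v->overrun_ns = ns_past_slice;
    return ns_past_slice >= NO_DESCHED_GRACE_NS;  /* allow up to 1ms */
}

/* When the vCPU finally comes off the CPU, charge for the overrun. */
static void account_overrun(struct sched_vcpu *v)
{
    v->credits -= credits_for_ns(v->overrun_ns);
    v->overrun_ns = 0;
}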

The thing to test would be concurrent kernel builds and dbench, with
multiple domains, each domain having vcpus == pcpus.

Would you mind coding up a yield-after-spinning-for-a-while patch,
and comparing the results to your "don't-deschedule-me" patch, for
the kernel build at least, and possibly dbench?  I'm attaching some
patches which should be applied when testing the "yield after
spinning for a while" patch; otherwise nothing interesting will
happen.  They're a bit hackish, but seem to work pretty well for
their purpose.

 -George


On Fri, Dec 19, 2008 at 9:56 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> On 19/12/2008 09:33, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>>>>> Keir Fraser <keir.fraser@xxxxxxxxxxxxx> 19.12.08 10:10 >>>
>>> I haven't seen any win on any real world setup. So I remain unconvinced, and
>>> it'll need more than you alone championing the patch to get it in. There
>>> have been no other general comments so far (Jan's have been about specific
>>> details).
>>
>> I think I'd generally welcome a change like this, but I'm not certain how far
>> I feel convinced that the submission meets one very basic criteria:
>> avoidance of mis-use of the feature by a domain (simply stating that a vCPU
>> will be de-scheduled after 1ms anyway doesn't seem sufficient to me). This
>> might need to include ways to differentiate between Dom0/DomU and/or
>> CPU- vs IO-bound vCPU-s.
>
> The most likely person to comment on that in the coming weeks would be
> George, who's kindly signed up to do some design work on the scheduler.
>
>  -- Keir
>
>

Attachment: scheduler.cpu_pick-avoids-redundancy.patch
Description: Text Data

Attachment: scheduler.push-redundant-vcpus.patch
Description: Text Data

Attachment: scheduler.yield-reduces-priority.patch
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
