
Re: [Xen-devel] [RFC][PATCH] scheduler: credit scheduler for client virtualization


  • To: "NISHIGUCHI Naoki" <nisiguti@xxxxxxxxxxxxxx>
  • From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Thu, 4 Dec 2008 12:37:14 +0000
  • Cc: Ian.Pratt@xxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, disheng.su@xxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 04 Dec 2008 04:37:40 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

On Thu, Dec 4, 2008 at 12:21 PM, George Dunlap
<George.Dunlap@xxxxxxxxxxxxx> wrote:
> I see -- the current setup is good if there's only one "boosted" VM
> (per cpu) at a time; but if there are two "boosted" VMs, they're back
> to taking turns at 30 ms.  Your 2ms patch allows several
> latency-sensitive VMs to share the "low latency" boost.  That makes
> sense.  I agree with your suggestion: we can set the timer to 2ms only
> if the next waiting vcpu on the queue is also BOOST.
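
For concreteness, a rough, self-contained sketch of that rule (plain C
with made-up stand-in names, not the real sched_credit.c types): only
drop to the 2ms slice when the vcpu being scheduled is BOOST and the
next vcpu waiting on the runqueue is BOOST as well.

/* Sketch only -- simplified stand-ins, not actual Xen definitions. */

enum pri { PRI_BOOST, PRI_UNDER, PRI_OVER };

struct vcpu_s {
    enum pri pri;
};

/* Choose the timer interval (in ms) for the vcpu we're about to run.
 * 'next_waiting' is the head of the runqueue, or NULL if it's empty. */
static long pick_timeslice_ms(const struct vcpu_s *scheduled,
                              const struct vcpu_s *next_waiting)
{
    if (scheduled->pri == PRI_BOOST &&
        next_waiting != NULL && next_waiting->pri == PRI_BOOST)
        return 2;    /* several boosted vcpus share the low-latency slice */

    return 30;       /* otherwise keep the default 30ms slice */
}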

There was a paper earlier this year about scheduling and I/O performance:
 http://www.cs.rice.edu/CS/Architecture/docs/ongaro-vee08.pdf

One of the things they noted was that if a driver domain is accepting
network packets for multiple VMs, we sometimes get the following
pattern (see the sketch after the list):
* driver domain wakes up, starts processing packets.  Because it's in
"over", it doesn't get boosted.
* Passes a packet to VM 1, waking it up.  It runs in "boost",
preempting the (now lower-priority) driver domain.
* Other packets (possibly even for VM 1) sit in the driver domain's
queue, waiting for it to get cpu time.
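
Roughly, the wake-up rule that produces this looks like the following
(same made-up stand-ins as the sketch above, not the actual
csched_vcpu_wake() code): a waking vcpu is promoted to BOOST only if
it still has credit, i.e. is UNDER; a driver domain that has already
burned its credit wakes at OVER and is immediately preempted by the
BOOST guest it just delivered a packet to.

/* Sketch only -- reuses the stand-in types from the sketch above. */
static void on_wake(struct vcpu_s *v)
{
    /* Promote to BOOST only if the vcpu still has credit left. */
    if (v->pri == PRI_UNDER)
        v->pri = PRI_BOOST;

    /* A driver domain already at PRI_OVER stays at PRI_OVER, so the
     * BOOST guest it just woke preempts it and the rest of the
     * packets wait in the driver domain's queue. */
}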

Their tests, for 3 networking guests and 3 cpu-intensive guests,
showed a 40% degradation in performance due to this problem.  While
we're thinking about the scheduler, it might be worth seeing if we can
solve this.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel