To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Scheduler follow-up: Design target (was [RFC] Scheduler work, part 1)
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Tue, 14 Apr 2009 13:38:16 +0100
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, "Tian, Kevin" <kevin.tian@xxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Hey all,

Thanks for the feedback; and sorry for sending the RFC just before a
holiday weekend, which delayed this write-up of a response.  (OTOH,
since I did read the e-mails as they came out, the delay has given me
more time to think and coalesce.)

A couple of high-level bits: this first e-mail was meant to lay out
design goals and discuss the interface.  If we can agree (for
example) that we want latency-sensitive workloads (such as network,
audio, and video) to perform well, and use such workloads as test
cases while developing, then we don't need to agree on a specific
algorithm up-front.

OK, with that in mind, some specific responses:

* [Jeremy] Is that forward-looking enough?  That hardware is currently
available; what's going to be commonplace in 2-3 years?

I think we need to distinguish between "works optimally" and "works
well".  Obviously we want the design to be scalable, and we don't want
to have to do a major revision in a year because 16 logical cpus works
well but 32 tanks.  And it may be a good idea to "lead" the target, so
that when we actually ship something it will be right on, rather than
6 months behind.

Still, in 2-3 years, will the vast majority of servers have 32 logical
cpus, or still only 16 or fewer?

Any thoughts on a reasonable target?

* [Kevin Tian] How was 80%/800% chosen here?

Heuristics.  80% is a general rule of thumb for optimal server
performance.  Above 80% you may get higher total throughput (or maybe
not), but it becomes common for individual VMs to have to wait for
CPU resources, which can cause a significant performance impact.

(I should clarify: 80% means 80% of *all* resources, not 80% of one
cpu.  I.e., if you have 4 cores, xenuse may report 360% of one cpu,
but 100% of all resources would be 400% of one cpu.)
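
To make that arithmetic concrete, here's a quick C sketch; the
function name is made up for illustration and isn't part of any
existing tool:

#include <stdio.h>

/* Convert a utilization figure reported as a percentage of *one* cpu
 * into a percentage of *all* resources.  Hypothetical helper, for
 * illustration only. */
static double pct_of_all_resources(double pct_of_one_cpu, int ncpus)
{
    return pct_of_one_cpu / ncpus;
}

int main(void)
{
    double reported = 360.0;  /* tool reports 360% of one cpu */
    int ncpus = 4;            /* 4-core host */
    double total = pct_of_all_resources(reported, ncpus);

    /* 360% of one cpu on 4 cpus = 90% of all resources */
    printf("%.0f%% of one cpu = %.0f%% of all resources\n",
           reported, total);
    printf("within the 80%% target? %s\n", total <= 80.0 ? "yes" : "no");
    return 0;
}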

800% was just a general boundary.  I think it's sometimes as important
to say what you *aren't* doing as what you are doing.  For example, if
someone comes in and says, "This new scheduler sucks if you have a
load average of 10 (i.e., 1000% utilization)", we can say, "Running
with a load average of 10 isn't what we're designing for.  Patches
will be accepted if they don't adversely impact performance at 80%.
Otherwise feel free to write your own scheduler for that kind of
system."  OTOH, if a hosting provider (for example) says, "Performance
really tanks around a load of 3", we should make an effort to
accomodate that.
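
Just to illustrate what that envelope means (the boundaries are the
80%/800% figures above; this is a sketch, not actual scheduler code,
and "load" here is total cpu demand as a multiple of total host
capacity):

#include <stdio.h>

/* Classify a host's load against the 80%/800% design envelope.
 * A load of 10 corresponds to 1000% utilization of all resources. */
static const char *design_envelope(double load)
{
    if (load <= 0.8)
        return "primary target: VMs should rarely wait for cpu";
    if (load <= 8.0)
        return "supported overcommit: should degrade gracefully";
    return "out of scope: not a design target";
}

int main(void)
{
    double loads[] = { 0.5, 3.0, 10.0 };
    for (int i = 0; i < 3; i++)
        printf("load %4.1f (%5.0f%%): %s\n",
               loads[i], loads[i] * 100.0, design_envelope(loads[i]));
    return 0;
}

So the hosting provider's load of 3 falls squarely inside what we're
designing for, while a load of 10 does not.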

* [Kevin Tian] How many VMs in total would you like to support?

Good question.  I'll do some research into how many VMs a virtual
desktop system might want to support.

For servers, I think a reasonable design space would be from 1 VM per
3 cores (for a few extremely high-load servers) up to 8 VMs per core
(for heavily consolidated servers).  I suppose server farms may want
even more.
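
To put rough numbers on that range, here's a throwaway sketch (the
helper is invented for illustration) of what the design space means
for a few host sizes:

#include <stdio.h>

/* Rough VM-count design space for an N-core host: from 1 VM per
 * 3 cores up to 8 VMs per core.  Hypothetical helper. */
static void vm_design_space(int cores, int *min_vms, int *max_vms)
{
    *min_vms = (cores + 2) / 3;  /* 1 VM per 3 cores, rounded up */
    *max_vms = cores * 8;        /* 8 VMs per core */
}

int main(void)
{
    int core_counts[] = { 4, 16, 32 };
    for (int i = 0; i < 3; i++) {
        int lo, hi;
        vm_design_space(core_counts[i], &lo, &hi);
        printf("%2d cores: roughly %3d to %3d VMs\n",
               core_counts[i], lo, hi);
    }
    return 0;
}

So a 16-core box would land anywhere from about 6 to 128 VMs, which is
a wide range for one set of scheduler defaults to cover.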

Does anyone else have any thoughts on this subject -- either
suggestions for different numbers, or other use cases they want
considered?
