xen-devel

Re: [Xen-devel] planned csched improvements?

To: Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>
Subject: Re: [Xen-devel] planned csched improvements?
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Tue, 20 Oct 2009 10:37:19 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20091019170150.59e13b73@xxxxxxxxxxxxxxxxxxxx>
References: <4ACF6A8F02000078000190E2@xxxxxxxxxxxxxxxxxx> <de76405a0910090859x2d08ae8cl70e2f848f247daa3@xxxxxxxxxxxxxx> <20091016171652.2d8aa35a@xxxxxxxxxxxxxxxxxxxx> <de76405a0910190234r2276c66fkead48420ffb6e25e@xxxxxxxxxxxxxx> <20091019170150.59e13b73@xxxxxxxxxxxxxxxxxxxx>
On Tue, Oct 20, 2009 at 1:01 AM, Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> Yeah, I've been thinking in the back of my mind, some sort of multiple
> runqueues

There already are multiple runqueues (one per physical cpu); the
overhead comes from the "steal work" method of moving vcpus between
them, which works fine for a small number of cpus but doesn't scale
well.
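
Roughly, the pattern looks like the sketch below.  This is a made-up
illustration, not the actual csched code; every name in it is
invented.  The point is just that an idle cpu has to probe every peer
runqueue (and take every peer's lock), so the cost grows with the
number of cpus:

/* Illustrative "steal work" sketch (NOT the real credit-scheduler
 * code; all names are made up for this email). */

#include <stddef.h>

#define NR_CPUS 128

struct vcpu;                        /* opaque for this sketch */

struct runqueue {
    /* lock */                      /* each runqueue has its own lock */
    struct vcpu *head;              /* first runnable vcpu, or NULL */
};

static struct runqueue runq[NR_CPUS];

static struct vcpu *dequeue(struct runqueue *rq)
{
    struct vcpu *v = rq->head;
    if (v)
        rq->head = NULL;            /* single-entry queue, for brevity */
    return v;
}

/* Called when cpu 'cpu' has nothing runnable locally. */
struct vcpu *steal_work(int cpu)
{
    for (int peer = 0; peer < NR_CPUS; peer++) {
        struct vcpu *v;
        if (peer == cpu)
            continue;
        /* lock(&runq[peer]); */
        v = dequeue(&runq[peer]);   /* O(NR_CPUS) probes + lock traffic */
        /* unlock(&runq[peer]); */
        if (v)
            return v;               /* migrate v to this cpu */
    }
    return NULL;                    /* go idle */
}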

Hmm, I thought I had written up my plans for load-balancing in an
e-mail to the list, but I can't seem to find them now.  Stand by for a
description sometime. :-)

> Agree. I'm hoping to collect all that information over the next couple/few
> months.  The last attempt, made a year ago, didn't yield a whole lot
> of information because of problems with 32bit tools and 64bit guest apps
> interaction.

I have some good tools for collecting and analyzing scheduling
activity using xentrace and xenalyze.  When you get things set up,
let me know and I'll post some information about using xentrace /
xenalyze to characterize a workload's scheduling.
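
For the impatient, the capture-and-summarize cycle looks roughly like
the following.  The event mask is the TRC_SCHED class, but I'm quoting
the flags from memory, so check the xentrace man page and the xenalyze
README before relying on them:

    xentrace -D -e 0x0002f000 sched.trace   # capture scheduler-class events
    (run the workload, then ^C xentrace)
    xenalyze --summary sched.trace          # per-domain/vcpu scheduling summary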

> In a nutshell, there's tremendous smarts in the DB, and so I think it
> prefers a simplified scheduler/OS that it can provide hints to and interact
> a little with.  Ideally, it would like the ability for a privileged thread
> to tell the OS/hyp, I want to yield the cpu to thread #xyz.

If the thread is not currently scheduled on a vcpu by the OS, then
when the DB says to yield to that thread, the OS can simply switch to
it on the currently running vcpu; no hypervisor changes are needed.

The only potential modification would be if the DB wants to yield to a
thread which is scheduled on another vcpu, but that vcpu is not
currently running.  Then the guest OS *may* want to be able to ask the
HV to yield the currently running vcpu to the other vcpu.  That
interface is worth thinking about.
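
Something like the following, purely hypothetical, guest-side call is
what I have in mind.  SCHEDOP_yield and HYPERVISOR_sched_op() are
real; SCHEDOP_yield_to, its op number, and the argument struct are
invented here purely for illustration:

/* Hypothetical "directed yield" interface -- this does NOT exist in
 * Xen today; only SCHEDOP_yield does. */

struct sched_yield_to {
    unsigned int vcpu_id;     /* vcpu the guest wants to run instead */
};

#define SCHEDOP_yield_to 42   /* made-up op number */

extern long HYPERVISOR_sched_op(int cmd, void *arg);

/* Guest scheduler: the DB asked to run a thread bound to 'target',
 * but 'target' is not currently running; donate our timeslice. */
static void yield_to_vcpu(unsigned int target)
{
    struct sched_yield_to op = { .vcpu_id = target };
    HYPERVISOR_sched_op(SCHEDOP_yield_to, &op);
}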

> Moreover, my focus is large systems: 32 to 128 logical processors, with
> 1/2 to 1TB of memory.  As such, I also want to address VCPUs being
> confined to a logical block of physical CPUs, taking into consideration
> that licenses are per physical cpu core.

This sounds like it would benefit from the "CPU pools" patch submitted
by Juergen Gross.
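
Until something like that goes in, plain vcpu pinning gives a cruder
version of the same confinement; for example (domain name invented):

    xm vcpu-pin db-guest all 0-31   # confine all of db-guest's vcpus to pcpus 0-31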

 -George
