WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] credit2 data structures

To: Jan Beulich <JBeulich@xxxxxxxx>
Subject: Re: [Xen-devel] credit2 data structures
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 14 Oct 2011 06:35:46 +0200
Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 13 Oct 2011 21:36:46 -0700
In-reply-to: <4E970F2E020000780005B2F8@xxxxxxxxxxxxxxxxxxxx>
Organization: Fujitsu Technology Solutions
References: <4E96CEBD020000780005B151@xxxxxxxxxxxxxxxxxxxx> <CAFLBxZaHwav97=PVvZDTKzBVaeGc9FYh4xs9mj3cC=WbEzTZtw@xxxxxxxxxxxxxx> <4E96F48A020000780005B2A3@xxxxxxxxxxxxxxxxxxxx> <4E96DF7B.5060502@xxxxxxxxxxxxxx> <4E970F2E020000780005B2F8@xxxxxxxxxxxxxxxxxxxx>
On 10/13/2011 04:17 PM, Jan Beulich wrote:
> On 13.10.11 at 14:54, Juergen Gross <juergen.gross@xxxxxxxxxxxxxx> wrote:
>> On 10/13/2011 02:24 PM, Jan Beulich wrote:
>>> On 13.10.11 at 12:11, George Dunlap <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>>> On Thu, Oct 13, 2011 at 10:42 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> Apart from the possibility of allocating the arrays (and maybe also the
>>>>> cpumask_t-s) separately (for which I can come up with a patch on top
>>>>> of what I'm currently putting together) - is it really necessary to have
>>>>> all of these, all the more so given that there can be multiple instances
>>>>> of the structure with CPU pools?
>>>> I'm not quite sure what it is that you're asking.  Do you mean, are
>>>> all of the things in each runqueue structure necessary?  Specifically,
>>>> I guess, the cpumask_t structures (because the rest of the structure
>>>> isn't significantly larger than the per-cpu structure for credit1)?
>>> No, it's really the NR_CPUS-sized array of struct csched_runqueue_data.
>>> Credit1, on the other hand, has *no* NR_CPUS-sized arrays at all.

>>>> At first blush, all of those cpu masks are necessary.  The assignment
>>>> of cpus to runqueues may be arbitrary, so we need a cpu mask for that.
>>>> In theory, "idle" and "tickled" only need bits for the cpus actually
>>>> assigned to this runqueue (which should be 2-8 under normal
>>>> circumstances).  But then we would need some kind of mechanism to
>>>> translate "mask just for these cpus" to "mask of all cpus" in order to
>>>> use the normal cpumask mechanisms, which seems like a lot of extra
>>>> complexity just to save a few bytes.  Surely a system with 4096
>>>> logical cpus can afford 6 megabytes of memory for scheduling?
>>> I'm not concerned about the total amount if run on a system that
>>> large. I'm more concerned about this being a single chunk (possibly
>>> allocated post-boot, where we're really aiming at having no
>>> allocations larger than a page at all) and this size being allocated
>>> even when running on a much smaller system (i.e. depending only
>>> on compile-time parameters).

>>>> For one thing, the number of runqueues in credit2 is actually meant to
>>>> be smaller than the number of logical cpus -- it's meant to be one per
>>>> L2 cache, which should have between 2 and 8 logical cpus, depending on
>>>> the architecture.  I just put NR_CPUS because it was easier to get
>>>> working.  Making that an array of pointers, which is allocated on an
>>>> as-needed basis, should reduce that requirement a great deal.
>>> That would help, but would probably not suffice (since an NR_CPUS-
>>> sized array of pointers is still going to be larger than a page). We
>>> may need to introduce dynamic per-CPU data allocation for this...
>> Couldn't the run-queue data be dynamically allocated, with the pcpu-data
>> of credit2 containing a pointer to it?
> Not if the per-CPU data is also per scheduler instance (which I can't
> easily tell whether it is).

Each cpu has only one dynamically allocated scheduler pcpu-data area, which is
anchored in the per_cpu area of that cpu.


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
PDG ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
