
Re: [Xen-devel] credit2 data structures



On Thu, Oct 13, 2011 at 10:42 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
> Apart from the possibility of allocating the arrays (and maybe also the
> cpumask_t-s) separately (for which I can come up with a patch on top
> of what I'm currently putting together) - is it really necessary to have
> all of these, especially since there can be multiple instances of the
> structure with CPU pools?

I'm not quite sure what you're asking.  Do you mean: are all of the
things in each runqueue structure necessary?  Specifically, I suppose,
the cpumask_t structures, since the rest of the structure isn't
significantly larger than the per-cpu structure for credit1?

At first blush, all of those cpu masks are necessary.  The assignment
of cpus to runqueues may be arbitrary, so we need a cpu mask for that.
In theory, "idle" and "tickled" only need bits for the cpus actually
assigned to this runqueue (which should be 2-8 under normal
circumstances).  But then we would need some mechanism to translate a
"mask of just these cpus" into a "mask of all cpus" in order to use
the normal cpumask operations, which seems like a lot of extra
complexity just to save a few bytes.  Surely a system with 4096
logical cpus can afford 6 megabytes of memory for scheduling?  (With
NR_CPUS at 4096, each cpumask_t is 512 bytes, so three masks in each
of 4096 runqueue slots come to exactly 6 MiB.)
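
To make that concrete, here's a rough sketch of the sort of structure
we're talking about (the names and layout are just illustrative, not
copied from the actual source):

/*
 * Illustrative sketch of a credit2-style per-runqueue structure.
 * Names are hypothetical, not the exact Xen source.
 */
#define NR_CPUS 4096
#define BITS_PER_LONG (8 * sizeof(unsigned long))

typedef struct {
    unsigned long bits[NR_CPUS / BITS_PER_LONG];  /* 512 bytes at 4096 cpus */
} cpumask_t;

struct runqueue_data {
    cpumask_t active;   /* which cpus are assigned to this runqueue */
    cpumask_t idle;     /* assigned cpus that are currently idle */
    cpumask_t tickled;  /* assigned cpus already prodded to reschedule */
    /* ... run list, load tracking, lock, etc. ... */
};

/*
 * Statically sizing the array at NR_CPUS gives the worst case:
 * 4096 runqueues * 3 masks * 512 bytes = 6 MiB of cpumasks.
 */
static struct runqueue_data runqueues[NR_CPUS];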

In any case, that 6MB figure overstates the real cost: the number of
runqueues in credit2 is actually meant to be much smaller than the
number of logical cpus -- one per L2 cache, which should cover between
2 and 8 logical cpus, depending on the architecture.  I only sized the
array at NR_CPUS because it was easier to get working.  Making it an
array of pointers, allocated on an as-needed basis, should reduce that
requirement a great deal.
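
Something along these lines, roughly (hypothetical helper name, error
handling elided; xzalloc is Xen's zeroing allocator):

/*
 * Sketch of the "array of pointers" change: a runqueue is only
 * allocated when first needed, so committed memory scales with the
 * number of L2 caches rather than with NR_CPUS.
 */
static struct runqueue_data *runqueues[NR_CPUS];

static struct runqueue_data *get_runqueue(unsigned int rqi)
{
    if ( runqueues[rqi] == NULL )
        runqueues[rqi] = xzalloc(struct runqueue_data);
    return runqueues[rqi];  /* may be NULL on allocation failure */
}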

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

