
Re: [Xen-devel] "cpus" config parameter broken?



The current hypervisor interface has the advantage of flexibility. You can
easily enforce various policies (including strict checking, or modulo
arithmetic) in the toolstack on top of the current interface. But you can't
(easily) implement the current hypervisor policy in the toolstack on top of
strict checking or modulo arithmetic (if one of those policies becomes
hardcoded into the hypervisor).

The current interface assumes the lowest levels of the toolstack know what
they are doing, and presents a policy that is as permissive as possible.
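
To make that layering concrete, here is a minimal sketch of a strict
check done entirely in the toolstack, before the mask ever reaches the
hypervisor. The function name, the max_pcpu parameter and the plain
64-bit mask are illustrative assumptions, not the real libxc interface:

/*
 * Sketch only: a strict toolstack-side policy layered on top of a
 * permissive hypervisor interface.  max_pcpu and the uint64_t mask
 * are stand-ins for the real types.
 */
#include <stdint.h>
#include <stdio.h>

/* Refuse any affinity mask that names a CPU the platform can never have. */
static int toolstack_check_affinity(uint64_t requested, unsigned int max_pcpu)
{
    uint64_t allowed = (max_pcpu >= 64) ? ~0ULL : (1ULL << max_pcpu) - 1;

    if (requested & ~allowed)
        return -1;          /* strict policy: reject, don't silently trim */
    return 0;               /* otherwise pass the mask through unchanged  */
}

int main(void)
{
    /* CPUs 0-3 exist; asking for CPU 5 fails the strict check. */
    printf("%d\n", toolstack_check_affinity(0x0fULL, 4));  /* 0  */
    printf("%d\n", toolstack_check_affinity(0x20ULL, 4));  /* -1 */
    return 0;
}

A modulo policy would be the same shape: fold or reject the
out-of-range bits in that one function and leave the hypervisor
interface alone.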

 -- Keir

On 10/1/08 23:46, "Dan Magenheimer" <dan.magenheimer@xxxxxxxxxx> wrote:

>> You mean CPUs beyond NR_CPUS? All the cpumask iterators are careful
>> not to return values beyond NR_CPUS, regardless of what stray bits
>> lie beyond that range in the longword bitmap.
> 
> I see... you are allowing for any future box to grow to NR_CPUS,
> whereas I am assuming that, even with future hot-add processors,
> Xen will be told by the box the maximum number of processors that
> will ever be online (call this max_pcpu), and that max_pcpu is
> probably less than NR_CPUS.  So for these NR_CPUS - max_pcpu
> "non-existent" processors (and especially for the foreseeable
> future, on the vast majority of machines for which
> max_pcpu = npcpu = constant and npcpu << NR_CPUS), attempts to set
> bits for non-existent processors should not be silently ignored and
> discarded, but should either be disallowed entirely or, at least,
> retained and ignored.  I would propose "disallowed" for
> n > max_pcpu, and "retained and ignored" for
> online_pcpu < n < max_pcpu.
> 
> A related aside: for either model of hot-add (yours or mine),
> the current modulo mechanism in xm_vcpu_pin is not scalable
> and imho should be removed now, before anybody comes to
> depend on it.
> 
> And lastly, this hot-add discussion reinforces in my mind the
> difference between affinity, restriction, and pinning, which
> are all muddled together in the current hypervisor and tools.
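
For reference, a rough sketch of the clamping behaviour described in
the quoted ">>" paragraph above: an iterator over a CPU bitmap that
never yields an index >= NR_CPUS, whatever stray bits sit beyond that
range. Illustrative only; this is not Xen's actual cpumask code.

#include <stdio.h>

#define NR_CPUS 8   /* deliberately smaller than the 16-bit map below */

/* Return the first set bit at or after 'start', clamped to NR_CPUS. */
static int next_cpu(unsigned int start, unsigned short map)
{
    for (unsigned int cpu = start; cpu < NR_CPUS; cpu++)
        if (map & (1u << cpu))
            return cpu;
    return NR_CPUS;     /* "no more CPUs": stray bits 8-15 never show up */
}

int main(void)
{
    unsigned short map = 0xff05;   /* bits 0 and 2 valid, bits 8-15 stray */

    for (int cpu = next_cpu(0, map); cpu < NR_CPUS; cpu = next_cpu(cpu + 1, map))
        printf("cpu %d\n", cpu);   /* prints 0 and 2 only */
    return 0;
}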
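
The disallow / retain-and-ignore split proposed above could look
roughly like the sketch below. The names max_pcpu and online_pcpu and
the plain 64-bit mask are assumptions made for the example; this is an
illustration of the proposal, not an existing Xen interface.

#include <stdint.h>
#include <stdio.h>

struct pin_result {
    int      error;       /* non-zero: a CPU beyond max_pcpu was requested    */
    uint64_t stored;      /* retained verbatim, including not-yet-online bits */
    uint64_t effective;   /* what the scheduler would actually use today      */
};

static struct pin_result apply_pin(uint64_t requested,
                                   unsigned int online_pcpu,
                                   unsigned int max_pcpu)
{
    uint64_t can_exist = (max_pcpu    >= 64) ? ~0ULL : (1ULL << max_pcpu)    - 1;
    uint64_t online    = (online_pcpu >= 64) ? ~0ULL : (1ULL << online_pcpu) - 1;
    struct pin_result r = { 0, 0, 0 };

    if (requested & ~can_exist) {
        r.error = -1;                    /* disallowed: n > max_pcpu         */
        return r;
    }
    r.stored    = requested;             /* retained for future hot-add      */
    r.effective = requested & online;    /* ignored until those CPUs come up */
    return r;
}

int main(void)
{
    /* 4 CPUs online now, the box can grow to 8; ask for CPUs 1 and 6. */
    struct pin_result r = apply_pin(0x42ULL, 4, 8);

    printf("error=%d stored=%#llx effective=%#llx\n", r.error,
           (unsigned long long)r.stored, (unsigned long long)r.effective);
    return 0;
}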
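
Finally, a toy illustration of why a silent modulo wrap, as objected to
above, does not scale: the same out-of-range CPU number lands on
different physical CPUs on boxes of different sizes. Illustrative only;
this is not the xm_vcpu_pin code itself.

#include <stdio.h>

/* Fold an out-of-range CPU number back into range, modulo-style. */
static unsigned int wrap(unsigned int requested, unsigned int nr_cpus)
{
    return requested % nr_cpus;
}

int main(void)
{
    /* "cpus = 9" means CPU 1 on an 8-way box but CPU 9 on a 16-way box. */
    printf("8-way:  cpu %u\n", wrap(9, 8));    /* 1 */
    printf("16-way: cpu %u\n", wrap(9, 16));   /* 9 */
    return 0;
}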


