
Re: [Xen-devel] [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity



>>> On 06.11.13 at 10:39, Dario Faggioli <dario.faggioli@xxxxxxxxxx> wrote:
> Now, we're talking about killing vc->cpu_affinity and not introducing
> vc->node_affinity and, instead, introducing vc->cpu_hard_affinity and
> vc->cpu_soft_affinity and, more importantly, not linking any of the
> above to d->node_affinity. That means all the above operations
> _will_NOT_ automatically affect d->node_affinity any longer, at least
> from the hypervisor (and, most likely, libxc) perspective. OTOH, I'm
> almost sure that I can make libxl (and xl) retain the exact same
> behaviour it currently exposes to the user (just by adding an extra
> call when needed).
> 
> So, although all this won't be an issue for xl and libxl consumers (or,
> at least, that's my goal), it will change how the hypervisor used to
> behave in all those situations. This means that xl and libxl users will
> see no change, while folks issuing hypercalls and/or libxc calls will.
> 
> Is that ok? I mean, I know there are no stability concerns for those
> APIs, but still, is that an acceptable change?

I would think that as long as d->node_affinity is empty, it should
still be set based on all vCPU-s' hard affinities. Since trying to
set an empty node affinity is an error anyway (possibly to be
interpreted as "do this for me"), there's no conflict here. Or
alternatively a flag could be set once the node affinity got set
explicitly, preventing further implicit updates.

Jan
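Jan's suggestion (treat an empty mask as "compute this for me", or latch
a flag once the affinity is set explicitly so implicit updates stop) can
be sketched roughly as below. This is a simplified illustrative model,
not actual Xen code: the `nodemask_t` bitmask, the struct layouts, and
the `node_affinity_explicit` flag name are all assumptions made for the
example.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical simplified model: NUMA nodes represented as a bitmask. */
typedef uint32_t nodemask_t;

struct vcpu {
    nodemask_t hard_affinity_nodes;  /* nodes covered by this vCPU's hard affinity */
};

struct domain {
    nodemask_t node_affinity;
    bool node_affinity_explicit;     /* latched once set by the toolstack */
    struct vcpu *vcpus;
    unsigned int nr_vcpus;
};

/* Recompute d->node_affinity as the union of all vCPUs' hard
 * affinities, unless it was explicitly set (the flag alternative). */
static void domain_update_node_affinity(struct domain *d)
{
    if (d->node_affinity_explicit)
        return;
    nodemask_t m = 0;
    for (unsigned int i = 0; i < d->nr_vcpus; i++)
        m |= d->vcpus[i].hard_affinity_nodes;
    d->node_affinity = m;
}

/* An empty mask is interpreted as "derive it for me from the vCPUs";
 * a non-empty mask is taken as explicit and blocks implicit updates. */
static void domain_set_node_affinity(struct domain *d, nodemask_t mask)
{
    if (mask == 0) {
        d->node_affinity_explicit = false;
        domain_update_node_affinity(d);
    } else {
        d->node_affinity = mask;
        d->node_affinity_explicit = true;
    }
}
```

With this scheme, per-vCPU hard-affinity changes keep feeding
d->node_affinity until the toolstack sets it explicitly, which matches
the "no conflict" observation above.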


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

