
Re: [Xen-devel] [PATCH] numa: select nodes by cpu affinity



On 04/08/2010 17:01, "Andrew Jones" <drjones@xxxxxxxxxx> wrote:

> I also considered managing the nodemask as new domain state, as you do,
> as it may come in useful elsewhere, but my principle of least patch
> instincts kept me from doing it...

Yeah, I don't fancy iterating over all vcpus for every little allocation, so
for me it's mainly a performance thing.

> I'm not sure about keeping track of the last_alloc_node and then always
> avoiding it (at least when there's more than 1 node) by checking it
> last. I liked the way it worked before, favoring the node of the
> currently running processor, but I don't have any perf numbers to know
> what would be better.

Well, you can expect vcpus to move around within their affinity masks over
moderate timescales (say, seconds or minutes). And in fact the original
credit scheduler *loves* to migrate vcpus around the place over much less
reasonable timescales than that (sub-second). It is nice to balance our
allocations across nodes rather than hitting one node 'unfairly' hard.

> I've attached a patch with a couple minor tweaks. It removes the
> unnecessary node clearing from an empty initialized nodemask, and also
> moves a couple of domain_update_node_affinity() calls outside
> for_each_vcpu loops.

Thanks, I tweaked your tweaks (just one tiny optimisation) and applied it,
so it should show up in the staging tree real soon now.

 -- Keir

> Andrew



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

