
Re: [Xen-devel] [RFC][PATCH] domheap optimization for NUMA



On 3/4/08 11:39, "Andre Przywara" <andre.przywara@xxxxxxx> wrote:

> By the way, can we solve the DMA_BITSIZE problem (your mail from 28th
> Feb) with this? If no node is specified, use the current behaviour of
> preferring non DMA zones, else stick to the given node.
> If you agree, I will implement this.

I don't think that gets us what we want. The fact is we specify a NUMA node
on nearly 100% of allocations (either explicitly via MEMF_node() or via
passing a non-NULL domain pointer). So you would *always* prefer local DMA
pages over remote non-DMA pages. That's not necessarily better than the
current policy.

My point in my email of Feb 28th was that we should set dma_bitsize
'appropriately' (well, according to a slightly arbitrary policy :-) so that
*some* DMA memory is set aside and only used to satisfy allocations which
cannot be satisfied by a remote node, while *some* memory is always made
available on every node for local allocations.
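[Editor's note: the fallback order Keir describes can be sketched as below.
This is an illustrative model only, not Xen's actual allocator code; the
function and enum names are invented for the example.]

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative sketch -- NOT Xen's real page allocator. It models the
 * policy described above: prefer local non-DMA memory, fall back to
 * remote non-DMA memory, and dip into the reserved local DMA pool
 * (set aside via dma_bitsize) only when nothing else can satisfy the
 * allocation. */
enum alloc_source { LOCAL_NON_DMA, REMOTE_NON_DMA, LOCAL_DMA, ALLOC_FAIL };

enum alloc_source pick_source(bool local_non_dma_free,
                              bool remote_non_dma_free,
                              bool local_dma_free)
{
    if (local_non_dma_free)
        return LOCAL_NON_DMA;       /* best case: local, non-DMA */
    if (remote_non_dma_free)
        return REMOTE_NON_DMA;      /* prefer remote over burning DMA pages */
    if (local_dma_free)
        return LOCAL_DMA;           /* last resort: reserved DMA pool */
    return ALLOC_FAIL;
}
```

The key contrast with the proposal quoted above is the middle step: a
remote non-DMA page is taken before a local DMA page, so the DMA pool
survives for allocations that genuinely need it.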

Does that make sense?

>> NUMA_NO_NODE probably needs to be pulled out of asm-x86/numa.h and made the
>> official arch-neutral way to specify 'don't care' for numa nodes.
> Is this really needed? I provided memflags=0 in all don't-care cases;
> this should work and is more compatible. But beware that this silently
> assumes in page_alloc.c#alloc_domheap_pages that NUMA_NO_NODE is 0xFF,
> otherwise this trick will not work.

Yes it is needed if your patch is to work across all architectures, not just
x86! Your current patch is broken in this respect because you quite
unnecessarily define domain_to_node() and vcpu_to_node() in asm/numa.h
rather than xen/numa.h.
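[Editor's note: one way to avoid silently depending on NUMA_NO_NODE being
0xFF is to encode "node + 1" in the flags, so a zero field unambiguously
means "don't care". The sketch below is illustrative; the macro names and
shift value are assumptions, not Xen's actual headers.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch -- not Xen's real xen/numa.h or mm.h. Encoding
 * node+1 into memflags means memflags==0 decodes to NUMA_NO_NODE
 * regardless of what value NUMA_NO_NODE happens to have. */
#define NUMA_NO_NODE     0xFFu   /* arch-neutral 'don't care' node */
#define MEMF_NODE_SHIFT  8       /* assumed bit position for the example */
#define MEMF_node(n)     ((((uint32_t)(n) + 1) & 0xFFu) << MEMF_NODE_SHIFT)

static inline unsigned int memflags_to_node(uint32_t memflags)
{
    uint32_t field = (memflags >> MEMF_NODE_SHIFT) & 0xFFu;
    return field ? field - 1 : NUMA_NO_NODE;
}
```

Note that MEMF_node(NUMA_NO_NODE) also encodes to a zero field (0xFF + 1
wraps to 0 under the mask), so explicit 'don't care' and an unset field
decode identically.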

Please address architectural portability and re-send the patch. Apart from
that I think it's just about ready to go in.

 Thanks,
 Keir

> Attached again a diff against my last version and the full patch (for
> some reason a missing bracket slipped through my last one, sorry for that).
> 
> This is only quick-tested (booted and created a guest on each node).



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
