
Re: [Xen-devel] RFC: Still TODO for 4.2? xl domain numa memory allocation vs xm/xend



On Fri, 2012-01-20 at 12:33 +0000, Ian Campbell wrote:
> On Fri, 2012-01-20 at 12:04 +0000, Dario Faggioli wrote:
> > On Fri, 2012-01-20 at 11:54 +0000, Ian Campbell wrote: 
> 
> > > > Of course, even in such mode, if the user explicitly tells us what he
> > > > wants, e.g., by means of cpupools, pinning, etc., we should still honour
> > > > such request.
> > > 
> > > Do we get this right now?
> > > 
> > Sorry, not sure what you mean here...
> 
> I meant is "if the user explicitly tells us what he wants, e.g., by
> means of cpupools, pinning, etc." do we still honour such request?

It appears that with cpupools we do not. After running
cpupool-numa-split I started a guest with pool=Pool-node1 and got:
# xl cpupool-list 
Name               CPUs   Sched     Active   Domain count
Pool-node0           8    credit       y          1
Pool-node1           8    credit       y          1

(so dom0 on node0, guest on node 1) but:
(XEN) Memory location of each domain:
(XEN) Domain 0 (total: 131072):
(XEN)     Node 0: 61098
(XEN)     Node 1: 69974
(XEN) Domain 1 (total: 6290427):
(XEN)     Node 0: 3407101
(XEN)     Node 1: 2883326
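For reference, this is roughly the sequence I used (the config file name
is just a placeholder, and IIRC the 'u' debug key is what produces the
per-node dump above):

# xl cpupool-numa-split
# cat guest.cfg                         # relevant fragment only
pool = "Pool-node1"
# xl create guest.cfg
# xl cpupool-list
# xl debug-keys u ; xl dmesg | tail -20 # per-domain, per-node page counts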

With your patches to support vcpu pinning, and giving the guest
vcpus="8-15", I see effectively the same thing. (xl vcpu-list shows the
affinity is correct, so your patches seem correct in that regard.)
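The recipe for that test was the same as above, just with the pool= line
swapped for the affinity setting, i.e. roughly:

vcpus = "8-15"                          # in the guest config, instead of pool=
# xl create guest.cfg
# xl vcpu-list                          # affinity looks right
# xl debug-keys u ; xl dmesg | tail -20 # memory still split across both nodes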

Your patches do the affinity setting pretty early so I'm not sure what's
going on.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

