
Re: [Xen-devel] [PATCH] x86/mm: pod: Use the correct memory flags for alloc_domheap_page{,s}



On Thu, 2015-10-22 at 17:13 +0100, Julien Grall wrote:
> On 22/10/15 16:48, Jan Beulich wrote:
> > > > > On 22.10.15 at 17:43, <julien.grall@xxxxxxxxxx> wrote:
> > > @@ -477,7 +477,7 @@ p2m_pod_offline_or_broken_replace(struct page_info *p)
> > >  
> > >      free_domheap_page(p);
> > >  
> > > -    p = alloc_domheap_page(d, PAGE_ORDER_4K);
> > > +    p = alloc_domheap_page(d, 0);
> > 
> > I realize that this is the easiest fix, but I think here we instead
> > want
> > something like
> 
> It sounds sensible to me to re-allocate the page on the same NUMA
> node.
> 
Indeed. It may be worth mentioning this in the changelog too, IMHO.
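
Just to make sure I understand what is being suggested: something along
the lines of the below, I guess? (Completely untested sketch, only my
reading of Jan's comment; it assumes we can sample the node with the
usual phys_to_nid()/page_to_maddr() helpers *before* freeing the page,
and then pass it back via the MEMF_node() memflag.)

    /* remember which node the old page lives on, before freeing it */
    unsigned int node = phys_to_nid(page_to_maddr(p));

    free_domheap_page(p);

    /* re-allocate the replacement page on that same node */
    p = alloc_domheap_page(d, MEMF_node(node));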

> I will send another version of this patch, though I would appreciate
> it if someone could test it, because I don't have any NUMA platform.
> 
I'm up for it... What would be a reasonable test that actually
stresses this?

I can certainly do a "regular" test cycle, such as: boot --> create a
guest --> play a bit with it --> shutdown. Is that enough?

I think it should be an HVM guest, right? And perhaps I should specify
different memory= and maxmem= values?
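
Something like the below excerpt is what I would use, I suppose
(hypothetical config, values picked more or less at random; IIRC it is
maxmem > memory that actually puts an HVM guest in PoD mode):

    builder = "hvm"   # PoD is only used for HVM guests
    memory  = 1024    # MiB actually populated at boot (the PoD target)
    maxmem  = 4096    # MiB the guest sees; the gap is populated on demand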

Just let me know and, if you remember, Cc me when sending the next version.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

