
Re: [Xen-devel] Claim mode and HVM PoD interact badly



On Fri, Jan 10, 2014 at 03:10:48PM +0000, Wei Liu wrote:
> On Fri, Jan 10, 2014 at 09:58:07AM -0500, Konrad Rzeszutek Wilk wrote:
> > On Fri, Jan 10, 2014 at 11:59:42AM +0000, Ian Campbell wrote:
> > > create ^
> > > owner Wei Liu <wei.liu2@xxxxxxxxxx>
> > > thanks
> > > 
> > > On Fri, 2014-01-10 at 11:56 +0000, Wei Liu wrote:
> > > > When I have following configuration in HVM config file:
> > > >   memory=128
> > > >   maxmem=256
> > > > and have claim_mode=1 in /etc/xen/xl.conf, xl create fails with
> > > > 
> > > > xc: error: Could not allocate memory for HVM guest as we cannot claim 
> > > > memory! (22 = Invalid argument): Internal error
> > > > libxl: error: libxl_dom.c:647:libxl__build_hvm: hvm building failed
> > > > libxl: error: libxl_create.c:1000:domcreate_rebuild_done: cannot 
> > > > (re-)build domain: -3
> > > > libxl: error: libxl_dm.c:1467:kill_device_model: unable to find device 
> > > > model pid in /local/domain/82/image/device-model-pid
> > > > libxl: error: libxl.c:1425:libxl__destroy_domid: 
> > > > libxl__destroy_device_model failed for 82
> > > > 
> > > > With claim_mode=0, I can successfully create HVM guest.
> > > 
> > > Is it trying to claim 256M instead of 128M? (although the likelihood
> > 
> > No. 128MB actually.
> > 
> 
> Huh? My debug message says otherwise. It tried to claim 248MB (256MB -
> 8MB video ram). Did I misread your message...

The 'claim' here is the hypercall that sets the 'clamp' on how much
memory the guest can allocate. It is based on this code in
tools/libxc/xc_hvm_build_x86.c:

242     unsigned long i, nr_pages = args->mem_size >> PAGE_SHIFT;

        /* try to claim pages for early warning of insufficient memory available */
337     if ( claim_enabled ) {
343         rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);

Your 'mem_size' is 128MB, cur_pages is 0xc0, so it ends up 'claiming'
that the guest only needs 128MB - 768kB.
> 
> On the hypervisor side d->tot_pages = 30688, d->max_pages = 33024 (128MB
> + 1MB slack). So the claim failed.

Correct.
> 
> > > that you only have 128-255M free is quite low, or are you
> > > autoballooning?)
> > 
> > This patch fixes it for me. It basically sets the amount of pages
> > claimed to be 'maxmem' instead of 'memory' for PoD.
> > 
> > I don't know PoD very well, and this claim is only valid during the
> > allocation of the guests memory - so the 'target_pages' value might be
> > the wrong one. However looking at the hypervisor's
> > 'p2m_pod_set_mem_target' I see this comment:
> > 
> >  316  *     B <T': Set the PoD cache size equal to the number of outstanding PoD
> >  317  *   entries.  The balloon driver will deflate the balloon to give back
> >  318  *   the remainder of the ram to the guest OS.
> > 
> > Which implies to me that we _need_ the 'maxmem' amount of memory at boot 
> > time.
> > And then it is the responsibility of the balloon driver to give the memory
> > back (and this is where the 'static-max' et al come in play to tell the
> > balloon driver to balloon out).
> > 
> > 
> > diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
> > index 77bd365..65e9577 100644
> > --- a/tools/libxc/xc_hvm_build_x86.c
> > +++ b/tools/libxc/xc_hvm_build_x86.c
> > @@ -335,7 +335,12 @@ static int setup_guest(xc_interface *xch,
> >  
> >      /* try to claim pages for early warning of insufficient memory available */
> >      if ( claim_enabled ) {
> > -        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
> > +        unsigned long nr = nr_pages - cur_pages;
> > +
> > +        if ( pod_mode )
> > +            nr = target_pages - 0x20;
> > +
> 
> Yes it should work because this makes nr smaller than d->tot_pages and
> d->max_pages. But according to the comment you pasted above this looks
> like wrong fix...

It should be:

tot_pages = 128MB
max_pages = 256MB
nr = 256MB - 0x20 pages.

So tot_pages < nr < max_pages, if I got my variables right: 'nr' is
greater than tot_pages but less than max_pages.

> 
> Wei.
> 
> > +        rc = xc_domain_claim_pages(xch, dom, nr);
> >          if ( rc != 0 )
> >          {
> >              PERROR("Could not allocate memory for HVM guest as we cannot claim memory!");

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

