
Re: [Xen-devel] PVH dom0 creation fails - the system freezes



On Thu, Jul 26, 2018 at 10:31:21AM +0200, Juergen Gross wrote:
> On 26/07/18 10:15, bercarug@xxxxxxxxxx wrote:
> > On 07/25/2018 07:12 PM, Roger Pau Monné wrote:
> >> On Wed, Jul 25, 2018 at 05:05:35PM +0300, bercarug@xxxxxxxxxx wrote:
> >>> On 07/25/2018 05:02 PM, Wei Liu wrote:
> >>>> On Wed, Jul 25, 2018 at 03:41:11PM +0200, Juergen Gross wrote:
> >>>>> On 25/07/18 15:35, Roger Pau Monné wrote:
> >>>>>>> What could be causing the available memory loss problem?
> >>>>>> That seems to be Linux aggressively ballooning out memory: you go
> >>>>>> from 7129M total memory to 246M. Are you creating a lot of domains?
> >>>>> This might be related to the tools thinking dom0 is a PV domain.
> >>>> Good point.
> >>>>
> >>>> In that case, xenstore-ls -fp would also be useful. The output should
> >>>> show the balloon target for Dom0.
> >>>>
> >>>> You can also try setting autoballoon to off in /etc/xen/xl.cfg to see
> >>>> if it makes any difference.
> >>>>
> >>>> Wei.
> >>> Also tried setting autoballooning off, but it had no effect.
> >> This is a Linux/libxl issue, and I'm not sure what's the best way to
> >> solve it. Linux has the following 'workaround' in the balloon driver:
> >>
> >> err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
> >>                    &static_max);
> >> if (err != 1)
> >>     static_max = new_target;
> >> else
> >>     static_max >>= PAGE_SHIFT - 10;
> >> target_diff = xen_pv_domain() ? 0
> >>               : static_max - balloon_stats.target_pages;
> >>
> >> I suppose this is used to cope with the memory reporting mismatch
> >> usually seen on HVM guests. This however interacts quite badly with a
> >> PVH Dom0 that has, for example:
> >>
> >> /local/domain/0/memory/target = "8391840"   (n0)
> >> /local/domain/0/memory/static-max = "17179869180"   (n0)
> >>
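To make the numbers concrete, here is a rough standalone sketch of the
arithmetic with the values above (assuming 4K pages, so the PAGE_SHIFT - 10
shift is just a divide by 4; the variable names only mirror the snippet,
this is not driver code):

#include <stdio.h>

int main(void)
{
    /* Values from the xenstore dump above, both in KiB. */
    unsigned long long target_kib     = 8391840ULL;       /* memory/target */
    unsigned long long static_max_kib = 17179869180ULL;   /* memory/static-max */

    /* KiB -> 4K pages, as done by the PAGE_SHIFT - 10 shifts. */
    long long target_pages     = target_kib >> 2;
    long long static_max_pages = static_max_kib >> 2;

    /* Non-PV path of the snippet: static_max - balloon_stats.target_pages */
    long long target_diff = static_max_pages - target_pages;

    printf("target       = %lld pages (~%lld MiB)\n",
           target_pages, target_pages * 4 / 1024);
    printf("static-max   = %lld pages (~%lld GiB)\n",
           static_max_pages, static_max_pages * 4 / (1024 * 1024));
    printf("target_diff  = %lld pages\n", target_diff);
    /* The driver later balloons to new_target - target_diff, which here
     * is massively negative, i.e. "give back everything you can". */
    printf("ballooned-to = %lld pages\n", target_pages - target_diff);
    return 0;
}

That huge target_diff is what makes dom0 balloon down from ~7G to ~250M
without anyone asking for it.
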
> >> One way to solve this is to set target and static-max to the same
> >> value initially, so that target_diff on Linux is 0. Another option
> >> would be to force target_diff = 0 for Dom0.
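
As a purely illustrative sketch of that second option (not a submitted
patch), the driver snippet above could special-case the initial domain,
e.g.:

    target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0
            : static_max - balloon_stats.target_pages;

xen_initial_domain() is the existing "am I dom0" helper, so a PVH Dom0
would then take the same path as the PV case.
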
> >>
> >> I'm attaching a patch for libxl that should solve this, could you
> >> please give it a try and report back?
> >>
> >> I'm still unsure however about the best way to fix this, need to think
> >> about it.
> >>
> >> Roger.
> >> ---8<---
> >> diff --git a/tools/libxl/libxl_mem.c b/tools/libxl/libxl_mem.c
> >> index e551e09fed..2c984993d8 100644
> >> --- a/tools/libxl/libxl_mem.c
> >> +++ b/tools/libxl/libxl_mem.c
> >> @@ -151,7 +151,9 @@ retry_transaction:
> >>           *target_memkb = info.current_memkb;
> >>       }
> >>       if (staticmax == NULL) {
> >> -        libxl__xs_printf(gc, t, max_path, "%"PRIu64, info.max_memkb);
> >> +        libxl__xs_printf(gc, t, max_path, "%"PRIu64,
> >> +                         libxl__domain_type(gc, 0) == LIBXL_DOMAIN_TYPE_PV ?
> >> +                         info.max_memkb : info.current_memkb);
> >>           *max_memkb = info.max_memkb;
> >>       }
> >>  
> >>
> > I have tried Roger's patch and it fixed the memory decrease problem.
> > "xl list -l" no longer causes any issue.
> >
> > The output of "xenstore-ls -fp" shows that both target and static-max
> > are now set to the same value.
> 
> Right.
> 
> Meaning that it will be impossible to add memory to PVH dom0 e.g. after
> memory hotplug.

Likely. HVM guests ATM can only boot ballooned down (when target !=
max) by using PoD IIRC.

Right now, if the user doesn't specify a 'max' value on the command line
for a PV(H) Dom0, it's set to LONG_MAX.

Maybe a better option would be to set max == current if no max is
specified on the command line?
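
(As a stop-gap, and assuming the documented dom0_mem syntax, passing an
explicit maximum on the Xen command line, e.g. dom0_mem=8192M,max:8192M,
should also keep static-max from being seeded with LONG_MAX; the 8192M
figure is just an example.)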

This however doesn't fully solve the problem, since setting target !=
static-max for a Linux PVH guest will cause the balloon driver to go
nuts.

Roger.
