
Re: [Xen-devel] repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch



On 04/07/16 15:58, PGNet Dev wrote:
> On 07/04/2016 04:22 AM, George Dunlap wrote:
>> Thanks for your persistence. :-)
> 
> I appreciate the reply :-)
> 
>> It's likely that this is related to a known problem with the interface
>> between the balloon driver and the toolstack.  The warning itself is
>> benign: it simply means that the balloon driver asked Xen for another
>> page (thinking incorrectly it was a few pages short), and was told
>> "No" by Xen.
> 
> Reading
> 
> 
> https://blog.xenproject.org/2014/02/14/ballooning-rebooting-and-the-feature-youve-never-heard-of/
> 
> 
>     "... Populate-on-demand comes into play in Xen whenever you start an
> HVM guest with maxmem and memory set to different values. ..."
> 
> Which sounds like you can turn ballooning in the DomU off.
> 
> But, currently, my DomUs are all PVHVM, and all have
> 
>     maxmem = 2048
>     memory = 2048
> 
> It appears that having 'maxmem' == 'memory' results in the '"No" by Xen'
> answer, rather than the balloon driver not being used.
> 
> Which is the intended case?

It's more complicated than that, unfortunately. :-)

A guest has lots of different bits of memory used for different things.
There's guest RAM, but an HVM / PVHVM guest also has ROMs that the BIOS
needs access to -- and since there isn't really any ROM, that space has
to be allocated as RAM too.  Then there's extra memory for the video
card &c, all of which from Xen's perspective looks like RAM allocated
to the VM.  And just to make things more fun, there are the traditional
"holes" in the memory map where there ends up being nothing anyway.

The toolstack takes the number above and ends up allocating not exactly
2048 MiB to the guest, but a slightly larger number such that the guest
looks like it has about 2048 MiB of RAM, while still taking into account
all of the other random things that need a page here and a page there.
Then it tells Xen, "The maximum amount of memory the guest is allowed to
have is X", and writes a target value in xenstore for the guest to read.

Then inside the guest, there's another process -- the balloon driver --
whose job it is to monitor the 'target value' in xenstore and try to
make the guest's actual memory usage match that.  It does this by
releasing pages back to Xen if it thinks the target value is lower than
what it currently has, and by asking Xen for more pages if the target
value is higher than what it currently has.  And because sometimes it
takes a while for pages to become free, if it asks for more pages and is
told 'no', it just waits for a bit and asks for pages again.
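
Those repeated refusals are exactly what you're seeing in the host
logs -- Xen prints one line each time it says "no".  You can watch
them from dom0 with something like:

    # the hypervisor console ring, where the over-allocation
    # warnings end up
    xl dmesg | grep -i over-allocation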

Unfortunately, the interface was designed for PV guests back in the days
before things were so complicated.  The problem is that now the guest's
calculation of "how much memory I should have" doesn't match the
toolstack's idea.  So when the balloon driver comes up, it looks at the
target value from the toolstack and thinks, "Oh, looks like I'm a page
or two short.  Better ask for more."

So the only ways to fix this are:
1. Fix the interface so that the balloon driver actually knows that it
doesn't need to do anything.
2. Manually write a lower target value into xenstore, different from
what the toolstack gives (rough example below).
3. Disable the balloon driver entirely.
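
For #2, a rough sketch of what I mean -- the domid and value are only
examples, and the target is in KiB (so 1048576 here is 1 GiB); I
believe xl mem-set ends up doing much the same thing via the toolstack:

    # from dom0: lower the target the guest's balloon driver aims for
    xenstore-write /local/domain/1/memory/target 1048576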

Obviously #1 isn't an option for you; if you don't need ballooning, then
#3 is probably the best option -- I think you should be able to
blacklist the balloon driver so that it doesn't actually load.  That
should eliminate the warning messages you get.
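
One caveat on the blacklisting: whether that works depends on how your
kernel was built -- on some kernels the Xen balloon driver is compiled
in rather than built as a loadable module, in which case there's
nothing to blacklist.  A quick way to check (the config file path
varies by distro):

    # how the balloon driver was configured for the running kernel
    grep -i balloon /boot/config-$(uname -r)

    # if it shows up here, it's a module and can be blacklisted
    lsmod | grep -i balloon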

> btw, is there a relevant tracking bug for this?

Not really.  We've tried some bug tracking systems but none have really
"stuck"; instead we end up just keeping track of things individually.

 -George

