
Re: [Xen-devel] freemem-slack and large memory environments



On Wed, 18 Feb 2015, Ian Campbell wrote:
> On Tue, 2015-02-10 at 14:34 -0700, Mike Latimer wrote:
> > On Monday, February 09, 2015 06:27:54 PM Mike Latimer wrote:
> > > While testing commit 2563bca1, I found that libxl_get_free_memory
> > > returns 0 until there is more free memory than required for
> > > freemem-slack. This means that during the domain creation process,
> > > freed memory is first set aside for freemem-slack, then marked as
> > > truly free for consumption.
> > > 
> > > On machines with large amounts of memory, freemem-slack can be very high
> > > (26GB on a 2TB test machine). If freeing this memory takes more time than
> > > allowed during domain startup, domain creation fails with ERROR_NOMEM.
> > > (Commit 2563bca1 doesn't help here, as free_memkb remains 0 until
> > > freemem-slack is satisfied.)
> > > 
> > > There is already a 15% limit on the size of freemem-slack (commit
> > > a39b5bc6), but this does not take into consideration very large
> > > memory environments (26GB is only 1.2% of 2TB), where this limit is
> > > not hit.
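
(For context, a simplified illustration of the behaviour Mike describes:
the names and types below are not the actual libxl code, but the
arithmetic matches what is reported above.)

#include <stdint.h>

static uint64_t reported_free_memkb(uint64_t host_free_memkb,
                                    uint64_t freemem_slack_kb)
{
    /* Memory freed by ballooning dom0 down is first credited to
     * freemem-slack; only the remainder is reported as free. */
    if (host_free_memkb <= freemem_slack_kb)
        return 0;   /* stays at 0 until e.g. 26GB of slack is satisfied */
    return host_free_memkb - freemem_slack_kb;
}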
> 
> Stefano,
> 
> What is "freemem-slack" for?

I think it comes from xapi: they always keep a minimum amount of free
memory in the system as it seems to be empirically required by the
hypervisor.


> It seems to have been added in 7010e9b7 but
> the commit log makes no mention of it whatsoever. Was it originally just
> supposed to be the delta between the host memory and dom0 memory at
> start of day?

Yes, that is right.


> This seems to then change in a39b5bc64, to add an arbitrary cap which
> seems to be working around an invalid configuration (dom0_mem +
> autoballooning on).

Correct again.


> Now that we autodetect the use of dom0_mem and set autoballooning
> correctly perhaps we should just revert a39b5bc64?

We could do that, and theoretically it makes perfect sense, but it would
result in an even bigger waste of memory.
I think we should either introduce a hard upper limit for freemem-slack,
as Mike suggested, or remove freemem-slack altogether and properly fix
any issues caused by lack of memory in the system (i.e. properly account
for memory usage).
After all, we are just at the beginning of the release cycle; it is the
right time to do this.
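
To make the first option concrete, here is a minimal sketch of what a
hard cap on top of the existing 15% limit could look like. Everything
here is illustrative: the function name, the 2GiB ceiling and the way
the initial slack value is obtained are assumptions, not the actual
libxl code.

#include <stdint.h>

/* Illustrative hard ceiling on freemem-slack (2GiB, in KiB). */
#define FREEMEM_SLACK_HARD_CAP_KB (2ULL * 1024 * 1024)

static uint64_t compute_freemem_slack_kb(uint64_t host_memkb,
                                         uint64_t initial_free_memkb)
{
    /* Original behaviour: slack is the free memory left over after
     * dom0 has taken its share at start of day. */
    uint64_t slack = initial_free_memkb;

    /* Commit a39b5bc6 added a relative cap: at most 15% of host memory. */
    uint64_t relative_cap = host_memkb * 15 / 100;
    if (slack > relative_cap)
        slack = relative_cap;

    /* Proposed hard cap: on a 2TB host the 15% cap is ~307GB, so the
     * observed 26GB of slack never hits it; an absolute ceiling would. */
    if (slack > FREEMEM_SLACK_HARD_CAP_KB)
        slack = FREEMEM_SLACK_HARD_CAP_KB;

    return slack;
}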


> Ian.
> 
> > > 
> > > It seems that there are two approaches to resolve this:
> > > 
> > >  - Introduce a hard limit on freemem-slack to avoid unnecessarily
> > >    large reservations
> > >  - Increase the retry count during domain creation to ensure enough
> > >    time is set aside for any cycles spent freeing memory for
> > >    freemem-slack (on the test machine, doubling the retry count to 6
> > >    is the minimum required)
> > > 
> > > Which is the best approach (or did I miss something)?
> > 
> > Sorry - forgot to CC relevant maintainers.
> > 
> > -Mike
> 
> 
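
The second option Mike listed above is also easy to sketch. The
following only illustrates the retry loop during domain creation:
get_free_memkb() and balloon_dom0_down() are hypothetical stand-ins for
the real libxl calls, and the 10 second wait is an arbitrary choice.

#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

/* Hypothetical stand-ins for the real libxl calls. */
uint64_t get_free_memkb(void);            /* free memory minus freemem-slack */
void balloon_dom0_down(uint64_t memkb);   /* ask dom0 to release memkb */

static bool free_memory_for_domain(uint64_t need_memkb, int retries)
{
    /* With 26GB of freemem-slack to satisfy first, the default number
     * of retries can run out before dom0 has released enough memory;
     * the test machine above needed the count doubled to 6. */
    for (int i = 0; i < retries; i++) {
        uint64_t free_memkb = get_free_memkb();
        if (free_memkb >= need_memkb)
            return true;                  /* enough memory is now free */
        balloon_dom0_down(need_memkb - free_memkb);
        sleep(10);                        /* give the balloon driver time */
    }
    return false;                         /* caller reports ERROR_NOMEM */
}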

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

