
Re: [Xen-devel] Pointed questions re Xen memory overcommit



> From: George Dunlap [mailto:dunlapg@xxxxxxxxx]
> Subject: Re: [Xen-devel] Pointed questions re Xen memory overcommit
> 
> On Mon, Feb 27, 2012 at 11:40 PM, Dan Magenheimer
> <dan.magenheimer@xxxxxxxxxx> wrote:
> >> From: Olaf Hering [mailto:olaf@xxxxxxxxx]
> >
> > Hi Olaf --
> >
> > Thanks for the reply!  Since Tim answers my questions later in the
> > thread, one quick comment...
> >
> >> To me memory overcommit means swapping, which is what xenpaging does:
> >> turn the whole guest gfn range into some sort of virtual memory,
> >> transparent to the guest.
> >>
> >> xenpaging is the red emergency knob to free some host memory without
> >> caring about the actual memory constraints within the paged guests.
> >
> > Sure, but the whole point of increasing RAM in one or more guests
> > is to increase performance, and if overcommitting *always* means
> > swapping, why would anyone use it?
> >
> > So xenpaging is fine and useful, but IMHO only in conjunction
> > with some other technology that reduces total physical RAM usage
> > to less than sum(max_mem(all VMs)).
> 
> I agree -- overcommitting means giving the guests the illusion of more
> aggregate memory than there is.  Paging is one way of doing that; page
> sharing is another way.  The big reason paging is needed is if guests
> start to "call in" the committments, by writing to previously shared
> pages.  I would think tmem would also come under "memory overcommit".

Yes and no.  By default, tmem's primary role is to grease the
transfer of RAM capacity from one VM to another while minimizing the
performance loss that aggressive selfballooning (or maybe
"host-policy-driven ballooning with a semantic gap") would otherwise
cause.
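
For what it's worth, the selfballooning feedback loop is simple
enough to sketch in a few lines of C.  This is a toy illustration,
not the in-kernel driver: it derives a balloon target from the
guest's own Committed_AS and only walks part of the way there each
interval, so surplus RAM drains back to the hypervisor without a
sudden pressure spike.  The hysteresis constant and the use of
MemTotal as a stand-in for the current balloon size are assumptions
made for the sketch.

#include <stdio.h>
#include <string.h>

/* Pull one "Key:  value kB" figure out of /proc/meminfo. */
static long meminfo_kb(const char *key)
{
    char line[160];
    long kb = -1;
    FILE *f = fopen("/proc/meminfo", "r");

    if (!f)
        return -1;
    while (fgets(line, sizeof line, f)) {
        size_t len = strlen(key);
        if (strncmp(line, key, len) == 0 && line[len] == ':') {
            sscanf(line + len + 1, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}

int main(void)
{
    /* Close only 1/8 of the gap per interval (assumed hysteresis). */
    const long hysteresis = 8;
    long committed = meminfo_kb("Committed_AS");
    long current = meminfo_kb("MemTotal");  /* stand-in for balloon size */
    long target;

    if (committed < 0 || current < 0)
        return 1;
    target = current - (current - committed) / hysteresis;
    /* A real driver would write the new target to the balloon driver
     * (e.g. the xen_memory sysfs target_kb node); printing it keeps
     * this sketch side-effect free. */
    printf("current %ld kB, committed %ld kB -> new target %ld kB\n",
           current, committed, target);
    return 0;
}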

However, tmem has two optional features, "tmem_compress" and
"tmem_dedup", which do result in "memory overcommit", and neither
has the "call in the commitments" issue that occurs with shared
pages, so tmem does not require xenpaging.
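
To make the dedup point concrete, here is a toy sketch of
content-based deduplication (the hash, table layout, and function
names are inventions for illustration, not tmem's actual interface):
every put is keyed by a hash of the page contents, identical pages
share one refcounted copy, and a put of different data simply lands
in its own slot.  The guest never holds a writable mapping of the
shared copy, so there is no moment at which a guest write can "call
in" the commitment.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE   4096
#define TABLE_SLOTS 1024

struct dedup_page {
    uint64_t hash;
    unsigned refs;
    unsigned char data[PAGE_SIZE];
};

static struct dedup_page *table[TABLE_SLOTS];

/* FNV-1a over the page contents. */
static uint64_t page_hash(const unsigned char *p)
{
    uint64_t h = 1469598103934665603ULL;
    size_t i;

    for (i = 0; i < PAGE_SIZE; i++)
        h = (h ^ p[i]) * 1099511628211ULL;
    return h;
}

/* Store a page; returns the shared copy (new or existing). */
static struct dedup_page *toy_put(const unsigned char *data)
{
    uint64_t h = page_hash(data);
    struct dedup_page *pg = table[h % TABLE_SLOTS];

    if (pg && pg->hash == h && memcmp(pg->data, data, PAGE_SIZE) == 0) {
        pg->refs++;              /* identical content: share one copy */
        return pg;
    }
    pg = malloc(sizeof(*pg));    /* different content: own copy */
    if (!pg)
        return NULL;
    pg->hash = h;
    pg->refs = 1;
    memcpy(pg->data, data, PAGE_SIZE);
    table[h % TABLE_SLOTS] = pg; /* toy: a collision just takes over
                                    the slot; a real store would chain */
    return pg;
}

int main(void)
{
    unsigned char zeroes[PAGE_SIZE] = { 0 };
    struct dedup_page *a = toy_put(zeroes);
    struct dedup_page *b = toy_put(zeroes);

    if (!a || !b)
        return 1;
    printf("same copy: %s, refs: %u\n", a == b ? "yes" : "no", a->refs);
    return 0;
}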

That said, I can conceive of a RAMster*-like implementation for
which the ability to move hypervisor pages to dom0 might be
useful/necessary, so some parts of the xenpaging code in the
hypervisor might be required.

* http://lwn.net/Articles/481681/ 
