[Xen-devel] Re: [RFC] transcendent memory for Linux

On Fri, 19 Jun 2009 16:53:45 -0700 (PDT)
Dan Magenheimer <dan.magenheimer@xxxxxxxxxx> wrote:

> Tmem has some similarity to IBM's Collaborative Memory Management,
> but creates more of a partnership between the kernel and the
> "privileged entity" and is not very invasive.  Tmem may be
> applicable for KVM and containers; there is some disagreement on
> the extent of its value. Tmem is highly complementary to ballooning
> (aka page granularity hot plug) and memory deduplication (aka
> transparent content-based page sharing) but still has value
> when neither are present.

The basic idea seems to be that you reduce the amount of memory
available to the guest and, as compensation, give the guest some
tmem, no? If that is the case, then the effect of tmem is somewhat
comparable to volatile page cache pages.
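
To make the comparison concrete, a minimal sketch of how a guest might
use an ephemeral pool for its clean page cache pages. All of the names
below (tmem_put_page, tmem_get_page, EPHEMERAL_POOL, read_from_disk)
are made up for this mail, not the interface from the patch; the point
is only the contract that a later get may legitimately fail:

#include <stdint.h>

/* Made-up prototypes standing in for the tmem hypercalls; the real
 * interface in the patch may look different. */
int tmem_put_page(uint32_t pool, uint64_t obj, uint32_t index, void *page);
int tmem_get_page(uint32_t pool, uint64_t obj, uint32_t index, void *page);
int read_from_disk(uint64_t obj, uint32_t index, void *page);

#define EPHEMERAL_POOL 0        /* made-up pool id */

/* Eviction of a clean page cache page: offer the data to the pool.
 * The guest keeps no copy and must not rely on it surviving. */
void evict_clean_page(uint64_t inode, uint32_t index, void *page)
{
        (void)tmem_put_page(EPHEMERAL_POOL, inode, index, page);
}

/* Page cache miss: ask the pool first, fall back to disk when the
 * page has been dropped -- exactly like a volatile cache. */
int fill_page(uint64_t inode, uint32_t index, void *page)
{
        if (tmem_get_page(EPHEMERAL_POOL, inode, index, page) == 0)
                return 0;                    /* still there, copied back in */
        return read_from_disk(inode, index, page);  /* gone, read it again */
}

Note that both calls copy the page across the guest boundary, which is
where the temporary double buffering mentioned in point 1 below comes
from.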

The big advantage of this approach is its simplicity, but there
are downsides as well:
1) You need to copy the data between the tmem pool and the page
cache. At least temporarily, there are two copies of the same
page around, which increases the total amount of used memory.
2) The guest has a smaller memory size. Either the memory is
large enough for the working set, in which case tmem is
ineffective, or the working set does not fit, which increases
the memory pressure and the CPU cycles spent in the mm code.
3) There is an additional tuning knob: the size of the tmem pool
for the guest. I see the need for a clever algorithm to determine
the sizes of the different tmem pools (a toy sketch of one
possible policy follows below).
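
As for the pool sizing, a toy example of the kind of feedback policy
I mean, run by the host per guest and per interval. This is purely my
own invention; none of the names, counters or thresholds are in the
patch:

#include <stdint.h>

/* Per-guest counters the host would have to keep (made up for this
 * example). */
struct guest_tmem_stats {
        uint64_t gets;          /* get attempts in the last interval */
        uint64_t hits;          /* gets that actually found the page */
        uint64_t pool_pages;    /* current size of the ephemeral pool */
};

/* Toy policy: grow the pool while the hit rate says the cache is
 * earning its keep, shrink it when most gets miss anyway.  A real
 * policy would also have to balance the guests against each other
 * and against free host memory; this only shows the feedback loop. */
uint64_t resize_pool(const struct guest_tmem_stats *s,
                     uint64_t host_free_pages)
{
        uint64_t target = s->pool_pages ? s->pool_pages : 1024;

        if (s->gets == 0)
                return target / 2;      /* idle guest, give memory back */

        uint64_t hit_pct = s->hits * 100 / s->gets;

        if (hit_pct > 80 && host_free_pages > target / 8)
                target += target / 8;   /* useful cache, grow a bit */
        else if (hit_pct < 20)
                target -= target / 4;   /* mostly misses, shrink */

        return target;
}

Whether a simple hit-rate feedback like this is clever enough is
exactly the open question.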

Overall I would say it's worthwhile to investigate the performance
impact of the approach.

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel