
[Xen-devel] RE: Q about System-wide Memory Management Strategies



Hi Joanna --

The slides you refer to are over two years old, and there's
been a lot of progress in this area since then.  I suggest
you google for "Transcendent Memory" and especially
my presentation at the most recent Xen Summit North America
and/or http://oss.oracle.com/projects/tmem 

Specifically, I now have "selfballooning" built into
the guest kernel... I don't see direct ballooning as
feasible (certainly not without other guest changes
such as cleancache and frontswap).
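
To give a rough idea of what selfballooning does (this is just a
conceptual userspace sketch, NOT the actual in-kernel code, and the
sysfs path below is an assumption that varies by kernel version):
the guest periodically drives its own balloon target from its own
Committed_AS, roughly like this:

import time

def committed_as_kb():
    # the guest's own view of memory demand, from /proc/meminfo (kB)
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Committed_AS:"):
                return int(line.split()[1])

def set_balloon_target(target_kb):
    # the Xen balloon driver exposes its target via sysfs; the exact
    # path here is an assumption and differs between kernel versions
    with open("/sys/devices/system/xen_memory/xen_memory0/target_kb", "w") as f:
        f.write(str(target_kb))

while True:
    set_balloon_target(committed_as_kb())   # track current demand
    time.sleep(5)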

Anyway, I have limited availability over the next couple of
weeks but would love to talk (or email) more about
this topic after that (though I'd welcome clarifying
questions in the meantime).

Dan

> -----Original Message-----
> From: Joanna Rutkowska [mailto:joanna@xxxxxxxxxxxxxxxxxxxxxx]
> Sent: Monday, August 02, 2010 3:39 PM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx; Dan Magenheimer
> Cc: qubes-devel@xxxxxxxxxxxxxxxx
> Subject: Q about System-wide Memory Management Strategies
> 
> Dan, Xen.org'ers,
> 
> I have a few questions regarding strategies for optimal memory
> assignment among VMs (PV DomUs and Dom0, all Linux-based).
> 
> We've been thinking about implementing a "Direct Ballooning" strategy
> (as described on slide #20 of Dan's slides [1]), i.e. writing a daemon
> running in Dom0 that, based on the statistics provided by ballond
> daemons running in the DomUs, would adjust the memory assigned to all
> VMs in the system (via xm mem-set).
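> 
> A rough sketch of the loop we have in mind (Python; the names, the
> numbers and the stats-collection part are made up for illustration):
> 
> import subprocess
> import time
> 
> def read_vm_stats():
>     # Placeholder: in the real daemon these numbers would come from
>     # the ballond instances in the DomUs (e.g. Committed_AS published
>     # over xenstore); hard-coded here just to keep the sketch
>     # self-contained.
>     return {1: {'ideal_kb': 1024 * 1024}, 2: {'ideal_kb': 2048 * 1024}}
> 
> def mem_set(domid, target_kb):
>     # xm mem-set takes the target in MiB
>     subprocess.call(["xm", "mem-set", str(domid), str(target_kb // 1024)])
> 
> def balance(total_kb, stats):
>     ideal = sum(s['ideal_kb'] for s in stats.values())
>     # if everything fits, give each VM its "ideal" amount; otherwise
>     # scale them all down proportionally (question 2 below is about
>     # the floor for that scaling)
>     factor = min(1.0, float(total_kb) / ideal) if ideal else 1.0
>     for domid, s in stats.items():
>         mem_set(domid, int(s['ideal_kb'] * factor))
> 
> while True:
>     balance(4 * 1024 * 1024, read_vm_stats())   # RAM left for DomUs (kB)
>     time.sleep(10)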
> 
> Rather than trying to maximize the number of VMs we could run at the
> same time, in Qubes OS we are more interested in optimizing the user
> experience for running a "reasonable number" of VMs (i.e.
> minimizing/eliminating swapping). In other words, given the number of
> VMs that the user feels the need to run at the same time (in practice
> usually between 3 and 6), and given the amount of RAM in the system
> (4-6 GB in practice today), how do we optimally distribute it among
> the VMs? In our model we assume the disk backend(s) are in Dom0.
> 
> Some specific questions:
> 1) What is the best estimator of the "ideal" amount of RAM each VM
> would like to have? Dan mentions [1] the Committed_AS value from
> /proc/meminfo, but what about the fs cache? I would expect that we
> should (ideally) allocate Committed_AS + some_cache amount of RAM, no?
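> 
> E.g. inside ballond we were thinking of something along these lines
> (the cache fraction is an arbitrary placeholder):
> 
> def ideal_target_kb(cache_fraction=0.5):
>     # "ideal" RAM for this VM: Committed_AS plus some share of the
>     # page cache it is currently using; cache_fraction is just a guess
>     meminfo = {}
>     with open("/proc/meminfo") as f:
>         for line in f:
>             key, rest = line.split(":", 1)
>             meminfo[key] = int(rest.split()[0])   # values are in kB
>     return int(meminfo["Committed_AS"] + cache_fraction * meminfo["Cached"])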
> 
> 2) What's the best estimator for the "minimal reasonable" amount of
> RAM for a VM (below which swapping would kill performance for good)?
> The rationale is that if we couldn't allocate the "ideal" amount of
> RAM (point 1 above), we would scale the available RAM down, but only
> as far as this "reasonable minimum" value. Below that, we would
> display a message telling the user to close some VMs (or close
> "inactive" ones automatically), and we would also refuse to start any
> new AppVMs.
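> 
> In other words, the Dom0 daemon would do something like the following
> when the sum of the "ideal" requests doesn't fit (a sketch; estimating
> min_kb itself is exactly the thing we don't know how to do):
> 
> def scale_down(total_kb, ideal_kb, min_kb):
>     # ideal_kb/min_kb: dicts mapping domid -> kB; returns new targets,
>     # or None meaning "not enough RAM, ask the user to close some VMs"
>     if sum(min_kb.values()) > total_kb:
>         return None
>     spare = total_kb - sum(min_kb.values())
>     wanted = sum(ideal_kb[d] - min_kb[d] for d in ideal_kb)
>     factor = min(1.0, float(spare) / wanted) if wanted else 1.0
>     return dict((d, int(min_kb[d] + factor * (ideal_kb[d] - min_kb[d])))
>                 for d in ideal_kb)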
> 
> 3) Assuming we have enough RAM to satisfy all the VMs' "ideal"
> requests, what should we do with the excess RAM -- the options are:
> a) distribute it among all the VMs (more per-VM RAM means larger FS
> caches, which means faster I/O -- see the sketch below), or
> b) assign it to Dom0, where the disk backend is running (a larger FS
> cache means faster disk backends, and hence faster I/O in each VM?)
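> 
> For option (a), the naive approach we'd take is to hand out the
> surplus in proportion to each VM's "ideal" request, e.g.:
> 
> def distribute_surplus(surplus_kb, ideal_kb):
>     # option (a): split leftover RAM among the VMs in proportion to
>     # their "ideal" requests, so busier VMs also get bigger FS caches
>     total_ideal = sum(ideal_kb.values())
>     return dict((d, kb + surplus_kb * kb // total_ideal)
>                 for d, kb in ideal_kb.items())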
> 
> Thanks,
> joanna.
> 
> [1] http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

