
Re: [Xen-devel] copy on write memory



Certainly, UML is best known for copy-on-write filesystems. But UML's SKAS mode is a different way of managing memory, and that was the starting point for this proposal, which is about using the copy-on-write semantics of Linux memory management to share memory pages between Xen domains. I see now that one person's starting point may prove to be another's red herring.

The notion is that there are applications of Xen where very many virtual computers would be running the same set of applications for much of the time (e.g. standard web hosting, honeypots).

In the standard case of a single OS on a single machine, all processes are loaded from a common filesystem, so the OS knows which page sets start out with shared information. It can use this information to share pages between processes so long as those pages are not written to, allocating distinct pages to distinct processes when those pages are written to - the copy-on-write semantics.
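
To make that concrete, here is a minimal C sketch of copy-on-write as a process sees it: after fork() the parent and child share the same physical pages, and the kernel only copies a page when one of them writes to it.

/* Minimal demonstration of process-level copy-on-write after fork().
 * The malloc'd page is shared between parent and child until the
 * child writes to it, at which point the kernel copies the page. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *page = malloc(4096);
    strcpy(page, "original");

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: this write faults; the kernel gives the child a
         * private copy, so the parent's view is unaffected. */
        strcpy(page, "modified by child");
        printf("child sees:  %s\n", page);
        exit(0);
    }
    waitpid(pid, NULL, 0);
    /* Parent still sees the original contents. */
    printf("parent sees: %s\n", page);
    free(page);
    return 0;
}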

In the Xen case, you don't want to reinvent and reimplement existing mechanisms, especially as these may differ in subtle ways from one guest operating system to another. So I suggest it would make sense to create mechanisms that allow some Xen domains to operate as memory management servers to groups of related domains.

In effect we create a memory manager privilege. Suppose we have a memory manager domain M1 with a collection of memory client domains M1-x, where each memory client domain has its own kernel address space KASx, a set of modified pages Wx and a set of shared pages Rx. Then the situation we want to see is this:

M1      doesn't actually execute any application code, just manages memory for its clients
M1-a    executes application code in (KASa, Ra, Wa), calling on M1 for memory management
M1-b    executes application code in (KASb, Rb, Wb), calling on M1 for memory management
M1-c    executes application code in (KASc, Rc, Wc), calling on M1 for memory management
...
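
To make the role of M1 concrete, here is a toy, self-contained C simulation of the bookkeeping it would have to do (the interface and all names are invented for illustration; Xen has no such facility today): clients start out sharing a frame read-only, and a write from one client breaks the share by copying.

/* Toy simulation of the copy-on-write bookkeeping a memory-manager
 * domain might do for its clients.  Everything here is hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 64            /* tiny pages, for the demo */
#define NCLIENTS  2

struct frame {
    int  refcount;              /* how many clients map this frame */
    char data[PAGE_SIZE];
};

/* Each client maps its logical page 0 to some frame. */
static struct frame *client_map[NCLIENTS];

/* The write-fault path: if the frame is shared, copy it first. */
static void client_write(int client, const char *text)
{
    struct frame *f = client_map[client];
    if (f->refcount > 1) {               /* shared: break the share */
        struct frame *copy = malloc(sizeof *copy);
        memcpy(copy, f, sizeof *copy);
        copy->refcount = 1;
        f->refcount--;
        client_map[client] = copy;
        f = copy;
    }
    snprintf(f->data, PAGE_SIZE, "%s", text);
}

int main(void)
{
    /* Initially both clients share one read-only frame (the Rx set). */
    struct frame *shared = malloc(sizeof *shared);
    shared->refcount = NCLIENTS;
    snprintf(shared->data, PAGE_SIZE, "common image");
    for (int i = 0; i < NCLIENTS; i++)
        client_map[i] = shared;

    client_write(0, "private to client 0");   /* page moves into W0 */

    printf("client 0: %s\n", client_map[0]->data);
    printf("client 1: %s\n", client_map[1]->data);
    return 0;
}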

There is a connection with copy-on-write storage. The execution state of a client domain x can be frozen as:

'Rx'  which identifies a set of pages that are shared read-only with similar clients
KASx  which is the kernel address space page set for this domain
Wx    which is the set of user address space pages that have been written to by this domain

The total long-term state of a client domain can be characterised by adding

'SRx' which identifies blocks in read-only storage that are shared with similar clients
SWx   which identifies the files in read-write storage that belong to this client domain.
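
Put as a data structure (a compilable sketch only, with invented field names), the complete state of a client domain x might look like this:

/* Sketch of the per-client state described above: the frozen
 * execution state (Rx, KASx, Wx) plus the long-term storage
 * state (SRx, SWx).  Names are illustrative only. */
#include <stddef.h>

struct page_set  { unsigned long *frames; size_t count; };
struct block_set { unsigned long *blocks; size_t count; };

struct client_state {
    /* execution state */
    struct page_set  R;    /* pages shared read-only with similar clients */
    struct page_set  KAS;  /* kernel address space pages for this domain */
    struct page_set  W;    /* user pages this domain has written to */
    /* long-term state */
    struct block_set SR;   /* read-only storage blocks shared with others */
    struct block_set SW;   /* read-write storage owned by this domain */
};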

It seems to me that a memory manager domain, which pretty much has to serve pages initially drawn from a filesystem that is shared read-only between its clients, is also in a position to manage copy-on-write use of that filesystem for its clients, as it already knows which blocks are clean and which are dirty.
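
The clean/dirty tracking itself can be as simple as one bit per storage block. A minimal C sketch (illustrative names only; real bookkeeping would of course be per-client):

/* One dirty bit per block: set when a client first writes a block,
 * at which point the block must be copied out to the client's
 * private (SWx) storage before the write proceeds. */
#include <limits.h>
#include <stdio.h>

#define NBLOCKS 1024
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)

static unsigned long dirty[NBLOCKS / BITS_PER_WORD + 1];

static int block_is_dirty(unsigned n)
{
    return (dirty[n / BITS_PER_WORD] >> (n % BITS_PER_WORD)) & 1UL;
}

static void mark_dirty(unsigned n)
{
    dirty[n / BITS_PER_WORD] |= 1UL << (n % BITS_PER_WORD);
}

int main(void)
{
    mark_dirty(42);   /* client writes block 42: copy it out first */
    printf("block 42 %s, block 7 %s\n",
           block_is_dirty(42) ? "dirty" : "clean",
           block_is_dirty(7)  ? "dirty" : "clean");
    return 0;
}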

Peri

On Mon, 15 Nov 2004, Peri Hankey wrote:

It occurred to me that the equivalent in the Xen world would be to use one Linux xenU domain purely as a page-table manager for a collection of separate xenU domains that are expected or known to have similar process populations.


UML copy on write is only for filesystems, isn't it?

The Xen equivalent would be cloning the xenU root filesystem
as an LVM snapshot, from a read-only LVM snapshot.  Then each
xenU virtual system would only use the disk space it writes
to and no more.






 

