
Re: [Xen-devel] copy on write memory



It's true, you did mention it before, but I was looking for something else at the time. What I have in mind doesn't require so much configuration. On the other hand it doesn't exist, and this does.

But the patch is against quite an old source tree, and it doesn't compile straight out of the box. Do you know if there are updated patches against 2.6.9?

I get this error (which I haven't yet examined in detail):

 CC [M]  fs/xip2fs/file.o
fs/xip2fs/file.c: In function `xip2_do_file_read':
fs/xip2fs/file.c:69: error: structure has no member named `buf'
fs/xip2fs/file.c: In function `__xip2_file_aio_read':
fs/xip2fs/file.c:119: error: structure has no member named `buf'
fs/xip2fs/file.c: In function `xip2_file_sendfile':
fs/xip2fs/file.c:302: error: structure has no member named `buf'

This was against xen-2.0.1 as of today, 18 Nov 2004.
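
At a guess (I haven't dug into it yet), those `buf' errors are fallout from the read_descriptor_t change in the 2.6 series: by 2.6.9 the char __user *buf field has been folded into a union, so code written against older kernels no longer compiles. In 2.6.9, include/linux/fs.h defines it roughly as:

    typedef struct {
            size_t written;
            size_t count;
            union {
                    char __user *buf;   /* was a plain member in older 2.6 */
                    void *data;         /* used by sendfile actors         */
            } arg;
            int error;
    } read_descriptor_t;

If that's the whole story, the fix in fs/xip2fs/file.c should be mechanical: each reference like desc->buf becomes desc->arg.buf (with "desc" standing in for whatever the local read_descriptor_t variable is actually called). I haven't verified this, though.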

Regards
Peri

urmk@xxxxxxxxxxxxxxxxx wrote:

The notion is that there are applications of Xen where very many virtual computers would be running the same set of applications much of the time (e.g. standard web hosting, honeypots).
* snip *
In the Xen case, you don't want to reinvent and reimplement existing mechanisms, especially as these may differ in subtle ways from one guest operating system to another. So I suggest it would make sense to create mechanisms that allow some Xen domains to operate as memory management servers to groups of related domains.
*snip*
It seems to me that a memory manager domain, which pretty much has to serve pages initially drawn from a filesystem that is shared read-only between its clients, is also in a position to manage copy-on-write use of that filesystem for its clients, as it already knows which blocks are clean and which are dirty.
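
To make the clean/dirty bookkeeping concrete, a rough sketch of what the write-fault path in such a manager domain might look like. Everything here is hypothetical (the types and the alloc_private_frame()/remap_client() helpers are made up; nothing like this exists in the Xen tree):

    #include <string.h>

    #define BLOCK_SIZE 4096

    struct client_block {
            void *frame;    /* frame currently mapped at the client */
            int dirty;      /* 0 = still the pristine shared frame  */
    };

    /* Stand-ins for whatever allocation and remapping mechanism
     * the manager domain would really use. */
    extern void *alloc_private_frame(void);
    extern void remap_client(int client_id, struct client_block *blk,
                             int writable);

    void handle_write_fault(int client_id, struct client_block *blk)
    {
            if (!blk->dirty) {
                    /* First write to a clean block: copy the shared
                     * frame into one private to this client. */
                    void *priv = alloc_private_frame();
                    memcpy(priv, blk->frame, BLOCK_SIZE);
                    blk->frame = priv;      /* break the sharing */
                    blk->dirty = 1;
            }
            /* Dirty blocks are already private: just remap the
             * client writable and let it resume. */
            remap_client(client_id, blk, 1);
    }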

I thought I'd sent this before, but it's not in the archives, so I'll
send it again... If this made it to the list once already, my apologies:

On the s/390 platform, we have a new filesystem called XIP2. This is a shared-memory filesystem based on ext2, which can be shared among any
number of guests.  Basically you populate the XIP2 fs and then "freeze"
it and share it.

That's all pretty standard, but here comes the magic: data in the XIP2 filesystem is never copied into the guest's own cached memory.
XIP = eXecute In Place.  Binaries are run directly from the shared
memory and not cached locally, so if you throw common services and libraries (like apache, the JVM, etc. from your example) into it, you get the
binaries themselves shared at basically no cost to the guests.
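
Roughly, the trick is in the fault path: instead of copying the block into a fresh page-cache page per guest, the handler hands back the page that already backs the frozen shared segment, so every guest ends up mapping the same physical memory. This is not the actual xip2fs code, just the shape of the idea in 2.6-era nopage terms; shared_segment_page() below is a made-up stand-in for however the filesystem resolves an offset into the frozen image:

    #include <linux/mm.h>

    /* made-up helper: file offset -> page of the frozen shared segment */
    extern struct page *shared_segment_page(struct file *file,
                                            unsigned long pgoff);

    static struct page *xip_nopage(struct vm_area_struct *vma,
                                   unsigned long address, int *type)
    {
            unsigned long pgoff = vma->vm_pgoff +
                    ((address - vma->vm_start) >> PAGE_SHIFT);
            struct page *page = shared_segment_page(vma->vm_file, pgoff);

            get_page(page);    /* no copy: the shared segment IS the cache */
            if (type)
                    *type = VM_FAULT_MINOR;
            return page;
    }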

That's not to say an automatic memory manager that determines when it
could do COW of RAM isn't a good avenue to pursue as well, but I think
XIP is a fairly good starting point for most situations where you'd
want shared memory like this.

The XIP2 source code is in the IBM patches to the kernel:
http://oss.software.ibm.com/linux390/linux-2.6.5-s390-04-april2004.shtml

and by this point it's quite likely already in the BitKeeper tree as
well; they've been pushing updates upstream.

-m



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel