WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

Re: [Xen-devel] Shadow page tables?

To: Michael Vrable <mvrable@xxxxxxxxxxx>
Subject: Re: [Xen-devel] Shadow page tables?
From: Peri Hankey <mpah@xxxxxxxxxxxxxx>
Date: Thu, 21 Oct 2004 09:38:01 +0100
Cc: Xen-devel@xxxxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 21 Oct 2004 10:04:54 +0100
Envelope-to: steven.hand@xxxxxxxxxxxx
In-reply-to: <20041012115013.B16621@xxxxxxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
References: <d2ab765004100801593b60c876@xxxxxxxxxxxxxx> <E1CFqmU-00052h-00@xxxxxxxxxxxxxxxxx> <20041012115013.B16621@xxxxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040115

Hello

Some time ago you talked about copy-on-write memory to enable large numbers of nearly identical machines to run on the same physical hardware. I have also thought about this: after all, equivalent mechanisms already exist to provide copy-on-write semantics for process forking (and also, I think, in the /proc/mm mechanisms used for separate kernel address space (SKAS) handling in some versions of UML).

Have you advanced along this path, or is anyone actively working on it? The problem you found the other day suggests that you were looking pretty closely at code that relates to this question. My very crude understanding is that one would want to share the physical memory allocation currently given to a single xenU domain between a group of xenU domains, each of which has its own copy-on-write mapping starting from a read-only image shared between all of them. The grant table mechanism would need to understand the relationship between the members of such a group of domains. But as I say, this is still very hazy to me.

On the other point you raised, about copy-on-write filesystem images, I looked at evms2 (http://evms.sourceforge.net/), which sits above lvm2 and other volume managers and offers its own snapshot feature. It is only usable on disks that it controls, so you either have to use a disk separate from the one you boot from, or use an initrd to be able to boot from an evms volume. Its snapshot feature seemed 'snappier': I was able to create and activate a number of snapshot objects without the memory problems I encountered with lvm2 snapshots.
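For reference, the lvm2 arrangement I was comparing against looks roughly like this (volume group and volume names are hypothetical; -L on each snapshot sizes the copy-on-write store that absorbs that domain's writes, and it was with several of these active that I hit the memory problems):

```shell
# Create a hypothetical origin volume holding the shared root image.
lvcreate -L 2G -n rootimg vg0
mkfs.ext3 /dev/vg0/rootimg

# Give each xenU domain its own writable copy-on-write snapshot of it.
lvcreate -s -L 256M -n dom1root /dev/vg0/rootimg
lvcreate -s -L 256M -n dom2root /dev/vg0/rootimg
```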

However, it didn't seem possible to make it provide multiple persistent writable snapshots of the kind that would support multiple xenU domains, each using its own copy-on-write image of an underlying read-only root filesystem. Did I miss something? Has anyone else tried using evms for these purposes?

I saw Ian's comments about gnbd and csnap (http://sources.redhat.com/cluster/), but it seems that the csnap mechanism is at a pretty early stage of development.

Regards
Peri

Michael Vrable wrote:

On Fri, Oct 08, 2004 at 10:11:18AM +0100, Keir Fraser wrote:
We intend to flesh out the shadow p.t. code a little more to support
full memory virtualisation. It's not there yet, but it won't require
an enormous amount of code to get it going.

Any idea what the time frame for this is?  I was considering taking a
stab at it myself, but don't want to duplicate work.

My longer term goal is to try to get copy-on-write sharing of memory
pages between domains and to see how far Xen can scale in running many
nearly-identical virtual machines.  To implement this, however, will
require (at least as I've thought it through) memory virtualization.  So
that was the first thing I was planning to work on.

If no one else was actively working on this, I'm happy to discuss the
design and contribute what I end up with.  (Both for the memory
virtualization and the copy-on-write work.)

(This is also the reason for my interest in copy-on-write disks, though
it's looking like LVM may be too heavyweight for supporting large
numbers of VMs.)

--Michael Vrable


-------------------------------------------------------
This SF.net email is sponsored by: IT Product Guide on ITManagersJournal
Use IT products in your business? Tell us what you think of them. Give us
Your Opinions, Get Free ThinkGeek Gift Certificates! Click to find out more
http://productguide.itmanagersjournal.com/guidepromo.tmpl
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel




