xen-users

Re: [Xen-users] Mounting an existing running domU

Subject: Re: [Xen-users] Mounting an existing running domU
From: Martin Emrich <emme@xxxxxxxxxxxxxx>
Date: Wed, 19 Dec 2007 06:54:14 +0100
Cc: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <20071218044132.31991e01@xxxxxxxxxxxxxx>
References: <20071218044132.31991e01@xxxxxxxxxxxxxx>
User-agent: Thunderbird 2.0.0.6 (X11/20071022)
Hi!

John Maclean wrote:
> Has anyone actually mounted the file system for domU whilst it was
> running? If so can any one describe the actual damage that was done and
> if anything was recoverable?

Have you considered using a cluster filesystem (OCFS2, GFS, ...) that is
designed for this purpose? At work, we have a shared block device
between two DomUs using OCFS2, and it works fine so far. You have to
tell each DomU in its config file that the device is used for shared
writing by adding a plus sign to the "w" for "writable".
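A minimal sketch of what such a disk line might look like in the two DomU
config files (the volume path and device name are only placeholders, and the
exact shared-write suffix can differ between Xen versions):

    # in the config of both DomUs that share the volume
    # "w+" marks the device writable *and* shareable, so both DomUs may
    # attach it at once; OCFS2 on top is what keeps concurrent writes
    # consistent.
    disk = [ 'phy:/dev/vg0/shared,xvdb,w+' ]

Without the shared flag, the second DomU would normally be refused the
device because it is already attached read-write elsewhere.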

Ciao

Martin



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
