This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Filesystem Corruption

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Filesystem Corruption
From: Roman ZARAGOCI <roman.zaragoci@xxxxxxxxxxxxx>
Date: Tue, 04 Sep 2007 09:49:54 +0200
Delivery-date: Mon, 10 Sep 2007 09:08:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (Windows/20070728)

Our virtual machines are experiencing filesystem corruption: the filesystem
gets remounted in read-only mode. When we run fsck, everything comes back
clean.
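For reference, e2fsck can be pointed directly at an image file, so a domU's filesystem can be checked offline from dom0 without attaching a loop device. A minimal sketch on a scratch image (the path and size are illustrative, not from the setup described above):

```shell
# Build a small scratch ext3 image and check it, the same way a shut-down
# domU's .img could be checked from dom0 (path below is illustrative).
dd if=/dev/zero of=/tmp/scratch-domu.img bs=1M count=64
mkfs.ext3 -F -q /tmp/scratch-domu.img
# -f forces a full check even if the filesystem is marked clean;
# -n answers "no" to all prompts, i.e. a purely read-only check.
e2fsck -f -n /tmp/scratch-domu.img
```

Note that the check is only trustworthy if the domU is shut down (or the image otherwise quiesced) while it runs.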

Our configuration
Dom0 :
RHEL4 Update 3
Xen: compiled from sources.
Linux Xen kernel : 2.6.16
File-System : ext3 w/ LVM

DomU :
Fedora Core 4 or 6
Xen: compiled from sources.
Linux Xen Kernel : 2.6.16
File-System : ext3, w/o LVM

I've read about the risks of running an ext3 filesystem on loop devices:
the write order on disk cannot be guaranteed between the VM's filesystem
and Dom0's filesystem.
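As an aside, one workaround often mentioned in that context is switching the file-backed disk from the loop-device backend to blktap, which performs aio-based I/O and avoids buffering the writes in dom0's page cache a second time. A hedged sketch of the relevant domU config line (the path is illustrative, and this assumes blktap is built and loaded on the dom0 in question):

```
# Loop-device backend -- writes pass through dom0's page cache:
#   disk = [ 'file:/var/lib/xen/images/domu.img,xvda,w' ]
# blktap backend -- same image, served via the blktap aio driver:
disk = [ 'tap:aio:/var/lib/xen/images/domu.img,xvda,w' ]
```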

Has anybody encountered this problem before?

Do you think it could be caused by using ext3 (with journaling) for the
VMs' filesystems (which live in .img files)?

Any suggestions on cache management for DomU and Dom0? (Do we need to
disable caching and ext3 journaling on the DomUs?)
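On the last point: if the decision is to drop ext3 journaling in the DomUs, the journal can be removed with tune2fs, which effectively turns ext3 back into ext2. A sketch demonstrated on a scratch image (on a real guest this would be run from dom0 against the actual image or device, with the domU shut down; the path is illustrative):

```shell
# Create a scratch ext3 image to demonstrate on (illustrative only).
dd if=/dev/zero of=/tmp/scratch-ext3.img bs=1M count=64
mkfs.ext3 -F -q /tmp/scratch-ext3.img
# Clear the has_journal feature: the filesystem becomes plain ext2.
tune2fs -O ^has_journal /tmp/scratch-ext3.img
# Re-check the filesystem after the feature change.
e2fsck -f -y /tmp/scratch-ext3.img
```

Whether losing the journal is an acceptable trade-off depends on how the corruption actually arises; it removes the write-ordering assumptions journaling makes, but also the crash recovery it provides.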

Thanks in advance for your answers.

Xen-users mailing list
