[Xen-devel] GFS2, OCFS2, and FUSE cause xenU to oops.

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] GFS2, OCFS2, and FUSE cause xenU to oops.
From: Kyler Laird <Kyler@xxxxxxxxxx>
Date: Fri, 23 Dec 2005 11:02:05 -0500
User-agent: Mutt/1.5.11

I really need to share a filesystem between domUs, and I'd rather
not have to export it from one domU to another, so I tried
mounting it with GFS2 and then OCFS2.  Both caused the xenU
kernel to oops just as the mount was attempted.
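
For concreteness, the kind of setup I mean is something like this
(volume, device, and mount point names are made up, not my actual
config):

    # in each domU's config file -- the 'w!' mode is supposed to
    # let a writable device be shared between domains
    disk = [ 'phy:vg0/shared,sda2,w!' ]

    # then inside each domU
    mount -t ocfs2 /dev/sda2 /mnt/shared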

I assumed that a FUSE-based solution would be a little less
problematic (if only because it doesn't require kernel patches),
but it also caused an oops right when the mount was attempted
(after fuse.ko was loaded and after the SSH session was
authenticated).
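
In case it helps with reproducing it, the FUSE attempt was
nothing exotic; it was essentially (host and paths made up):

    modprobe fuse
    sshfs otherhost:/export /mnt/shared

The oops hit right after authentication succeeded, before the
mount ever showed up.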

I've hacked up smbd so that I can make it work for now, but it
would be incredibly useful for Xen to be able to use cluster
filesystems.
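
(The workaround, roughly: one domain runs the hacked smbd and the
client domUs do something like

    mount -t smbfs //fileserver/share /mnt/shared

which works, but it's a poor substitute for a real cluster
filesystem.)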

The oops from FUSE/sshfs follows.  This is with Xen 3.0-testing
(8259) on a dual Xeon in x86_64 mode.

--kyler

=================================================================
Unable to handle kernel paging request at ffff81000f708790 RIP:
<ffffffff801fef85>{__memcpy+117}
PGD d4b4063 PUD da97067 PMD 0
Oops: 0002 [1]
CPU 0
Modules linked in: fuse md5 ipv6
Pid: 745, comm: sshfs Not tainted 2.6.12.6-xenU
RIP: e030:[<ffffffff801fef85>] <ffffffff801fef85>{__memcpy+117}
RSP: e02b:ffff88000f725df8  EFLAGS: 00010202
RAX: ffff81000f708790 RBX: ffff88000f725e38 RCX: 0000000000000004
RDX: 0000000000000028 RSI: ffff88000d83f920 RDI: ffff81000f708790
RBP: 0000000000000028 R08: 0000001a00000030 R09: 800000001f9ec040
R10: fc00000000000f98 R11: 03fffffffffff000 R12: 0000000000000028
R13: 0000000000000028 R14: ffff88000d83f920 R15: ffff88000f725ef8
FS:  00002aaaab1b9ae0(0000) GS:ffffffff8036a300(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process sshfs (pid: 745, threadinfo ffff88000f724000, task ffff88000f872820)
Stack: ffff88000f725e38 ffffffff88043a96 00000000ffffffea ffff88000d83f8f8
       ffff88000f466080 0000000000000030 ffff88000d83f920 ffffffff88044265
       0000000000000001 ffff88000d83f8f8
Call Trace:<ffffffff88043a96>{:fuse:fuse_copy_one+76} <ffffffff88044265>{:fuse:fuse_dev_readv+397}
       <ffffffff80127e6e>{default_wake_function+0} <ffffffff880443a4>{:fuse:fuse_dev_read+26}
       <ffffffff801692f5>{vfs_read+163} <ffffffff801698ab>{sys_read+69}
       <ffffffff801111d1>{system_call+117} <ffffffff8011115c>{system_call+0}


Code: 4c 89 07 48 8d 7f 08 48 8d 76 08 75 ee 89 d1 83 e1 07 74 17
RIP <ffffffff801fef85>{__memcpy+117} RSP <ffff88000f725df8>
CR2: ffff81000f708790



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
