
Re: [Xen-devel] LVM userspace causing dom0 crash



Same Xen, but now with a 64-bit dom0 and 64-bit userspace, we were able to trigger this on about 15 machines within 48 hours (which is an improvement):

BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
IP: [<ffffffff8134cbc2>] inode_has_perm+0x12/0x40
PGD 27248067 PUD 5390067 PMD 0
Oops: 0000 [#1] SMP
CPU 6
Modules linked in: ebtable_nat xen_gntdev e1000e
Pid: 3550, comm: lvremove Not tainted 3.3.6-1-x86_64 #1 Supermicro X8DT6/X8DT6 RIP: e030:[<ffffffff8134cbc2>] [<ffffffff8134cbc2>] inode_has_perm+0x12/0x40
RSP: e02b:ffff880023219bc8  EFLAGS: 00010246
RAX: 0000000000800002 RBX: ffff88000fedae90 RCX: ffff880023219bd8
RDX: 0000000000800000 RSI: 0000000000000000 RDI: ffff8800270c51e0
RBP: ffff880023219bc8 R08: 0000000000000080 R09: ffff88000fedae90
R10: ffff8800273f1b40 R11: ffff880023219bd8 R12: 0000000000000081
R13: ffff88000fedae90 R14: ffff880025ad8009 R15: ffff880025ad8008
FS: 00007f5a9af837a0(0000) GS:ffff88003fd80000(0063) knlGS:0000000000000000
CS:  e033 DS: 002b ES: 002b CR0: 000000008005003b
CR2: 0000000000000020 CR3: 000000000aa15000 CR4: 0000000000002660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process lvremove (pid: 3550, threadinfo ffff880023218000, task ffff88000e371d40)
Stack:
 ffff880023219c68 ffffffff8134d109 0000000000000009 0000000000000000
 ffff88000fedae90 0000000000000000 0000000000000000 0000000000000000
 0000000000000000 0000000000000000 0000000000000000 0000000000000000
Call Trace:
 [<ffffffff8134d109>] selinux_inode_permission+0xa9/0x100
 [<ffffffff8134ad37>] security_inode_permission+0x17/0x20
 [<ffffffff8113244c>] inode_permission+0x3c/0xd0
 [<ffffffff81134b21>] link_path_walk+0x91/0x800
 [<ffffffff81135903>] path_lookupat+0x53/0x690
 [<ffffffff8134d01d>] ? path_has_perm+0x4d/0x50
 [<ffffffff81135f6c>] do_path_lookup+0x2c/0xc0
 [<ffffffff81136717>] user_path_parent+0x47/0x80
 [<ffffffff81136a0e>] do_unlinkat+0x2e/0x1d0
 [<ffffffff8112bd09>] ? vfs_lstat+0x19/0x20
 [<ffffffff810431fe>] ? sys32_lstat64+0x2e/0x40
 [<ffffffff81136bc1>] sys_unlink+0x11/0x20
 [<ffffffff81731416>] sysenter_dispatch+0x7/0x21
 [<ffffffff8100961d>] ? xen_force_evtchn_callback+0xd/0x10
 [<ffffffff81009de2>] ? check_events+0x12/0x20
Code: 00 e8 b3 44 dd ff c9 c3 48 81 ff ff 0f 00 00 77 e8 0f 0b eb fe 0f 1f 40 00 55 48 89 e5 f6 46 0d 02 75 23 48 8b 76 38 48 8b 7f 68 <0f> b7 46 20 45 89 c1 8b 76 1c 49 89 c8 8b 7f 04 89 d1 89 c2 e8
RIP  [<ffffffff8134cbc2>] inode_has_perm+0x12/0x40
 RSP <ffff880023219bc8>
CR2: 0000000000000020
---[ end trace 9f021822c5071694 ]---

A different trace, but it is curious that it was still triggered by LVM userspace.

We have disabled SELinux and restarted the test across about 25 machines.

-Chris


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
