
Re: [Xen-devel] kernel BUG at arch/x86/xen/mmu.c:1860!



Hi,

I'm seeing the same issue on old IBM x335 servers with QLogic ISP2312
HBAs and a multipath setup.  Newer hardware (x3550 M2 + QLA2462 HBA)
seems to be fine.  I've tried several versions of the pvops kernel and
of the Xen hypervisor from the 4.x line, all with the same results.

Roman

On Tue, Jan 04, 2011 at 04:19:02PM +0100, Christophe Saout wrote:
> Hi again,
> 
> > > >      > While doing LVM snapshot for migration and get the following:
> > > >      >
> > > >      > Dec 26 15:58:29 xen01 kernel: ------------[ cut here 
> > > > ]------------
> > > >      > Dec 26 15:58:29 xen01 kernel: kernel BUG at 
> > > > arch/x86/xen/mmu.c:1860!
> > > >      > Dec 26 15:58:29 xen01 kernel: invalid opcode: 0000 [#1] SMP
> > > >      > Dec 26 15:58:29 xen01 kernel: last sysfs file: 
> > > > /sys/block/dm-26/dev
> > > >      > Dec 26 15:58:29 xen01 kernel: CPU 0
> > > >      > Dec 26 15:58:29 xen01 kernel: Modules linked in: ipt_MASQUERADE
> > >
> > > It would be very good to track this down and get it fixed.
> > > Hopefully you're able to help a bit and try some things to debug it.
> > > 
> > > Konrad may have some ideas to try.
> > 
> > I am seeing this with an lvcreate here, so I guess it's somehow related
> > to device-mapper stuff in general.
> > 
> > It doesn't look like this has been resolved yet.  Somewhere I saw a
> > request for the hypervisor message related to the pinning failure.
> > 
> > Here it is:
> > 
> > (XEN) mm.c:2364:d0 Bad type (saw 7400000000000001 != exp 1000000000000000) 
> > for mfn 41114f (pfn d514f)
> > (XEN) mm.c:2733:d0 Error while pinning mfn 41114f
> > 
> > I have a bit of experience in debugging things, so if I can help someone
> > with more information...
> 
> Additional information: this has now happened with a number of
> different commands.  I am running a multipath setup, and every time the
> crash seemed to occur in the process context of the multipath daemon.
> I think the daemon listens for events from the device-mapper subsystem
> to watch for changes, and the problem somehow arises from there: on
> another machine with the same Xen/Dom0 version, but without such a
> daemon, I have never had any trouble with LVM.
> 
>  [<ffffffff810052e2>] pin_pagetable_pfn+0x52/0x60    
>  [<ffffffff81006f5c>] xen_alloc_ptpage+0x9c/0xa0
>  [<ffffffff81006f8e>] xen_alloc_pte+0xe/0x10
>  [<ffffffff810decde>] __pte_alloc+0x7e/0xf0
>  [<ffffffff810e15c5>] handle_mm_fault+0x855/0x930
>  [<ffffffff8102dd9e>] ? pvclock_clocksource_read+0x4e/0x100
>  [<ffffffff810e734c>] ? do_mmap_pgoff+0x33c/0x380
>  [<ffffffff81452b96>] do_page_fault+0x116/0x3e0
>  [<ffffffff8144ff65>] page_fault+0x25/0x30
> 
> Cheers,
>       Christophe
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
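[Editorial note: the two "Bad type" values in the hypervisor message
quoted above can be decoded from Xen's page-type bit layout.  The sketch
below is based on the PG_mask/PG_shift definitions in Xen's
xen/include/asm-x86/mm.h as of the 4.x era on 64-bit builds; the exact
bit positions are an assumption about that version, not taken from this
thread.]

```python
# Decode a 64-bit Xen page "type" word into its mutually exclusive type,
# its flag bits, and its type-use count.  Field positions follow the
# PG_mask/PG_shift scheme from xen/include/asm-x86/mm.h (Xen 4.x, 64-bit);
# treat them as an assumption if checking against a different version.

BITS_PER_LONG = 64

def pg_shift(idx):
    return BITS_PER_LONG - idx

def pg_mask(x, idx):
    return x << pg_shift(idx)

# Mutually exclusive page types, stored in the top 4 bits.
TYPE_NAMES = {
    0: "none",
    1: "l1_page_table",
    2: "l2_page_table",
    3: "l3_page_table",
    4: "l4_page_table",
    5: "seg_desc_page",
    7: "writable_page",
}

def decode(type_word):
    """Return (type name, list of flag names, use count)."""
    type_bits = (type_word >> pg_shift(4)) & 0xF
    flags = []
    if type_word & pg_mask(1, 5):
        flags.append("pinned")       # PGT_pinned
    if type_word & pg_mask(1, 6):
        flags.append("validated")    # PGT_validated
    if type_word & pg_mask(1, 8):
        flags.append("partial")      # PGT_partial
    count = type_word & (pg_mask(1, 9) - 1)  # PGT_count_mask
    return TYPE_NAMES.get(type_bits, hex(type_bits)), flags, count

# The two values from the mm.c:2364 message above:
print(decode(0x7400000000000001))  # → ('writable_page', ['validated'], 1)
print(decode(0x1000000000000000))  # → ('l1_page_table', [], 0)
```

Read this way, the message says the mfn being pinned was still typed as
a validated writable page with one outstanding reference, while the pin
operation expected it to become an L1 page table, i.e. some writable
mapping of the would-be pagetable page was apparently still live, which
fits the device-mapper/multipath process context described above.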

-- 
----------------------------------------------------------------------
  ,''`.       [benco] | mailto: benco@xxxxxxx | silc: /msg benco
 : :' :  -------------------------------------------------------------
 `. `'           GPG publickey: http://www.acid.sk/pubkey.asc
   `-      KF  =  0DF6 0592 74D2 F17A DACF  A5C3 1720 CB7C F54C F429


