
Re: [Xen-devel] kernel BUG at arch/x86/xen/mmu.c:1860!



> That is interesting data. Can you give more details on what 2.6.31 kernel
> and hypervisor you are using? Have you tried to rev the hypervisor
> up to Xen 4.1.0-rc7-pre for example?
I can do you one better: here are links to the kernel tarball as built by me (2.6.31.14), its configuration, and my xen4.0.0 directory.

Tomorrow I will test 4.1.0 and 2.6.38, although I have to say: if one kernel works and the other doesn't, that seems to me like a kernel problem, not one of the hypervisor. Of course, since I don't know the inner workings of Xen, that point of view might be ... flawed.

It would be nice if you could define a set of parameters that could prove beneficial in provoking the bug, as that would make it easier for me to test different scenarios. Or, better yet, a script that will definitely provoke it within a given timeframe.

So far, even restarting multipathd _could_ trigger it for me.
It also occurred during bootup, and when it did, it kept on happening until I did a cold restart of the server. Does that make sense? Does data remain in the memory modules when I reboot a system (init 6)? And if that is so, what is the "Scrubbing memory ....." for that I see when Xen is loading?

One thing I would like to verify is that the bug only occurs when running the kernel under Xen, and not when it is running on its own.
I can't quite remember whether I tried that in 2010.

Today I ran a loop of 300 lvcreate, snapshot, lvdelete operations on a standalone 2.6.32-xen0 kernel and did not receive an error. I didn't really have the time to try and catch it that way running under Xen; I will do that tomorrow.
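
In case it helps with reproduction, here is a minimal sketch of that loop in Python. The volume group "vg0", the sizes and the LV names are placeholders I made up, not my actual setup:

#!/usr/bin/env python3
# Sketch of the lvcreate / snapshot / lvremove stress loop described above.
# The volume group "vg0", the sizes and the LV names are assumptions;
# adjust them to the local setup before running.
import subprocess

VG = "vg0"          # assumed volume group name
ITERATIONS = 300    # matches the 300 iterations mentioned above

def run(*cmd):
    # check=True stops the loop immediately if an LVM command fails,
    # e.g. because the kernel just hit the BUG.
    subprocess.run(cmd, check=True)

for i in range(ITERATIONS):
    run("lvcreate", "-L", "100M", "-n", "stress", VG)
    run("lvcreate", "-s", "-L", "32M", "-n", "stress-snap", VG + "/stress")
    run("lvremove", "-f", VG + "/stress-snap")
    run("lvremove", "-f", VG + "/stress")
    print("iteration %d/%d done" % (i + 1, ITERATIONS))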

I will report my findings.

best regards,


--
Andreas Olsowski


Attachment: smime.p7s
Description: S/MIME Cryptographic Signature

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

