
Re: [Xen-devel] Re: [PATCH] xen: correctly rebuild mfn list list after migration.



On Thu, 2010-10-21 at 18:10 +0100, Jeremy Fitzhardinge wrote: 
> On 10/21/2010 03:10 AM, Ian Campbell wrote:
> > On Thu, 2010-10-14 at 19:27 +0100, Jeremy Fitzhardinge wrote:
> >> On 10/14/2010 01:51 AM, Ian Campbell wrote:
> >>> On Thu, 2010-10-14 at 07:49 +0100, Ian Campbell wrote:
> >>>> On Thu, 2010-10-14 at 01:37 +0100, Jeremy Fitzhardinge wrote:
> >>>>> I'm getting this triggering at boot:
> >>>>>
> >>>>> PM: Adding info for No Bus:xen!gntdev
> >>>>> ------------[ cut here ]------------
> >>>>> kernel BUG at /home/jeremy/git/linux/arch/x86/xen/mmu.c:480! 
> >>>> Probably a consequence of the bogus attempt to skip over completely
> >>>> empty mid levels which you pointed out in your next mail.
> >>>>
> >>>> I guess I should test 64 bit guests and not just PAE ones and I guess
> >>>> pre-ballooning is likely to interact as well.
> >>> I'm only able to reproduce this by booting ballooned and then ballooning
> >>> up, are you doing that or were you seeing it just by booting?
> >>>
> >>> What are your memory and maxmem settings?
> >> memory=512, no maxmem.
> > The BUG_ON I added to alloc_p2m was wrong (it converted an mfn to an
> > address in the direct map and then compared it with a virtual address in
> > the kernel map, IIRC). Here's an updated patch which fixes all your previous
> > comments, this issue, and a few other bits and bobs:
> >
> > * s/p2m_mid_mfn_p/p2m_top_mfn_p/g
> > * Fix BUG_ON in alloc_p2m
> > * Skip correct number of pfns when mid == p2m_mid_missing
> > * Use correct value p2m_top_mfn[topidx] when mid == p2m_mid_missing
> 
> OK, it passed my sniff test - it boots and has survived a few
> save/restore cycles.

Great. 

> (I added a S-O-B for you.)

Oops, thanks.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

