Re: [Xen-devel] xen-4.7 regression when saving a PV guest
Sorry for the incomplete subject. Got interrupted while writing the email and
then forgot to complete it... :/

On 25.08.2016 17:48, Stefan Bader wrote:
> When I try to save a PV guest with 4G of memory using xen-4.7 I get the
> following error:
>
> II: Guest memory 4096 MB
> II: Saving guest state to file...
> Saving to /tmp/pvguest.save new xl format (info 0x3/0x0/1131)
> xc: info: Saving domain 23, type x86 PV
> xc: error: Bad mfn in p2m_frame_list[0]: Internal error
> xc: error: mfn 0x4eb1c8, max 0x820000: Internal error
> xc: error: m2p[0x4eb1c8] = 0xff7c8, max_pfn 0xbffff: Internal error
> xc: error: Save failed (34 = Numerical result out of range): Internal error
> libxl: error: libxl_stream_write.c:355:libxl__xc_domain_save_done: saving
> domain: domain did not respond to suspend request: Numerical result out of range
> Failed to save domain, resuming domain
> xc: error: Dom 23 not suspended: (shutdown 0, reason 255): Internal error
> libxl: error: libxl_dom_suspend.c:460:libxl__domain_resume: xc_domain_resume
> failed for domain 23: Invalid argument
> EE: Guest not off after save!
> FAIL
>
> From dmesg inside the guest:
> [    0.000000] e820: last_pfn = 0x100000 max_arch_pfn = 0x400000000
>
> Somehow I am slightly suspicious about
>
> commit 91e204d37f44913913776d0a89279721694f8b32
> libxc: try to find last used pfn when migrating
>
> since that seems to potentially lower ctx->x86_pv.max_pfn which is checked
> against in mfn_in_pseudophysmap(). Is that a known problem?
> With xen-4.6 and the same dom0/guest kernel version combination this does
> work.
>
> -Stefan

Attachment: signature.asc

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
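For illustration, a minimal standalone sketch of the kind of bounds check described in the report. The struct save_ctx, its field names, and the main() driver below are made-up assumptions for the example; only the function name mfn_in_pseudophysmap() and the numbers are taken from the message above, and the real check lives in the libxc save code rather than in this simplified form.

```c
/*
 * Sketch only: not the actual libxc sources.  Shows how a p2m frame list
 * entry can be rejected when the guest's max_pfn is lower than the pfn
 * that the m2p table reports for that frame.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef uint64_t xen_pfn_t;

/* Illustrative stand-in for the parts of the save context used here. */
struct save_ctx {
    xen_pfn_t max_mfn;   /* highest machine frame number on the host    */
    xen_pfn_t max_pfn;   /* highest pseudo-physical frame of the guest  */
    xen_pfn_t *m2p;      /* machine-to-physical translation table       */
};

/*
 * An mfn passes only if it is a valid host frame AND its m2p translation
 * falls at or below max_pfn.  If max_pfn is lowered below the pfn of a
 * frame that still backs the p2m frame list, this check fails and the
 * save aborts with "Numerical result out of range", as in the log above.
 */
static bool mfn_in_pseudophysmap(const struct save_ctx *ctx, xen_pfn_t mfn)
{
    return mfn <= ctx->max_mfn && ctx->m2p[mfn] <= ctx->max_pfn;
}

int main(void)
{
    /* Values taken from the error output above. */
    const xen_pfn_t mfn = 0x4eb1c8;

    struct save_ctx ctx = {
        .max_mfn = 0x820000,
        .max_pfn = 0xbffff,
        .m2p     = calloc(0x4eb1c9, sizeof(xen_pfn_t)),
    };
    if (!ctx.m2p)
        return 1;
    ctx.m2p[mfn] = 0xff7c8;          /* m2p[0x4eb1c8] from the log */

    /* 0xff7c8 > 0xbffff, so the frame is rejected just as in the log. */
    printf("mfn %#lx in pseudophysmap: %s\n", (unsigned long)mfn,
           mfn_in_pseudophysmap(&ctx, mfn) ? "yes" : "no");

    free(ctx.m2p);
    return 0;
}
```

Run standalone, the sketch rejects mfn 0x4eb1c8 because its m2p translation 0xff7c8 exceeds max_pfn 0xbffff, which is consistent with the suspicion above that the cited commit can lower ctx->x86_pv.max_pfn below the pfn of a frame still referenced by the p2m frame list.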