
Re: [Xen-devel] Error restoring DomU when using GPLPV



Hi,

> Yes, that's weird. Do you know what condition causes guest memory allocation
> failure on xc_domain_restore? Is it due to hitting the guest maxmem limit in
> Xen? If so, is maxmem the same value across multiple iterations of
> save/restore or migration?
Sorry, I have no idea about that. Maybe I need to add more logging inside the for(;;) loop in xc_domain_restore to see what differs between runs with and without ballooning down those pages.
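
For the maxmem question, one way to check would be to dump the domain's current allocation against its maxmem limit at the moment the restore fails. Below is a rough, untested sketch, assuming the Xen 3.4-era libxc interface (xc_interface_open() returning an int handle, and xc_dominfo_t exposing nr_pages/max_memkb); it is only an illustration, not something I have run:

/* Untested sketch: report a domain's current allocation vs. its maxmem
 * limit, to check whether the "Failed allocation" happens because the
 * domain is already at maxmem.  Assumes the Xen 3.4-era libxc API. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <xenctrl.h>

int main(int argc, char *argv[])
{
    int xc_handle;
    uint32_t domid;
    xc_dominfo_t info;

    if ( argc != 2 )
    {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }
    domid = atoi(argv[1]);

    xc_handle = xc_interface_open();
    if ( xc_handle < 0 )
    {
        perror("xc_interface_open");
        return 1;
    }

    if ( xc_domain_getinfo(xc_handle, domid, 1, &info) != 1 ||
         info.domid != domid )
    {
        fprintf(stderr, "xc_domain_getinfo failed for dom %u\n", (unsigned)domid);
        xc_interface_close(xc_handle);
        return 1;
    }

    /* max_memkb is in KiB; one page is 4 KiB, so shift by 2. */
    printf("dom %u: nr_pages=%lu, max_memkb=%lu (max pages=%lu)\n",
           (unsigned)domid, info.nr_pages, info.max_memkb, info.max_memkb >> 2);

    xc_interface_close(xc_handle);
    return 0;
}

Comparing nr_pages against the maxmem-derived page count right when the "Failed allocation" message appears should show whether the maxmem theory holds.
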
I did some migration tests with Linux/Windows PVHVM guests on Xen 3.4.

* I printed the value of "pfn = region_pfn_type[i] & ~XEN_DOMCTL_PFINFO_LTAB_MASK;" in xc_domain_restore.c. When the restore fails with the error "Failed allocation for dom 2: 33 extents of order 0", the pfn values are smaller than in a successful restore, so I think the failure is not due to hitting the guest maxmem limit in Xen. Is that correct?
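
The kind of instrumentation I have in mind is roughly the following, inside the per-page batch loop of tools/libxc/xc_domain_restore.c (a sketch only; region_pfn_type[], the XEN_DOMCTL_PFINFO_* macros and DPRINTF are the names that file already uses):

/* Sketch: extra debug output inside the per-page loop in
 * xc_domain_restore.c, decoding each entry the same way the existing
 * code does before it allocates/populates the page. */
pfn      = region_pfn_type[i] & ~XEN_DOMCTL_PFINFO_LTAB_MASK;
pagetype = region_pfn_type[i] &  XEN_DOMCTL_PFINFO_LTAB_MASK;
DPRINTF("restore: pfn=0x%lx pagetype=0x%lx\n", pfn, pagetype);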

* After comparing the difference between ballooning down the (gnttab+shinfo) pages and not doing so, I find that:

If the Windows PV driver balloons down those pages, there are many more pages of type XEN_DOMCTL_PFINFO_XTAB during the save process, and correspondingly more bogus/unmapped pages are skipped during the restore process. If the winpv driver does not balloon down those pages, only a few pages of type XEN_DOMCTL_PFINFO_XTAB have to be processed during save/restore.
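
To make this concrete, here is a small standalone demo (untested sketch; the macro values mirror xen/include/public/domctl.h, and the example pfn_type entries are made up) showing how a ballooned-out frame ends up typed XEN_DOMCTL_PFINFO_XTAB and is then skipped as bogus/unmapped on restore:

/* Untested sketch: classify pfn_type entries the way save/restore does.
 * The macro values mirror xen/include/public/domctl.h (UL suffix added
 * here only to keep the arithmetic obvious on 64-bit); the example
 * entries below are made up. */
#include <stdio.h>

#define XEN_DOMCTL_PFINFO_LTAB_SHIFT 28
#define XEN_DOMCTL_PFINFO_XTAB       (0xfUL << XEN_DOMCTL_PFINFO_LTAB_SHIFT)
#define XEN_DOMCTL_PFINFO_LTAB_MASK  (0xfUL << XEN_DOMCTL_PFINFO_LTAB_SHIFT)

int main(void)
{
    /* One ordinary page and one ballooned-out (invalid) page. */
    unsigned long region_pfn_type[] = {
        0x0001234UL,                            /* normal page            */
        0x00feffeUL | XEN_DOMCTL_PFINFO_XTAB,   /* ballooned-out/invalid  */
    };
    unsigned int i;

    for ( i = 0; i < sizeof(region_pfn_type)/sizeof(region_pfn_type[0]); i++ )
    {
        unsigned long pfn      = region_pfn_type[i] & ~XEN_DOMCTL_PFINFO_LTAB_MASK;
        unsigned long pagetype = region_pfn_type[i] &  XEN_DOMCTL_PFINFO_LTAB_MASK;

        if ( pagetype == XEN_DOMCTL_PFINFO_XTAB )
            printf("pfn 0x%lx: XTAB (bogus/unmapped) -> skipped on restore\n", pfn);
        else
            printf("pfn 0x%lx: type 0x%lx -> allocated and populated\n", pfn, pagetype);
    }
    return 0;
}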

* Another result with the winpv driver ballooning down those pages:
When doing save/restore for the second time, I find that p2m_size in the restore process becomes 0xfefff, which is smaller than the normal size 0x100000.
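
If I read the save side correctly, the HVM path derives p2m_size from the guest's maximum GPFN (max_gpfn + 1), so it might be worth checking what the hypervisor reports for that value before the first save and again after the first restore. A rough, untested sketch, assuming the Xen 3.4-era xc_memory_op()/XENMEM_maximum_gpfn interface:

/* Untested sketch: ask Xen for the guest's maximum GPFN, which (as far
 * as I can tell) is what the HVM save path uses to compute p2m_size
 * (max_gpfn + 1).  If this value shrinks after the first save/restore
 * of a guest that ballooned out the gnttab/shinfo frames, it would
 * explain p2m_size dropping from 0x100000 to 0xfefff. */
#include <stdio.h>
#include <stdlib.h>
#include <xenctrl.h>
#include <xen/memory.h>

int main(int argc, char *argv[])
{
    int xc_handle, max_gpfn;
    domid_t dom;

    if ( argc != 2 )
    {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }
    dom = atoi(argv[1]);

    xc_handle = xc_interface_open();
    if ( xc_handle < 0 )
    {
        perror("xc_interface_open");
        return 1;
    }

    max_gpfn = xc_memory_op(xc_handle, XENMEM_maximum_gpfn, &dom);
    if ( max_gpfn < 0 )
        fprintf(stderr, "XENMEM_maximum_gpfn failed for dom %u\n", (unsigned)dom);
    else
        printf("dom %u: max gpfn=0x%x => p2m_size would be 0x%x\n",
               (unsigned)dom, max_gpfn, max_gpfn + 1);

    xc_interface_close(xc_handle);
    return 0;
}

Running this against the guest before and after the first save/restore should show whether the reported max GPFN is what actually shrinks.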

Any suggestions about these test results? Or any ideas on how to resolve this problem in the winpv drivers or Xen?

Thanks
Annie.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

