Re: [Xen-devel] Error restoring DomU when using GPLPV
Hi,
I did some migration tests with Linux and Windows PVHVM guests on Xen 3.4.
* I printed the value of "pfn = region_pfn_type[i] &
~XEN_DOMCTL_PFINFO_LTAB_MASK;" in xc_domain_restore.c (see the decode
sketch below). When the restore fails with the error "Failed allocation
for dom 2: 33 extents of order 0", the pfn value is lower than when the
restore succeeds, so I don't think the failure is caused by hitting the
guest maxmem limit in Xen. Is that correct?
* After comparing the behaviour with and without ballooning down the
(gnttab+shinfo) pages, I find that:
If the Windows PV driver balloons those pages down, more pages have the
XEN_DOMCTL_PFINFO_XTAB type in the saving process, and correspondingly
more bogus/unmapped pages are skipped in the restoring process.
If the Windows PV driver does not balloon those pages down, only a few
pages of XEN_DOMCTL_PFINFO_XTAB type are processed during save/restore.
* Another result with the Windows PV driver ballooning those pages down:
when doing save/restore a second time, the p2m_size in the restoring
process becomes 0xfefff, which is smaller than the normal size of
0x100000 (my guess at why is sketched below).
Any suggestions about these test results? Or any ideas on how to resolve
this problem in the Windows PV driver or in Xen?
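
For reference, here is how I understand the page-type decoding on the
restore side. This is only a simplified sketch in the style of the
Xen 3.4 libxc code, not the exact upstream source; the constant values
are taken from xen/include/public/domctl.h:

/*
 * Simplified sketch of how xc_domain_restore.c decodes one entry of
 * region_pfn_type[]; not the exact upstream code.
 */
#define XEN_DOMCTL_PFINFO_LTAB_SHIFT  28
#define XEN_DOMCTL_PFINFO_XTAB        (0xfU << XEN_DOMCTL_PFINFO_LTAB_SHIFT) /* invalid page */
#define XEN_DOMCTL_PFINFO_LTAB_MASK   (0xfU << XEN_DOMCTL_PFINFO_LTAB_SHIFT)

static void decode_region_entry(unsigned long entry)
{
    unsigned long pfn      = entry & ~XEN_DOMCTL_PFINFO_LTAB_MASK;
    unsigned long pagetype = entry &  XEN_DOMCTL_PFINFO_LTAB_MASK;

    if ( pagetype == XEN_DOMCTL_PFINFO_XTAB )
    {
        /* Bogus/unmapped (e.g. ballooned-out) page: the restore side
         * skips it and does not populate any memory for this pfn. */
    }
    else
    {
        /* Normal page: counted towards nr_mfns, then populated and filled. */
    }
    (void)pfn;  /* this is the value I printed in my test */
}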
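
And my (unverified) guess about the 0xfefff p2m_size: if I read the HVM
path of xc_domain_save.c correctly, the p2m size written into the image
is derived from the guest's maximum gpfn, so ballooning out frames at the
very top of the address space would shrink it. Please correct me if the
line below is not actually what the save code does:

/* My reading of how the HVM save path sizes the p2m (Xen 3.4 era).
 * If the topmost frames have been ballooned out, XENMEM_maximum_gpfn
 * drops and so does p2m_size, e.g. 0xfeffe + 1 = 0xfefff instead of
 * 0xfffff + 1 = 0x100000. */
p2m_size = xc_memory_op(xc_handle, XENMEM_maximum_gpfn, &dom) + 1;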
I did more save/restore tests and compared the logs between the Linux and
Windows PVHVM guests. The two VMs have the same memory size.
Most of their logs are identical; the only difference between them is
again connected with pages of type XEN_DOMCTL_PFINFO_XTAB. According to
the comments in the code, XEN_DOMCTL_PFINFO_XTAB means an invalid page.
During the saving process the Linux PVHVM guest has 31 more invalid pages
than the Windows PVHVM guest. In the "for ( j = 0; j < batch; j++ )" loop
of xc_domain_save.c, the Linux PVHVM guest treats the pages with pfn
values between f2003 and f2021 as invalid pages, but the Windows PVHVM
guest treats them as normal pages.
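
For clarity, the check I am looking at inside that loop is roughly the
following (again a simplified sketch, not the exact upstream code; the
constants are the same ones as in the decode sketch above):

/* Simplified sketch of the per-batch type check in xc_domain_save.c. */
for ( j = 0; j < batch; j++ )
{
    if ( (pfn_type[j] & XEN_DOMCTL_PFINFO_LTAB_MASK) ==
          XEN_DOMCTL_PFINFO_XTAB )
    {
        /* Invalid (e.g. ballooned-out) page: skipped, no data is sent.
         * The Linux PVHVM guest reports pfns f2003..f2021 like this;
         * the Windows PVHVM guest with GPLPV reports them as normal. */
        continue;
    }
    /* Normal page: its contents are mapped and written to the stream. */
}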
Consequently, in the restoring process more memory is allocated for the
Windows PVHVM guest than for the Linux one. For example: when the Windows
PVHVM guest hits the issue "Failed allocation for dom 2: 33 extents of
order 0", the log shows that nr_mfns before
"xc_domain_memory_populate_physmap" (sketched below) is 33, whereas it is
only 14 at the same point when restoring the Linux PVHVM guest.
It seems there should be more invalid pages in the saving process of the
Windows PVHVM guest, but I have not been able to find the root cause.
Any suggestions?
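
The allocation step I mean is roughly this fragment of
xc_domain_restore.c (simplified; nr_mfns is the number of pfns in the
batch that are not of type XEN_DOMCTL_PFINFO_XTAB and therefore need
fresh memory):

/* Simplified sketch of the failing allocation step in xc_domain_restore.c.
 * With the Windows PVHVM guest nr_mfns is 33 at this point; with the
 * Linux PVHVM guest it is only 14, so Windows hits the limit first and
 * Xen logs "Failed allocation for dom 2: 33 extents of order 0". */
if ( nr_mfns &&
     (xc_domain_memory_populate_physmap(xc_handle, dom, nr_mfns,
                                        0 /* order */, 0, p2m_batch) != 0) )
{
    ERROR("Failed to allocate memory for batch");
    errno = ENOMEM;
    goto out;
}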
Thanks
Annie.