
Re: [Xen-devel] Error restoring DomU when using GPLPV



On 04/08/2009 10:01, "James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote:

> I assume that what happens is that the memory continues to grow until it
> hits max_pages, for some reason.  Is there a way to tell 'xm restore'
> not to delete the domain when the restore fails so I can see if nr_pages
> really does equal max_pages at the time that it dies?
> 
> The curious thing is that this only happens when GPLPV is running. A PV
> domU or a pure HVM DomU doesn't have this problem (presumably that would
> have been noticed during regression testing). It would be interesting to
> try a PVonHVM Linux DomU and see how that goes... hopefully someone who is
> having the problem with GPLPV also has PVonHVM domains they could test.

Okay, also this is a normal save/restore (no live migration of pages)?

Could the grant-table/shinfo Xenheap pages be confusing matters, I wonder?
The save process may save those pages out - since dom0 can map them it will
also save them - and then they get mistakenly restored as domheap pages at
the far end. All would work out okay in the end when you remap those special
pages during GPLPV restore, as the domheap pages would get implicitly freed.
But maybe there is no allocation headroom for the guest in the meantime, so
the restore fails.
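
If that is what is happening, it ought to show up as nr_pages closing in on
the domain's maximum while the restore is still in flight. Something along
these lines run from dom0 would let you watch for it (rough sketch only; the
libxc calls and xc_dominfo_t field names are from memory and may need
adjusting against your tree):

/* Rough sketch: watch a domain's current allocation against its maximum.
 * The libxc function and field names (xc_domain_getinfo, xc_dominfo_t,
 * nr_pages, max_memkb) are from memory and may need adjusting. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <xenctrl.h>

int main(int argc, char **argv)
{
    int xc_handle;
    uint32_t domid;
    xc_dominfo_t info;
    unsigned long max_pages;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
        return 1;
    }
    domid = atoi(argv[1]);

    xc_handle = xc_interface_open();
    if (xc_handle < 0) {
        perror("xc_interface_open");
        return 1;
    }

    if (xc_domain_getinfo(xc_handle, domid, 1, &info) != 1 ||
        info.domid != domid) {
        fprintf(stderr, "domain %u not found\n", (unsigned)domid);
        xc_interface_close(xc_handle);
        return 1;
    }

    /* max_memkb is in kilobytes; shift down to 4K pages to compare. */
    max_pages = info.max_memkb >> 2;
    printf("dom%u: nr_pages=%lu max_pages=%lu headroom=%ld\n",
           (unsigned)domid, info.nr_pages, max_pages,
           (long)max_pages - (long)info.nr_pages);

    xc_interface_close(xc_handle);
    return 0;
}

Run it in a loop against the restoring domain's id and see whether the
headroom really does hit zero just before the failure; if it does, that
points at the special pages being double-counted as described above.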

Just a theory. Maybe you could try unmapping grant/shinfo pages in the
suspend callback? This may not help for live migration though, where pages
get transmitted before the callback. It may be necessary to allow dom0 to
specify 'map me a page but not if it's special' and plumb that up to
xc_domain_save. It'd be good to have the theory proved first.
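
To give a feel for the dom0 side of that, here is a purely hypothetical
sketch - nothing like it exists in xc_domain_save today, and the
pfn_is_special query and the pfn numbers are made up for illustration only.
The point is just that the save loop would leave the special pfns out of
each batch instead of saving whatever it happens to be able to map:

/* Hypothetical sketch only -- nothing like this exists in xc_domain_save
 * today.  It illustrates the shape of the idea: dom0 learns which guest
 * pfns are backed by Xen-heap pages (shared info, grant-table frames)
 * and leaves them out of each save batch. */
#include <stdio.h>

/* Stand-in for a new "is this pfn special?" query, which would need a
 * new interface behind it; here it just checks a static list. */
static int pfn_is_special(unsigned long pfn,
                          const unsigned long *special, int nr_special)
{
    int i;
    for (i = 0; i < nr_special; i++)
        if (special[i] == pfn)
            return 1;
    return 0;
}

/* Build one batch of pfns to save, skipping the special ones so they
 * are never written out and never re-populated as domheap pages on
 * the restore side. */
static int build_batch(unsigned long start, unsigned long end,
                       const unsigned long *special, int nr_special,
                       unsigned long *batch, int max_batch)
{
    int n = 0;
    unsigned long pfn;

    for (pfn = start; pfn < end && n < max_batch; pfn++) {
        if (pfn_is_special(pfn, special, nr_special))
            continue;
        batch[n++] = pfn;
    }
    return n;
}

int main(void)
{
    /* Made-up pfns for illustration: say the shared info lives at
     * 0xfefff and grant-table frames at 0xfeffc..0xfeffe. */
    unsigned long special[] = { 0xfefff, 0xfeffc, 0xfeffd, 0xfeffe };
    unsigned long batch[1024];
    int i, n;

    n = build_batch(0xfeff0, 0xff000, special, 4, batch, 1024);
    printf("batch of %d pfns (special ones skipped):\n", n);
    for (i = 0; i < n; i++)
        printf("  0x%lx\n", batch[i]);
    return 0;
}

The real work would of course be in backing pfn_is_special with something
the hypervisor can answer, which is why proving the theory first is
worthwhile.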

 Cheers,
 Keir





 

