To: wayne.gong@xxxxxxxxxx, annie.li@xxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Error restoring DomU when using GPLPV
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Fri, 4 Sep 2009 14:28:51 -0700 (PDT)
Cc: Joshua West <jwest@xxxxxxxxxxxx>, James Harper <james.harper@xxxxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A9DF42A.2090908@xxxxxxxxxx>

I think I've tracked down the cause of this problem
in the hypervisor, but am unsure how best to fix it.

In tools/libxc/xc_domain_save.c, the static variable p2m_size
is said to be "number of pfns this guest has (i.e. number of
entries in the P2M)".  But apparently p2m_size is getting
set to a very large number (0x100000) regardless of the
maximum pseudophysical memory for the hvm guest.  As a result,
some "magic" pages in the 0xf0000-0xfefff range are getting
placed in the save file.  But since they are not "real"
pages, the restore process runs beyond the maximum number
of physical pages allowed for the domain and fails.
(The gpfns of the last 24 pages saved are f2020, fc000-fc012,
feffb, feffc, feffd, feffe.)
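
To make the failure concrete, here is a small standalone sketch
(illustrative only; the guest size is an assumed example, and this
is not the libxc code) of why restoring those trailing gpfns
overruns the domain's allocation:

#include <stdio.h>

int main(void)
{
    /* Assumed example: a 512MB hvm guest, i.e. 0x20000 4kB pages. */
    unsigned long max_pages = 0x20000;
    /* Observed above: p2m_size is 0x100000 regardless of guest size. */
    unsigned long p2m_size = 0x100000;
    /* A few of the trailing gpfns from the save file quoted above
     * (the fc000-fc012 run abbreviated to its endpoints). */
    unsigned long magic[] = { 0xf2020, 0xfc000, 0xfc012,
                              0xfeffb, 0xfeffc, 0xfeffd, 0xfeffe };
    unsigned long beyond = 0;

    for (unsigned int i = 0; i < sizeof(magic) / sizeof(magic[0]); i++)
        if (magic[i] < p2m_size && magic[i] >= max_pages)
            beyond++;  /* saved, but not backed by the guest's real RAM */

    printf("%lu saved gpfns lie beyond the domain's %#lx pages;\n"
           "allocating them on restore exceeds the domain's limit.\n",
           beyond, max_pages);
    return 0;
}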

p2m_size is set in "save" with a call to a memory_op hypercall
(XENMEM_maximum_gpfn), which for an hvm domain returns
d->arch.p2m->max_mapped_pfn.  I suspect that the meaning
of max_mapped_pfn changed at some point to better match
its name, but this changed the semantics of the hypercall
as used by xc_domain_restore, resulting in this curious
problem.
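
As a rough model of that semantic drift (paraphrased from the
description above; the helper name, the "+ 1", and the guest size
are assumptions, not the actual hypervisor or libxc source):

#include <stdio.h>

/* Model of what XENMEM_maximum_gpfn reports for an hvm domain, per
 * the text above: the highest gpfn that has a p2m entry, which
 * includes any magic pages mapped near the top of the space. */
static unsigned long max_gpfn_model(unsigned long highest_mapped_gpfn)
{
    return highest_mapped_gpfn;  /* i.e. d->arch.p2m->max_mapped_pfn */
}

int main(void)
{
    unsigned long ram_pages    = 0x20000;  /* assumed 512MB guest */
    unsigned long highest_gpfn = 0xfffff;  /* gives p2m_size == 0x100000 */

    /* The save code treats the result as a pfn count (the "+ 1" is
     * illustrative), so high magic mappings inflate p2m_size. */
    unsigned long p2m_size = max_gpfn_model(highest_gpfn) + 1;

    printf("pfns the guest actually has : %#lx\n", ram_pages);
    printf("p2m_size the save code uses : %#lx\n", p2m_size);
    return 0;
}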

Any thoughts on how to fix this?

> -----Original Message-----
> From: Annie Li 
> Sent: Tuesday, September 01, 2009 10:27 PM
> To: Keir Fraser
> Cc: Joshua West; James Harper; xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] Error restoring DomU when using GPLPV
> 
> 
> 
> > It seems this problem is connected with gnttab, not shareinfo.
> > I changed some grant table code in the winpv driver (it no longer
> > uses the balloon-down shinfo+gnttab method), and save/restore/migration
> > now works properly on Xen 3.4.
> >
> > What I changed is that the winpv driver now uses the
> > XENMEM_add_to_physmap hypercall to map only the grant table frames
> > that devices require, instead of mapping all 32 grant table pages
> > during initialization.  It seems those extra grant table mappings
> > were causing this problem.
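
For reference, on-demand mapping of a single grant-table frame with
XENMEM_add_to_physmap looks roughly like the sketch below.  The
structure and XENMAPSPACE_grant_table come from Xen's public
memory.h; the HYPERVISOR_memory_op wrapper name and the caller-chosen
gpfn are placeholders, not the actual GPLPV code.

/* Sketch only: map one grant-table frame where a device needs it,
 * instead of mapping all 32 frames at driver initialization. */
static int map_grant_frame(unsigned long frame, unsigned long gpfn)
{
    struct xen_add_to_physmap xatp;

    xatp.domid = DOMID_SELF;              /* act on our own domain */
    xatp.space = XENMAPSPACE_grant_table; /* source: the grant table */
    xatp.idx   = frame;                   /* which grant-table frame */
    xatp.gpfn  = gpfn;                    /* guest pfn to place it at */

    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}
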
> 
> I am wondering whether those extra grant table mappings are the
> root cause of the migration problem, or whether it only works by
> luck, as with Linux PVHVM?
> 
> Thanks
> Annie.
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel