xen-devel
Re: [Xen-devel] live migration can fail due to XENMEM_maximum_gpfn
On 6/10/08 17:47, "John Levon" <levon@xxxxxxxxxxxxxxxxx> wrote:
> dom 11 max gpfn 262143
> dom 11 max gpfn 262143
> dom 11 max gpfn 262143
> ....
> dom 11 max gpfn 985087
>
> (1Gb Solaris HVM domU).
>
> I'm not sure how this should be fixed?
You are correct that there is a general issue here if the guest arbitrarily
increases max_mapped_pfn. However, yours is more likely a specific problem
-- mappings being added in the 'I/O hole' 0xF0000000-0xFFFFFFFF by PV
drivers. This is strictly easier because we can fix it by assuming that no
new mappings will be created above 4GB after the domain starts/resumes
running. A simple fix, then, is for xc_domain_restore() to map something at
page 0xFFFFF (e.g., shared_info) if max_mapped_pfn is smaller than that.
This will bump max_mapped_pfn as high as necessary. Note that a newly-built
HVM guest will always have 0xFFFFF as minimum max_mapped_pfn since
xc_hvm_build() maps shared_info at 0xFFFFF to initialise it (arguably
xc_domain_restore() should be doing the same!).
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel