
Re: [Xen-devel] superpages lost after migration of HVM domU



On 26/04/17 16:43, Olaf Hering wrote:
> On Thu, Apr 20, Jan Beulich wrote:
>
>>>>> On 20.04.17 at 18:04, <olaf@xxxxxxxxx> wrote:
>>> On Thu, Apr 20, Andrew Cooper wrote:
>>>
>>>> As it currently stands, the sending side iterates from 0 to p2m_size
>>>> and sends every frame on the first pass.  This means we get PAGE_DATA
>>>> records linearly, in batches of 1024 frames (i.e. two aligned 2M
>>>> superpages' worth at a time).
>>> Is there a way to preserve 1G pages? This 380G domU I'm looking at is
>>> built with 4k:461390 2M:2341 1G:365 pages.
>> I think we've hashed out a possible way to deal with this, by
>> speculatively allocating 1G pages as long as the allocation cap for
>> the domain allows, subsequently punching holes into those pages
>> if we can't allocate any new pages anymore (due to otherwise
>> overrunning the cap).
> The result is not pretty. This HVM-only approach appears to work for a
> domU with "memory=3024" and localhost migration.
> Holes have to be punched as soon as possible to avoid "Over-allocation"
> errors from xenforeignmemory_map.  It would be nice if the receiver got a
> memory map upfront to avoid all these stunts...
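
For illustration, a rough sketch of the speculative-allocation idea being
discussed above - not Olaf's actual patch.  It assumes the existing libxc
calls xc_domain_populate_physmap_exact() and
xc_domain_decrease_reservation_exact(); the helper names, constants and
bookkeeping are made up:

#include <xenctrl.h>

#define SP_1G_SHIFT    18                 /* 2^18 4k frames per 1G page */
#define SP_1G_NR_PFNS  (1UL << SP_1G_SHIFT)

/* Speculatively back the aligned 1G region containing 'pfn' with a
 * single 1G extent, as long as the remaining allocation headroom
 * (frames_left) still allows it. */
static int back_with_1g(xc_interface *xch, uint32_t domid,
                        xen_pfn_t pfn, unsigned long *frames_left)
{
    xen_pfn_t base = pfn & ~(SP_1G_NR_PFNS - 1);

    if ( *frames_left < SP_1G_NR_PFNS )
        return -1;                        /* caller falls back to 2M/4k */

    if ( xc_domain_populate_physmap_exact(xch, domid, 1 /* nr_extents */,
                                          SP_1G_SHIFT, 0, &base) )
        return -1;

    *frames_left -= SP_1G_NR_PFNS;
    return 0;
}

/* Punch a 4k hole for a pfn that turns out not to be populated, handing
 * the frame back so the domain stays under its cap and a later
 * xenforeignmemory_map() does not fail with "Over-allocation". */
static int punch_hole(xc_interface *xch, uint32_t domid,
                      unsigned long *frames_left, xen_pfn_t pfn)
{
    if ( xc_domain_decrease_reservation_exact(xch, domid, 1 /* extent */,
                                              0 /* 4k order */, &pfn) )
        return -1;

    *frames_left += 1;
    return 0;
}

Tracking which pfns inside each speculatively allocated superpage actually
receive PAGE_DATA, and punching the holes promptly rather than at the end,
is the part Olaf found necessary to avoid the over-allocation errors.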

Oh - I was about to start working on this.  This is a pleasant surprise. :)

One of the many outstanding problems with migration is that there is no
memory map at all.  There really should be one, and it should be at
the head of the migration stream, along with other things currently
missing such as the CPUID policy.  (I'm working on this, but it isn't
going very fast.)
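
Purely as an illustration of what such a record could give the receiver, a
hypothetical layout (this is not an existing libxc/libxenguest stream
record):

#include <stdint.h>

/* Hypothetical: an e820-style memory map sent ahead of any PAGE_DATA
 * records, so the restore side knows which GFN ranges are RAM and how
 * large they are before choosing 4k/2M/1G allocations. */
struct rec_guest_memory_map_entry {
    uint64_t first_gfn;   /* start of the range  */
    uint64_t nr_frames;   /* length in 4k frames */
    uint32_t type;        /* RAM, MMIO hole, ... */
    uint32_t _res;
};

struct rec_guest_memory_map {
    uint32_t nr_entries;                          /* entries that follow */
    uint32_t _res;
    struct rec_guest_memory_map_entry entries[];  /* flexible array      */
};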

I'll review the patch as soon as I am free.

~Andrew
