
Re: [Xen-devel] Re: super page with live migration



It may be best to shatter all superpages on the suspend side at the start of live migration (not actually reallocate anything; just shatter the 2MB mappings in the p2m). Needs measuring, though.
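
A toy model of that idea (illustrative structures only, not the actual p2m interface) might look like this:

/* Toy model -- not the real Xen p2m code.  Each 2MB mapping is rewritten
 * as 512 4KB entries pointing at the same machine frames, so nothing is
 * copied or reallocated; only the mapping granularity changes, which lets
 * dirty tracking and resending work at 4KB granularity again. */
#define SP_PAGES 512UL                  /* 4KB pages per 2MB extent */

struct p2m_entry {
    unsigned long mfn;                  /* backing machine frame */
    int is_superpage;                   /* set on the first pfn of a 2MB mapping */
};

static void shatter_all_superpages(struct p2m_entry *p2m, unsigned long nr_pfns)
{
    for (unsigned long pfn = 0; pfn + SP_PAGES <= nr_pfns; pfn += SP_PAGES) {
        if (!p2m[pfn].is_superpage)
            continue;
        p2m[pfn].is_superpage = 0;
        /* Fill in the remaining 511 4KB entries of the extent. */
        for (unsigned long i = 1; i < SP_PAGES; i++)
            p2m[pfn + i].mfn = p2m[pfn].mfn + i;
    }
}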

 -- Keir

On 28/9/08 08:00, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:

Just realized that superpages may extend the service shutdown time in the migration process. Take an ideal convergence example for the non-superpage case: only dozens of pages may remain dirty in the last batch sent to the target side, so the service shutdown phase is short. However, when superpages are enabled, those dozens of dirty pages can be multiplied by a factor of 512 in the extreme case where each dirty page comes from a different 2MB superpage. The service shutdown phase can then be much longer, though I haven't measured it. Not sure how such inefficiency can be optimized...
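
For a rough sense of the numbers (the 50-dirty-page count is only an assumed example):

/* Back-of-the-envelope worst case for the final stop-and-copy batch: if
 * dirty tracking effectively works at 2MB granularity, each dirty spot
 * costs a whole 512-page extent.  The 50-page figure is an assumption
 * chosen purely for illustration. */
#include <stdio.h>

int main(void)
{
    unsigned long dirty_spots = 50;          /* assumed example */
    unsigned long page_bytes  = 4096;
    unsigned long sp_factor   = 512;         /* 2MB / 4KB */

    printf("4KB granularity: %lu KB\n", dirty_spots * page_bytes / 1024);
    printf("2MB granularity: %lu KB\n", dirty_spots * sp_factor * page_bytes / 1024);
    return 0;
}
/* Prints 200 KB vs. 102400 KB (100 MB) to send while the guest is paused. */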

Thanks,
Kevin


 

From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Keir Fraser
Sent: Saturday, September 27, 2008 6:49 PM
To: Ian Pratt; Zhai, Edwin
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] Re: super page with live migration

 
Yes, quite apart from anything else, you're just punting the problem of detecting superpages (or candidates for superpages) to the sender. I'm pretty confident it won't be slow and, for a live migration, the superpage logic in the restore path is only really going to kick in for the first batch (we could even add that as an extra condition), so it won't extend 'dead time' at all.

 -- Keir

On 27/9/08 11:06, "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:

 


I don't think the proposed logic is particularly tricky, and it won't be slow: the initial test of looking for the first page in a 2MB extent acts as a good filter.

It will work better than marking superpages on the sender.

Ian



----- Original Message -----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx <xen-devel-bounces@xxxxxxxxxxxxxxxxxxx>
To: Keir Fraser
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx <xen-devel@xxxxxxxxxxxxxxxxxxx>
Sent: Sat Sep 27 10:19:15 2008
Subject: [Xen-devel] Re: super page with live migration

Keir Fraser wrote:
> On 26/9/08 08:45, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:
>
>
>> As you know, we try to allocate superpages in xc_hvm_build for better
>> performance in the EPT case, but the same logic is missing in
>> xc_domain_restore.
>>
>> When trying to add the logic, I found it is blocked by the lazy populate
>> algorithm in restore -- it only populates the pages received from the
>> source side rather than populating everything at once.
>>
>> So the result is that an EPT guest suffers a performance drop after live
>> migration :(
>>
>> Do you have any plan to change the lazy populate algorithm? Its purpose,
>> I believe, is to make the restore process work without knowledge of the
>> guest memory layout.
>>
>
> Yes: if a pseudo-phys page is not yet populated in the target domain, AND it
> is the first page of a 2MB extent, AND no other pages in that extent are yet
> populated, AND the next pages in the save-image stream populate that extent
> in order, THEN allocate a superpage. This is made trickier by the fact that
> the next 511 pages (to complete the 2MB extent) might be split across a
> batch boundary. So we'll have to optimistically allocate a superpage in that
> case, and then shatter it if it turns out that the contiguous stream of
> pseudo-phys addresses is broken in the next batch.
>
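
A rough, self-contained sketch of that heuristic (toy bookkeeping and stand-in helpers, not the actual xc_domain_restore code) might look like this:

/* Toy sketch of the heuristic above -- not the real restore code.  The
 * alloc_superpage()/alloc_page()/shatter_superpage() helpers stand in for
 * whatever populate/decrease-reservation operations are really used. */
#define SP_PAGES     512UL              /* 4KB pages per 2MB extent */
#define INVALID_PFN  (~0UL)

static void alloc_superpage(unsigned long base_pfn)   { (void)base_pfn; }
static void alloc_page(unsigned long pfn)             { (void)pfn; }
static void shatter_superpage(unsigned long base_pfn) { (void)base_pfn; }

struct sp_state {
    unsigned char *populated;           /* one flag per pfn */
    unsigned long  next_expected;       /* next pfn of a speculative superpage,
                                           or INVALID_PFN if none pending */
};

static int extent_empty(const struct sp_state *s, unsigned long base_pfn)
{
    for (unsigned long i = 0; i < SP_PAGES; i++)
        if (s->populated[base_pfn + i])
            return 0;
    return 1;
}

/* Called for each pseudo-phys frame in the order it appears in the
 * save-image stream, which may cross batch boundaries. */
static void restore_pfn(struct sp_state *s, unsigned long pfn)
{
    if (s->next_expected != INVALID_PFN) {
        if (pfn == s->next_expected) {
            /* The stream is still filling the speculative 2MB extent. */
            s->populated[pfn] = 1;
            s->next_expected = ((pfn + 1) % SP_PAGES) ? pfn + 1 : INVALID_PFN;
            return;
        }
        /* Contiguous run broken: demote the speculative superpage. */
        shatter_superpage(s->next_expected & ~(SP_PAGES - 1));
        s->next_expected = INVALID_PFN;
    }

    if ((pfn % SP_PAGES) == 0 && extent_empty(s, pfn)) {
        /* First page of an entirely unpopulated 2MB extent: allocate a
         * superpage optimistically and hope the stream stays in order. */
        alloc_superpage(pfn);
        s->populated[pfn] = 1;
        s->next_expected = pfn + 1;
        return;
    }

    if (!s->populated[pfn]) {
        alloc_page(pfn);
        s->populated[pfn] = 1;
    }
}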

It's really tricky logic, and it may make the migration process longer :(

Maybe flagging the start of a superpage, as Tim suggested, would make it easier.



Thanks,


> -- Keir
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



