
Re: [Xen-devel] [PATCH] libxc: try to find last used pfn when migrating



On 27/11/15 17:42, Andrew Cooper wrote:
> On 27/11/15 14:50, Juergen Gross wrote:
>> For migration the last used pfn of a guest is needed to size the
>> logdirty bitmap and to serve as an upper bound of the page loop.
>> Unfortunately some pv-kernels advertise a much higher maximum pfn
>> than they are really using in order to support memory hotplug. This
>> leads to the Xen tools allocating much more memory during migration
>> than is really needed.
>>
>> Try to find the last used guest pfn of a pv-domu by scanning the p2m
>> tree from the last entry towards its start, searching for an entry
>> that is not invalid.
>>
>> Normally the mid pages of the p2m tree containing only invalid
>> entries are reused, so we can just scan the top page for identical
>> entries and skip all but the first one.
>>
>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>> ---
>>  tools/libxc/xc_sr_save.c        |  8 ++++----
>>  tools/libxc/xc_sr_save_x86_pv.c | 34 +++++++++++++++++++++++++++++++---
>>  2 files changed, 35 insertions(+), 7 deletions(-)
>>
>> diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
>> index 0c12e56..22b3f18 100644
>> --- a/tools/libxc/xc_sr_save.c
>> +++ b/tools/libxc/xc_sr_save.c
>> @@ -677,6 +677,10 @@ static int setup(struct xc_sr_context *ctx)
>>      DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
>>                                      &ctx->save.dirty_bitmap_hbuf);
>>  
>> +    rc = ctx->save.ops.setup(ctx);
>> +    if ( rc )
>> +        goto err;
>> +
>>      dirty_bitmap = xc_hypercall_buffer_alloc_pages(
>>                     xch, dirty_bitmap, NRPAGES(bitmap_size(ctx->save.p2m_size)));
>>      ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
>> @@ -692,10 +696,6 @@ static int setup(struct xc_sr_context *ctx)
>>          goto err;
>>      }
>>  
>> -    rc = ctx->save.ops.setup(ctx);
>> -    if ( rc )
>> -        goto err;
>> -
>>      rc = 0;
>>  
>>   err:
> 
> While moving this, the restore side should be kept consistent (it turns
> out it already is), and the docs should be updated.  There was an
> inaccuracy, so I went ahead and fixed it.
> 
> --8<--
> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
> index 64f6082..ae77155 100644
> --- a/tools/libxc/xc_sr_common.h
> +++ b/tools/libxc/xc_sr_common.h
> @@ -54,9 +54,11 @@ struct xc_sr_save_ops
>                            void **page);
>  
>      /**
> -     * Set up local environment to restore a domain.  This is called before
> -     * any records are written to the stream.  (Typically querying running
> -     * domain state, setting up mappings etc.)
> +     * Set up local environment to save a domain. (Typically querying
> +     * running domain state, setting up mappings etc.)
> +     *
> +     * This is called once before any common setup has occurred, allowing for
> +     * guest-specific adjustments to be made to common state.
>       */
>      int (*setup)(struct xc_sr_context *ctx);
>  
> @@ -121,8 +123,10 @@ struct xc_sr_restore_ops
>      int (*localise_page)(struct xc_sr_context *ctx, uint32_t type, void *page);
>  
>      /**
> -     * Set up local environment to restore a domain.  This is called before
> -     * any records are read from the stream.
> +     * Set up local environment to restore a domain.
> +     *
> +     * This is called once before any common setup has occurred, allowing for
> +     * guest-specific adjustments to be made to common state.
>       */
>      int (*setup)(struct xc_sr_context *ctx);
>  
> --8<--
> 
> Feel free to fold this into your patch, or I can submit it alone as a
> cleanup prerequisite for your functional change below.

I'll fold it in, thanks.
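
For reference, the approach described in the commit message boils down to
something like the sketch below. This is only an illustration with
hypothetical names (top, FPP, INVALID_MFN, find_max_used_pfn) and a
simplified two-level p2m layout, not the code from the patch:

/*
 * Sketch only: identical trailing top-level entries all point at one
 * shared "all invalid" leaf page, so skip them (keeping the first) and
 * then walk the remaining leaf pages backwards for the last entry that
 * is not invalid.
 */
#define INVALID_MFN (~0UL)
#define FPP         512UL   /* assumed number of p2m entries per leaf page */

static unsigned long find_max_used_pfn(unsigned long *const *top,
                                       unsigned long n_top)
{
    unsigned long t, i;

    /* Skip trailing top entries identical to their predecessor. */
    for ( t = n_top - 1; t > 0 && top[t] == top[t - 1]; --t )
        ;

    /* Scan the leaf pages backwards for the last valid entry. */
    for ( ; ; --t )
    {
        for ( i = FPP; i-- > 0; )
            if ( top[t][i] != INVALID_MFN )
                return t * FPP + i;
        if ( t == 0 )
            break;
    }

    return 0;   /* nothing valid found */
}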


Juergen
