
Re: [Xen-devel] [PATCH] libxc: try to find last used pfn when migrating



On 30/11/15 08:17, Juergen Gross wrote:
> On 27/11/15 18:16, Andrew Cooper wrote:
>> On 27/11/15 15:53, Juergen Gross wrote:
>>> On 27/11/15 16:33, Wei Liu wrote:
>>>> On Fri, Nov 27, 2015 at 03:50:53PM +0100, Juergen Gross wrote:
>>>>> For migration the last used pfn of a guest is needed to size the
>>>>> logdirty bitmap and as an upper bound of the page loop. Unfortunately
>>>>> there are pv-kernels advertising a much higher maximum pfn than they
>>>>> really use, in order to support memory hotplug. This leads to
>>>>> allocating much more memory in the Xen tools during migration than
>>>>> is really needed.
>>>>>
>>>>> Try to find the last used guest pfn of a pv-domu by scanning the p2m
>>>>> tree from the last entry towards its start, searching for the first
>>>>> entry that is not invalid.
>>>>>
>>>>> Normally the mid pages of the p2m tree which contain only invalid
>>>>> entries are reused, so we can just scan the top page for identical
>>>>> entries and skip all but the first of them.
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>>>> ---
>>>>>  tools/libxc/xc_sr_save.c        |  8 ++++----
>>>>>  tools/libxc/xc_sr_save_x86_pv.c | 34 +++++++++++++++++++++++++++++++---
>>>>>  2 files changed, 35 insertions(+), 7 deletions(-)
>>>>>
>>>>> diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
>>>>> index 0c12e56..22b3f18 100644
>>>>> --- a/tools/libxc/xc_sr_save.c
>>>>> +++ b/tools/libxc/xc_sr_save.c
>>>>> @@ -677,6 +677,10 @@ static int setup(struct xc_sr_context *ctx)
>>>>>      DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
>>>>>                                      &ctx->save.dirty_bitmap_hbuf);
>>>>>  
>>>>> +    rc = ctx->save.ops.setup(ctx);
>>>>> +    if ( rc )
>>>>> +        goto err;
>>>>> +
>>>>>      dirty_bitmap = xc_hypercall_buffer_alloc_pages(
>>>>>                     xch, dirty_bitmap,
>>>>>                     NRPAGES(bitmap_size(ctx->save.p2m_size)));
>>>>>      ctx->save.batch_pfns = malloc(MAX_BATCH_SIZE *
>>>>>                                    sizeof(*ctx->save.batch_pfns));
>>>>> @@ -692,10 +696,6 @@ static int setup(struct xc_sr_context *ctx)
>>>>>          goto err;
>>>>>      }
>>>>>  
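
(Aside for readers of the archive: the scan the commit message describes
boils down to something like the sketch below.  The function and macro
names here are illustrative rather than the exact ones in the patch, and
the real code avoids touching every entry by first skipping top-level
p2m entries which all point at the same reused, all-invalid mid page.)

    /* Simplified sketch, not the patch itself: walk a flattened p2m
     * backwards and report the last pfn which maps to a real mfn.
     * INVALID_P2M_ENTRY is the usual "no mfn here" marker. */
    static xen_pfn_t find_last_used_pfn(const xen_pfn_t *p2m,
                                        xen_pfn_t p2m_size)
    {
        xen_pfn_t pfn;

        for ( pfn = p2m_size; pfn-- > 0; )
            if ( p2m[pfn] != INVALID_P2M_ENTRY )
                return pfn;

        return 0; /* p2m entirely invalid; fall back to pfn 0 */
    }
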
>>>> p2m_size is set in two places for PV guests.  Since we are now relying
>>>> on ops.setup to set p2m_size, the invocation of xc_domain_nr_gpfns in
>>>> xc_domain_save should be removed, so that we only have one place where
>>>> p2m_size is set.
>>> I wasn't sure whether this is needed in the save case only. If it is,
>>> I can make the change, of course. Andrew?
> I got this one wrong, sorry. I was referring to the xc_domain_nr_gpfns()
> call in x86_pv_domain_info(). And there the question really applies:
>
> Do we need this call at all? I don't think it is needed for the
> restore operation. At least in my tests the upper pfn bound was always
> taken from the data stream, not from xc_domain_nr_gpfns().

x86_pv_domain_info() was the result of attempting to consolidate all the
random checks for PV guests, for both the save and restore sides,
although most of the callsites subsequently vanished.

The function as a whole gets used several times on the restore side, and
once on the save side.  Unilaterally rewriting ctx->x86_pv.max_pfn is not
the best option, so it would probably be best to disband
x86_pv_domain_info() altogether and hoist the width/p2m checks into the
relevant callers.
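
(Purely illustrative of the shape that would take; the helper names
below are invented for the example, not existing libxc functions:)

    /* Hypothetical caller after the hoisting: each side performs only
     * the checks it actually needs, instead of calling
     * x86_pv_domain_info(). */
    rc = x86_pv_check_width(ctx);        /* 32bit vs 64bit guest? */
    if ( rc )
        goto err;

    rc = x86_pv_map_and_size_p2m(ctx);   /* sets ctx->x86_pv.max_pfn */
    if ( rc )
        goto err;
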

>
>> That is most likely a byproduct of this code's somewhat-organic growth
>> over 18 months.
>>
>> The check in xc_domain_save() is left over from legacy migration.  It
>> was present there because legacy migration merged the page type into
>> the upper 4 bits of the unsigned long pfn, meaning that a pfn of
>> (2^28)-1 was the INVALID_PFN representation in the 32bit stream.
>>
>> The check is less important for migration v2, but I left it in as an
>> upper sanity check.
>>
>> I am not averse to removing the check if we are no longer going to
>> believe the result of XENMEM_maximum_gpfn, and requiring that setup()
>> get and sanity-check the domain size.
> As I'm planning to support migration of larger domains in the future,
> I think the check has to go away. Why not now?

Ideally, there should still be a sanity check.  Ian's idea of the caller
passing the expected size seems like a good candidate.
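
(Roughly, with the caveat that expected_max_pfn below is a hypothetical
caller-supplied value, not an existing parameter:)

    /* Hypothetical sanity check: rather than believing the bound the
     * guest advertises, compare it against what the caller expects
     * the domain to contain. */
    if ( ctx->x86_pv.max_pfn > expected_max_pfn )
    {
        ERROR("Guest advertises max_pfn %#lx, caller expects at most %#lx",
              ctx->x86_pv.max_pfn, expected_max_pfn);
        return -1;
    }
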

>
>> (Incidentally, it seems crazy that we make a XENMEM_maximum_gpfn
>> hypercall to read a value from the shared info page, which is also
>> (about to be) mapped locally.)
> Indeed. It should be necessary for hvm guests only.

I agree.
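
(For reference, once the shared info page is mapped the PV side can pull
the bound straight out of it, along these lines:)

    /* Sketch: read the guest's advertised p2m bound directly from the
     * already-mapped shared info page instead of making the
     * XENMEM_maximum_gpfn hypercall.  GET_FIELD is the width-aware
     * accessor used elsewhere on the save side. */
    ctx->x86_pv.max_pfn = GET_FIELD(ctx->x86_pv.shinfo, arch.max_pfn,
                                    ctx->x86_pv.width) - 1;
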

~Andrew
