Re: [Xen-devel] [PATCH v14 09/11] x86/domctl: Don't pause the whole domain if only getting vcpu state



>>> On 25.07.18 at 14:14, <aisaila@xxxxxxxxxxxxxxx> wrote:
> This patch is focused on moving the for loop to the caller, so
> that we can now save the info for a single vcpu instance with the
> save_one handlers.
> 
> Signed-off-by: Alexandru Isaila <aisaila@xxxxxxxxxxxxxxx>

First of all, I'd appreciate it if this patch were last in the series,
after all the infrastructure changes have been put in place.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -793,6 +793,14 @@ static int hvm_save_cpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
>      struct segment_register seg;
>      struct hvm_hw_cpu ctxt = {};
>  
> +    /*
> +     * We don't need to save state for a vcpu that is down; the restore
> +     * code will leave it down if there is nothing saved.
> +     */
> +    if ( v->pause_flags & VPF_down )
> +        return 0;
> +
> +
>      /* Architecture-specific vmcs/vmcb bits */
>      hvm_funcs.save_cpu_ctxt(v, &ctxt);
>  
> @@ -897,13 +905,6 @@ static int hvm_save_cpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>  
>      for_each_vcpu ( d, v )
>      {
> -        /*
> -         * We don't need to save state for a vcpu that is down; the restore
> -         * code will leave it down if there is nothing saved.
> -         */
> -        if ( v->pause_flags & VPF_down )
> -            continue;

Why is this not done in the respective prereq patch?

> @@ -1196,7 +1197,7 @@ static int hvm_save_cpu_xsave_states_one(struct vcpu *v, hvm_domain_context_t *h)
>      unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
>      int err = 0;
>  
> -    if ( !cpu_has_xsave )
> +    if ( !cpu_has_xsave || !xsave_enabled(v) )
>          return 0;   /* do nothing */
>  
>      err = _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size);
> @@ -1221,8 +1222,6 @@ static int hvm_save_cpu_xsave_states(struct domain *d, hvm_domain_context_t *h)
>  
>      for_each_vcpu ( d, v )
>      {
> -        if ( !xsave_enabled(v) )
> -            continue;

Same here. The goal of the prereq patches is that the patch
here does not have to touch anything other than the abstract
logic.

> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -138,9 +138,12 @@ size_t hvm_save_size(struct domain *d)
> int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>                   XEN_GUEST_HANDLE_64(uint8) handle, uint64_t *bufsz)
>  {
> -    int rv;
> +    int rv = 0;
>      hvm_domain_context_t ctxt = { };
>      const struct hvm_save_descriptor *desc;
> +    bool is_single_instance = false;
> +    uint32_t off = 0;
> +    struct vcpu *v;
>  
>      if ( d->is_dying ||
>           typecode > HVM_SAVE_CODE_MAX ||
> @@ -148,43 +151,94 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>           !hvm_sr_handlers[typecode].save )
>          return -EINVAL;
>  
> +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
> +        instance < d->max_vcpus )
> +        is_single_instance = true;

It seems quite pointless to allow instance >= d->max_vcpus to
progress any further. In fact, at this point all HVMSR_PER_VCPU
record types ought to have a save_one() handler, and hence I'm
thinking you could get away without an is_single_instance local
variable.
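
For illustration only, the early check hinted at above might look
something like the fragment below inside hvm_save_one(); the exact
error value (-ENOENT here) is my guess rather than anything the
existing code prescribes:

    /*
     * Sketch: reject per-vCPU requests with an out-of-range instance
     * up front, so the rest of hvm_save_one() never needs an
     * is_single_instance flag.
     */
    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
         instance >= d->max_vcpus )
        return -ENOENT;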

>      ctxt.size = hvm_sr_handlers[typecode].size;
> -    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU )
> +    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
> +        instance == d->max_vcpus )

And as a result I don't understand this addition.

I'm afraid the rest of the changes to this function would then need to
change too much as a result, so I'm skipping them for now.

> @@ -196,7 +250,9 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)

I don't think the remaining changes actually belong in a patch with
a title like this one's.

>      struct hvm_save_header hdr;
>      struct hvm_save_end end;
>      hvm_save_handler handler;
> -    unsigned int i;
> +    hvm_save_one_handler save_one_handler;
> +    unsigned int i, rc;

rc would not normally be of unsigned type. In any case I'm struggling
to understand why you introduce the variable at all; you don't even
log its value in the error case(s).

> +    struct vcpu *v = NULL;

Pointless initializer afaict.
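
Purely as an illustration of the two points above (and assuming rc is
kept at all), the declarations would then be along the lines of:

    int rc;          /* signed, matching the handlers' return values */
    struct vcpu *v;  /* no initializer needed; for_each_vcpu() assigns it */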

> @@ -224,11 +280,32 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>      for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
>      {
>          handler = hvm_sr_handlers[i].save;
> -        if ( handler != NULL )
> +        save_one_handler = hvm_sr_handlers[i].save_one;
> +        if ( save_one_handler != NULL )
>          {
>              printk(XENLOG_G_INFO "HVM%d save: %s\n",
>                     d->domain_id, hvm_sr_handlers[i].name);

Please also log the vCPU ID (perhaps using "HVM %pv save: ...", and
also on the error path a few lines down; a rough sketch follows at the
end of this block), at the same time making this message
distinguishable from ...

> -            if ( handler(d, h) != 0 )
> +            for_each_vcpu ( d, v )
> +            {
> +                rc = save_one_handler(v, h);
> +
> +                if( rc != 0 )
> +                {
> +                    printk(XENLOG_G_ERR
> +                           "HVM%d save: failed to save type %"PRIu16"\n",
> +                           d->domain_id, i);
> +                    return -EFAULT;
> +                }
> +            }
> +        }
> +        else if ( handler != NULL )
> +        {
> +            printk(XENLOG_G_INFO "HVM%d save: %s\n",
> +                   d->domain_id, hvm_sr_handlers[i].name);

... this one and ...

> +            rc = handler(d, h);
> +
> +            if( rc != 0 )
>              {
>                  printk(XENLOG_G_ERR
>                         "HVM%d save: failed to save type %"PRIu16"\n",

... for the error path from this one.
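
To illustrate, one possible shape of the per-vCPU branch with the
suggested logging; this is a sketch only, assuming the loop structure
from the patch, and relying on %pv printing the domain and vCPU IDs
together:

            for_each_vcpu ( d, v )
            {
                /* Log per vCPU, so the message carries the vCPU ID. */
                printk(XENLOG_G_INFO "HVM %pv save: %s\n",
                       v, hvm_sr_handlers[i].name);

                rc = save_one_handler(v, h);
                if ( rc != 0 )
                {
                    printk(XENLOG_G_ERR
                           "HVM %pv save: failed to save type %"PRIu16"\n",
                           v, i);
                    return -EFAULT;
                }
            }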

Jan

