
Re: [Xen-devel] [PATCH v14 10/11] x86/hvm: Remove redundant save functions



>>> On 25.07.18 at 14:14, <aisaila@xxxxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -516,8 +516,9 @@ static const struct hvm_mmio_ops hpet_mmio_ops = {
>  };
>  
>  
> -static int hpet_save(struct domain *d, hvm_domain_context_t *h)
> +static int hpet_save(struct vcpu *vcpu, hvm_domain_context_t *h)

Please use v consistently here.

> @@ -785,10 +770,10 @@ static int hvm_load_tsc_adjust(struct domain *d, hvm_domain_context_t *h)
>  }
>  
>  HVM_REGISTER_SAVE_RESTORE(TSC_ADJUST, hvm_save_tsc_adjust,
> -                          hvm_save_tsc_adjust_one,
> +                          NULL,
>                            hvm_load_tsc_adjust, 1, HVMSR_PER_VCPU);

I think there's still an ordering issue with these last parts of the
series: you shouldn't need to re-introduce NULL here. The abstract
logic should first be switched to no longer use the .save handlers,
at which point you can simply drop the field at the same time as you
rename the functions.

> --- a/xen/arch/x86/hvm/save.c
> +++ b/xen/arch/x86/hvm/save.c
> @@ -174,7 +174,7 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>              rv = hvm_sr_handlers[typecode].save_one(d->vcpu[instance],
>                                                      &ctxt);
>          else
> -            rv = hvm_sr_handlers[typecode].save(d, &ctxt);
> +            rv = hvm_sr_handlers[typecode].save(d->vcpu[instance], &ctxt);

So this sits on the is_single_instance path and hence instance has
been bounds checked.
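
For reference, the guarding I mean sits further up in hvm_save_one();
roughly (the exact shape in the tree may differ):

    /* Single-instance, per-vCPU path: instance was validated before we
     * get here, so d->vcpu[instance] is a safe dereference below. */
    if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU &&
         instance >= d->max_vcpus )
        return -ENOENT;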

> @@ -207,7 +207,8 @@ int hvm_save_one(struct domain *d, unsigned int typecode, unsigned int instance,
>      {
>          for_each_vcpu ( d, v )
>          {
> -            if ( (rv = hvm_sr_handlers[typecode].save(d, &ctxt)) != 0 )
> +            if ( (rv = hvm_sr_handlers[typecode].save(d->vcpu[instance],
> +                                                      &ctxt)) != 0 )

But aren't you potentially accessing the array beyond its bounds here?
Why don't you simply pass v? Otoh - why is this in a for_each_vcpu()
loop at all? Anyway, as said, I think the previous patch needs quite a
bit of work in this area. As a separate remark though - please make
sure your series can be applied in multiple steps, i.e. it must not
break things at any patch boundary.
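
To illustrate the suggested change (a sketch only; the surrounding
error handling is elided):

    for_each_vcpu ( d, v )
    {
        /* Use the loop's own v; d->vcpu[instance] isn't bounds
         * checked on this path and may index past the array. */
        if ( (rv = hvm_sr_handlers[typecode].save(v, &ctxt)) != 0 )
            break;
    }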

> @@ -280,14 +280,13 @@ int hvm_save(struct domain *d, hvm_domain_context_t *h)
>      for ( i = 0; i <= HVM_SAVE_CODE_MAX; i++ )
>      {
>          handler = hvm_sr_handlers[i].save;
> -        save_one_handler = hvm_sr_handlers[i].save_one;
> -        if ( save_one_handler != NULL )
> +        if ( handler != NULL )
>          {
>              printk(XENLOG_G_INFO "HVM%d save: %s\n",
>                     d->domain_id, hvm_sr_handlers[i].name);
>              for_each_vcpu ( d, v )
>              {
> -                rc = save_one_handler(v, h);
> +                rc = handler(v, h);

Aren't you invoking HVMSR_PER_DOM handlers now once per vCPU
instead of once per domain?
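
What I'd expect instead is a dispatch on the handler's kind, along
these lines (a sketch; error handling omitted, field names as in
today's hvm_sr_handlers[]):

    if ( hvm_sr_handlers[i].kind == HVMSR_PER_VCPU )
    {
        /* Per-vCPU state: invoke the handler once for every vCPU. */
        for_each_vcpu ( d, v )
            rc = handler(v, h);
    }
    else
        /* Per-domain state: invoke the handler exactly once. */
        rc = handler(d->vcpu[0], h);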

> --- a/xen/include/asm-x86/hvm/save.h
> +++ b/xen/include/asm-x86/hvm/save.h
> @@ -95,7 +95,7 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
>   * The save handler may save multiple instances of a type into the buffer;
>   * the load handler will be called once for each instance found when
>   * restoring.  Both return non-zero on error. */
> -typedef int (*hvm_save_handler) (struct domain *d, 
> +typedef int (*hvm_save_handler) (struct  vcpu *v,

Too many spaces after struct.
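
I.e. simply:

    typedef int (*hvm_save_handler)(struct vcpu *v,
                                    hvm_domain_context_t *h);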

Jan


