
Re: [Xen-devel] [PATCH v4 1/2] x86/hvm: Introduce *save_one() functions



>>> On 07.05.18 at 10:24, <aisaila@xxxxxxxxxxxxxxx> wrote:
> This patch introduces save_one() functions. They will be called in the
> *save() so we can extract data for a single instance.

Mostly fine, but please split this up into one patch per save type.

> --- a/xen/arch/x86/cpu/mcheck/vmce.c
> +++ b/xen/arch/x86/cpu/mcheck/vmce.c
> @@ -349,6 +349,14 @@ int vmce_wrmsr(uint32_t msr, uint64_t val)
>      return ret;
>  }
>  
> +void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_vmce_vcpu *ctxt)

static (also elsewhere)
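
I.e. (illustrative only, body elided):

    static void vmce_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_vmce_vcpu *ctxt)
    {
        /* fill *ctxt from v, as the patch already does */
    }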

> @@ -1173,6 +1184,18 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt,
>                                             save_area) + \
>                                    xstate_ctxt_size(xcr0))
>  
> +void hvm_save_cpu_xsave_states_one(struct vcpu *v, struct hvm_hw_cpu_xsave **ctx, hvm_domain_context_t *h)

This is inconsistent with the others: why the extra level of indirection for
ctx? And why pass h at all?
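
For consistency with the other *_save_one() helpers I'd expect something like
this (untested sketch; the body would simply fill *ctxt from v, leaving the
size calculation and the advancing of h to the caller):

    static void hvm_save_cpu_xsave_states_one(struct vcpu *v,
                                              struct hvm_hw_cpu_xsave *ctxt)
    {
        /* fill *ctxt from v; no need for **ctx or h here */
    }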

> +{
> +    unsigned int size = HVM_CPU_XSAVE_SIZE(v->arch.xcr0_accum);
> +    struct hvm_hw_cpu_xsave *ctxt = * ctx;
> +
> +    h->cur += size;

This belongs in the caller afaict.
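
I.e. the caller would obtain the pointer into the stream and advance h->cur
itself, roughly as the existing hvm_save_cpu_xsave_states() does (sketch,
untested):

    if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, size) )
        return 1;
    ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
    h->cur += size;
    hvm_save_cpu_xsave_states_one(v, ctxt);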

> @@ -1339,6 +1358,39 @@ static const uint32_t msrs_to_send[] = {
>  };
>  static unsigned int __read_mostly msr_count_max = ARRAY_SIZE(msrs_to_send);
>  
> +int hvm_save_cpu_msrs_one(struct vcpu *v, struct hvm_msr **ctx, hvm_domain_context_t *h)

Same as above; I can't even spot where you use h in this function.
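
I.e. here too a plain

    static int hvm_save_cpu_msrs_one(struct vcpu *v, struct hvm_msr *ctxt)

ought to suffice (sketch), with the entry setup and h->cur handling kept in
the caller.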

>  static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
>  {
> -    int i;
>      struct vcpu *v;
>      struct hvm_hw_mtrr hw_mtrr;
> -    struct mtrr_state *mtrr_state;
>      /* save mtrr&pat */

Please take the opportunity and add a blank line after the declarations.
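
I.e. (just the top of the function):

    static int hvm_save_mtrr_msr(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        struct hvm_hw_mtrr hw_mtrr;

        /* save mtrr&pat */
        ...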

> --- a/xen/arch/x86/hvm/viridian.c
> +++ b/xen/arch/x86/hvm/viridian.c
> @@ -1028,6 +1028,12 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
>  HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
>                            viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
>  
> +void viridian_save_vcpu_ctxt_one(struct vcpu *v, struct hvm_viridian_vcpu_context *ctxt)
> +{
> +    ctxt->vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
> +    ctxt->vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
> +}
> +
>  static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>  {
>      struct vcpu *v;
> @@ -1036,10 +1042,9 @@ static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
>          return 0;
>  
>      for_each_vcpu( d, v ) {
> -        struct hvm_viridian_vcpu_context ctxt = {
> -            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
> -            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
> -        };
> +        struct hvm_viridian_vcpu_context ctxt;
> +
> +        viridian_save_vcpu_ctxt_one(v, &ctxt);

There is a reason ctxt has an initializer: You're now leaking 7 bytes of
hypervisor stack data (through the _pad[] array).
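
The simplest fix is presumably to keep zero-initializing at the call site,
e.g. (sketch; alternatively the helper could memset() the structure before
filling it):

        struct hvm_viridian_vcpu_context ctxt = {};

        viridian_save_vcpu_ctxt_one(v, &ctxt);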

Jan


