
Re: [Xen-devel] [PATCH v11 07/11] x86/hvm: Introduce viridian_save_vcpu_ctxt_one() func



On Fri, 2018-07-13 at 05:53 -0600, Jan Beulich wrote:
> >>> On 13.07.18 at 13:14, <aisaila@xxxxxxxxxxxxxxx> wrote:
> > On Fri, 2018-07-13 at 04:34 -0600, Jan Beulich wrote:
> > > >>> On 13.07.18 at 11:04, <aisaila@xxxxxxxxxxxxxxx> wrote:
> > > > --- a/xen/arch/x86/hvm/viridian.c
> > > > +++ b/xen/arch/x86/hvm/viridian.c
> > > > @@ -1026,20 +1026,26 @@ static int viridian_load_domain_ctxt(struct domain *d, hvm_domain_context_t *h)
> > > >  HVM_REGISTER_SAVE_RESTORE(VIRIDIAN_DOMAIN, viridian_save_domain_ctxt,
> > > >                            viridian_load_domain_ctxt, 1, HVMSR_PER_DOM);
> > > >  
> > > > -static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> > > > +static int viridian_save_vcpu_ctxt_one(struct vcpu *v, hvm_domain_context_t *h)
> > > >  {
> > > > -    struct vcpu *v;
> > > > +    struct hvm_viridian_vcpu_context ctxt;
> > > >  
> > > > -    if ( !is_viridian_domain(d) )
> > > > +    if ( !is_viridian_domain(v->domain) )
> > > >          return 0;
> > > >  
> > > > -    for_each_vcpu( d, v ) {
> > > > -        struct hvm_viridian_vcpu_context ctxt = {
> > > > -            .vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw,
> > > > -            .vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending,
> > > > -        };
> > > > +    memset(&ctxt, 0, sizeof(ctxt));
> > > > +    ctxt.vp_assist_msr = v->arch.hvm_vcpu.viridian.vp_assist.msr.raw;
> > > > +    ctxt.vp_assist_pending = v->arch.hvm_vcpu.viridian.vp_assist.pending;
> > > >  
> > > > -        if ( hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt) != 0 )
> > > > +    return hvm_save_entry(VIRIDIAN_VCPU, v->vcpu_id, h, &ctxt);
> > > So you now hand on the return value here, but just to ...
> > > 
> > > > +}
> > > > +
> > > > +static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
> > > > +{
> > > > +    struct vcpu *v;
> > > > +
> > > > +    for_each_vcpu( d, v ) {
> > > > +        if ( viridian_save_vcpu_ctxt_one(v, h) != 0 )
> > > >              return 1;
> > > ... not pass it on here. Is that really what we want? The vMCE case
> > > does it differently, for example. Which means - there are certainly
> > > inconsistencies right now, but since you have to touch all of this
> > > anyway, couldn't you make things end in a more consistent shape after
> > > this rework?
> > In the past the save funcs either returned the value from
> > hvm_save_entry() directly (like hvm_save_tsc_adjust) or checked the
> > return value (like hvm_save_cpu_ctxt), as was done here. I made all
> > the save_one funcs return the value from hvm_save_entry() and then
> > check it in the save func, so that they all return in the same way.
> > 
> > I'm not saying the old way was bad, but it was not consistent.
> > 
> > And yes, I missed the if() check in the save func in the vMCE patch,
> > but I think it's better to have the check in place there and have all
> > the save / save_one funcs consistent.
> Consistent - yes. But not the wrong way. It is almost never correct
> to convert one error indicator into another (one exception is errno-
> style error codes vs boolean). So imo it is this patch, not the vMCE
> one which wants to change.
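
[For illustration: a minimal sketch of the shape being asked for above,
with the per-domain wrapper handing on whatever the per-vcpu helper
returns instead of collapsing every failure to 1. It reuses the function
names from the quoted patch but is illustrative only, not code from the
series.]

    static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int rc = 0;

        for_each_vcpu( d, v ) {
            rc = viridian_save_vcpu_ctxt_one(v, h);
            if ( rc != 0 )
                break;   /* hand the helper's error indicator on unchanged */
        }

        return rc;
    }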

I agree with you about the error code conversion, and I will have the
save funcs return the value from the save_one funcs. However, further
down the series the return value is converted into an enum in order to
cover the CONTINUE value. What is the best way to handle this? If I drop
that change we will be back at version 8.
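
[For illustration: a hypothetical sketch of what an enum-style return
covering a CONTINUE case might look like. The enum, its value names and
the handling below are placeholders, not the actual code from later
versions of the series.]

    /* Placeholder enum -- not the names used later in the series. */
    typedef enum {
        HVM_SAVE_DONE = 0,   /* entry written for this vcpu            */
        HVM_SAVE_CONTINUE,   /* nothing to save for this vcpu, move on */
        HVM_SAVE_ERROR,      /* write failed, abort the whole save     */
    } hvm_save_rc_t;

    /*
     * viridian_save_vcpu_ctxt_one() would then return hvm_save_rc_t instead
     * of int, e.g. HVM_SAVE_CONTINUE when !is_viridian_domain(v->domain).
     */
    static int viridian_save_vcpu_ctxt(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;

        for_each_vcpu( d, v ) {
            switch ( viridian_save_vcpu_ctxt_one(v, h) )
            {
            case HVM_SAVE_DONE:
            case HVM_SAVE_CONTINUE:
                continue;
            case HVM_SAVE_ERROR:
            default:
                return 1;   /* or a propagated error code, per the discussion above */
            }
        }

        return 0;
    }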

Thanks, 
Alex
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

