
Re: [Xen-devel] [PATCH v14 08/11] x86/hvm: Add handler for save_one funcs



On Tue, 2018-07-31 at 06:34 -0600, Jan Beulich wrote:
> >>> On 25.07.18 at 14:14, <aisaila@xxxxxxxxxxxxxxx> wrote:
> > --- a/xen/arch/x86/hvm/save.c
> > +++ b/xen/arch/x86/hvm/save.c
> > @@ -85,16 +85,18 @@ int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
> >  /* List of handlers for various HVM save and restore types */
> >  static struct {
> >      hvm_save_handler save;
> > +    hvm_save_one_handler save_one;
> >      hvm_load_handler load;
> >      const char *name;
> >      size_t size;
> >      int kind;
> > -} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, "<?>"}, };
> > +} hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = { {NULL, NULL, NULL, "<?>"}, };
> This initializer looks flawed, and I'd appreciate if you could fix it
> as you have to touch it anyway: It only sets .name of array entry 0
> to a non-NULL string. Either this setting is not needed in the first
> place (as looks to be the case), or you'll want to initialize all
> array entries the same. Use a C99 designated initializer (a GNU
> extension in C89) for that purpose. Perhaps simply dropping the
> initializer could be a prereq patch which could go in right away.
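
For reference, a minimal sketch of the form being suggested (using the
struct from the hunk above; the "[0 ... N]" range designator is itself a
GNU extension, and it makes every array entry share one initializer):

    static struct {
        hvm_save_handler save;
        hvm_save_one_handler save_one;
        hvm_load_handler load;
        const char *name;
        size_t size;
        int kind;
    } hvm_sr_handlers[HVM_SAVE_CODE_MAX + 1] = {
        /* Initialize all entries, not just entry 0. */
        [0 ... HVM_SAVE_CODE_MAX] = { .name = "<?>" },
    };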
> 
> > 
> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -1576,9 +1576,9 @@ static int lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
> >      return 0;
> >  }
> >  
> > -HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_load_hidden,
> > +HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL, lapic_load_hidden,
> >                            1, HVMSR_PER_VCPU);
> > -HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_load_regs,
> > +HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL, lapic_load_regs,
> These are per-vCPU as well - why do they get NULL inserted here,
> rather than there being another (two) prereq patch(es)?

Both LAPIC save functions already loop over every vCPU (for_each_vcpu),
so they effectively act as save_one functions; there is no need to do
anything here.
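
(For context, both save functions have roughly this shape -- paraphrased,
not a verbatim quote of vlapic.c:)

    static int lapic_save_hidden(struct domain *d, hvm_domain_context_t *h)
    {
        struct vcpu *v;
        int rc = 0;

        /* The handler itself walks every vCPU, doing a per-vCPU save. */
        for_each_vcpu ( d, v )
            if ( (rc = hvm_save_entry(LAPIC, v->vcpu_id, h,
                                      &vcpu_vlapic(v)->hw)) != 0 )
                break;

        return rc;
    }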

> 
> > 
> > --- a/xen/include/asm-x86/hvm/save.h
> > +++ b/xen/include/asm-x86/hvm/save.h
> > @@ -97,6 +97,8 @@ static inline uint16_t hvm_load_instance(struct hvm_domain_context *h)
> >   * restoring.  Both return non-zero on error. */
> >  typedef int (*hvm_save_handler) (struct domain *d, 
> >                                   hvm_domain_context_t *h);
> > +typedef int (*hvm_save_one_handler)(struct  vcpu *v,
> > +                                    hvm_domain_context_t *h);
> I don't think "one" is a valid part of the name here: PIC has
> multiple instances too (and eventually IO-APIC will), yet they're
> not to be managed this way. I think you want to use "vcpu"
> instead.
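
(Presumably just a rename of the new typedef, i.e. something like:)

    typedef int (*hvm_save_vcpu_handler)(struct vcpu *v,
                                         hvm_domain_context_t *h);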
> 
> > 
> > @@ -114,12 +117,13 @@ void hvm_register_savevm(uint16_t typecode,
> >  
> >  /* Syntactic sugar around that function: specify the max number of
> >   * saves, and this calculates the size of buffer needed */
> > -#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)             \
> > +#define HVM_REGISTER_SAVE_RESTORE(_x, _save, _save_one, _load, _num, _k)  \
> >  static int __init __hvm_register_##_x##_save_and_restore(void)            \
> >  {                                                                          \
> >      hvm_register_savevm(HVM_SAVE_CODE(_x),                                 \
> >                          #_x,                                               \
> >                          &_save,                                            \
> > +                        _save_one,                                         \
> While I generally appreciate the omission of the &, I'd
> prefer if you added it for consistency with the neighboring
> lines.

This was done so that we can pass NULL in the places that do not have
save_one functions.
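
A tiny standalone illustration of why the '&' has to go for an argument
that may be NULL (foo is just a placeholder name):

    typedef int (*fn_t)(void);
    static int foo(void) { return 0; }

    fn_t a = foo;   /* OK: a function name decays to a pointer */
    fn_t b = &foo;  /* OK as well, hence Jan's consistency point */
    fn_t c = NULL;  /* OK: no handler registered */
    /* fn_t d = &NULL;  -- would not compile, so '&_save_one' cannot work */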

Thanks, 
Alex

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel