
Re: [Xen-devel] [PATCH v2 1/7] x86/vioapic: introduce a internal vIO APIC structure



On Mon, Mar 27, 2017 at 09:38:49AM -0600, Jan Beulich wrote:
> >>> On 27.03.17 at 12:18, <roger.pau@xxxxxxxxxx> wrote:
> > The reason to expand the hvm_vioapic structure instead of the hvm_hw_vioapic
> > one is that the variable number of pins functionality is only going to be 
> > used
> > by the hardware domain, so no modifications are needed to the save format.
> 
> As you say here you expand an existing structure, so the title
> isn't really correct.
> 
> > @@ -426,38 +426,43 @@ void vioapic_update_EOI(struct domain *d, u8 vector)
> >  
> >  static int ioapic_save(struct domain *d, hvm_domain_context_t *h)
> >  {
> > -    struct hvm_hw_vioapic *s = domain_vioapic(d);
> > +    struct hvm_vioapic *s = domain_vioapic(d);
> >  
> >      if ( !has_vioapic(d) )
> >          return 0;
> >  
> > -    return hvm_save_entry(IOAPIC, 0, h, s);
> > +    BUILD_BUG_ON(sizeof(struct hvm_hw_vioapic) !=
> > +                 sizeof(struct hvm_vioapic) -
> > +                 offsetof(struct hvm_vioapic, base_address));
> 
> This is too weak a check for my taste, and ...

I've removed this BUILD_BUG_ON.

> > +    return hvm_save_entry(IOAPIC, 0, h, &s->base_address);
> 
> ... this too fragile a use. See also below.
> 
> > --- a/xen/include/asm-x86/hvm/vioapic.h
> > +++ b/xen/include/asm-x86/hvm/vioapic.h
> > @@ -48,13 +48,16 @@
> >  #define VIOAPIC_REG_RTE0    0x10
> >  
> >  struct hvm_vioapic {
> > -    struct hvm_hw_vioapic hvm_hw_vioapic;
> >      struct domain *domain;
> > +    /* Layout below must match hvm_hw_vioapic. */
> > +    uint64_t base_address;
> > +    uint32_t ioregsel;
> > +    uint32_t id;
> > +    union vioapic_redir_entry redirtbl[VIOAPIC_NUM_PINS];
> >  };
> 
> It is mere luck that last old and first new fields aren't both 32-bit
> ones, or else this approach would not have worked at all. I think
> we need some better approach here, but the absolute minimum
> would be to also add a comment on the other side.
> 
> One possible approach would be to move the entire set of field
> declarations of struct hvm_hw_vioapic into a macro, using it both
> there and here. Which would then leave making sure there are no
> alignment effects because of fields added ahead of that macro's
> use (perhaps via BUILD_BUG_ON(), or maybe even better by
> making this an unnamed structure member inside struct
> hvm_ioapic).
> 
> The macro should then be #undef-ed in the header, except in the
> __XEN__ case. Perhaps the macro would also want to be given a
> parameter right away for the invoking site to specify the number
> of pins.

Yes, I think the unnamed structure is way better; here's what I've done:

save.h:

union vioapic_redir_entry
{
    uint64_t bits;
    struct {
        uint8_t vector;
        uint8_t delivery_mode:3;
        uint8_t dest_mode:1;
        uint8_t delivery_status:1;
        uint8_t polarity:1;
        uint8_t remote_irr:1;
        uint8_t trig_mode:1;
        uint8_t mask:1;
        uint8_t reserve:7;
        uint8_t reserved[4];
        uint8_t dest_id;
    } fields;
};

#define VIOAPIC_NUM_PINS  48 /* 16 ISA IRQs, 32 non-legacy PCI IRQS. */

#define DECLARE_VIOAPIC(name, cnt)                      \
    struct name {                                       \
        uint64_t base_address;                          \
        uint32_t ioregsel;                              \
        uint32_t id;                                    \
        union vioapic_redir_entry redirtbl[cnt];        \
    }

DECLARE_VIOAPIC(hvm_hw_vioapic, VIOAPIC_NUM_PINS);

#ifndef __XEN__
#undef DECLARE_VIOAPIC
#endif
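
(Just to spell it out, the hvm_hw_vioapic invocation above expands to exactly
the layout we have today, so the save format is untouched:)

/* Expansion of DECLARE_VIOAPIC(hvm_hw_vioapic, VIOAPIC_NUM_PINS): the same
 * layout as the current standalone definition, hence no change to the save
 * record. */
struct hvm_hw_vioapic {
    uint64_t base_address;
    uint32_t ioregsel;
    uint32_t id;
    union vioapic_redir_entry redirtbl[VIOAPIC_NUM_PINS];
};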

vioapic.h:

struct hvm_vioapic {
    struct domain *domain;
    DECLARE_VIOAPIC(, VIOAPIC_NUM_PINS);
};

This seems to work fine, and the BUILD_BUG_ON is no longer needed.
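
As a quick sanity check, here's a usage sketch (the helper name is made up;
the constants are the existing ones from vioapic.h) showing that the fields
declared by the unnamed member are accessed exactly as before:

/* Illustrative only: the fields from DECLARE_VIOAPIC(, VIOAPIC_NUM_PINS) are
 * promoted into struct hvm_vioapic's scope by the unnamed member, so existing
 * accesses keep compiling unchanged. */
static void vioapic_example(struct hvm_vioapic *vioapic)
{
    vioapic->base_address = VIOAPIC_DEFAULT_BASE_ADDRESS;
    vioapic->ioregsel = VIOAPIC_REG_RTE0;
    vioapic->redirtbl[0].fields.mask = 1;
}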

Thanks for the tip.

Roger.

