
Re: [Xen-devel] [PATCH v2 3/7] x86/hvm: convert gsi_assert_count into a variable size array



On Tue, Mar 28, 2017 at 03:10:27AM -0600, Jan Beulich wrote:
> >>> On 28.03.17 at 10:40, <roger.pau@xxxxxxxxxx> wrote:
> > On Tue, Mar 28, 2017 at 01:25:44AM -0600, Jan Beulich wrote:
> >> >>> On 27.03.17 at 19:28, <roger.pau@xxxxxxxxxx> wrote:
> > ARRAY_SIZE(((struct hvm_hw_vioapic *)0)->redirtbl)
> > 
> > Which is kind of cumbersome? I can define that into a macro I guess.
> 
> Indeed, ugly. How about
> 
> struct hvm_vioapic {
>     struct domain *domain;
>     union {
>         DECLARE_VIOAPIC(, VIOAPIC_NUM_PINS);
>         struct hvm_hw_vioapic domU;
>     };
> };
> 
> (with "domU" subject to improvement)? This might at once allow
> making the save/restore code look better.

Right, this indeed looks better, and it will make the save/restore code
neater. I'm not sure I can come up with a better name; "guest"? "static"?
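To make sure the union trick does what we want, here is a minimal, compilable sketch of the idea. The DECLARE_VIOAPIC body and its field names are placeholders of mine (the real struct has more fields); only the shape of the macro and the anonymous-struct overlay follow the suggestion above:

```c
#include <stddef.h>
#include <stdint.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
#define VIOAPIC_NUM_PINS 48

/* Simplified stand-in for Xen's union vioapic_redir_entry. */
union vioapic_redir_entry { uint64_t bits; };

/*
 * Hypothetical macro in the spirit of the suggestion: it emits the
 * common vIO-APIC fields with a redirtbl[] of the requested size,
 * optionally tagging the struct. Field list is illustrative only.
 */
#define DECLARE_VIOAPIC(name, cnt) \
    struct name { \
        uint64_t base_address; \
        uint32_t ioregsel; \
        union vioapic_redir_entry redirtbl[cnt]; \
    }

/* The migration-visible layout keeps its fixed size... */
DECLARE_VIOAPIC(hvm_hw_vioapic, VIOAPIC_NUM_PINS);

/*
 * ...while the runtime struct overlays an anonymous copy of the same
 * layout, so callers can write ARRAY_SIZE(vioapic->redirtbl) instead of
 * ARRAY_SIZE(((struct hvm_hw_vioapic *)0)->redirtbl).
 */
struct hvm_vioapic {
    struct domain *domain;
    union {
        DECLARE_VIOAPIC(, VIOAPIC_NUM_PINS);
        struct hvm_hw_vioapic domU;
    };
};
```

Since both union members expand from the same macro, the anonymous member and domU are guaranteed to stay in sync layout-wise.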

> >> >> > @@ -495,7 +497,7 @@ static void irq_dump(struct domain *d)
> >> >> >             (uint32_t) hvm_irq->isa_irq.pad[0], 
> >> >> >             hvm_irq->pci_link.route[0], hvm_irq->pci_link.route[1],
> >> >> >             hvm_irq->pci_link.route[2], hvm_irq->pci_link.route[3]);
> >> >> > -    for ( i = 0 ; i < VIOAPIC_NUM_PINS; i += 8 )
> >> >> > +    for ( i = 0; i < hvm_irq->nr_gsis && i + 8 <= hvm_irq->nr_gsis; i += 8 )
> >> >> >          printk("GSI [%x - %x] %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8
> >> >> >                 " %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8"\n",
> >> >> >                 i, i+7,
> >> >> > @@ -507,6 +509,13 @@ static void irq_dump(struct domain *d)
> >> >> >                 hvm_irq->gsi_assert_count[i+5],
> >> >> >                 hvm_irq->gsi_assert_count[i+6],
> >> >> >                 hvm_irq->gsi_assert_count[i+7]);
> >> >> > +    if ( i != hvm_irq->nr_gsis )
> >> >> > +    {
> >> >> > +        printk("GSI [%x - %x]", i, hvm_irq->nr_gsis - 1);
> >> >> > +        for ( ; i < hvm_irq->nr_gsis; i++)
> >> >> > +            printk(" %2.2"PRIu8, hvm_irq->gsi_assert_count[i]);
> >> >> > +        printk("\n");
> >> >> > +    }
> >> >> 
> >> >> I realize you've just copied this from existing code, but what
> >> >> advantage does %2.2 have over just %2 here?
> >> > 
> >> > Shouldn't it be just %3 anyway? Would you be fine with me fixing this
> >> > here, or would you rather prefer a separate patch?
> >> 
> >> Well, if you also want to change the existing ones, then a separate
> >> patch would be preferred. As to %2 vs %3 - how would the latter
> >> be any better if we want/need to change the type from u8 anyway?
> > 
> > ATM uint8_t can be 255, so %3 seems better (although AFAICT this is
> > limited to 4, because there are 4 INT# lines to each GSI, and pending GSIs
> > are not accumulated). I'm not sure I follow why we want/need to change the
> > type.
> 
> For our DomU model there are four lines. Physical machines often
> declare more in ACPI, and I'm not sure there's a theoretical upper
> limit.

In any case, I'm not sure this is relevant to PVH Dom0, where Xen simply binds
GSIs but has no knowledge of the interrupt routing behind them. It is relevant
to DomUs, because in that case Xen acts as the PCI interrupt router AFAICT.
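As a quick self-contained check of what the format specifiers discussed above actually do (the fmt_u8 helper is mine, not Xen code): for integer conversions the precision is the minimum number of digits and zero-pads, while the field width pads with spaces, so %2.2 and %2 differ for single-digit values, and %3 is what fully fits a uint8_t:

```c
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Renders one uint8_t with the given printf conversion into buf. */
static const char *fmt_u8(char *buf, size_t len, const char *spec, uint8_t v)
{
    snprintf(buf, len, spec, v);
    return buf;
}
```

E.g. fmt_u8(buf, sizeof(buf), "%2.2"PRIu8, 5) yields "05", while "%2"PRIu8 yields " 5"; both render 255 as "255", since width/precision are minimums, not caps.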

> >> >> > @@ -89,13 +77,27 @@ struct hvm_irq {
> >> >> >      u8 round_robin_prev_vcpu;
> >> >> >  
> >> >> >      struct hvm_irq_dpci *dpci;
> >> >> > +
> >> >> > +    /*
> >> >> > +     * Number of wires asserting each GSI.
> >> >> > +     *
> >> >> > +     * GSIs 0-15 are the ISA IRQs. ISA devices map directly into this
> >> >> > +     * space, except ISA IRQ 0, which is connected to GSI 2.
> >> >> > +     * PCI links map into this space via the PCI-ISA bridge.
> >> >> > +     *
> >> >> > +     * GSIs 16+ are used only by PCI devices. The mapping from PCI
> >> >> > +     * device to GSI is as follows: ((device*4 + device/8 + INTx#) & 31) + 16
> >> >> > +     */
> >> >> > +    unsigned int nr_gsis;
> >> >> > +    u8 gsi_assert_count[];
> >> >> >  };
> >> >> 
> >> >> I'm afraid at the same time (or in a later patch) the type of this array
> >> >> (and maybe also the type of pci_link_assert_count[]) will need to be
> >> >> widened.
> >> > 
> >> > This one seems to be fine for PVH Dom0 (you will have to look at the
> >> > next series), but I specifically avoid touching pci_link_assert_count.
> >> > Again, better to comment this on the next patch series that actually
> >> > implements the interrupt binding. This is needed because code in
> >> > vioapic makes use of gsi_assert_count.
> >> 
> >> Well, we can certainly discuss this there (whenever I get to look at
> >> it, as that's not 4.9 stuff now anymore), but for now I don't see the
> >> connection between my comment and your reply.
> > 
> > Hm, I'm not sure I understand why you would like to change the type of
> > this array. AFAICT the type of the array is not related to the number of
> > vIO APIC pins, or the number of vIO APICs.
> 
> See above - I think it is related to the number of declared INTx
> lines (of which a DomU at present would only have 4). Or maybe
> I'm mistaken where the INTx value comes from here.

AFAICT the INTx is in any case always limited to INTA-INTD (4), but I don't
think this matters for Dom0, because there Xen doesn't act as a PCI interrupt
router, and thus doesn't need to know the INTx. In fact each gsi_assert_count
entry for a PVH Dom0 should be either 0 or 1, because Xen identity-maps
machine GSIs into guest GSIs, without any kind of routing.
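For reference, a trimmed-down model of how the new flexible array member from the patch might be allocated; the struct keeps only the two fields from the hunk above, and the allocator name is mine:

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Trimmed-down model of the patched struct hvm_irq: only nr_gsis and
 * the flexible array member are kept; the other fields are omitted.
 */
struct hvm_irq_model {
    unsigned int nr_gsis;
    uint8_t gsi_assert_count[];   /* one wire count per GSI */
};

/* Allocates the structure with room for nr_gsis counters, zeroed. */
static struct hvm_irq_model *hvm_irq_alloc(unsigned int nr_gsis)
{
    struct hvm_irq_model *hvm_irq =
        calloc(1, sizeof(*hvm_irq) +
                  nr_gsis * sizeof(hvm_irq->gsi_assert_count[0]));

    if ( hvm_irq )
        hvm_irq->nr_gsis = nr_gsis;
    return hvm_irq;
}
```

The one-allocation layout is the point of the patch: the counters live directly behind nr_gsis, sized per-domain rather than fixed at VIOAPIC_NUM_PINS.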

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
