
Re: [Xen-devel] [PATCH v2 3/7] x86/hvm: convert gsi_assert_count into a variable size array



>>> On 28.03.17 at 11:34, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, Mar 28, 2017 at 03:10:27AM -0600, Jan Beulich wrote:
>> >>> On 28.03.17 at 10:40, <roger.pau@xxxxxxxxxx> wrote:
>> > On Tue, Mar 28, 2017 at 01:25:44AM -0600, Jan Beulich wrote:
>> >> >>> On 27.03.17 at 19:28, <roger.pau@xxxxxxxxxx> wrote:
>> > ARRAY_SIZE(((struct hvm_hw_vioapic *)0)->redirtbl)
>> > 
>> > Which is kind of cumbersome? I can define that into a macro I guess.
>> 
>> Indeed, ugly. How about
>> 
>> struct hvm_vioapic {
>>     struct domain *domain;
>>     union {
>>         DECLARE_VIOAPIC(, VIOAPIC_NUM_PINS);
>>         struct hvm_hw_vioapic domU;
>>     };
>> };
>> 
>> (with "domU" subject to improvement)? This might at once allow
>> making the save/restore code look better.
> 
> Right, this indeed looks better, and will make the save/restore code neater.
> I'm not sure I can come up with a better name, "guest", "static"?

Well, "static" is a keyword, so can't be used, and Dom0 is a "guest"
too.

>> >> >> > @@ -495,7 +497,7 @@ static void irq_dump(struct domain *d)
>> >> >> >             (uint32_t) hvm_irq->isa_irq.pad[0], 
>> >> >> >             hvm_irq->pci_link.route[0], hvm_irq->pci_link.route[1],
>> >> >> >             hvm_irq->pci_link.route[2], hvm_irq->pci_link.route[3]);
>> >> >> > -    for ( i = 0 ; i < VIOAPIC_NUM_PINS; i += 8 )
>> >> >> > +    for ( i = 0; i < hvm_irq->nr_gsis && i + 8 <= hvm_irq->nr_gsis; i += 8 )
>> >> >> >          printk("GSI [%x - %x] %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8
>> >> >> >                 " %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8" %2.2"PRIu8"\n",
>> >> >> >                 i, i+7,
>> >> >> > @@ -507,6 +509,13 @@ static void irq_dump(struct domain *d)
>> >> >> >                 hvm_irq->gsi_assert_count[i+5],
>> >> >> >                 hvm_irq->gsi_assert_count[i+6],
>> >> >> >                 hvm_irq->gsi_assert_count[i+7]);
>> >> >> > +    if ( i != hvm_irq->nr_gsis )
>> >> >> > +    {
>> >> >> > +        printk("GSI [%x - %x]", i, hvm_irq->nr_gsis - 1);
>> >> >> > +        for ( ; i < hvm_irq->nr_gsis; i++)
>> >> >> > +            printk(" %2.2"PRIu8, hvm_irq->gsi_assert_count[i]);
>> >> >> > +        printk("\n");
>> >> >> > +    }
>> >> >> 
>> >> >> I realize you've just copied this from existing code, but what
>> >> >> advantage does %2.2 have over just %2 here?
>> >> > 
>> >> > Shouldn't it be just %3 anyway? Would you be fine with me fixing this
>> >> > here, or would you rather prefer a separate patch?
>> >> 
>> >> Well, if you also want to change the existing ones, then a separate
>> >> patch would be preferred. As to %2 vs %3 - how would the latter
>> >> be any better if we want/need to change the type from u8 anyway?
>> > 
>> > ATM uint8_t can hold up to 255, so %3 seems better (although AFAICT this
>> > is limited to 4, because there are 4 INT# lines to each GSI, and pending
>> > GSIs are not accumulated). I'm not sure I follow why we want/need to
>> > change the type.
>> 
>> For our DomU model there are four lines. Physical machines often
>> declare more in ACPI, and I'm not sure there's a theoretical upper
>> limit.
> 
> In any case, I'm not sure this is relevant to PVH Dom0, where Xen should
> simply bind GSIs, but has no idea of all the interrupt routing behind them.
> This is relevant to DomUs because in that case Xen acts as a PCI interrupt
> router AFAICT.

Good point. But is all this refcounting code being bypassed for Dom0?
If so, you wouldn't need to extend the array here. If not, overflows
may still matter.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

