
Re: [Xen-devel] [RFC 06/19] xen/arm: Implement hypercall PHYSDEVOP_map_pirq



On 07/03/2014 12:27 PM, Ian Campbell wrote:
> On Thu, 2014-06-19 at 13:29 +0100, Stefano Stabellini wrote:
>> On Thu, 19 Jun 2014, Julien Grall wrote:
>>> On 06/18/2014 08:24 PM, Stefano Stabellini wrote:
>>>>>  /*
>>>>> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
>>>>> index e451324..c18b2ca 100644
>>>>> --- a/xen/arch/arm/vgic.c
>>>>> +++ b/xen/arch/arm/vgic.c
>>>>> @@ -82,10 +82,7 @@ int domain_vgic_init(struct domain *d)
>>>>>      /* Currently nr_lines in vgic and gic doesn't have the same meanings
>>>>>       * Here nr_lines = number of SPIs
>>>>>       */
>>>>> -    if ( is_hardware_domain(d) )
>>>>> -        d->arch.vgic.nr_lines = gic_number_lines() - 32;
>>>>> -    else
>>>>> -        d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
>>>>> +    d->arch.vgic.nr_lines = gic_number_lines() - 32;
>>>>>  
>>>>>      d->arch.vgic.shared_irqs =
>>>>>          xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
>>>>
>>>> I see what you mean about virq != pirq.
>>>>
>>>> It seems to me that setting d->arch.vgic.nr_lines = gic_number_lines() -
>>>> 32 for the hardware domain is OK, but it is really a waste for the
>>>> others. We could find a way to pass down the info about how many SPIs we
>>>> need from libxl. Or we could delay the vgic allocations until the first
>>>> SPI is assigned to the domU.
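
Just to make the libxl idea concrete (a rough sketch only, nothing like
this is implemented in this series; the nr_spis parameter below is
hypothetical):

    /* Hypothetical variant where the toolstack passes down how many
     * SPIs a domU needs, instead of hardcoding the choice per domain
     * type inside domain_vgic_init(). */
    int domain_vgic_init(struct domain *d, unsigned int nr_spis)
    {
        if ( is_hardware_domain(d) )
            d->arch.vgic.nr_lines = gic_number_lines() - 32;
        else
            d->arch.vgic.nr_lines = nr_spis; /* provided by libxl */

        /* ... allocations as in the hunk quoted above ... */
        return 0;
    }
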
>>>
>>> I checked on both Midway and the Versatile Express, and there are
>>> about 200 lines.
>>>
>>> That makes an overhead of less than 8K per domain, which is not too bad.
>>>
>>> If the host really supports 1024 IRQs, that would make an overhead of ~32K.
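
For reference, the figures above come from the allocations in
domain_vgic_init() scaling linearly with the number of lines (a simplified
excerpt; the second array is allocated just below the hunk quoted above):

    /* Both arrays grow with the number of SPIs exposed to the domain:
     *   shared_irqs : one struct vgic_irq_rank per 32 IRQs
     *   pending_irqs: one struct pending_irq per IRQ
     * which is where the < 8K (~200 SPIs) and ~32K (1024 IRQs) estimates
     * above come from. */
    d->arch.vgic.shared_irqs =
        xzalloc_array(struct vgic_irq_rank, DOMAIN_NR_RANKS(d));
    d->arch.vgic.pending_irqs =
        xzalloc_array(struct pending_irq, d->arch.vgic.nr_lines);
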
>>>
>>>> Similarly to the MMIO hole sizing, I don't think it would be a
>>>> requirement for this patch series, but it is something to keep in mind.
>>>
>>> Handling virq != pirq will be more complex, as we need to take the
>>> hotplug solution into account.
> 
> What's the issue here? Something to do with irqdesc->irq-pending lookup?
> 
> Seems like irqdesc needs to store the domain and virq number when the
> irq is passed through. I assume it must store the domain already.

The issues are mostly:
        - we need to defer the vGIC IRQ allocation
        - we need a new hypercall to set up the number of IRQs (see the
          sketch below)
        - we need to decide how to handle hotplug
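
To make the second point concrete, something along these lines could work.
This is entirely hypothetical; neither the structure nor its name exists
in the tree:

    /* Hypothetical toolstack interface to size a domU's vGIC. It would
     * have to be issued after domain creation but before the vGIC
     * allocation, which is why the allocation needs to be deferred
     * (first point above). */
    struct xen_domctl_vgic_config {
        uint32_t nr_spis;   /* number of SPIs the guest vGIC exposes */
    };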

>>> The vGIC has a register which provides the number of lines; I suspect
>>> this number can't grow while the guest is running.
>>
>> Of course not. But keep in mind that for non-PCI passthrough we would be
>> fully aware of all the assigned interrupts before starting the VM.
> 
> Are we ruling out hotplug of such devices? (I don't have a problem with
> that BTW)
> 
>> PCI passthrough and MSI-X are the issue because there can be many MSI
>> per device and the device can be hotplugged into the guest.
> 
> MSI(-X), AKA LPIs, are in a different, more dynamic number space though
> (from 8192 onwards). I think for that specific case we can do things
> dynamically.
> 
> The bigger issue would be the legacy INTx interrupts (which I expect
> look like SPIs); those would no doubt need exposing somehow.

INTx interrupts are shared between different PCI devices, and handling
them would mean lots of rework in the interrupt code (especially now with
the no-maintenance-interrupt series). I hope we won't have to handle them.
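
For reference, the GIC splits the interrupt ID space roughly as follows
(the macros and helper below are only illustrative, not Xen code):

    /* GIC interrupt ID ranges:
     *   0-15    SGIs (inter-processor interrupts)
     *   16-31   PPIs (per-CPU peripherals)
     *   32-1019 SPIs (shared peripherals; INTx would land here)
     *   8192+   LPIs (GICv3 ITS, used for MSI/MSI-X)
     */
    #define NR_LOCAL_IRQS   32
    #define LPI_OFFSET      8192

    static inline int is_lpi(unsigned int irq)
    {
        return irq >= LPI_OFFSET;
    }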

> Do we think it is the case that we are eventually going to need a guest
> cfg option pci = 0|1? I think the answer is yes. Assigning a PCI device
> would imply pci=1, or you can set pci=1 to enable hotplug of PCI devices
> later (i.e. MMIO space is reserved, INTx interrupts are assigned, etc.).

I'm not sure I understand why we would need a "pci" cfg option... For
now, this series doesn't aim to support PCI, so I think we can defer this
problem until later.

-- 
Julien Grall
