
Re: [Xen-devel] [RFC PATCH v2 13/22] xen/arm: its: Add virtual ITS command support



On Tue, May 5, 2015 at 4:38 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> On 05/05/15 11:28, Stefano Stabellini wrote:
>> On Mon, 4 May 2015, Julien Grall wrote:
>>> Hi Vijay,
>>>
>>> On 04/05/2015 16:19, Vijay Kilari wrote:
>>>>>>>> How did you implement the interrupt mode? Could it be improve?
>>>>>>>
>>>>>>>
>>>>>>>      1) In the physical ITS driver, an its_device is created with
>>>>>>> devID 00:00.1, with 256 MSI-X vectors reserved. It is named
>>>>>>> completion_dev and is global.
>>>>>>
>>>>>>
>>>>>> That's a lot of MSI-x reserved... Can't you use only one per domain?
>>>>>
>>>>>
>>>>> Hmmm... I meant for all the domain, not "per domain".
>>>>
>>>>     The complexity with one IRQ for all domains is that, when a
>>>> completion interrupt arrives, it is difficult to determine which
>>>> vITS/domain's ITS command it belongs to.
>>>
>>> While reserving a single devID sounds feasible on all future platforms,
>>> allocating 256 MSI-X vectors sounds more difficult: you assume that any
>>> board will have at least 256 MSI-X free.
>>>
>>> Also, this is not scalable. How do you plan to handle more than 256
>>> domains? By increasing the number of reserved MSI-X vectors?
>>>
>>> I'm not asking you to implement the latter now... but if increasing the
>>> number of supported domains means rewriting all the completion code, and
>>> maybe the vITS, then you should ask yourself whether the current approach
>>> is really worth taking.
>>
>> As far as I understand, there are at most 2048 MSI-X vectors per devid and
>> up to 8 functions per device (we can continue with 00:00.2, etc.). That
>> gives us a maximum of 16384 domains with a PCI device assigned to them. We
>> wouldn't use any of these MSIs for domains without devices assigned to
>> them. Overall I think that is OK as a limit, as long as we can handle the
>> allocation efficiently (we cannot really allocate 16384 data structures at
>> boot time).
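
For what it's worth, the arithmetic above can be sanity-checked with a trivial helper (the function name and the idea of one completion vector per domain are illustrative only, not from the patch series):

```c
#include <stdint.h>

/* Hypothetical helper (illustrative, not Xen code): upper bound on the
 * number of domains that can each be given one completion MSI-X vector,
 * using the figures from the discussion above (2048 vectors per devid,
 * 8 functions per device). */
static uint32_t max_completion_domains(uint32_t msix_per_devid,
                                       uint32_t functions_per_device)
{
    return msix_per_devid * functions_per_device;
}
```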
>
> You assume that there are enough unused LPIs. This may not be true on
> every platform.

Below is the note from the spec. The minimum number of ID bits needed to
support LPIs is 14, which guarantees at least 8192 LPIs on any platform
that implements them.

Note: an ITS or Distributor implementation might choose to support any size
of LPI identifier field up to and including 32 bits. For example, an
implementation might choose to support 14 bits. Because IDs 0 to 8191 are
used for other classes of interrupt, a 14 bit identifier provides support
for 8192 LPIs. The number supported by software is configured by writing a
value to the "IDbits" field in GICR_PROPBASER (see section 5.4.23), subject
to the maximum supported by the implementation (see section 5.11).
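
The relation in that note can be expressed directly (the helper name is made up; the constant 8192 is the reserved INTID range 0-8191 from the spec text above):

```c
#include <stdint.h>

/* Hypothetical helper (not from the Xen source): number of LPIs
 * available for a given GICR_PROPBASER.IDbits encoding of id_bits.
 * INTIDs 0 to 8191 are used for other interrupt classes, so LPIs
 * occupy the remainder of the 2^id_bits ID space.  Fewer than 14 ID
 * bits leaves no room for LPIs at all. */
static uint32_t nr_lpis(unsigned int id_bits)
{
    if ( id_bits < 14 )
        return 0;
    return (1U << id_bits) - 8192;
}
```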


>
>> Actually even 256 domains with devices assigned to them would be enough
>> for now, if we don't consume these MSIs with regular domains without PCI
>> passthrough.
>
> It would need some plumbing in the toolstack to use vITS only when PCI
> passthrough is used for the guest.
>
>>
>>>>>>>    I am adding one INT per command. This could be improved by adding
>>>>>>> a single INT command for all the pending commands. The existing Linux
>>>>>>> driver sends 2 commands at a time.
>>>>>>
>>>>>>
>>>>>> You should not assume that other OSes will send 2 commands at a
>>>>>> time... It could be more or fewer.
>>>>>>
>>>>>> Also, having an INT per command is rather slow. One INT command per
>>>>>> batch would improve the boot time.
>>>>
>>>>     We cannot limit the number of commands sent at a time. We have to
>>>> send all the pending commands in the vITS queue when we trap on CWRITER.
>>>> Otherwise, we would have to check for pending commands on the completion
>>>> interrupt, then translate and send them in interrupt context, which adds
>>>> complexity and more delay.
>>>
>>> If we don't limit the number of commands sent, we would allow a domain to
>>> flood the command queue. Other domains would then be unable to send
>>> commands and would likely time out and crash. This is one possible
>>> security issue among many others.
>>>
>>> Nobody likes security issues; they impact both end users and the project.
>>> Please keep this security concern in mind before performance. Performance
>>> is usually easier to address later.
>>>
>>> As the vITS is only used for interrupt management (mapping, unmapping),
>>> it is not used on a hot path such as receiving an interrupt. So we don't
>>> care if it is "slow" from the guest's point of view, as long as we
>>> emulate the behavior correctly without impacting the other domains.
>>
>> I think that rate limiting the guest vITS commands could be done in a
>> second stage. I wouldn't worry about it for now, not because it is not
>> important, but because we need to get the basic mechanics right first.
>> Rome wasn't built in a day.
>
> Even though Rome wasn't built in a day, its design was well thought
> through beforehand...
>
> The command queue is the big part of the vITS and is tightly coupled to
> the physical ITS driver. If we don't think about rate limiting in the
> design, we may need to heavily rework the vITS.
>
> Regards,
>
> --
> Julien Grall
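
The kind of rate limiting Julien is asking for could be sketched roughly as a per-trap budget on command translation (all names and the budget value below are made up for illustration; this is not from the patch series):

```c
#include <stdint.h>

/* Hypothetical sketch: cap how many guest commands are translated per
 * CWRITER trap, so one domain cannot flood the shared physical command
 * queue.  A real implementation would reschedule the leftover commands
 * (e.g. from a softirq) rather than drop them. */

#define VITS_CMD_BUDGET 16   /* max commands handled per trap (tunable) */

struct vits {
    uint32_t cwriter;   /* guest-written write pointer (command index) */
    uint32_t creadr;    /* emulated read pointer (command index)       */
    uint32_t nr_cmds;   /* ring size in commands                       */
};

/* Process up to VITS_CMD_BUDGET pending commands; return how many were
 * consumed this time around. */
static uint32_t vits_process_cwriter(struct vits *v)
{
    uint32_t done = 0;

    while ( v->creadr != v->cwriter && done < VITS_CMD_BUDGET )
    {
        /* ... translate and enqueue one command on the pITS here ... */
        v->creadr = (v->creadr + 1) % v->nr_cmds;
        done++;
    }
    return done;
}
```

With a budget like this, a guest that writes 40 commands before touching CWRITER only gets 16 of them translated immediately; the rest wait their turn, so other domains' commands interleave fairly.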

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

