
Re: [Xen-devel] [PATCH v2 09/27] ARM: GICv3: introduce separate pending_irq structs for LPIs



Hi,

On 24/03/17 17:26, Stefano Stabellini wrote:
> On Fri, 24 Mar 2017, Andre Przywara wrote:
>>>> +struct pending_irq *lpi_to_pending(struct vcpu *v, unsigned int lpi,
>>>> +                                   bool allocate)
>>>> +{
>>>> +    struct lpi_pending_irq *lpi_irq, *empty = NULL;
>>>> +
>>>> +    spin_lock(&v->arch.vgic.pending_lpi_list_lock);
>>>> +    list_for_each_entry(lpi_irq, &v->arch.vgic.pending_lpi_list, entry)
>>>> +    {
>>>> +        if ( lpi_irq->pirq.irq == lpi )
>>>> +        {
>>>> +            spin_unlock(&v->arch.vgic.pending_lpi_list_lock);
>>>> +            return &lpi_irq->pirq;
>>>> +        }
>>>> +
>>>> +        if ( lpi_irq->pirq.irq == 0 && !empty )
>>>> +            empty = lpi_irq;
>>>> +    }
>>>> +
>>>> +    if ( !allocate )
>>>> +    {
>>>> +        spin_unlock(&v->arch.vgic.pending_lpi_list_lock);
>>>> +        return NULL;
>>>> +    }
>>>> +
>>>> +    if ( !empty )
>>>> +    {
>>>> +        empty = xzalloc(struct lpi_pending_irq);
>>>
>>> xzalloc will return NULL if memory is exhausted. There is a general lack
>>> of error checking within this series. Any missing check could be a
>>> potential target for a guest, leading to a security issue. Stefano and I
>>> already spotted some, but that does not mean we found all of them. So
>>> before sending the next version, please go through the series and verify
>>> *every* return value.
>>>
>>> Also, I can't find the code that releases LPIs, either in this patch or
>>> in the rest of the series. A general rule is to have allocation and free
>>> within the same patch; it makes a missing free much easier to spot.
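As an aside, the NULL check the review asks for could look roughly like the sketch below. This is a self-contained model of the quoted hunk, not the actual patch: it uses a plain singly linked list and calloc() in place of Xen's list_head and xzalloc(), and elides the pending_lpi_list_lock locking.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Simplified stand-ins for the Xen structures (illustrative only). */
struct pending_irq { unsigned int irq; };
struct lpi_pending_irq {
    struct lpi_pending_irq *next;   /* stand-in for the list_head entry */
    struct pending_irq pirq;
};
struct vgic { struct lpi_pending_irq *pending_lpi_list; };

/*
 * Find the pending_irq for 'lpi', optionally allocating a new pool entry.
 * Returns NULL both on a lookup miss (!allocate) and on allocation
 * failure, so callers must check the result in either case.
 */
struct pending_irq *lpi_to_pending(struct vgic *v, unsigned int lpi,
                                   bool allocate)
{
    struct lpi_pending_irq *it, *empty = NULL;

    for ( it = v->pending_lpi_list; it; it = it->next )
    {
        if ( it->pirq.irq == lpi )
            return &it->pirq;

        /* Remember the first free entry (irq == 0) for possible reuse. */
        if ( it->pirq.irq == 0 && !empty )
            empty = it;
    }

    if ( !allocate )
        return NULL;

    if ( !empty )
    {
        empty = calloc(1, sizeof(*empty));
        if ( !empty )               /* the check the review asks for */
            return NULL;
        empty->next = v->pending_lpi_list;
        v->pending_lpi_list = empty;
    }

    empty->pirq.irq = lpi;
    return &empty->pirq;
}
```

With the check in place, callers see NULL on exhaustion instead of a crash on a zeroed-out dereference, and can propagate the error to the guest-visible path.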
>>
>> There is no such code, on purpose. We only ever grow the number of
>> entries, but never shrink it (to what? where do we stop? what if we need
>> more again?). As said above, in the worst case this ends up at the size
>> a static allocation would have started with from the beginning.
> 
> Deallocate struct pending_irq when the domain is destroyed

Sure, I think this is what is missing at the moment.

> or when an LPI is disabled?

A given struct pending_irq is not permanently assigned to a particular
LPI; it is rather a member of a pool to allocate from when in need.
I consider them akin to shadow list registers, which can in addition be
used as spill-overs.
Very much like an LR, a pending_irq is only assigned to a certain LPI
for the lifetime of a *pending* LPI on a particular VCPU. As mentioned
earlier, under normal conditions this period is very short, since LPIs
are always edge-triggered and have no active state.

>> to what?, where to stop?
> 
> We don't stop, why stop? Let's go back to 0 if we can.
> 
>> what if we need more again?
> 
> We allocate them again?

I am afraid that this would lead to situations where we needlessly
allocate and deallocate pending_irqs. Under normal load I'd expect
something like zero to three LPIs pending at any given point in time
(mostly zero, to be honest).
Deallocating eagerly would thus lead to a situation where *every* LPI
that becomes pending triggers a memory allocation - in the hot path.
That is the reason for the pool idea. If we are going to shrink the
pool, I'd stop at something like five entries, so as not to penalize
the common case.
Does that sound useful?
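The shrink-to-a-floor idea could look roughly like the sketch below. This is an illustrative stand-alone model, not Xen code: LPI_POOL_FLOOR and lpi_pool_shrink() are hypothetical names, the pool is a plain singly linked list, and locking is elided.

```c
#include <stdlib.h>

#define LPI_POOL_FLOOR 5   /* hypothetical value, from the mail above */

struct lpi_pending_irq {
    struct lpi_pending_irq *next;
    unsigned int irq;              /* 0 == entry is free */
};

/*
 * Walk the pool and release free entries beyond a fixed floor, so the
 * common "a few pending LPIs" case never has to allocate in the hot
 * path. Returns the number of entries freed.
 */
unsigned int lpi_pool_shrink(struct lpi_pending_irq **head)
{
    unsigned int kept_free = 0, freed = 0;
    struct lpi_pending_irq **pp = head;

    while ( *pp )
    {
        struct lpi_pending_irq *cur = *pp;

        if ( cur->irq == 0 )
        {
            if ( kept_free < LPI_POOL_FLOOR )
            {
                kept_free++;           /* keep this one as spare capacity */
                pp = &cur->next;
            }
            else
            {
                *pp = cur->next;       /* unlink and release the surplus */
                free(cur);
                freed++;
            }
        }
        else
            pp = &cur->next;           /* in use, always keep */
    }

    return freed;
}
```

Run periodically (or on VCPU idle), this keeps the pool bounded while never dropping below the floor, so the zero-to-three-pending steady state the mail describes is served without touching the allocator.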

Cheers,
Andre.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
