
Re: [Xen-devel] [PATCH v10 25/32] ARM: vITS: handle MAPTI/MAPI command



Hi,

On 02/06/17 18:12, Julien Grall wrote:
> Hi Andre,
> 
> On 05/26/2017 06:35 PM, Andre Przywara wrote:
>> The MAPTI command associates a DeviceID/EventID pair with an LPI/CPU
>> pair and actually instantiates LPI interrupts. MAPI is just a variant
>> of this command, where the LPI ID is the same as the event ID.
>> We connect the already allocated host LPI to this virtual LPI, so that
>> any LPI triggering on the host can be quickly forwarded to a guest.
>> Besides entering the domain and the virtual LPI number in the respective
>> host LPI entry, we also initialize and add the already allocated
>> struct pending_irq to our radix tree, so that we can now easily find it
>> by its virtual LPI number.
>> We also read the property table to update the enabled bit and the
>> priority for our new LPI, as we might have missed this during an earlier
>> INVALL call (which only checks mapped LPIs). But we make sure that the
>> property table is actually valid, as all redistributors might still
>> be disabled at this point.
>> Since write_itte_locked() now sees its first usage, we change the
>> declaration to static.
>>
>> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
>> ---
>>   xen/arch/arm/gic-v3-its.c        |  27 ++++++++
>>   xen/arch/arm/vgic-v3-its.c       | 138 ++++++++++++++++++++++++++++++++++++++-
>>   xen/include/asm-arm/gic_v3_its.h |   3 +
>>   3 files changed, 165 insertions(+), 3 deletions(-)
>>
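As a side note for readers following the thread: per the GICv3 ITS command
encoding, MAPI differs from MAPTI only in taking the pINTID from the EventID
field. A rough standalone sketch of that decode (raw bit extraction, not the
patch's actual helpers):

    #include <stdint.h>

    #define GITS_CMD_MAPTI  0x0a
    #define GITS_CMD_MAPI   0x0b

    /* Field offsets follow the GICv3 ITS command encoding. */
    static void decode_mapti(const uint64_t *cmdptr, uint32_t *devid,
                             uint32_t *eventid, uint32_t *intid)
    {
        *devid   = cmdptr[0] >> 32;          /* DW0[63:32]: DeviceID */
        *eventid = cmdptr[1] & 0xffffffff;   /* DW1[31:0]:  EventID  */

        /* MAPI is MAPTI with the LPI number taken from the EventID. */
        if ( (cmdptr[0] & 0xff) == GITS_CMD_MAPI )
            *intid = *eventid;
        else
            *intid = cmdptr[1] >> 32;        /* DW1[63:32]: pINTID   */
    }
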
>> diff --git a/xen/arch/arm/gic-v3-its.c b/xen/arch/arm/gic-v3-its.c
>> index 8864e0b..41fff64 100644
>> --- a/xen/arch/arm/gic-v3-its.c
>> +++ b/xen/arch/arm/gic-v3-its.c
>> @@ -876,6 +876,33 @@ int gicv3_remove_guest_event(struct domain *d, paddr_t vdoorbell_address,
>>       return 0;
>>   }
>>
>> +/*
>> + * Connects the event ID for an already assigned device to the given VCPU/vLPI
>> + * pair. The corresponding physical LPI is already mapped on the host side
>> + * (when assigning the physical device to the guest), so we just connect the
>> + * target VCPU/vLPI pair to that interrupt to inject it properly if it fires.
>> + * Returns a pointer to the already allocated struct pending_irq that is
>> + * meant to be used by that event.
>> + */
>> +struct pending_irq *gicv3_assign_guest_event(struct domain *d,
>> +                                             paddr_t vdoorbell_address,
>> +                                             uint32_t vdevid, uint32_t eventid,
>> +                                             uint32_t virt_lpi)
>> +{
>> +    struct pending_irq *pirq;
>> +    uint32_t host_lpi = 0;
> This should be INVALID_LPI and not 0.
> 
> [...]
> 
>> +/*
>> + * For a given virtual LPI read the enabled bit and priority from the virtual
>> + * property table and update the virtual IRQ's state in the given pending_irq.
>> + * Must be called with the respective VGIC VCPU lock held.
>> + */
>> +static int update_lpi_property(struct domain *d, struct pending_irq *p)
>> +{
>> +    paddr_t addr;
>> +    uint8_t property;
>> +    int ret;
>> +
>> +    /*
>> +     * If no redistributor has its LPIs enabled yet, we can't access the
>> +     * property table. In this case we just can't update the properties,
>> +     * but this should not be an error from an ITS point of view.
>> +     */
>> +    if ( !read_atomic(&d->arch.vgic.rdists_enabled) )
>> +        return 0;
> 
> AFAICT, there are no other place where the property would be updated.
> Should we put a warning to let the user know that property will not be
> updated? More that you don't return an error so no easy way to know
> what's going on.

So I thought about this again: If we handle an INV command while the
respective redistributor has LPIs off, even hardware can't do anything,
as having LPIs disabled means that there is no valid property table.
So a hardware redistributor would probably just ignore this request.

I see update_lpi_property() only being called during command handling, so I
think it is safe to just silently ignore the update, as hardware would do
the same?
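
If maintainers would rather have some visibility, the warning variant Julien
hints at might look like the sketch below (gdprintk() only prints in debug
builds; whether that and XENLOG_G_WARNING are the right choices is a
judgement call):

    if ( !read_atomic(&d->arch.vgic.rdists_enabled) )
    {
        gdprintk(XENLOG_G_WARNING,
                 "vLPI %u: property table not yet valid, skipping update\n",
                 p->irq);
        return 0;
    }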

>> +
>> +    addr = d->arch.vgic.rdist_propbase & GENMASK(51, 12);
>> +
>> +    ret = vgic_access_guest_memory(d, addr + p->irq - LPI_OFFSET,
>> +                                   &property, sizeof(property), false);
>> +    if ( ret )
>> +        return ret;
>> +
>> +    write_atomic(&p->lpi_priority, property & LPI_PROP_PRIO_MASK);
>> +
>> +    if ( property & LPI_PROP_ENABLED )
>> +        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
>> +    else
>> +        clear_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
>> +
>> +    return 0;
>> +}
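
As background: the property table holds one configuration byte per LPI, with
the priority in bits [7:2] and the enable flag in bit 0. A standalone sketch
of the decode the function performs (mask names copied from the patch):

    #include <stdint.h>
    #include <stdio.h>

    #define LPI_PROP_ENABLED    (1U << 0)
    #define LPI_PROP_PRIO_MASK  0xfcU

    int main(void)
    {
        uint8_t property = 0xa1;    /* example byte a guest might set */

        printf("priority 0x%02x, %s\n", property & LPI_PROP_PRIO_MASK,
               (property & LPI_PROP_ENABLED) ? "enabled" : "disabled");

        return 0;
    }
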
>> +
>>   /* Must be called with the ITS lock held. */
>>   static int its_discard_event(struct virt_its *its,
>>                                uint32_t vdevid, uint32_t vevid)
>> @@ -538,6 +574,98 @@ static int its_handle_mapd(struct virt_its *its, uint64_t *cmdptr)
>>       return ret;
>>   }
>>
>> +static int its_handle_mapti(struct virt_its *its, uint64_t *cmdptr)
>> +{
> 
> [...]
> 
>> +    /*
>> +     * radix_tree_insert() returns an error either due to an internal
>> +     * condition (like memory allocation failure) or because the LPI already
>> +     * existed in the tree. We don't support the latter case, so we always
>> +     * cleanup and return an error here in any case.
>> +     */
>> +out_remove_host_entry:
>> +    gicv3_remove_guest_event(its->d, its->doorbell_address, devid, eventid);
> 
> Can gicv3_remove_guest_event fail? If yes, should we check the
> return/add comment? If not, then we should have an ASSERT(....).

Well, as we have an "if ( !ret ) return 0;" above, we only get here with
ret != 0, so we are already on an error path.
I am not sure how we should react here; I think reporting the first error
is probably more meaningful.
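
In code form, the shape under discussion would be roughly the following
(standalone sketch with hypothetical stand-in helpers, not the patch's real
functions):

    #include <errno.h>

    /* Hypothetical stand-ins for the real map/unmap operations. */
    static int map_guest_event(void)   { return -ENOMEM; } /* pretend failure */
    static int unmap_guest_event(void) { return 0; }

    int handle_mapti_example(void)
    {
        int ret = map_guest_event();

        if ( !ret )
            return 0;

        /*
         * Best-effort cleanup on the error path: ret already holds the
         * first (meaningful) error, so the cleanup's own return value
         * is deliberately ignored.
         */
        unmap_guest_event();

        return ret;
    }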

Cheers,
Andre.
