Re: [Xen-devel] [PATCH v10 26/32] ARM: vITS: handle MOVI command
On Fri, 26 May 2017, Andre Przywara wrote:
> The MOVI command moves the interrupt affinity from one redistributor
> (read: VCPU) to another.
> For now migration of "live" LPIs is not yet implemented, but we store
> the changed affinity in our virtual ITTE and the pending_irq.
>
> Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>
> ---
> xen/arch/arm/vgic-v3-its.c | 66 ++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 66 insertions(+)
>
> diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
> index c350fa5..3332c09 100644
> --- a/xen/arch/arm/vgic-v3-its.c
> +++ b/xen/arch/arm/vgic-v3-its.c
> @@ -666,6 +666,66 @@ out_remove_mapping:
>      return ret;
>  }
>
> +static int its_handle_movi(struct virt_its *its, uint64_t *cmdptr)
> +{
> +    uint32_t devid = its_cmd_get_deviceid(cmdptr);
> +    uint32_t eventid = its_cmd_get_id(cmdptr);
> +    uint16_t collid = its_cmd_get_collection(cmdptr);
> +    unsigned long flags;
> +    struct pending_irq *p;
> +    struct vcpu *ovcpu, *nvcpu;
> +    uint32_t vlpi;
> +    int ret = -1;
> +
> +    spin_lock(&its->its_lock);
> +    /* Check for a mapped LPI and get the LPI number. */
> +    if ( !read_itte_locked(its, devid, eventid, &ovcpu, &vlpi) )
> +        goto out_unlock;
> +
> +    if ( vlpi == INVALID_LPI )
> +        goto out_unlock;
> +
> +    /* Check the new collection ID and get the new VCPU pointer */
> +    nvcpu = get_vcpu_from_collection(its, collid);
> +    if ( !nvcpu )
> +        goto out_unlock;
> +
> +    p = gicv3_its_get_event_pending_irq(its->d, its->doorbell_address,
> +                                        devid, eventid);
> +    if ( unlikely(!p) )
> +        goto out_unlock;
> +
> +    /*
> +     * TODO: This relies on the VCPU being correct in the ITS tables.
> +     * This can be fixed by either using a per-IRQ lock or by using
> +     * the VCPU ID from the pending_irq instead.
> +     */
> +    spin_lock_irqsave(&ovcpu->arch.vgic.lock, flags);
> +
> +    /* Update our cached vcpu_id in the pending_irq. */
> +    p->lpi_vcpu_id = nvcpu->vcpu_id;
> +
> +    spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
> +
> +    /*
> +     * TODO: lookup currently-in-guest virtual IRQs and migrate them,
> +     * as the locking may be fragile otherwise.
> +     * This is not easy to do at the moment, but should become easier
> +     * with the introduction of a per-IRQ lock.
> +     */
Sure, but at least we can handle the inflight-but-not-in-guest case. It
is just a matter of adding (within the arch.vgic.lock locked region):
    if ( !list_empty(&p->lr_queue) )
    {
        gic_remove_irq(ovcpu, p);
        clear_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
        list_del_init(&p->lr_queue);
        list_del_init(&p->inflight);
        spin_unlock_irqrestore(&ovcpu->arch.vgic.lock, flags);
        vgic_vcpu_inject_irq(nvcpu, vlpi);
    }
That is simple and there are no problems with locking. The problem is
with the other case: !list_empty(&p->inflight) &&
list_empty(&p->lr_queue), which is the one for which you need to keep
this TODO comment.
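To make the case split concrete: the migration logic above can be modeled in a
small standalone program. The structures and names below (pending_irq, lr_queue,
inflight, lpi_vcpu_id, the inject helper) are simplified stand-ins for the Xen
ones, with all locking omitted, so this is only a sketch of the list mechanics,
not the real vGIC code:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal intrusive doubly-linked list, modeled on Xen/Linux list heads. */
struct list_head { struct list_head *prev, *next; };

static void list_init(struct list_head *h) { h->prev = h->next = h; }
static bool list_empty(const struct list_head *h) { return h->next == h; }
static void list_del_init(struct list_head *e)
{
    e->prev->next = e->next;
    e->next->prev = e->prev;
    list_init(e);
}
static void list_add_tail(struct list_head *e, struct list_head *h)
{
    e->prev = h->prev; e->next = h;
    h->prev->next = e; h->prev = e;
}

/* Simplified stand-ins for the structures used in the patch. */
struct pending_irq {
    struct list_head inflight;   /* queued on some VCPU, in any state  */
    struct list_head lr_queue;   /* queued but not yet in an LR        */
    int lpi_vcpu_id;
};

struct vcpu {
    int vcpu_id;
    struct list_head inflight_irqs;
    struct list_head lr_pending;
};

/* Roughly what injecting the IRQ on the new VCPU would do. */
static void inject(struct vcpu *v, struct pending_irq *p)
{
    p->lpi_vcpu_id = v->vcpu_id;
    list_add_tail(&p->inflight, &v->inflight_irqs);
    list_add_tail(&p->lr_queue, &v->lr_pending);
}

/*
 * MOVI migration sketch following the review: the "queued but not yet
 * in an LR" case is moved eagerly; the hard case (inflight non-empty,
 * lr_queue empty, i.e. already in an LR) is left to the TODO.
 * Returns true if the IRQ was migrated eagerly.
 */
static bool movi_migrate(struct pending_irq *p, struct vcpu *new)
{
    if ( !list_empty(&p->lr_queue) )
    {
        /* Inflight but not in guest: dequeue from old VCPU, re-inject. */
        list_del_init(&p->lr_queue);
        list_del_init(&p->inflight);
        inject(new, p);
        return true;
    }
    /* Not inflight at all, or already in an LR: only update the cache. */
    p->lpi_vcpu_id = new->vcpu_id;
    return false;
}
```

Because the list heads are intrusive, list_del_init() removes the IRQ from the
old VCPU's queues without needing a pointer to that VCPU, which is why the eager
case is cheap once the lock is held.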
> +    /* Now store the new collection in the translation table. */
> +    if ( !write_itte_locked(its, devid, eventid, collid, vlpi, &nvcpu) )
> +        goto out_unlock;
> +
> +    ret = 0;
> +
> +out_unlock:
> +    spin_unlock(&its->its_lock);
> +
> +    return ret;
> +}
> +
> #define ITS_CMD_BUFFER_SIZE(baser) ((((baser) & 0xff) + 1) << 12)
> #define ITS_CMD_OFFSET(reg) ((reg) & GENMASK(19, 5))
>
> @@ -711,6 +771,12 @@ static int vgic_its_handle_cmds(struct domain *d, struct virt_its *its)
>      case GITS_CMD_MAPTI:
>          ret = its_handle_mapti(its, command);
>          break;
> +    case GITS_CMD_MOVALL:
> +        gdprintk(XENLOG_G_INFO, "vGITS: ignoring MOVALL command\n");
> +        break;
> +    case GITS_CMD_MOVI:
> +        ret = its_handle_movi(its, command);
> +        break;
>      case GITS_CMD_SYNC:
>          /* We handle ITS commands synchronously, so we ignore SYNC. */
>          break;
> --
> 2.9.0
>
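As an aside for readers following the command-queue hunk: the two macros in the
patch (ITS_CMD_BUFFER_SIZE decodes the page count from the low byte of CBASER;
ITS_CMD_OFFSET masks a read/write register down to a 32-byte-aligned command
offset) can be exercised standalone. GENMASK is expanded here so the snippet
builds outside the Xen tree, and its_next_cmd is a hypothetical helper, not part
of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* GENMASK(h, l): bits l..h set, expanded here for a standalone build. */
#define GENMASK(h, l) ((~0ULL >> (63 - (h))) & (~0ULL << (l)))

/* The two macros from the hunk above. */
#define ITS_CMD_BUFFER_SIZE(baser) ((((baser) & 0xff) + 1) << 12)
#define ITS_CMD_OFFSET(reg)        ((reg) & GENMASK(19, 5))

/*
 * Hypothetical helper: advance a CREADR/CWRITER-style offset by one
 * 32-byte ITS command, wrapping at the end of the command buffer.
 */
static inline uint64_t its_next_cmd(uint64_t reg, uint64_t baser)
{
    uint64_t ofs = ITS_CMD_OFFSET(reg) + 32;

    if ( ofs == ITS_CMD_BUFFER_SIZE(baser) )
        ofs = 0;
    return ofs;
}
```

A size field of 0 gives one 4 KB page, i.e. room for 128 commands of 32 bytes
each, which is why the offset mask only needs to reach bit 19 (1 MB maximum).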
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel