Re: [Xen-devel] [PATCH v2] xen/gic: EOI irqs on the right pcpu
> @@ -733,6 +739,10 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
>     while ((i = find_next_bit((const long unsigned int *) &eisr, 64, i)) < 64) {
> struct pending_irq *p;
> + int cpu, eoi;
> +
> + cpu = -1;
> + eoi = 0;
>
> spin_lock_irq(&gic.lock);
> lr = GICH[GICH_LR + i];
> @@ -754,11 +764,23 @@ static void maintenance_interrupt(int irq, void *dev_id, struct cpu_user_regs *r
> p = irq_to_pending(v, virq);
> if ( p->desc != NULL ) {
> p->desc->status &= ~IRQ_INPROGRESS;
> - GICC[GICC_DIR] = virq;
> + /* Assume only one pcpu needs to EOI the irq */
> + cpu = cpumask_first(&p->eoimask);
> + cpumask_clear(&p->eoimask);
> + eoi = 1;
Either this assumption is true, in which case you can use an int for
p->eoimask and avoid frobbing around with cpumasks, or it isn't, in which
case you can trivially adjust the call to on_selected_cpus to DTRT.

I think the assumption is true and you can just use an int.
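If the single-EOI-cpu assumption holds, the suggested int-based bookkeeping could look something like the following minimal, self-contained sketch. The struct and helper names are made up for illustration and are not the real Xen definitions:

```c
#include <assert.h>

/* Sentinel meaning "no pcpu needs to EOI this irq". */
#define EOI_CPU_NONE (-1)

/* Illustrative stand-in for struct pending_irq with an int instead
 * of a cpumask. */
struct pending_irq_sketch {
    int eoi_cpu;   /* pcpu that must EOI the irq, or EOI_CPU_NONE */
};

/* Record the pcpu that received the physical interrupt. */
static void record_eoi_cpu(struct pending_irq_sketch *p, int this_cpu)
{
    p->eoi_cpu = this_cpu;
}

/* Consume the recorded cpu; returns EOI_CPU_NONE if there is
 * nothing to EOI, and clears the field so a second caller does
 * not EOI twice. */
static int take_eoi_cpu(struct pending_irq_sketch *p)
{
    int cpu = p->eoi_cpu;
    p->eoi_cpu = EOI_CPU_NONE;
    return cpu;
}
```

The maintenance-interrupt path would then compare the returned cpu against smp_processor_id() and either EOI locally or dispatch cross-cpu, exactly as the patch already does.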
> }
> list_del_init(&p->inflight);
> spin_unlock_irq(&v->arch.vgic.lock);
>
> + if ( eoi ) {
> + /* this is not racy because we can't receive another irq of the
> + * same type until we EOI it. */
> + if ( cpu == smp_processor_id() )
> + gic_irq_eoi((void*)virq);
> + else
> + on_selected_cpus(cpumask_of(cpu), gic_irq_eoi, (void*)virq, 0);
> + }
> +
> i++;
> }
> }
> diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
> index f9c1a6b..c5370d5 100644
> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -676,9 +676,14 @@ void vgic_vcpu_inject_irq(struct vcpu *v, unsigned int irq, int virtual)
> n->irq = irq;
> n->priority = priority;
> if (!virtual)
> + {
> n->desc = irq_to_desc(irq);
> - else
> + cpumask_clear(&n->eoimask);
> + /* Assume we received the IRQ on the current pcpu */
> + cpumask_set_cpu(smp_processor_id(), &n->eoimask);
Is there any way for this to be false?
Perhaps eoimask should be part of arch_irq_desc so it can be saved at
the actual point we handle the interrupt?
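The arch_irq_desc idea could be sketched roughly as follows: record the receiving pcpu once, at the point the physical interrupt is handled, rather than in each pending_irq. All names here are illustrative, not the actual Xen structures:

```c
#include <assert.h>

/* Illustrative per-irq arch-specific state. */
struct arch_irq_desc_sketch {
    int eoi_cpu;            /* set when the physical irq is taken */
};

/* Illustrative stand-in for struct irq_desc. */
struct irq_desc_sketch {
    unsigned int irq;
    struct arch_irq_desc_sketch arch;
};

/* Would be called from the low-level interrupt entry path, where the
 * receiving pcpu is known for certain. */
static void handle_irq_sketch(struct irq_desc_sketch *desc, int this_cpu)
{
    desc->arch.eoi_cpu = this_cpu;   /* the pcpu that must later EOI */
}

/* Would be called from the maintenance interrupt to pick the EOI
 * target, with no per-pending_irq bookkeeping needed. */
static int eoi_target(const struct irq_desc_sketch *desc)
{
    return desc->arch.eoi_cpu;
}
```

This avoids the "assume we received the IRQ on the current pcpu" guess in vgic_vcpu_inject_irq, since the value is saved exactly where it is known.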
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel