
Re: [Xen-devel] [RFC PATCH 21/24] ARM: vITS: handle INVALL command



Hi Stefano,

On 05/12/16 19:51, Stefano Stabellini wrote:
> On Mon, 5 Dec 2016, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 03/12/16 00:46, Stefano Stabellini wrote:
>>> On Fri, 2 Dec 2016, Andre Przywara wrote:
>>> When we receive the maintenance interrupt and we clear the LR of the
>>> vLPI, Xen should re-enable the pLPI.
>>> Given that the state of the LRs is sync'ed before calling gic_interrupt,
>>> we can be sure to know exactly what state the vLPI is in at any given
>>> time. But for this to work correctly, it is important to configure the
>>> pLPI to be delivered to the same pCPU running the vCPU which handles
>>> the vLPI (as is already the case today for SPIs).

>> Why would that be necessary?

> Because the state of the LRs of other pCPUs won't be up to date: we
> wouldn't know for sure whether the guest EOI'ed the vLPI or not.

>> Well, there is still a small window when the interrupt may be received
>> on the previous pCPU, so we have to take this case into account.

> That's right. We already have a mechanism to deal with that, based on
> the GIC_IRQ_GUEST_MIGRATING flag. It should work with LPIs too.

Right.
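For reference, this is roughly how that mechanism works today. The sketch
below is simplified from the tail of gic_update_one_lr() in
xen/arch/arm/gic.c, so not verbatim: the affinity change is deferred until
the guest has EOIed the vIRQ and the LR has been cleared.

    /* Simplified sketch: the LR has just been cleared, i.e. the guest
     * has EOIed the vIRQ, so it is now safe to re-route the pIRQ to the
     * pCPU of the new target vCPU. */
    if ( p->desc != NULL &&
         test_and_clear_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
    {
        struct vcpu *v_target = vgic_get_target_vcpu(v, irq);

        irq_set_affinity(p->desc, cpumask_of(v_target->processor));
    }

The same logic should apply to LPIs, provided p->desc points to the
pLPI's irq_desc.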

>> This window may be bigger with LPIs, because a single vCPU may have
>> thousands of interrupts routed to it. It would take a long time to move
>> all of them when the vCPU migrates, so we may want to take a lazy
>> approach and move them only when they are received on the "wrong" pCPU.

> That's possible. The only downside is that modifying the irq migration
> workflow is difficult and we might want to avoid it if possible.

I don't think this would modify the irq migration workflow. If you look
at the implementation of arch_move_irqs, it just iterates over the vIRQs
and calls irq_set_affinity on each one.

irq_set_affinity will directly modify the hardware, and that's all.
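To illustrate, paraphrased from xen/arch/arm/gic.c (simplified, not
verbatim):

    void arch_move_irqs(struct vcpu *v)
    {
        const cpumask_t *cpu_mask = cpumask_of(v->processor);
        struct domain *d = v->domain;
        struct pending_irq *p;
        struct vcpu *v_target;
        int i;

        /* Walk every vIRQ of the domain and re-route the ones that
         * target this vCPU to its new pCPU. No vGIC state is touched,
         * only the physical routing via irq_set_affinity(). */
        for ( i = 32; i < vgic_num_irqs(d); i++ )
        {
            v_target = vgic_get_target_vcpu(v, i);
            p = irq_to_pending(v_target, i);

            if ( v_target == v &&
                 !test_bit(GIC_IRQ_GUEST_MIGRATING, &p->status) )
                irq_set_affinity(p->desc, cpu_mask);
        }
    }

With thousands of LPIs routed to one vCPU, it is the hardware access
behind each irq_set_affinity() call that makes this loop expensive. The
lazy variant suggested above would instead leave the routing alone at
migration time and fix it up in the pLPI receive path. A purely
hypothetical sketch, where do_lpi() and the lookup helpers are
illustrative names rather than existing Xen functions:

    static void do_lpi(unsigned int lpi)
    {
        /* Both helpers are assumed/illustrative, not existing Xen code. */
        struct vcpu *v_target = vgic_lpi_to_vcpu(lpi);
        struct pending_irq *p = lpi_to_pending(v_target, lpi);

        /* The LPI arrived on the "wrong" pCPU: re-route it now, instead
         * of having re-routed every LPI eagerly in arch_move_irqs(). */
        if ( v_target->processor != smp_processor_id() )
            irq_set_affinity(p->desc, cpumask_of(v_target->processor));

        vgic_vcpu_inject_irq(v_target, lpi);
    }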


> Another approach is to let the scheduler know that migration is slower.
> In fact this is not a new problem: it can be slow to migrate interrupts,
> even a few non-LPI interrupts, even on x86. I wonder if the Xen scheduler
> has any knowledge of that (CC'ing George and Dario). I guess that's the
> reason why most people run with dom0_vcpus_pin.

I had a quick look at x86: arch_move_irqs is not implemented there. Only
PIRQs are migrated when a vCPU moves to another pCPU.

The function pirq_set_affinity changes the affinity of a PIRQ, but only
in software (see irq_set_affinity); the new configuration is not
replicated into the hardware at that point.

In the case of ARM, we directly modify the configuration of the hardware.
This adds much more overhead, because it requires a hardware access for
every single IRQ.
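To make the contrast concrete, the two look roughly like this
(paraphrased from memory of the respective irq.c files, so take them as
sketches rather than verbatim code):

    /* x86 (xen/arch/x86/irq.c): only record the new mask and mark a move
     * as pending; the hardware is reprogrammed later, when convenient. */
    void irq_set_affinity(struct irq_desc *desc, const cpumask_t *mask)
    {
        if ( !desc->handler->set_affinity )
            return;

        ASSERT(spin_is_locked(&desc->lock));
        cpumask_copy(desc->arch.pending_mask, mask);
        desc->status |= IRQ_MOVE_PENDING;
    }

    /* ARM (xen/arch/arm/irq.c): program the interrupt controller right
     * away, i.e. one hardware access per IRQ. */
    void irq_set_affinity(struct irq_desc *desc, const cpumask_t *cpu_mask)
    {
        if ( desc != NULL )
            desc->handler->set_affinity(desc, cpu_mask);
    }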

Regards,

--
Julien Grall
