
Re: [Xen-devel] [PATCH 2/2] xen: merge temporary vcpu pinning scenarios



On 23.07.19 16:14, Jan Beulich wrote:
> On 23.07.2019 16:03, Jan Beulich wrote:
>> On 23.07.2019 15:44, Juergen Gross wrote:
>>> On 23.07.19 14:42, Jan Beulich wrote:
>>>> v->processor gets latched into st->processor before raising the softirq,
>>>> but can't the vCPU be moved elsewhere by the time the softirq handler
>>>> actually gains control? If that's not possible (and if it's not obvious
>>>> why, and as you can see it's not obvious to me), then I think a code
>>>> comment wants to be added there.
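
For context, a minimal sketch of the flow being asked about, using simplified
stand-ins rather than the real Xen definitions (the struct layout, field names
and the softirq number here are illustrative only):

/* Illustrative only -- simplified stand-ins, not the actual Xen code. */
struct vcpu {
    unsigned int processor;     /* CPU the vCPU currently runs on */
    /* ... */
};

struct softirq_trap {
    struct vcpu *vcpu;          /* vCPU to receive the queued NMI/#MC */
    unsigned int processor;     /* CPU latched at queueing time */
};

void raise_softirq(unsigned int nr);  /* handler runs later, asynchronously */
#define NMI_SOFTIRQ 0                 /* placeholder number */

struct softirq_trap st;

void queue_nmi_for(struct vcpu *v)
{
    st.vcpu = v;
    st.processor = v->processor;  /* latched now ... */
    raise_softirq(NMI_SOFTIRQ);   /* ... but only acted upon later */
    /*
     * Before the softirq handler runs, the scheduler may migrate 'v'
     * to another CPU, so st.processor can be stale by then.
     */
}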

>>> You are right, it might be possible for the vcpu to move around.
>>>
>>> OTOH is it really important to run the target vcpu exactly on the cpu
>>> it is executing on (or has last executed on) at the time the NMI/MCE is
>>> being queued? That cpu is in no way related to the cpu the MCE or NMI
>>> happened on. It is just a random cpu, and it would be no less random if
>>> we did the cpu selection when the softirq handler is running.
>>>
>>> One question to understand the idea behind all that: _why_ is the vcpu
>>> pinned until it does an iret? I could understand it if the vcpu were
>>> pinned to the cpu where the NMI/MCE happened, but this is not the case.
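
A rough sketch of what "pinned until it does an iret" would look like, with
hypothetical helper names (vcpu_pin_temporarily(), vcpu_unpin() and
set_pending_nmi() are illustrative, not the actual Xen interface):

struct vcpu;                                                  /* opaque here */
void vcpu_pin_temporarily(struct vcpu *v, unsigned int cpu);  /* hypothetical */
void vcpu_unpin(struct vcpu *v);                              /* hypothetical */
void set_pending_nmi(struct vcpu *v);                         /* hypothetical */

/* When the queued NMI/#MC is delivered, force the vCPU onto one CPU. */
void deliver_queued_nmi(struct vcpu *v, unsigned int cpu)
{
    vcpu_pin_temporarily(v, cpu);  /* override affinity for the delivery */
    set_pending_nmi(v);            /* inject when v next runs on 'cpu' */
}

/* Called once the guest signals completion of the NMI/#MC with an IRET. */
void nmi_iret_callback(struct vcpu *v)
{
    vcpu_unpin(v);                 /* restore the previous affinity */
}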

>> Then it was never finished or got broken, I would guess.

> Oh, no. The #MC side use has gone away in 3a91769d6e, without cleaning
> up other code. So there doesn't seem to be any such requirement anymore.

Ah, okay, so no need any longer to rename VCPU_AFFINITY_NMI. :-)

I'll add a patch removing the MCE cruft.


Juergen



 

