
Re: [PATCH v1.1] evtchn: add early-out to evtchn_move_pirqs()



Hi Jan,

On 26/04/2022 11:33, Jan Beulich wrote:
See the code comment. The higher the rate of vCPU-s migrating across
pCPU-s, the less useful this attempted optimization actually is. With
credit2 the migration rate looks to be unduly high even on mostly idle
systems, and hence on large systems lock contention here isn't very
difficult to observe (as was the case for a failed 4.12 osstest flight).

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Tested-by: Luca Fancellu <luca.fancellu@xxxxxxx>

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -1559,6 +1559,16 @@ void evtchn_move_pirqs(struct vcpu *v)
      unsigned int port;
      struct evtchn *chn;
+    /*
+     * The work done below is an attempt to keep pIRQ-s on the pCPU-s that the
+     * vCPU-s they're to be delivered to run on. In order to limit lock
+     * contention, check for an empty list prior to acquiring the lock. In the
+     * worst case a pIRQ just bound to this vCPU will be delivered elsewhere
+     * until the vCPU is migrated (again) to another pCPU.
+     */
+    if ( !v->pirq_evtchn_head )
+        return;

I was hoping Andrew would give some insight (which is why I haven't replied to your previous answer).

I am still not really convinced by this optimization. Aside from what I wrote about the IRQ being raised on the "wrong" pCPU, the lock contention would still be present if an OS decides to spread the pIRQs across all the vCPUs.

So it seems to me that switching to an rwlock would help address the contention in all cases.

+
      spin_lock(&d->event_lock);
      for ( port = v->pirq_evtchn_head; port; port = chn->u.pirq.next_port )
      {


Cheers,

--
Julien Grall


