
Re: [PATCH v2 2/2] xen/evtchn: rework per event channel lock



Hi Juergen,

On 12/10/2020 10:27, Juergen Gross wrote:
> Currently the lock for a single event channel needs to be taken with
> interrupts off, which causes deadlocks in some cases.
> 
> Rework the per event channel lock to be non-blocking for the case of
> sending an event, and remove the need to disable interrupts when
> taking the lock.
> 
> The lock is needed to avoid races between sending an event or
> querying the channel's state on the one hand, and removal of the
> event channel on the other.
> 
> Use a locking scheme similar to a rwlock, but with some modifications:
> 
> - sending an event or querying the event channel's state uses an
>   operation similar to read_trylock(); if the lock cannot be
>   obtained, the send is omitted or a default state is returned
> 
> - closing an event channel is similar to write_lock(), but without
>   real fairness regarding multiple writers (this saves some space in
>   the event channel structure, and multiple writers are impossible
>   anyway, as closing an event channel requires the domain's
>   event_lock to be held)
> 
> With this locking scheme it is mandatory that a writer either starts
> with an unbound or free event channel or ends with one, as otherwise
> the reaction of a reader failing to get the lock would be wrong.
> 
> Fixes: e045199c7c9c54 ("evtchn: address races with evtchn_reset()")
> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>

The approach looks ok to me. I have a couple of remarks below.
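
Just to confirm my understanding of the intended usage on the send
side, a reader would do something like the sketch below.
evtchn_read_unlock() is my guess for the name of the release helper,
which is in the part of the patch I have snipped, and d/v stand for
the target domain/vCPU:

    if ( evtchn_tryread_lock(evtchn) )
    {
        /* The channel cannot be closed behind our back here. */
        evtchn_port_set_pending(d, v->vcpu_id, evtchn);
        evtchn_read_unlock(evtchn);
    }
    /* Otherwise the channel is going away and the event is dropped. */
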

[...]

> diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
> index 509d3ae861..39a93f7556 100644
> --- a/xen/include/xen/event.h
> +++ b/xen/include/xen/event.h
> @@ -105,6 +105,45 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
>   #define bucket_from_port(d, p) \
>       ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
> +#define EVENT_WRITE_LOCK_INC MAX_VIRT_CPUS
> +static inline void evtchn_write_lock(struct evtchn *evtchn)

I think it would be good to describe the locking expectation in-code.
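
Something along these lines maybe (the wording is only a suggestion,
based on my reading of the scheme, so feel free to rephrase):

    /*
     * The lock is a counter used in a rwlock-like fashion:
     * - a reader (sending an event or querying the channel's state)
     *   uses evtchn_tryread_lock(): it increments the counter by 1
     *   and backs off again if a writer is present (counter >=
     *   EVENT_WRITE_LOCK_INC);
     * - a writer (closing the channel) adds EVENT_WRITE_LOCK_INC and
     *   spins until all readers have dropped out. Multiple writers
     *   are impossible, as closing requires the domain's event_lock.
     */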

> +{
> +    int val;
> +
> +    /* No barrier needed, atomic_add_return() is a full barrier. */
> +    for ( val = atomic_add_return(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +          val != EVENT_WRITE_LOCK_INC;
> +          val = atomic_read(&evtchn->lock) )
> +        cpu_relax();
> +}
> +
> +static inline void evtchn_write_unlock(struct evtchn *evtchn)
> +{
> +    arch_lock_release_barrier();
> +
> +    atomic_sub(EVENT_WRITE_LOCK_INC, &evtchn->lock);
> +}
> +
> +static inline bool evtchn_tryread_lock(struct evtchn *evtchn)
> +{
> +    if ( atomic_read(&evtchn->lock) >= EVENT_WRITE_LOCK_INC )
> +        return false;
> +
> +    /* No barrier needed, atomic_inc_return() is a full barrier. */
> +    if ( atomic_inc_return(&evtchn->lock) < EVENT_WRITE_LOCK_INC )
> +        return true;
> +
> +    atomic_dec(&evtchn->lock);

NIT: Can you add a newline here?

> +    return false;
> +}
> +

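FWIW, with the invariant from the commit message (a writer always
either starts or ends with a free/unbound channel), I would expect
the close path to look roughly like this (hypothetical sketch, not
part of this hunk):

    evtchn_write_lock(chn);
    /* ... tear down the binding ... */
    chn->state = ECS_FREE;
    evtchn_write_unlock(chn);

so that a reader failing evtchn_tryread_lock() can safely treat the
channel as gone.
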
Cheers,

--
Julien Grall
