
Re: [PATCH v4.1 2/2] xen/evtchn: rework per event channel lock



On 04.11.20 16:29, Jan Beulich wrote:
@@ -738,7 +725,8 @@ int evtchn_send(struct domain *ld, unsigned int lport)
      lchn = evtchn_from_port(ld, lport);
 
-    spin_lock_irqsave(&lchn->lock, flags);
+    if ( !evtchn_read_trylock(lchn) )
+        return 0;

Isn't there a change in behavior here? While sends through
ECS_UNBOUND ports indeed get silently ignored, ECS_FREE ones ought
to be getting -EINVAL (as should ECS_UNBOUND ones if they're
Xen-consumer ones). With the failed trylock you don't know which
of the two the port is in the process of being transitioned
to/from. The same would apply for other operations distinguishing
the two states. Right now both evtchn_status() and
evtchn_bind_vcpu() only use the domain-wide lock, but the latter
is getting switched by "evtchn: convert domain event lock to an
r/w one" (granted there's an RFC remark there whether that
transformation is worthwhile). Anyway, the main point of my remark
is that there's another subtlety here which I don't think becomes
obvious from description or comments - where the two states are
mentioned, it gets to look as if they can be treated equally.

Hmm, evtchn_send() always seems to be called with interrupts enabled.
We could just use a standard read_lock() here if you think the different
states should be treated the same as today.
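
Something along these lines (just a sketch, assuming an evtchn_read_lock()
wrapper around read_lock() next to evtchn_read_trylock(); this is not what
the posted patch does):

    lchn = evtchn_from_port(ld, lport);

    /* evtchn_send() always runs with interrupts enabled, so blocking is OK. */
    evtchn_read_lock(lchn);

    /* ... existing switch ( lchn->state ) handling, unchanged ... */

    evtchn_read_unlock(lchn);

That way ECS_FREE ports would keep hitting the unchanged default -EINVAL path
instead of being silently ignored.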


--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -105,6 +105,39 @@ void notify_via_xen_event_channel(struct domain *ld, int lport);
  #define bucket_from_port(d, p) \
      ((group_from_port(d, p))[((p) % EVTCHNS_PER_GROUP) / EVTCHNS_PER_BUCKET])
+/*
+ * Lock an event channel exclusively. This is allowed only when the channel is
+ * free or unbound either when taking or when releasing the lock, as any
+ * concurrent operation on the event channel using evtchn_read_trylock() will
+ * just assume the event channel is free or unbound at the moment when the
+ * evtchn_read_trylock() returns false.
+ */
+static inline void evtchn_write_lock(struct evtchn *evtchn)
+{
+    write_lock(&evtchn->lock);
+
+    evtchn->old_state = evtchn->state;
+}
+
+static inline void evtchn_write_unlock(struct evtchn *evtchn)
+{
+    /* Enforce lock discipline. */
+    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
+           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
+
+    write_unlock(&evtchn->lock);
+}

These two aren't needed outside of event_channel.c, are they? If
so, and if they ought to go in a header rather than directly into
the .c file (where I'd prefer the latter, for the sake of minimal
exposure), then it should be event_channel.h.

I wanted to have the locking functions in one place.

In case you prefer it otherwise (and you seem to), I'd rather move
the write lock functions to the .c file.
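
The read side would then stay in the header. I'd expect it to remain just
thin wrappers around the rwlock primitives, roughly (a sketch, not quoted
from the patch):

static inline bool evtchn_read_trylock(struct evtchn *evtchn)
{
    return read_trylock(&evtchn->lock);
}

static inline void evtchn_read_unlock(struct evtchn *evtchn)
{
    read_unlock(&evtchn->lock);
}

A failed read_trylock() then can only mean evtchn_write_lock() is held, and
taking that is only legal across transitions to or from ECS_FREE/ECS_UNBOUND,
which is what the comment above relies on.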


@@ -114,6 +114,7 @@ struct evtchn
          u16 virq;      /* state == ECS_VIRQ */
      } u;
      u8 priority;
+    u8 old_state;      /* State when taking lock in write mode. */
      u32 fifo_lastq;    /* Data for fifo events identifying last queue. */
  #ifdef CONFIG_XSM
      union {

While there's a gap here after the prior patch (which I'm still
not overly happy about), I'm still inclined to ask that the field
be put inside #ifndef NDEBUG, as its only consumer is an
ASSERT().

Fine with me
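
E.g. (a sketch only, not necessarily the final shape): guard the field, its
assignment and the ASSERT() consuming it, so that release builds don't
reference a non-existing member:

#ifndef NDEBUG
    u8 old_state;      /* State when taking lock in write mode. */
#endif

static inline void evtchn_write_lock(struct evtchn *evtchn)
{
    write_lock(&evtchn->lock);

#ifndef NDEBUG
    evtchn->old_state = evtchn->state;
#endif
}

static inline void evtchn_write_unlock(struct evtchn *evtchn)
{
#ifndef NDEBUG
    /* Enforce lock discipline. */
    ASSERT(evtchn->old_state == ECS_FREE || evtchn->old_state == ECS_UNBOUND ||
           evtchn->state == ECS_FREE || evtchn->state == ECS_UNBOUND);
#endif

    write_unlock(&evtchn->lock);
}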


Juergen



 

