
Re: [PATCH 11/12] evtchn: convert vIRQ lock to an r/w one



Hi Jan,

On 28/09/2020 12:02, Jan Beulich wrote:
There's no need to serialize all sending of vIRQ-s; all that's needed
is serialization against the closing of the respective event channels
(by means of a barrier). To facilitate the conversion, introduce a new
rw_barrier().

Looking at the code below, all the spin_lock() calls have been replaced by read_lock_*(). This is a bit surprising.


Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -160,7 +160,7 @@ struct vcpu *vcpu_create(struct domain *
      v->vcpu_id = vcpu_id;
      v->dirty_cpu = VCPU_CPU_CLEAN;
-    spin_lock_init(&v->virq_lock);
+    rwlock_init(&v->virq_lock);
      tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -640,7 +640,7 @@ int evtchn_close(struct domain *d1, int
              if ( v->virq_to_evtchn[chn1->u.virq] != port1 )
                  continue;
              v->virq_to_evtchn[chn1->u.virq] = 0;
-            spin_barrier(&v->virq_lock);
+            rw_barrier(&v->virq_lock);
          }
          break;
@@ -794,7 +794,7 @@ void send_guest_vcpu_virq(struct vcpu *v
      ASSERT(!virq_is_global(virq));

-    spin_lock_irqsave(&v->virq_lock, flags);
+    read_lock_irqsave(&v->virq_lock, flags);
      port = v->virq_to_evtchn[virq];
      if ( unlikely(port == 0) )
@@ -807,7 +807,7 @@ void send_guest_vcpu_virq(struct vcpu *v
      spin_unlock(&chn->lock);
  out:
-    spin_unlock_irqrestore(&v->virq_lock, flags);
+    read_unlock_irqrestore(&v->virq_lock, flags);
  }
void send_guest_global_virq(struct domain *d, uint32_t virq)
@@ -826,7 +826,7 @@ void send_guest_global_virq(struct domai
      if ( unlikely(v == NULL) )
          return;
-    spin_lock_irqsave(&v->virq_lock, flags);
+    read_lock_irqsave(&v->virq_lock, flags);
      port = v->virq_to_evtchn[virq];
      if ( unlikely(port == 0) )
@@ -838,7 +838,7 @@ void send_guest_global_virq(struct domai
      spin_unlock(&chn->lock);
  out:
-    spin_unlock_irqrestore(&v->virq_lock, flags);
+    read_unlock_irqrestore(&v->virq_lock, flags);
  }
void send_guest_pirq(struct domain *d, const struct pirq *pirq)
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -2,7 +2,7 @@
  #include <xen/irq.h>
  #include <xen/smp.h>
  #include <xen/time.h>
-#include <xen/spinlock.h>
+#include <xen/rwlock.h>

I would prefer to keep including <xen/spinlock.h>, as the fact that <xen/rwlock.h> includes it is merely an implementation detail.

  #include <xen/guest_access.h>
  #include <xen/preempt.h>
  #include <public/sysctl.h>
@@ -334,6 +334,12 @@ void _spin_unlock_recursive(spinlock_t *
      }
  }
+void _rw_barrier(rwlock_t *lock)
+{
+    check_barrier(&lock->lock.debug);
+    do { smp_mb(); } while ( _rw_is_locked(lock) );
+}

Why do you need to call smp_mb() on each loop iteration? Wouldn't it be sufficient to write something similar to spin_barrier()? I.e.:

smp_mb();
while ( _rw_is_locked(lock) )
  cpu_relax();
smp_mb();

But I wonder if, with either implementation, there is a risk that _rw_is_locked() always returns true and the loop therefore never ends.

Let's say we receive an interrupt; by the time it has been handled, the read lock may have been taken again.

spin_barrier() seems to handle this situation fine because it just waits for the ticket head to change. I don't think we can do the same here...

I am thinking that it may be easier to hold the write lock when doing the update.

Cheers,

--
Julien Grall
