
Re: [Xen-devel] [PATCH 2/2] xen/spinlocks: fix placement of preempt_[dis|en]able()



Hi,

On 13/03/2020 09:00, Jürgen Groß wrote:
> On 13.03.20 09:55, Jan Beulich wrote:
>> On 13.03.2020 09:05, Juergen Gross wrote:
>>> @@ -199,10 +199,10 @@ unsigned long _spin_lock_irqsave(spinlock_t *lock)
>>>   void _spin_unlock(spinlock_t *lock)
>>>   {
>>>       arch_lock_release_barrier();
>>> -    preempt_enable();
>>>       LOCK_PROFILE_REL;
>>>       rel_lock(&lock->debug);
>>>       add_sized(&lock->tickets.head, 1);
>>> +    preempt_enable();
>>>       arch_lock_signal();
>>>   }
>>
>> arch_lock_signal() is a barrier on Arm, hence just like for patch 1
>> I wonder whether the insertion wouldn't better come after it.

The important barrier in spin_unlock() is arch_lock_release_barrier().

The one in arch_lock_signal() is just to ensure that waking up the other CPUs will not happen before the unlock is seen. The barrier would not have been necessary if we didn't use 'sev'.
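
To illustrate the ordering (a sketch of the idea only, not the actual Xen/Arm definitions):

    /* Sketch only -- not the actual Xen/Arm code. */
    static inline void sketch_unlock(spinlock_t *lock)
    {
        arch_lock_release_barrier();        /* orders the critical section
                                             * before the release below    */
        add_sized(&lock->tickets.head, 1);  /* the lock is now free        */
        smp_wmb();                          /* make the release visible... */
        arch_lock_signal();                 /* ...before waking CPUs that
                                             * wait with wfe (Arm: sev)    */
    }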


> Either way is fine for me. It should be noted that preemption is only
> relevant on the local cpu. So this is about trading lock state
> visibility against preemption disabled time, and I agree the visible
> time of the lock held should be minimized at higher priority than the
> preemption disabled time.

I don't think the rationale is about "performance" here. The rationale is that you don't know the implementation of arch_lock_signal(). If you get preempted by a thread trying to acquire the same lock, then it may not do the right thing.
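
To illustrate the window (hypothetical function name, illustration only):

    /* Hypothetical unlock path with the signal outside the
     * preemption-disabled region -- illustration only. */
    void sketch_unlock_preemptible(spinlock_t *lock)
    {
        add_sized(&lock->tickets.head, 1);  /* the lock is now free         */
        preempt_enable();                   /* <-- we may be preempted here */
        /*
         * If the preempting thread acquires and releases this same lock
         * before we resume, arch_lock_signal() below runs against a lock
         * state it did not produce; whether that is harmless depends
         * entirely on the architecture's implementation.
         */
        arch_lock_signal();
    }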

Linux will also re-enable preemption only after the unlock has been completed. So it would be best to follow the same pattern.
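
For reference, the unlock path there looks roughly like this (simplified, from memory):

    /* Roughly Linux's __raw_spin_unlock() (simplified): */
    static inline void __raw_spin_unlock(raw_spinlock_t *lock)
    {
        spin_release(&lock->dep_map, _RET_IP_);  /* lockdep release       */
        do_raw_spin_unlock(lock);                /* full arch unlock,
                                                  * including any signal  */
        preempt_enable();                        /* only after the unlock
                                                  * has fully completed   */
    }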

Cheers,

--
Julien Grall
