Re: [Xen-devel] [PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
 
- To: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
 
- From: Waiman Long <waiman.long@xxxxxx>
 
- Date: Thu, 13 Mar 2014 16:05:19 -0400
 
- Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>,	Raghavendra K T <raghavendra.kt@xxxxxxxxxxxxxxxxxx>, kvm@xxxxxxxxxxxxxxx,	Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>,	virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx,	Andi Kleen <andi@xxxxxxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>,	Michel Lespinasse <walken@xxxxxxxxxx>,	Thomas Gleixner <tglx@xxxxxxxxxxxxx>, linux-arch@xxxxxxxxxxxxxxx,	Gleb Natapov <gleb@xxxxxxxxxx>, x86@xxxxxxxxxx,	Ingo Molnar <mingo@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx,	"Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>,	Arnd Bergmann <arnd@xxxxxxxx>, Scott J Norton <scott.norton@xxxxxx>,	Rusty Russell <rusty@xxxxxxxxxxxxxxx>,	Steven Rostedt <rostedt@xxxxxxxxxxx>, Chris Wright <chrisw@xxxxxxxxxxxx>,	Oleg Nesterov <oleg@xxxxxxxxxx>, Alok Kataria <akataria@xxxxxxxxxx>,	Aswin Chandramouleeswaran <aswin@xxxxxx>,	Chegu Vinod <chegu_vinod@xxxxxx>, linux-kernel@xxxxxxxxxxxxxxx,	David Vrabel <david.vrabel@xxxxxxxxxx>,	Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>,	Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
 
- Delivery-date: Thu, 13 Mar 2014 20:06:21 +0000
 
- List-id: Xen developer discussion <xen-devel.lists.xen.org>
 
 
 
On 03/13/2014 11:15 AM, Peter Zijlstra wrote:
 
On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote:
 
+static inline void arch_spin_lock(struct qspinlock *lock)
+{
+       if (static_key_false(&paravirt_unfairlocks_enabled))
+               queue_spin_lock_unfair(lock);
+       else
+               queue_spin_lock(lock);
+}
 
So I would have expected something like:
        if (static_key_false(¶virt_spinlock)) {
                while (!queue_spin_trylock(lock))
                        cpu_relax();
                return;
        }
At the top of queue_spin_lock_slowpath().
 
 I don't like the idea of constantly spinning on the lock. That can cause 
all sorts of performance issues. My version of the unfair lock tries to 
grab the lock regardless of whether there are others waiting in the queue. 
So instead of doing a cmpxchg of the whole 32-bit word, I just do a 
cmpxchg of the lock byte in the unfair version. A CPU has only one 
chance to steal the lock. If it can't, it will be lined up in the queue 
just like in the fair version. It is not as unfair as the other unfair 
locking schemes that spin on the lock repeatedly, so lock starvation 
should be less of a problem.
 On the other hand, it may not perform as well as the other unfair 
locking schemes. It is a compromise to provide some lock unfairness 
without sacrificing the good cacheline behavior of the queue spinlock.
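To make the "one chance to steal" idea concrete, here is a rough user-space sketch (not the actual kernel patch; the struct layout, names, and use of C11 atomics are my own assumptions). A CPU makes a single cmpxchg attempt on the lock byte alone; if that fails, it would fall through to the normal fair queueing path instead of spinning:

```c
#include <stdatomic.h>

/* Hypothetical simplified lock: only the lock byte is shown here,
 * standing in for the low byte of the 32-bit queue spinlock word. */
struct qspinlock_sketch {
        atomic_uchar locked;    /* 0 = free, 1 = held */
};

/* One-shot steal attempt: cmpxchg only the lock byte, ignoring any
 * queue state.  Returns 1 on success; on failure the caller is
 * expected to line up in the queue like the fair version. */
static int try_steal_lock(struct qspinlock_sketch *lock)
{
        unsigned char expect = 0;

        return atomic_compare_exchange_strong(&lock->locked, &expect, 1);
}
```

Because the attempt is made exactly once rather than in a loop, a waiter that loses the race queues up immediately, which is what bounds the unfairness.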
 
+static inline int arch_spin_trylock(struct qspinlock *lock)
+{
+       if (static_key_false(&paravirt_unfairlocks_enabled))
+               return queue_spin_trylock_unfair(lock);
+       else
+               return queue_spin_trylock(lock);
+}
 
That just doesn't make any kind of sense; a trylock cannot be fair or
unfair.
 
 
 Because I use a different cmpxchg for the fair and unfair versions, I 
also need a different version for trylock.
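The difference between the two cmpxchg widths can be sketched like this (again a hedged user-space illustration with assumed names and layout, little-endian byte order assumed): the fair trylock succeeds only when the whole 32-bit word is zero, i.e. the lock is free and nobody is queued, while the unfair one touches only the lock byte:

```c
#include <stdatomic.h>

/* Hypothetical layout: low byte = lock byte, upper bits = queue state
 * (little-endian assumed so the union members overlap as intended). */
union qlock_sketch {
        atomic_uint  val;       /* whole 32-bit word */
        atomic_uchar locked;    /* low byte only */
};

/* Fair trylock: cmpxchg the whole word, so it succeeds only when the
 * lock is free AND the queue is empty. */
static int trylock_fair(union qlock_sketch *l)
{
        unsigned int expect = 0;

        return atomic_compare_exchange_strong(&l->val, &expect, 1);
}

/* Unfair trylock: cmpxchg only the lock byte, ignoring queue state,
 * so it can steal the lock even while others are queued. */
static int trylock_unfair(union qlock_sketch *l)
{
        unsigned char expect = 0;

        return atomic_compare_exchange_strong(&l->locked, &expect, 1);
}
```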
-Longman
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel