
[Xen-devel] [PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path



There is a problem in the current pending bit spinning code.  When the
lock is free but the pending bit holder has not yet grabbed the lock
and cleared the pending bit, the spinning code will not be run.
As a result, the regular queuing code path may be used most of
the time even when there are only 2 tasks contending for the lock.
Since the pending bit holder can be expected to get the lock and
clear the pending bit soon, it is actually better to wait than to be
queued up, which has a higher overhead.
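
For reference, here is an illustrative sketch of the lock-word layout
that the slowpath logic below is phrased in terms of.  The exact field
widths live in qspinlock_types.h and are not part of this patch, so
treat these constants as assumptions rather than verbatim kernel
definitions:

  /*
   * Illustrative qspinlock word layout (assumed, not taken from this
   * patch): a locked byte, a pending bit, and a tail field encoding
   * the MCS queue.  The slowpath tests below use these masks.
   */
  #define _Q_LOCKED_VAL   (1U << 0)    /* bits 0-7:  locked byte */
  #define _Q_LOCKED_MASK  0xffU        /* all of the locked byte */
  #define _Q_PENDING_VAL  (1U << 8)    /* bit 8:     pending */
  #define _Q_TAIL_MASK    (~0U << 16)  /* bits 16-31: queue tail (illustrative) */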

The following tables show the before-patch execution time (in ms)
of a micro-benchmark where 5M iterations of the lock/unlock cycles
were run on a 10-core Westmere-EX x86-64 CPU with 2 different types of
loads - standalone (lock and protected data in different cachelines)
and embedded (lock and protected data in the same cacheline).

                  [Standalone/Embedded - same node]
  # of tasks    Ticket lock     Queue lock       %Change
  ----------    -----------     ----------       -------
       1          135/ 111       135/ 101          0%/  -9%
       2          890/ 779      1885/1990       +112%/+156%
       3         1932/1859      2333/2341        +21%/ +26%
       4         2829/2726      2900/2923         +3%/  +7%
       5         3834/3761      3655/3648         -5%/  -3%
       6         4963/4976      4336/4326        -13%/ -13%
       7         6299/6269      5057/5064        -20%/ -19%
       8         7691/7569      5786/5798        -25%/ -23%

Of course, the results will vary depending on the kind of test
machine used; a user-space sketch of this type of benchmark loop is
shown below.
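
For illustration only, the following is a rough user-space
approximation of such a benchmark.  The actual harness is not part of
this patch; pthread spinlocks stand in for the kernel lock
implementations, and the two structs model the embedded and standalone
cacheline layouts:

  /* User-space sketch of the lock/unlock micro-benchmark described
   * above (an approximation; the in-kernel harness is not included
   * in this patch).  Build with -lpthread. */
  #include <pthread.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  #define ITERS     5000000UL
  #define CACHELINE 64

  struct embedded {              /* lock and data in the same cacheline */
          pthread_spinlock_t lock;
          uint64_t counter;
  } __attribute__((aligned(CACHELINE)));   /* shown for layout only */

  struct standalone {            /* data pushed to its own cacheline */
          pthread_spinlock_t lock;
          char pad[CACHELINE - sizeof(pthread_spinlock_t)];
          uint64_t counter;
  } __attribute__((aligned(CACHELINE)));

  static struct standalone sl;

  static void *worker(void *arg)
  {
          for (unsigned long i = 0; i < ITERS; i++) {
                  pthread_spin_lock(&sl.lock);
                  sl.counter++;          /* trivial critical section */
                  pthread_spin_unlock(&sl.lock);
          }
          return NULL;
  }

  int main(int argc, char **argv)
  {
          int i, nthreads = (argc > 1) ? atoi(argv[1]) : 2;
          pthread_t tid[64];

          pthread_spin_init(&sl.lock, PTHREAD_PROCESS_PRIVATE);
          for (i = 0; i < nthreads && i < 64; i++)
                  pthread_create(&tid[i], NULL, worker, NULL);
          for (i = 0; i < nthreads && i < 64; i++)
                  pthread_join(tid[i], NULL);
          printf("counter = %llu\n", (unsigned long long)sl.counter);
          return 0;
  }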

With 1 task per NUMA node, the execution times are:

                [Standalone - different nodes]
  # of nodes    Ticket lock     Queue lock      %Change
  ----------    -----------     ----------      -------
       1           135            135             0%
       2          4604           5087           +10%
       3         10940          12224           +12%
       4         21555          10555           -51%

It can be seen that the queue spinlock is slower than the ticket
spinlock when there are 2 or 3 contending tasks. In all the other
cases, the queue spinlock is either equal to or faster than the
ticket spinlock.

With this patch, the performance data for 2 contending tasks are:

                  [Standalone/Embedded - same node]
  # of tasks    Ticket lock     Queue lock      %Change
  ----------    -----------     ----------      -------
       2          890/779        984/871        +11%/+12%

                [Standalone - different nodes]
  # of nodes    Ticket lock     Queue lock      %Change
  ----------    -----------     ----------      -------
       2          4604           1364           -70%

It can be seen that the queue spinlock performance for 2 contending
tasks is now comparable to the ticket spinlock when both tasks are on
the same node, but much faster when they are on different nodes. With
3 contending tasks, however, the ticket spinlock is still quite a bit
faster.
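
To summarize the decision logic in the hunk below (a paraphrase using
the illustrative constants sketched earlier, not code taken from the
patch), the slowpath now distinguishes three lock-word states before
trying to claim the lock or the pending bit:

  /* Paraphrase of the slowpath decision after this patch. */
  enum slowpath_action { TRY_LOCK, SPIN_WAIT, QUEUE };

  static enum slowpath_action classify(uint32_t val)
  {
          if ((val & _Q_TAIL_MASK) ||
              (val == (_Q_LOCKED_VAL | _Q_PENDING_VAL)))
                  return QUEUE;      /* waiters queued, or 2 owners ahead */
          if (val == _Q_PENDING_VAL)
                  return SPIN_WAIT;  /* lock free, handover imminent (new) */
          return TRY_LOCK;           /* grab the lock or the pending bit */
  }

The SPIN_WAIT case is the new behavior: previously any value outside
_Q_LOCKED_MASK sent the task straight to the queue.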

Signed-off-by: Waiman Long <Waiman.Long@xxxxxx>
---
 kernel/locking/qspinlock.c |   18 ++++++++++++++++--
 1 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index fc7fd8c..7f10758 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -233,11 +233,25 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
         */
        for (;;) {
                /*
-                * If we observe any contention; queue.
+                * If we observe that the queue is not empty or both
+                * the pending and lock bits are set, queue
                 */
-               if (val & ~_Q_LOCKED_MASK)
+               if ((val & _Q_TAIL_MASK) ||
+                   (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)))
                        goto queue;
 
+               if (val == _Q_PENDING_VAL) {
+                       /*
+                        * Pending bit is set, but not the lock bit.
+                        * Assuming that the pending bit holder is going to
+                        * set the lock bit and clear the pending bit soon,
+                        * it is better to wait than to exit at this point.
+                        */
+                       cpu_relax();
+                       val = atomic_read(&lock->val);
+                       continue;
+               }
+
                new = _Q_LOCKED_VAL;
                if (val == new)
                        new |= _Q_PENDING_VAL;
-- 
1.7.1

