
[Xen-devel] Re: [PATCH RFC 02/12] x86/ticketlock: convert spin loop to C



 On 08/02/2010 08:07 AM, Peter Zijlstra wrote:
On Fri, 2010-07-16 at 18:03 -0700, Jeremy Fitzhardinge wrote:
+       register union {
+               struct __raw_tickets tickets;
+               unsigned short slock;
+       } inc = { .slock = 1 << TICKET_SHIFT };
   register arch_spinlock_t inc = { .tickets = { .head = 1, .tail = 0 } };

From a quick look you can basically replace all TICKET_SHIFT usage (1 << TICKET_SHIFT) with such a constant.

Mostly. In the later patch converting trylock to C, you still need it to construct an argument for cmpxchg (which can only take a scalar, even if that scalar has a struct packed into it).
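
(For illustration only, a rough sketch of the kind of trylock that needs the scalar view; this is not the actual later patch, just the shape of it, reusing the same union and kernel helpers as the quoted code:)

static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
{
        /* same two views of the lock word as in the quoted lock code */
        union {
                struct __raw_tickets tickets;
                unsigned short slock;
        } old, new;

        old.slock = ACCESS_ONCE(lock->slock);
        if (old.tickets.head != old.tickets.tail)
                return 0;               /* lock is held, give up */

        /* bump the tail, i.e. take the next ticket, via the scalar view */
        new.slock = old.slock + (1 << TICKET_SHIFT);

        /* cmpxchg wants a scalar, which is exactly what .slock provides */
        return cmpxchg(&lock->slock, old.slock, new.slock) == old.slock;
}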

[ Also, does gcc really listen to the register hint these days? ]

It doesn't make much difference in this case. I think the only real effect is that it's illegal to take the address of a register variable.
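
(The address restriction is easy to demonstrate; gcc rejects something like the following made-up example:

void f(void)
{
        register int x = 0;
        int *p = &x;    /* error: address of register variable 'x' requested */
        (void)p;
}

so with the hint in place, taking &inc would be a compile error.)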

+       asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
+                     : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");
  "+Q" (inc->slock)

+       for (;;) {
+               if (inc.tickets.head == inc.tickets.tail)
+                       return;
+               cpu_relax();
+               inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
+       }
+       barrier();              /* make sure nothing creeps before the lock is taken */
  }
How will it ever get to that barrier()?

The compiler treats this as being:

        for (;;) {
                if (inc.tickets.head == inc.tickets.tail)
                        goto out;
                ...
        }
out:    barrier();
}

(Which would probably be a reasonable way to clarify the code.)

Without the barrier there's a risk of the compiler scheduling code from the locked region before the for (;;) loop.
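
For concreteness, a minimal sketch of that restructuring (declarations and asm taken from the quoted patch; the function name and the while-loop form are assumed here, not the actual follow-up):

static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
{
        register union {
                struct __raw_tickets tickets;
                unsigned short slock;
        } inc = { .slock = 1 << TICKET_SHIFT };

        /* take the next ticket (tail) and read the current head in one xadd */
        asm volatile (LOCK_PREFIX "xaddw %w0, %1\n"
                      : "+Q" (inc), "+m" (lock->slock) : : "memory", "cc");

        /* spin until our ticket comes up */
        while (inc.tickets.head != inc.tickets.tail) {
                cpu_relax();
                inc.tickets.head = ACCESS_ONCE(lock->tickets.head);
        }

        barrier();      /* make sure nothing creeps before the lock is taken */
}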

    J
