
Re: [Xen-devel] [PATCH 8/9] qspinlock: Generic paravirt support



On 04/01/2015 01:12 PM, Peter Zijlstra wrote:
> On Wed, Apr 01, 2015 at 12:20:30PM -0400, Waiman Long wrote:
>> After more careful reading, I think the assumption that the presence of an
>> unused bucket means there is no match is not true. Consider the scenario:
>>
>> 1. cpu 0 puts lock1 into hb[0]
>> 2. cpu 1 puts lock2 into hb[1]
>> 3. cpu 2 clears hb[0]
>> 4. cpu 3 looks for lock2 and doesn't find it
>
> Hmm, yes. The only way I can see that being true is if we assume entries
> are never taken out again.
>
> The wikipedia page could use some clarification here; this is not clear.
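To make the failure mode concrete, here is a stand-alone sketch of that
kind of hash table: linear probing, no tombstones, and a lookup that stops
at the first unused bucket. The names (hb_insert() and friends) are
illustrative only, not the code in the patch, and it assumes lock2's home
bucket is hb[0], i.e. it collided with lock1 when it was inserted.

#include <stddef.h>
#include <stdint.h>

#define HB_SIZE 4

struct hb_entry {
	void *lock;	/* NULL means "unused"; no tombstones */
	int   cpu;
};

static struct hb_entry hb[HB_SIZE];

static size_t hb_hash(void *lock)
{
	return ((uintptr_t)lock >> 4) % HB_SIZE;	/* toy hash */
}

/* Claim the first unused bucket at or after the home slot. */
static void hb_insert(void *lock, int cpu)
{
	size_t i = hb_hash(lock);
	size_t n;

	for (n = 0; n < HB_SIZE; n++, i = (i + 1) % HB_SIZE) {
		if (!hb[i].lock) {
			hb[i].lock = lock;
			hb[i].cpu  = cpu;
			return;
		}
	}
	/* table full; a real implementation must resize or fail */
}

/*
 * Lookup that treats the first unused bucket as "no match".  This is
 * only correct if entries are never removed.
 */
static int hb_lookup(void *lock)
{
	size_t i = hb_hash(lock);
	size_t n;

	for (n = 0; n < HB_SIZE && hb[i].lock; n++, i = (i + 1) % HB_SIZE) {
		if (hb[i].lock == lock)
			return hb[i].cpu;
	}
	return -1;					/* "not found" */
}

static void hb_remove(void *lock)
{
	size_t i = hb_hash(lock);
	size_t n;

	for (n = 0; n < HB_SIZE && hb[i].lock; n++, i = (i + 1) % HB_SIZE) {
		if (hb[i].lock == lock) {
			hb[i].lock = NULL;	/* step 3: clearing hb[0] */
			return;
		}
	}
}

With both locks hashing to bucket 0, steps 1-4 above leave lock2 in hb[1]
behind a now-empty hb[0], so hb_lookup(lock2) returns "not found" even
though the entry is still there. Hence either a full-array scan or a
guarantee that entries are never removed is needed.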

>> At this point, I am thinking of going back to your previous idea of
>> passing the queue head information down the queue.
>
> Having to scan the entire array for a lookup sure sucks, but the wait
> loops involved in the other idea can get us into the exact predicament
> we were trying to get out of, because their forward progress depends on
> other CPUs.

For the waiting loop, the worst case is when a new CPU gets queued right
before we write the head value to the previous tail node. In that case,
the maximum number of retries is equal to the total number of CPUs - 2,
but that should rarely happen.
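To illustrate what I mean by the waiting loop, here is a sketch
(hypothetical names, pv_node and queue_tail; the real qspinlock encodes
its tail as a cpu/idx word rather than a pointer): the queue head keeps
writing its cpu number into whatever node is currently the queue tail and
retries whenever the tail has moved underneath it. Each retry means
another CPU has joined the queue, which is where the "CPUs - 2" bound
comes from (presumably excluding the lock holder and the head itself).

#include <stdatomic.h>

struct pv_node {
	struct pv_node *_Atomic next;
	int			head_cpu;	/* cpu number of the queue head, -1 if unknown */
};

/* Assumed non-empty: at minimum the head's own node is the tail. */
static struct pv_node *_Atomic queue_tail;

static void publish_head_simple(int my_cpu)
{
	struct pv_node *t = atomic_load(&queue_tail);

	for (;;) {
		t->head_cpu = my_cpu;		/* tell the current tail who the head is */

		struct pv_node *now = atomic_load(&queue_tail);
		if (now == t)			/* tail unchanged: done */
			break;
		t = now;			/* a new CPU queued right before our write: retry */
	}
}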

I did find a way to guarantee forward progress in a few steps. I will try
the normal way once. If that fails, I will insert the head node at the
queue tail once again, after saving its next pointer. After modifying the
previous tail node, cmpxchg will be used to restore the previous tail. If
that cmpxchg fails, we just have to wait until the next pointer is updated
and write it out to the previous tail node. We can then restore the saved
next pointer and move forward.
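Roughly, in code, this is what I have in mind; again just a sketch of my
reading of the steps, reusing the hypothetical pv_node and queue_tail from
the snippet above rather than the real encoded tail word:

/* "me" is the queue head's own MCS-style node. */
static void publish_head_fallback(struct pv_node *me, int my_cpu)
{
	struct pv_node *prev_tail, *saved_next, *waiter, *expected;

	/*
	 * Step 1: try the normal way once (a single pass of the loop in
	 * publish_head_simple()).  Assume the tail moved and it failed,
	 * so we fall through to the fallback below.
	 */

	/* Step 2: save our next pointer, then re-insert our node at the tail. */
	saved_next = atomic_load(&me->next);
	atomic_store(&me->next, (struct pv_node *)NULL);
	prev_tail  = atomic_exchange(&queue_tail, me);

	/*
	 * Step 3: the previous tail can no longer change under us, so write
	 * the head information into it, then try to undo the re-insertion
	 * by cmpxchg'ing the tail back to the previous tail.
	 */
	prev_tail->head_cpu = my_cpu;
	expected = me;
	if (!atomic_compare_exchange_strong(&queue_tail, &expected, prev_tail)) {
		/*
		 * Step 4: the cmpxchg failed, so a new CPU queued behind our
		 * re-inserted node and will set our next pointer shortly.
		 * Wait for that, then write the new waiter into the previous
		 * tail node so the queue stays linked without us in the middle.
		 */
		while (!(waiter = atomic_load(&me->next)))
			;			/* spin until ->next is set */
		atomic_store(&prev_tail->next, waiter);
	}

	/* Finally, restore the saved next pointer and move forward. */
	atomic_store(&me->next, saved_next);
}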

Let me know if that looks reasonable to you.

-Longman


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

