[Xen-devel] Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization

To: "H. Peter Anvin" <hpa@xxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 00/20] x86: ticket lock rewrite and paravirtualization
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Mon, 15 Nov 2010 12:00:31 -0800
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Srivatsa Vaddagiri <vatsa@xxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>, Linux Virtualization <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>, Avi Kivity <avi@xxxxxxxxxx>
On 11/12/2010 02:20 PM, H. Peter Anvin wrote:
> On 11/12/2010 02:17 PM, Jeremy Fitzhardinge wrote:
>> On 11/12/2010 02:12 PM, H. Peter Anvin wrote:
>>> On 11/03/2010 07:59 AM, Jeremy Fitzhardinge wrote:
>>>>       - with an unmodified struct spinlock, it can check to see if
>>>>         head == tail after unlock; if not, then there's someone else
>>>>         trying to lock, and we can do a kick.  Unfortunately this
>>>>         generates very high level of redundant kicks, because the
>>>>         waiting CPU might not have blocked yet (which is the common
>>>>         case)
>>>>
>>> How high is "very high" here -- most of the time (so that any mitigation
>>> on the slow path is useless)?
>> I'll need to remeasure, but I think around 90% of the slowpath entries
>> were spurious without this.  In other words, when spinlocks do contend,
>> most of the time it isn't very serious and the other cpu doesn't spend
>> much time spinning.
>>
> 90% of the slowpath entries is one thing; my real question is the
> fraction of fastpath entries that get diverted to the slowpath.  It
> affects where mitigation needs to happen.

There are two questions: how many unlock events *must* go into the
slowpath for correctness reasons (ie, because the corresponding lock
also went slowpath and got blocked there), and how many end up going
into the slowpath due to imperfect heuristics?

The number of lock events which go slowpath is very dependent on the
workload of the kernel in question and of the machine overall.  On a
system with no CPU overcommit it should be zero (assuming that in the
native case no Linux spinlock remains contended for so long that it
would trigger the slowpath).  On a very overcommitted system, it comes
down to the likelihood that a VCPU will be preempted while running in a
critical region: Tcrit * Phz, where Tcrit is the critical section time
in seconds and Phz is the preemption rate of the VCPU scheduler in Hz.
So, for example, a lock with a 10us critical section and a 100Hz
preemption rate has a 0.1% chance of being preempted and possibly
causing the other lockers to enter the slow path.
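(Spelling out that arithmetic as a trivial check -- the values below are
just the example numbers above, not measurements:)

/* Trivial check of the estimate above: P(preempt during critical
 * section) ~= Tcrit * Phz.  Example values only, not measurements. */
#include <stdio.h>

int main(void)
{
	double tcrit = 10e-6;        /* 10us critical section */
	double phz   = 100.0;        /* 100Hz VCPU preemption rate */
	double p     = tcrit * phz;  /* = 0.001, i.e. 0.1% */

	printf("preemption probability = %.3f%%\n", p * 100.0);
	return 0;
}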

On the unlock side, the code needs to test whether the lock has any
waiters in a slowpath state.  A conservative test is whether there are
any outstanding tickets, but in my measurements 90% of CPUs which spun
on a lock ended up getting it without ever having to take the slowpath.
This led me to investigate more precise tests; the current one is a
count of slowpath-entering CPUs waiting on the lock.
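To make that concrete, the two tests look roughly like this (a sketch
only, not the actual patch; the struct layout and names are invented
for illustration):

#include <stdbool.h>

/* Sketch of an 8-bit ticket lock plus a hypothetical count of CPUs
 * that have entered the slowpath on this lock. */
struct sketch_spinlock {
	unsigned char head;      /* ticket currently being served */
	unsigned char tail;      /* next ticket to be handed out */
	unsigned char waiters;   /* hypothetical: CPUs blocked in slowpath */
};

/* Conservative: kick whenever any ticket is still outstanding.
 * This is what turned out to be ~90% spurious, since most spinners
 * get the lock before they ever block. */
static inline bool unlock_should_kick_conservative(const struct sketch_spinlock *l)
{
	return l->head != l->tail;
}

/* Precise: kick only if someone actually entered the slowpath. */
static inline bool unlock_should_kick_precise(const struct sketch_spinlock *l)
{
	return l->waiters != 0;
}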

Another approach I discussed with PeterZ and Mathieu is to steal the LSB
of the ticket counters (halving the max CPU count) to use as a "there is
someone in the slowpath waiting on this lock" flag.  But I haven't spent
the time to work out an algorithm to maintain that flag (or flags, since
there are bits available) in a correct and efficient way.
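The layout would be something like the sketch below; the hard part --
maintaining the flag correctly and efficiently -- is exactly what's
still missing:

#include <stdbool.h>

/* Sketch of the LSB-stealing idea: tickets advance by 2, freeing bit 0
 * to mean "someone is waiting on this lock in the slowpath".  Halves
 * the maximum supportable CPU count; the set/clear protocol for the
 * flag is the unsolved part. */
#define TICKET_SLOWPATH_FLAG	0x01
#define TICKET_INC		0x02	/* tickets now step by 2 */

struct sketch_spinlock {
	unsigned short head;
	unsigned short tail;
};

static inline void set_slowpath_flag(struct sketch_spinlock *l)
{
	/* would need to be an atomic OR in real code */
	l->tail |= TICKET_SLOWPATH_FLAG;
}

static inline bool unlock_needs_kick(const struct sketch_spinlock *l)
{
	return (l->tail & TICKET_SLOWPATH_FLAG) != 0;
}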

    J
