[Xen-devel] Re: Xen spinlock questions

>>> Jeremy Fitzhardinge <jeremy@xxxxxxxx> 06.08.08 10:47 >>>
>Jan Beulich wrote:
>> More on that: You'll really need two per-CPU variables afaics, one for the
>> current non-irq lock being spun upon, and one for the current irqs-disabled
>> one. The latter one might not need saving/restoring as long as you don't
>> re-enable interrupts, but the code might turn out cleaner when doing the
>> save/restore regardless, e.g. for me (doing ticket locking):
>>   
>
>Not sure I follow.  How do you use the second array at the kick end of 
>the process?

I just look at both stored values.
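
Roughly along these lines, i.e. just two slots per CPU which the release
path scans (only a sketch - the field and helper names here are invented,
not what's in either of our patches):

    /* Two per-CPU "currently spinning on" slots: one for locks taken
     * with interrupts enabled, one for irq-save locks. */
    struct spinning {
        struct raw_spinlock *nonirq;    /* spun on with irqs enabled */
        struct raw_spinlock *irqsave;   /* spun on with irqs disabled */
    };
    static DEFINE_PER_CPU(struct spinning, spinning);

    /* Release side: kick every CPU that registered interest in this
     * lock in either of its two slots. */
    static void spin_unlock_kick(struct raw_spinlock *lock)
    {
        int cpu;

        for_each_online_cpu(cpu) {
            const struct spinning *s = &per_cpu(spinning, cpu);

            if (s->nonirq == lock || s->irqsave == lock)
                kick_cpu(cpu);      /* hypothetical notification, e.g.
                                       sending the per-CPU event channel */
        }
    }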

>I ended up just storing the previous value locally, and then restoring 
>it.  It assumes that locks will strictly nest, of course, but I think 
>that's reasonable.

Storing the previous value locally is fine. But I don't think you can get
by with just one 'currently spinning' pointer, because of the kicking side
requirements: if an irq-save lock interrupts a non-irq one (with the
spinning pointer already set) and a remote CPU then releases the non-irq
lock and wants to kick you, it can't find you anymore, since the irq-save
lock has already replaced the non-irq one in the pointer. Hence, if you're
already past the try-lock, you may end up never getting the wakeup.

Since a CPU can only ever be spinning on one non-irq and one irq-save lock
at a time (the latter as long as you don't re-enable interrupts), two
fields, otoh, are sufficient.

Btw., I also think that using an xchg() (and hence a locked transaction)
for updating the pointer isn't necessary - only the owning CPU ever writes
its own fields, remote CPUs merely read them, so a plain store plus a
barrier should do.
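
To illustrate, the slow path on the spinning side would then look
something like this, re-using the two slots from the sketch above (again
with invented helper names, and assuming strictly nested locks):

    static void spin_lock_slow(struct raw_spinlock *lock, bool irq_save)
    {
        struct spinning *s = &__get_cpu_var(spinning);
        struct raw_spinlock **slot = irq_save ? &s->irqsave : &s->nonirq;
        struct raw_spinlock *prev = *slot;  /* save - locks nest strictly */

        /* Plain store: only this CPU ever writes its own slots, remote
         * CPUs merely read them on the kick path, so no xchg() needed. */
        *slot = lock;
        smp_wmb();                          /* slot visible before we poll */

        while (!low_level_trylock(lock))    /* hypothetical raw trylock */
            poll_for_kick();                /* hypothetical blocking poll,
                                               e.g. a SCHEDOP_poll wrapper */

        *slot = prev;                       /* restore on the way out */
    }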

>> On an 8-core system I'm seeing between 20,000 (x86-64) and 35,000
>> (i686) wakeup interrupts per CPU. I'm not certain this still counts as rare.
>> Though that number may go down a little once the hypervisor doesn't
>> needlessly wake all polling vCPU-s anymore.
>>   
>
>What workload are you seeing that on?  20-35k interrupts over what time 
>period?

Oh, sorry, I meant to say that's for a kernel build (-j12), taking about
400 wall seconds.

>In my tests, I only see it fall into the slow path a couple of thousand 
>times per cpu for a kernbench run.

Hmm, that's different from what I see, then. Actually, I see a significant
spike at modpost stage 2, when all the .ko-s get linked. The (spinlock)
interrupt rate gets up to between 1,000 and 2,000 per CPU per second.

>That said, I've implemented a pile of debugfs infrastructure for 
>extracting lots of details about lock performance so there's some scope 
>for tuning it (including being able to change the timeout on the fly to 
>see how things change).

Yeah, that's gonna be useful to have.
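
Something along these lines, I'd imagine - a couple of counters plus a
writable timeout under debugfs (just a sketch; directory, file, and
variable names are my guesses, not necessarily what your patch uses):

    #include <linux/debugfs.h>
    #include <linux/init.h>

    static u32 spin_timeout = 1 << 10;  /* spins before entering the poll */
    static u32 stat_slowpath;           /* slow-path entries */
    static u32 stat_kicks;              /* wakeup events received */

    static int __init spinlock_debugfs_init(void)
    {
        struct dentry *d = debugfs_create_dir("spinlocks", NULL);

        if (!d)
            return -ENOMEM;
        debugfs_create_u32("timeout", 0644, d, &spin_timeout);  /* tunable
                                                                    on the fly */
        debugfs_create_u32("slowpath", 0444, d, &stat_slowpath);
        debugfs_create_u32("kicks", 0444, d, &stat_kicks);
        return 0;
    }
    late_initcall(spinlock_debugfs_init);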

>I think we can also mitigate poll's wake-all behaviour by seeing if our 
>particular per-cpu interrupt is pending and drop back into poll 
>immediately if not (ie, detect a spurious wakeup).

Oh, of course - I hadn't considered that so far.
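
I.e. the poll helper from the earlier sketch could then look roughly like
this (invented names once more; assuming a per-CPU irq bound to the
lock-kick event channel, plus pending-test/clear helpers along the lines
of what the event channel code offers):

    static DEFINE_PER_CPU(int, lock_kick_irq);  /* bound at CPU bring-up */

    static void poll_for_kick(void)
    {
        int irq = __get_cpu_var(lock_kick_irq);

        /* Block until some event wakes us; if our own kick isn't
         * actually pending, treat the wakeup as spurious and go back
         * to polling. */
        do {
            xen_poll_irq(irq);
        } while (!xen_test_irq_pending(irq));

        xen_clear_irq_pending(irq);
    }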

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
