
Re: [Xen-devel] Xen spinlock questions



On 15/8/08 15:06, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:

>>> I can't really explain the results of testing with this version of the
>>> patch:
>>> While the number of false wakeups got further reduced by somewhat
>>> less than 20%, both time spent in the kernel and total execution time
>>> went up (8% and 4% respectively) compared to my original (and from
>>> all I can tell worse) version of the patch. Nothing else changed as far as
>>> I'm aware.
>> 
>> That is certainly odd. Presumably consistent across a few runs? I can't
>> imagine where extra time would be being spent though...
> 
> Yes, I did at least five runs in each environment.

It might be worth retrying with the vcpu_unblock() changes removed. It'll
still work, but poll_mask may have bits spuriously left set for arbitrary
periods of time. However, vcpu_unblock() is the only path I obviously make
more expensive compared with your patch.

We could also possibly make the vcpu_unblock() check cheaper by testing
v->poll_evtchn for non-zero and, only if it is set, zeroing it and clearing
the vcpu's bit from poll_mask. Reading a vcpu-local field may be cheaper
than pulling in a domain-struct cache line.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel