
Re: [Xen-devel] [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks



On 05/07/2012 08:22 PM, Avi Kivity wrote:
> On 05/07/2012 05:47 PM, Raghavendra K T wrote:
>>> Not good.  Solving a problem in software that is already solved by
>>> hardware?  It's okay if there are no costs involved, but here we're
>>> introducing a new ABI that we'll have to maintain for a long time.
>>
>> Hmm, agreed that being a step ahead of mighty hardware (with just an
>> improvement of 1-3%) is no good for the long term (where PLE is the
>> future).
>
> PLE is the present, not the future.  It was introduced on later Nehalems
> and is present on all Westmeres.  Two more processor generations have
> passed meanwhile.  The AMD equivalent was also introduced around that
> timeframe.
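(For context: PLE, Pause-Loop Exiting, works because a vCPU waiting on a
contended ticket lock executes the PAUSE instruction in a tight loop; the
CPU notices PAUSEs closer together than ple_gap cycles and, once the spin
has lasted more than ple_window cycles, forces a VM exit so the hypervisor
can schedule another vCPU.  Below is a minimal C sketch of the kind of
spin loop involved; it is an editor's illustration, not code from the
series, and cpu_relax() here is a stand-in for the kernel's x86 helper.)

/* Minimal sketch: a plain ticket lock whose waiters spin on PAUSE.
 * This tight PAUSE loop is exactly what PLE detects and interrupts.
 */
#include <stdatomic.h>

struct ticketlock {
        atomic_uint head;       /* ticket currently being served */
        atomic_uint tail;       /* next ticket to hand out */
};

static inline void cpu_relax(void)
{
        __asm__ __volatile__("pause" ::: "memory");  /* x86 PAUSE */
}

static void ticket_lock(struct ticketlock *lock)
{
        /* take a ticket ... */
        unsigned int me = atomic_fetch_add(&lock->tail, 1);

        /* ... and spin until it is served; PLE interrupts this loop */
        while (atomic_load(&lock->head) != me)
                cpu_relax();
}

static void ticket_unlock(struct ticketlock *lock)
{
        atomic_fetch_add(&lock->head, 1);  /* serve the next ticket */
}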

>> Having said that, it is hard for me to resist saying:
>> the bottleneck is somewhere else on PLE machines, and IMHO the answer
>> would be a combination of paravirt-spinlock + pv-flush-tlb.
>>
>> But I need to come up with good numbers to argue in favour of the claim.
>>
>> PS: Nikunj had found in his experiments that pv-flush-tlb + paravirt-spinlock
>> is a win on PLE, where neither of them alone could prove the benefit.


> I'd like to see those numbers, then.
>
> Ingo, please hold on the kvm-specific patches, meanwhile.
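(For readers following along: the series under discussion avoids unbounded
spinning by adding a spin-then-block slowpath to the ticket lock.  The
sketch below shows only the shape of that idea.  SPIN_THRESHOLD echoes the
series, but pv_wait()/pv_kick() are illustrative stand-ins for the real
per-hypervisor hooks, which on KVM are a HLT-based wait plus a
KVM_HC_KICK_CPU hypercall from the unlocker.  It reuses struct ticketlock
and cpu_relax() from the sketch above and glosses over the sleep/wake
races the real patches handle.)

#define SPIN_THRESHOLD  (1 << 11)   /* bounded spin before blocking */

void pv_wait(unsigned int ticket);  /* stand-in: sleep until kicked */
void pv_kick(unsigned int ticket);  /* stand-in: wake that ticket's vCPU */

static void pv_ticket_lock(struct ticketlock *lock)
{
        unsigned int me = atomic_fetch_add(&lock->tail, 1);

        for (;;) {
                unsigned int loops = SPIN_THRESHOLD;

                /* fast path: spin a bounded number of times */
                while (loops--) {
                        if (atomic_load(&lock->head) == me)
                                return;          /* lock acquired */
                        cpu_relax();
                }
                /* slow path: let the hypervisor run someone useful */
                pv_wait(me);
        }
}

static void pv_ticket_unlock(struct ticketlock *lock)
{
        unsigned int next = atomic_fetch_add(&lock->head, 1) + 1;

        /* the real series only hypercalls when a waiter is asleep */
        pv_kick(next);
}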



Hmm. I think I got the numbers wrong when I claimed a 1-3% improvement on PLE.

Going by what I had posted in https://lkml.org/lkml/2012/4/5/73 (with the
correct calculation):

        base                  patched               improvement
  1x    70.475  (85.6979)     63.5033 (72.7041)     15.7%
  2x    110.971 (132.829)     105.099 (128.738)      5.56%
  3x    150.265 (184.766)     138.341 (172.69)       8.62%


It was around 12% with the optimization patch posted separately along with
that series (that one needs more experimentation, though).

But anyway, I will come up with results for the current patch series.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

