
Re: [Xen-devel] [PATCH] switch rangeset's lock to rwlock



On Thu, 2014-09-18 at 14:32 +0100, Jan Beulich wrote:
> >>> On 18.09.14 at 15:02, <tim@xxxxxxx> wrote:
> > At 13:15 +0100 on 18 Sep (1411042524), Jan Beulich wrote:
> >> >>> On 18.09.14 at 12:43, <tim@xxxxxxx> wrote:
> >> > At 13:55 +0100 on 12 Sep (1410526507), Jan Beulich wrote:
> >> >> As a general library routine, it should behave as efficiently as
> >> >> possible, even if at present no significant contention is known here.
> >> >> 
> >> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> >> ---
> >> >> With the widened use of rangesets I'd like to re-suggest this change
> >> >> which I had posted already a couple of years back.
> >> > 
> >> > Is this addressing an actual (measured) problem or just seems like a
> >> > good idea?  If the latter maybe keep it until after 4.5?
> >> 
> >> The latter. And yes, I have no problem keeping it until after 4.5,
> >> it's just that the multi-ioreq-server's extended use of rangesets
> >> (as said elsewhere) would seem to make this a reasonable fit for
> >> 4.5.
> > 
> > Well, I think it's a question for the release manager, then. :)
> 
> Konrad?
> 
> > I wouldn't be inclined to tinker with them unless we had a measurement
> > justifying the change.  After all, the rangeset operations are not
> > long-running ones and we'd be adding some overhead.
> 
> Well, that's a pretty weak argument: The vHPET lock doesn't
> protect long-running operations either, yet its conversion to an
> rwlock did yield a significant improvement.

That implies the measurement Tim is asking about...

>  But yes, unless rangesets
> participate in something that may get used heavily by a guest,
> changing the lock kind would likely not have a significant effect.
> Otoh figuring out whether lock contention is a problem involves a
> non-negligible amount of work, pre-empting which seems at least
> desirable.
> 
> In fact, in the latest runs with those many-vCPU Windows guests I
> see signs of contention even on vpt's pl_time_lock (8 out of 64 vCPU-s
> racing for it).

... as does this, but that isn't affected by this change, is it?

I suppose constructing a test for the rangeset getting hit by e.g. the
ioreq server would be tricky to do unless you happen to already have a
disaggregated qemu setup (which I suppose you don't).

What are the potential negative effects if our gut feeling about the
access patterns of the rangesets is wrong? Is the rwlock potentially
slower than the existing spinlock in the read-only, uncontended or some
other case?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
