
Re: [Xen-devel] [PATCH] switch rangeset's lock to rwlock



>>> On 22.09.14 at 11:42, <Ian.Campbell@xxxxxxxxxxxxx> wrote:
> On Thu, 2014-09-18 at 14:32 +0100, Jan Beulich wrote:
>> >>> On 18.09.14 at 15:02, <tim@xxxxxxx> wrote:
>> > I wouldn't be inclined to tinker with them unless we had a measurement
>> > justifying the change.  After all, the rangeset operations are not
>> > long-running ones and we'd be adding some overhead.
>> 
>> Well, that's a pretty weak argument: The vHPET lock doesn't
>> protect long running operations either, yet its conversion to rwlock
>> did yield a significant improvement.
> 
> That implies the measurement Tim is asking about...
> 
>>  But yes, unless rangesets
>> participate in something that may get used heavily by a guest,
>> changing the lock kind would likely not have a significant effect.
>> Otoh figuring out that lock contention is a problem involves a
>> non-negligible amount of work, pre-empting which seems at least
>> desirable.
>> 
>> In fact, in the latest runs with those many-vCPU Windows guests I
>> see signs of contention even on vpt's pl_time_lock (8 out of 64 vCPU-s
>> racing for it).
> 
> ... as does this, but that isn't affected by this change, is it?

No, it was merely another data point showing that a locked region
being short-running is no argument against there being potential
(severe) contention.
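
(Purely for illustration, not the actual patch: a minimal userspace
sketch of the reader/writer split under discussion, using
pthread_rwlock_t as a stand-in for Xen's rwlock_t. All names below are
made up. Lookups take the lock for reading and can proceed
concurrently; only insertions take it for writing.)

/* Illustrative only -- not the Xen patch.  A userspace analogue of
 * guarding a range list with a reader/writer lock: queries run
 * concurrently, updates are exclusive.  Build with -pthread. */
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct range {
    unsigned long s, e;               /* inclusive range [s, e] */
    struct range *next;
};

struct rangeset_demo {
    pthread_rwlock_t lock;            /* previously a plain lock */
    struct range *head;
};

static struct rangeset_demo demo = { .lock = PTHREAD_RWLOCK_INITIALIZER };

/* Read side: many callers may query at the same time. */
static bool demo_contains(struct rangeset_demo *r, unsigned long s,
                          unsigned long e)
{
    bool found = false;
    struct range *x;

    pthread_rwlock_rdlock(&r->lock);
    for ( x = r->head; x; x = x->next )
        if ( s >= x->s && e <= x->e )
        {
            found = true;
            break;
        }
    pthread_rwlock_unlock(&r->lock);

    return found;
}

/* Write side: insertions remain exclusive (no range merging here). */
static int demo_add(struct rangeset_demo *r, unsigned long s,
                    unsigned long e)
{
    struct range *x = malloc(sizeof(*x));

    if ( !x )
        return -1;
    x->s = s;
    x->e = e;

    pthread_rwlock_wrlock(&r->lock);
    x->next = r->head;
    r->head = x;
    pthread_rwlock_unlock(&r->lock);

    return 0;
}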

> I suppose constructing a test for the rangeset getting hit by e.g. the
> ioreq server would be tricky to do unless you happen to already have a
> disaggregated qemu setup (which I suppose you don't).

Correct.

> What are the potential negative effects if our gut feeling about the
> access patterns of the rangesets is wrong? Is the rwlock potentially
> slower than the existing lock in the read-only case, the uncontended
> case, or some other case?

The only downside is that the raw lock structure is 4 bytes instead
of the 2 used by the plain spin lock.
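
(If someone wanted to quantify the uncontended overhead Ian asks about,
a rough userspace comparison along the lines below would be indicative.
It measures pthread primitives, not Xen's locks, so only the trend
carries over; the 4-vs-2-byte figure above refers to Xen's raw lock
structures.)

/* Rough and indicative only: uncontended lock/unlock cost and footprint
 * of a spinlock vs. an rwlock, using pthread primitives.
 * Build: cc -O2 -pthread bench.c */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define ITERS 10000000UL

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    pthread_spinlock_t sl;
    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    struct timespec t0, t1;
    unsigned long i;

    pthread_spin_init(&sl, PTHREAD_PROCESS_PRIVATE);

    printf("sizeof(spinlock) = %zu, sizeof(rwlock) = %zu\n",
           sizeof(sl), sizeof(rw));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for ( i = 0; i < ITERS; i++ )
    {
        pthread_spin_lock(&sl);
        pthread_spin_unlock(&sl);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("spinlock:    %.1f ns per lock/unlock\n",
           elapsed(t0, t1) / ITERS * 1e9);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for ( i = 0; i < ITERS; i++ )
    {
        pthread_rwlock_rdlock(&rw);
        pthread_rwlock_unlock(&rw);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("rwlock read: %.1f ns per lock/unlock\n",
           elapsed(t0, t1) / ITERS * 1e9);

    return 0;
}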

Jan




 

