
Re: [Xen-devel] [RFC PATCH 2/2] gnttab: refactor locking for better scalability



On Tue, Nov 12, 2013 at 09:26:28AM +0000, Jan Beulich wrote:
> >>> On 12.11.13 at 03:03, Matt Wilson <msw@xxxxxxxxx> wrote:
> > From: Matt Wilson <msw@xxxxxxxxxx>
> > 
> > This patch refactors grant table locking. It splits the previous
> > single spinlock per grant table into multiple locks. The heavily
> > modified components of the grant table (the maptrack state and the
> > active entries) are now protected by their own spinlocks. The
> > remaining elements of the grant table are read-mostly, so the main
> > grant table lock is modified to be a rwlock to improve concurrency.
> > 
> > Workloads with high grant table operation rates, especially map/unmap
> > operations as used by blkback/blkfront when persistent grants are not
> > supported, show marked improvement with these changes. A domU with 24
> > VBDs in a streaming 2M write workload achieved 1,400 MB/sec before
> > this change. Performance more than doubles with this patch, reaching
> > 3,000 MB/sec before tuning and 3,600 MB/sec after adjusting event
> > channel vCPU bindings.
> > 
> > Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
> 
> I will have to take a thorough second look at this, but considering
> its size - would it be possible to split out the conversion of the main
> lock to an rw one into a third patch? Correctness of the second one
> wouldn't suffer from this.

Yes, I figured there would be a request to break this up. I'll have
to think hard about each step along the way to make sure that nothing
breaks (e.g., that no deadlock scenarios are introduced). But it's a
useful exercise.

It should be possible to first add the separate locking for maptrack,
then the fine-grained locking for active entries, and finally convert
the main grant table lock to an rwlock (or some other locking
primitive, should a better one become available).
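
To make the end state concrete, here is a rough sketch of the lock
layout I have in mind (the field and type names below are illustrative
only, not necessarily what the final patches will use):

  struct active_grant_entry {
      spinlock_t    lock;   /* protects this entry's pin counts and frame */
      uint32_t      pin;
      unsigned long frame;
  };

  struct grant_table {
      /*
       * Read-mostly state (version, table sizes, pointers to the shared
       * frames) guarded by an rwlock; the common paths only need to
       * take it for reading.
       */
      rwlock_t      lock;
      unsigned int  nr_grant_frames;

      /* The maptrack free list and limits get their own spinlock. */
      spinlock_t    maptrack_lock;
      unsigned int  maptrack_head;
      unsigned int  maptrack_limit;

      /* Each active entry carries its own lock (see above). */
      struct active_grant_entry **active;
  };

The idea is that map/unmap would take the rwlock for reading plus the
relevant per-entry lock (and the maptrack lock only around free-list
manipulation), so the hot paths no longer serialize on a single
table-wide spinlock.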

> And I'd also suggest converting all the maptrack_ fields into a
> separate sub-structure named maptrack. That'd not only guarantee
> things to be kept together, but may also allow passing the address
> of the sub-structure as argument to one or another helper function
> instead of the full structure's, thus making it more clear what such a
> function actually acts upon (and in particular what it does _not_
> touch).

I agree. Do you have any thoughts on padding the spinlock out to a
full cache line to avoid false sharing with the surrounding fields?
The maptrack critical sections are extremely short, so there might not
be any real need for this. I haven't measured maptrack spinlock
contention with this change yet; I vaguely remember Anthony doing some
measurements on an early PoC version that didn't show contention. It's
probably a premature optimization...
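
If we did decide to guard against false sharing, something along these
lines would do. This is only a sketch: I'm assuming the usual
__cacheline_aligned definition from xen/cache.h, and the field names
are mine rather than anything final:

  /*
   * Maptrack sub-structure per your suggestion, aligned (and therefore
   * padded) to a cache line so writers of the lock don't bounce the
   * line holding the read-mostly grant table fields.
   */
  struct grant_table_maptrack {
      spinlock_t    lock;
      unsigned int  head;
      unsigned int  limit;
      struct grant_mapping **pages;
  } __cacheline_aligned;

  struct grant_table {
      rwlock_t      lock;
      /* ... other read-mostly fields ... */
      struct grant_table_maptrack maptrack;
  };

Helpers that only touch maptrack state could then take a
struct grant_table_maptrack * instead of the whole grant table, which
also addresses your point about making it clear what a function does
and does not touch.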

--msw
