
Re: [Xen-devel] [RFC PATCH 2/2] gnttab: refactor locking for better scalability



>>> On 12.11.13 at 03:03, Matt Wilson <msw@xxxxxxxxx> wrote:
> From: Matt Wilson <msw@xxxxxxxxxx>
> 
> This patch refactors grant table locking. It splits the previous
> single spinlock per grant table into multiple locks. The heavily
> modified components of the grant table (the maptrack state and the
> active entries) are now protected by their own spinlocks. The
> remaining elements of the grant table are read-mostly, so the main
> grant table lock is converted to an rwlock to improve concurrency.
> 
> Workloads with high grant table operation rates, especially map/unmap
> operations as used by blkback/blkfront when persistent grants are not
> supported, show marked improvement with these changes. A domU with 24
> VBDs in a streaming 2M write workload achieved 1,400 MB/sec before
> this change. Performance more than doubles with this patch, reaching
> 3,000 MB/sec before tuning and 3,600 MB/sec after adjusting event
> channel vCPU bindings.
> 
> Signed-off-by: Matt Wilson <msw@xxxxxxxxxx>
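
For concreteness, a minimal sketch of the split-lock layout the
description implies (the field names here are illustrative
assumptions, not the patch's actual contents):

    /* Illustrative only: assumed fields based on the description
     * above, using Xen's spinlock_t/rwlock_t primitives. */
    struct active_grant_entry {
        spinlock_t lock;          /* protects this active entry */
        /* ... per-entry state (domid, frame, pin counts) ... */
    };

    struct grant_table {
        rwlock_t lock;            /* read-mostly main table state */
        spinlock_t maptrack_lock; /* maptrack free list and limit */
        /* ... shared/status frames, version, active entry pages ... */
    };

With such a layout, map/unmap paths would take the main lock for
reading plus only the fine-grained lock(s) they need, so concurrent
operations on different entries no longer serialize on a single
table-wide spinlock.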

I will have to take a thorough second look at this, but considering
its size: would it be possible to split the conversion of the main
lock to an rwlock out into a third patch? The correctness of the
second patch wouldn't suffer from that.

And I'd also suggest converting all the maptrack_ fields into a
separate sub-structure named maptrack. That would not only keep
these fields together, but would also allow passing the address of
the sub-structure, rather than that of the full structure, to one
or another helper function, making it clearer what such a function
actually acts upon (and in particular what it does _not_ touch).
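
A rough sketch of that grouping (modeled on the existing maptrack_*
fields; the exact member types are assumptions):

    /* Hypothetical grouping of the existing maptrack_* fields. */
    struct grant_maptrack {
        spinlock_t           lock;
        struct grant_mapping **pages;  /* was: maptrack */
        unsigned int         head;     /* was: maptrack_head */
        unsigned int         limit;    /* was: maptrack_limit */
    };

    struct grant_table {
        rwlock_t lock;
        struct grant_maptrack maptrack;
        /* ... remaining grant table state ... */
    };

    /* A helper can then take just the sub-structure, making explicit
     * that it touches nothing else in the grant table, e.g. a
     * narrowed variant of the existing get_maptrack_handle(): */
    static int get_maptrack_handle(struct grant_maptrack *mt);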

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

