
Re: [Xen-devel] [PATCH 3/3] xen/block: add multi-page ring support



On Tue, Jun 09, 2015 at 10:21:27AM -0400, Konrad Rzeszutek Wilk wrote:
> On Tue, Jun 09, 2015 at 04:07:33PM +0200, Roger Pau Monn? wrote:
> > El 09/06/15 a les 15.39, Konrad Rzeszutek Wilk ha escrit:
> > > On Tue, Jun 09, 2015 at 08:52:53AM +0000, Paul Durrant wrote:
> > >>> -----Original Message-----
> > >>> From: Bob Liu [mailto:bob.liu@xxxxxxxxxx]
> > >>> Sent: 09 June 2015 09:50
> > >>> To: Bob Liu
> > >>> Cc: xen-devel@xxxxxxxxxxxxx; David Vrabel; justing@xxxxxxxxxxxxxxxx;
> > >>> konrad.wilk@xxxxxxxxxx; Roger Pau Monne; Paul Durrant; Julien Grall; 
> > >>> linux-
> > >>> kernel@xxxxxxxxxxxxxxx
> > >>> Subject: Re: [PATCH 3/3] xen/block: add multi-page ring support
> > >>>
> > >>>
> > >>> On 06/03/2015 01:40 PM, Bob Liu wrote:
> > >>>> Extend xen/block to support multi-page rings, so that more requests
> > >>>> can be issued by using more than one page as the request ring between
> > >>>> blkfront and the backend. As a result, performance can improve
> > >>>> significantly.
> > >>>>
> > >>>> We saw some impressive improvements on our high-end iSCSI storage
> > >>>> cluster backend. When using 64 pages as the ring, IOPS increased about
> > >>>> 15x in the throughput test and more than doubled in the latency test.
> > >>>>
> > >>>> The reason is that the limit on outstanding requests is 32 with a
> > >>>> one-page ring, but in our case the iSCSI LUN was spread across about
> > >>>> 100 physical drives, and 32 requests were not enough to keep them busy.
> > >>>>
> > >>>> Changes in v2:
> > >>>>  - Rebased to 4.0-rc6.
> > >>>>  - Document how the multi-page ring feature works in linux io/blkif.h.
> > >>>>
> > >>>> Changes in v3:
> > >>>>  - Remove changes to linux io/blkif.h and follow the protocol defined
> > >>>>    in io/blkif.h of the Xen tree.
> > >>>>  - Rebased to 4.1-rc3
> > >>>>
> > >>>> Changes in v4:
> > >>>>  - Switch to using 'ring-page-order' and 'max-ring-page-order'.
> > >>>>  - Address a few comments from Roger.
> > >>>>
> > >>>> Changes in v5:
> > >>>>  - Clarify the 4K granularity in the comment.
> > >>>>  - Address more comments from Roger
> > >>>>
> > >>>> Signed-off-by: Bob Liu <bob.liu@xxxxxxxxxx>
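For readers new to the blkif ring, the 32-request ceiling mentioned above
falls out of the shared-ring arithmetic. Below is a minimal userspace sketch
of that arithmetic; the 64-byte ring header and 112-byte entry size are
assumptions based on the usual blkif shared-ring layout, not figures taken
from this thread.

#include <stdio.h>

#define XEN_PAGE_SIZE      4096u  /* assumed 4K ring granularity, per the v5 note */
#define SRING_HEADER_BYTES   64u  /* producer/consumer indices plus padding (assumed) */
#define RING_ENTRY_BYTES    112u  /* sizeof(union blkif_sring_entry) (assumed) */

/* Round the slot count down to a power of two, as Xen's __RING_SIZE() does. */
static unsigned int ring_slots(unsigned int ring_page_order)
{
        unsigned int bytes = (XEN_PAGE_SIZE << ring_page_order) - SRING_HEADER_BYTES;
        unsigned int slots = bytes / RING_ENTRY_BYTES;
        unsigned int pow2 = 1;

        while (pow2 * 2 <= slots)
                pow2 *= 2;
        return pow2;
}

int main(void)
{
        unsigned int order;

        for (order = 0; order <= 6; order++)
                printf("ring-page-order %u (%2u pages): %4u outstanding requests\n",
                       order, 1u << order, ring_slots(order));
        return 0;
}

With those assumed sizes, order 0 gives 32 slots and order 6 (64 pages) gives
2048, which illustrates why a single-page ring struggles to keep ~100 physical
drives busy.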
> > >>>
> > >>> Also tested the Windows PV drivers, which work fine when the multi-page
> > >>> ring feature is enabled in the Linux backend.
> > >>> http://www.xenproject.org/downloads/windows-pv-drivers.html
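For completeness, here is a rough, hypothetical frontend-side sketch of the
'ring-page-order' negotiation that the protocol in the Xen tree's io/blkif.h
describes. The function name, the pre-granted gref array and the error
handling are illustrative assumptions, not code from Bob's patch.

#include <linux/kernel.h>
#include <xen/xenbus.h>
#include <xen/grant_table.h>

/*
 * Sketch: agree on a ring size with the backend and publish one grant
 * reference per ring page.  Real implementations typically also keep the
 * legacy single "ring-ref" key for order-0 rings; that is omitted here.
 */
static int negotiate_ring_order(struct xenbus_device *dev,
                                unsigned int want_order,
                                grant_ref_t *gref)  /* already granted */
{
        unsigned int max_order = 0, order, i;
        char node[16];
        int err;

        /* The backend advertises the largest ring it accepts; absent means one page. */
        if (xenbus_scanf(XBT_NIL, dev->otherend, "max-ring-page-order",
                         "%u", &max_order) != 1)
                max_order = 0;

        order = min(want_order, max_order);

        /* The frontend commits to a size ... */
        err = xenbus_printf(XBT_NIL, dev->nodename, "ring-page-order", "%u", order);
        if (err)
                return err;

        /* ... and publishes one grant per ring page: ring-ref0, ring-ref1, ... */
        for (i = 0; i < (1u << order); i++) {
                snprintf(node, sizeof(node), "ring-ref%u", i);
                err = xenbus_printf(XBT_NIL, dev->nodename, node, "%u", gref[i]);
                if (err)
                        return err;
        }
        return 0;
}

In real code these writes would normally sit inside a single xenbus
transaction so the backend sees a consistent set of keys.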
> > >>>
> > >>
> > >> Great! Thanks for verifying that :-)
> > > 
> > > Woot! Bob, could you repost the blkif.h patch for the Xen tree,
> > > please, and also mention the testing in it? I think this
> > > was the only big 'what if?!' question holding this up.
> > > 
> > > 
> > > Roger, I put them (patches) on devel/for-jens-4.2 on
> > > 
> > > git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
> > > 
> > > I think these two patches:
> > > drivers: xen-blkback: delay pending_req allocation to connect_ring
> > > xen/block: add multi-page ring support
> > > 
> > > are the only ones that haven't been Acked by you (or maybe they
> > > have and I missed the Ack?)
> > 
> > Hello,
> > 
> > I was waiting to Ack those because the XenServer storage performance
> > folks found out that these patches cause a performance regression on
> > some of their tests. I'm adding them to the conversation so they can
> 
> Is this with multi-page enabled, or with the patches applied but multi-page
> disabled (baseline)?
> 
> > provide more details about the issues they found, and whether we should
> > hold off on pushing these patches or not.
> 
> Or surely fix whatever is causing this.


ping?


> > 
> > Roger.
> > 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

