Re: [Xen-devel] [RFC v1 0/5] VBD: enlarge max segment per request in blkfront



On Thu, Aug 16, 2012 at 10:22:56AM +0000, Duan, Ronghui wrote:
> Hi, list.
> The maximum number of segments per request in the VBD queue is 11, while for
> the Linux OS and other VMMs the parameter defaults to 128. This may be caused
> by the limited size of the ring between frontend and backend. So I wonder
> whether we can put the segment data into a separate ring and use it
> dynamically according to each request's needs. Here is a prototype which
> hasn't had much testing, but it works on a Linux 64-bit 3.4.6 kernel. I can
> see the CPU% reduced to about 1/3 of the original in a sequential test, but
> it brings some overhead which makes random I/O's CPU utilization increase a
> little.
> 
> Here is a short version of the data, using only 1K random reads and 64K
> sequential reads in direct mode, testing a physical SSD disk as the blkback
> backend. CPU% is taken from xentop.
> Read 1K random    IOPS      Dom0 CPU%   DomU CPU%
>      W            52005.9   86.6        71
>      W/O          52123.1   85.8        66.9
> (W = with the patch, W/O = without)
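
For reference, the 11 is BLKIF_MAX_SEGMENTS_PER_REQUEST from the blkif
protocol header: every request carries its segment descriptors inline, and
the whole request ring has to fit in one shared 4K page. Roughly (abbreviated
from xen/include/public/io/blkif.h, typedefs expanded for readability):

  #include <stdint.h>

  #define BLKIF_MAX_SEGMENTS_PER_REQUEST 11

  struct blkif_request_segment {
      uint32_t gref;          /* grant reference of the I/O buffer frame */
      uint8_t  first_sect;    /* first sector in the frame to transfer */
      uint8_t  last_sect;     /* last sector in the frame to transfer */
  };

  struct blkif_request {
      uint8_t  operation;     /* BLKIF_OP_READ, BLKIF_OP_WRITE, ... */
      uint8_t  nr_segments;   /* <= BLKIF_MAX_SEGMENTS_PER_REQUEST */
      uint16_t handle;        /* blkif_vdev_t: which vdev this is for */
      uint64_t id;            /* guest cookie, echoed back in the response */
      uint64_t sector_number; /* blkif_sector_t: start sector on disk */
      struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
  };

So one request can map at most 11 * 4K = 44K of data, which is why a 64K
sequential read gets split across two ring slots, while the Linux block
layer's own default (BLK_MAX_SEGMENTS) is 128.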

So I am getting some different numbers. I tried a simple 4K sequential read
with the following fio job file:

[/dev/xvda1]
# all I/O issued as 4K blocks
bssplit=4K
# sequential reads
rw=read
# O_DIRECT, bypass the guest page cache
direct=1
size=4g
ioengine=libaio
# keep 64 I/Os in flight
iodepth=64
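
(That's just a plain fio job file, run as "fio <jobfile>" from inside the
guest.)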

And with your patch got:
  read : io=4096.0MB, bw=92606KB/s, iops=23151 , runt= 45292msec

without:
  read : io=4096.0MB, bw=145187KB/s, iops=36296 , runt= 28889msec

That is roughly a 36% drop in IOPS (23151 vs 36296) with the patch applied.


> 
> Read 64K seq      BW MB/s   Dom0 CPU%   DomU CPU%
>      W            250       27.1        10.6
>      W/O          250       62.6        31.1

Hadn't tried that yet.
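
I'll give the 64K sequential case a spin with essentially the same job file,
just with the block size bumped up, something like:

[/dev/xvda1]
bs=64k
rw=read
direct=1
size=4g
ioengine=libaio
iodepth=64

and see how the Dom0 CPU numbers compare.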

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

