
Re: [Xen-devel] IO speed limited by size of IO request (for RBD driver)



On 8/05/2013 6:33 PM, Roger Pau Monné wrote:
> On 08/05/13 10:20, Steven Haigh wrote:
>> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>>> I noticed you copied your results from "dd", but I didn't see any
>>> conclusions drawn from the experiment.
>>>
>>> Did I understand it wrong, or do you now have comparable performance
>>> on dom0 and domU when using DIRECT?

>> domU:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>
>> dom0:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>>
>>> I think that if the performance differs when NOT using DIRECT, the issue
>>> must be related to the way your guest is flushing the cache. This must be
>>> generating a workload that doesn't perform well on Xen's PV protocol.

>> Just wondering if there is any further input on this... While DIRECT
>> writes are as good as can be expected, NON-DIRECT writes in certain
>> cases (specifically with an mdadm RAID in the Dom0) suffer about a
>> 50% loss in throughput...
>>
>> The hard part is that this is the default mode of writing!

> As another test with indirect descriptors, could you change
> xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
> recompile the DomU kernel and see if that helps?

Ok, I'll get onto this...
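
(For reference, my reading of that change - assuming the indirect descriptor
patches are applied to the DomU tree, so that xen_blkif_max_segments actually
exists in drivers/block/xen-blkfront.c - is just bumping the default and
rebuilding, roughly:

# grep -n 'xen_blkif_max_segments' drivers/block/xen-blkfront.c
  (edit the "= 32" default that turns up there to "= 128")
# make -j4 && make modules_install && make install

then boot the DomU on the new kernel. Correct me if that's not what you meant.)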

One thing I thought I'd try: as the RAID6 is currently assembled in the Dom0 and then passed to the DomU as /dev/md2, I wondered what would happen if I passed all the member drives directly to the DomU and let the DomU take care of the RAID6 assembly itself...

So I changed the DomU disk config as follows:
disk = [ 'phy:/dev/vg_raid1/zeus.vm,xvda,w' , 'phy:/dev/sdc,xvdc,w' , 'phy:/dev/sdd,xvdd,w' , 'phy:/dev/sde,xvde,w' , 'phy:/dev/sdf,xvdf,w' ]

I then assembled the RAID6 on the DomU using mdadm:
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid6 xvdf[1] xvde[0] xvdd[5] xvdc[4]
3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
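
(For the record, assembling it in the DomU is just a plain mdadm assemble
against the passed-through members - something along these lines, with the
disks showing up as xvdc-xvdf and mdadm reading the array details out of the
on-disk superblocks:

# mdadm --assemble /dev/md127 /dev/xvdc /dev/xvdd /dev/xvde /dev/xvdf

or simply "mdadm --assemble --scan" to let it find the members itself.)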

# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 35.4581 s, 60.6 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.54    0.00   11.76    0.00    0.68   87.03

Device:  rrqm/s  wrqm/s    r/s      w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm   %util
sdf        0.00    0.00  16.89  2832.70   0.44  36.42     26.49     17.46   6.12   0.36  103.82
sdc        0.00    0.00  14.73  2876.49   0.39  36.36     26.03     19.57   6.77   0.38  108.50
sde        0.00    0.00  20.68  2692.70   0.50  36.40     27.85     17.97   6.62   0.40  109.07
sdd        0.00    0.00  11.76  2846.22   0.35  36.36     26.30     19.36   6.76   0.37  106.14

# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 53.4774 s, 40.2 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.49    0.00   14.64    0.00    0.62   84.26

Device:  rrqm/s  wrqm/s     r/s      w/s  rMB/s  wMB/s  avgrq-sz  avgqu-sz  await  svctm  %util
sdf        0.00    0.00  614.88  1382.90   5.08  21.85     27.61     10.12   5.07   0.39  77.70
sdc        0.00    0.00   16.73  2800.86   0.09  26.46     19.30     13.51   4.79   0.28  77.64
sde        0.00    0.00   25.95  2762.24   0.19  21.76     16.12      3.04   1.09   0.12  32.76
sdb        0.00    0.00    0.00     1.97   0.00   0.01      5.75      0.01   7.00   6.63   1.30
sdd        0.00    0.00    6.03  2831.61   0.02  26.62     19.23     14.11   5.01   0.28  80.58

Interesting that doing this destroys the direct write performance, yet it doesn't seem to affect the non-direct writes. (As a side note, this is using the stock EL6 kernel as the DomU and vanilla 3.8.10 as the Dom0.)

Will do the other research now...

--
Steven Haigh

Email: netwiz@xxxxxxxxx
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

