
Re: [Xen-devel] IO speed limited by size of IO request (for RBD driver)



On 08/05/13 10:20, Steven Haigh wrote:
> On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
>> I noticed you copied your results from "dd", but I didn't see any
>> conclusions drawn from the experiment.
>>
>> Did I misunderstand, or do you now have comparable performance on dom0
>> and domU when using DIRECT?
>>
>> domU:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s
>>
>> dom0:
>> # dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
>> 2048+0 records in
>> 2048+0 records out
>> 2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s
>>
>>
>> I think that if the performance only differs when NOT using DIRECT, the
>> issue must be related to the way your guest flushes its page cache. That
>> flushing must be generating a workload that doesn't perform well on Xen's
>> PV block protocol.
> 
> Just wondering if there is any further input on this... While DIRECT
> writes are as good as can be expected, NON-DIRECT writes in certain
> cases (specifically with an mdadm RAID in the Dom0) suffer roughly a
> 50% loss in throughput...
> 
> The hard part is that this is the default mode of writing!

As another test with indirect descriptors, could you change
xen_blkif_max_segments in xen-blkfront.c to 128 (it is 32 by default),
recompile the DomU kernel and see if that helps?

Thanks, Roger.
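
For reference, a minimal sketch of the change being suggested above,
assuming a xen-blkfront.c that already carries the indirect-descriptor
patches and defines the limit roughly as shown; the exact definition
(and whether a module parameter wrapper exists) may differ between trees.

    /*
     * Fragment of drivers/block/xen-blkfront.c (indirect descriptors
     * assumed); the file's usual includes are already in place.
     * Raise the default number of segments per indirect request from
     * 32 to 128 so blkfront can issue larger requests to blkback.
     */
    static unsigned int xen_blkif_max_segments = 128;  /* default is 32 */

    /*
     * Some versions of the patches also expose the limit as a module
     * parameter along these lines (this wrapper is an assumption); if
     * present, the value could be overridden at module load or boot
     * time instead of recompiling the DomU kernel.
     */
    module_param_named(max, xen_blkif_max_segments, uint, S_IRUGO);
    MODULE_PARM_DESC(max, "Maximum number of segments in indirect requests");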

