
Re: [Xen-devel] [PATCH RFC v2 0/5] Multi-queue support for xen-blkfront and xen-blkback



On 08/12/2015 01:32 AM, Jens Axboe wrote:
> On 08/11/2015 03:45 AM, Rafal Mielniczuk wrote:
>> On 11/08/15 07:08, Bob Liu wrote:
>>> On 08/10/2015 11:52 PM, Jens Axboe wrote:
>>>> On 08/10/2015 05:03 AM, Rafal Mielniczuk wrote:
...
>>>>> Hello,
>>>>>
>>>>> We reran the tests for sequential reads with identical settings, but
>>>>> with Bob Liu's multiqueue patches reverted from the dom0 and guest
>>>>> kernels. The results we obtained were *better* than those we got with
>>>>> the multiqueue patches applied:
>>>>>
>>>>> fio_threads  io_depth  block_size  1-queue_iops  8-queue_iops  *no-mq-patches_iops*
>>>>>           8        32        512          158K          264K                  321K
>>>>>           8        32         1K          157K          260K                  328K
>>>>>           8        32         2K          157K          258K                  336K
>>>>>           8        32         4K          148K          257K                  308K
>>>>>           8        32         8K          124K          207K                  188K
>>>>>           8        32        16K           84K          105K                   82K
>>>>>           8        32        32K           50K           54K                   36K
>>>>>           8        32        64K           24K           27K                   16K
>>>>>           8        32       128K           11K           13K                   11K
>>>>>
>>>>> We noticed that requests are not merged by the guest when the
>>>>> multiqueue patches are applied, which results in a regression for
>>>>> small block sizes (the RealSSD P320h's optimal block size is around
>>>>> 32-64KB).
>>>>>
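(To confirm the missing merges directly, one option is to watch for merge
events with blktrace; in blkparse output 'M' marks a back merge and 'F' a
front merge. A sketch, run in the guest against the same device:

$ blktrace -d /dev/xvdb -o - | blkparse -i - | grep -E ' (M|F) '

Given the numbers above, this should show essentially no M/F events for
the small block sizes while the multiqueue patches are applied.)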
>>>>> We observed a similar regression with a Dell MZ-5EA1000-0D3 100 GB
>>>>> 2.5" internal SSD.
>>>>>
>>>>> As I understand it, the blk-mq layer bypasses the I/O scheduler,
>>>>> which also effectively disables merges. Could you explain why it is
>>>>> difficult to enable merging in the blk-mq layer? That could help
>>>>> close the performance gap we observed.
>>>>>
>>>>> Otherwise, the tests show that the multiqueue patches do not improve
>>>>> performance, at least for sequential read/write operations.
>>>> blk-mq still provides merging, there should be no difference there. Do
>>>> the xen patches set BLK_MQ_F_SHOULD_MERGE?
>>>>
>>> Yes.
>>> Is it possible that the xen-blkfront driver dequeues requests too fast
>>> once we have multiple hardware queues? New requests would then have no
>>> chance to merge with old requests that were already dequeued and
>>> issued.
>>>
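For reference, BLK_MQ_F_SHOULD_MERGE is a flag on the driver's tag set;
without it blk-mq never attempts bio merging at all. Below is a minimal
sketch of how a blk-mq driver of this era opts in -- my_queue_rq,
my_mq_ops and my_tag_set are placeholder names, this is not the actual
xen-blkfront code:

#include <linux/blk-mq.h>
#include <linux/err.h>
#include <linux/numa.h>
#include <linux/string.h>

/* Placeholder handler: a real driver would put bd->rq on its ring here. */
static int my_queue_rq(struct blk_mq_hw_ctx *hctx,
		       const struct blk_mq_queue_data *bd)
{
	return BLK_MQ_RQ_QUEUE_OK;
}

static struct blk_mq_ops my_mq_ops = {
	.queue_rq  = my_queue_rq,
	.map_queue = blk_mq_map_queue,	/* default CPU -> hw queue mapping */
};

static struct blk_mq_tag_set my_tag_set;

static struct request_queue *my_init_queue(void)
{
	struct request_queue *q;

	memset(&my_tag_set, 0, sizeof(my_tag_set));
	my_tag_set.ops          = &my_mq_ops;
	my_tag_set.nr_hw_queues = 1;		/* or the negotiated count */
	my_tag_set.queue_depth  = 32;
	my_tag_set.numa_node    = NUMA_NO_NODE;
	/* Opt in to merging on the software queues / plug list. */
	my_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;

	if (blk_mq_alloc_tag_set(&my_tag_set))
		return NULL;

	q = blk_mq_init_queue(&my_tag_set);
	if (IS_ERR(q)) {
		blk_mq_free_tag_set(&my_tag_set);
		return NULL;
	}
	return q;
}

Note that the flag only enables merging; whether merges actually happen
still depends on dispatch timing, which is what the question above is
getting at.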
>>
>> For some reason we don't see merges even when we set multiqueue to 1.
>> Below are some stats from the guest system when doing sequential 4KB reads:
>>
>> $ fio --name=test --ioengine=libaio --direct=1 --rw=read --numjobs=8 \
>>       --iodepth=32 --time_based=1 --runtime=300 --bs=4KB \
>>       --filename=/dev/xvdb
>>
>> $ iostat -xt 5 /dev/xvdb
>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>             0.50    0.00    2.73   85.14    2.00    9.63
>>
>> Device:         rrqm/s   wrqm/s        r/s    w/s     rkB/s   wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>> xvdb              0.00     0.00  156926.00   0.00 627704.00    0.00     8.00    30.06    0.19    0.19    0.00   0.01 100.48
>>
>> $ cat /sys/block/xvdb/queue/scheduler
>> none
>>
>> $ cat /sys/block/xvdb/queue/nomerges
>> 0
>>
>> Relevant bits from the xenstore configuration on the dom0:
>>
>> /local/domain/0/backend/vbd/2/51728/dev = "xvdb"
>> /local/domain/0/backend/vbd/2/51728/backend-kind = "vbd"
>> /local/domain/0/backend/vbd/2/51728/type = "phy"
>> /local/domain/0/backend/vbd/2/51728/multi-queue-max-queues = "1"
>>
>> /local/domain/2/device/vbd/51728/multi-queue-num-queues = "1"
>> /local/domain/2/device/vbd/51728/ring-ref = "9"
>> /local/domain/2/device/vbd/51728/event-channel = "60"
> 
> What happens if you add --iodepth-batch=16 to that fio command line?
> Both mq and non-mq rely on plugging to get batching in the use case
> above; otherwise IO is dispatched immediately, and O_DIRECT is
> immediate. I'd be more interested in seeing a test case with buffered
> IO on a file system on top of the xvdb device; if we're missing merging
> for that case, then that's a much bigger issue.
>
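For reference, the batched variant suggested above would look roughly
like this (a sketch; fio spells the option with underscores,
--iodepth_batch, in at least some versions):

$ fio --name=test --ioengine=libaio --direct=1 --rw=read --numjobs=8 \
      --iodepth=32 --iodepth_batch=16 --time_based=1 --runtime=300 \
      --bs=4KB --filename=/dev/xvdb

And the buffered, file-system-backed case could be approximated with
something like (assuming /dev/xvdb can be reformatted):

$ mkfs.ext4 /dev/xvdb && mount /dev/xvdb /mnt
$ fio --name=buffered --rw=read --bs=4KB --size=1G --numjobs=8 \
      --time_based=1 --runtime=300 --directory=/mnt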
 
I was using the null block driver as the backend for the xen blk-mq tests.
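For anyone reproducing this, the in-tree null_blk driver can be loaded in
blk-mq mode with something along these lines (queue_mode=2 selects the
multiqueue path, submit_queues the number of hardware queues):

$ modprobe null_blk queue_mode=2 submit_queues=4 hw_queue_depth=64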

There were no merges happening any more, even after this patch:
https://lkml.org/lkml/2015/7/13/185
(which just converted the xen block driver to use the blk-mq APIs)

Will try a file system soon.

-- 
Regards,
-Bob


 

