
Re: [Xen-devel] [PATCH] blkback: Fix block I/O latency issue



Hey Daniel,
 
Thanks for your comments.
 
>> The notification avoidance these macros implement does not promote
>> deliberate latency. This stuff is not dropping events or deferring guest
>> requests.
>>
>> It only avoids a gratuitous notification sent by the remote end in
>> cases where the local one didn't go to sleep yet, and therefore can
>> guarantee that it's going to process the message ASAP, right after
>> finishing what's still pending from the previous kick.
 
If the design goal was simply to avoid unnecessary interrupts without
delaying I/Os, then the blkback code has a bug.

If the design goal was to delay I/Os in order to reduce the interrupt
rate, then I am arguing that the design introduces far too much latency,
which affects many applications.

Either way, this issue needs to be addressed.


Perhaps a timeline example will help shed some light on this issue. Let's
say IO-1 and IO-2 are submitted with a gap of 200 usecs. Let's assume
interrupt latency is 10 usecs and the disk drive takes ~10,000 usecs to
process each I/O.

t1: IO-1 arrives at blkfront. RING_PUSH_REQUESTS_AND_CHECK_NOTIFY is
called which updates 'sring->req_prod' and uses 'sring->req_event' to
determine if an interrupt must be generated. In this case, blkfront
generates the interrupt.

t1+10 usecs: Interrupt is received by blkback. do_block_io_op is
eventually invoked which dispatches the I/O after incrementing
'common.req_cons'. Note that 'req_event' is NOT updated. There are no more
I/Os to be processed and hence blkback thread goes to sleep.

t1+200 usecs: IO-2 arrives at blkfront.
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY is called which updates
'sring->req_prod' and uses 'sring->req_event' to determine if an interrupt
must be generated. Unfortunately, 'req_event' was NOT updated in the
previous step, so blkfront decides not to send an interrupt. As a result,
blkback doesn't wake up immediately to process the I/O that blkfront has
added to the shared ring.

t1+10000 usecs: IO-1 completes. 'make_response' is invoked which signals
the completion of IO-1 to blkfront. Now it goes through the following code
and decides there is 'more_to_do'.

        if (blk_rings->common.rsp_prod_pvt == blk_rings->common.req_cons) {
                /*
                 * Tail check for pending requests. Allows frontend to avoid
                 * notifications if requests are already in flight (lower
                 * overheads and promotes batching).
                 */
                RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common,
                                              more_to_do);
        }

Hence the blkback thread is woken up, which then invokes 'do_block_io_op',
and 'do_block_io_op' dispatches IO-2.

t1+20000 usecs: IO-2 completes.


From the guest's point of view, IO-1 took ~10,000 usecs to complete, which
is fine. But IO-2 took ~19,800 usecs, which is obviously very bad.

Now, once the patch is applied:


t1+10 usecs : Interrupt is received by blkback. do_block_io_op is
eventually invoked which dispatches the I/O after incrementing
'common.req_cons'. RING_FINAL_CHECK_FOR_REQUESTS is invoked which updates
'req_event'. There are no more I/Os to be processed and hence blkback
thread goes to sleep.

t1+200 usecs: IO-2 arrives at blkfront.
RING_PUSH_REQUESTS_AND_CHECK_NOTIFY is called which updates
'sring->req_prod' and uses 'sring->req_event' to determine if an interrupt
must be generated. Since 'req_event' was updated in the previous step,
blkfront decides to generate an interrupt.

t1+210 usecs: Interrupt is received by blkback. do_block_io_op is
eventually invoked which dispatches IO-2 after incrementing
'common.req_cons'. RING_FINAL_CHECK_FOR_REQUESTS is invoked which updates
'req_event'. There are no more I/Os to be processed and hence blkback
thread goes to sleep.

t1+10000 usecs: IO-1 completes.

t1+10210 usecs: IO-2 completes.

Both I/Os take ~10,000 usecs to complete and the application lives happily
ever after.


Does that make sense?

>> Normally the slightest mistake
>> on the event processing front rather leads to deadlocks, and we
>> currently don't see any.


Yeah - I had the same thought initially. In this case, the fact that
make_response kicks off any pending I/Os turns potential deadlocks into
latency issues.


>> Iff you're right -- I guess the better fix would look different. If this
>> stuff is actually broken, maybe we can rather simplify things again, not
>> add more extra checks on top. :)

I'd love to hear better ways of fixing this issue. Any proposals?


Thanks,

- Pradeep Vincent

 





On 5/3/11 10:52 AM, "Daniel Stodden" <daniel.stodden@xxxxxxxxxx> wrote:

>On Mon, 2011-05-02 at 21:10 -0400, Vincent, Pradeep wrote:
>> Thanks Jan.
>> 
>> Re: avoid unnecessary notification
>> 
>> If this was a deliberate design choice then the duration of the delay is
>> at the mercy of the pending I/O latencies & I/O patterns and the delay is
>> simply too long in some cases. E.g. A write I/O stuck behind a read I/O
>> could see more than double the latency on a Xen guest compared to a
>> baremetal host. Avoiding notifications this way results in significant
>> latency degradation perceived by many applications.
>
>I'm trying to follow - let me know if I misread you - but I think you're
>misunderstanding this stuff.
>
>The notification avoidance these macros implement does not promote
>deliberate latency. This stuff is not dropping events or deferring guest
>requests.
>
>It only avoids a gratuitous notification sent by the remote end in
>cases where the local one didn't go to sleep yet, and therefore can
>guarantee that it's going to process the message ASAP, right after
>finishing what's still pending from the previous kick.
>
>It's only a mechanism to avoid excess interrupt signaling. Think about a
>situation where you ask the guy at the front door to take his thumb off
>the buzzer while you're already running down the hallway.
>
>R/W reordering is a matter dealt with by I/O schedulers.
>
>Any case of write I/O behind the read you describe is supposed to be
>queued back-to-back. It should never get stuck. A backend can obviously
>reserve the right to override guest submit order, but blkback doesn't do
>this, it's just pushing everything down the disk queue as soon as it
>sees it.
>
>So, that'd be the basic idea. Now, we've got that extra stuff in there
>mixing that up between request and response processing, and it's
>admittedly somewhat hard to read.
>
>If you found a bug in there, well, yoho. Normally the slightest mistake
>on the event processing front rather leads to deadlocks, and we
>currently don't see any.
>
>Iff you're right -- I guess the better fix would look different. If this
>stuff is actually broken, may we can rather simplify things again, not
>add more extra checks on top. :)
>
>Daniel
>
>> If this is about allowing I/O scheduler to coalesce more I/Os, then I bet
>> I/O scheduler's 'wait and coalesce' logic is a great substitute for the
>> delays introduced by blkback.
>> 
>> I totally agree IRQ coalescing or delay is useful for both blkback and
>> netback but we need a logic that doesn't impact I/O latencies
>> significantly. Also, I don't think netback has this type of notification
>> avoidance logic (at least in 2.6.18 code base).
>> 
>> 
>> Re: Other points
>> 
>> Good call. Changed the patch to include tabs.
>> 
>> I wasn't very sure about blk_ring_lock usage and I should have clarified
>> it before sending out the patch.
>> 
>> Assuming blk_ring_lock was meant to protect shared ring manipulations
>> within blkback, is there a reason 'blk_rings->common.req_cons'
>> manipulation in do_block_io_op is not protected ? The reasons for the
>> differences between locking logic in do_block_io_op and make_response
>> weren't terribly obvious although the failure mode for the race
>>condition
>> may very well be benign.
>> 
>> Anyway, I am attaching a patch with appropriate changes.
>> 
>> Jeremey, Can you apply this patch to pvops Dom-0
>> (http://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git). Should I
>> submit another patch for 2.6.18 Dom-0 ?
>> 
>> 
>> Signed-off-by: Pradeep Vincent <pradeepv@xxxxxxxxxx>
>> 
>> diff --git a/drivers/xen/blkback/blkback.c b/drivers/xen/blkback/blkback.c
>> --- a/drivers/xen/blkback/blkback.c
>> +++ b/drivers/xen/blkback/blkback.c
>> @@ -315,6 +315,7 @@ static int do_block_io_op(blkif_t *blkif)
>>   pending_req_t *pending_req;
>>   RING_IDX rc, rp;
>>   int more_to_do = 0;
>> + unsigned long     flags;
>>  
>>   rc = blk_rings->common.req_cons;
>>   rp = blk_rings->common.sring->req_prod;
>> @@ -383,6 +384,15 @@ static int do_block_io_op(blkif_t *blkif)
>>    cond_resched();
>>   }
>>  
>> + /* If blkback might go to sleep (i.e. more_to_do == 0) then we better
>> +    let blkfront know about it (by setting req_event appropriately) so
>> +    that blkfront will bother to wake us up (via interrupt) when it
>> +    submits a new I/O */
>> + if (!more_to_do) {
>> +  spin_lock_irqsave(&blkif->blk_ring_lock, flags);
>> +  RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common, more_to_do);
>> +  spin_unlock_irqrestore(&blkif->blk_ring_lock, flags);
>> + }
>>   return more_to_do;
>>  }
>>  
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> On 5/2/11 1:13 AM, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>> 
>> >>>> On 02.05.11 at 09:04, "Vincent, Pradeep" <pradeepv@xxxxxxxxxx> wrote:
>> >> In blkback driver, after I/O requests are submitted to Dom-0 block I/O
>> >> subsystem, blkback goes to 'sleep' effectively without letting blkfront
>> >> know about it (req_event isn't set appropriately). Hence blkfront
>> >> doesn't notify blkback when it submits a new I/O thus delaying the
>> >> 'dispatch' of the new I/O to Dom-0 block I/O subsystem. The new I/O is
>> >> dispatched as soon as one of the previous I/Os completes.
>> >> 
>> >> As a result of this issue, the block I/O latency performance is
>> >> degraded for some workloads on Xen guests using blkfront-blkback stack.
>> >> 
>> >> The following change addresses this issue:
>> >> 
>> >> 
>> >> Signed-off-by: Pradeep Vincent <pradeepv@xxxxxxxxxx>
>> >> 
>> >> diff --git a/drivers/xen/blkback/blkback.c b/drivers/xen/blkback/blkback.c
>> >> --- a/drivers/xen/blkback/blkback.c
>> >> +++ b/drivers/xen/blkback/blkback.c
>> >> @@ -383,6 +383,12 @@ static int do_block_io_op(blkif_t *blkif)
>> >>   cond_resched();
>> >>   }
>> >> 
>> >> + /* If blkback might go to sleep (i.e. more_to_do == 0) then we better
>> >> +    let blkfront know about it (by setting req_event appropriately)
>> >> +    so that blkfront will bother to wake us up (via interrupt) when
>> >> +    it submits a new I/O */
>> >> +        if (!more_to_do)
>> >> +                 RING_FINAL_CHECK_FOR_REQUESTS(&blk_rings->common, more_to_do);
>> >
>> >To me this contradicts the comment preceding the use of
>> >RING_FINAL_CHECK_FOR_REQUESTS() in make_response()
>> >(there it's supposedly used to avoid unnecessary notification,
>> >here you say it's used to force notification). Albeit I agree that
>> >the change looks consistent with the comments in io/ring.h.
>> >
>> >Even if correct, you're not holding blkif->blk_ring_lock here, and
>> >hence I think you'll need to explain how this is not a problem.
>> >
>> >From a formal perspective, you also want to correct usage of tabs,
>> >and (assuming this is intended for the 2.6.18 tree) you'd also need
>> >to indicate so for Keir to pick this up and apply it to that tree (and
>> >it might then also be a good idea to submit an equivalent patch for
>> >the pv-ops trees).
>> >
>> >Jan
>> >
>> >>   return more_to_do;
>> >>  }
>> >
>> >
>> >
>> 
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

