To: Jens Axboe <jens.axboe@xxxxxxxxxx>
Subject: Re: [Xen-devel] Kernel Panic in xen-blkfront.c:blkif_queue_request under 2.6.28
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 04 Feb 2009 08:50:00 -0800
Cc: Greg Harris <greg.harris@xxxxxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 04 Feb 2009 08:51:37 -0800
In-reply-to: <18316821.8169281233693449109.JavaMail.root@ouachita>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <18316821.8169281233693449109.JavaMail.root@ouachita>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (X11/20090105)
Greg Harris wrote:
After applying the patch we were able to reproduce the panic, and the additional debugging output is attached. The driver appears to retry the request several times before dying:

Writing inode tables: ------------[ cut here ]------------
WARNING: at drivers/block/xen-blkfront.c:244 do_blkif_request+0x301/0x440()
Modules linked in:
Pid: 0, comm: swapper Not tainted 2.6.28.2-metacarta-appliance-1 #2
Call Trace:
 <IRQ>  [<ffffffff80240b34>] warn_on_slowpath+0x64/0xa0
 [<ffffffff80232ae3>] enqueue_task+0x13/0x30
 [<ffffffff8059be54>] _spin_unlock_irqrestore+0x14/0x20
 [<ffffffff803c70fc>] get_free_entries+0xbc/0x2a0
 [<ffffffff804078b1>] do_blkif_request+0x301/0x440
 [<ffffffff8036fb35>] blk_invoke_request_fn+0xa5/0x110
 [<ffffffff80407a08>] kick_pending_request_queues+0x18/0x30
 [<ffffffff80407bb7>] blkif_interrupt+0x197/0x1e0
 [<ffffffff8026ccd9>] handle_IRQ_event+0x39/0x80
 [<ffffffff8026f096>] handle_level_irq+0x96/0x120
 [<ffffffff802140d5>] do_IRQ+0x85/0x110
 [<ffffffff803c83f5>] xen_evtchn_do_upcall+0xe5/0x130
 [<ffffffff80246217>] __do_softirq+0xe7/0x180
 [<ffffffff8059c65e>] xen_do_hypervisor_callback+0x1e/0x30
 <EOI>  [<ffffffff802093aa>] _stext+0x3aa/0x1000
 [<ffffffff802093aa>] _stext+0x3aa/0x1000
 [<ffffffff8020de8c>] xen_safe_halt+0xc/0x20
 [<ffffffff8020c1fa>] xen_idle+0x2a/0x50
 [<ffffffff80210041>] cpu_idle+0x41/0x70
---[ end trace 107c74ebf2b50a63 ]---
METACARTA: too many segments for ring (11): req->nr_phys_segments = 11
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 1536 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 2048 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 2560 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 3072 len 512
METACARTA: 0: bio page ffffe2000c291d00 pfn 379760 off 3584 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 0 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 512 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 1024 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 1536 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 2048 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 2560 len 512
METACARTA: 0: bio page ffffe2000c291d38 pfn 379761 off 3072 len 512

(Wonder why the index didn't increment.  Missing ++?)
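
If the patch walks the request with rq_for_each_segment and prints a hand-maintained counter, a dropped increment would explain the constant 0. Guessing at its shape (the patch itself isn't in this mail), something like:

/* Hypothetical reconstruction of the debug loop above -- the actual
 * METACARTA patch isn't shown here.  Assumes the usual 2.6.28
 * <linux/blkdev.h>/<linux/bio.h> iteration over a struct request *req. */
struct req_iterator iter;
struct bio_vec *bvec;
unsigned int i = 0;

rq_for_each_segment(bvec, req, iter) {
        printk(KERN_WARNING "METACARTA: %u: bio page %p pfn %lu off %u len %u\n",
               i, bvec->bv_page, page_to_pfn(bvec->bv_page),
               bvec->bv_offset, bvec->bv_len);
        /* i++;   <-- if this is missing, every line prints index 0 */
}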

Well, that's interesting. I count 12 bios there. Are we asking for the wrong thing, or is the block layer giving us too many bios?
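
To answer the first half myself: as far as I remember, xlvbd_init_blk_queue already advertises limits that match the ring, roughly (from memory of the 2.6.28 source, so double-check):

/* Sketch from memory of xlvbd_init_blk_queue() in 2.6.28 --
 * the limits blkfront asks the block layer to respect. */
blk_queue_hardsect_size(rq, sector_size);
blk_queue_max_sectors(rq, 512);

/* No segment may cross a page boundary or exceed a page... */
blk_queue_segment_boundary(rq, PAGE_SIZE - 1);
blk_queue_max_segment_size(rq, PAGE_SIZE);

/* ...and a ring request has only BLKIF_MAX_SEGMENTS_PER_REQUEST (11) slots. */
blk_queue_max_phys_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);
blk_queue_max_hw_segments(rq, BLKIF_MAX_SEGMENTS_PER_REQUEST);

If those limits are honoured, nr_phys_segments can't exceed 11, and indeed the warning reports 11. But nr_phys_segments counts merged segments, while blkif_queue_request consumes one ring slot per bio_vec, and there are 12 of those above. If I'm reading it right, that mismatch would explain the overflow.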

What's the distinction between a bio and a segment? And why are there so many pieces? Our main restriction is that a transfer can't cross a page boundary, but we could easily handle this request in two pieces, one for each page. Can we ask the block layer to do that merging?
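
To make the two-piece claim concrete, here's a throwaway userspace program (data lifted from the debug output above, 4k pages assumed) that merges adjacent pieces as long as they stay within one page, and ends up with exactly two segments:

#include <stdio.h>

/* Illustrative only, not driver code: coalesce contiguous 512-byte
 * pieces into per-page segments.  Data from the debug output above. */
struct piece { unsigned long pfn; unsigned int off, len; };

static const struct piece pieces[] = {
        {379760, 1536, 512}, {379760, 2048, 512}, {379760, 2560, 512},
        {379760, 3072, 512}, {379760, 3584, 512},
        {379761,    0, 512}, {379761,  512, 512}, {379761, 1024, 512},
        {379761, 1536, 512}, {379761, 2048, 512}, {379761, 2560, 512},
        {379761, 3072, 512},
};

int main(void)
{
        unsigned int i, nseg = 0;
        unsigned long pfn = 0;
        unsigned int off = 0, len = 0;

        for (i = 0; i < sizeof(pieces) / sizeof(pieces[0]); i++) {
                const struct piece *p = &pieces[i];
                /* Merge if contiguous and still within one 4k page. */
                if (nseg && p->pfn == pfn && p->off == off + len &&
                    off + len + p->len <= 4096) {
                        len += p->len;
                        continue;
                }
                if (nseg)
                        printf("segment %u: pfn %lu off %u len %u\n",
                               nseg - 1, pfn, off, len);
                pfn = p->pfn; off = p->off; len = p->len;
                nseg++;
        }
        printf("segment %u: pfn %lu off %u len %u\n", nseg - 1, pfn, off, len);
        printf("total: %u segments\n", nseg);   /* prints 2 */
        return 0;
}

If memory serves, setting the segment boundary as above and then building the ring entries from blk_rq_map_sg() output, rather than from the raw bio_vecs, would get us exactly that merging from the block layer for free.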

Thanks,
   J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel