
[Xen-devel] Re: blktap lockdep hiccup



On Mon, 2010-09-06 at 21:39 -0400, Jeremy Fitzhardinge wrote:
> On 09/03/2010 09:08 AM, Daniel Stodden wrote:
> > On Thu, 2010-09-02 at 18:46 -0400, Jeremy Fitzhardinge wrote:
> >> On 08/22/2010 11:54 PM, Daniel Stodden wrote:
> >>> Response processing doesn't really belong in hard irq context.
> >>>
> >>> Another potential problem this avoids: switching interrupt CPU
> >>> affinity in Xen domains can presently lead to event loss if
> >>> RING_FINAL_CHECK is run from hard irq context.
> >> I just got this warning from a 32-bit pv domain.  I think it may relate
> >> to this change.  The warning is
> > We clearly spin_lock_irqsave all through the blkif_do_interrupt frame.
> >
> > It follows that something underneath must have unconditionally
> > re-enabled them (?)
> >
> > Either: Can you add a bunch of similar WARN_ONs along that path?
> >
> > Or: This lock is quite coarse-grained. It only matters for queue
> > access, and we know irqs end up enabled there, so there is no need
> > to save flags. In fact we only need to spin_lock_irq around the
> > __blk_end_ calls and kick_pending_.
> >
> > But I don't immediately see what's to blame, so I'd be curious.
> 
> I haven't got around to investigating this in more detail yet, but
> there's also this long-standing lockdep hiccup in blktap:

Ack. Let's fix that sometime this week and see if we can clean up the
spin-locking problem too.
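
The WARN_ONs I have in mind are nothing fancy: something like the line
below, dropped at suspect points along the blkif_do_interrupt path.
Placement is the whole exercise; the first one to fire brackets the
spot where interrupts got re-enabled behind our back:

/* Illustrative only: scatter along the response path. */
WARN_ON_ONCE(!irqs_disabled());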

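And the deferred completion path from the original patch, combined with
the narrower locking, would look roughly like the sketch below. This is
a sketch only, assuming 2.6.32-era blkfront structures:
blkif_response_work and resp_tasklet are made-up names, error handling
and the shadow bookkeeping are elided, and spin_lock_irq (no saved
flags) wraps only the block-layer calls:

/* Sketch, not the actual patch: ring responses consumed in a tasklet
 * rather than in the hard irq handler. */
static void blkif_response_work(unsigned long data)
{
	struct blkfront_info *info = (struct blkfront_info *)data;
	RING_IDX i, rp;
	int more_to_do;

	do {
		rp = info->ring.sring->rsp_prod;
		rmb();	/* see the responses up to rp before reading them */

		for (i = info->ring.rsp_cons; i != rp; i++) {
			struct blkif_response *bret =
				RING_GET_RESPONSE(&info->ring, i);
			struct request *req = (struct request *)
				info->shadow[bret->id].request;

			/* Lock only around the block-layer call. */
			spin_lock_irq(&blkif_io_lock);
			__blk_end_request_all(req,
				bret->status == BLKIF_RSP_OKAY ? 0 : -EIO);
			spin_unlock_irq(&blkif_io_lock);
		}
		info->ring.rsp_cons = i;

		/* Re-arms rsp_event; running this outside hard irq
		 * context is what closes the event-loss window the
		 * original patch description mentions. */
		RING_FINAL_CHECK_FOR_RESPONSES(&info->ring, more_to_do);
	} while (more_to_do);

	spin_lock_irq(&blkif_io_lock);
	kick_pending_request_queues(info);
	spin_unlock_irq(&blkif_io_lock);
}

static irqreturn_t blkif_interrupt(int irq, void *dev_id)
{
	struct blkfront_info *info = dev_id;

	/* Hard irq half just defers; resp_tasklet would be set up with
	 * tasklet_init() in blkfront_probe(). */
	tasklet_schedule(&info->resp_tasklet);
	return IRQ_HANDLED;
}
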
Daniel

> Starting auto Xen domains: lurch  alloc irq_desc for 1235 on node 0
>   alloc kstat_irqs on node 0
> block tda: sector-size: 512 capacity: 614400
> INFO: trying to register non-static key.
> the code is fine but needs lockdep annotation.
> turning off the locking correctness validator.
> Pid: 4266, comm: tapdisk2 Not tainted 2.6.32.21 #146
> Call Trace:
>  [<ffffffff8107f0a4>] __lock_acquire+0x1df/0x16e5
>  [<ffffffff8100f955>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff810f0359>] ? apply_to_page_range+0x295/0x37d
>  [<ffffffff81080677>] lock_acquire+0xcd/0xf1
>  [<ffffffff810f0359>] ? apply_to_page_range+0x295/0x37d
>  [<ffffffff810f0259>] ? apply_to_page_range+0x195/0x37d
>  [<ffffffff81506f7d>] _spin_lock+0x31/0x40
>  [<ffffffff810f0359>] ? apply_to_page_range+0x295/0x37d
>  [<ffffffff810f0359>] apply_to_page_range+0x295/0x37d
>  [<ffffffff812ab37c>] ? blktap_map_uaddr_fn+0x0/0x55
>  [<ffffffff8100d0cf>] ? xen_make_pte+0x8a/0x8e
>  [<ffffffff812ac34e>] blktap_device_process_request+0x43d/0x954
>  [<ffffffff8100f955>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff8100f955>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff8107d687>] ? mark_held_locks+0x52/0x70
>  [<ffffffff81506ddb>] ? _spin_unlock_irq+0x30/0x3c
>  [<ffffffff8107d949>] ? trace_hardirqs_on_caller+0x125/0x150
>  [<ffffffff812acba6>] blktap_device_run_queue+0x1c5/0x28f
>  [<ffffffff812a0234>] ? unbind_from_irq+0x18/0x198
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff812ab14d>] blktap_ring_poll+0x7c/0xc7
>  [<ffffffff81124e9b>] do_select+0x387/0x584
>  [<ffffffff81124b14>] ? do_select+0x0/0x584
>  [<ffffffff811255de>] ? __pollwait+0x0/0xcc
>  [<ffffffff811256aa>] ? pollwake+0x0/0x56
>  [<ffffffff811256aa>] ? pollwake+0x0/0x56
>  [<ffffffff811256aa>] ? pollwake+0x0/0x56
>  [<ffffffff811256aa>] ? pollwake+0x0/0x56
>  [<ffffffff8108059b>] ? __lock_acquire+0x16d6/0x16e5
>  [<ffffffff8100f955>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff8100f955>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff8100f955>] ? xen_force_evtchn_callback+0xd/0xf
>  [<ffffffff811252a4>] core_sys_select+0x20c/0x2da
>  [<ffffffff811250d6>] ? core_sys_select+0x3e/0x2da
>  [<ffffffff81010082>] ? check_events+0x12/0x20
>  [<ffffffff8101006f>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff81108661>] ? kmem_cache_free+0x18e/0x1c8
>  [<ffffffff8141e912>] ? sock_destroy_inode+0x19/0x1b
>  [<ffffffff811299bd>] ? destroy_inode+0x2f/0x44
>  [<ffffffff8102ef22>] ? pvclock_clocksource_read+0x4b/0xa2
>  [<ffffffff8100fe8b>] ? xen_clocksource_read+0x21/0x23
>  [<ffffffff81010003>] ? xen_clocksource_get_cycles+0x9/0x16
>  [<ffffffff81075700>] ? ktime_get_ts+0xb2/0xbf
>  [<ffffffff811255b6>] sys_select+0x96/0xbe
>  [<ffffffff81013d32>] system_call_fastpath+0x16/0x1b
> block tdb: sector-size: 512 capacity: 20971520
> block tdc: sector-size: 512 capacity: 146800640
> block tdd: sector-size: 512 capacity: 188743680
> 
>       J
> 
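
A note on the splat itself: lockdep prints "trying to register
non-static key" when a spinlock is taken without ever having gone
through spin_lock_init() or DEFINE_SPINLOCK(), typically because the
structure holding it was only kzalloc()'d, so lockdep has no class to
file the lock under. Wherever blktap's offender turns out to sit (the
trace points into the apply_to_page_range() path), the generic fix
pattern is the annotation below; struct foo is hypothetical:

#include <linux/slab.h>
#include <linux/spinlock.h>

struct foo {			/* hypothetical container */
	spinlock_t lock;
	/* ... */
};

static struct foo *foo_alloc(void)
{
	struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);

	if (!f)
		return NULL;
	/* kzalloc() zeroes the lock but registers no lockdep class;
	 * spin_lock_init() does, which is the "annotation" the
	 * warning asks for. */
	spin_lock_init(&f->lock);
	return f;
}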



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

