WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] Interesting lockdep message coming out of blktap

To: Daniel Stodden <daniel.stodden@xxxxxxxxxx>
Subject: [Xen-devel] Interesting lockdep message coming out of blktap
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Mon, 29 Mar 2010 13:11:35 -0700
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 29 Mar 2010 13:12:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.8) Gecko/20100301 Fedora/3.0.3-1.fc12 Lightning/1.0b2pre Thunderbird/3.0.3
I'm getting this:

blktap_validate_params: aio:/dev/vg_lilith-raid/xen-f13-64: capacity: 20971520, 
sector-size: 512
blktap_validate_params: aio:/dev/vg_lilith-raid/xen-f13-64: capacity: 20971520, 
sector-size: 512
blktap_device_create: minor 0 sectors 20971520 sector-size 512
blktap_device_create: creation of 253:0: 0
INFO: trying to register non-static key.
the code is fine but needs lockdep annotation.
turning off the locking correctness validator.
Pid: 4042, comm: blkid Not tainted 2.6.32 #75
Call Trace:
 [<ffffffff8107711b>] __lock_acquire+0x16d0/0x1767
 [<ffffffff8100f465>] ? xen_force_evtchn_callback+0xd/0xf
 [<ffffffff8100fd52>] ? check_events+0x12/0x20
 [<ffffffff810da9af>] ? apply_to_page_range+0x2ba/0x3c8
 [<ffffffff810772a4>] lock_acquire+0xf2/0x116
 [<ffffffff810da9af>] ? apply_to_page_range+0x2ba/0x3c8
 [<ffffffff810d02af>] ? ftrace_format_kmalloc+0x63/0xdd
 [<ffffffff814e9d47>] _spin_lock+0x36/0x45
 [<ffffffff810da9af>] ? apply_to_page_range+0x2ba/0x3c8
 [<ffffffff810da9af>] apply_to_page_range+0x2ba/0x3c8
 [<ffffffff81288d7c>] ? blktap_map_uaddr_fn+0x0/0x50
 [<ffffffff81289963>] blktap_device_process_request+0x457/0x989
 [<ffffffff810c5305>] ? get_page_from_freelist+0x49b/0x804
 [<ffffffff8100fd3f>] ? xen_restore_fl_direct_end+0x0/0x1
 [<ffffffff8107f323>] ? __module_text_address+0xd/0x53
 [<ffffffff81074d9d>] ? trace_hardirqs_on_caller+0x111/0x135
 [<ffffffff814e9b38>] ? _spin_unlock_irq+0x3c/0x5a
 [<ffffffff814e9540>] ? __down_read+0x38/0xad
 [<ffffffff812802d0>] ? evtchn_interrupt+0xaa/0x112
 [<ffffffff8128a0de>] blktap_device_do_request+0x1dc/0x298
 [<ffffffff814e9bac>] ? _spin_unlock_irqrestore+0x56/0x74
 [<ffffffff8105848b>] ? del_timer+0xd7/0xe5
 [<ffffffff810bf104>] ? sync_page_killable+0x0/0x30
 [<ffffffff81202143>] __generic_unplug_device+0x30/0x35
 [<ffffffff81202171>] generic_unplug_device+0x29/0x3a
 [<ffffffff811fb5dc>] blk_unplug+0x71/0x76
 [<ffffffff811fb5ee>] blk_backing_dev_unplug+0xd/0xf
 [<ffffffff8111a1ad>] block_sync_page+0x42/0x44
 [<ffffffff810bf0fb>] sync_page+0x3f/0x48
 [<ffffffff810bf10d>] sync_page_killable+0x9/0x30
 [<ffffffff814e7a2f>] __wait_on_bit_lock+0x41/0x8a
 [<ffffffff810bf040>] __lock_page_killable+0x61/0x68
 [<ffffffff8106486b>] ? wake_bit_function+0x0/0x2e
 [<ffffffff8103e0af>] ? __might_sleep+0x3d/0x127
 [<ffffffff810c0b1f>] generic_file_aio_read+0x3db/0x594
 [<ffffffff810763f0>] ? __lock_acquire+0x9a5/0x1767
 [<ffffffff8100fd52>] ? check_events+0x12/0x20
 [<ffffffff810f8e16>] do_sync_read+0xe3/0x120
 [<ffffffff81064837>] ? autoremove_wake_function+0x0/0x34
 [<ffffffff811d8da4>] ? selinux_file_permission+0x5d/0x10f
 [<ffffffff811d0d7c>] ? security_file_permission+0x11/0x13
 [<ffffffff810f997a>] vfs_read+0xaa/0x16f
 [<ffffffff81074d9d>] ? trace_hardirqs_on_caller+0x111/0x135
 [<ffffffff810f9af8>] sys_read+0x45/0x6c
 [<ffffffff81013b82>] system_call_fastpath+0x16/0x1b


The lock in question appears to be the pte spinlock, taken in apply_to_page_range() at:

0xffffffff810da9af is in apply_to_page_range 
(/home/jeremy/git/linux/mm/memory.c:1855).
1850            spinlock_t *uninitialized_var(ptl);
1851    
1852            pte = (mm == &init_mm) ?
1853                    pte_alloc_kernel(pmd, addr) :
1854                    pte_alloc_map_lock(mm, pmd, addr, &ptl);
1855            if (!pte)
1856                    return -ENOMEM;
1857    
1858            BUG_ON(pmd_huge(*pmd));
1859    

(I'm pretty sure it's really line 1854, the usermode mm case.)

I have split PTE locks enabled, so this is a per-page pte lock rather than the global mm one. It seems highly unlikely that these locks are going uninitialized in general, or every pte lock would trigger this message.
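(For context, "trying to register non-static key" is lockdep's way of saying a spinlock embedded in dynamically allocated memory was locked without ever passing through spin_lock_init(), which is what registers its lockdep class key. An illustrative kernel-style sketch of that pattern, not taken from the blktap source:

```c
/* Hypothetical example, not blktap code: lockdep prints
 * "INFO: trying to register non-static key." when a lock in
 * heap memory is used without initialization. */
struct foo {
	spinlock_t lock;
};

struct foo *f = kzalloc(sizeof(*f), GFP_KERNEL);
/* Missing: spin_lock_init(&f->lock); -- zeroed memory is not
 * an initialized lock as far as lockdep is concerned. */
spin_lock(&f->lock);	/* triggers the non-static-key warning */
```

With split ptlocks the per-page lock is normally initialized when the pte page is allocated, which is why seeing this from apply_to_page_range() is surprising.)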

I wonder whether something else is going wrong here? I'm not really sure what the blktap code is trying to do at this point.

Any thoughts?

Thanks,
    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
