
[Xen-devel] Re: xenstored unsafe lock order detected, xlate_proc_name, evtchn_ioctl, port_user_lock



On 06/07/2010 05:58 AM, Pasi Kärkkäinen wrote:
> On Sun, Jun 06, 2010 at 09:54:01PM +0300, Pasi Kärkkäinen wrote:
>   
>> On Sun, Jun 06, 2010 at 10:41:04AM -0700, Jeremy Fitzhardinge wrote:
>>     
>>> On 06/06/2010 10:33 AM, Pasi Kärkkäinen wrote:
>>>       
>>>> Hello,
>>>>
>>>> I just tried the latest xen/stable-2.6.32.x kernel, i.e. 2.6.32.15, with
>>>> Xen 4.0.0, and I got this:
>>>>
>>>> http://pasik.reaktio.net/xen/pv_ops-dom0-debug/log-2.6.32.15-pvops-dom0-xen-stable-x86_64.txt
>>>>   
>>>>         
>>> Does this help?
>>>
>>>       
>> The patch had failing hunks, so I had to apply it manually to 2.6.32.15,
>> but it seems to fix that issue. No "unsafe lock order" messages anymore.
>>
>>     
> Hmm.. it seems I still get this:
>   

OK, thanks.  Let me look at it; that was a first-cut patch I did the
other day when I noticed the problem, but I hadn't gotten around to
testing it myself yet.

    J

>
> device vif1.0 entered promiscuous mode
> virbr0: topology change detected, propagating
> virbr0: port 1(vif1.0) entering forwarding state
>   alloc irq_desc for 1242 on node 0
>   alloc kstat_irqs on node 0
>   alloc irq_desc for 1241 on node 0
>   alloc kstat_irqs on node 0
>   alloc irq_desc for 1240 on node 0
>   alloc kstat_irqs on node 0
>   alloc irq_desc for 1239 on node 0
>   alloc kstat_irqs on node 0
> ------------[ cut here ]------------
> WARNING: at kernel/lockdep.c:2323 trace_hardirqs_on_caller+0xb7/0x135()
> Hardware name: X7SB4/E
> Modules linked in: xen_gntdev ipt_MASQUERADE iptable_nat nf_nat bridge stp llc
> sunrpc ip6t_REJECT nf_conntrack_ipv6 ip6table_filter ip6_tables ipv6 xen_evtchn
> xenfs e1000e iTCO_wdt i2c_i801 joydev iTCO_vendor_support serio_raw shpchp
> pcspkr floppy usb_storage video output aic79xx scsi_transport_spi radeon ttm
> drm_kms_helper drm i2c_algo_bit i2c_core [last unloaded: scsi_wait_scan]
> Pid: 23, comm: xenwatch Not tainted 2.6.32.15 #3
> Call Trace:
>  <IRQ>  [<ffffffff81059c11>] warn_slowpath_common+0x7c/0x94
>  [<ffffffff81478acb>] ? _spin_unlock_irq+0x30/0x3c
>  [<ffffffff81059c3d>] warn_slowpath_null+0x14/0x16
>  [<ffffffff8108b156>] trace_hardirqs_on_caller+0xb7/0x135
>  [<ffffffff8108b1e1>] trace_hardirqs_on+0xd/0xf
>  [<ffffffff81478acb>] _spin_unlock_irq+0x30/0x3c
>  [<ffffffff812c19b9>] add_to_net_schedule_list_tail+0x92/0x9b
>  [<ffffffff812c19fa>] netif_be_int+0x38/0xd0
>  [<ffffffff810b80f4>] handle_IRQ_event+0x53/0x119
>  [<ffffffff810ba096>] handle_level_irq+0x7d/0xdf
>  [<ffffffff812b72bd>] __xen_evtchn_do_upcall+0xe7/0x168
>  [<ffffffff812b7820>] xen_evtchn_do_upcall+0x37/0x4c
>  [<ffffffff81013f3e>] xen_do_hypervisor_callback+0x1e/0x30
>  <EOI>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
>  [<ffffffff8100940a>] ? hypercall_page+0x40a/0x100b
>  [<ffffffff812b9fe3>] ? notify_remote_via_evtchn+0x1e/0x44
>  [<ffffffff81477801>] ? __mutex_lock_common+0x36a/0x37b
>  [<ffffffff812ba966>] ? xs_talkv+0x5c/0x174
>  [<ffffffff812ba354>] ? xb_write+0x16e/0x18a
>  [<ffffffff812ba974>] ? xs_talkv+0x6a/0x174
>  [<ffffffff81242c46>] ? kasprintf+0x38/0x3a
>  [<ffffffff812babc3>] ? xs_single+0x3a/0x3c
>  [<ffffffff812bb271>] ? xenbus_read+0x42/0x5b
>  [<ffffffff812c416c>] ? frontend_changed+0x649/0x675
>  [<ffffffff812bc453>] ? xenbus_otherend_changed+0xe9/0x176
>  [<ffffffff8100f55f>] ? xen_restore_fl_direct_end+0x0/0x1
>  [<ffffffff8108d91e>] ? lock_release+0x198/0x1a5
>  [<ffffffff812bca7e>] ? frontend_changed+0x10/0x12
>  [<ffffffff812ba6eb>] ? xenwatch_thread+0x111/0x14c
>  [<ffffffff81079d4a>] ? autoremove_wake_function+0x0/0x39
>  [<ffffffff812ba5da>] ? xenwatch_thread+0x0/0x14c
>  [<ffffffff81079a78>] ? kthread+0x7f/0x87
>  [<ffffffff81013dea>] ? child_rip+0xa/0x20
>  [<ffffffff81013750>] ? restore_args+0x0/0x30
>  [<ffffffff81013de0>] ? child_rip+0x0/0x20
> ---[ end trace c5022d288d3812ac ]---
> blkback: ring-ref 770, event-channel 9, protocol 2 (x86_32-abi)
>   alloc irq_desc for 1238 on node 0
>   alloc kstat_irqs on node 0
> vif1.0: no IPv6 routers present
>
>
>
> -- Pasi
>
>
>
>
>   
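For reference, the lockdep warning quoted above is a separate issue from the
evtchn one the patch addresses: trace_hardirqs_on_caller() fires because
add_to_net_schedule_list_tail(), called from the netback interrupt handler
netif_be_int(), drops its lock with spin_unlock_irq(), which unconditionally
re-enables interrupts while still in hardirq context.  A minimal sketch of the
usual pattern that avoids this, using hypothetical lock and list names rather
than the real netback ones:

    #include <linux/spinlock.h>
    #include <linux/list.h>

    static DEFINE_SPINLOCK(sched_list_lock);   /* hypothetical name */
    static LIST_HEAD(sched_list);              /* hypothetical name */

    static void add_to_sched_list_tail(struct list_head *entry)
    {
            unsigned long flags;

            /* irqsave/irqrestore preserve the caller's interrupt state,
             * so this is safe both in process and in hardirq context. */
            spin_lock_irqsave(&sched_list_lock, flags);
            list_add_tail(entry, &sched_list);
            spin_unlock_irqrestore(&sched_list_lock, flags);

            /* spin_unlock_irq() here would force interrupts back on even
             * when running inside an interrupt handler, which is exactly
             * what lockdep warns about in the trace above. */
    }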
>>     
>>> From 3f5e554f669098c84c82ce75e7577f7e0f3fccde Mon Sep 17 00:00:00 2001
>>> From: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
>>> Date: Fri, 28 May 2010 15:28:27 -0700
>>> Subject: [PATCH] xen/evtchn: don't do unbind_from_irqhandler under spinlock
>>>
>>> unbind_from_irqhandler can end up doing /proc operations, which can't
>>> happen under a spinlock.  So before removing the IRQ handler,
>>> disable the irq under the port_user lock (masking the underlying event
>>> channel and making sure the irq handler isn't running concurrently and
>>> won't start running), then remove the handler without the lock.
>>>
>>> Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
>>>
>>> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
>>> index f79ac5c..6a3a129 100644
>>> --- a/drivers/xen/evtchn.c
>>> +++ b/drivers/xen/evtchn.c
>>> @@ -375,10 +375,12 @@ static long evtchn_ioctl(struct file *file,
>>>                     break;
>>>             }
>>>  
>>> -           evtchn_unbind_from_user(u, unbind.port);
>>> +           disable_irq(irq_from_evtchn(unbind.port));
>>>  
>>>             spin_unlock_irq(&port_user_lock);
>>>  
>>> +           evtchn_unbind_from_user(u, unbind.port);
>>> +
>>>             rc = 0;
>>>             break;
>>>     }
>>> @@ -484,11 +486,18 @@ static int evtchn_release(struct inode *inode, struct file *filp)
>>>             if (get_port_user(i) != u)
>>>                     continue;
>>>  
>>> -           evtchn_unbind_from_user(get_port_user(i), i);
>>> +           disable_irq(irq_from_evtchn(i));
>>>     }
>>>  
>>>     spin_unlock_irq(&port_user_lock);
>>>  
>>> +   for (i = 0; i < NR_EVENT_CHANNELS; i++) {
>>> +           if (get_port_user(i) != u)
>>> +                   continue;
>>> +
>>> +           evtchn_unbind_from_user(get_port_user(i), i);
>>> +   }
>>> +
>>>     kfree(u->name);
>>>     kfree(u);
>>>  
>>>
>>>
>>>       
>   
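To make the shape of the fix easier to follow, here is a rough reconstruction
of the EVTCHN_UNBIND ioctl path with the patch applied, pieced together from
the hunks and their context above; the elided parts and the exact surrounding
code are not shown in the diff, so treat this as a sketch rather than the
actual tree contents:

    case IOCTL_EVTCHN_UNBIND: {
            struct ioctl_evtchn_unbind unbind;

            /* ... copy_from_user() and port validation elided ... */

            spin_lock_irq(&port_user_lock);

            /* With port_user_lock held: mask the underlying event channel
             * and make sure the handler is not running and cannot start. */
            disable_irq(irq_from_evtchn(unbind.port));

            spin_unlock_irq(&port_user_lock);

            /* unbind_from_irqhandler() can end up doing /proc operations,
             * so the actual teardown runs without the spinlock held. */
            evtchn_unbind_from_user(u, unbind.port);

            rc = 0;
            break;
    }

The evtchn_release() hunk follows the same pattern: one pass under the lock
that only calls disable_irq() on each port owned by the file, then a second
pass outside the lock that does the unbinding.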


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

