[Xen-devel] Re: One issue of pvops dom0's spinlock code

To: "Yang, Xiaowei" <xiaowei.yang@xxxxxxxxx>
Subject: [Xen-devel] Re: One issue of pvops dom0's spinlock code
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Tue, 08 Sep 2009 16:35:04 -0700
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 08 Sep 2009 16:35:31 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4AA4BAFE.50906@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4AA4BAFE.50906@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090814 Fedora/3.0-2.6.b3.fc11 Lightning/1.0pre Thunderbird/3.0b3
On 09/07/09 00:49, Yang, Xiaowei wrote:
> While testing pvops dom0, we sometimes saw vCPUs hang due to a
> deadlock. After checking dom0's stack (see below) and the code, we
> found it is caused by this commit: 1e696f638 (xen: allow interrupts to
> be enabled while doing a blocking spin). If we don't enable IRQs inside
> the spinlock slow path, the issue goes away.
>
> Jeremy,
> Can you take a look at this? :)

It seemed like a good idea at the time...

There's an inherent fragility when the lock is nested (i.e., the
interrupt handler takes the same spinlock that the outer code is
waiting on): the scheme relies on the inner lock leaving the wakeup
event pending so that the outer lock doesn't block-spin indefinitely.
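
(To make that pattern concrete, it's something like the sketch below;
the lock and handler names are invented for illustration, they are not
taken from your trace.)

#include <linux/spinlock.h>
#include <linux/interrupt.h>

/* One lock shared between interrupt and process context (hypothetical). */
static DEFINE_SPINLOCK(demo_lock);

/* Inner (nested) user: runs in interrupt context and takes the same lock
 * that the interrupted code may already be waiting on in
 * xen_spin_lock_slow(). */
static irqreturn_t demo_irq_handler(int irq, void *dev_id)
{
        spin_lock(&demo_lock);
        /* ... */
        spin_unlock(&demo_lock);
        return IRQ_HANDLED;
}

/* Outer user: takes the lock from process context with interrupts saved;
 * with the commit in question, the slow path re-enables interrupts while
 * it waits, which is what lets the handler above nest inside it. */
static void demo_process_context(void)
{
        unsigned long flags;

        spin_lock_irqsave(&demo_lock, flags);
        /* ... */
        spin_unlock_irqrestore(&demo_lock, flags);
}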

But that doesn't appear to be what's happening in your case; the inner
lock is spinning indefinitely...

One possibility is that this is a bug in the generic kernel; the
standard ticket-lock implementation doesn't enable interrupts while
spinning, so perhaps we're provoking a bug in doing so.  But I can't see
how enabling interrupts while spinning before taking the lock is
behaviourally any different from taking an uncontended lock with
interrupts enabled.

BTW, I assume this is after applying your barrier->wmb patch?

Hm, we can end up actually holding the lock with interrupts enabled,
which isn't going to be good.  Does this help:

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 5601506..2f91e56 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -187,7 +187,6 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
        struct xen_spinlock *prev;
        int irq = __get_cpu_var(lock_kicker_irq);
        int ret;
-       unsigned long flags;
        u64 start;
 
        /* If kicker interrupts not initialized yet, just spin */
@@ -199,16 +198,12 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
        /* announce we're spinning */
        prev = spinning_lock(xl);
 
-       flags = __raw_local_save_flags();
-       if (irq_enable) {
-               ADD_STATS(taken_slow_irqenable, 1);
-               raw_local_irq_enable();
-       }
-
        ADD_STATS(taken_slow, 1);
        ADD_STATS(taken_slow_nested, prev != NULL);
 
        do {
+               unsigned long flags;
+
                /* clear pending */
                xen_clear_irq_pending(irq);
 
@@ -228,6 +223,12 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
                        goto out;
                }
 
+               flags = __raw_local_save_flags();
+               if (irq_enable) {
+                       ADD_STATS(taken_slow_irqenable, 1);
+                       raw_local_irq_enable();
+               }
+
                /*
                 * Block until irq becomes pending.  If we're
                 * interrupted at this point (after the trylock but
@@ -238,13 +239,15 @@ static noinline int xen_spin_lock_slow(struct raw_spinlock *lock, bool irq_enabl
                 * pending.
                 */
                xen_poll_irq(irq);
+
+               raw_local_irq_restore(flags);
+
                ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
        } while (!xen_test_irq_pending(irq)); /* check for spurious wakeups */
 
        kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
-       raw_local_irq_restore(flags);
        unspinning_lock(xl, prev);
        spin_time_accum_blocked(start);
 

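To spell out the net effect of the patch, here is the main loop
paraphrased rather than quoted (the stats counters and the
nested-pending handling are elided): interrupts are only enabled around
the blocking xen_poll_irq() itself, so the trylock, and therefore the
point where we actually take the lock, always runs with the caller's
interrupt state.

        do {
                unsigned long flags;

                xen_clear_irq_pending(irq);

                /* The lock is (re)tried with interrupts still in the
                 * caller's state, so we can no longer end up holding it
                 * with interrupts unexpectedly enabled. */
                if (xen_spin_trylock(lock)) {
                        /* ... nested-pending handling elided ... */
                        goto out;
                }

                flags = __raw_local_save_flags();
                if (irq_enable)
                        raw_local_irq_enable();

                /* Interrupts are enabled only for the duration of the
                 * blocking poll... */
                xen_poll_irq(irq);

                /* ...and restored before we loop back to the trylock. */
                raw_local_irq_restore(flags);
        } while (!xen_test_irq_pending(irq));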


    J

>
> Thanks,
> Xiaowei
>
> [ 2631.686041] Call Trace:
> [ 2631.689073]  <IRQ>  [<ffffffff811a8b5f>] ? xen_poll_irq+0x49/0x53
> [ 2631.695338]  [<ffffffff8100ff74>] xen_spin_lock_slow+0x13f/0x204
> [ 2631.703151]  [<ffffffff810100ef>] xen_spin_lock_flags+0xb6/0xe6
> [ 2631.709405]  [<ffffffff81050d68>] ? delayed_work_timer_fn+0x0/0x33
> [ 2631.715663]  [<ffffffff813283ba>] _spin_lock_irqsave+0x30/0x39
> [ 2631.723479]  [<ffffffff81050d42>] __queue_work+0x18/0x3e
> [ 2631.728171]  [<ffffffff81050d97>] delayed_work_timer_fn+0x2f/0x33
> [ 2631.735988]  [<ffffffff81049f41>] run_timer_softirq+0x160/0x1f1
> [ 2631.743804]  [<ffffffff811a7488>] ? unmask_evtchn+0x34/0xd6
> [ 2631.748493]  [<ffffffff81045f6f>] __do_softirq+0xa2/0x13d
> [ 2631.754756]  [<ffffffff81013e0c>] call_softirq+0x1c/0x30
> [ 2631.761007]  [<ffffffff81015950>] do_softirq+0x42/0x88
> [ 2631.767262]  [<ffffffff81045ecb>] irq_exit+0x3f/0x41
> [ 2631.771955]  [<ffffffff811a82af>] xen_evtchn_do_upcall+0x13e/0x15a
> [ 2631.779774]  [<ffffffff81013e5e>] xen_do_hypervisor_callback+0x1e/0x30
> [ 2631.787584]  <EOI>  [<ffffffff8100ff5d>] ? xen_spin_lock_slow+0x128/0x204
> [ 2631.795407]  [<ffffffff810100ef>] ? xen_spin_lock_flags+0xb6/0xe6
> [ 2631.801696]  [<ffffffff810862a4>] ? lru_add_drain_per_cpu+0x0/0xb
> [ 2631.809483]  [<ffffffff813283ba>] ? _spin_lock_irqsave+0x30/0x39
> [ 2631.815743]  [<ffffffff81328520>] ? _spin_unlock_irqrestore+0x27/0x2a
> [ 2631.823557]  [<ffffffff81053fb3>] ? finish_wait+0x3b/0x67
> [ 2631.828251]  [<ffffffff81050f7b>] ? worker_thread+0xb6/0x1f9
> [ 2631.836067]  [<ffffffff81053e8e>] ? autoremove_wake_function+0x0/0x38
> [ 2631.842325]  [<ffffffff81050ec5>] ? worker_thread+0x0/0x1f9
> [ 2631.848576]  [<ffffffff81053aeb>] ? kthread+0x8f/0x97
> [ 2631.854829]  [<ffffffff81013d0a>] ? child_rip+0xa/0x20
> [ 2631.861083]  [<ffffffff81012ee7>] ? int_ret_from_sys_call+0x7/0x1b
> [ 2631.867343]  [<ffffffff8101369d>] ? retint_restore_args+0x5/0x6
> [ 2631.875159]  [<ffffffff81013d00>] ? child_rip+0x0/0x20
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
