

Re: [Xen-devel] Re: [PATCH 09/14] xen/pvticketlock: Xen implementation for PV ticket locks

To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH 09/14] xen/pvticketlock: Xen implementation for PV ticket locks
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 17 Nov 2010 01:57:40 -0800
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxx>, Nick Piggin <npiggin@xxxxxxxxx>, Peter Zijlstra <peterz@xxxxxxxxxxxxx>, Srivatsa Vaddagiri <vatsa@xxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Avi Kivity <avi@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, xiyou.wangcong@xxxxxxxxx, Eric Dumazet <dada1@xxxxxxxxxxxxx>, Linux Virtualization <virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 17 Nov 2010 01:58:58 -0800
In-reply-to: <4CE397E7.2010107@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <cover.1289940821.git.jeremy.fitzhardinge@xxxxxxxxxx> <aa32da076143b8e13c23c1f589d7e6cbedb22907.1289940821.git.jeremy.fitzhardinge@xxxxxxxxxx> <4CE39C3C0200007800022AE2@xxxxxxxxxxxxxxxxxx> <4CE397E7.2010107@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20101027 Fedora/3.1.6-1.fc13 Lightning/1.0b3pre Thunderbird/3.1.6
On 11/17/2010 12:52 AM, Jeremy Fitzhardinge wrote:
> On 11/17/2010 12:11 AM, Jan Beulich wrote:
>>>>> On 16.11.10 at 22:08, Jeremy Fitzhardinge <jeremy@xxxxxxxx> wrote:
>>> +static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
>>>  {
>>> -   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
>>> -   struct xen_spinlock *prev;
>>>     int irq = __get_cpu_var(lock_kicker_irq);
>>> -   int ret;
>>> +   struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
>>> +   int cpu = smp_processor_id();
>>>     u64 start;
>>>     /* If kicker interrupts not initialized yet, just spin */
>>>     if (irq == -1)
>>> -           return 0;
>>> +           return;
>>>     start = spin_time_start();
>>> -   /* announce we're spinning */
>>> -   prev = spinning_lock(xl);
>>> +   w->want = want;
>>> +   w->lock = lock;
>>> +
>>> +   /* This uses set_bit, which is atomic and therefore a barrier */
>>> +   cpumask_set_cpu(cpu, &waiting_cpus);
>> Since you don't allow nesting, don't you need to disable
>> interrupts before you touch per-CPU state?
> Yes, I think you're right - interrupts need to be disabled for the bulk
> of this function.

Actually, on second thoughts, maybe it doesn't matter so much.  The main
issue is making sure that the interrupt will make the VCPU drop out of
xen_poll_irq() - if it happens before xen_poll_irq(), it should leave
the event pending, which will cause the poll to return immediately.  I
hope.  Certainly disabling interrupts for some of the function will make
it easier to analyze with respect to interrupt nesting.
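
Concretely, I'm thinking of something along these lines (an untested
sketch; the stat/timing hooks are omitted, and exactly where
xen_clear_irq_pending() has to sit relative to the irq-disabled region
is a guess):

    static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
    {
        int irq = __get_cpu_var(lock_kicker_irq);
        struct xen_lock_waiting *w = &__get_cpu_var(lock_waiting);
        int cpu = smp_processor_id();
        unsigned long flags;

        /* If kicker interrupts not initialized yet, just spin */
        if (irq == -1)
            return;

        /*
         * Don't let a nested interrupt see (or clobber) a
         * half-updated (lock,want) pair in our per-CPU slot.
         */
        local_irq_save(flags);

        w->want = want;
        w->lock = lock;
        cpumask_set_cpu(cpu, &waiting_cpus);

        /* clear any stale pending event before deciding to block */
        xen_clear_irq_pending(irq);

        /*
         * Re-enable interrupts before blocking.  If one fires here,
         * it leaves the event pending, so xen_poll_irq() returns
         * immediately and the slowpath just goes around again.
         */
        local_irq_restore(flags);

        /* block until the lock holder kicks us (or a spurious wakeup) */
        xen_poll_irq(irq);

        local_irq_save(flags);
        cpumask_clear_cpu(cpu, &waiting_cpus);
        w->lock = NULL;
        local_irq_restore(flags);
    }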

Another issue may be making sure the writes and reads of "w->want" and
"w->lock" are ordered properly, so that xen_unlock_kick() never sees an
inconsistent view of the (lock,want) tuple.  The risk is that
xen_unlock_kick() sees a stale, mismatched (lock,want) pairing and sends
the kick event to the wrong VCPU, leaving the deserving one hung.
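
One way to pin that down (again just a sketch; "next" is the ticket
being released, and xen_send_IPI_one()/XEN_SPIN_UNLOCK_VECTOR stand in
for however the kick actually gets delivered) is to treat a non-NULL
w->lock as the "this pair is valid" flag: the waiter publishes want
before lock, and the kicker reads them in the opposite order:

    /* waiter (xen_lock_spinning): only expose lock once want is valid */
    w->lock = NULL;
    smp_wmb();          /* retire the old pair before touching want */
    w->want = want;
    smp_wmb();          /* publish want before lock */
    w->lock = lock;

    /* kicker (xen_unlock_kick): read lock first, then want */
    for_each_cpu(cpu, &waiting_cpus) {
        const struct xen_lock_waiting *w = &per_cpu(lock_waiting, cpu);

        if (w->lock == lock) {
            smp_rmb();      /* pairs with the smp_wmb()s above */
            if (w->want == next) {
                xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
                break;
            }
        }
    }

The waiter would still need to re-check the ticket after publishing the
pair and before actually blocking, so a kick that races with the update
isn't lost - a skipped kick for a mid-update waiter is then harmless.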

