xen-devel

Re: [Xen-devel] Re: [PATCH] [RFC] Credit2 scheduler prototype

To: Dulloor <dulloor@xxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH] [RFC] Credit2 scheduler prototype
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Fri, 29 Jan 2010 00:56:55 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
Delivery-date: Thu, 28 Jan 2010 16:57:15 -0800
In-reply-to: <940bcfd21001281527j257e9389w8ff8cb8e311aecc9@xxxxxxxxxxxxxx>
References: <C773A726.64B7%keir.fraser@xxxxxxxxxxxxx> <4B4DF825.1090100@xxxxxxxxxxxxx> <940bcfd21001281527j257e9389w8ff8cb8e311aecc9@xxxxxxxxxxxxxx>
Since it's an assertion, I assume you ran it with debug=y?

I'm definitely changing some assumptions with this, so it's not a
surprise that some assertions trigger.

I'm working on a modified version based on the discussion we had here;
I'll post a patch (tested with debug=y) when I'm done.

-George

On Thu, Jan 28, 2010 at 11:27 PM, Dulloor <dulloor@xxxxxxxxx> wrote:
> George,
>
> With your patches and sched=credit2, Xen crashes on a failed assertion:
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) Assertion '_spin_is_locked(&(*({ unsigned long __ptr; __asm__ ("" : "=r"(*
> (XEN)
>
> Is this version supposed to work (or is it just some reference code)?
>
> thanks
> dulloor
>
>
> On Wed, Jan 13, 2010 at 11:43 AM, George Dunlap
> <george.dunlap@xxxxxxxxxxxxx> wrote:
>> Keir Fraser wrote:
>>>
>>> On 13/01/2010 16:05, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>>
>>>
>>>>
>>>> [NB that the current global lock will eventually be replaced with
>>>> per-runqueue locks.]
>>>>
>>>> In particular, one of the races without the first flag looks like this
>>>> (brackets indicate physical cpu):
>>>> [0] lock cpu0 schedule lock
>>>> [0] lock credit2 runqueue lock
>>>> [0] Take vX off runqueue; vX->processor == 1
>>>> [0] unlock credit2 runqueue lock
>>>> [1] vcpu_wake(vX) lock cpu1 schedule lock
>>>> [1] finds vX->running false, adds it to the runqueue
>>>> [1] unlock cpu1 schedule_lock
>>>>
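
For illustration, here is a rough, self-contained C sketch of that interleaving. The names and fields below (schedule_lock, runq_lock, on_runqueue, the cpu0_/cpu1_ helpers) are made up for the example and are not the actual credit2 code:

/* Two pcpus, one shared credit2 runqueue, per-cpu schedule locks.
 * (In a real program the spinlocks would need pthread_spin_init first.) */
#include <pthread.h>
#include <stdbool.h>

struct vcpu {
    int processor;       /* pcpu this vcpu last ran on (1 in the trace) */
    bool running;        /* currently running on some pcpu? */
    bool on_runqueue;
};

static pthread_spinlock_t schedule_lock[2];   /* per-cpu schedule locks */
static pthread_spinlock_t runq_lock;          /* single credit2 runqueue lock */

/* CPU 0: the scheduler decides to run vX, taking it off the runqueue.
 * vX->running has not been set yet by the time both locks are dropped. */
static void cpu0_schedule(struct vcpu *vX)
{
    pthread_spin_lock(&schedule_lock[0]);
    pthread_spin_lock(&runq_lock);
    vX->on_runqueue = false;                  /* take vX off the runqueue */
    pthread_spin_unlock(&runq_lock);
    pthread_spin_unlock(&schedule_lock[0]);
    /* ... context switch to vX happens later ... */
}

/* CPU 1: vcpu_wake(vX) only takes cpu1's schedule lock (vX->processor == 1),
 * sees vX->running == false, and re-inserts vX, so vX is now both about to
 * run on cpu0 and sitting on the runqueue where another pcpu can pick it up. */
static void cpu1_vcpu_wake(struct vcpu *vX)
{
    pthread_spin_lock(&schedule_lock[vX->processor]);
    if ( !vX->running && !vX->on_runqueue )
    {
        pthread_spin_lock(&runq_lock);
        vX->on_runqueue = true;               /* the race */
        pthread_spin_unlock(&runq_lock);
    }
    pthread_spin_unlock(&schedule_lock[vX->processor]);
}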
>>>
>>> Actually, hang on. Doesn't this issue, and the one that your second patch
>>> addresses, go away if we change the schedule_lock granularity to match
>>> runqueue granularity? That would seem pretty sensible, and could be
>>> implemented with a schedule_lock(cpu) scheduler hook, returning a
>>> spinlock_t*, and a some easy scheduler code changes.
>>>
>>> If we do that, do you then even need separate private per-runqueue locks?
>>> (Just an extra thought).
>>>
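
For illustration, a minimal sketch of the kind of per-runqueue lock hook being suggested here. The names and types (struct scheduler, csched2_runqueue, cpu_to_runqueue, csched2_schedule_lock) are illustrative only, not the actual Xen interface:

/* The scheduler tells generic code which lock protects a given cpu's
 * scheduling state; a scheduler with shared runqueues can hand back one
 * lock for every cpu on the same runqueue. */
typedef struct { volatile int raw; } spinlock_t;     /* stand-in type */

struct scheduler {
    /* ... other scheduler hooks ... */
    spinlock_t *(*schedule_lock)(unsigned int cpu);
};

struct csched2_runqueue {
    spinlock_t lock;
    /* ... per-runqueue state: vcpu list, load, etc. ... */
};

/* Hypothetical mapping from a pcpu to the runqueue it belongs to. */
extern struct csched2_runqueue *cpu_to_runqueue(unsigned int cpu);

static spinlock_t *csched2_schedule_lock(unsigned int cpu)
{
    return &cpu_to_runqueue(cpu)->lock;
}

Generic code (schedule(), vcpu_wake(), ...) would then lock whatever sched->schedule_lock(v->processor) returns rather than a fixed per-cpu lock, so the wake path and the schedule path for cpus sharing a runqueue always serialise on the same lock, and the interleaving above cannot happen.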
>>
>> Hmm.... can't see anything wrong with it.  It would make the whole locking
>> discipline thing a lot simpler.  It would, AFAICT, remove the need for
>> private per-runqueue locks, which make it a lot harder to avoid deadlock
>> without these sorts of strange tricks. :-)
>>
>> I'll think about it, and probably give it a spin to see how it works out.
>>
>> -George
>>
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel