[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Re: [PATCH] [RFC] Credit2 scheduler prototype


  • To: Dulloor <dulloor@xxxxxxxxx>
  • From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
  • Date: Fri, 29 Jan 2010 00:56:55 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 28 Jan 2010 16:57:15 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Since it's an assertion, I assume you ran it with debug=y?

I'm definitely changing some assumptions with this, so it's not a
surprise that some assertions trigger.

I'm working on a modified version based on the discussion we had here;
I'll post a patch (tested with debug=y) when I'm done.

-George

On Thu, Jan 28, 2010 at 11:27 PM, Dulloor <dulloor@xxxxxxxxx> wrote:
> George,
>
> With your patches and sched=credit2, xen crashes on a failed assertion:
> (XEN) ****************************************
> (XEN) Panic on CPU 1:
> (XEN) Assertion '_spin_is_locked(&(*({ unsigned long __ptr; __asm__ ("" : 
> "=r"(*
> (XEN)
>
> Is this version supposed to work (or is it just some reference code)?
>
> thanks
> dulloor
>
>
> On Wed, Jan 13, 2010 at 11:43 AM, George Dunlap
> <george.dunlap@xxxxxxxxxxxxx> wrote:
>> Keir Fraser wrote:
>>>
>>> On 13/01/2010 16:05, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>>
>>>
>>>>
>>>> [NB that the current global lock will eventually be replaced with
>>>> per-runqueue locks.]
>>>>
>>>> In particular, one of the races without the first flag looks like this
>>>> (brackets indicate physical cpu):
>>>> [0] lock cpu0 schedule lock
>>>> [0] lock credit2 runqueue lock
>>>> [0] Take vX off runqueue; vX->processor == 1
>>>> [0] unlock credit2 runqueue lock
>>>> [1] vcpu_wake(vX) lock cpu1 schedule lock
>>>> [1] finds vX->running false, adds it to the runqueue
>>>> [1] unlock cpu1 schedule_lock
>>>>
>>>
>>> Actually, hang on. Doesn't this issue, and the one that your second patch
>>> addresses, go away if we change the schedule_lock granularity to match
>>> runqueue granularity? That would seem pretty sensible, and could be
>>> implemented with a schedule_lock(cpu) scheduler hook, returning a
>>> spinlock_t*, and a some easy scheduler code changes.
>>>
>>> If we do that, do you then even need separate private per-runqueue locks?
>>> (Just an extra thought).
>>>
>>
>> Hmm.... can't see anything wrong with it.  It would make the whole locking
>> discipline thing a lot simpler.  It would, AFAICT, remove the need for
>> private per-runqueue locks, which make it a lot harder to avoid deadlock
>> without these sorts of strange tricks. :-)
>>
>> I'll think about it, and probably give it a spin to see how it works out.
>>
>> -George
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>>
>
>
