xen-devel

Re: [Xen-devel] Re: [PATCH] [RFC] Credit2 scheduler prototype

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH] [RFC] Credit2 scheduler prototype
From: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Date: Wed, 13 Jan 2010 16:43:17 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 13 Jan 2010 08:43:41 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C773A726.64B7%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C773A726.64B7%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.23 (X11/20090817)
Keir Fraser wrote:
> On 13/01/2010 16:05, "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx> wrote:
>
>> [NB that the current global lock will eventually be replaced with
>> per-runqueue locks.]
>>
>> In particular, one of the races without the first flag looks like this
>> (brackets indicate physical cpu):
>> [0] lock cpu0 schedule lock
>> [0] lock credit2 runqueue lock
>> [0] Take vX off runqueue; vX->processor == 1
>> [0] unlock credit2 runqueue lock
>> [1] vcpu_wake(vX) lock cpu1 schedule lock
>> [1] finds vX->running false, adds it to the runqueue
>> [1] unlock cpu1 schedule_lock
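(To make that window concrete, here is a minimal self-contained sketch of
the interleaving, with pthread mutexes standing in for Xen's spinlocks.
All the names are illustrative -- this is not the actual credit2 code.)

/* Sketch of the race above: cpu0 takes vX off the runqueue but has not
 * yet marked it running; cpu1, holding only its own schedule lock,
 * sees vX as not running and re-queues it. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t schedule_lock[2] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static pthread_mutex_t runqueue_lock = PTHREAD_MUTEX_INITIALIZER;

struct vcpu { int processor; bool is_running; bool on_runqueue; };
static struct vcpu vX = { .processor = 1, .on_runqueue = true };

static void *cpu0_schedule(void *arg)          /* scheduler path on "cpu0" */
{
    (void)arg;
    pthread_mutex_lock(&schedule_lock[0]);     /* [0] lock cpu0 schedule lock */
    pthread_mutex_lock(&runqueue_lock);        /* [0] lock runqueue lock */
    vX.on_runqueue = false;                    /* [0] take vX off runqueue */
    pthread_mutex_unlock(&runqueue_lock);      /* [0] unlock runqueue lock */
    /* --- race window: vX is neither queued nor marked running --- */
    vX.is_running = true;                      /* too late: cpu1 may already
                                                * have re-queued vX */
    pthread_mutex_unlock(&schedule_lock[0]);
    return NULL;
}

static void *cpu1_wake(void *arg)              /* vcpu_wake(vX) on "cpu1" */
{
    (void)arg;
    pthread_mutex_lock(&schedule_lock[1]);     /* [1] vX->processor == 1, so
                                                * only cpu1's lock is taken */
    if ( !vX.is_running && !vX.on_runqueue )   /* [1] finds vX not running */
    {
        pthread_mutex_lock(&runqueue_lock);
        vX.on_runqueue = true;                 /* [1] adds it to the runqueue */
        pthread_mutex_unlock(&runqueue_lock);
    }
    pthread_mutex_unlock(&schedule_lock[1]);
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, cpu0_schedule, NULL);
    pthread_create(&t1, NULL, cpu1_wake, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    if ( vX.is_running && vX.on_runqueue )
        printf("race hit: vX running on cpu0 AND still on the runqueue\n");
    return 0;
}

(Whether the bad interleaving actually fires on a given run is timing
dependent, but nothing in the locking prevents it.)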

> Actually, hang on. Doesn't this issue, and the one that your second patch
> addresses, go away if we change the schedule_lock granularity to match
> runqueue granularity? That would seem pretty sensible, and could be
> implemented with a schedule_lock(cpu) scheduler hook, returning a
> spinlock_t*, and some easy scheduler code changes.
>
> If we do that, do you then even need separate private per-runqueue locks?
> (Just an extra thought.)
Hmm... I can't see anything wrong with it. It would make the whole locking
discipline a lot simpler, and AFAICT it would remove the need for private
per-runqueue locks, which are what make it so hard to avoid deadlock
without these sorts of strange tricks. :-)
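Concretely, I'd imagine the hook looking something like the sketch below
(just a sketch under the assumption of one lock per runqueue; pthread
mutexes again stand in for Xen spinlocks, and all the names are
hypothetical, not the current interface):

/* schedule_lock(cpu) hook: the common scheduler code asks the scheduler
 * which spinlock protects a cpu's scheduling state, instead of
 * hard-coding one lock per cpu.  A credit2-style scheduler returns the
 * lock of the runqueue the cpu belongs to, so schedule_lock granularity
 * matches runqueue granularity. */
#include <pthread.h>

#define NR_CPUS 4
typedef pthread_mutex_t spinlock_t;          /* stand-in for Xen's type */

struct scheduler {
    spinlock_t *(*schedule_lock)(int cpu);   /* the proposed hook */
};

/* credit1-style: one schedule lock per cpu, as today. */
static spinlock_t percpu_lock[NR_CPUS] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static spinlock_t *csched_schedule_lock(int cpu)
{
    return &percpu_lock[cpu];
}

/* credit2-style: cpus 0-1 share runqueue 0, cpus 2-3 share runqueue 1,
 * and each runqueue has exactly one lock. */
static spinlock_t runq_lock[2] = {
    PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER
};
static int cpu_to_runq[NR_CPUS] = { 0, 0, 1, 1 };
static spinlock_t *csched2_schedule_lock(int cpu)
{
    return &runq_lock[cpu_to_runq[cpu]];
}

/* Common code always locks via the hook, e.g. on the wake path: */
static void vcpu_wake_on(const struct scheduler *ops, int cpu)
{
    spinlock_t *lock = ops->schedule_lock(cpu);
    pthread_mutex_lock(lock);
    /* ... runqueue manipulation is now safe: any cpu scheduling out of
     * the same runqueue must hold this very lock, so the race above
     * (and the private per-runqueue lock) goes away ... */
    pthread_mutex_unlock(lock);
}

int main(void)
{
    struct scheduler credit1 = { .schedule_lock = csched_schedule_lock };
    struct scheduler credit2 = { .schedule_lock = csched2_schedule_lock };
    vcpu_wake_on(&credit1, 0);   /* takes cpu0's private lock */
    vcpu_wake_on(&credit2, 0);   /* takes runqueue 0's shared lock */
    vcpu_wake_on(&credit2, 1);   /* same lock as cpu0 -- window closed */
    return 0;
}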

I'll think about it, and probably give it a spin to see how it works out.

-George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel