This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding

To: "Bryan S Rosenburg" <rosnbrg@xxxxxxxxxx>, <habanero@xxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 8 Jun 2005 22:29:21 +0100
Cc: ryanh@xxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, hohnbaum@xxxxxxxxxx, Orran Y Krieger <okrieg@xxxxxxxxxx>
Delivery-date: Wed, 08 Jun 2005 21:28:45 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVsa4TnuejHG1ueSoCiC5gEpzenJAABNxKQ
Thread-topic: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
> > IMO, I don't think this alone is enough to encourage task migration.
> > The primary motivator to steal is a 25% or more load imbalance, and
> > one extra fake kernel thread will probably not be enough to trigger this.
> The kernel thread is needed at the very least to ensure that 
> all user programs on the de-scheduled CPU are available for 
> migration.  In an important case, a program on the 
> de-scheduled CPU holds a futex, and another CPU goes idle 
> because its program blocks on the futex.  We'd want the idle 
> CPU to pick up the futex holder, and I'm assuming (with very 
> little actual knowledge) that the Linux scheduler would make 
> that happen. 

We might be able to come up with a cheaper hack for doing this. The
notification scheme is already on the expensive side, and adding two
extra passes through the scheduler could doom it entirely.

> I'd view your "cpu_power" proposal as orthogonal to (or 
> perhaps complementary to) our ideas on preemption 
> notification.  It's aimed more at load-balancing and fair 
> scheduling than specifically at the problems that arise with 
> the preemption of lock holders.  On the apparent CPU speed 
> issue, does Linux account in any way for different interrupt 
> loads on different processors?  Is a program just out of luck 
> if it happens to get scheduled on a processor with heavy 
> interrupt traffic, or will Linux notice that it's not making 
> the same progress as its peers and shuffle things around?  It 
> seems that your cpu_power proposal might have something to 
> contribute here. 

I don't see it as orthogonal -- I think something like it is needed to
make the notification scheme result in any benefit, otherwise no work
will get migrated from the de-scheduled CPU.

I'm just not sure how easy it will be to add into the rebalance code.


Xen-devel mailing list