This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding

To: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Yield to VCPU hcall, spinlock yielding
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Fri, 3 Jun 2005 15:59:06 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Bryan Rosenburg <rosnbrg@xxxxxxxxxx>, Michael Hohnbaum <hohnbaum@xxxxxxxxxx>, Orran Krieger <okrieg@xxxxxxxxxx>, Ryan Harper <ryanh@xxxxxxxxxx>
Delivery-date: Fri, 03 Jun 2005 20:58:23 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <A95E2296287EAD4EB592B5DEEFCE0E9D282064@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D282064@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx> [2005-06-03 15:41]:
> > I've not received any feedback on this.  Following this patch 
> > up with one that applies against current.  Builds, but 
> > haven't tested it since current SMP domains don't run.  
> Steven Smith has been experimenting with and benchmarking a number of
> different variants of this approach, testing a range of different
> preemption mitigation and avoidance techniques. I'm sure we'll hear more
> next week...

Great.  I'm looking forward to seeing how this turns out.

> My gut feeling is that we can get away with something simpler than the
> confer technique as we only need it as a hint. Anyhow, let's see.
> Have you any suggestions for metrics for comparing the schemes? lmbench
> is quite good for assessing the no-contention case. Perhaps doing a
> kernel build on a guest with VCPUs > phy CPUs is a reasonable way of
> assessing the benefit.

We have currently been using a lock-intensive program, [1]pft, as a
benchmark.  I patched in lockmeter to measure the 'lockiness' of various
benchmarks, and even a workload with 8 VCPUs backed by a single physical
CPU doesn't generate a large number of lock contentions.  pft is far
more lock-intensive.

However, one of our concerns with confer/directed yielding is that the
lock-holder VCPU doesn't know that it was given a time-slice and that it
should voluntarily yield so that other VCPUs get a chance at the lock.
Without such a mechanism, one can imagine the lock holder continuing on
and possibly grabbing the lock yet again before being preempted, at
which point another VCPU will yield to it again, and so on.  We could
add something to the vcpu_info array indicating that it was given a
slice, then check for it in _raw_spin_unlock() and call do_yield().
These spinlock changes certainly affect the speed of the spinlocks in
Linux, which is one of the reasons we wanted to avoid directed yielding
or any other mechanism that required spinlock accounting.
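To make the idea concrete, here is a minimal user-space sketch of that
unlock-path check.  The names (`vcpu_info_sketch`, `time_slice_conferred`,
`do_yield`, `raw_spin_unlock_sketch`) are hypothetical stand-ins, not the
actual Xen or Linux interfaces; a real implementation would set the flag
from the hypervisor on confer and issue a yield hypercall:

```c
#include <stdbool.h>

/* Hypothetical per-VCPU state, modeled on the shared vcpu_info idea:
 * the hypervisor would set this flag when granting a conferred slice. */
struct vcpu_info_sketch {
    bool time_slice_conferred;
};

static struct vcpu_info_sketch this_vcpu;
static int yield_count;     /* stands in for a real yield hypercall */

/* Placeholder for a yield-to-hypervisor call. */
static void do_yield(void)
{
    yield_count++;
}

/* Sketch of the extra check in the unlock path: after releasing the
 * lock, a VCPU that ran on a conferred slice voluntarily yields so the
 * waiters get a chance at the lock. */
static void raw_spin_unlock_sketch(volatile int *lock)
{
    *lock = 0;                                  /* release the lock */
    if (this_vcpu.time_slice_conferred) {
        this_vcpu.time_slice_conferred = false; /* consume the hint */
        do_yield();
    }
}
```

The extra flag test is exactly the per-unlock cost the message worries
about: it sits on the spinlock fast path even when no slice was ever
conferred.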

I don't know if you had a chance to see my status on the [2]preemption
notification from about a month ago.  I'm going to bring that patch up
to current and re-run the tests to see where things are again.  Please
take a look at the original results.

1. http://k42.ozlabs.org/wikiattach/PftPerformanceK42/attachments/pft.c.txt
2. http://lists.xensource.com/archives/html/xen-devel/2005-05/msg00139.html


Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253

Xen-devel mailing list