This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] credit scheduler

To: "Karl Rister" <kmr@xxxxxxxxxx>, "Xen Devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] credit scheduler
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 29 Aug 2006 00:04:08 +0100
Delivery-date: Mon, 28 Aug 2006 16:04:32 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcbK8WuIR/T+mDWVSxqchapKf/v9gQAANnAw
Thread-topic: [Xen-devel] credit scheduler
> In my most basic test I have a 4-socket dual-core Intel box (Paxville), a
> uniprocessor dom0, and 7 uniprocessor domUs.  Each of the domUs is pinned
> to its own core, with the first core of the system left for dom0.  When
> using the credit scheduler, the dom0 VCPU will bounce around the system,
> landing on the same thread as one of the domUs, or sometimes on one of the
> sibling hyperthreads (this appears to happen the majority of the time it
> moves).  This is less than ideal considering cache warmth and the sharing
> of CPU resources, given that the first core of the system is always
> available in this configuration.  Does the credit scheduler have any
> awareness of cache warmth or CPU siblings when balancing?
> I have also seen similar behavior when running tests in the domUs such
> that each has its VCPU running at 100% utilization, so I believe this
> behavior to be fairly uniform.

The sometimes suboptimal use of hyperthreading is well understood and is
on the todo list. It hasn't been a priority as the current generation of
Intel Core/Core2 and Opteron CPUs don't have HT.

Apart from the hyperthreading sibling case, a dom0 vcpu should only ever
migrate to another CPU that has been idle for a complete rebalancing
period. Hence the rate of movement should be very low, and the overhead
of re-warming the cache correspondingly tiny.  One could conceivably
argue that this isn't worth the special-case code in the scheduler to
fix...

BTW: The PhD thesis of our very own James Bulpin has lots of useful data
on cache warming times and how to optimize for HT systems.

Allowing dom0 vcpus to be explicitly pinned would certainly be a good
thing though, and is slated for post 3.0.3 -- see the thread on this
topic earlier today. In the interim, it would be easy to add a Xen
command-line parameter to pin dom0 vcpus...
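
Such a boot-time knob might be passed on the hypervisor line of a grub entry, along these lines. This is a sketch under assumptions: the parameter name `dom0_vcpus_pin` did not exist at the time of this thread (though later Xen releases did add an option by that name), and the kernel paths and versions are placeholders, not taken from the original message.

```
title Xen
    # dom0_vcpus_pin is hypothetical here: a boot parameter asking Xen to
    # pin dom0's vcpus rather than letting the scheduler migrate them.
    kernel /boot/xen.gz dom0_vcpus_pin
    module /boot/vmlinuz-2.6-xen root=/dev/sda1 ro
    module /boot/initrd-2.6-xen.img
```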

