This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] unnecessary VCPU migration happens again

To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Subject: Re: [Xen-devel] unnecessary VCPU migration happens again
From: Emmanuel Ackaouy <ack@xxxxxxxxxxxxx>
Date: Tue, 19 Dec 2006 09:59:41 +0100
Cc: "Petersson, Mats" <Mats.Petersson@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 19 Dec 2006 00:59:38 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <51CFAB8CB6883745AE7B93B3E084EBE207DDAC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <51CFAB8CB6883745AE7B93B3E084EBE207DDAC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Dec 19, 2006, at 8:02, Xu, Anthony wrote:
Your patch is good, and it eliminates the majority of the unnecessary migrations.
But unnecessary migrations still exist: I can still see about a 5% performance
degradation on the above benchmarks (KB and LTP).
In fact, this patch has helped a lot (from 27% down to 5%).

I can understand that it is impossible to spread VCPUs over all sockets/cores
and eliminate all unnecessary migrations at the same time.

Is it possible for us to add an argument to the scheduler_init function to enable/disable
the VCPU-spreading feature?

I don't think this is a good idea. If you want to disable migration, you can always
pin your VCPUs in place yourself using the CPU affinity masks.
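For example, pinning can be done per VCPU with the `xm vcpu-pin` subcommand, or statically in the domain config file. A minimal sketch (the domain name "hvm1" and the CPU numbers are placeholders, not from the thread):

```shell
# Pin domain "hvm1"'s VCPU 0 to physical CPU 2 and VCPU 1 to CPU 3,
# so the credit scheduler will never migrate them off those CPUs.
xm vcpu-pin hvm1 0 2
xm vcpu-pin hvm1 1 3

# Equivalently, set the affinity before boot in the domain config:
#   cpus = "2,3"

# Verify the resulting affinity masks:
xm vcpu-list hvm1
```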

If the attempt to balance work across sockets hurts the performance of reasonable benchmarks, then perhaps it's still being too aggressive. Right now, such a migration can happen on 10ms boundaries. I can try to smooth this further.

Can you dump the credit scheduler's stat counters before and after you run the
benchmark? (Type ^A^A^A on the dom0/hypervisor serial console to switch to the
hypervisor, then press the "r" key to dump scheduler info.) That, along with an
idea of the elapsed time between the two stat samples, would be handy.
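One way to capture the two samples with timestamps, sketched below; it assumes your xm version supports the `debug-keys` subcommand (otherwise use the ^A^A^A console switch and "r" key as described above), and `./run-benchmark` stands in for whatever KB/LTP wrapper you use:

```shell
# Dump scheduler stats, record the time, run the benchmark, then dump again.
xm debug-keys r && xm dmesg > stats-before.txt
date +%s > t-start.txt

./run-benchmark    # hypothetical benchmark wrapper

date +%s > t-end.txt
xm debug-keys r && xm dmesg > stats-after.txt

# Elapsed seconds between the two stat samples:
echo $(( $(cat t-end.txt) - $(cat t-start.txt) ))
```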

