WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 
   
 


Re: [Xen-devel] Scheduling anomaly with 4.0.0 (rc6)

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: Re: [Xen-devel] Scheduling anomaly with 4.0.0 (rc6)
From: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>
Date: Mon, 5 Apr 2010 15:43:40 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 05 Apr 2010 07:44:23 -0700
In-reply-to: <3b511656-ea50-4ebb-918e-e24b40080580@default>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
References: <3b511656-ea50-4ebb-918e-e24b40080580@default>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

Is it possible that Linux is just favoring one vcpu over the other for
some reason?  Did you try running the same test but with only one VM?

Another theory would be that most interrupts are delivered to vcpu 0,
so it may end up in "boost" priority more often.

I'll re-post the credit2 series shortly; Keir said he'd accept it
post-4.0.  You could try it with that and see what the performance is
like.

 -George

On Fri, Apr 2, 2010 at 5:48 PM, Dan Magenheimer
<dan.magenheimer@xxxxxxxxxx> wrote:
> I've been running some heavy testing on a recent Xen 4.0
> snapshot and seeing a strange scheduling anomaly that
> I thought I should report.  I don't know if this is
> a regression... I suspect not.
>
> System is a Core 2 Duo (Conroe).  Load is four 2-VCPU
> EL5u4 guests, two of which are 64-bit and two of which
> are 32-bit.  Otherwise they are identical.  All four
> are running a sequence of three Linux compiles with
> (make -j8 clean; make -j8).  All are started approximately
> concurrently: I synchronize the start of the test after
> all domains are launched with an external NFS semaphore
> file that is checked every 30 seconds.
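[The start gate described above might look something like the following minimal /bin/sh sketch. The semaphore path, the variable names, and the `wait_for_start` helper are assumptions for illustration, not taken from the actual harness:

```shell
#!/bin/sh
# Each guest polls a shared (e.g. NFS-mounted) flag file and begins the
# workload only once the file appears, so all domains start together.
SEMAPHORE=${SEMAPHORE:-/mnt/nfs/start-test}   # shared flag file (assumed path)
INTERVAL=${INTERVAL:-30}                      # poll period in seconds

wait_for_start() {
    until [ -e "$SEMAPHORE" ]; do
        sleep "$INTERVAL"
    done
}

# Usage inside each guest, once domains are launched:
#   wait_for_start && (make -j8 clean; make -j8)
```

Creating the flag file on the NFS server then releases every waiting guest within one poll interval. -ed.]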
>
> What I am seeing is a rather large discrepancy in the
> amount of time consumed "underway" by the four domains
> as reported by xentop and xm list.  I have seen this
> repeatedly, but the numbers in front of me right now are:
>
> 1191s dom0
> 3182s 64-bit #1
> 2577s 64-bit #2 <-- 20% less!
> 4316s 32-bit #1
> 2667s 32-bit #2 <-- 40% less!
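[For reference, the per-domain CPU seconds come from the last column, Time(s), of `xm list`. A sketch of pulling them out, using the figures above as stand-in output (the domain names are invented; on a live host, replace the sample text with actual `xm list` output):

```shell
#!/bin/sh
# Sample text standing in for live `xm list` output; columns are
# Name, ID, Mem, VCPUs, State, Time(s).
sample='Name        ID Mem VCPUs State Time(s)
Domain-0     0 512     2 r----- 1191.0
el5u4-64-1   1 768     2 -b---- 3182.0
el5u4-64-2   2 768     2 -b---- 2577.0
el5u4-32-1   3 768     2 -b---- 4316.0
el5u4-32-2   4 768     2 -b---- 2667.0'

# Print domain name and cumulative CPU seconds, skipping the header row.
echo "$sample" | awk 'NR > 1 { printf "%-12s %8.0f\n", $1, $6 }'
```

Sampling this periodically during the run, rather than only at the end, is what exposes the discrepancy, since the totals converge once the faster domains finish. -ed.]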
>
> Again these are identical workloads and the pairs
> are identical released kernels running from identical
> "file"-based virtual block devices containing released
> distros.  Much of my testing had been with tmem and
> self-ballooning, so I had blamed them for a while,
> but I have reproduced it multiple times with both
> of those turned off.
>
> At start and after each kernel compile, I record
> a timestamp, so I know the same work is being done.
> Eventually the workload finishes on each domain and
> intentionally crashes the kernel so measurement is
> stopped.  At the conclusion, the 64-bit pair have
> very similar total CPU sec and the 32-bit pair have
> very similar total CPU sec so eventually (presumably
> when the #1's are done hogging CPU), the "slower"
> domains do finish the same amount of work.  As a
> result, it is hard to tell from just the final
> results that the four domains are getting scheduled
> at very different rates.
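[The timestamp bracketing described above could be sketched like this (the log path, loop structure, and names are assumptions for illustration):

```shell
#!/bin/sh
# Record a timestamp at the start and after each of the three kernel
# compiles, so the same unit of work is bracketed in every guest.
LOG=${LOG:-/tmp/compile-times.log}   # assumed log location

date '+start %s' >> "$LOG"
for i in 1 2 3; do
    # (make -j8 clean; make -j8)     # the actual workload per iteration
    date "+compile$i done %s" >> "$LOG"
done
```

Comparing the per-compile deltas across domains would then show directly whether the "slower" pair is being starved early in the run. -ed.]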
>
> Does this seem like a scheduler problem, or are there
> other explanations? Anybody care to try to reproduce it?
> Unfortunately, I have to use the machine now for other
> work.
>
> P.S. According to xentop, there is almost no network
> activity, so it is all CPU and VBD.  And the ratio
> of VBD activity looks to be approximately the same
> ratio as CPU(sec).
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>

