To: "Laszlo Ersek" <lersek@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] remove blocked time accounting from xen "clockchip"
From: "Jan Beulich" <JBeulich@xxxxxxxx>
Date: Thu, 10 Nov 2011 08:32:54 +0000
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Joe Jin <joe.jin@xxxxxxxxxx>, Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
In-reply-to: <4EBABCAE.40704@xxxxxxxxxx>
References: <1318970579-6282-1-git-send-email-lersek@xxxxxxxxxx> <4EBA8FAA020000780005FD5F@xxxxxxxxxxxxxxxxxxxx> <4EBABCAE.40704@xxxxxxxxxx>
>>> On 09.11.11 at 18:47, Laszlo Ersek <lersek@xxxxxxxxxx> wrote:
> On 11/09/11 14:35, Jan Beulich wrote:
>> That is, on an overcommitted system (and without your patch) I
>> would expect you not to see the (full) double idle increment for a
>> vCPU that is neither fully idle nor fully loaded.
> 
> I tried to verify this with an experiment. Please check whether the 
> experiment is bogus or not.
> 
> On a four-PCPU host (hyperthreading off, RHEL-5.7+ hypervisor & dom0) I 
> started three virtual machines:
> 
> VM1: four VCPUs, four processes running a busy loop each, independently.
> VM2: ditto
> VM3: single VCPU running the attached program (which otherwise puts a 
> 50% load on a single CPU, virtual or physical). The guest OS is RHEL-6.1.
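
A minimal, hypothetical sketch of such a half-load generator (assuming
alternating 100 ms busy and 100 ms sleep phases; the actual attached
program may well differ):

/*
 * Hypothetical sketch of a ~50% load generator (the actual attached
 * program may differ): alternate 100 ms of spinning with 100 ms of
 * sleeping on a single CPU.
 */
#define _POSIX_C_SOURCE 199309L
#include <time.h>

static void spin_ms(long ms)
{
    struct timespec start, now;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do {
        clock_gettime(CLOCK_MONOTONIC, &now);
    } while ((now.tv_sec - start.tv_sec) * 1000L +
             (now.tv_nsec - start.tv_nsec) / 1000000L < ms);
}

int main(void)
{
    struct timespec half = { 0, 100 * 1000000L };   /* 100 ms */

    for (;;) {
        spin_ms(100);               /* busy for half of each 200 ms period */
        nanosleep(&half, NULL);     /* idle for the other half */
    }
}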
> 
> In VM3, I also ran this script:
> 
> $ grep cpu0 /proc/stat; sleep 20; grep cpu0 /proc/stat
> cpu0 10421 0 510 119943 608 0 1 122 0
> cpu0 11420 0 510 121942 608 0 1 126 0
> 
> The difference in the fourth numerical column is still 1999, even though 
> only 10 seconds of those 20 were spent idly.
> 
> Does the experiment miss the point (or do I), or does this disprove the 
> idea?

For one, my expectation may be wrong (though I do think the point that
the accounting is still wrong even with the patch holds).

Second, the increase in stolen time (presumably the second-to-last
column; I'm not sure which kernel version RHEL-6.1 uses, so I can't
verify immediately) is just 4, which is certainly too small to be
relevant. That means VM3's only vCPU got scheduled almost instantly in
too many cases, which I think is the intended behavior of the credit
scheduler in a contrived environment like this.
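
For reference, a sketch of reading those counters, assuming the usual
field order user nice system idle iowait irq softirq steal guest (which
would indeed make steal the second-to-last column in the output above):

/*
 * Sketch: print cpu0's idle and steal counters from /proc/stat,
 * assuming the field order user nice system idle iowait irq softirq
 * steal guest (counts are in ticks of 1/USER_HZ seconds).
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned long long c[9] = { 0 };
    char line[256];
    FILE *f = fopen("/proc/stat", "r");

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "cpu0 ", 5) == 0) {
            sscanf(line + 5,
                   "%llu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &c[0], &c[1], &c[2], &c[3], &c[4],
                   &c[5], &c[6], &c[7], &c[8]);
            printf("idle=%llu steal=%llu\n", c[3], c[7]);
            break;
        }
    }
    fclose(f);
    return 0;
}

Sampling this twice, 20 seconds apart, would give the deltas quoted
above (1999 idle, 4 steal) under that column assumption.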

To get the amount of stolen time up, I think one would have to
penalize VM3 (so that it doesn't benefit from not having been
scheduled for 100ms each time). I don't, however, know how to
achieve that in practice.

One question that certainly can be answered is whether, with your patch
and across different (over-)load scenarios, process, system, idle and
steal times add up to wall time; I don't think they would. Which isn't
to say that they do without your patch, just that the patch addresses
only part of a wider issue.
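
A sketch of that check, under the same field-order assumption as above
and converting ticks via sysconf(_SC_CLK_TCK):

/*
 * Sketch of the suggested check: sample cpu0's counters twice, a fixed
 * interval apart, and compare the summed per-state deltas against the
 * elapsed wall time.  Same field-order assumption as above.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define NFIELDS 9

static int sample_cpu0(unsigned long long t[NFIELDS])
{
    char line[256];
    FILE *f = fopen("/proc/stat", "r");
    int ok = 0;

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "cpu0 ", 5) == 0) {
            ok = sscanf(line + 5,
                        "%llu %llu %llu %llu %llu %llu %llu %llu %llu",
                        &t[0], &t[1], &t[2], &t[3], &t[4],
                        &t[5], &t[6], &t[7], &t[8]) == NFIELDS;
            break;
        }
    }
    fclose(f);
    return ok;
}

int main(void)
{
    unsigned long long a[NFIELDS] = { 0 }, b[NFIELDS] = { 0 }, sum = 0;
    long hz = sysconf(_SC_CLK_TCK);
    int i;

    if (!sample_cpu0(a))
        return 1;
    sleep(20);                          /* wall-clock interval */
    if (!sample_cpu0(b))
        return 1;

    for (i = 0; i < NFIELDS; i++)       /* user + ... + steal + guest */
        sum += b[i] - a[i];

    printf("accounted for %.2f of 20.00 wall seconds\n", (double)sum / hz);
    return 0;
}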

Jan

