WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [PATCH] Quick path for PIO instructions which cut more than half of the expense
From: "Xiang, Kai" <kai.xiang@xxxxxxxxx>
Date: Tue, 23 Dec 2008 22:03:39 +0800
Accept-language: en-US
Cc:
Delivery-date: Tue, 23 Dec 2008 06:04:38 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C5751FD4.207C7%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C85CEDA13AB1CF4D9D597824A86D2B9001C3D4771E@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C5751FD4.207C7%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclkHlHyULdzldIRRfyHPzvdmtg6NAAA9AeuADgQ7hA=
Thread-topic: [Xen-devel] [PATCH] Quick path for PIO instructions which cut more than half of the expense
Thanks Keir for the comments!
On the noise question: in experiments 1 and 2, the thousands of I/Os traced show
the data is quite stable.
For experiment 3, to guard against noise, I increased each run from 1 minute to
3 minutes and did four more groups of runs. I can still see a clear 2%~3%
performance gain in every run, which makes it very unlikely the difference is
just noise. See the backup data below if you are interested.

But I have to agree with you that those who care about performance will
probably move to PV drivers.
I am just raising the open question here: is it possible that anyone has a
strong reason to use HVM for performance, or that there is some other
PIO-sensitive situation where this would help?

Regards
Kai

----------------------------------------------------------------
Backups

IOPS:
Before:  3 minutes for each single run, i.e. 9 minutes per group
Run group 1: (3 runs from start up)      106.059, 112.147, 114.640
Run group 2: (3 runs from start up)      106.340, 112.024, 114.584
Run group 3: (3 runs from start up)      105.919, 111.405, 114.598
Run group 4: (3 runs from start up)      106.065, 112.455, 114.930

After:
Run group 1: (3 runs from start up)      109.435, 114.977, 117.662
Run group 2: (3 runs from start up)      109.961, 115.395, 117.576
Run group 3: (3 runs from start up)      109.41,  114.623, 118.046
Run group 4: (3 runs from start up)      110.464, 116.757, 118.790
-------------------------------------------------------------------
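As a quick sanity check on the 2%~3% claim, the per-group gain can be computed directly from the backup table above. This is just a throwaway sketch for checking the arithmetic, not part of the patch; the numbers are copied verbatim from the table.

```python
# IOPS from the backup table: four groups, each three consecutive
# runs from a fresh start, before and after the quick-path patch.
before = [
    [106.059, 112.147, 114.640],
    [106.340, 112.024, 114.584],
    [105.919, 111.405, 114.598],
    [106.065, 112.455, 114.930],
]
after = [
    [109.435, 114.977, 117.662],
    [109.961, 115.395, 117.576],
    [109.41,  114.623, 118.046],
    [110.464, 116.757, 118.790],
]

def mean(xs):
    return sum(xs) / len(xs)

# Percentage gain of the group-mean IOPS, after vs. before.
for i, (b, a) in enumerate(zip(before, after), 1):
    gain = (mean(a) / mean(b) - 1) * 100
    print(f"group {i}: +{gain:.2f}%")
```

Every group lands in roughly the 2.8%~3.8% range, consistent with the gain being systematic rather than noise.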




-----Original Message-----
From: Keir Fraser [mailto:keir.fraser@xxxxxxxxxxxxx] 
Sent: 22 December 2008 18:44
To: Xiang, Kai; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] [PATCH] Quick path for PIO instructions which cut more 
than half of the expense

On 22/12/2008 10:16, "Xiang, Kai" <kai.xiang@xxxxxxxxx> wrote:

> 3) The influence for more realistic workloads:
> We tested on a Windows 2003 Server guest, using IOmeter to run a disk-bound
> test; the IO pattern is "Default", which uses 67% random reads and 33%
> random writes with a 2K request size.
> To reduce the influence of the file cache, I ran 3 times (1 minute each) from
> a fresh start of the machine (both Xen and the guest)
> 
> Compare before and after
>          IO per second (3 runs)    |  average response time (3 runs)
> ----------------------------------------------------------------
> Before: 100.004; 109.447; 110.801  |  9.988;  9.133;  9.022
> After:  101.951; 110.893; 114.179  |  9.806;  9.016;  8.756
> ------------------------------------------------------------------
> 
> So we are seeing a 1%~3% IO performance gain while also reducing the average
> response time by 2%~3%.
> Considering this is just an ordinary SATA disk and an IO-bound workload, we
> expect more with faster disks and more cached IO.

The difference is in the noise, almost. Not, I think, sufficient for me ever
to want to see the VMX PIO code back in Xen ever again. Those who actually
care about performance would run PV drivers anyway, and see much greater
speedups.

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel