RE: [Xen-users] Re: Windows Disk performance

To: "Christian Tramnitz" <chris.ace@xxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-users] Re: Windows Disk performance
From: "James Harper" <james.harper@xxxxxxxxxxxxxxxx>
Date: Sun, 8 Jun 2008 23:19:54 +1000
Delivery-date: Sun, 08 Jun 2008 06:20:33 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <g2g2nv$1ci$1@xxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4849D559.8000805@xxxxxxxxxx> <g2g2nv$1ci$1@xxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcjJO/g5yiRSbMYxTECvwlyDuw1UCAALHlJQ
Thread-topic: [Xen-users] Re: Windows Disk performance
> 
> When it comes to PV drivers' performance, this is an interesting topic.
> I've seen posts reporting the opposite result (not exact numbers, but
> the direction), so it would be interesting to find out what causes this.
> 

I just did a bit of testing myself... the '32K; 100% Read; 0% random'
test in iometer performs inconsistently when using the qemu drivers. On
one run it gave me 35MB/s. I then tried the gplpv drivers, which gave me
around 23MB/s. Now, back on the qemu drivers, I can't get past 19MB/s.
I'm testing against an LVM snapshot at the moment, which probably has
something to do with the inconsistent results....

I also tried fiddling with the '# of Outstanding I/Os' setting, changing
it to 16 (the maximum number of concurrent requests scsiport will give
me). For qemu there was no change; for gplpv my numbers went up from
23MB/s to 66MB/s. I'm a little unsure how much trust to put in that,
though, as hdparm in Dom0 gives me a maximum of 35MB/s on that LVM
device, so I can't quite figure out how an HVM DomU could be getting
better results than the hdparm baseline figure.
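
For anyone wondering what that iometer setting actually changes: it's
the number of reads kept in flight at the same time. A rough user-mode
sketch of the idea is below - this is not iometer's code, and the device
path and numbers are just placeholders:

/* Rough illustration of "# of Outstanding I/Os": keep QUEUE_DEPTH reads
 * in flight at once using overlapped I/O instead of one at a time.
 * NOT iometer's code; "\\.\PhysicalDrive1" and the sizes are made up.
 */
#include <windows.h>

#define QUEUE_DEPTH 16            /* the scsiport limit mentioned above */
#define BLOCK_SIZE  (32 * 1024)   /* the 32K test */

int main(void)
{
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive1", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING,
                           FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING,
                           NULL);
    OVERLAPPED ov[QUEUE_DEPTH];
    void *buf[QUEUE_DEPTH];
    ULONGLONG offset = 0, completed = 0;
    int i;

    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* issue QUEUE_DEPTH sequential reads without waiting for each one */
    for (i = 0; i < QUEUE_DEPTH; i++) {
        buf[i] = VirtualAlloc(NULL, BLOCK_SIZE, MEM_COMMIT, PAGE_READWRITE);
        ZeroMemory(&ov[i], sizeof(ov[i]));
        ov[i].hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
        ov[i].Offset = (DWORD)offset;
        ov[i].OffsetHigh = (DWORD)(offset >> 32);
        ReadFile(h, buf[i], BLOCK_SIZE, NULL, &ov[i]);
        offset += BLOCK_SIZE;
    }

    /* whenever a read finishes, immediately issue the next one so the
     * queue never drains; stop after ~1GB just so the sketch terminates.
     * A real tool would run for a fixed time and report MB/s. */
    while (completed * BLOCK_SIZE < (1ULL << 30)) {
        for (i = 0; i < QUEUE_DEPTH; i++) {
            DWORD done;
            if (GetOverlappedResult(h, &ov[i], &done, FALSE)) {
                completed++;
                ov[i].Offset = (DWORD)offset;
                ov[i].OffsetHigh = (DWORD)(offset >> 32);
                ReadFile(h, buf[i], BLOCK_SIZE, NULL, &ov[i]);
                offset += BLOCK_SIZE;
            }
        }
    }
    return 0;
}

With a queue depth of 1 the backend sits idle between requests; with 16
in flight it always has work queued, which is presumably why the gplpv
figure jumped from 23MB/s to 66MB/s.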

I'm just about to upload 0.9.8, which fixes a performance bug that would
cause a huge slowdown (iometer dropped from 23MB/s to 0.5MB/s :) if too
many outstanding requests were issued at once. It also prints some
statistics to the debug log (viewable via DebugView from
sysinternals.com) every 60 seconds, which may or may not be useful.
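
To be clear, those statistics are just DbgPrint output, so anything that
captures kernel debug prints will see them. The shape of it is roughly
like the sketch below - not the actual gplpv code; the counters and
names are made up:

/* Rough sketch (not the actual gplpv code) of a driver dumping counters
 * to the debug log every 60 seconds; DebugView simply captures DbgPrint
 * output. The counters here are hypothetical.
 */
#include <ntddk.h>

static KTIMER stat_timer;
static KDPC stat_dpc;
static ULONG total_requests;        /* hypothetical counters */
static ULONG outstanding_requests;

static VOID StatDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    /* this line is what shows up in DebugView (kernel capture enabled) */
    DbgPrint("xenvbd stats: total=%lu outstanding=%lu\n",
             total_requests, outstanding_requests);
}

/* called once at driver startup */
VOID StartStatTimer(VOID)
{
    LARGE_INTEGER due;

    KeInitializeTimerEx(&stat_timer, NotificationTimer);
    KeInitializeDpc(&stat_dpc, StatDpcRoutine, NULL);

    due.QuadPart = -60LL * 10000000LL;  /* first fire in 60s (100ns units) */
    KeSetTimerEx(&stat_timer, due, 60 * 1000 /* then every 60000ms */, &stat_dpc);
}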

Unless the above bug was affecting things, and I'm not sure that it was,
the reduction in performance may be due to the way xenpci now notifies
the child drivers (e.g. xenvbd and xennet) that an interrupt has
occurred. That should affect xennet equally, though. It was changed with
the wdf->wdm rewrite.
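
For those who haven't looked at the code, the notification path is
basically the parent driver fanning the interrupt out to whichever
children have registered an interest. Very roughly, something like the
following - this is not the actual xenpci code; the names and structures
are invented for illustration:

/* Very rough sketch (not the actual xenpci code) of a parent driver
 * fanning an interrupt notification out to its children via registered
 * callbacks. All names and structures here are invented.
 */
#include <ntddk.h>

typedef VOID (*CHILD_EVT_HANDLER)(PVOID Context);

typedef struct _CHILD_NOTIFY {
    LIST_ENTRY entry;
    CHILD_EVT_HANDLER handler;    /* e.g. xenvbd's or xennet's routine */
    PVOID context;
} CHILD_NOTIFY, *PCHILD_NOTIFY;

static LIST_ENTRY child_list;
static KSPIN_LOCK child_lock;

/* done once by the parent, e.g. in DriverEntry */
VOID InitChildList(VOID)
{
    InitializeListHead(&child_list);
    KeInitializeSpinLock(&child_lock);
}

/* a child (xenvbd, xennet) registers the routine it wants called */
VOID RegisterChildHandler(PCHILD_NOTIFY notify)
{
    KIRQL irql;
    KeAcquireSpinLock(&child_lock, &irql);
    InsertTailList(&child_list, &notify->entry);
    KeReleaseSpinLock(&child_lock, irql);
}

/* DPC queued by the parent's ISR: walk the list and poke every child,
 * each of which then checks its own ring for completed work */
VOID ParentInterruptDpc(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    PLIST_ENTRY e;
    KIRQL irql;

    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Context);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);

    KeAcquireSpinLock(&child_lock, &irql);
    for (e = child_list.Flink; e != &child_list; e = e->Flink) {
        PCHILD_NOTIFY n = CONTAINING_RECORD(e, CHILD_NOTIFY, entry);
        n->handler(n->context);
    }
    KeReleaseSpinLock(&child_lock, irql);
}

The cost of walking that list on every interrupt is the sort of thing
that could show up in the disk numbers, which is why I mention it.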

James


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users