WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] Release 0.9.5 of GPL PV Drivers for Windows

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Release 0.9.5 of GPL PV Drivers for Windows
From: "Ryan Burke" <burke@xxxxxxxxxxxxxxxxx>
Date: Sun, 1 Jun 2008 13:55:37 -0500 (CDT)
Delivery-date: Sun, 01 Jun 2008 11:56:08 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
Importance: Normal
In-reply-to: <89ADFB63BC286B49BC71F1D34A7E38D612D4B0@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <89ADFB63BC286B49BC71F1D34A7E38D612D4B0@xxxxxxxxxxxxxxxxxxxxxx>
Reply-to: burke@xxxxxxxxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: SquirrelMail/1.4.13
> James;
>
> While educating myself in the fine art of Xen I have been lurking on the
> list and paying close attention to your incredible drivers.  I spent
> some time today doing exactly what you asked for, disk performance
> testing with and without your PV drivers, so I thought I would share my
> results with you and the list along with an interesting observation.  I
> hope you find this useful and have some ideas on what might be going
> wrong.  Also, I would like to contribute to Xen in general and this is
> an area where I feel I am able to offer some assistance. If there is
> anything in particular that you would like me to test in more detail or
> if you need any more information please let me know and I will do my
> best to make the time to work with you on this.  I have a rather large
> "Windows" estate running on various Hypervisors so I can probably come
> up with just about any configuration you want.  My current test
> configuration is as follows:
>
> Dell PowerEdge 1950, dual 2.0 GHz Intel 5400-series processors, PERC 5/i,
> 2 x 750 GB SATA (RAID 1, write-back cache enabled in the BIOS and Dom0),
> 16 GB RAM. All of the firmware is one version out of date, but none of
> the fixes are compelling enough to make me update.
>
> Xen 3.2.1 compiled from source
>
> Dom0 is CentOS 5.1.
>
> The DomUs are on local storage for testing purposes, on an LVM partition
> (the default CentOS installation layout). The only DomUs running on the
> machine are the ones mentioned here, and tests were carried out on one
> DomU at a time.
>
> All DomUs are fresh installations of Windows Server 2003 Standard R2
> SP2, with no Windows updates applied.
>
> I've attached the iometer configuration that I used.
>
> Observation:
>
> I have observed a possible incompatibility between QCOW image files and
> the PV drivers.  If I create a DomU with the above spec using an image
> file created using dd, for example:
>
> dd if=/dev/zero of=guest_disk_sparse.img  bs=1k seek=8192k  count=1
>
> then the drivers work fine.
>
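> (Aside: the dd invocation above writes a single 1 KiB block at an 8 GiB
> offset, so the image is sparse -- its apparent size is 8 GiB but it uses
> almost no disk space until the guest writes to it. A quick way to see
> this, with an example filename:)

```shell
# Recreate the sparse image exactly as in the message above (example filename).
dd if=/dev/zero of=guest_disk_sparse.img bs=1k seek=8192k count=1

# Apparent size vs. actual allocation: ls reports ~8 GiB,
# du reports only the few KiB actually allocated.
ls -lh guest_disk_sparse.img
du -h guest_disk_sparse.img
```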
> If I create the image file using:
>
> qemu-img create -f qcow2 guest_disk_qcow.qcow 8G
>
> I get a BSOD (I can send you a pretty screenshot if you like), and
> depending on the "disk" line in the HVM config I get the BSOD at
> different times: tap:aio BSODs early in the boot and tap:qcow BSODs
> quite late.
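> (For reference, the backend is selected by the "disk" line in the xm
> guest config, which is Python syntax; something of this shape, with
> placeholder paths rather than ones from the original message:)

```python
# Xen 3.x xm guest-config fragment. Paths below are placeholders,
# not taken from the original message.

# Raw dd-created image attached through the blktap AIO backend:
disk = ['tap:aio:/var/lib/xen/images/guest_disk_sparse.img,hda,w']

# A qemu-img-created QCOW image would use the qcow backend instead:
# disk = ['tap:qcow:/var/lib/xen/images/guest_disk_qcow.qcow,hda,w']
```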
>
> I double-checked these results on two fresh machines, one running PV
> 0.9.1 and the other 0.9.5, just to be sure, but it would be great if
> someone could verify my work.
>
> Obviously I would prefer performance and stability over the
> functionality that the QCOW disks give me, but if this is an easy fix it
> would be great to have it all.
>
> IOMETER Performance Results (see config attached). The qcow machine
> running PV 0.9.5 BSOD'd during the first test, so it has no further
> results:
>
>                                Xen 3.2.1   Xen 3.2.1   Xen 3.2.1   Xen 3.2.1
>                                No Tools    PV 0.9.5    PV 0.9.1    PV 0.9.5
> Machine Name                   Qcow1       Qcow        W2K3        RAW
> Image Type                     QCOW2       QCOW2       RAW         RAW
>
> MAX IO's
>   Total I/O's per second       1801.43     BSOD        4385.78     16330.33
>   Total MBs per second         0.88        -           2.14        7.97
>   Avg I/O response time (ms)   4.4096      -           1.8225      0.4890
>
> MAX Throughput
>   Total I/O's per second       576.76      -           1446.27     547.64
>   Total MBs per second         36.05       -           90.39       34.23
>   Avg I/O response time (ms)   13.8607     -           5.5269      14.5923
>
> RealLife
>   Total I/O's per second       404.61      -           6763.00     610.42
>   Total MBs per second         0.79        -           13.21       1.19
>   Avg I/O response time (ms)   19.7471     -           1.1815      13.0836
>
> Best Regards
>
> Geoff Wiener


Geoff,

Is there any way you could reformat those test results? I am having a very
hard time following what the actual results are for each test.

Thanks,
Ryan

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
