
[Xen-devel] Disk I/O Benchmark Test Results in Windows XP Home Guest on pvops dom 0 kernel 2.6.31.1


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx
  • From: "Mr. Teo En Ming (Zhang Enming)" <space.time.universe@xxxxxxxxx>
  • Date: Wed, 14 Oct 2009 08:23:39 +0800
  • Cc: space.time.universe@xxxxxxxxx
  • Delivery-date: Wed, 14 Oct 2009 06:14:34 -0700
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=gamma; h=mime-version:in-reply-to:references:date:message-id:subject:from:to :cc:content-type; b=M5UI5gDxTtzn28sa1jh2TqHLBv/hU/cXO04mkLMWxVVlquwiZ2yq5GGEUEvFrNwP82 aZu5yHginYyUGEaOuewdTufw/yTyq4blvZDfN1jj33iPLdBG1xM1AbS+7US3zwwvHDc4 ieBwC3lgtgqU/AI1RQK7L70gXit324U+FtjGE=
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Dear All,

I have just installed the GPLPV drivers in my Windows XP Home domU.

I installed this version in my Windows XP Home virtual machine:
http://www.meadowcourt.org/downloads/gplpv_fre_wxp_x86_0.10.0.86.msi

http://wiki.xensource.com/xenwiki/XenWindowsGplPv

http://www.meadowcourt.org/downloads/

After installation of the GPL PV drivers, the hard disk drive in Device
Manager changed from "QEMU HARDDISK" to "XEN PV DISK SCSI DISK DEVICE",
and the emulated Realtek network adapter was replaced with the "Xen Net
Device Driver" at 1.0 Gbps.
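
For reference (a minimal sketch, assuming the standard xm toolstack from
Xen 3.x and a hypothetical guest name "winxp"), the switch from emulated
to PV devices can also be confirmed from dom0:

  xm list winxp           # confirm the domain is running
  xm block-list winxp     # PV block devices (vbd) exported to the guest
  xm network-list winxp   # PV network interfaces (vif) exported to the guest
  xenstore-ls /local/domain/0/backend/vbd   # backend state of the PV block devices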

After installation of the GPL PV drivers, I could no longer run the
SiSoftware Sandra disk I/O benchmark tests, as it only supports ATA,
SATA, and ATAPI devices; SCSI devices are not supported.

I have performed disk I/O benchmark tests using HD Tach on pvops dom0
kernels 2.6.30-rc3 and 2.6.31.1, with three iterations of the HD Tach
benchmark for each kernel.
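
(As discussed further down in the thread, dom0's caches should ideally be
cleared between benchmark iterations. A minimal sketch of that step, run
in dom0 and assuming a 2.6.16 or later kernel:

  sync                               # flush dirty pages to disk
  echo 3 > /proc/sys/vm/drop_caches  # drop page cache, dentries and inodes

I could not do this here because dom0 becomes unreachable once the Win XP
domU is running; see the quoted discussion below.)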

I am using Xen 3.5-unstable changeset 20143 with the Intel gfx
passthrough version 1 patches applied, on an Intel DQ45CB motherboard
(BIOS 0093) with an nVidia GeForce 8400 GS PCI Express x16 graphics card
and 8 GB of DDR2-800 memory.
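
For completeness, the hypervisor changeset and dom0 kernel version quoted
above can be read back from the running system; a quick sketch:

  xm info | grep -E 'xen_changeset|release'  # hypervisor changeset and dom0 kernel release
  uname -r                                   # dom0 kernel version, e.g. 2.6.31.1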

http://www.youtube.com/watch?v=HNEiSInrav0

http://www.youtube.com/watch?v=_hOT_9LIG5w

http://www.youtube.com/watch?v=1ia3IwG6tp4

http://www.youtube.com/watch?v=5tLzYqIJ7Q0

Before using the Xen GPLPV drivers (QEMU emulated hard disk), disk
performance on 2.6.30-rc3 clearly outshone that on 2.6.31-rc6 and
2.6.31.1: measured with the SiSoftware Sandra test suite, disk throughput
was 50 MB/s on 2.6.30-rc3 versus about 30 MB/s on 2.6.31-rc6 and
2.6.31.1.

With the GPLPV drivers, disk I/O performance is about equal on both
pvops dom0 kernels, 2.6.30-rc3 and 2.6.31.1. Please see the six attached
JPEG screenshots of the HD Tach disk benchmark results: three screenshots
are for 2.6.30-rc3 and the remaining three are for 2.6.31.1.

--
Mr. Teo En Ming (Zhang Enming) Dip(Mechatronics) BEng(Hons)(Mechanical
Engineering)
Alma Maters:
(1) Singapore Polytechnic
(2) National University of Singapore
Blog URL: http://teo-en-ming-aka-zhang-enming.blogspot.com
Email: space.time.universe@xxxxxxxxx
MSN: teoenming@xxxxxxxxxxx
Mobile Phone: +65-9648-9798
Street: Bedok Reservoir Road
Republic of Singapore

On Fri, Oct 9, 2009 at 8:54 AM, Mr. Teo En Ming (Zhang Enming)
<space.time.universe@xxxxxxxxx> wrote:
> Hi,
>
> I have performed 5 iterations of the disk I/O benchmark tests. Here
> are the results:
>
> #1: 26.67 MB/s
> #2: 31.65 MB/s
> #3: 30 MB/s
> #4: 28 MB/s
> #5: 28.87 MB/s
>
> Just to confirm: if I do a "git clone
> git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git
> linux-2.6-xen" but do not do a git checkout, the default branch
> selected will be rebase/master, right?
>
> --
> Mr. Teo En Ming (Zhang Enming) Dip(Mechatronics) BEng(Hons)(Mechanical
> Engineering)
> Alma Maters:
> (1) Singapore Polytechnic
> (2) National University of Singapore
> Blog URL: http://teo-en-ming-aka-zhang-enming.blogspot.com
> Email: space.time.universe@xxxxxxxxx
> MSN: teoenming@xxxxxxxxxxx
> Mobile Phone: +65-9648-9798
> Street: Bedok Reservoir Road
> Republic of Singapore
>
>
>
> On Thu, Oct 8, 2009 at 11:05 PM, Mr. Teo En Ming (Zhang Enming)
> <space.time.universe@xxxxxxxxx> wrote:
>> On Thu, Oct 8, 2009 at 10:49 PM, Konrad Rzeszutek Wilk
>> <konrad.wilk@xxxxxxxxxx> wrote:
>>>> > > No I did not. I am not aware of this requirement.
>>>> >
>>>> > Please do.
>>>>
>>>> I will do it before I start the Win XP guest. After I start the Win XP
>>>> domU, I won't be able to access dom0 any more, so I won't be able to
>>>> clear caches in between iterations of the disk I/O benchmark tests.
>>>
>>> You should be able to ssh from the DomU to the Dom0?
>>>
>>
>> In principle I should be able to, but a few minutes after I have started
>> the Win XP domU, dom0's IP address disappears, so I won't be able to
>> ssh from the Win XP domU to dom0.
>>
>>>>
>>>> > >
>>>> > >
>>>> > > >
>>>> > > > Are you using stub-domain or normal QEMU?
>>>> > > >
>>>> > >
>>>> > > I don't think I am using stub-domain. I should be using normal QEMU.
>>>> >
>>>> > You are not based on your guest configuration.
>>>> >
>>>> > .. snip ..
>>>>
>>>> I don't quite understand.
>>>
>>> I missed a comma. It should have said:
>>>
>>> You are not, based on your guest configuration.
>>>
>>
>>
>>
>> --
>> Mr. Teo En Ming (Zhang Enming) Dip(Mechatronics) BEng(Hons)(Mechanical
>> Engineering)
>> Alma Maters:
>> (1) Singapore Polytechnic
>> (2) National University of Singapore
>> Blog URL: http://teo-en-ming-aka-zhang-enming.blogspot.com
>> Email: space.time.universe@xxxxxxxxx
>> MSN: teoenming@xxxxxxxxxxx
>> Mobile Phone: +65-9648-9798
>> Street: Bedok Reservoir Road
>> Republic of Singapore
>>
>

Attachment: 2.6.30-rc3 hdtach 01.JPG
Description: JPEG image

Attachment: 2.6.30-rc3 hdtach 02.JPG
Description: JPEG image

Attachment: 2.6.30-rc3 hdtach 03.JPG
Description: JPEG image

Attachment: 2.6.31.1 hdtach 01.JPG
Description: JPEG image

Attachment: 2.6.31.1 hdtach 02.JPG
Description: JPEG image

Attachment: 2.6.31.1 hdtach 03.JPG
Description: JPEG image

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

