On Thu, Feb 21, 2008 at 09:08:32PM +0100, Stephan Seitz wrote:
> Hi Pasi,
>
> I wasn't able to get Windows XP Professional x64 running with GPLPV until
> James released 0.8.0 of his great drivers.
>
> So my answer is a bit delayed:
>
> Equipment: Core 2 Duo, 2.66 GHz, Areca PCI-X RAID 6 over 8 disks
>
>
> System is running Xen 3.2.0 64-bit; the dom0 kernel is 2.6.18.8
>
> The tested HVM domU is running XP Pro x64 (Version 2003, SP2), benchmarked
> with IOmeter 2006-07-27 stable.
>
Thanks for the test results!

So it seems the Windows GPLPV drivers give almost 4x the performance of
emulated QEMU disks. Nice!
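A rough check against the IOmeter figures quoted below (trivial Python, just
the two divisions; the numbers are Stephan's, rounded):

    # GPLPV vs. emulated QEMU disk, from the mixed 50/50 tests below
    print(14180 / 3650.0)  # 4k pattern:  ~3.9x the IOPS
    print(215 / 84.0)      # 32k pattern: ~2.6x the MB/s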
-- Pasi
> --- Disk I/O using a GPLPV disk:
>
> pattern: 4k, 50% read, 50% write
>
> total iops: ~14180
> read ~7045-7065
> write ~7025-7045
> total MB/s: ~55
> read ~27.5
> write ~27.5 (looks like 50%...)
>
> avg IO response time: ~0.071 ms
> max IO response time: ~19.438 ms
> cpu utilization: 0% (??)
>
> pattern: 32k, 50% read, 50% write
>
> total iops: ~6900
> read ~3435
> write ~3450
> total MB/s: ~215
> read ~107.5
> write ~107.5
>
> avg IO response time: ~0.145 ms
> max IO response time: ~21.525 ms
> cpu utilization: ~5.52%
>
>
> pure read operations with a 32k pattern show about 280 MB/s throughput
> pure write operations with a 512B pattern show about 8.5 MB/s throughput
>
>
> --- Disk I/O using a QEMU disk:
>
> pattern: 4k, 50% read, 50% write
>
> total iops: ~3650
> read ~1828
> write ~1790
> total MB/s: ~14
> read ~7
> write ~7
>
> avg IO response time: ~0.276 ms
> max IO response time: ~55.242 ms
> cpu utilization: 98.7%
>
> pattern: 32k, 50% read, 50% write
>
> total iops: ~3064
> read ~1370-1390
> write ~1360
> total MB/s: ~84
> read ~42-44
> write ~40-42
>
> avg IO response time: ~0.387 ms
> max IO response time: ~77.450 ms
> cpu utilization: ~76.8%
>
>
> pure read operations with a 32k pattern show about 94 MB/s throughput
> pure write operations with a 512B pattern show about 1.8 MB/s throughput
>
>
>
> --- (file-based) disk I/O in dom0 (random, using dbench):
>
> 10 workers on ext3,defaults: ~660 MB/s
> 10 workers on xfs,defaults: ~620 MB/s
>
> hdparm shows 3.3 GB/s cached reads and 366 MB/s buffered disk reads
>
>
>
> Pasi Kärkkäinen wrote:
> >On Tue, Feb 05, 2008 at 10:22:55AM +0200, Pasi Kärkkäinen wrote:
> >>On Mon, Feb 04, 2008 at 06:21:41PM +0200, Pasi Kärkkäinen wrote:
> >>>On Sun, Feb 03, 2008 at 12:30:51PM +0100, Stephan Seitz wrote:
> >>>>If someone knows a Windows benchmarking suite, I'll do real tests. I know
> >>>>my recent tests aren't comparable against any reference values, but I'm a
> >>>>little bit handicapped on Windows ;)
> >>>>
> >>>You could use IOmeter (http://www.iometer.org). It's a widely used disk
> >>>benchmarking tool on Windows.
> >>>
> >>>It's easy to run benchmarks using different request sizes, different
> >>>numbers of outstanding IOs, etc.
> >>>
> >>With small request sizes (512 bytes or 4k) you can measure how many IOPS
> >>(IO operations per second) you can get, and with big request sizes (64+ kB)
> >>you can measure how much throughput you can get.
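> >>
> >>For example, request size times IOPS gives the implied throughput (a
> >>back-of-the-envelope Python snippet; the example numbers are arbitrary):
> >>
> >>    # MB/s implied by an IOPS figure at a given request size (in kB)
> >>    def mb_per_s(iops, req_kb):
> >>        return iops * req_kb / 1024.0
> >>
> >>    print(mb_per_s(10000, 4))   # 10000 IOPS at 4 kB  = ~39 MB/s
> >>    print(mb_per_s(4000, 64))   # 4000 IOPS at 64 kB = ~250 MB/s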
> >>
> >>The number of outstanding IOs controls how many IO operations are in
> >>flight at the same time (the optimal value depends on the storage used and
> >>on the queue depth of the hardware, drivers and kernel).
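> >>
> >>A minimal sketch of the idea (plain Python threads, not how IOmeter works
> >>internally; the device path, request size and thread count are
> >>placeholders):
> >>
> >>    import os, random
> >>    from concurrent.futures import ThreadPoolExecutor
> >>
> >>    DEV, REQ, OUTSTANDING = "/dev/sdb", 4096, 16  # placeholders
> >>    fd = os.open(DEV, os.O_RDONLY)
> >>    size = os.lseek(fd, 0, os.SEEK_END)  # device size in bytes
> >>
> >>    def one_io(_):
> >>        # one aligned random read; up to OUTSTANDING run concurrently
> >>        off = random.randrange(size // REQ) * REQ
> >>        return len(os.pread(fd, REQ, off))
> >>
> >>    with ThreadPoolExecutor(max_workers=OUTSTANDING) as pool:
> >>        total = sum(pool.map(one_io, range(10000)))
> >>    os.close(fd)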
> >>
> >>Note that IOmeter wants to use raw disk devices, so don't create any
> >>partitions or format the disk before using IOmeter.
> >>
> >
> >And please share your benchmarking results :)
> >
> >-- Pasi
>
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users