>-----Original Message-----
>From: Jan Kalcic [mailto:jandot@xxxxxxxxxxxxxx]
>Sent: Tuesday, 19 February 2008 11:31
>To: Joris Dobbelsteen
>Cc: deshantm@xxxxxxxxx; xen-users@xxxxxxxxxxxxxxxxxxx
>Subject: Re: [Xen-users] Exporting a PCI Device
>
>Joris Dobbelsteen wrote:
>>> -----Original Message-----
>>> From: Jan Kalcic [mailto:jandot@xxxxxxxxxxxxxx]
>>> Sent: Monday, 18 February 2008 23:44
>>> To: Joris Dobbelsteen
>>> Cc: deshantm@xxxxxxxxx; xen-users@xxxxxxxxxxxxxxxxxxx
>>> Subject: Re: [Xen-users] Exporting a PCI Device
>>>
>>>
>> [snip]
>>
>>> I did some tests and it's actually quite slow, both reading and
>>> writing: roughly 50% when attached to the domU as a block device.
>>> It reduces complexity, but too much performance is lost.
>>>
>>
>> For some odd reason I'm seeing a similar thing on my box with
>> attached RAID-0 & LVM: the domU only reaches 50%. I don't know the
>> cause; I've only heard comments that it had to do with LVM oddities
>> that RAID seemed to aggravate. Nevertheless, the dom0 reaches full
>> speed.
>>
>> Coincidence or a sign of deeper trouble?
>>
>> My setup is just some standard (cheap) SATA disks; it's a personal
>> system. It runs Xen 3.1.2, with the 2.6.20 kernel on the dom0 and
>> the Debian Etch 2.6.18 kernels on the guests. Tests were done with
>> Bonnie++ (the version that's part of Debian Etch).
>>
>> To rule out the virtual block device playing tricks, I would try to
>> see what happens with a slower disk on the SAN and even with a local
>> disk (in the system itself).
>>
>> - Joris
>>
>>
>>
>Hi Joris,
>
>I did some tests on the local disk (software RAID) of another
>system, just to get an idea of the performance. A bit of performance
>is lost; dom0 tends to be a little quicker, though the difference is
>not big enough to be problematic. It was different on the other
>system with the SAN, which I could not test yet. I'll do that soon
>and let you know. The following is the output of the tests I did.
Jan,
Please try to use bonnie++ or some other specialized benchmarking tool.
"dd" is not really meant for benchmarking: you will see effects from
caching and lazy writeback algorithms. Bonnie++ tries to mitigate these
by making the data set at least twice the memory size (and by
performing explicit write barriers). Also, hdparm is intended to be
used on real disks, not on virtual devices, so you might get strange
effects there.
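Something along these lines should give more trustworthy numbers (the
mount point and size are only examples; -s is the data set size in MB
and should be at least twice the domU's RAM):

# bonnie++ -d /mnt -s 1024 -u nobody

Here -d is a directory on the device under test, and -u is required
because bonnie++ refuses to run as root without being given a user. If
you do stick with dd for a quick check, at least add conv=fdatasync so
the timing includes flushing the page cache:

# dd if=/dev/zero of=/mnt/test count=1000 bs=1M conv=fdatasync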
What IS a very good thing is that you did multiple runs, but these show
HUGE variations, up to nearly 30% from the mean value. This holds for
both dom0 and domU. If everything is well, the variation between runs
should be a lot smaller.
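For instance, your three domU dd runs (55.8, 84.6 and 57.7 MB/s)
average about 66 MB/s, so the fastest run sits almost 30% above the
mean; the dom0 runs (79.1, 57.3 and 85.6 MB/s) average 74 MB/s, with
the slowest more than 20% below it.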
Also, you should use the same area of the disk for all tests: transfer
speeds drop towards the end of the disk, since the outer tracks pass
more sectors per revolution than the inner ones.
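One way to keep the comparison fair is to read the same fixed region
of the raw device from both dom0 and domU (the device names below are
just the ones from your output; adjust as needed):

# dd if=/dev/sdd1 of=/dev/null count=1000 bs=1M iflag=direct

iflag=direct bypasses the page cache, so repeated runs are not
inflated by already-cached data.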
>What do you mean when you say "deeper trouble"? Do you refer
>to Xen code or something about the infrastructure configuration?
Yes, it could be that components in your system behave poorly when
combined in a particular way, even though each works fine on its own.
That said, your results for dom0 and domU seem to be similar for a
local disk.
>Thanks,
>Jan
>
>
>domU
>
># dd if=/dev/zero of=/mnt/test count=1000 bs=1M
>1000+0 records in
>1000+0 records out
>1048576000 bytes (1.0 GB) copied, 18.7752 seconds, 55.8 MB/s
>
>1000+0 records in
>1000+0 records out
>1048576000 bytes (1.0 GB) copied, 12.395 seconds, 84.6 MB/s
>
>1000+0 records in
>1000+0 records out
>1048576000 bytes (1.0 GB) copied, 18.1577 seconds, 57.7 MB/s
>
>
># hdparm -t /dev/sdd1 (three times)
>
>/dev/sdd1:
> Timing buffered disk reads: 116 MB in 3.00 seconds = 38.62 MB/sec
>
>/dev/sdd1:
> Timing buffered disk reads: 156 MB in 3.01 seconds = 51.80 MB/sec
>
>/dev/sdd1:
> Timing buffered disk reads: 148 MB in 3.03 seconds = 48.85 MB/sec
>-------------------------------------------------------------
>dom0
>
># dd if=/dev/zero of=/data/test count=1000 bs=1M (three times)
>1000+0 records in
>1000+0 records out
>1048576000 bytes (1.0 GB) copied, 13.2577 seconds, 79.1 MB/s
>
>1000+0 records in
>1000+0 records out
>1048576000 bytes (1.0 GB) copied, 18.3124 seconds, 57.3 MB/s
>
>1000+0 records in
>1000+0 records out
>1048576000 bytes (1.0 GB) copied, 12.244 seconds, 85.6 MB/s
>
># hdparm -t /dev/md3 (three times)
>
>/dev/md3:
> Timing buffered disk reads: 164 MB in 3.02 seconds = 54.37 MB/sec
>
>/dev/md3:
> Timing buffered disk reads: 172 MB in 3.01 seconds = 57.21 MB/sec
>
>/dev/md3:
> Timing buffered disk reads: 180 MB in 3.01 seconds = 59.85 MB/sec
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users