Re: [Xen-users] Xen Disk I/O performance vs native performance: Xen I/O

Hi,

OK, so I did the hdparm + bonnie++ benchmarks on the server and posted
all the results to pastebin:
http://pastebin.ca/874800
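
For reference, the invocations were roughly of this form (the device
name, mount point, and sizes here are illustrative, not my exact
commands; the exact runs are in the pastebin):

    # raw sequential read timing, cached (-T) and buffered (-t)
    hdparm -tT /dev/md0

    # filesystem-level benchmark; -s is the dataset size in MB
    bonnie++ -d /mnt/test -s 16384 -u root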

To put it in a nutshell: on a RAID array, Xen I/O is even worse than I
previously thought...

To give more details:
1] LVM vs a native partition doesn't seem to have much effect, either
with or without Xen (so I don't seem to run into the problems Mitch
Kelly talked about in a previous post).

(Compare test 1 vs test 2, as well as test 7 vs test 8.)
However, note that I haven't had a chance to compare LVM vs native on
top of RAID directly (this would mean re-installing my whole server,
etc.). I only tested that comparison on non-RAID disks.


2] Xen Dom0 vs a non-Xen kernel doesn't seem to make a huge performance
difference. bonnie++ gave slightly different results, but I guess we can
attribute that to the experimental nature of the benchmarks.
(Compare test 3 vs test 5.)

3] Xen Dom0 vs Xen DomU performance is like night and day!
(Compare test 4 vs test 5.)
Additionally, the hdparm results are completely different each time the
command is run, so we can only really compare the bonnie++ results,
which are not very consistent either... It's as if Xen over LVM over
RAID simply produces inconsistent results. Would there be a reason for
that?
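
To illustrate the variance: inside the DomU, even back-to-back runs
such as

    # /dev/hda1 is the exported device as seen from the DomU
    for i in 1 2 3; do hdparm -t /dev/hda1; done

report a noticeably different throughput figure every time.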

So, if we take, for instance, Sequential Input by block, we have 37MB/s
(worst case) to 74MB/s (best case) vs 173MB/s. This makes Xen SUPER
slow: even the best result I could get is less than half of the native
figure (173/74 is roughly 2.3).

4] However, the weird thing is that if you look at test 6 vs test 1, you
can see that Xen over LVM without RAID does not seem to degrade
performance. I should have used a bigger file size, but test 7 vs test 8
definitely confirms the trend, even if the numbers in test 6 are not
completely exact because of the small disk size...


So, in conclusion, I am lost:
On the one hand, it seems that Xen, when used on top of a RAID array, is
way slower; but when used on top of a plain old disk, it seems to give
pretty much native performance. Is there a potential link between Xen
and RAID vs non-RAID performance? Or maybe the problem is caused by the
Xen + RAID + LVM combination?

What do you think about that?

regards,
Sami Dalouche

On Fri, 2008-01-25 at 22:42 +0100, Sami Dalouche wrote:
> OK, so I'm currently running the bonnie++ benchmarks and will report the
> results as soon as everything is finished.
> But in any case, I am not trying to create super-accurate benchmarks. I
> am just trying to say that the VM's I/O is definitely slower than the
> Dom0's, and I don't even need a benchmark to tell that everything is at
> least twice as slow.
> 
> It seriously is super slow, so my original post was about knowing how
> much slower (vs native performance) is acceptable.
> 
> Concerning your question, I don't quite understand it...
> What I did was (a minimal version of these steps is sketched below):
> 1] Created an LV on the real disk
> 2] Exported this LV as a Xen disk using
> disk = [ 'phy:/dev/mapper/mylv,hda1,w' ]
> 3] Mounted it on the DomU by mount /dev/mapper/mylv
> 
> Isn't that what I'm supposed to do?
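> 
> For reference, the rough sequence (volume group name, LV size, and
> filesystem type are illustrative guesses, not my exact commands):
> 
>     # in Dom0: create the LV and put a filesystem on it
>     lvcreate -L 20G -n mylv vg0
>     mkfs.ext3 /dev/mapper/vg0-mylv
> 
>     # in the DomU config file, export it as hda1:
>     disk = [ 'phy:/dev/mapper/vg0-mylv,hda1,w' ]
> 
> (The DomU should then see the disk as /dev/hda1.)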
> regards,
> Sami Dalouche
> 
> On Fri, 2008-01-25 at 16:28 -0500, John Madden wrote:
> > > obviously it's the same filesystem type, since it's the same
> > > 'partition'.  of course, different mount flags could in theory affect
> > > measurements.
> > 
> > Sorry, I must've missed something earlier.  I didn't realize you were
> > mounting and writing to the same filesystem in both cases.  But this is
> > interesting -- if you're mounting a filesystem on an LV in dom0 and then
> > passing it as a physical device to domU, how does domU see it?  Does it
> > then put an LV inside this partition?
> > 
> > > > Please use bonnie++ at a minimum for i/o benchmarking.  dd is not a
> > > > benchmarking tool.
> > > 
> > > besides, no matter what tool you use to measure, use datasets at the
> > > very least three or four times the largest memory size.
> > 
> > Exactly.  bonnie++ (for example) provides the -r argument, which causes
> > it to deal with i/o at twice your memory size to avoid cache benefits.  
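> > For instance (memory size illustrative, assuming a 2GB machine):
> > 
> >     # bonnie++ sizes its dataset to 2x the given RAM size
> >     bonnie++ -d /mnt/test -r 2048 -u nobody
> > 
> > would push the dataset to roughly 4GB, well past the page cache.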
> > 
> > John
> > 
> > 
> 
> 


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
