> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> Roger Lucas
> Sent: 24 August 2006 15:32
> To: Petersson, Mats; 'Alex Iribarren'; xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-users] Differences in performance between file
> and LVM-based images
>
> Hi Alex,
>
> May I also express my thanks for these benchmarks, but some of the
> numbers are unlikely to be truly representative of the relative disk
> performance.
>
> > > -- Setup --
> > > Hardware: 2x 3GHz Intel Woodcrest (dual core), Intel S5000PAL,
> > > 1x SATA Western Digital WD1600YD-01N 160GB, 8GB RAM (dom0 using 2G)
>
> According to WD's site, your HDD's maximum _sustained_ read or write
> performance is 61MB/s. You may see more if you hit caches on read or
> write, but if you get numbers bigger than 61MB/s on data that is not
> expected to be in the cache (e.g. a properly constructed disk
> benchmark), then I would be suspicious (unless you are deliberately
> trying to test the disk caching performance).
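
As an aside, a minimal sketch of what "properly constructed" can mean
in practice is to open the device with O_DIRECT, so the reads bypass
the page cache entirely. The device name and the 512-byte alignment
below are assumptions; adjust both for the machine under test.

    #define _GNU_SOURCE             /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t bufsz = 1 << 20;   /* 1MB per read() */
        long long total = 0;
        ssize_t n;
        void *buf;
        int fd;

        /* O_DIRECT requires an aligned buffer; 512 bytes matches the
           sector size of disks of this era. */
        if (posix_memalign(&buf, 512, bufsz) != 0)
            return 1;

        /* /dev/sdb1 is a placeholder for whatever you measure. */
        fd = open("/dev/sdb1", O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        /* Read 1GB straight off the platter, bypassing the cache. */
        while (total < (1LL << 30) && (n = read(fd, buf, bufsz)) > 0)
            total += n;

        close(fd);
        free(buf);
        printf("read %lld bytes uncached\n", total);
        return 0;
    }

Run it under time(1) and divide; if the result still works out to much
more than 61MB/s, something other than the disk is answering.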
>
> <snip>
>
> > >
> > > -- Results --
> > > The first three entries (* control) are the results for the
> > > benchmark from dom0, so they give an idea of expected "native"
> > > performance (Part. control) and the performance of using LVM or
> > > loopback devices. The last three entries are the results as seen
> > > from within the domU.
> > >
> > > "Device" Write Rewrite Read
> Reread
> > > dom0 Part. 32.80 MB/s 35.92 MB/s 2010.32 MB/s
> 2026.11 MB/s
> > > dom0 LVM 43.42 MB/s 51.64 MB/s 2008.92 MB/s
> 2039.40 MB/s
> > > dom0 File 55.25 MB/s 65.20 MB/s 2059.91 MB/s
> 2052.45 MB/s
> > > domU Part. 31.29 MB/s 34.85 MB/s 2676.16 MB/s
> 2751.57 MB/s
> > > domU LVM 40.97 MB/s 47.65 MB/s 2645.21 MB/s
> 2716.70 MB/s
> > > domU File 241.24 MB/s 43.58 MB/s 2603.91 MB/s
> 2684.58 MB/s
>
> The domU file write at 241.24 MB/s looks more than slightly
> suspicious, since your disk can only do 61MB/s. I suspect that the
> writes are being cached in the dom0 (because you have lots of RAM),
> distorting the true disk access speeds. You have 2GB of RAM in Dom0
> and your test is only 900MB, so it is possible that the writes are
> being completely cached in the Dom0. The DomU thinks the write is
> complete, but all that has happened is that the data has moved to
> the Dom0 cache.
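
To make that effect visible, consider this minimal sketch (the file
name and the 900MB size are assumptions chosen to match the test):
the write() loop completes at cache speed, and the real disk is only
paid for at the fsync().

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[1 << 16];              /* 64kB per write() */
        long long total;
        int fd;

        memset(buf, 'x', sizeof buf);
        fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* This loop "finishes" at page-cache speed... */
        for (total = 0; total < (900LL << 20); total += sizeof buf)
            if (write(fd, buf, sizeof buf) < 0) { perror("write"); return 1; }

        /* ...and this is where the 61MB/s disk finally gets paid.
           Without it, cache bandwidth gets reported as if it were
           disk bandwidth. */
        if (fsync(fd) < 0) { perror("fsync"); return 1; }
        close(fd);
        return 0;
    }

Timing the loop alone gives a page-cache number; timing the loop plus
the fsync() gives the disk number.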
>
> The read numbers are also way off, as they are at least 30x the
> disk speed.
>
> It is interesting, however, that the read and re-read numbers, which
> must be coming from a cache somewhere rather than from disk, show
> that partition, LVM and file are very comparable.
>
> > >
> > > "Device" Random read Random write
> > > dom0 Part. 2013.73 MB/s 26.73 MB/s
> > > dom0 LVM 2011.68 MB/s 32.90 MB/s
> > > dom0 File 2049.71 MB/s 192.97 MB/s
>
> The dom0 file random write at 192.97 MB/s also looks wrong, for the
> same reasons as above.
>
> > > domU Part.   2723.65 MB/s    25.65 MB/s
> > > domU LVM     2686.48 MB/s    30.69 MB/s
> > > domU File    2662.49 MB/s    51.13 MB/s
> > >
> > > According to these numbers, file-based filesystems are generally
> > > the fastest of the three alternatives. I'm having a hard time
> > > understanding how this can possibly be true, so I'll let the more
> > > knowledgeable members of the mailing list enlighten us. My guess
> > > is that the extra layers (LVM/loopback drivers/Xen) are caching
> > > stuff and ignoring IOZone when it tries to write synchronously.
> > > Regardless, it seems like file-based filesystems are the way to
> > > go. Too bad, I prefer LVMs...
> >
> > Yes, you'll probably get file-caching on Dom0 when using a
> > file-based setup, which doesn't happen with the other setups.
>
> Absolutely. Hence the inflated readings for the file-based tests.
>
> >
> > The following would also be interesting to test:
> > 1. Test with a noticeably larger test area (say 10GB or so).
>
> You need to run with test files that are at least 5x the available
> cache memory before you can start to trust the results. Given that
> you have 2GB of memory on Dom0, 10GB would be the smallest test that
> makes sense.
I didn't actually look at the numbers, I just picked a "much larger
number" - lucky guess, I suppose... ;-)
>
> I would be very interested to see the results from such a test.
>
> As one final question, what is the scheduling configuration for Dom0
> and DomU with these tests? Have you tried different configurations
> (period/slice) for the DomU tests to see if it makes any difference?
Ah, the scheduler for all intents and purposes SHOULD be the credit
scheduler, as it gives the best possible balance between the domains,
unlike the older schedulers, which can't, for example, move a domain
from one CPU to another. (Period/slice are SEDF parameters; the credit
scheduler uses weight and cap instead.)
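
Assuming a tree where the credit scheduler is active, its parameters
can be checked and adjusted from Dom0 with xm; the weight of 512 below
is just an illustration (the default is 256):

    # show the current weight and cap for Dom0
    xm sched-credit -d Domain-0
    # give Dom0 double the default weight so it can service I/O promptly
    xm sched-credit -d Domain-0 -w 512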
--
Mats
>
> Best regards,
>
> Roger
>
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users