
xen-users

Re: [Xen-users] Aoe or iScsi???

To: Adi Kriegisch <kriegisch@xxxxxxxx>
Subject: Re: [Xen-users] Aoe or iScsi???
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Thu, 8 Jul 2010 09:54:13 +0200
Cc: Gilberto Nunes <gilberto.nunes@xxxxxxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 08 Jul 2010 00:55:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100705164319.GE14460@xxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <1278331728.1787.12.camel@note-311a> <1278337995.1787.40.camel@note-311a> <20100705164319.GE14460@xxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.4 (Linux/2.6.31.12-0.2-desktop; KDE/4.3.5; x86_64; ; )
On Monday 05 July 2010 18:43:20 Adi Kriegisch wrote:
> Hi!
> 
> > I run bonnie++ like this:
> > bonnie++ -d /tmp/ -s 1000 -r 500 -n 1 -x 1 -u root |  bon_csv2txt >
> > test.txt
> 
> just checking: your storage server has 500MB RAM? (-r)
> 
> > This is the result:
> >
> > Version  1.03c      ------Sequential Output------ --Sequential Input- --Random-
> >                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> > Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> > bacula-selbet 1000M 53004  98 189796  37 97783  17 62844  99 1505552  99 +++++
> 
> [SNIP]
> 
> > Does it tell you something?
> 
> Yes, your storage system can handle ~190MB/s sequential writes. This means
> you will not get full peak performance out to your clients, since a single
> gigabit interface is limited to roughly 120MB/s.
> Your sequential read speed (~1.5GB/s) shows that you misspecified the RAM
> size on your bonnie command line, because it is _WAY_ beyond what your
> disks can handle. (Good SATA disks give you above 100MB/s read speed;
> reading at that rate would hint at 15 or more disks, where the limit is
> really bus speed and administrative overhead.)
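> Just as an illustration (the path and sizes below are placeholders, not
> taken from your setup): if the storage box actually has, say, 4GB of RAM,
> something like
> 
>   bonnie++ -d /mnt/test -s 8192 -r 4096 -n 1 -x 1 -u root
> 
> keeps the test file set at twice the RAM size, so the page cache cannot
> inflate the numbers the way it did here.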
> 
> What you are really interested in (or should be) is IOPS (Input/Output
> Operations per Second): a typical server or workstation, no matter whether
> virtual or 'real', does a mixture of sequential and random I/O.
> Every server you run has its own partition on your storage backend. Just to
> get a better idea of what I am talking about, consider the following:
> every virtual machine does a sequential file read. What does that mean on
> the storage backend? -- There are 13 files being read at 13 different
> positions at the same time. This is a (close to) random I/O workload. Disk
> heads fly around trying to satisfy all requests, so there is no way you
> will get close to any high MB/s value: your disks are doing random I/O.
> Measuring sequential peak performance on network storage is pointless for
> this very reason. (People on this list were suggesting it just to verify
> that your disk subsystem works fine.)
> To get an idea of what performance you might expect, you can do the
> following:
> 1. calculate the IOPS you can expect. You may use one of the online
>    calculators that are available[1].
>    This begins with the IOPS per disk, for which you might need to
>    consult your vendor's datasheet or look up the disks here[2]. You'll
>    immediately notice that SAS disks offer twice the IOPS of SATA
>    drives, or more.
>    When calculating IOPS you also need to specify a workload, i.e. the
>    read/write ratio. Average file servers see around 80% reads and 20%
>    writes. Read and write operations differ in latency, and the more
>    latency a request has, the fewer requests can be handled per second.
>    (This is also the reason why local storage will always deliver more
>    IOPS than network storage: the network transport simply adds latency.)
>    A rough worked example of such a calculation follows after this list.
> 2. measure the IOPS you get. I personally prefer using FIO[3], which is
>    readily available in Debian. FIO is fully configurable; there are
>    however some reasonable examples which you might use:
>    /usr/share/doc/fio/examples/iometer-file-access-server mimics a typical
>    file server workload with 80% reads. The IOPS calculator above[1] can
>    only calculate IOPS for a fixed block size, whereas this workload mixes
>    block sizes from 512 bytes to 64k, so the resulting IOPS cannot be
>    compared directly. If you want to do that, you need to specify 4k
>    blocks only in the config.
>    WARNING: Do not use IOMeter itself on Linux: it produces incorrect
>    results because it cannot use aio on Linux and is therefore unable to
>    queue requests.
>    Using the stock 'iometer-file-access-server' profile you should get
>    something like:
>    3 disks/RAID5: 200-250 IOPS
>    4 disks/RAID5: 270-320 IOPS
>    5 disks/RAID5: 340-390 IOPS
>    and so on (for SATA disks with AoE). A stripped-down jobfile along
>    these lines is sketched after this list.
> 3. find the bottleneck in case you are not getting what you expect.
>    Measure IOPS on the storage server with 'iostat 1' ("tps" roughly
>    corresponds to IOPS); see the example after this list.
>    ...ok, writing up how to debug a storage backend will take another
>    hour... ask me if necessary.
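> 
> As a rough worked example for step 1 (the per-disk figure is a typical
> value for a 7200rpm SATA drive, and the calculator[1] may use a slightly
> different model):
> 
>   raw IOPS       = number of disks * IOPS per disk
>                  = 4 * 80 = 320
>   effective IOPS = raw IOPS / (read% + write penalty * write%)
>                  = 320 / (0.8 + 4 * 0.2) = 200     (RAID5 write penalty = 4)
> 
> So four SATA disks in RAID5 with an 80/20 workload end up in the low
> hundreds of IOPS -- the same ballpark as the measured numbers above.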
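> 
> For step 2, a stripped-down jobfile in the spirit of the stock
> iometer-file-access-server example, but with fixed 4k blocks so the
> result is comparable to the calculator[1] (the directory and file size
> are placeholders, adjust them to your setup):
> 
>   # 80/20 random read/write mix over a 4GB file, 4k blocks only
>   [global]
>   ioengine=libaio
>   # O_DIRECT, so the page cache does not skew the result
>   direct=1
>   rw=randrw
>   rwmixread=80
>   bs=4k
>   iodepth=64
>   # keep the file well beyond the storage server's RAM
>   size=4g
>   runtime=60
>   time_based
> 
>   [vm-storage]
>   # placeholder: point this at the AoE/iSCSI-backed mount under test
>   directory=/mnt/test
> 
> Run it with 'fio <jobfile>' and look at the iops= figures in the summary.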
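> 
> For step 3, the extended iostat view is usually more telling than the
> plain 'tps' column:
> 
>   iostat -x 1
> 
> The r/s and w/s columns per device add up to roughly the IOPS being
> served, and %util shows how close the disks are to saturation.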
> 
> -- Adi
> 
> [1] http://www.wmarow.com/strcalc/
> [2] http://www.wmarow.com/strdir/hdd/
> [3] http://freshmeat.net/projects/fio
> 
> PS: Maybe there should be a wiki page about how to plan and implement a
> storage backend for a Xen server? -- Then others can add their knowledge
> more easily.
> ...and the question pops up every once in a while.
> 

Adi,

I have been looking at FIO, but which jobfile do you find optimal for testing
network storage for Xen?


cheers,

B.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users