WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: [Xen-users] iSCSI vs NFS

To: frank.pikelner@xxxxxxxxxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] iSCSI vs NFS
From: Grant McWilliams <grantmasterflash@xxxxxxxxx>
Date: Tue, 2 Feb 2010 11:25:53 -0800
Cc: Andy Pace <APace@xxxxxxxxxxxxx>, Jeff Sturm <jeff.sturm@xxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 02 Feb 2010 11:26:55 -0800
In-reply-to: <1265138059.19057.53.camel@nc155>
References: <B915EE0870BDF348816B665DBE85F1652A65125FC1@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <64D0546C5EBBD147B75DE133D798665F03F3F5F6@xxxxxxxxxxxxxxxxx> <ed123fa31002021100p764e64dfp4002e27854f6d624@xxxxxxxxxxxxxx> <1265138059.19057.53.camel@nc155>
On Tue, Feb 2, 2010 at 11:14 AM, Frank Pikelner <frank.pikelner@xxxxxxxxxxxxxxxxxxxxxxxxxx> wrote:

>
> "Tests show". Famous last words...  I've been throwing around a lot of
> ideas in this same vein. I currently have 42 VMs running off the same
> disk in a classroom environment. Things are fine until everyone starts
> installing software or formatting their disks at the same time.
>
> From the hearsay I've heard, AoE is the fastest network block
> storage, but it's still hard to beat NFS. The problem comes when
> you want more than one VM to access a storage device: performance
> goes in the toilet because the cluster filesystems are very slow.
> I, however, don't make decisions based on hearsay, so in the coming
> months I'll be testing all combinations of NFS, iSCSI, and AoE with
> GFS and OCFS, plus any other possibility I can find in common
> kernels. I'll be comparing these to the speed of local disk access
> via ext3 to see how much of a hit (or advantage?) we take by moving
> storage out of the box. Of course, to do fast migration the storage
> has to be somewhere else...
>
> Once testing is done I'll be posting the numbers. It's amazing how
> little benchmarking takes place. I did extensive tests on LVM vs disk
> files and have still not seen any other numbers on this. Oh well, I
> guess that will be my contribution.

Grant,

With what type of drives are you expecting to do the testing? I would be
interested in any numbers, but have started to move over to solid state
drives.

Best,

Frank


My tests will be relative to the network protocols. I'd assume the differences between local
storage and network storage will be similar no matter the backend hardware.
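For anyone wanting to reproduce relative numbers like these, here is a minimal sketch of a sequential-write comparison across mount points. The mount paths and the 64 MiB test size are assumptions for illustration, not what was actually used; a buffered write plus fsync only gives a rough figure, since short runs still include page-cache effects.

```python
import os
import time

def write_throughput_mb_s(path, total_mb=64, block_mb=1):
    """Sequential-write smoke test: write total_mb of zeros in block_mb
    chunks into path, fsync, and return MB/s. Rough numbers only."""
    block = b"\0" * (block_mb * 1024 * 1024)
    fname = os.path.join(path, "bench.tmp")
    start = time.monotonic()
    with open(fname, "wb") as f:
        for _ in range(total_mb // block_mb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data out before stopping the clock
    elapsed = time.monotonic() - start
    os.unlink(fname)
    return total_mb / elapsed

# Candidate mount points are hypothetical -- substitute your own:
for target in ("/mnt/local-ext3", "/mnt/nfs", "/mnt/iscsi", "/mnt/aoe"):
    if os.path.isdir(target):
        print(f"{target}: {write_throughput_mb_s(target):.1f} MB/s")
```

A dedicated tool such as fio (with direct I/O and mixed read/write patterns) would give more trustworthy numbers; this sketch only shows the shape of the comparison.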

I too am looking into RAIDs of SSDs for I/O reasons; I need access time more than I need throughput.
Drives are getting so large that if I throw eight 1.5 TB drives in an array for speed reasons, I end up with
5x the storage that I need for my project. I don't have a problem spending the same and getting less storage
if I get more performance. We'll see.
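The sizing trade-off above is easy to make concrete. A rough usable-capacity calculation for common RAID levels (using the eight 1.5 TB drives from the example; formatting and filesystem overhead ignored):

```python
def usable_tb(drives, size_tb, level):
    """Approximate usable capacity in TB for a RAID array."""
    if level == 0:      # striping: all raw capacity, no redundancy
        return drives * size_tb
    if level == 1:      # mirroring: half the raw capacity
        return drives * size_tb / 2
    if level == 5:      # one drive's worth reserved for parity
        return (drives - 1) * size_tb
    if level == 10:     # striped mirrors: half the raw capacity
        return drives * size_tb / 2
    raise ValueError(f"unsupported RAID level: {level}")

# Eight 1.5 TB drives, as in the example above:
for level in (0, 10, 5):
    print(f"RAID {level}: {usable_tb(8, 1.5, level):.1f} TB usable")
# RAID 0 yields 12.0 TB, RAID 10 yields 6.0 TB, RAID 5 yields 10.5 TB
```

Even the most redundancy-heavy layout here leaves several times more capacity than a project needing ~2.4 TB, which is the "5x the storage that I need" point.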


Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users