WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-users

Re: [Xen-users] xen and SAN.

To: "Florian Manschwetus" <florianmanschwetus@xxxxxx>, "John Madden" <jmadden@xxxxxxxxxxx>
Subject: Re: [Xen-users] xen and SAN.
From: "Nick Couchman" <Nick.Couchman@xxxxxxxxx>
Date: Mon, 21 Sep 2009 15:22:45 -0600
Cc: William <xen-mailinglist@xxxxxxxxxx>, xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>, Mauro <mrsanna1@xxxxxxxxx>
>>> On 2009/09/21 at 12:28, Florian Manschwetus <florianmanschwetus@xxxxxx> wrote:
Am 21.09.2009 20:16, schrieb John Madden:
>> In my case, I want to use 3 physical servers and 1 SAN to manage 10-15
>> virtual servers.
>>
>> What is the best solution : ISCSI (1Gb/s) or Fibre Channel (4Gb/s)?
>
> That depends on your needs.  I'm personally skittish about passing disk
> blocks over ethernet and prefer FC, but it's expensive and if you
> believe [some of] the pundits, everything is going to ethernet
> eventually anyway.  I think it's still safe to say though that at this
> time, if you need really reliable disk at theoretically-higher
> performance and you can afford it, go with FC.
>
>> What kind of device are you using ?
>
> 4Gb/s FC to EMC DMX-3 and IBM DS-4700 through Brocade fabrics.
>
>> For the moment, I'm interested in HP technologies with
>> bladesystem c3000 and msa2324.
>
> FWIW, our experience with HP's blades has been less than thrilling;
> I wouldn't recommend them.  We have a few (5) of IBM's BladeCenters,
> though, and we've been extremely happy with them.  I think they're more
> expensive up-front (?) but well worth it.
>
> John
>
>
>
FC is dead, go for 10 GB/s iSCSI (based on 10 GB/s Ethernet); it's also
cheaper...

Florian


I'm not sure I buy that. If FC were dead, companies would not still be developing technologies based upon it, like 8Gb/s FC, which is alive and well. The major server manufacturers still offer their servers (1U, 2U, blade, etc.) with FC HBAs, and Cisco is still building, selling, and supporting FC switches. FC is far from dead.

Furthermore, even on 10Gb (not GB, Gb) Ethernet, iSCSI still has higher latency and higher protocol overhead than FC. So even if FC's raw throughput is lower (even much lower), depending on the I/O patterns on those FC connections, FC may actually yield better performance than 10Gb iSCSI. Just my two bits...
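A back-of-envelope sketch of the latency point above: for small random I/O at low queue depth, per-operation latency dominates total time, so a lower-latency link can out-perform a higher-bandwidth one. The latency figures below are illustrative assumptions for the sake of the arithmetic, not measurements of any particular FC or iSCSI stack.

```python
def effective_iops(link_gbps, per_op_latency_us, io_size_kib=4):
    """Estimate IOPS at queue depth 1.

    Time per operation = wire transfer time + fixed per-op protocol
    latency.  All numbers are rough, illustrative assumptions.
    """
    transfer_s = (io_size_kib * 1024 * 8) / (link_gbps * 1e9)
    latency_s = per_op_latency_us / 1e6
    return 1.0 / (transfer_s + latency_s)

# Assumed (hypothetical) per-op latencies: FC stack ~50 us,
# iSCSI-over-TCP stack ~150 us.
fc = effective_iops(link_gbps=4, per_op_latency_us=50)
iscsi = effective_iops(link_gbps=10, per_op_latency_us=150)
print(f"4Gb FC:     {fc:,.0f} IOPS")
print(f"10Gb iSCSI: {iscsi:,.0f} IOPS")
```

With those assumed latencies, the 4Gb FC link completes more small I/Os per second than the 10Gb iSCSI link, despite having well under half the raw bandwidth; for large sequential transfers the bandwidth term dominates instead and the comparison reverses.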
 
-Nick



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users