This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] RE: iscsi vs nfs for xen VMs

To: "<xen-users@xxxxxxxxxxxxxxxxxxx>" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] RE: iscsi vs nfs for xen VMs
From: Ryan Holt <ryan@xxxxxxxxxxxx>
Date: Sun, 30 Jan 2011 17:14:40 +0000
Accept-language: en-US
Delivery-date: Sun, 30 Jan 2011 09:16:10 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcvAoNEcunE6vbk+SDKa0rcyvkR/0Q==
Thread-topic: iscsi vs nfs for xen VMs
Hi all,

I've been reading this thread with interest for the last few weeks and was wondering about the feasibility of this type of cluster for the storage of my VM servers. I have a mix of vSphere/ESX servers along with Hyper-V and Xen on our network, for which I'd like to build a fault-tolerant storage solution. Based on the comments in this thread I've been toying with a few ideas...

Attached is a rough outline of what I'm considering putting together. The top layer will be the servers with the actual disks; I'm looking at the Areca 1880 RAID card series to build a few 8-disk RAID6 arrays in each node. Those nodes will then export their RAID arrays (sda) to the mid-level iSCSI targets (head nodes), which will use cLVM with mirroring to build volume groups spanning the two disk servers. The head nodes would have their own IP addresses and be used as targets by the VM servers, which would use MPIO to multipath between the two heads. The VM servers themselves would handle the filesystems.

Does this make sense / seem feasible? I'm trying to eliminate as many single points of failure as possible. I believe this design would tolerate a failure at each level without any interruption in service, while still letting all disks contribute to the performance of the entire pool.
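For anyone trying to picture the stack, here is a rough command-level sketch of the design described above, assuming standard Linux tooling (lvm2 with clvmd, tgt for the iSCSI targets, open-iscsi and multipath-tools on the VM servers). All hostnames, IPs, device names, IQNs, and sizes are hypothetical examples, not anything from the actual setup:

```shell
## On each head node: assemble a mirrored LV across LUNs imported from
## the two disk servers (clvmd must be running cluster-wide for cLVM).
pvcreate /dev/sdb /dev/sdc                  # one LUN from each disk server
vgcreate --clustered y vg_san /dev/sdb /dev/sdc
lvcreate -m1 -L 500G -n lv_vmstore vg_san   # one mirror leg per disk server

## Export the mirrored LV over iSCSI with tgt.
tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2011-01.com.example:san.vmstore
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/vg_san/lv_vmstore
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

## On each VM server: log in to both head nodes so multipathd can
## collapse the two paths into one multipathed block device.
iscsiadm -m discovery -t sendtargets -p 10.0.0.11   # head node 1
iscsiadm -m discovery -t sendtargets -p 10.0.0.12   # head node 2
iscsiadm -m node -L all
multipath -ll      # should show one map with a path via each head
```

This is a configuration sketch, not something to run as-is: the cLVM mirror in particular needs a working cluster stack underneath it, and the failover behaviour would want careful testing before any VMs depend on it.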


Attachment: SAN Network.jpg
Description: SAN Network.jpg

Xen-users mailing list