This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] SAN for Xen - NFS-RDMA or ISCSI-ISER

To: Robert Dunkley <Robert@xxxxxxxxx>
Subject: Re: [Xen-users] SAN for Xen - NFS-RDMA or ISCSI-ISER
From: Javier Guerra <javier@xxxxxxxxxxx>
Date: Fri, 2 Oct 2009 09:56:47 -0500
Cc: Xen User-List <xen-users@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 02 Oct 2009 07:57:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C1EAC9C5E752D24C968FF091D446D823459BF3@ALTERNATEREALIT>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <C1EAC9C5E752D24C968FF091D446D823459BF3@ALTERNATEREALIT>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
On Fri, Oct 2, 2009 at 2:15 AM, Robert Dunkley <Robert@xxxxxxxxx> wrote:
> I'm trying to decide on what storage tech would be best for SAN/Storage
> use for Xen on an Infiniband network. Targets would just run Linux on
> current X86 hardware with decent Raid and Infiniband cards; has anyone
> any ideas on which might perform better?

I don't have personal experience with either implementation (just some
envy!), so take these ramblings with a big pinch of salt....

- Both NFS-RDMA and iSCSI/iSER use InfiniBand verbs to perform
high-throughput data transfers while minimizing CPU work, so in theory
you would get the best 'wire speed' from either.
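As a rough illustration (the export path, target IQN, and portal address below are all hypothetical), the transport choice only shows up in how the initiator side is set up; once the RDMA stack is in place, each is a one-line mount or login:

```shell
# NFS over RDMA: the 'rdma' mount option selects the RPC-over-RDMA
# transport; 20049 is the conventional NFS/RDMA port.
mount -t nfs -o rdma,port=20049 storage1:/export/vms /mnt/vms

# iSCSI over iSER: open-iscsi ships a built-in 'iser' interface,
# so selecting it with -I switches the session from TCP to verbs.
iscsiadm -m discovery -t sendtargets -p 192.168.10.5
iscsiadm -m node -T iqn.2009-10.com.example:vmstore -p 192.168.10.5 -I iser --login
```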

- NFS, no matter the transport, is a file-level protocol, so you'll
have to put the VM images as files in a filesystem.
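Concretely, for Xen that means the domU's disk line points at an image file sitting on the NFS mount; a hypothetical config fragment (paths made up) might look like:

```python
# /etc/xen/vm1.cfg -- file-backed disk on an NFS mount
disk = ['tap:aio:/mnt/vms/vm1.img,xvda,w']
```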

- Both NFS and the underlying filesystem are designed to handle
multiple processes accessing the same directories and even the same
file at the same time, and manage to present consistent behavior in
these cases.  This requires complex locking and checks on every access.

- iSCSI is a block-level protocol, so you can keep your VM images as
block devices.
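The same hypothetical guest's config then points the disk line straight at the block device (device names made up):

```python
# /etc/xen/vm1.cfg -- the same guest backed directly by a block device
disk = ['phy:/dev/vg_san/vm1,xvda,w']
```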

- Even if you use a shared, cluster-aware volume manager (cLVM is
the best known for Linux), in the end you get block-device access.
All the locks and checks are done at management and setup time; the
data access itself goes 'directly'.  If you violate the 'exclusivity'
and access the same block device from two DomUs (without a
cluster filesystem inside that block device), no block-level layer will
stop you or try to make sense of it.  It will happily let you corrupt the data.

So, from an architectural point of view, iSCSI has less overhead than
NFS, even on top of such a high-performance transport.


Xen-users mailing list
