On Fri, Oct 2, 2009 at 2:15 AM, Robert Dunkley <Robert@xxxxxxxxx> wrote:
> I'm trying to decide on what storage tech would be best for SAN/Storage
> use for Xen on an Infiniband network. Targets would just run Linux on
> current X86 hardware with decent Raid and Infiniband cards; has anyone
> any ideas on which might perform better?
I don't have personal experience with either implementation (just some
envy!), so take these ramblings with a big pot of salt....
- both NFS-RDMA and iSCSI-ISER use InfiniBand verbs to perform high
throughput data transfers while minimizing CPU work, so in theory you
would get the best 'wire-speed' from either.
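For concreteness, the client side of each option looks roughly like this (the server name `ibserver`, export path, and target IQN are all made up, and the exact open-iscsi parameter names vary a bit between versions):

```sh
# NFS over RDMA: needs the xprtrdma client module; 20049 is the
# conventional NFS/RDMA port
modprobe xprtrdma
mount -t nfs -o rdma,port=20049 ibserver:/vmstore /mnt/vmstore

# iSCSI over iSER with open-iscsi: discover the target, switch the
# transport from TCP to iSER, then log in
iscsiadm -m discovery -t sendtargets -p ibserver
iscsiadm -m node -T iqn.2009-10.example:vmstore -p ibserver \
         -o update -n iface.transport_name -v iser
iscsiadm -m node -T iqn.2009-10.example:vmstore -p ibserver --login
```

Either way the RDMA transfers happen below the filesystem/SCSI layer, so the rest of the stack doesn't need to know about InfiniBand.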
- NFS, no matter the transport, is a file-level protocol, so you'll
have to put the VM images as files in a filesystem.
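As a sketch, with the NFS export mounted in Dom0, the guest config just points at an image file (paths and names here are hypothetical; `tap:aio:` is often preferred over `file:` on NFS, since the loop driver behind `file:` can cache writes in Dom0):

```sh
# /etc/xen/vm1.cfg -- the disk is a plain file on the NFS mount
disk = [ 'tap:aio:/mnt/vmstore/vm1.img,xvda,w' ]
```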
- Both NFS and the underlying filesystem are designed to handle
multiple processes accessing the same directories, and even the same
file, at the same time, and manage to present consistent behavior in
these cases. This requires complex locking and extra checks on every access.
- iSCSI is a block-level protocol, so you can put your VM images
directly on block devices (partitions, logical volumes, whole disks).
- Even if you use some shared, cluster-aware volume manager (cLVM is
the best known for Linux), in the end you get block-device access:
all the locks and checks are done at management and setup time, and the
data access itself goes 'directly'. If you violate that exclusivity
and access the same block device from two DomUs (without a
cluster filesystem inside that block device), no block-level layer will
stop you or try to make sense of it. It will happily let you corrupt the data.
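The block-level setup would look something like this in Dom0 (volume group and guest names are made up; with cLVM the lvcreate metadata change is coordinated across nodes, but nothing below polices concurrent guest access to the LV itself):

```sh
# carve one logical volume per guest out of the iSCSI-backed VG
lvcreate -n vm1-disk -L 10G vg_san

# /etc/xen/vm1.cfg -- hand the LV to exactly one guest
disk = [ 'phy:/dev/vg_san/vm1-disk,xvda,w' ]
```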
So, from an architectural point of view, iSCSI has less overhead than
NFS, even on top of such a high-performance transport.
Xen-users mailing list