This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] iscsi vs nfs for xen VMs

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Sat, 29 Jan 2011 16:27:52 +0100
Cc: James Harper <james.harper@xxxxxxxxxxxxxxxx>, Adi Kriegisch <adi@xxxxxxxxxxxxxxx>, Christian Zoffoli <czoffoli@xxxxxxxxxxx>, Roberto Bifulco <roberto.bifulco2@xxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Sat, 29 Jan 2011 07:31:50 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110129150926.GF2754@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <994429490908070648s69eed40eua19efc43c3eb85a7@xxxxxxxxxxxxxx> <4D3FF9BC.40601@xxxxxxxxxxx> <sig.4007da378a.AANLkTiku=-RhcyUZVHmwnJ18+Az6Fk5CxdEjKdHQKJ54@xxxxxxxxxxxxxx> <4D4032C7.9000003@xxxxxxxxxxx> <AANLkTin+K5G10_03qLRT_yqCRELu339roLEHy1bVFoqR@xxxxxxxxxxxxxx> <4D4064CD.8010005@xxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01BB9292@trantor> <20110127083537.GD29664@xxxxxxxx> <20110129150926.GF2754@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20101125 SUSE/3.0.11 Thunderbird/3.0.11
On 01/29/11 16:09, Pasi Kärkkäinen wrote:
> On Thu, Jan 27, 2011 at 09:35:38AM +0100, Adi Kriegisch wrote:
>> Hi!
>>>> iSCSI typically has quite a big overhead due to the protocol; FC, SAS,
>>>> native InfiniBand, and AoE have very low overhead.
>>> For iSCSI vs AoE, that isn't as true as you might think. TCP offload can
>>> take care of a lot of the overhead. Any server class network adapter
>>> these days should allow you to send 60kb packets to the network adapter
>>> and it will take care of the segmentation, while AoE would be limited to
>>> MTU sized packets. With AoE you need to checksum every packet yourself
>>> while with iSCSI it is taken care of by the network adapter.
>> What AoE actually does is sending a frame per block. Block size is 4K -- so
>> no need for fragmentation. The overhead is pretty low, because we're
>> talking about Ethernet frames.
>> Most iSCSI issues I have seen are with reordering of packets due to
>> transmission across several interfaces. So what most people recommend is to
>> keep the number of interfaces to two. To keep performance up this means you
>> have to use 10G, FC or similar which is quite expensive -- especially if
>> you'd like to have a HA SAN network (HSRP and stuff like that is required).
>> AoE does not suffer from those issues: Using 6 GBit interfaces is no
>> problem at all, load balancing will happen automatically, as the load is
>> distributed equally across all available interfaces. HA is very simple:
>> just use two switches and connect one half of the interfaces to one switch
>> and the other half to the other switch. (It is recommended to use switches
>> that can do jumbo frames and flow control)
>> IMHO most of the current recommendations and practices surrounding iSCSI
>> are there to overcome the shortcomings of the protocol. AoE is way more
>> robust and easier to handle.
> iSCSI does not have problems using multiple gige interfaces.
> Just setup multipathing properly.
> -- Pasi
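For what it's worth, the detail that is easy to miss when "setting up multipathing properly" is the path grouping policy: by default dm-multipath keeps one active path and treats the rest as failover, so you get redundancy but not aggregated bandwidth. A sketch of a multipath.conf stanza that puts all paths in one group and round-robins across them (the WWID and alias are placeholders, not from this thread):

```shell
# /etc/multipath.conf -- sketch only; <wwid-of-lun> and the alias are placeholders
multipaths {
    multipath {
        wwid                  <wwid-of-lun>   # WWID of the iSCSI LUN (see `multipath -ll`)
        alias                 xenvm
        path_grouping_policy  multibus        # all paths in one group -> round-robin I/O
        rr_min_io             100             # requests sent down a path before switching
    }
}
```

After editing, reload with `multipath -r` and check that `multipath -ll` shows a single path group with all sessions in state active/ready.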
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users

On this subject: I am using multipathing to iSCSI too, hoping to get
aggregated throughput on top of path redundancy, but throughput does not
seem to exceed that of a single interface.

Is anyone doing this successfully?
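To give an idea of the kind of setup I mean: with open-iscsi you can bind one iSCSI interface per NIC so the initiator opens a separate session over each, giving dm-multipath two paths to balance across. A sketch (interface names and the portal address are examples, not my actual configuration):

```shell
# Create two iSCSI ifaces and bind each to a physical NIC
# (iface names, NIC names, and portal IP are examples)
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

# Discover the target through both ifaces, then log in to all nodes:
# this yields two sessions, hence two block-device paths for dm-multipath
iscsiadm -m discovery -t st -p 192.168.1.10 -I iface0 -I iface1
iscsiadm -m node -L all
```

Each session shows up as its own /dev/sdX, which multipathd then merges into one mapped device.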

