WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-users

Re: [Xen-users] iscsi vs nfs for xen VMs

On 29/01/2011 17:37, Pasi Kärkkäinen wrote:
[cut]
> No, it's not just smoke in the eyes.
> It clearly shows ethernet and iSCSI can match and beat legacy FC.

SAS storage, and InfiniBand storage as well, can beat both legacy FC and
iSCSI, and they cost less than a full 10G Ethernet infrastructure.

>> No one published the hardware list they used to reach such performance.
>>
> 
> Hardware configuration was published.

Please provide a link to the full hardware configuration.

I cannot find anything about what you are saying, looking for
example at:

http://download.intel.com/support/network/sb/inteliscsiwp.pdf

>> First of all, they aggregated the performance of *10* targets (if
>> the math hasn't changed, 1 aggregator + 10 targets == 11), and they did
>> not say what kind of hard disks, or how many, they used to reach
>> this performance.
>>
> 
> Targets weren't the point of that test.
> 
> The point was to show single host *initiator* (=iSCSI client) 
> can handle one million IOPS.

That's meaningless in this thread, where we are discussing how to choose
the right storage infrastructure for a Xen cluster.

When someone releases something real, delivering 1M IOPS, that anyone can
adopt in their own infrastructure, I would be delighted to buy it.

[cut]
> In that test they used 10 targets, ie. 10 separate servers as targets,
> and each had big RAM disk shared as iSCSI LUN.

See above: it's meaningless in this thread.


>> In real life it is very hard to reach high performance levels; for example:
>> - 48x 2.5in 15k disks in RAID 0 give you ~8700 RW IOPS (in RAID 0 the %
>> of reads doesn't affect the results)
>>
> 
> The point of that test was to show iSCSI protocol is NOT the bottleneck,
> Ethernet is NOT the bottleneck, and iSCSI initiator (client)
> is NOT the bottleneck.
> 
> The bottleneck is the storage server. And that's the reason
> they used many *RAM disks* as the storage servers.

No one said otherwise. We are discussing how to build the best clustered
Xen setup, and in particular we are also evaluating the differences
between all the candidate technologies.

Nevertheless, nothing in the test results showed how much CPU (and other
host resources) was consumed by this approach.
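For what it's worth, the ~8700 IOPS figure quoted earlier can be sanity-checked with simple spindle arithmetic. This is only a rough sketch: the ~180 IOPS per 15k RPM disk is an assumed rule-of-thumb value, not something from the cited test.

```python
def raid0_iops(num_disks, per_disk_iops):
    # RAID 0 stripes data with no parity overhead, so both read and
    # write IOPS scale roughly linearly with the number of spindles.
    # That is also why the read/write mix barely changes the result.
    return num_disks * per_disk_iops

# 48 disks at an assumed ~180 IOPS each (typical 15k RPM estimate):
estimate = raid0_iops(48, 180)
print(estimate)  # 8640, in line with the ~8700 RW IOPS quoted above
```

The point stands either way: dozens of fast spindles still land around four orders of magnitude below the 1M IOPS initiator demo, so the real bottleneck is the storage backend, not the protocol.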

Christian





_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users