xen-users

Re: [Xen-users] iscsi vs nfs for xen VMs

On Sat, Jan 29, 2011 at 05:24:30PM +0100, Christian Zoffoli wrote:
> On 29/01/2011 16:08, Pasi Kärkkäinen wrote:
> [cut]
> > Microsoft and Intel had some press releases around one year ago
> > demonstrating over one *million* IOPS using a single 10gbit Intel NIC,
> > on a *single* x86 box, using *software* iSCSI.
> 
> There is a big difference between marketing numbers and real numbers.
> 
> The test you pointed to is just smoke and mirrors.
> 

No, it's not just smoke and mirrors.
It clearly shows Ethernet and iSCSI can match and beat legacy FC.

> No one published the hardware list they used to reach such performance.
> 

The hardware configuration was published.


> First of all, they aggregated the performance of *10* targets (if
> the math hasn't changed, 1 aggregator + 10 targets == 11), and they
> did not say what kind of hard disks, or how many, they used to reach
> that performance.
> 

Targets weren't the point of that test.

The point was to show that a single host *initiator* (= the iSCSI client)
can handle one million IOPS.
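
Back-of-the-envelope, with my own assumed numbers rather than figures
from the press release: at the small 512-byte IOs such record runs
typically use, one million IOPS doesn't even saturate the wire. A quick
Python sketch:

    # Does 1M IOPS fit through a single 10 Gbit NIC?
    iops = 1_000_000
    io_size_bytes = 512        # assumed small-block size for record runs
    payload_gbit = iops * io_size_bytes * 8 / 1e9
    print(f"payload: {payload_gbit:.1f} Gbit/s of a 10 Gbit/s link")
    # -> payload: 4.1 Gbit/s of a 10 Gbit/s link

So raw bandwidth isn't the limit; the hard part is the per-IO
processing (interrupts, iSCSI PDU handling) on the initiator CPU.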


> The best standard hard disk, a 2.5in 15k SAS drive, can do ~190 IOPS,
> so it's practically impossible to achieve that many IOPS; the only way
> is to use SSDs, or better, PCIe SSDs ...but as everyone knows you have
> to pay two arms, two legs and so on.
> 

In that test they used 10 targets, i.e. 10 separate servers acting as
targets, each with a big RAM disk exported as an iSCSI LUN.
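
To see why RAM disks were necessary, take the ~190 IOPS per 15k spindle
figure you quoted and do the math (a rough Python sketch, using that
assumed per-disk number):

    # How many 15k spindles would 1M IOPS need?
    target_iops = 1_000_000
    iops_per_disk = 190        # per-spindle figure quoted above
    print(f"~{target_iops / iops_per_disk:.0f} spindles")
    # -> ~5263 spindles

Nobody builds a 5000+ spindle rig just to benchmark a NIC, so RAM-backed
LUNs are the only practical way to load the *initiator* instead of the
storage.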

> In real life it is very hard to reach high performance levels, for example:
> - 48x 2.5in 15k disks in RAID 0 give you ~8700 R/W IOPS (in RAID 0 the
> read percentage doesn't affect the results)
> 

The point of that test was to show that the iSCSI protocol is NOT the
bottleneck, Ethernet is NOT the bottleneck, and the iSCSI initiator
(client) is NOT the bottleneck.

The bottleneck is the storage server. And that's the reason
they used *RAM disks* on all the storage servers.
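
Your own RAID 0 number actually supports this. With the same assumed
~190 IOPS per spindle:

    # Sanity check of the 48-disk RAID 0 figure
    disks = 48
    iops_per_disk = 190        # assumed per-spindle figure, as above
    ideal = disks * iops_per_disk
    print(f"ideal: {ideal} IOPS vs measured: ~8700 IOPS")
    # -> ideal: 9120 IOPS vs measured: ~8700 IOPS

The array delivers ~95% of ideal linear scaling, i.e. the disks top out
exactly where expected, two orders of magnitude below what the initiator
itself was shown to sustain.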

> If you have enough money you can choose products from Texas Memory or
> Fusion-io, but typically the costs are too high.
> 
> For example, a Fusion-io ioDrive Duo 604GB MLC costs ~$15k ...if you
> want 1M IOPS you can choose the ioDrive Octal ...but scaling the price
> in proportion, it should be over $120k.
> 

-- Pasi


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users