Re: [Xen-API] Alternative to Vastsky? 
Since there is no official support for InfiniBand in XCP, and some people have managed to get it working while others have not, I wanted to describe how far I got with it and, hopefully in time, turn this into some sort of howto. All those RDMA possibilities could come in handy, but I would be more than pleased if I could even use IPoIB.
Hardware and setup I have for this project: a SuperMicro SuperServer (2026TT-H6IBQRF), where each of the four nodes comes with an integrated Mellanox ConnectX QDR InfiniBand 40Gbps controller with a QSFP connector. I have a cable between two of the nodes, and while testing with two Ubuntu Server 10.04 installations, netperf reported a unidirectional throughput of about 15000 Mbit/s over IPoIB between them.
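For anyone who wants to repeat the throughput test, this is roughly what I ran on the two Ubuntu nodes (the ib0 name and the 192.168.100.x addresses are just placeholders, adjust for your own setup):

  # one node has to run a subnet manager, since the cable is back-to-back
  # and there is no managed switch to provide one
  opensm &

  # on both nodes: load the IPoIB module and give the IB interface an address
  modprobe ib_ipoib
  ifconfig ib0 192.168.100.1 netmask 255.255.255.0   # .2 on the other node

  # node 1: start the netperf server; node 2: run a TCP stream test against it
  netserver
  netperf -H 192.168.100.1 -t TCP_STREAM -l 60

That is more or less where the ~15000 Mbit/s figure above comes from.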
Where I got with IB support is pretty sad. I had OFED compile and install all the drivers and so on in the DDK VM, after adding some repositories and installing a bunch of required packages. The problem is that after a reboot, the DDK VM just hangs on boot if I have PCI passthrough of the IB card to the DDK enabled. If I disable it, everything is fine, since there is no IB card present for the DDK to try to use the drivers with. If there is someone with experience with the DDK and customising dom0, I could do a fresh XCP 1.0 install and give this willing person root privileges on the machine so he can try. If this person gets it working, we could make a howto out of it, and I could even host a site that makes all the files and instructions needed freely available for everyone.
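For reference, the passthrough itself is just the usual xe other-config:pci setup, along these lines (the PCI address and the uuid below are placeholders, not my actual values):

  # in dom0: find the PCI address of the Mellanox HCA
  lspci | grep -i mellanox

  # with the DDK VM halted, hand the device to it and start it again
  xe vm-param-set uuid=<ddk-vm-uuid> other-config:pci=0/0000:0b:00.0
  xe vm-start uuid=<ddk-vm-uuid>

With that other-config key removed, the DDK boots normally, so the hang really does seem related to the passed-through HCA.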
-Henrik Andersson

On 20 April 2011 11:25, Tim Titley <tim@xxxxxxxxxx> wrote:
  
    
    
  
  
    Sounds interesting and definitely worth looking into. You would not
    have the advantage of snapshots like you do with an LVM-type
    solution, but it may pay off in some instances from a performance
    perspective.

    I've never used InfiniBand, but I think you've just convinced me to
    go buy a few cheap adaptors and have a little play.
 
 On 19/04/11 23:39, Henrik Andersson wrote:
      Now that VastSky has been reported to be on hiatus, at least, I'd
      like to propose GlusterFS as a candidate. It is a well-tested,
      actively developed and maintained project. I'm personally really
      interested in the "RDMA version". It should provide really low
      latencies, and since 40Gbit InfiniBand is a bargain compared to
      10GbE, there should be more than enough throughput available.
       
      This would require IB support in XCP, but my thinking is that it
      would be beneficial in many other ways too. For example, I would
      imagine RDMA could be used for live migration.
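      Just to sketch what I have in mind: with two storage nodes, a
      replicated GlusterFS volume over the RDMA transport would be
      created something like this (the hostnames, brick paths and volume
      name are made up, and the exact syntax may vary between GlusterFS
      releases):

        # on one of the storage nodes: join the peers and create a
        # 2-way replicated volume that uses the RDMA transport
        gluster peer probe storage2
        gluster volume create xcp-vol replica 2 transport rdma \
            storage1:/export/brick1 storage2:/export/brick1
        gluster volume start xcp-vol

        # on a client (eventually the XCP host, once IB works there)
        mount -t glusterfs storage1:/xcp-vol /mnt/glusterfs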
      -Henrik Andersson

      On 20 April 2011 01:10, Tim Titley <tim@xxxxxxxxxx> wrote:
            Has anyone considered a replacement for the VastSky storage
            backend, now that the project is officially dead (at least
            for now)?
            I have been looking at Ceph ( http://ceph.newdream.net/ ).
            A suggestion, for someone so inclined to do something about
            it, would be to use the RADOS block device (RBD) and put an
            LVM volume group on it, which would require modification of
            the current LVM storage manager code - I assume similar to
            LVMoISCSI.
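            Done by hand, without any storage manager integration, the
            rough shape of it would be something like this (the image
            name and size are made up, and the rbd tooling may well
            differ from what I remember):

              # create a large RBD image in the cluster
              rbd create xcp-sr --size 1048576     # size in MB, so ~1 TB

              # map it through the kernel rbd driver (hence 2.6.37)
              rbd map xcp-sr                       # appears as e.g. /dev/rbd0

              # put an LVM volume group on top, much as LVMoISCSI does
              # with an iSCSI LUN
              pvcreate /dev/rbd0
              vgcreate VG_XenStorage_rbd /dev/rbd0

            Individual VDIs would then just be logical volumes in that
            volume group, the same way the existing LVM-based SRs work.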
 
 This would provide scalable, redundant storage at what I
            assume would be reasonable performance since the data can be
            striped across many storage nodes.
 
            Development seems reasonably active, and although the
            project is not officially production quality yet, it is part
            of the Linux kernel, which looks promising - as does the
            news that they will be providing commercial support.
 
 The only downside is that RBD requires a 2.6.37 kernel. For
            those "in the know" - how long will it be before this kernel
            makes it to XCP - considering that this vanilla kernel
            supposedly works in dom0 (I have yet to get it working)?
 
 Any thoughts?
 
 Regards,
 
 Tim
 
_______________________________________________
xen-api mailing list
xen-api@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/mailman/listinfo/xen-api