

Re: [Xen-API] Alternative to Vastsky?

To: Henrik Andersson <henrik.j.andersson@xxxxxxxxx>
Subject: Re: [Xen-API] Alternative to Vastsky?
From: Tim Titley <tim@xxxxxxxxxx>
Date: Wed, 20 Apr 2011 14:26:45 +0100
Cc: xen-api@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 20 Apr 2011 06:29:02 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <BANLkTin8FwK7G6wMc=8985Vc+iw0Gy13_w@xxxxxxxxxxxxxx>
List-help: <mailto:xen-api-request@lists.xensource.com?subject=help>
List-id: Discussion of API issues surrounding Xen <xen-api.lists.xensource.com>
List-post: <mailto:xen-api@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-api>, <mailto:xen-api-request@lists.xensource.com?subject=unsubscribe>
References: <4DAE085F.5040707@xxxxxxxxxx> <BANLkTinponoRXqtO85wCpfkpq+SHNMY_Rw@xxxxxxxxxxxxxx> <4DAE986C.7060508@xxxxxxxxxx> <BANLkTin8FwK7G6wMc=8985Vc+iw0Gy13_w@xxxxxxxxxxxxxx>
Sender: xen-api-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20110207 Lightning/1.0b2 Shredder/3.1.9pre
On 20/04/11 12:33, Henrik Andersson wrote:
Since there is no official support for InfiniBand in XCP, and some people have managed to get it working while others have not, I wanted to describe how far I got with it and, hopefully, in time turn this into some sort of howto. All those RDMA possibilities could come in handy, but I would be more than pleased if I could even use IPoIB.

Hardware and setup I have for this project:

SuperMicro SuperServer (2026TT-H6IBQRF); each of the four nodes comes with an integrated Mellanox ConnectX QDR InfiniBand 40Gbps controller with a QSFP connector. I have a cable between two nodes, and while testing I had two Ubuntu Server 10.04 installations; according to netperf, I got unidirectional throughput of about 15000Mbit/s on IPoIB between the two nodes.
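
For reference, a minimal netperf run of this kind, assuming the IPoIB interfaces are already up and addressed (the IP address below is a placeholder, not the one actually used):

    # on the receiving node: start the netperf daemon
    netserver
    # on the sending node: 30-second one-way TCP bulk-transfer test
    # against the other node's IPoIB address
    netperf -H 192.168.10.1 -t TCP_STREAM -l 30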

Hmm, I wonder where the problem is. I would have thought that performance would be better than that. I can get near wire speed on 1Gbit Ethernet using iperf, so 15Gbit out of a theoretical 32Gbit data rate is by no means perfect.
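
For comparison, the 1Gbit iperf test meant here is just the stock TCP throughput test (the address is a placeholder):

    # server side
    iperf -s
    # client side: 30-second TCP throughput test
    iperf -c 192.168.1.10 -t 30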

Where I got with IB support is pretty sad. I had OFED compile and install all the drivers and such on the DDK, after adding some repositories and installing a bunch of different required packages. The thing is that after a reboot, the DDK VM just hangs on boot if I have PCI passthrough enabled for the IB card; if I disable it, everything is fine, since there is then no IB card present for the DDK to try to use the drivers with.
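
For reference, XCP's usual mechanism for this kind of passthrough is to hand the card's PCI address to the VM via the other-config:pci parameter; a sketch, with a placeholder address and UUID (lspci shows the real address):

    # in dom0: find the IB card's PCI address
    lspci | grep -i mellanox
    # assign that device to the DDK VM (restart the VM afterwards)
    xe vm-param-set uuid=<ddk-vm-uuid> other-config:pci=0/0000:04:00.0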

I would imagine you would enable the same repos on dom0 to install the dependencies (only the libraries would be required, not the dev packages), build the drivers on the DDK with DESTDIR changed so they install into their own tree (/opt/ibdrivers, for example), and rsync that tree over to the dom0 filesystem once the build has completed.
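
Roughly along these lines, assuming an autotools-style driver build and root ssh access to dom0 (the paths and hostname are just examples):

    # on the DDK VM: stage the build into its own tree
    ./configure --prefix=/usr
    make
    make install DESTDIR=/opt/ibdrivers
    # push the staged tree onto the dom0 root filesystem
    rsync -av /opt/ibdrivers/ root@dom0-host:/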

I have just ordered a couple of InfiniBand HBAs for my own testing purposes, so I will know more soon. Let us know how you get on; I can lend a hand (for whatever it's worth), but I have zero experience when it comes to building RPMs or managing yum repos.


