xen-users

Re: [Xen-users] Xen IB

On Thu, 10 Nov 2011, Joseph Glanville wrote:

Hi Steven,

Sorry I missed your post but I thought I would clarify what we have
working, both for the benefit of the list and yourself.

We are currently not using SR-IOV in production; I did, however, build a test
stack consisting of a ConnectX-2 card, an IOMMU-enabled Intel server
(Intel VT-d), and Xen.org source 4.1.
This setup allowed me to create 7 VFs per port (this was a dual-port card),
for a total of 14 virtual adapters.
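
For anyone wanting to reproduce this: with the mlx4 drivers that later shipped
SR-IOV support, the VF count is set via a module parameter in dom0. The exact
knobs in the March 2011 beta may differ, but a rough sketch looks like this:

  # /etc/modprobe.d/mlx4_core.conf in dom0 (example values only)
  options mlx4_core num_vfs=7 probe_vf=0

  # after reloading mlx4_core the extra PCI functions should show up:
  lspci -d 15b3: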

What firmware revision is your card?  Compatible with this one?

02:00.0 InfiniBand: Mellanox Technologies MT26418 [ConnectX VPI PCIe 2.0 5GT/s - IB DDR / 10GigE] (rev b0)
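
In case it helps with the comparison, the firmware revision itself can be read
with the standard IB userspace tools (assuming OFED or infiniband-diags is
installed), e.g.:

  ibv_devinfo | grep fw_ver
  # or
  ibstat | grep -i "firmware version"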


The beta package I was using was dated March 2011 and seemed to be stable.
It also presents 100% virtualized IB adapters that can either be used
directly or passed through to virtual machines.
As the VFs appear on the PCI bus, you must have a server supporting an IOMMU
(as most recent servers do).
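
A quick way to confirm that the hypervisor has actually enabled the IOMMU is
to look at the Xen boot log; the exact wording varies between releases, but
roughly:

  # in dom0, with VT-d enabled in the BIOS/firmware
  xl dmesg | grep -i virtualisation
  # expect something like "I/O virtualisation enabled"
  # (on older toolstacks use "xm dmesg" instead)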

We are running Intel Xeon E5640 (Westmere) CPUs with the 5520-series chipset; that should be good enough, shouldn't it? What, if any, software other
than the firmware is needed on the dom0?

You can then use the standard OFED 1.5.X packages within the virtual
machines to access the IB fabric.
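
Once a VF has been handed to a guest and OFED is installed there, the usual
userspace tools should see it straight away, for example (device names will
vary):

  # inside the domU, after installing OFED 1.5.x
  ibv_devinfo     # the VF appears as an mlx4 HCA
  ibstat          # port state and LID once the subnet manager has set it up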



The reasons this hasn't been used in production basically come down to the
following:

1) The driver was still beta - though it seemed to be stable, we never
deploy anything that is not certified as stable.

2) As far as I can see, there isn't a security model in place that would
allow us to use it in a multi-tenant environment. This might not be an
issue for most users, but we are a public IaaS platform.

3) Our current software stack isn't able to make use of IOMMU PCI
pass-through (a limitation of our stack, not the Mellanox hardware; we have
since resolved this).

The performance is native, and the tooling is simple - standard PCI
pass-through is easy to do with both xl and the legacy xm.
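
To be concrete, it is just the ordinary Xen PCI pass-through workflow. A rough
sketch with xl, assuming a VF at 02:00.1 and a guest called myguest (both are
only example names):

  # dom0: make sure the VF is bound to pciback (newer xl versions can do
  # this with "xl pci-assignable-add 02:00.1"), then attach it live:
  xl pci-attach myguest 02:00.1

  # or list it statically in the guest config file:
  #   pci = [ '02:00.1' ]

  # the legacy equivalent is "xm pci-attach myguest 02:00.1"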

Does this imply that there are native pci-passthrough routines available
in newer versions of Xen that are not accessible via libvirt, for
instance?

Steve Timm



I am looking forward to the production release so I can employ SR-IOV on
our command-and-control stack and, further into the future, once enough
security features are available, offer it to clients on our multi-tenant
platform.

Joseph.

On 6 November 2011 05:19, Steven Timm <timm@xxxxxxxx> wrote:


Could anyone define what "support" means?
We bought Mellanox ConnectX-2 cards 1+ years ago and
we are still waiting for the drivers they promised
at that time.  The last time we talked to them, the only
supported drivers were for VMware, and even those are not
using the SR-IOV feature.  What they are working on for KVM
and Xen, according to them, is just a driver that presents
the IB card to the VM as a big network pipe, not something
that is recognizable by any regular Mellanox driver
or that can be used with regular IB MPI drivers.

I hope I'm wrong and someone has got it working somewhere,
because we have a lot of cash sunk into these cards,
but the latest we heard from Mellanox is that they still have
both software and firmware issues to resolve.

Steve Timm


On Sat, 5 Nov 2011, Joseph Glanville wrote:

 Hi

Mellanox ConnectX-2 and ConnectX-3 cards support SR-IOV.
Not sure about other vendors.
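
An easy way to check a particular card is to look for the SR-IOV capability in
the verbose lspci output (note that on Mellanox cards the capability may only
be exposed once SR-IOV-enabled firmware is flashed), e.g.:

  lspci -s 02:00.0 -vvv | grep -i "single root"
  # an SR-IOV capable device lists a capability such as
  # "Single Root I/O Virtualization (SR-IOV)" together with a Total VFs count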

Joseph.

On 3 November 2011 03:47, Nick Khamis <symack@xxxxxxxxx> wrote:

 Hello Vivien,

Just out of curiosity, which IB devices actually support things like
PCI pass-through or SR-IOV?

Thanks in Advance,

Nick.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users












--
------------------------------------------------------------------
Steven C. Timm, Ph.D  (630) 840-8525
timm@xxxxxxxx  http://home.fnal.gov/~timm/
Fermilab Computing Division, Scientific Computing Facilities,
Grid Facilities Department, FermiGrid Services Group, Group Leader.
Lead of FermiCloud project.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
