
Re: [Xen-devel] SR-IOV problems - HVM cannot access network



On 03/01/2011 12:50 PM, Rose, Gregory V wrote:
>> -----Original Message-----
>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-devel-
>> bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of David White
>> Sent: Tuesday, March 01, 2011 11:32 AM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Subject: [Xen-devel] SR-IOV problems - HVM cannot access network
>>
>> Hi all,
>>
>> I am having problems getting SR-IOV functions to work in my HVMs.  My
>> hardware has VT-d support, and pci passthrough works fine for physical
>> functions.  In a nutshell here is the current state:
>>
>> Dom0 :  NICs are 2-port 82576.  I can get full network access using
>> either PF or VF interfaces.
>> HVM : PCI passthrough of physical functions works -- full network access
>> HVM : PCI passthrough of virtual functions fails -- can send pkts but
>> cannot receive.
>>
>> The best lead I have right now is evident from the qemu logs.
>>
>> when PF (04:00.0) is assigned to HVM:
>>
>> pt_msix_init: get MSI-X table bar base fafbc000
>> pt_msix_init: table_off = 0, total_entries = 10
>> pt_msix_init: errno = 2
>> pt_msix_init: mapping physical MSI-X table to 7f23a03d5000
>> pt_msi_setup: msi mapped with pirq 37
>> pci_intx: intx=1
>> register_real_device: Real physical device 04:00.0 registered successfuly!
>> IRQ type = MSI-INTx
>>
>> when VF (04:10.2) is assigned to HVM:
>>
>> pt_msix_init: get MSI-X table bar base fae24000
>> pt_msix_init: table_off = 0, total_entries = 3
>> pt_msix_init: errno = 2
>> pt_msix_init: mapping physical MSI-X table to 7fc918846000
>> register_real_device: Real physical device 04:10.2 registered successfuly!
>> IRQ type = INTx
>>
>> VFs don't seem to be using MSI/MSI-X interrupts.  Does this indicate a
>> problem?
> Yes, this is absolutely a problem.  82576 virtual functions require MSI-X
> interrupt support to function properly.  You didn't mention what your guest
> OS is, but the guest OS must support MSI-X interrupts.  Even if it does have
> MSI-X support, the attempt to allocate the vectors may fail for some reason.
> If that happens then the VF will not function correctly.
>
> - Greg Rose
> LAN Access Division
> Intel Corp.
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

The guest (Ubuntu Maverick 64-bit) supports MSI-X, as indicated by the fact
that PF passthrough works fine (am I interpreting the above qemu output
correctly for the PF/HVM case?).
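
For what it's worth, here is roughly how I plan to double-check from inside the
guest whether the assigned device actually comes up with MSI-X (just a sketch;
00:05.0 is a hypothetical guest-side slot, not necessarily the real one):

root@guest:~# lspci -vv -s 00:05.0 | grep -i msi-x
root@guest:~# cat /proc/interrupts | grep -i msi

lspci should show the MSI-X capability with Enable set, and /proc/interrupts
should list MSI(-X) vectors for the device's driver.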

The only difference between the two cases above is that the first one is
assigned a PF and the second one a VF.  Is there something in the dom0 igb/igbvf
drivers that affects the MSI-X capabilities in the HVM?  The igb/igbvf drivers
on dom0 are from the Intel site:

root@dom0:~# modinfo igb | grep ^version:
version:        2.4.12
root@dom0:~# modinfo igbvf | grep ^version:
version:        1.0.7
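
For completeness, this is roughly how I'd verify from dom0 that the VF itself
advertises an MSI-X capability before it is handed to qemu (a sketch; 04:00.0
and 04:10.2 are the PF and VF from the qemu logs above):

root@dom0:~# lspci -vv -s 04:00.0 | grep -i msi-x
root@dom0:~# lspci -vv -s 04:10.2 | grep -i msi-x

If both list an MSI-X capability, the difference would seem to come from how
qemu/pciback sets up the interrupt type rather than from the hardware itself.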

The guest driver does not seem to be the source of this difference, since these
qemu messages are logged before the HVM grub menu appears (and hence before the
guest kernel is loaded).

Why would a physical function end up with MSI-X but not a virtual function?
Is it something that pciback is doing?
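
For reference, I'm handing functions to pciback roughly like this (the usual
sysfs sequence; the driver directory may be pciback or xen-pciback depending on
the kernel, and the unbind step only applies if igbvf has claimed the VF):

root@dom0:~# echo 0000:04:10.0 > /sys/bus/pci/drivers/igbvf/unbind
root@dom0:~# echo 0000:04:10.0 > /sys/bus/pci/drivers/pciback/new_slot
root@dom0:~# echo 0000:04:10.0 > /sys/bus/pci/drivers/pciback/bind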

When I bind a PF to pciback, the xen-pciback driver shows:
[ 1390.090693] pciback 0000:04:00.1: seizing device
[ 1390.095417] xen_allocate_pirq: returning irq 17 for gsi 17
[ 1390.101021] Already setup the GSI :17
[ 1390.104717] pciback 0000:04:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
[ 1390.111807] pciback 0000:04:00.1: PCI INT B disabled

but when I bind a VF to pciback, the driver shows:
[ 1439.411763] pciback 0000:04:10.0: seizing device
[ 1439.416462] pciback 0000:04:10.0: enabling device (0000 -> 0002)

(note no INT or GSI messages)

Could pciback be the source of the HVM INTx problem?

-David

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

