
Re: [Xen-devel] IRQ affinity: Xen view different from pvops (konrad-pcifront) domU



On 11/09/2010 05:18 PM, Dante Cinco wrote:
> I have a Fibre Channel HBA that I'm passing through to the pvops domU.
> In domU, it's IRQ 127, and I've affinitized it to VCPU0 (out of 16) by
> setting /proc/irq/127/smp_affinity to 0001 (the default was FFFF).
> However, when I checked the interrupt bindings in Xen, it still shows
> IRQ 127 as affinitized to all CPUs (all F's). I checked lspci in both
> dom0 and domU, and the address portion (00000000fee00000) of the MSI
> address/data seems to match Xen's interrupt-binding report (I expected
> the digits to the right of 'fee' to be non-zero if it's affinitized to
> one specific CPU).
>
> I've tried using a recent Ubuntu HVM kernel for domU instead of
> Konrad's pcifront pvops kernel and with the HVM kernel, Xen's
> interrupt binding matches the IRQ smp_affinity in domU.
>
> Am I misinterpreting the apparent affinity discrepancy between Xen and
> the pvops domU, or is there a known limitation or problem with
> affinitizing IRQs in Konrad's pcifront pvops domU kernel?

A VCPU is not a real physical CPU, and can run on any physical CPU from
moment to moment.  Setting the affinity within the domU will cause the
interrupts to be handled by a specific VCPU, but that has no meaning to
the hardware, which knows nothing about VCPUs.  Since the VCPU can run
on any PCPU, it makes sense that the hardware routing targets all PCPUs.
In principle I guess you could pin the VCPU to a particular PCPU and
then route only to that PCPU, but I'm not sure what that would achieve.

    J
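On the "digits to the right of 'fee'" point: in the MSI address format,
bits 19:12 carry the Destination ID (the target local APIC ID). A quick
decode of the address reported by lspci below:

```shell
# Decode the MSI Destination ID (address bits 19:12) from the
# lspci-reported MSI address.
addr=$((0x00000000fee00000))
dest_id=$(( (addr >> 12) & 0xff ))
echo "MSI destination APIC ID: $dest_id"
```

A destination ID of 0 here is consistent with what both the dom0 and
domU lspci output below report; a nonzero value in those digits would
indicate routing to a specific APIC.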

> from Konrad's pcifront pvops domU:
> lspci -vv -s 00:00.0
> 00:00.0 Fibre Channel: PMC-Sierra Inc. Device 8032 (rev 08)
>         Interrupt: pin D routed to IRQ 127
>         Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+
> Queue=0/1 Enable+
>                 Address: 00000000fee00000  Data: 4043
>         Capabilities: [b0] MSI-X: Enable- Mask- TabSize=9
>                 Vector table: BAR=4 offset=00004100
>                 PBA: BAR=4 offset=00004000
> cat /proc/irq/127/smp_affinity
> 0001
> uname -a
> Linux kaan-40-dpm 2.6.36-rc7-pvops-kpcif-08-2-domu-5.8.dcinco-debug #1
> SMP Tue Nov 9 10:36:45 PST 2010 x86_64 GNU/Linux
>
>
> from 2.6.32.25 pvops dom0:
> lspci -vv -s 11:00.3
> 11:00.3 Fibre Channel: PMC-Sierra Inc. Device 8032 (rev 08)
>         Interrupt: pin D routed to IRQ 4426
>         Capabilities: [60] Message Signalled Interrupts: Mask- 64bit+
> Queue=0/1 Enable+
>                 Address: 00000000fee00000  Data: 4043
>         Capabilities: [b0] MSI-X: Enable- Mask- TabSize=9
>                 Vector table: BAR=4 offset=00004100
>                 PBA: BAR=4 offset=00004000
> uname -a
> Linux kaan-40 2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug #1 SMP
> PREEMPT Fri Nov 5 16:13:32 PDT 2010 x86_64 GNU/Linux
>
>
> (XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch
> input to DOM0)
> (XEN) Guest interrupt information:
> (XEN)    IRQ:  67 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:43
> type=PCI-MSI         status=00000010 in-flight=0
> domain-list=2:127(----),
>
>
> xm list
> Name                                        ID   Mem VCPUs      State   
> Time(s)
> Domain-0                                     0  1024     1     r-----     43.6
> domU                                          2  2048    16     -b----     
> 63.0
> xm vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU 
> Affinity
> Domain-0                             0     0     0   r--      44.7 0
> domU                                  2     0     0   -b-      17.1 0
> domU                                  2     1     1   -b-       4.9 1
> domU                                  2     2     2   -b-       3.1 2
> domU                                  2     3     3   -b-       4.7 3
> domU                                  2     4     4   -b-       3.0 4
> domU                                  2     5     5   -b-       3.9 5
> domU                                  2     6     6   -b-       4.9 6
> domU                                  2     7     7   -b-       1.9 7
> domU                                  2     8     8   -b-       3.7 8
> domU                                  2     9     9   -b-       1.8 9
> domU                                  2    10    10   -b-       1.8 10
> domU                                  2    11    11   -b-       4.3 11
> domU                                  2    12    12   -b-       2.8 12
> domU                                  2    13    13   -b-       2.7 13
> domU                                  2    14    14   -b-       1.9 14
> domU                                  2    15    15   -b-       3.9 15
> xm info
> release                : 2.6.32.25-pvops-stable-dom0-5.7.dcinco-debug
> version                : #1 SMP PREEMPT Fri Nov 5 16:13:32 PDT 2010
> machine                : x86_64
> hw_caps                :
> bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
> max_node_id            : 1
> xen_major              : 4
> xen_minor              : 0
> xen_extra              : .2-rc1-pre
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_changeset          : Fri Sep 17 17:06:57 2010 +0100 21350:6e0ffcd2d9e0
> xen_commandline        : dummy=dummy dom0_mem=1024M dom0_max_vcpus=1
> dom0_vcpus_pin=true iommu=1,passthrough,no-intremap loglvl=all
> loglvl_guest=all loglevl=10 debug apic=on apic_verbosity=verbose
> extra_guest_irqs=80 com1=115200,8n1 console=com1 console_to_ring
> noirqbalance xen-pciback.permissive acpi=force numa=on
>
>
> - Dante
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>

