On Wed, Nov 10, 2010 at 12:13 AM, Jan Beulich <JBeulich@xxxxxxxxxx> wrote:
>>>> On 10.11.10 at 02:18, Dante Cinco <dantecinco@xxxxxxxxx> wrote:
>> I have a Fibre Channel HBA that I'm passing through to the pvops domU.
>> In domU it's IRQ 127, and I've affinitized it to VCPU0 (out of 16) by
>> setting /proc/irq/127/smp_affinity to 0001 (the default setting was FFFF).
>> However, when I checked the interrupt bindings in Xen, it still shows
>> IRQ 127 as being affinitized to all CPUs (all F's). I checked lspci in
>> both dom0 and domU, and the address portion (00000000fee00000) of the
>> MSI address/data seems to match Xen's interrupt binding report (I
>> expected the digits to the right of 'fee' to be non-zero if it's
>> affinitized to one specific CPU).
>
> See -unstable c/s 21625:0695a5cdcb42.
>
> Jan
>
>
Jan,
After seeing your response (c/s 21625, "x86: IRQ affinity should track
vCPU affinity"), I installed the latest xen-unstable-4.1 (c/s 22382), and
the result is the same: the IRQ affinity in Xen is still set to all F's
after explicitly setting the IRQ smp_affinity in domU to a specific CPU.
I do have the VCPUs assigned to domU pinned to PCPUs on a one-to-one
basis (VCPU0 -> PCPU0, VCPU1 -> PCPU1, etc.).
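(For reference, the one-to-one pinning can be reproduced with a loop like
the sketch below; it assumes the domain is named domU and has 16 VCPUs, as
in the xm output at the end of this mail.)

  # pin VCPU i of domU to PCPU i (one-to-one)
  for i in $(seq 0 15); do
      xm vcpu-pin domU $i $i
  done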
BTW, I actually have 16 of these devices PCI-passed through to domU,
which has 16 VCPUs, and I'm affinitizing each device to its own
dedicated CPU. Before I explicitly set the IRQ smp_affinity, all 16
devices were set to 0001. See the explicit IRQ smp_affinity settings
below. If I look at domU's /proc/interrupts, it shows the
interrupts for a given IRQ going only to the CPU it has been
affinitized to (no interrupts going to the other CPUs), which is
expected.
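(For a quick check in domU, something like the line below works; only the
column for the CPU the IRQ is affinitized to should keep counting. IRQ 127
is just one of the sixteen listed further down.)

  # per-CPU interrupt counts for IRQ 127, refreshed every second
  watch -n1 "grep '^ *127:' /proc/interrupts"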
My system has 24 PCPUs (dual-socket X5650 Xeon, 6-core Westmere), but
I'm only assigning the first 16 CPUs to domU, and the VCPUs are pinned
to their respective PCPUs. The part I don't understand is this: if the
IRQ affinity reported by Xen (all F's) is correct, how does the interrupt
get handled if it is directed to a PCPU that is not assigned/pinned to a
VCPU in domU?
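(As a cross-check on where the MSI is actually pointed, the MSI capability
can be re-read with lspci in dom0; <BDF> below is just a placeholder for
the HBA's PCI address, not the real one.)

  # show the MSI address/data the device is currently programmed with (run in dom0)
  lspci -vv -s <BDF> | grep -A2 'MSI:'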
Jeremy, to address your question: pinning the VCPUs to specific PCPUs
and affinitizing the IRQs to specific VCPUs in domU has worked very
well for us on the HVM kernel. The I/O performance is significantly
better than with no affinitization. We're trying to transition to
the pvops domU with the expectation that our affinitization strategy
will still be applicable and will maintain, or possibly even improve,
I/O performance.
These are the IRQ smp_affinity settings in pvops domU:
cat /proc/irq/112/smp_affinity
8000
cat /proc/irq/113/smp_affinity
4000
cat /proc/irq/114/smp_affinity
2000
cat /proc/irq/115/smp_affinity
1000
cat /proc/irq/116/smp_affinity
0800
cat /proc/irq/117/smp_affinity
0400
cat /proc/irq/118/smp_affinity
0200
cat /proc/irq/119/smp_affinity
0100
cat /proc/irq/120/smp_affinity
0080
cat /proc/irq/121/smp_affinity
0040
cat /proc/irq/122/smp_affinity
0020
cat /proc/irq/123/smp_affinity
0010
cat /proc/irq/124/smp_affinity
0008
cat /proc/irq/125/smp_affinity
0004
cat /proc/irq/126/smp_affinity
0002
cat /proc/irq/127/smp_affinity
0001
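(The masks above follow a simple pattern, IRQ 127 down to IRQ 112 mapping
to CPU0 up to CPU15, so they can be applied with a loop like the sketch
below instead of one write per IRQ.)

  # give each of IRQs 112-127 its own CPU: IRQ 127 -> CPU0 ... IRQ 112 -> CPU15
  for i in $(seq 0 15); do
      printf '%04x' $((1 << i)) > /proc/irq/$((127 - i))/smp_affinity
  done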
(XEN) *** Serial input -> Xen (type 'CTRL-a' three times to switch input to DOM0)
(XEN) Guest interrupt information:
(XEN) IRQ: 67 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:7a type=PCI-MSI status=00000010 in-flight=0 domain-list=1:127(----),
(XEN) IRQ: 68 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:8a type=PCI-MSI status=00000010 in-flight=0 domain-list=1:126(----),
(XEN) IRQ: 69 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:92 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:125(----),
(XEN) IRQ: 70 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:9a type=PCI-MSI status=00000010 in-flight=0 domain-list=1:124(----),
(XEN) IRQ: 71 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:a2 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:123(----),
(XEN) IRQ: 72 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:aa type=PCI-MSI status=00000010 in-flight=0 domain-list=1:122(----),
(XEN) IRQ: 73 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:b2 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:121(----),
(XEN) IRQ: 74 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:ba type=PCI-MSI status=00000010 in-flight=0 domain-list=1:120(----),
(XEN) IRQ: 75 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:c2 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:119(----),
(XEN) IRQ: 76 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:ca type=PCI-MSI status=00000010 in-flight=0 domain-list=1:118(----),
(XEN) IRQ: 77 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:d2 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:117(----),
(XEN) IRQ: 78 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:da type=PCI-MSI status=00000010 in-flight=0 domain-list=1:116(----),
(XEN) IRQ: 79 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:23 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:115(----),
(XEN) IRQ: 80 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:2b type=PCI-MSI status=00000010 in-flight=0 domain-list=1:114(----),
(XEN) IRQ: 81 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:33 type=PCI-MSI status=00000010 in-flight=0 domain-list=1:113(----),
(XEN) IRQ: 82 affinity:ffffffff,ffffffff,ffffffff,ffffffff vec:3b type=PCI-MSI status=00000010 in-flight=0 domain-list=1:112(----),
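(The dump above is Xen's guest interrupt information from the serial
console: CTRL-a three times to switch input to Xen, then the 'i' debug
key. If the toolstack supports it, something like the following should
produce the same dump without touching the serial line; I haven't
double-checked the exact xm subcommand name.)

  # ask Xen to dump guest interrupt information, then read it back
  xm debug-keys i
  xm dmesg | tail -n 60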
xm info
xen_changeset          : Tue Nov 09 20:37:46 2010 +0000 22382:a15b0a2dc276

xm list
Name                 ID   Mem  VCPUs  State   Time(s)
Domain-0              0  1024      1  r-----     50.3
domU                  1  2048     16  -b----    220.9

xm vcpu-list
Name                 ID  VCPU  CPU  State  Time(s)  CPU Affinity
Domain-0              0     0    0  r--       50.6  0
domU                  1     0    0  -b-       32.2  0
domU                  1     1    1  -b-       19.8  1
domU                  1     2    2  -b-       19.0  2
domU                  1     3    3  -b-       13.9  3
domU                  1     4    4  -b-        8.4  4
domU                  1     5    5  -b-       16.6  5
domU                  1     6    6  -b-       26.6  6
domU                  1     7    7  -b-        8.0  7
domU                  1     8    8  -b-        9.6  8
domU                  1     9    9  -b-       16.5  9
domU                  1    10   10  -b-        9.0  10
domU                  1    11   11  -b-        8.2  11
domU                  1    12   12  -b-       12.4  12
domU                  1    13   13  -b-       11.6  13
domU                  1    14   14  -b-        4.9  14
domU                  1    15   15  -b-        4.5  15
- Dante
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel