xen-devel

RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)

To: "Cinco, Dante" <Dante.Cinco@xxxxxxx>, "He, Qing" <qing.he@xxxxxxxxx>
Subject: RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
From: "Zhang, Xiantao" <xiantao.zhang@xxxxxxxxx>
Date: Thu, 22 Oct 2009 09:58:35 +0800
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Wed, 21 Oct 2009 18:59:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <2B044E14371DA244B71F8BF2514563F503FC081B@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <706158FABBBA044BAD4FE898A02E4BC201C9BD8CED@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <2B044E14371DA244B71F8BF2514563F503FC081B@xxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcpOPi4Q9UlUI8ufS5qQ1CJSqDysQAAAM40gAAsf3SAABZW6UAAQxcbQAJQYlsAADK/qgAAEGtWgABRnJ9AAEEcpwAAiDlkgABCR9DA=
Thread-topic: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
Dante,
   Have you applied the two patches when you did the testing? Without them,
we can reproduce the issue you reported, but with them the issue is gone. The
root cause is that when programming MSI, we have to mask the MSI interrupt
source first; otherwise the device may generate inconsistent interrupts with an
incorrect destination and the right vector, or an incorrect vector and the
right destination.

For example, if the old MSI interrupt info is 0.186, meaning the destination
ID is 0 and the vector is 186, then when the IRQ migrates to another cpu (e.g.
CPU 1) the MSI info should be changed to 1.194. When you program the MSI info
into the PCI device without masking it first, it may generate the interrupt as
1.186 or 0.194. Obviously, interrupts with the info 1.186 or 0.194 don't exist,
and according to the spec any combination is possible. Since Xen writes the
addr field first, it is likely to generate 1.186 rather than 0.194, so your PCI
device may generate an interrupt with the new destination and the old
vector (1.186).
In my two patches, one fixes the guest interrupt affinity issue (a race exists
between the guest EOI-ing the old vector and the guest setting the new vector),
and the other safely programs the MSI info into the PCI device to avoid
generating inconsistent interrupts.
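
To make the mask / write-address / write-data / unmask ordering concrete, here
is a minimal, self-contained C sketch. It is only a toy model of the idea (the
struct, helper and values are illustrative, not Xen's actual MSI code),
reusing the 0.186 -> 1.194 example above:

/* mask_before_program.c - toy model, not Xen code. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

struct toy_msi_cap {
    uint32_t addr;   /* destination APIC ID lives in bits 19:12 */
    uint16_t data;   /* vector lives in bits 7:0 */
    int      masked; /* per-vector mask bit */
};

static void msi_reprogram(struct toy_msi_cap *cap,
                          uint32_t new_addr, uint16_t new_data)
{
    /* 1. Mask first: the device must not fire while the address/data
     *    pair is temporarily inconsistent. */
    cap->masked = 1;

    /* 2. The address is written before the data (as described above);
     *    without the mask, an interrupt raised between these two writes
     *    could carry the new destination with the old vector, e.g. 1.186. */
    cap->addr = new_addr;
    cap->data = new_data;

    /* 3. Unmask only once both fields are consistent again. */
    cap->masked = 0;
}

int main(void)
{
    /* Old message: destination 0, vector 186 (0.186). */
    struct toy_msi_cap cap = { 0xfee00000u, 0x40ba, 0 };

    /* New message: CPU1 (APIC ID 16 per the lspci decode quoted below),
     * vector 194 (1.194 in the cpu.vector notation above). */
    msi_reprogram(&cap, 0xfee10000u, 0x40c2);

    printf("addr=%08" PRIx32 " data=%04x masked=%d\n",
           cap.addr, cap.data, cap.masked);
    return 0;
}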

> (XEN) traps.c:1626: guest_io_write::pci_conf_write data=0x40ba    <<<<<<<<<< culprit

This write should come from dom0 (likely from Qemu). If it does exist, we
may have to prohibit such unsafe MSI writes in Qemu.

Xiantao

     
> (XEN) pci.c:53: pci_conf_write::cf8=0x8007006c,offset=0,bytes=2,data=0x40ba    <<<<<<<<<< vector reverted back to 186
> (XEN) do_IRQ: 1.186 No irq handler for vector (irq -1)    <<<<<<<<<< can't find handler because vector should have been 218
> 
> (XEN) Guest interrupt information:
> (XEN) IRQ: 66, IRQ affinity:0x00000002, Vec:218 type=PCI-MSI status=00000010 in-flight=0 domain-list=1: 79(----)
> 
> dom0 lspci -vv -s 0:07:0.0 | grep Address
>                 Address: 00000000fee10000  Data: 40ba (dest ID=16, APIC ID of CPU1, vector=186)
> 
> domU lspci -vv -s 00:05.0 | grep Address
>                 Address: 00000000fee02000  Data: 40b1
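
The dest ID / vector annotations above follow the standard MSI layout: the
destination APIC ID sits in bits 19:12 of the message address and the vector
in bits 7:0 of the message data. A minimal C sketch of that decode
(illustrative only), applied to the two Address/Data pairs just quoted:

/* msi_decode.c - decode the lspci Address/Data pairs quoted above. */
#include <stdint.h>
#include <stdio.h>

static void decode(const char *label, uint64_t addr, uint16_t data)
{
    unsigned int dest_id = (unsigned int)((addr >> 12) & 0xff); /* addr bits 19:12 */
    unsigned int vector  = data & 0xff;                         /* data bits 7:0  */
    printf("%s: dest ID=%u, vector=%u (0x%02x)\n", label, dest_id, vector, vector);
}

int main(void)
{
    decode("dom0", UINT64_C(0x00000000fee10000), 0x40ba); /* dest ID 16, vector 186 */
    decode("domU", UINT64_C(0x00000000fee02000), 0x40b1); /* dest ID 2,  vector 177 */
    return 0;
}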
> 
> I followed the call hierarchy for guest_io_write() as far as I can:
> 
> do_page_fault
>   fixup_page_fault
>     handle_gdt_ldt_mapping_fault
>       do_general_protection
>         emulate_privileged_op
>           guest_io_write
> 
> -------------------------------------------- END DATA
> 
> -----Original Message-----
> From: Zhang, Xiantao [mailto:xiantao.zhang@xxxxxxxxx]
> Sent: Tuesday, October 20, 2009 6:11 PM
> To: Cinco, Dante; He, Qing
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
> Subject: RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
> 
> Only need to apply the two patches and the previous one should be
> discarded. 
> Xiantao
> 
> -----Original Message-----
> From: Cinco, Dante [mailto:Dante.Cinco@xxxxxxx]
> Sent: Wednesday, October 21, 2009 1:27 AM
> To: Zhang, Xiantao; He, Qing
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Keir Fraser
> Subject: RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
> 
> Xiantao,
> With the latest patch (Fix-irq-affinity-msi3.patch,
> Mask_msi_irq_when_programe_it.patch), should I still apply the
> previous patch which removes "desc->handler->set_affinity(irq,
> *cpumask_of(v->processor))", or was that just a one-time experiment
> that should now be discarded?
> Dante
> 
> -----Original Message-----
> From: Zhang, Xiantao [mailto:xiantao.zhang@xxxxxxxxx]
> Sent: Tuesday, October 20, 2009 12:51 AM
> To: Zhang, Xiantao; Cinco, Dante; He, Qing
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Fraser
> Subject: RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
> 
> The attached two patches should fix the issues. For the issue which
> complains "(XEN) do_IRQ: 1.187 No irq handler for vector (irq -1)",
> I root-caused it. Currently, when programming MSI address & data, Xen
> doesn't perform the mask/unmask logic to avoid inconsistent interrupt
> generation. In this case, according to the spec, the interrupt
> generation behavior is undefined, and the device may generate MSI
> interrupts with the expected vector and an incorrect destination ID,
> which leads to the issue. The attached two patches should address it.
> Fix-irq-affinity-msi3.patch:  same as the previous post.
> Mask_msi_irq_when_programe_it.patch:  disable the irq when programming MSI.
> 
> Xiantao
> 
> 
> Zhang, Xiantao wrote:
>> Cinco, Dante wrote:
>>> Xiantao,
>>> With vcpus=16 (all CPUs) in domU, I'm able to change the IRQ
>>> smp_affinity to any one-hot value and see the interrupts routed to
>>> the specified CPU. Every now and then though, both domU and dom0
>>> will permanently lockup (cold reboot required) after changing the
>>> smp_affinity. If I change it manually via command-line, it seems to
>>> be okay but if I change it within a script (such as shifting-left a
>>> walking "1" to test all 16 CPUs), it will lockup part way through
>>> the script.
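
The script itself is not included in the thread; purely as an illustration of
the walking-"1" pattern described above (assuming IRQ 48 and 16 CPUs as in
this report), a hypothetical sketch in C might look like:

/* walk_affinity.c - hypothetical reconstruction of a walking-"1"
 * smp_affinity test; not the actual script from this report. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const int irq = 48;    /* IRQ number used in this report */
    const int ncpus = 16;  /* vcpus=16 as described above */
    char path[64];

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);

    for (int cpu = 0; cpu < ncpus; cpu++) {
        FILE *f = fopen(path, "w");
        if (!f) {
            perror(path);
            return 1;
        }
        /* One-hot mask: CPU0 -> 1, CPU1 -> 2, ..., CPU15 -> 8000. */
        fprintf(f, "%x\n", 1u << cpu);
        fclose(f);
        sleep(1);          /* let some interrupts arrive before moving on */
    }
    return 0;
}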
>> 
>> I can't reproduce the failure on my side after applying the patches,
>> even with a similar script that changes the IRQ's affinity. Could you
>> share your script with me?
>> 
>> 
>> 
>>> Other observations:
>>> 
>>> In the above log, I had changed the smp_affinity for IRQ 66 but IRQ
>>> 68 and 69 got masked.
>> 
>> We can see the warning "No irq handler for vector", but it shouldn't
>> hang the host; it may be related to another potential issue and may
>> need further investigation.
>> 
>> Xiantao
>> 
>>> -----Original Message-----
>>> From: Zhang, Xiantao [mailto:xiantao.zhang@xxxxxxxxx]
>>> Sent: Friday, October 16, 2009 5:59 PM
>>> To: Cinco, Dante; He, Qing
>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; Fraser; Fraser
>>> Subject: RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
>>> 
>>>  Dante,
>>>  It should be another issue, as you described. Can you try the
>>> following code to see whether it works for you? Just a try.
>>> Xiantao
>>> 
>>> diff -r 0705efd9c69e xen/arch/x86/hvm/hvm.c
>>> --- a/xen/arch/x86/hvm/hvm.c    Fri Oct 16 09:04:53 2009 +0100
>>> +++ b/xen/arch/x86/hvm/hvm.c    Sat Oct 17 08:48:23 2009 +0800
>>> @@ -243,7 +243,7 @@ void hvm_migrate_pirqs(struct vcpu *v)
>>>              continue;
>>>          irq = desc - irq_desc;
>>>          ASSERT(MSI_IRQ(irq));
>>> -        desc->handler->set_affinity(irq, *cpumask_of(v->processor));
>>> +        //desc->handler->set_affinity(irq, *cpumask_of(v->processor));
>>>          spin_unlock_irq(&desc->lock);
>>>      }
>>>      spin_unlock(&d->event_lock);
>>> 
>>> -----Original Message-----
>>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Cinco,
>>> Dante Sent: Saturday, October 17, 2009 2:24 AM
>>> To: Zhang, Xiantao; He, Qing
>>> Cc: Keir; xen-devel@xxxxxxxxxxxxxxxxxxx; Fraser
>>> Subject: RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
>>> 
>>> Xiantao,
>>> I'm still losing the interrupts with your patch, but I see some
>>> differences. To simplify the data, I'm only going to focus on the
>>> first function of my 4-function PCI device.
>>> 
>>> After changing the IRQ affinity, the IRQ is not masked anymore
>>> (unlike before the patch). What stands out for me is that the new
>>> vector (219) reported by "guest interrupt information" does not match
>>> the vector (187) in dom0 lspci. Before the patch, the new vector in
>>> "guest interrupt information" matched the new vector in dom0 lspci
>>> (the dest ID in dom0 lspci was unchanged). I also saw this message
>>> pop up on the Xen console when I changed smp_affinity:
>>> 
>>> (XEN) do_IRQ: 1.187 No irq handler for vector (irq -1).
>>> 
>>> 187 is the vector from dom0 lspci before and after the smp_affinity
>>> change but "guest interrupt information" reports the new vector is
>>> 219. To me, this looks like the new MSI message data (with
>>> vector=219) did not get written into the PCI device, right?
>>> 
>>> Here's a comparison before and after changing smp_affinity from ffff
>>> to 2 (dom0 is pvops 2.6.31.1, domU is 2.6.30.1):
>>> 
>>> ------------------------------------------------------------------------
>>> 
>>> /proc/irq/48/smp_affinity=ffff (default):
>>> 
>>> dom0 lspci: Address: 00000000fee00000  Data: 40bb (vector=187)
>>> 
>>> domU lspci: Address: 00000000fee00000  Data: 4071 (vector=113)
>>> 
>>> qemu-dm-dpm.log: pt_msi_setup: msi mapped with pirq 4f (79)
>>>                  pt_msi_update: Update msi with pirq 4f gvec 71 gflags 0
>>> 
>>> Guest interrupt information:
>>> (XEN) IRQ: 74, IRQ affinity:0x00000001, Vec:187 type=PCI-MSI status=00000010 in-flight=0 domain-list=1: 79(----)
>>> 
>>> Xen console:
>>> (XEN) [VT-D]iommu.c:1289:d0 domain_context_unmap:PCIe: bdf = 7:0.0
>>> (XEN) [VT-D]iommu.c:1175:d0 domain_context_mapping:PCIe: bdf = 7:0.0
>>> (XEN) [VT-D]io.c:301:d0 VT-d irq bind: m_irq = 4f device = 5 intx = 0
>>> (XEN) io.c:326:d0 pt_irq_destroy_bind_vtd: machine_gsi=79 guest_gsi=36, device=5, intx=0
>>> (XEN) io.c:381:d0 XEN_DOMCTL_irq_unmapping: m_irq = 0x4f device = 0x5 intx = 0x0
>>> 
>>> ------------------------------------------------------------------------
>>> 
>>> /proc/irq/48/smp_affinity=2:
>>> 
>>> dom0 lspci: Address: 00000000fee10000  Data: 40bb (dest ID changed
>>> from 0 (APIC ID of CPU0) to 16 (APIC ID of CPU1), vector unchanged)
>>> 
>>> domU lspci: Address: 00000000fee02000  Data: 40b1 (dest ID changed
>>> from 0 (APIC ID of CPU0) to 2 (APIC ID of CPU1), new vector=177)
>>> 
>>> Guest interrupt information:
>>> (XEN) IRQ: 74, IRQ affinity:0x00000002, Vec:219 type=PCI-MSI status=00000010 in-flight=0 domain-list=1: 79(----)
>>> 
>>> qemu-dm-dpm.log: pt_msi_update: Update msi with pirq 4f gvec 71 gflags 2
>>>                  pt_msi_update: Update msi with pirq 4f gvec b1 gflags 2
>> 
>> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
