
RE: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)



Zhang, Xiantao wrote:
> Dante,
>    Had you applied the two patches when you did the testing?
> Without them we can reproduce the issue you reported, but with them
> the issue is gone.  The root cause is that when programming MSI we
> have to mask the MSI interrupt source first; otherwise the device may
> generate inconsistent interrupts with an incorrect destination and the
> right vector, or an incorrect vector and the right destination.
> 
> For example, if the old MSI interrupt info is 0.186, meaning the
> destination id is 0 and the vector is 186, then when the IRQ migrates
> to another cpu (e.g. cpu 1) the MSI info should be changed to 1.194.
> When you program the MSI info into the pci device without masking it
> first, the device may generate an interrupt as 1.186 or 0.194.
> Obviously, interrupts with the info 1.186 or 0.194 don't exist, yet
> according to the spec any combination is possible while the update is
> in progress.  Since Xen writes the addr field first, the device is
> likely to generate 1.186 rather than 0.194, i.e. an interrupt with the
> new destination and the old vector (1.186).  Of my two patches, one
> fixes the guest interrupt affinity issue (a race exists between the
> guest EOIing the old vector and the guest setting the new vector), and
> the other safely programs the MSI info into pci devices to avoid
> generating such inconsistent interrupts.
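
For illustration, a minimal sketch of the masked update sequence described
above; the cfg_* accessors, the capability-relative offsets and the function
name are placeholders, not Xen's actual code or the patch itself:

#include <stdint.h>

/* Hypothetical config-space accessors standing in for whatever the
 * platform provides; offsets are relative to the MSI capability. */
extern uint16_t cfg_read16(unsigned int off);
extern void     cfg_write16(unsigned int off, uint16_t val);
extern void     cfg_write32(unsigned int off, uint32_t val);

#define MSI_CTRL     0x02    /* Message Control register          */
#define MSI_ADDR_LO  0x04    /* Message Address: destination id   */
#define MSI_DATA     0x08    /* Message Data: vector (32-bit MSI) */
#define MSI_ENABLE   0x0001  /* MSI enable bit in Message Control */

/* Move an MSI source, e.g. from 0.186 to 1.194, without ever letting
 * the device observe a mixed pair such as 1.186 or 0.194. */
static void msi_set_affinity(uint32_t new_addr, uint16_t new_data)
{
    uint16_t ctrl = cfg_read16(MSI_CTRL);

    /* 1. Mask the source first (real code would prefer the per-vector
     *    mask bit when the device supports it). */
    cfg_write16(MSI_CTRL, ctrl & ~MSI_ENABLE);

    /* 2. Update both halves: new destination, then new vector. */
    cfg_write32(MSI_ADDR_LO, new_addr);   /* e.g. destination cpu 1 */
    cfg_write16(MSI_DATA,    new_data);   /* e.g. vector 194        */

    /* 3. Unmask: from now on the device can only generate 1.194. */
    cfg_write16(MSI_CTRL, ctrl | MSI_ENABLE);
}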
> 
>> (XEN) traps.c:1626: guest_io_write::pci_conf_write data=0x40ba
> 
> This write should come from dom0 (most likely from Qemu).  If it
> does, we may have to prohibit such unsafe MSI writes in Qemu.

Another issue may exist which leads to this problem.  Currently, both Qemu and 
the hypervisor can program MSI, but Xen lacks a synchronization mechanism between 
them to avoid the race.  As said in the last mail, Qemu shouldn't be allowed to do 
unsafe writes of MSI info; instead, it should go through the hypervisor via a 
hypercall for MSI programming.  Otherwise, Qemu may write stale MSI info to PCI 
devices and trigger strange issues like this one.
Keir/Ian,
        What's your opinion on this potential issue?  Maybe we need to add a 
lock between them, or simply allow only the hypervisor to do the writing?
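
For what it's worth, a rough sketch of the lock-based option (the names here
are illustrative, not existing Xen interfaces): every MSI config write,
whether it originates from the hypervisor's own affinity change or is
forwarded from Qemu/dom0 via a hypercall, would funnel through one serialized
entry point.

#include <stdint.h>

/* Per-device state; msi_lock would be a spinlock_t in the hypervisor,
 * modelled here with a GCC atomic test-and-set for brevity. */
struct msi_dev {
    volatile int msi_lock;
    uint32_t addr;   /* last programmed MSI address (destination) */
    uint16_t data;   /* last programmed MSI data (vector)         */
};

static void take_lock(volatile int *l) { while (__sync_lock_test_and_set(l, 1)) ; }
static void drop_lock(volatile int *l) { __sync_lock_release(l); }

/* Single entry point for every MSI update.  The hypervisor's own IRQ
 * migration path and a PHYSDEVOP-style hypercall from Qemu would both
 * call this, so neither can interleave with a masked update in
 * progress or overwrite newer info with a stale address/data pair. */
void msi_conf_write(struct msi_dev *d, uint32_t addr, uint16_t data)
{
    take_lock(&d->msi_lock);
    /* mask, write address, write data, unmask -- as in the earlier sketch */
    d->addr = addr;
    d->data = data;
    drop_lock(&d->msi_lock);
}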
Xiantao
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel