xen-devel

RE: [Xen-devel] pv-ops domU not working with MSI interrupts on Nehalem

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: RE: [Xen-devel] pv-ops domU not working with MSI interrupts on Nehalem
From: "Lin, Ray" <Ray.Lin@xxxxxxx>
Date: Fri, 8 Oct 2010 11:40:40 -0600
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Bruce Edge <bruce.edge@xxxxxxxxx>
Delivery-date: Fri, 08 Oct 2010 10:41:33 -0700
In-reply-to: <20101008173054.GA20884@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: ActnDsG3igmD1WSvSEyZUC/8ONbFQgAALZYg
Thread-topic: [Xen-devel] pv-ops domU not working with MSI interrupts on Nehalem
This is what I got from dom0 when I brought up the domU. There is no complaint from the IOMMU.


-Ray

 about to get started...
(XEN) traps.c:2310:d2 Domain attempted WRMSR 000000000000008b from 0x0000001500000000 to 0x0000000000000000.
    [message repeated 14 times]
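
The WRMSR noise above is benign: MSR 0x8b is IA32_BIOS_SIGN_ID, the microcode update signature register. The domU kernel writes 0 to it as the first step of the standard microcode revision check, Xen drops MSR writes from unprivileged domains and logs the attempt, and the current revision (0x15) is visible in the upper 32 bits of the old value. A minimal sketch to read the same register from dom0, assuming Python, root, and the msr kernel module loaded (modprobe msr):

import struct

MSR_IA32_BIOS_SIGN_ID = 0x8B  # microcode update signature register

# /dev/cpu/N/msr exposes MSRs: seek to the MSR number, read 8 bytes.
with open("/dev/cpu/0/msr", "rb") as f:
    f.seek(MSR_IA32_BIOS_SIGN_ID)
    value, = struct.unpack("<Q", f.read(8))

# The microcode revision sits in the upper 32 bits (0x15 in the log above).
print("microcode revision: %#x" % (value >> 32))
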
[ 5171.932037] vif2.0: no IPv6 routers present
[ 5178.026355] blkback: ring-ref 8, event-channel 87, protocol 1 (x86_64-abi)
[ 5221.204637] pciback 0000:07:00.0: enabling device (0000 -> 0003)
[ 5221.204696] xen: registering gsi 32 triggering 0 polarity 1
[ 5221.204716] xen_allocate_pirq: returning irq 32 for gsi 32
[ 5221.204735] xen: --> irq=32
[ 5221.204749] Already setup the GSI :32
[ 5221.204764] pciback 0000:07:00.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
[ 5221.204819] pciback 0000:07:00.0: setting latency timer to 64
[ 5221.205376]   alloc irq_desc for 474 on node 0
[ 5221.205400]   alloc kstat_irqs on node 0
[ 5221.270496] pciback 0000:07:00.1: enabling device (0000 -> 0003)
[ 5221.270536] xen: registering gsi 42 triggering 0 polarity 1
[ 5221.270576] xen_allocate_pirq: returning irq 42 for gsi 42
[ 5221.270595] xen: --> irq=42
[ 5221.270608] Already setup the GSI :42
[ 5221.270624] pciback 0000:07:00.1: PCI INT B -> GSI 42 (level, low) -> IRQ 42
[ 5221.270660] pciback 0000:07:00.1: setting latency timer to 64
[ 5221.271210]   alloc irq_desc for 473 on node 0
[ 5221.271234]   alloc kstat_irqs on node 0
[ 5221.333809] pciback 0000:07:00.2: enabling device (0000 -> 0003)
[ 5221.333849] xen: registering gsi 47 triggering 0 polarity 1
[ 5221.333888] xen_allocate_pirq: returning irq 47 for gsi 47
[ 5221.333907] xen: --> irq=47
[ 5221.333921] Already setup the GSI :47
[ 5221.333936] pciback 0000:07:00.2: PCI INT C -> GSI 47 (level, low) -> IRQ 47
[ 5221.333972] pciback 0000:07:00.2: setting latency timer to 64
[ 5221.334523]   alloc irq_desc for 472 on node 0
[ 5221.334546]   alloc kstat_irqs on node 0
[ 5221.595255] pciback 0000:07:00.3: enabling device (0000 -> 0003)
[ 5221.595340] xen: registering gsi 41 triggering 0 polarity 1
[ 5221.595373] xen_allocate_pirq: returning irq 41 for gsi 41
[ 5221.595422] xen: --> irq=41
[ 5221.595445] Already setup the GSI :41
[ 5221.595474] pciback 0000:07:00.3: PCI INT D -> GSI 41 (level, low) -> IRQ 41
[ 5221.595530] pciback 0000:07:00.3: setting latency timer to 64
[ 5221.596417]   alloc irq_desc for 471 on node 0
[ 5221.596457]   alloc kstat_irqs on node 0
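
For each of the four functions of the passed-through device (0000:07:00.0 through .3), pciback enables the function and the legacy INTx pin is routed GSI -> pirq -> IRQ before the device is handed to the guest; the MSI vectors are only set up later by pcifront inside the domU. A throwaway sketch to pull the device/GSI/IRQ mapping out of a saved copy of this log (the file name is made up):

import re

# Matches lines like:
# pciback 0000:07:00.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
pattern = re.compile(
    r"pciback (\S+): PCI INT (\w) -> GSI (\d+) \(level, low\) -> IRQ (\d+)")

with open("dom0-dmesg.txt") as f:  # hypothetical capture of the log above
    for line in f:
        m = pattern.search(line)
        if m:
            dev, pin, gsi, irq = m.groups()
            print("%s: INT%s -> GSI %s -> IRQ %s" % (dev, pin, gsi, irq))
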


-----Original Message-----
From: Konrad Rzeszutek Wilk [mailto:konrad.wilk@xxxxxxxxxx] 
Sent: Friday, October 08, 2010 10:31 AM
To: Lin, Ray
Cc: Bruce Edge; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] pv-ops domU not working with MSI interrupts on Nehalem

On Fri, Oct 08, 2010 at 10:48:01AM -0600, Lin, Ray wrote:
> 
> I just tried Bruce's latest kernel build based on Konrad's
> devel/xen-pcifront-0.7. It doesn't help the issue we have. The driver still
> doesn't recognize the source of the interrupts, even though the interrupts happen.
> 
> 
> 124:   87792       0       0       0       0       0   12208       0       0       0       0       0       0       0  xen-pirq-pcifront-msi  HW_TACHYON
> 125:   89692       0       0       0   10308       0       0       0       0       0       0       0       0       0  xen-pirq-pcifront-msi  HW_TACHYON
> 126:   90979       0    9021       0       0       0       0       0       0       0       0       0       0       0  xen-pirq-pcifront-msi  HW_TACHYON
> 127:  100000       0       0       0       0       0       0       0       0       0       0       0       0       0  xen-pirq-pcifront-msi  HW_TACHYON
> 
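The counters quoted above do show delivery: each row sums to exactly 100000 (e.g. 87792 + 12208 on IRQ 124), so every interrupt fires even though the driver cannot identify the source. A quick sketch to summarize such rows from /proc/interrupts inside the domU:

# Run inside the domU; sums the per-CPU counts on each
# xen-pirq-pcifront-msi line (format as quoted above).
with open("/proc/interrupts") as f:
    for line in f:
        if "xen-pirq-pcifront-msi" not in line:
            continue
        irq, rest = line.split(":", 1)
        counts = []
        for tok in rest.split():
            if not tok.isdigit():
                break  # first non-numeric token begins the chip/handler names
            counts.append(int(tok))
        print("IRQ %s: %d interrupts across %d CPUs"
              % (irq.strip(), sum(counts), len(counts)))
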

And on the Xen hypervisor side, do you still get the DMAR failure when reading the memory?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
