
Re: [Xen-devel] Xen-unstable: xen panic RIP: dpci_softirq



..snip..
> > # cat /proc/interrupts |grep eth
> >  36:     384183          0  xen-pirq-ioapic-level  eth0
> >  63:          1          0  xen-pirq-msi-x     eth1
> >  64:         24     661961  xen-pirq-msi-x     eth1-rx-0
> >  65:        205          0  xen-pirq-msi-x     eth1-rx-1
> >  66:        162          0  xen-pirq-msi-x     eth1-tx-0
> >  67:        190          0  xen-pirq-msi-x     eth1-tx-1
> > Is that a similar distribution of IRQ/MSI-X to what you end up having?
> 
> Are these from when they are still active and assigned to dom0 (and not
> owned by pciback), or from within the guest?

In the guest.
> 
> I attached the /proc/interrupts for both dom0 and guest 16, with all guests
> running (on a Xen from before the dpci changes).
> With the devices passed through I only see one line with the IRQ of a 
> PCI soundcard passed through to a PV guest:
>   22:      38959          0          0          0          0          0  
> xen-pirq-ioapic-level  xen-pciback[0000:03:06.0]
> 
> All the other devices passed through (to HVM guests) are not visible in 
> /proc/interrupts of dom0.

Right.
> 
> In the guest i do get these:
>  23:         35          0          0          0  xen-pirq-ioapic-level  
> uhci_hcd:usb3
>  40:   13440077          0          0          0  xen-pirq-ioapic-level  
> cx25821[1], cx25821[1]

That is a bit odd. You have two 'request_irq' registrations off this single
device, which would imply that there are _two_ devices using the same
interrupt line.
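
For context: /proc/interrupts prints one name per handler registered on a
line, so "cx25821[1], cx25821[1]" means request_irq() was called twice for
guest IRQ 40. A minimal sketch of how a driver ends up there, assuming a
shared-IRQ setup; the demo_* names are invented for illustration and this is
not the real cx25821 code:

#include <linux/interrupt.h>
#include <linux/module.h>

static irqreturn_t demo_isr(int irq, void *dev_id)
{
	/* A real shared handler must check whether its own device
	 * raised the line and return IRQ_NONE if it did not. */
	return IRQ_HANDLED;
}

static int demo_bind_twice(int irq, void *dev_a, void *dev_b)
{
	int ret;

	/* First registration, e.g. for the video capture half. */
	ret = request_irq(irq, demo_isr, IRQF_SHARED, "cx25821[1]", dev_a);
	if (ret)
		return ret;

	/* Second registration on the very same line, e.g. for the
	 * audio half. Each call appends another irqaction, which is
	 * why the name appears twice in /proc/interrupts. */
	ret = request_irq(irq, demo_isr, IRQF_SHARED, "cx25821[1]", dev_b);
	if (ret)
		free_irq(irq, dev_a);
	return ret;
}

Either way - two devices sharing the line, or one driver registering two
handlers - the name would show up twice, as it does in your guest.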

But how is that possible when your device:

0a:00.0 Multimedia video controller: Conexant Systems, Inc. Device 8210
        Flags: bus master, fast devsel, latency 0, IRQ 47
        Memory at fe200000 (64-bit, non-prefetchable) [size=2M]
        Capabilities: [40] Express Endpoint, MSI 00
        Capabilities: [80] Power Management version 3
        Capabilities: [90] Vital Product Data
        Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [200] Virtual Channel
        Kernel driver in use: pciback

Has only one IRQ! What is the name of this device? Perhaps I have another one
that is similar to it. Could you attach:

 a) 'lspci -vvvv' from your guest please?

 b) The full 'dmesg' from your guest?

 c) the /var/log/xen/qemu-dm-XXX ? Hmm, you are using qemu-xen, so it won't log
    that much information. Could you try 'qemu-traditional', or would that
    mess with XHCI?


In regards to your other question:

        Hi Konrad,

        Here is the xl dmesg output with this patch (attached, with the
        debug-key 'i' and 'M' output). What I don't get is that d16 and d17
        each have a device passed through that seems to be using the same
        pirq 87?

Those pirq numbers are per-guest, so d16 and d17 each having a pirq 87 is
fine. They are the MSI values after 84 or so.
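
To illustrate the per-guest nature of those numbers - a toy model only, not
Xen's actual bookkeeping, and the machine IRQ numbers below are invented:

#include <stdio.h>

/* Toy model: each domain has its own pirq -> machine IRQ table, so
 * "pirq 87" in d16 and "pirq 87" in d17 are unrelated entries. */
struct domain_pirqs {
	int domid;
	int pirq_to_mirq[128];
};

int main(void)
{
	struct domain_pirqs d16 = { .domid = 16 };
	struct domain_pirqs d17 = { .domid = 17 };

	d16.pirq_to_mirq[87] = 47;	/* invented mapping */
	d17.pirq_to_mirq[87] = 52;	/* invented mapping */

	printf("d%d pirq 87 -> machine IRQ %d\n",
	       d16.domid, d16.pirq_to_mirq[87]);
	printf("d%d pirq 87 -> machine IRQ %d\n",
	       d17.domid, d17.pirq_to_mirq[87]);
	return 0;
}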

Back to your crash:

d16 OK-softirq 458msec ago, state:1, 52039 count, [prev:ffff83054ef283e0, 
next:ffff83054ef283e0] ffff83051b95fd28MACH_PCI_SHIFT MAPPED_SHIFT 
GUEST_PCI_SHIFT  PIRQ:0
d16 OK-raise   489msec ago, state:1, 52049 count, [prev:0000000000200200, 
next:0000000000100100] ffff83051b95fd28MACH_PCI_SHIFT MAPPED_SHIFT 
GUEST_PCI_SHIFT  PIRQ:0
d16 ERR-poison 561msec ago, state:0, 1 count, [prev:0000000000200200, 
next:0000000000100100] ffff83051b95fd28MACH_PCI_SHIFT MAPPED_SHIFT 
GUEST_PCI_SHIFT  PIRQ:0
d16 Z-softirq  731msec ago, state:3, 3 count, [prev:ffff83054ef283e0, 
next:ffff83054ef283e0] ffff83051b95fd28MACH_PCI_SHIFT MAPPED_SHIFT 
GUEST_PCI_SHIFT  PIRQ:0
domain_crash called from io.c:938
Domain 16 reported crashed by domain 32767 on cpu#5:
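
A side note on the prev/next values in those trace entries: 0x00100100 and
0x00200200 are the classic list-poison constants (the same values Linux
uses; Xen carries them in its list.h), which list_del() writes into a
deleted entry so that any later use faults predictably:

/* Written by list_del() into the dead entry:
 *   entry->next = LIST_POISON1
 *   entry->prev = LIST_POISON2
 * which matches the prev:...200200 / next:...100100 seen above, i.e.
 * the dpci entry was touched again after it had been deleted.
 */
#define LIST_POISON1  ((void *) 0x00100100)
#define LIST_POISON2  ((void *) 0x00200200)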

All of it points to the legacy interrupt - that is the one that starts at Xen
IRQ 47 (guest IRQ 40):
 io.c:550: d16: bind: m_gsi=47 g_gsi=40 dev=00.00.6 intx=0
IRQ:  47 affinity:02 vec:d1 type=IO-APIC-level   status=00000030 in-flight=1 
domain-list=16: +47(P-M),

which looks OK.

I am puzzled by the driver binding twice to the same interrupt, but perhaps that
is just a buggy driver.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel