
Re: [Xen-devel] domU guest for xcp 0.1.1


  • To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
  • From: Ritu kaur <ritu.kaur.us@xxxxxxxxx>
  • Date: Thu, 18 Mar 2010 06:43:49 -0700
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 18 Mar 2010 06:44:56 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Ian,

pcifront_handler_aer is the callback function:

...
        err = bind_evtchn_to_irqhandler(pdev->evtchn, pcifront_handler_aer,
                                        0, "pcifront", pdev);
        if (err < 0) {
                xenbus_free_evtchn(pdev->xdev, pdev->evtchn);
                xenbus_dev_fatal(pdev->xdev, err, "Failed to bind evtchn to "
                                 "irqhandler.\n");
                return err;
        }
...

In pcifront_handler_aer, schedule_pcifront_aer_op is called.

irqreturn_t pcifront_handler_aer(int irq, void *dev)
{
        struct pcifront_device *pdev = dev;
        schedule_pcifront_aer_op(pdev);
        return IRQ_HANDLED;
}

So I am assuming it is called during the normal path as well.
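
Either way, a quick check of whether this event channel ever fires at all
would be a printk at the very top of the handler, before any of the AER flag
checks inside schedule_pcifront_aer_op. A debug-only sketch (the message text
is mine):

irqreturn_t pcifront_handler_aer(int irq, void *dev)
{
        struct pcifront_device *pdev = dev;

        /* Debug only: log every notification on this event channel,
         * before any AER-specific flag tests are done. */
        if (printk_ratelimit())
                printk(KERN_DEBUG "pcifront: aer evtchn irq %d fired\n", irq);

        schedule_pcifront_aer_op(pdev);
        return IRQ_HANDLED;
}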

Yes, my NIC device shares an interrupt (IRQ 17) with the USB and IDE devices in dom0.

cat /proc/interrupts in domU shows that the interface never receives any interrupts (which probably confirms that pcifront itself does not receive them). I am passing only the 0000:08:01.0 NIC device through to the domU; it shares IRQ 17 with the USB/IDE devices in dom0.
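
As another data point, a throwaway counter like the sketch below, registered
from the driver's probe path in domU, would show whether the line ever fires
in the guest at all. The names here are mine, and request_irq() with
IRQF_SHARED only succeeds if the real handler on that IRQ was also registered
as shared:

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <asm/atomic.h>

static atomic_t dbg_irq_hits = ATOMIC_INIT(0);

/* Debug only: never claims the interrupt, just counts it. */
static irqreturn_t dbg_irq_count(int irq, void *dev_id)
{
        atomic_inc(&dbg_irq_hits);
        if (printk_ratelimit())
                printk(KERN_DEBUG "dbg: irq %d fired, %d hits so far\n",
                       irq, atomic_read(&dbg_irq_hits));
        return IRQ_NONE;        /* leave the handling to the real driver */
}

/*
 * In the driver's probe(), after pci_enable_device():
 *
 *     err = request_irq(pci_dev->irq, dbg_irq_count, IRQF_SHARED,
 *                       "dbg-irq-count", pci_dev);
 */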

Thanks

On Thu, Mar 18, 2010 at 2:17 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
I thought AER stuff was only called on error conditions and isn't in the
normal pci passthrough paths so I don't think you would expect to see
any messages from schedule_pcifront_aer_op in normal operation.

I'm not sure about your actual problem (we're pushing the boundaries of
my immediate knowledge of pcifront/back here), but it looks as if it might
relate to the interrupt being shared with other devices in domain 0.
What does /proc/interrupts say on both ends? Which devices are you
trying to pass through, only 0000:08:01.0? Does 0000:08:01.0 share an
interrupt with your USB controller and/or ATA controller in domain 0?

Ian.

On Wed, 2010-03-17 at 18:44 +0000, Ritu kaur wrote:
> Pasi, Ian
>
> I debugged this further with the assumption that the IRQ follows the
> path IDT -> hypervisor -> pciback -> pcifront -> actual device. I added
> a printk message while binding to an event channel, and another in the
> event-channel callback function in pcifront. I do see my printk message
> (while binding to the event channel in domU), so I know the kernel has
> the correct module. After the NIC device is enabled via ifconfig in
> domU, I do not see the messages (added in the event-channel callback
> function) from pcifront, so I believe the interrupt is not delivered to
> pcifront itself.
>
> static inline void schedule_pcifront_aer_op(struct pcifront_device *pdev)
> {
>         if (test_bit(_XEN_PCIB_active,
>                      (unsigned long *)&pdev->sh_info->flags)
>             && !test_and_set_bit(_PDEVB_op_active, &pdev->flags)) {
>                 dev_dbg(&pdev->xdev->dev, "schedule aer frontend job\n");
>                 printk(KERN_DEBUG "schedule aer frontend job %d\n",
>                        pdev->irq);    <<<<<<< never seen in dmesg in domU
>                 schedule_work(&pdev->op_work);
>         }
> }
>
> dmesg on dom0 says "irq 17: nobody cared..."; the forums have some old
> discussions about this from around 2006, so I did not look into it in
> detail.
>
> Any input would be very much appreciated.
>
> dmesg in dom0 and domU follows.
>
> Thanks
> /********************** dmesg on dom0 **********************/
> pciback 0000:08:01.0: PCI INT A -> GSI 17 (level, low) -> IRQ 17
> irq 17: nobody cared (try booting with the "irqpoll" option)
> Pid: 0, comm: swapper Tainted: G       2.6.27.42-0.1.1.xs0.1.1.737.1065xen #1
>  [<c01544f7>] __report_bad_irq+0x27/0x90
>  [<c015485c>] note_interrupt+0x2fc/0x330
>  [<f01df92d>] ? usb_hcd_irq+0x4d/0xe0 [usbcore]
>  [<c0153931>] ? handle_IRQ_event+0x31/0x90
>  [<c01551e4>] handle_level_irq+0xe4/0x110
>  [<c0107733>] do_IRQ+0x43/0x90
>  [<c01413b9>] ? ktime_get+0x19/0x40
>  [<c026cfcf>] evtchn_do_upcall+0xdf/0x1f0
>  [<c0105565>] hypervisor_callback+0x41/0x49
>  [<c010797b>] ? xen_safe_halt+0x8b/0xc0
>  [<c010afde>] xen_idle+0x1e/0x50
>  [<c0103728>] cpu_idle+0x58/0xa0
>  [<c0338f4e>] rest_init+0x4e/0x60
>  =======================
> handlers:
> [<f01df8e0>] (usb_hcd_irq+0x0/0xe0 [usbcore])
> Disabling IRQ #17
> ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
> ata1.00: cmd ca/00:08:31:08:14/00:00:00:00:00/e0 tag 0 dma 4096 out
>          res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
> ata1.00: status: { DRDY }
> ata1: soft resetting link
> ata1.00: qc timeout (cmd 0x27)
> ata1.00: failed to read native max address (err_mask=0x4)
> ata1.00: revalidation failed (errno=-5)
> ata1: soft resetting link
> ata1.00: qc timeout (cmd 0x27)
> ata1.00: failed to read native max address (err_mask=0x4)
> ata1.00: revalidation failed (errno=-5)
> ata1: soft resetting link
> ata1.00: qc timeout (cmd 0x27)
> ata1.00: failed to read native max address (err_mask=0x4)
> ata1.00: revalidation failed (errno=-5)
> ata1.00: disabled
> ata1.00: device reported invalid CHS sector 0
> ata1: soft resetting link
> ata1: EH complete
> sd 0:0:0:0: [sda] Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK,SUGGEST_OK
> end_request: I/O error, dev sda, sector 1312817
> Buffer I/O error on device sda1, logical block 164102
> lost page write due to I/O error on sda1
>
>
> /************************** dmesg on domU **************************/
> [    5.994657] EXT3 FS on xvda1, internal journal
> [    8.431645] loop: module loaded
> [   10.043554] NET: Registered protocol family 10
> [   10.044013] lo: Disabled Privacy Extensions
> [   11.859410] lp: driver loaded but no devices found
> [   11.965333] ppdev: user-space parallel port driver
> [   20.076012] eth0: no IPv6 routers present
> [  170.192510] ncr 0000:00:00.0: enabling device (0000 -> 0002)
> [  170.192551] ncr 0000:00:00.0: Xen PCI enabling IRQ: 17
> [  170.192571] ncr: Found an ncr device (cfg revision 0)...
> [  287.816020] ------------[ cut here ]------------
> [  287.816031] WARNING: at net/sched/sch_generic.c:261 dev_watchdog+0xf8/0x188()
> [  287.816037] NETDEV WATCHDOG: ncr (): transmit queue 0 timed out
> [  287.816041] Modules linked in: ncr ppdev parport_pc lp parport
> acpi_cpufreq processor cpufreq_powersave cpufreq_stats
> cpufreq_ondemand freq_table cpufreq_userspace cpufreq_conservative
> ipv6 loop evdev pcspkr xen_netfront ext3 jbd mbcache xen_blkfront
> thermal_sys
> [  287.816113] Pid: 0, comm: swapper Not tainted 2.6.32.9 #4
> [  287.816118] Call Trace:
> [  287.816127]  [<c11f2db1>] ? dev_watchdog+0xf8/0x188
> [  287.816135]  [<c11f2db1>] ? dev_watchdog+0xf8/0x188
> [  287.816143]  [<c1037a1b>] ? warn_slowpath_common+0x5e/0x8a
> [  287.816151]  [<c1037a79>] ? warn_slowpath_fmt+0x26/0x2a
> [  287.816159]  [<c11f2db1>] ? dev_watchdog+0xf8/0x188
> [  287.816168]  [<c100665c>] ? check_events+0x8/0xc
> [  287.816175]  [<c1005ff4>] ? xen_force_evtchn_callback+0xc/0x10
> [  287.816183]  [<c100665c>] ? check_events+0x8/0xc
> [  287.816191]  [<c1006653>] ? xen_restore_fl_direct_end+0x0/0x1
> [  287.816200]  [<c124f4ea>] ? _spin_unlock_irqrestore+0xe/0x10
> [  287.816209]  [<c1042a74>] ? mod_timer+0x15f/0x168
> [  287.816217]  [<c11f2cb9>] ? dev_watchdog+0x0/0x188
> [  287.816224]  [<c104263c>] ? run_timer_softirq+0x195/0x217
> [  287.816232]  [<c103cb18>] ? __do_softirq+0xaa/0x151
> [  287.816240]  [<c103cbf0>] ? do_softirq+0x31/0x3c
> [  287.816247]  [<c103ccc6>] ? irq_exit+0x26/0x58
> [  287.816256]  [<c118f14b>] ? xen_evtchn_do_upcall+0x13f/0x151
> [  287.816264]  [<c1009087>] ? xen_do_upcall+0x7/0xc
> [  287.816272]  [<c10023a7>] ? hypercall_page+0x3a7/0x1001
> [  287.816280]  [<c1006075>] ? xen_safe_halt+0xf/0x1b
> [  287.816287]  [<c1004083>] ? xen_idle+0x23/0x30
> [  287.816295]  [<c100773c>] ? cpu_idle+0x46/0x62
> [  287.816303]  [<c136e7e0>] ? start_kernel+0x2c7/0x2ca
> [  287.816310]  [<c1370d33>] ? xen_start_kernel+0x5e6/0x5ee
> [  287.816315] ---[ end trace 00c16cce2318c073 ]---
> [  287.816320] ncr: Transmit timeout on ncr at 4294964250, latency 583
> [  291.816017] ncr: Transmit timeout on ncr at 4294965250, latency 1000
> ...
>
> [  412.673801] end_request: I/O error, dev xvda, sector 7608215
> [  412.673826] end_request: I/O error, dev xvda, sector 7608223
> [  412.673837] end_request: I/O error, dev xvda, sector 7608231
> [  480.052035] INFO: task kjournald:565 blocked for more than 120 seconds.
> [  480.052047] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  480.052057] kjournald     D 121f1575     0   565      2 0x00000000
> [  480.052069]  cf864d80 00000246 c136b7c0 121f1575 c1c25040 c13ce460
> c13ce460 cf864f38
> [  480.052091]  c26fb460 00000000 5737fa96 00000045 c136b7c0 121f1e2a
> c1052d6d 6c3e6a53
> [  480.052111]  00000000 121f1e2a 00000000 cf894690 cf864f38 cf864d80
> c26fb894 c26fb460
> [  480.052131] Call Trace:
> [  480.052143]  [<c1052d6d>] ? ktime_get_ts+0xd7/0xdf
> [  480.052154]  [<c124e1eb>] ? io_schedule+0x5f/0x98
> [  480.052162]  [<c10d49a2>] ? sync_buffer+0x30/0x33
> [  480.052169]  [<c124e645>] ? __wait_on_bit+0x33/0x58
> [  480.052176]  [<c10d4972>] ? sync_buffer+0x0/0x33
> [  480.052183]  [<c124e720>] ? out_of_line_wait_on_bit+0xb6/0xbe
> [  480.052190]  [<c10d4972>] ? sync_buffer+0x0/0x33
> [  480.052198]  [<c104b97f>] ? wake_bit_function+0x0/0x3c
> [  480.052205]  [<c10d493f>] ? __wait_on_buffer+0x16/0x18
> [  480.052223]  [<d084622d>] ? journal_commit_transaction+0x85a/0xd6d [jbd]
> [  480.052235]  [<c10323bf>] ? finish_task_switch+0x3d/0x9c
> [  480.052243]  [<c100665c>] ? check_events+0x8/0xc
> [  480.052250]  [<c1006653>] ? xen_restore_fl_direct_end+0x0/0x1
> [  480.052258]  [<c124f4ea>] ? _spin_unlock_irqrestore+0xe/0x10
> [  480.052267]  [<c1042c5a>] ? try_to_del_timer_sync+0x79/0x80
> [  480.052276]  [<d0848b6f>] ? kjournald+0xbb/0x1e5 [jbd]
> [  480.052283]  [<c104b952>] ? autoremove_wake_function+0x0/0x2d
> [  480.052292]  [<d0848ab4>] ? kjournald+0x0/0x1e5 [jbd]
> [  480.052299]  [<c104b71e>] ? kthread+0x61/0x66
> [  480.052305]  [<c104b6bd>] ? kthread+0x0/0x66
> [  480.052313]  [<c1009037>] ? kernel_thread_helper+0x7/0x10
> [  480.052320] INFO: task rsyslogd:1929 blocked for more than 120 seconds.
> [  480.052328] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [  480.052337] rsyslogd      D 00001000     0  1929      1 0x00000000
> [  480.052346]  c1c20d40 00000286 d0877650 00001000 c1c25200 c13ce460
> c13ce460 c1c20ef8
> [  480.052367]  c26fb460 00000000 c1cd5600 c1cf02c0 cf4693ec cf576518
> c1005ff4 c26f0a5c
> [  480.052387]  cf99fd98 cf99fdac c100665c c26f025c c1c20ef8 c26f0a5c
> cf99fd98 cf99fdac
> [  480.052407] Call Trace:
> [  480.052418]  [<d0877650>] ? __ext3_get_inode_loc+0xc7/0x275 [ext3]
> [  480.052426]  [<c1005ff4>] ? xen_force_evtchn_callback+0xc/0x10
> [  480.052434]  [<c100665c>] ? check_events+0x8/0xc
> [  480.052442]  [<d0845072>] ? do_get_write_access+0x1f8/0x3b5 [jbd]
> [  480.052450]  [<c104b97f>] ? wake_bit_function+0x0/0x3c
> [  480.052459]  [<d0845247>] ? journal_get_write_access+0x18/0x26 [jbd]
> [  480.052469]  [<d0882caf>] ? __ext3_journal_get_write_access+0x13/0x32 [ext3]
> [  480.052479]  [<d0877baf>] ? ext3_reserve_inode_write+0x2d/0x5d [ext3]
> [  480.052489]  [<d0877bf0>] ? ext3_mark_inode_dirty+0x11/0x27 [ext3]
> [  480.052499]  [<d0877d05>] ? ext3_dirty_inode+0x50/0x63 [ext3]
> [  480.052507]  [<c10cf541>] ? __mark_inode_dirty+0x20/0x10c
> [  480.052515]  [<c10c7bc5>] ? file_update_time+0xbe/0xdf
> [  480.052523]  [<c109107b>] ? __generic_file_aio_write+0x2f7/0x452
> [  480.052531]  [<c1006653>] ? xen_restore_fl_direct_end+0x0/0x1
> [  480.052539]  [<c124f4ea>] ? _spin_unlock_irqrestore+0xe/0x10
> [  480.052546]  [<c104e617>] ? hrtimer_try_to_cancel+0x6e/0x83
> [  480.052554]  [<c104e625>] ? hrtimer_try_to_cancel+0x7c/0x83
> [  480.052561]  [<c1091227>] ? generic_file_aio_write+0x51/0x93
> [  480.052571]  [<c10b8680>] ? do_sync_write+0xc0/0x107
> [  480.052578]  [<c104b952>] ? autoremove_wake_function+0x0/0x2d
> [  480.052586]  [<c102dd9e>] ? pick_next_task_fair+0x95/0x9c
> [  480.052593]  [<c124e105>] ? schedule+0x5ea/0x671
> [  480.052601]  [<c1107ae8>] ? security_file_permission+0xc/0xd
> [  480.052609]  [<c10b85c0>] ? do_sync_write+0x0/0x107
> [  480.052616]  [<c10b900b>] ? vfs_write+0x84/0x12f
> [  480.052623]  [<c10b914e>] ? sys_write+0x3c/0x63
> [  480.052630]  [<c10084b4>] ? sysenter_do_call+0x12/0x28



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

