Daniel,
I played around a bit with a previous Xen, 4.0.2, and I was not able to
reproduce it. So:
1. what exact xen are you using?
2. did you change xen at all?
3. bitness of xen, dom0, and guest?
Unfortunately, I don't have a test env for unstable right now, so I just
merged my patch into unstable and uploaded it to ext/debuggers.hg, so it's
possible I messed up.
-Mukesh
On Mon, 14 Feb 2011 11:04:03 -0800
Mukesh Rathor <mukesh.rathor@xxxxxxxxxx> wrote:
> Hey Daniel,
>
> Let me look around a bit; I'll let you know. BTW, what c/s are you
> using? Is it the latest tree from .../ext/debuggers.hg?
>
> thanks
> mukesh
>
>
> On Mon, 14 Feb 2011 15:51:35 +0530
> Daniel J Mathew <danieljmathew@xxxxxxxxx> wrote:
>
> > Some more info on this. These kdb commands were executed after
> > another crash at the same breakpoint, and they show the kdb stack.
> >
> > [1]xkdb> go
> > cmd not available in fatal/crashed state....
> > [1]xkdb> kdbdbg
> > kdbdbg set to:1
> > [1]xkdb> kdbf
> > trapimm:ccpu:1 reas:3
> > ccpu:1 trapdbg reas:3
> > (XEN) ----[ Xen-4.1-unstable x86_64 debug=n Not tainted ]----
> > (XEN) CPU: 1
> > (XEN) RIP: e008:[<ffff82c4801fe9bf>] kdb_trap_immed+0x3f/0x80
> > (XEN) RFLAGS: 0000000000000202 CONTEXT: hypervisor
> > (XEN) rax: 0000000000000001   rbx: 0000000000000003   rcx: 0000000000000004
> > (XEN) rdx: 0000000000000000   rsi: 0000000000000082   rdi: ffff82c480249f4c
> > (XEN) rbp: 0000000000000092   rsp: ffff83007c4cfcc8   r8:  0000000000000000
> > (XEN) r9:  0000000000000001   r10: ffff83007c4cfbc8   r11: ffff82c4801371d0
> > (XEN) r12: ffff82c4802d65c0   r13: 0000000000000001   r14: ffff83007c4cfe28
> > (XEN) r15: ffff83007c4cfcf8   cr0: 000000008005003b   cr4: 00000000000426f0
> > (XEN) cr3: 000000005805c000   cr2: ffff82c49f7e7170
> > (XEN) ds: 002b   es: 002b   fs: 0000   gs: 0000   ss: e010   cs: e008
> > (XEN) Xen stack trace from rsp=ffff83007c4cfcc8:
> > (XEN)    0000000000000001 ffff82c4802d65c4 0000000000000001 ffff82c48020114c
> > (XEN)    ffff83007c4cfcf8 ffff82c48020640d ffff82c4802d65c0 00000000000000c6
> > (XEN)    ffff82c4801371d0 ffff83007c4cf9b8 0000000000000018 000000000000001c
> > (XEN)    000000008851cf8c 000000008838cb0e 000000000000016e 0000000000000000
> > (XEN)    000000000032bc48 000000f100000000 ffff82c48014b45c 000000000000e008
> > (XEN)    0000000000000206 ffff83007c4cfd80 0000000000000202 ffff82c4802d5d00
> > (XEN)    0000000000000001 ffff83007c4cfe28 0000000000000001 0000000000000002
> > (XEN)    000000674badefea ffff82c4801ff320 ffff82c4801bd73d 0000000000000000
> > (XEN)    ffff83007c4cfe28 ffff82c49f7e7170 ffff830058080000 ffff82c4801ff441
> > (XEN)    ffff83007c4cfe28 ffff82c48017c007 ffff8300107e8000 0000000000000001
> > (XEN)    ffff8300107e8000 ffff83007c4d6000 ffff83007ab60080 ffff82c4801f4628
> > (XEN)    000000674badefea ffff83007ab60080 ffff83007c4d6000 ffff8300107e8000
> > (XEN)    0000000000000001 ffff8300107e8000 ffff83007ab62558 ffff83007ab60180
> > (XEN)    ffff830058080448 ffff83007ab62530 00000000fa889380 ffff83007c4cff90
> > (XEN)    0000000000000000 0000000000000001 ffff8300107e97f0 0000000e00000002
> > (XEN)    ffff82c4801f473e 000000000000e008 0000000000010002 ffff83007c4cfed8
> > (XEN)    000000000000e010 000000fc00000000 ffff83007c4cff18 0000000000000000
> > (XEN)    ffff82c4801bbf91 000000000000e008 0000000000000286 ffff83007c4cff10
> > (XEN)    000000000000e010 ffff82c4801bbd9b 0000000000000000 0000000000000000
> > (XEN)    0000000000000000 0000000000000000 0000000000000000 0000000000000000
> > (XEN) Xen call trace:
> > (XEN)    [<ffff82c4801fe9bf>] kdb_trap_immed+0x3f/0x80
> > (XEN)    [<ffff82c48020114c>] kdb_cmdf_kdbf+0x1c/0x50
> > (XEN)    [<ffff82c48020640d>] kdb_do_cmds+0x15d/0x230
> > (XEN)    [<ffff82c4801371d0>] ns16550_putc+0x0/0x20
> > (XEN)    [<ffff82c48014b45c>] __udelay+0x2c/0x40
> > (XEN)    [<ffff82c4801ff320>] kdbmain_fatal+0xd0/0x1e0
> > (XEN)    [<ffff82c4801bd73d>] vmx_do_resume+0x12d/0x1e0
> > (XEN)    [<ffff82c4801ff441>] kdb_trap_fatal+0x11/0x20
> > (XEN)    [<ffff82c48017c007>] do_page_fault+0x437/0x470
> > (XEN)    [<ffff82c4801f4628>] handle_exception_saved+0x30/0x6e
> > (XEN)    [<ffff82c4801f473e>] int3+0x1e/0x40
> > (XEN)    [<ffff82c4801bbf91>] vmx_intr_assist+0x1/0x250
> > (XEN)    [<ffff82c4801bbd9b>] vmx_asm_do_vmentry+0x5/0xea
> >
> > Please help me out with this issue. Is kdb actively supported and in
> > use now?
> >
> >
> > On Mon, Feb 14, 2011 at 11:05 AM, Daniel J Mathew
> > <danieljmathew@xxxxxxxxx>wrote:
> >
> > > Hi,
> > >
> > > I forgot to mention that the following lines are printed to
> > > console after I hit 'go' and start the guest, before I get the
> > > error:
> > >
> > > [421093.000014] Clocksource tsc unstable (delta = 30119955364 ns)
> > > (XEN) tmem: all pools frozen for all domains
> > > (XEN) tmem: all pools thawed for all domains
> > > (XEN) tmem: all pools frozen for all domains
> > > (XEN) tmem: all pools thawed for all domains
> > >
> > > Does this have anything to do with the error?
> > >
> > >
> > > Daniel.
> > > --
> > > Daniel J Mathew
> > > Indian Institute of Technology Delhi
> > > <http://www.cse.iitd.ernet.in/%7Emathew>
> > >
> > > On Mon, Feb 14, 2011 at 10:57 AM, Daniel J Mathew
> > > <danieljmathew@xxxxxxxxx> wrote:
> > >
> > >> Hi,
> > >>
> > >> I am trying to debug some HVM code I wrote for recording and
> > >> replaying VM execution (on xen-unstable). For this, I set up kdb
> > >> and a serial connection to another machine. However, most of the
> > >> time a fatal error occurs when the breakpoint is hit.
> > >>
> > >> Here's what I'm doing:
> > >> bp vmx_intr_assist
> > >> go
> > >> [Started HVM guest from the other machine. The guest OS is a
> > >> dummy OS called Pintos.]
> > >> *** kdb (Fatal Error on cpu:1 vec:14 Page Fault):
> > >> ffff82c4801f473e: int3+1e       lock bts %rax, 0xe17b9(%rip)
> > >>
> > >> Another example (with the call stack):
> > >> bp hvmemul_read_io
> > >> [1]xkdb> go
> > >> [Started HVM guest from the other machine.]
> > >> (XEN) read_ins_ring_dom0: cleared ring
> > >> (XEN) HVM1: HVM Loader
> > >> (XEN) setmode: Initialized ring
> > >> (XEN) HVM1: Detected Xen v4.1-unstable
> > >> (XEN) HVM1: CPU speed is 3325 MHz
> > >> (XEN) HVM1: Xenbus rings @0xfeffc000, event channel 2
> > >> (XEN) irq.c:243: Dom1 PCI link 0 changed 0 -> 5
> > >> (XEN) HVM1: PCI-ISA link 0 routed to IRQ5
> > >> (XEN) irq.c:243: Dom1 PCI link 1 changed 0 -> 10
> > >> (XEN) HVM1: PCI-ISA link 1 routed to IRQ10
> > >> (XEN) irq.c:243: Dom1 PCI link 2 changed 0 -> 11
> > >> (XEN) HVM1: PCI-ISA link 2 routed to IRQ11
> > >> (XEN) irq.c:243: Dom1 PCI link 3 changed 0 -> 5
> > >> (XEN) HVM1: PCI-ISA link 3 routed to IRQ5
> > >> *** kdb (Fatal Error on cpu:1 vec:14 Page Fault):
> > >> ffff82c4801f473e: int3+1e       lock bts %rax, 0xe17b9(%rip)
> > >>
> > >> [1]xkdb> f
> > >> (XEN) Xen call trace:
> > >> (XEN) [<ffff82c4801f473e>] int3+0x1e/0x40
> > >> (XEN) [<ffff82c4801a2171>] hvmemul_read_io+0x1/0x1f0
> > >> (XEN) [<ffff82c480188ec5>] x86_emulate+0xb8e5/0x12bd0
> > >> (XEN) [<ffff82c4801d9922>] sh_gva_to_gfn__guest_2+0x112/0x180
> > >> (XEN) [<ffff82c4801a8000>] __hvm_copy+0x240/0x3b0
> > >> (XEN) [<ffff82c480137900>] __serial_putc+0x50/0x190
> > >> (XEN) [<ffff82c480149619>] smp_apic_timer_interrupt+0x49/0x80
> > >> (XEN) [<ffff82c48011830f>] csched_vcpu_wake+0x12f/0x2c0
> > >> (XEN) [<ffff82c48014e5ed>] vcpu_kick+0x1d/0x80
> > >> (XEN) [<ffff82c480106065>] evtchn_set_pending+0x145/0x1d0
> > >> (XEN) [<ffff82c4801d9922>] sh_gva_to_gfn__guest_2+0x112/0x180
> > >> (XEN)    [<ffff82c480106175>] notify_via_xen_event_channel+0x85/0xa0
> > >> (XEN)    [<ffff82c4801a6a10>] hvm_send_assist_req+0xa0/0x120
> > >> (XEN)    [<ffff82c4801a80b6>] __hvm_copy+0x2f6/0x3b0
> > >> (XEN)    [<ffff82c4801a16d9>] hvm_emulate_one+0xc9/0x1b0
> > >> (XEN)    [<ffff82c4801ac165>] handle_mmio+0x285/0x320
> > >> (XEN)    [<ffff82c480130001>] unshare_xenoprof_page_with_guest+0xc1/0x140
> > >> (XEN)    [<ffff82c48011d2b3>] vcpu_runstate_get+0x63/0xd0
> > >> (XEN)    [<ffff82c48011d340>] get_cpu_idle_time+0x20/0x30
> > >> (XEN)    [<ffff82c4801ac2c7>] hvm_io_assist+0xc7/0xd0
> > >> (XEN)    [<ffff82c4801a7075>] hvm_do_resume+0x185/0x1b0
> > >> (XEN)    [<ffff82c4801a6f21>] hvm_do_resume+0x31/0x1b0
> > >> (XEN)    [<ffff82c4801bd73d>] vmx_do_resume+0x12d/0x1e0
> > >> (XEN)    [<ffff82c48014f577>] context_switch+0x147/0xe40
> > >> (XEN)    [<ffff82c48014f577>] context_switch+0x147/0xe40
> > >> (XEN)    [<ffff82c480174668>] __update_vcpu_system_time+0x258/0x2e0
> > >> (XEN)    [<ffff82c48011e480>] schedule+0x230/0x570
> > >> (XEN)    [<ffff82c48014907a>] event_check_interrupt+0x2a/0x30
> > >> (XEN)    [<ffff82c48011f8af>] __do_softirq+0x6f/0xb0
> > >> (XEN)    [<ffff82c48015255d>] idle_loop+0x2d/0x60
> > >>
> > >>
> > >>
> > >> The line where the error occurs is in arch/x86/x86_64/entry.S. I
> > >> couldn't find a way to get Xen running again after this error
> > >> occurs, so I usually end up doing a forced reboot.
> > >> Can someone please shed some light on what's happening? Is there
> > >> anything I can do differently to get around this?
> > >>
> > >> Thanks,
> > >> Daniel.
> > >> --
> > >> Daniel J Mathew
> > >> Indian Institute of Technology Delhi
> > >>
> > >>
> > >
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel