
[Xen-devel] Re: dom0 pvops crash when used as domU kernel



On 01/26/2010 05:50 AM, Ian Jackson wrote:
> Is the dom0 pvops kernel supposed to be usable for a PV guest?

Yep.

> If so, it doesn't work on amd64.  See attached console output from the
> guest.
>
> The host and guest are both xen-unstable x86_64 with the dom0 pvops
> kernel.  The host and guest operating systems are Debian lenny amd64.
> The guest is installed with Debian's xen-create-image, which uses
> debootstrap on the host to construct the guest's filesystem in an LV
> and set it up ready to be booted PV.

The backtrace is strange; sshd doing statfs shouldn't be hitting any Xen-specific paths. That suggests something else has gone wrong earlier.
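
For reference, dcache.h:336 in a 2.6.31 tree should be the BUG_ON in dget(),
and the only dget() in __follow_mount is the one it does on a mountpoint's
mnt_root while crossing a mount.  Roughly (quoting 2.6.31 from memory, so
treat this as a sketch rather than the exact tree Ian built):

    /* include/linux/dcache.h, ~2.6.31: dget() must never see a dead dentry */
    static inline struct dentry *dget(struct dentry *dentry)
    {
            if (dentry) {
                    BUG_ON(!atomic_read(&dentry->d_count));  /* <-- the trap */
                    atomic_inc(&dentry->d_count);
            }
            return dentry;
    }

    /* fs/namei.c, ~2.6.31: cross a mountpoint during path lookup */
    static int __follow_mount(struct path *path)
    {
            int res = 0;
            while (d_mountpoint(path->dentry)) {
                    struct vfsmount *mounted = lookup_mnt(path);
                    if (!mounted)
                            break;
                    dput(path->dentry);
                    if (res)
                            mntput(path->mnt);
                    path->mnt = mounted;
                    /* BUG fires here: mnt_root's d_count is already 0 */
                    path->dentry = dget(mounted->mnt_root);
                    res = 1;
            }
            return res;
    }

If that reading is right, the oops means some mounted filesystem's root dentry
had a zero refcount by the time sshd walked across it, i.e. dentry refcounting
was corrupted earlier, which fits the "something else has gone wrong" theory.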

> The problem occurs during or shortly after save/restore.  My test
> system did this:

Hm, save/restore got pretty bitrotted for a while. IanC has done a mass of work on it lately, but it's possible there are still some issues. Are you actually using an AMD processor? They tend to be undertested.

    J

>   LOG executing ssh ... root@xxxxxxxxxxxxx xm list
>   LOG executing ssh ... root@xxxxxxxxxxxxx xm save one.guest.osstest image
>   LOG ping 10.80.249.241 down
>   LOG executing ssh ... root@xxxxxxxxxxxxx xm restore image
>   LOG ping 10.80.249.241 up
>   LOG await tcp one.guest.osstest 22: waiting 5s...
>   LOG await tcp one.guest.osstest 22: ok. (0s)
>   LOG executing ssh ... root@xxxxxxxxxxxxx echo guest one.guest.osstest: restored
>   ssh_exchange_identification: Connection closed by remote host
>   status 65280 at Osstest.pm line 190.
>
> Ian.

[   19.662648] Grant tables using version 2 layout.
[   23.870414] ------------[ cut here ]------------
[   23.870439] kernel BUG at /home/osstest/build.1142.build-amd64/xen-unstable/linux-2.6-pvops.git/include/linux/dcache.h:336!
[   23.870458] invalid opcode: 0000 [#1] SMP
[   23.870474] last sysfs file: /sys/kernel/uevent_seqnum
[   23.870485] CPU 0
[   23.870497] Modules linked in: [last unloaded: scsi_wait_scan]
[   23.870518] Pid: 1438, comm: sshd Not tainted 2.6.31.6 #1
[   23.870529] RIP: e030:[<ffffffff81106c70>]  [<ffffffff81106c70>] __follow_mount+0x4f/0x76
[   23.870555] RSP: e02b:ffff88001ed53c18  EFLAGS: 00010246
[   23.870566] RAX: 0000000000000000 RBX: ffff88001e84ca00 RCX: 0000000000000002
[   23.870580] RDX: ffff88001f4826c0 RSI: 0000000000000001 RDI: ffff88001f5ad840
[   23.870594] RBP: ffff88001ed53c38 R08: 0000000000000000 R09: 0000000000000000
[   23.870607] R10: 0000000000000007 R11: 0000000000100000 R12: 0000000000000000
[   23.870620] R13: ffff88001ed53ca8 R14: ffff88001ed53d68 R15: 0000000000000001
[   23.870638] FS:  00007f0ee1912790(0000) GS:ffffc90000000000(0000) knlGS:0000000000000000
[   23.870654] CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
[   23.870666] CR2: 00007f0edf52fca0 CR3: 000000001f146000 CR4: 0000000000002660
[   23.870680] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   23.870695] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   23.870709] Process sshd (pid: 1438, threadinfo ffff88001ed52000, task ffff88001edc8f20)
[   23.870723] Stack:
[   23.870731]  ffff88001ed53d68 ffff88001f5ad840 ffff88001ed53d68 ffff88001ed53ca8
[   23.870752]<0>  ffff88001ed53c88 ffffffff81106de4 ffff88001ed53ca8 ffff88001ed53c98
[   23.870778]<0>  ffff88001e955200 ffff88001ed541ed ffff88001ed53d68 ffff88001ed53ca8
[   23.870807] Call Trace:
[   23.870821]  [<ffffffff81106de4>] do_lookup+0x61/0x15d
[   23.870849]  [<ffffffff811079fd>] __link_path_walk+0x61a/0x770
[   23.870866]  [<ffffffff81107d9a>] path_walk+0x69/0xd4
[   23.870881]  [<ffffffff811081e5>] do_path_lookup+0x2a/0x86
[   23.870895]  [<ffffffff8110a72f>] user_path_at+0x52/0x8c
[   23.870910]  [<ffffffff810cf02e>] ? unlock_page+0x22/0x26
[   23.870925]  [<ffffffff811001e7>] ? file_free+0x31/0x35
[   23.870939]  [<ffffffff810fe9e4>] sys_statfs+0x29/0x95
[   23.870954]  [<ffffffff8102f0f1>] ? xen_force_evtchn_callback+0xd/0xf
[   23.870974]  [<ffffffff8102f822>] ? check_events+0x12/0x20
[   23.870989]  [<ffffffff8102f80f>] ? xen_restore_fl_direct_end+0x0/0x1
[   23.871007]  [<ffffffff81576dd9>] ? _spin_unlock_irqrestore+0x34/0x36
[   23.871013]  [<ffffffff811ffc5a>] ? __up_read+0x92/0x9c
[   23.871013]  [<ffffffff81084c5b>] ? up_read+0x9/0xb
[   23.871013]  [<ffffffff8157959b>] ? do_page_fault+0x2d5/0x307
[   23.871013]  [<ffffffff81577245>] ? page_fault+0x25/0x30
[   23.871013]  [<ffffffff81033cc2>] system_call_fastpath+0x16/0x1b
[   23.871013] Code: 45 49 8b 7d 08 e8 78 8d 00 00 45 85 e4 74 09 49 8b 7d 00 e8 97 fe ff ff 49 89 5d 00 48 8b 53 20 48 85 d2 74 0d 8b 02 85 c0 75 04 <0f> 0b eb fe 3e ff 02 49 89 55 08 41 bc 01 00 00 00 49 8b 45 08
[   23.871013] RIP  [<ffffffff81106c70>] __follow_mount+0x4f/0x76
[   23.871013]  RSP <ffff88001ed53c18>
[   23.871013] ---[ end trace d3d7603d6373c307 ]---
[   23.893521] ------------[ cut here ]------------
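
Hand-decoding the Code: bytes around the trapping instruction supports that
reading (my disassembly, done by eye, so double-check it):

    8b 02       mov    (%rdx),%eax    # atomic_read(&dentry->d_count); d_count
                                      # is the first field, %rdx the dentry
    85 c0       test   %eax,%eax      # is the refcount zero?
    75 04       jne    +4             # nonzero: skip the trap, do the incl
    0f 0b       ud2                   # BUG_ON() trap -- "invalid opcode: 0000"
    eb fe       jmp    .              # unreachable spin after BUG

Note RAX is 0 in the register dump above: the d_count that had just been
loaded really was zero.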



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

