WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

To: "Muriel" <mucawhite@xxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Launching a PV Centos6 DomU crashing at "kernel BUG at fs/sysfs/group.c:65!; Kernel panic - not syncing: Fatal exception"
From: pgnetwork@xxxxxxxxxxx
Date: Tue, 20 Sep 2011 07:25:13 -0700
Cc:
Delivery-date: Tue, 20 Sep 2011 20:12:06 -0700
In-reply-to: <4E786664.8010005@xxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <1316469057.1683.140258143625249@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <4E786664.8010005@xxxxxxxxx>
Reply-to: pgnetwork@xxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Hi Muriel,

On Tuesday, September 20, 2011 12:09 PM, "Muriel" <mucawhite@xxxxxxxxx>
wrote:
> My installation of SL6 is working with the kernel 32.131.. have you tried 
> this?

I had just downloaded Scientific Linux 6.1 last night, thinking it might
make a difference.

Changing my config file to

 name = 'SciLinux61'
 builder = 'linux'
 kernel = '/stor/vmlinuz'
 ramdisk = '/stor/initrd.img'
 disk = [
 'file:/stor/SL-61-x86_64-2011-07-27-Install-DVD.iso,hdc:cdrom,r',
 'phy:/dev/VG0/scilinux,xvda,w',
 ...

Launching the guest still results in the same crash.

Greg


CONSOLE OUTPUT AT CRASH --

xm create -c /stor/centos6_init.cfg

Using config file "/stor/centos6_init.cfg".

Started domain SciLinux61 (id=1)
Initializing cgroup subsys cpuset
Initializing cgroup subsys cpu
Linux version 2.6.32-131.0.15.el6.x86_64 (mockbuild@xxxxxxxxxxxx) (gcc
version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Sat May 21
10:27:57 CDT 2011
...
Freeing unused kernel memory: 1796k freed
------------[ cut here ]------------
kernel BUG at fs/sysfs/group.c:65!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/devices/virtual/block/loop6/removable
CPU 0
Modules linked in: xen_blkfront(+) iscsi_ibft iscsi_boot_sysfs pcspkr
mlx4_ib mlx4_en mlx4_core ib_ipoib ib_cm ib_sa ib_mad ib_core ipv6
iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi squashfs cramfs
Pid: 18, comm: xenwatch Tainted: G        W  ----------------  
2.6.32-131.0.15.el6.x86_64 #1
RIP: e030:[<ffffffff811e8457>]  [<ffffffff811e8457>]
internal_create_group+0xf7/0x1a0
RSP: e02b:ffff88003ebb1c60  EFLAGS: 00010246
RAX: 00000000ffffffef RBX: ffff88000555c000 RCX: ffff88003eac93c0
RDX: ffffffff81a5b480 RSI: 0000000000000000 RDI: ffff8800056ae870
RBP: ffff88003ebb1cb0 R08: 0000000000000004 R09: 0000000000000000
R10: 000000000000000f R11: 000000000000000f R12: ffff88000555c000
R13: ffff8800056ae870 R14: ffffffff81a5b480 R15: 0000000000000000
FS:  00007f994829b820(0000) GS:ffff880006066000(0000)
knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f85753136e8 CR3: 0000000005a5e000 CR4: 0000000000000660
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process xenwatch (pid: 18, threadinfo ffff88003ebb0000, task
ffff88003eb7f540)
Stack:
 ffff88003ebb1cc0 0000000081336e91 000032333a323032 ffff88003ebb1cd0
<0> ffff88003ebb1c90 ffff88000555c000 ffff88000555c000 ffff8800056ae860
<0> ffff8800056ae800 0000000000000000 ffff88003ebb1cc0 ffffffff811e8533
Call Trace:
 [<ffffffff811e8533>] sysfs_create_group+0x13/0x20
 [<ffffffff810f7724>] blk_trace_init_sysfs+0x14/0x20
 [<ffffffff8124bb60>] blk_register_queue+0x40/0x100
 [<ffffffff8125116e>] add_disk+0xae/0x160
 [<ffffffffa00c93e4>] backend_changed+0x374/0x700 [xen_blkfront]
 [<ffffffff81007b52>] ? check_events+0x12/0x20
 [<ffffffff812f7a4a>] otherend_changed+0xca/0x180
 [<ffffffff812f613a>] xenwatch_thread+0xaa/0x170
 [<ffffffff8108e100>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff814dcf6c>] ? _spin_unlock_irqrestore+0x1c/0x20
 [<ffffffff812f6090>] ? xenwatch_thread+0x0/0x170
 [<ffffffff8108dd96>] kthread+0x96/0xa0
 [<ffffffff8100c1ca>] child_rip+0xa/0x20
 [<ffffffff8100b393>] ? int_ret_from_sys_call+0x7/0x1b
 [<ffffffff8100bb1d>] ? retint_restore_args+0x5/0x6
 [<ffffffff8100c1c0>] ? child_rip+0x0/0x20
Code: 8b 04 24 48 85 c0 74 27 41 83 c7 01 8b 55 bc 85 d2 74 b1 48 8b 30
48 89 df e8 76 be ff ff eb a4 48 83 7f 30 00 0f 85 49 ff ff ff <0f> 0b
eb fe 48 8b 5d c8 31 d2 48 85 db 74 18 f0 ff 0b 0f 94 c0
RIP  [<ffffffff811e8457>] internal_create_group+0xf7/0x1a0
 RSP <ffff88003ebb1c60>
---[ end trace f46928c89d14ef9b ]---
Kernel panic - not syncing: Fatal exception
Pid: 18, comm: xenwatch Tainted: G      D W  ----------------  
2.6.32-131.0.15.el6.x86_64 #1
Call Trace:
 [<ffffffff814da06e>] ? panic+0x78/0x143
 [<ffffffff814dcf6c>] ? _spin_unlock_irqrestore+0x1c/0x20
 [<ffffffff814de0b4>] ? oops_end+0xe4/0x100
 [<ffffffff8100f2eb>] ? die+0x5b/0x90
 [<ffffffff814dd984>] ? do_trap+0xc4/0x160
 [<ffffffff8100ceb5>] ? do_invalid_op+0x95/0xb0
 [<ffffffff811e8457>] ? internal_create_group+0xf7/0x1a0
 [<ffffffff8118d5aa>] ? ilookup5+0x4a/0x60
 [<ffffffff8100731d>] ? xen_force_evtchn_callback+0xd/0x10
 [<ffffffff81007b52>] ? check_events+0x12/0x20
 [<ffffffff8100bf5b>] ? invalid_op+0x1b/0x20
 [<ffffffff811e8457>] ? internal_create_group+0xf7/0x1a0
 [<ffffffff811e8533>] ? sysfs_create_group+0x13/0x20
 [<ffffffff810f7724>] ? blk_trace_init_sysfs+0x14/0x20
 [<ffffffff8124bb60>] ? blk_register_queue+0x40/0x100
 [<ffffffff8125116e>] ? add_disk+0xae/0x160
 [<ffffffffa00c93e4>] ? backend_changed+0x374/0x700 [xen_blkfront]
 [<ffffffff81007b52>] ? check_events+0x12/0x20
 [<ffffffff812f7a4a>] ? otherend_changed+0xca/0x180
 [<ffffffff812f613a>] ? xenwatch_thread+0xaa/0x170
 [<ffffffff8108e100>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff814dcf6c>] ? _spin_unlock_irqrestore+0x1c/0x20
 [<ffffffff812f6090>] ? xenwatch_thread+0x0/0x170
 [<ffffffff8108dd96>] ? kthread+0x96/0xa0
 [<ffffffff8100c1ca>] ? child_rip+0xa/0x20
 [<ffffffff8100b393>] ? int_ret_from_sys_call+0x7/0x1b
 [<ffffffff8100bb1d>] ? retint_restore_args+0x5/0x6
 [<ffffffff8100c1c0>] ? child_rip+0x0/0x20
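For anyone triaging a similar report, the two lines worth pulling out of a capture like the one above are the BUG location (the file:line in the guest kernel source) and the symbol+offset of the faulting instruction. A minimal sketch, assuming the console output has been saved to a hypothetical file `console.log` (a small sample is recreated inline so the commands are self-contained):

```shell
# Hypothetical: console.log is a saved copy of the guest console output above.
# Recreate a minimal two-line sample here so the commands are self-contained.
cat > console.log <<'EOF'
kernel BUG at fs/sysfs/group.c:65!
RIP: e030:[<ffffffff811e8457>]  [<ffffffff811e8457>] internal_create_group+0xf7/0x1a0
EOF

# Pull out the BUG location (file:line inside the guest kernel source)...
grep -o 'kernel BUG at [^!]*' console.log

# ...and the symbol+offset/length of the faulting instruction from the RIP line.
grep -o '[A-Za-z_][A-Za-z0-9_]*+0x[0-9a-f]*/0x[0-9a-f]*' console.log | head -n1
```

Against the trace above, the two greps yield `kernel BUG at fs/sysfs/group.c:65` and `internal_create_group+0xf7/0x1a0`, which is usually enough to search the kernel bug trackers for a matching report.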

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
