
To: xen-bugs@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-bugs] [Bug 390] Unable to handle kernel paging request at virtual address c2b02000
From: bugzilla-daemon@xxxxxxxxxxxxxxxxxxx
Date: Wed, 09 Nov 2005 14:31:10 +0000
Delivery-date: Wed, 09 Nov 2005 14:31:16 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-bugs-request@lists.xensource.com?subject=help>
List-id: Xen Bugzilla <xen-bugs.lists.xensource.com>
List-post: <mailto:xen-bugs@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-bugs>, <mailto:xen-bugs-request@lists.xensource.com?subject=unsubscribe>
Reply-to: bugs@xxxxxxxxxxxxxxxxxx
Sender: xen-bugs-bounces@xxxxxxxxxxxxxxxxxxx
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=390


dbarrera@xxxxxxxxxx changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
           Severity|normal                      |blocker




------- Additional Comments From dbarrera@xxxxxxxxxx  2005-11-09 14:31 -------
Using changeset 7701, the same issue occurs:

x335b-vm1:~ # dmesg
Linux version 2.6.12.6-xenU (root@x335b) (gcc version 3.3.3 (SuSE Linux)) #1 SMP
Wed Nov 9 07:43:01 CST 2005
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 0000000010000000 (usable)
0MB HIGHMEM available.
264MB LOWMEM available.
On node 0 totalpages: 67584
  DMA zone: 67584 pages, LIFO batch:31
  Normal zone: 0 pages, LIFO batch:1
  HighMem zone: 0 pages, LIFO batch:1
IRQ lockup detection disabled
Built 1 zonelists
Kernel command line:  root=/dev/sdb6 ro
Initializing CPU#0
PID hash table entries: 2048 (order: 11, 32768 bytes)
Xen reported: 3189.368 MHz processor.
Dentry cache hash table entries: 65536 (order: 6, 262144 bytes)
Inode-cache hash table entries: 32768 (order: 5, 131072 bytes)
vmalloc area: d1000000-f53fe000, maxmem 2d800000
Memory: 255488k/270336k available (1823k kernel code, 6340k reserved, 556k data,
140k init, 0k highmem)
Checking if this processor honours the WP bit even in supervisor mode... Ok.
Calibrating delay loop... 6370.09 BogoMIPS (lpj=31850496)
Mount-cache hash table entries: 512
CPU: After generic identify, caps: bfebfbff 00000000 00000000 00000000 00004400
00000000 00000000
CPU: After vendor identify, caps: bfebfbff 00000000 00000000 00000000 00004400
00000000 00000000
CPU: Trace cache: 12K uops, L1 D cache: 8K
CPU: L2 cache: 512K
CPU: L3 cache: 1024K
CPU: After all inits, caps: bfebc3f1 00000000 00000000 00000080 00004400
00000000 00000000
Enabling fast FPU save and restore... done.
Enabling unmasked SIMD FPU exception support... done.
Checking 'hlt' instruction... disabled
Brought up 1 CPUs
CPU0 attaching sched-domain:
 domain 0: span 01
  groups: 01
NET: Registered protocol family 16
xenbus_probe_init
CPU0 attaching sched-domain:
 domain 0: does not load-balance
Enabling SMP...
CPU0 attaching sched-domain:
 domain 0: span 03
  groups: 01 02
CPU1 attaching sched-domain:
 domain 0: span 03
  groups: 02 01
Initializing CPU#1
CPU0 attaching sched-domain:
 domain 0: does not load-balance
CPU1 attaching sched-domain:
 domain 0: does not load-balance
CPU0 attaching sched-domain:
 domain 0: span 07
Initializing CPU#2
  groups: 01 02 04
CPU1 attaching sched-domain:
 domain 0: span 07
  groups: 02 04 01
CPU2 attaching sched-domain:
 domain 0: span 07
  groups: 04 01 02
CPU0 attaching sched-domain:
 domain 0: does not load-balance
CPU1 attaching sched-domain:
 domain 0: does not load-balance
CPU2 attaching sched-domain:
 domain 0: does not load-balance
CPU0 attaching sched-domain:
 <6>Initializing CPU#3
domain 0: span 0f
  groups: 01 02 04 08
CPU1 attaching sched-domain:
 domain 0: span 0f
  groups: 02 04 08 01
CPU2 attaching sched-domain:
 domain 0: span 0f
  groups: 04 08 01 02
CPU3 attaching sched-domain:
 domain 0: span 0f
  groups: 08 01 02 04
Brought up 4 CPUs
xen_mem: Initialising balloon driver.
Grant table initialized
Initializing Cryptographic API
io scheduler noop registered
io scheduler anticipatory registered
io scheduler deadline registered
io scheduler cfq registered
RAMDISK driver initialized: 16 RAM disks of 4096K size 1024 blocksize
Xen virtual console successfully installed as tty1
Event-channel device installed.
Registering block device major 8
xen_net: Initialising virtual ethernet driver.
NET: Registered protocol family 2
IP: routing cache hash table of 1024 buckets, 16Kbytes
TCP established hash table entries: 16384 (order: 6, 262144 bytes)
TCP bind hash table entries: 16384 (order: 5, 196608 bytes)
TCP: Hash tables configured (established 16384 bind 16384)
NET: Registered protocol family 1
NET: Registered protocol family 17
EXT3-fs: INFO: recovery required on readonly filesystem.
EXT3-fs: write access will be enabled during recovery.
EXT3-fs: sdb6: orphan cleanup on readonly fs
kjournald starting.  Commit interval 5 seconds
ext3_orphan_cleanup: deleting unreferenced inode 229820
EXT3-fs: sdb6: 1 orphan inode deleted
EXT3-fs: recovery complete.
EXT3-fs: mounted filesystem with ordered data mode.
VFS: Mounted root (ext3 filesystem) readonly.
Freeing unused kernel memory: 140k freed
EXT3 FS on sdb6, internal journal
Adding 2104444k swap on /dev/sdb5.  Priority:42 extents:1
nfs warning: mount version older than kernel
Unable to handle kernel paging request at virtual address c1f39010
 printing eip:
c014da29
0e273000 -> *pde = 00000000:d78d6001
0da3b000 -> *pme = 00000001:00ed7067
Unable to handle kernel paging request at virtual address 155559c8
 printing eip:
c01169b8
0e273000 -> *pde = 00000000:d5b62001
0f7af000 -> *pme = 00000000:00000000
Oops: 0000 [#1]
PREEMPT SMP
Modules linked in:
CPU:    0
EIP:    0061:[<c01169b8>]    Not tainted VLI
EFLAGS: 00010206   (2.6.12.6-xenU)
EIP is at dump_fault_path+0x108/0x140
eax: 00000000   ebx: 00e9ff60   ecx: 55555000   edx: 00000000
esi: 155559c8   edi: 0e273000   ebp: 000009c8   esp: cf605cb4
ds: 0069   es: 0069   ss: 0069
Process tar (pid: 2961, threadinfo=cf604000 task=c67e2530)
Stack: c02f29fc 0da3b000 00000001 00ed7067 0000000b 0000000e c02f2655 00000003
       c0116dd8 c1f39010 c014da29 cf605d1c c103e1c0 c01d8242 c02e0d02 cf9b7858
       00000488 000004bd 00000000 c01441e9 c103e1c0 c0320b60 00000035 c67e2530
Call Trace:
 [<c0116dd8>] do_page_fault+0x3e8/0x856
 [<c014da29>] alloc_slabmgmt+0x29/0x70
 [<c01d8242>] ext3_ordered_commit_write+0xd2/0x130
 [<c01441e9>] unlock_page+0x19/0x60
 [<c0148e90>] prep_new_page+0x50/0x60
 [<c0149440>] buffered_rmqueue+0x160/0x2d0
 [<c0149759>] __alloc_pages+0xd9/0x420
 [<c010a56e>] page_fault+0x2e/0x34
 [<c014da29>] alloc_slabmgmt+0x29/0x70
 [<c014dcbd>] cache_grow+0x10d/0x240
 [<c014e007>] cache_alloc_refill+0x217/0x250
 [<c014e29c>] kmem_cache_alloc+0x9c/0xc0
 [<c01e02a9>] ext3_alloc_inode+0x19/0x40
 [<c0185e6b>] alloc_inode+0x1b/0x140
 [<c0186b43>] get_new_inode_fast+0x23/0x150
 [<c0187142>] iget_locked+0xf2/0x100
 [<c01dcbdb>] ext3_lookup+0x6b/0xd0
 [<c017ac57>] __lookup_hash+0xa7/0xe0
 [<c017acad>] lookup_hash+0x1d/0x30
 [<c017b878>] lookup_create+0x38/0x80
 [<c017bced>] sys_mkdir+0x5d/0x100
 [<c0169e91>] sys_write+0x51/0x80
 [<c010a1a9>] syscall_call+0x7/0xb
Code: ff ff 31 d2 89 c1 0f ac d1 0c 25 ff 0f 00 00 8b 0c 8d 00 00 80 f5 c1 ed 09
c1 e1 0c 81 e5 f8 0f 00 00 09 c1 8d b4 0d 00 00 00 c0 <8b> 06 89 44 24 0c 8b 46
04 89 4c 24 04 c7 04 24 1c 2a 2f c0 89
 <1>Unable to handle kernel paging request at virtual address c16d1000
 printing eip:
c014948c
0f5a2000 -> *pde = 00000000:d5891001
0fa80000 -> *pme = 00000000:ed4d4067
00007000 -> *pte = 00000000:e5440061
Oops: 0003 [#2]
PREEMPT SMP
Modules linked in:
CPU:    0
EIP:    0061:[<c014948c>]    Not tainted VLI
EFLAGS: 00010286   (2.6.12.6-xenU)
EIP is at buffered_rmqueue+0x1ac/0x2d0
eax: 00000000   ebx: 00000001   ecx: 00000400   edx: c16d1000
esi: c102da20   edi: c16d1000   ebp: 00000000   esp: ce1b9d88
ds: 0069   es: 0069   ss: 0069
Process ld (pid: 3012, threadinfo=ce1b8000 task=ce336a40)
Stack: c102da20 00000003 0000001f c031c6b0 c9d800c4 c102da20 c031c680 00000000
       00000000 000080d2 c0149743 c031c680 00000000 00000012 00000000 00000000
       00000000 00000000 00000000 ce336a40 00000010 c031d2a0 00000000 e546a067
Call Trace:
 [<c0149743>] __alloc_pages+0xc3/0x420
 [<c015813e>] do_anonymous_page+0xee/0x2c0
 [<c0158388>] do_no_page+0x78/0x4b0
 [<c01451c0>] __generic_file_aio_read+0x200/0x240
 [<c0158bb7>] handle_mm_fault+0x227/0x2b0
 [<c014525b>] generic_file_aio_read+0x5b/0xb0
 [<c0116bcf>] do_page_fault+0x1df/0x856
 [<c015a297>] vma_adjust+0x1f7/0x390
 [<c015bd80>] do_brk+0x1e0/0x300
 [<c0159dae>] sys_brk+0x11e/0x130
 [<c010a56e>] page_fault+0x2e/0x34
Code: 4e 8b 74 24 14 31 ed 90 8d b4 26 00 00 00 00 89 34 24 b8 03 00 00 00 89 44
24 04 e8 0f f6 fc ff 89 c2 89 c7 b9 00 04 00 00 89 e8 <f3> ab 89 14 24 b8 03 00
00 00 83 c6 20 89 44 24 04 e8 5e f6 fc
 <6>note: ld[3012] exited with preempt_count 1
scheduling while atomic: ld/0x10000001/3012
 [<c02c5743>] schedule+0x693/0x760
 [<c01559f4>] unmap_page_range+0x194/0x210
 [<c0160d86>] free_pages_and_swap_cache+0x86/0xa0
 [<c02c6027>] cond_resched+0x27/0x40
 [<c0155bef>] unmap_vmas+0x17f/0x280
 [<c015bf47>] exit_mmap+0xa7/0x1c0
 [<c011f088>] mmput+0x38/0xa0
 [<c0124438>] do_exit+0xa8/0x3f0
 [<c010ab58>] die+0x188/0x190
 [<c0116df0>] do_page_fault+0x400/0x856
 [<c0116bcf>] do_page_fault+0x1df/0x856
 [<c0149759>] __alloc_pages+0xd9/0x420
 [<c014c2e1>] __do_page_cache_readahead+0xb1/0x1a0
 [<c0148f81>] __rmqueue+0xe1/0x130
 [<c010a56e>] page_fault+0x2e/0x34
 [<c0110069>] show+0x19/0x50
 [<c014948c>] buffered_rmqueue+0x1ac/0x2d0
 [<c0149743>] __alloc_pages+0xc3/0x420
 [<c015813e>] do_anonymous_page+0xee/0x2c0
 [<c0158388>] do_no_page+0x78/0x4b0
 [<c01451c0>] __generic_file_aio_read+0x200/0x240
 [<c0158bb7>] handle_mm_fault+0x227/0x2b0
 [<c014525b>] generic_file_aio_read+0x5b/0xb0
 [<c0116bcf>] do_page_fault+0x1df/0x856
 [<c015a297>] vma_adjust+0x1f7/0x390
 [<c015bd80>] do_brk+0x1e0/0x300
 [<c0159dae>] sys_brk+0x11e/0x130
 [<c010a56e>] page_fault+0x2e/0x34
Unable to handle kernel paging request at virtual address c17a2000
 printing eip:
c014948c
0f086000 -> *pde = 00000000:dfa02001
0590f000 -> *pme = 00000000:ed4d4067
00007000 -> *pte = 00000000:e536f061
Oops: 0003 [#3]
PREEMPT SMP
Modules linked in:
CPU:    0
EIP:    0061:[<c014948c>]    Not tainted VLI
EFLAGS: 00010286   (2.6.12.6-xenU)
EIP is at buffered_rmqueue+0x1ac/0x2d0
eax: 00000000   ebx: 00000001   ecx: 00000400   edx: c17a2000
esi: c102f440   edi: c17a2000   ebp: 00000000   esp: c536dd88
ds: 0069   es: 0069   ss: 0069
Process cc1 (pid: 3049, threadinfo=c536c000 task=cdcb4a40)
Stack: c102f440 00000003 c102f3a0 00000003 00000063 c102f440 c031c680 00000000
       00000000 000080d2 c0149743 c031c680 00000000 00000012 00000000 00000000
       00000000 00000000 00000000 cdcb4a40 00000010 c031d2a0 00000000 00000084
Call Trace:
 [<c0149743>] __alloc_pages+0xc3/0x420
 [<c015813e>] do_anonymous_page+0xee/0x2c0
 [<c0158388>] do_no_page+0x78/0x4b0
 [<c01451c0>] __generic_file_aio_read+0x200/0x240
 [<c0158bb7>] handle_mm_fault+0x227/0x2b0
 [<c0116bcf>] do_page_fault+0x1df/0x856
 [<c015ad41>] do_mmap_pgoff+0x481/0x7a0
 [<c0112ec5>] sys_mmap2+0x85/0xd0
 [<c010a56e>] page_fault+0x2e/0x34
Code: 4e 8b 74 24 14 31 ed 90 8d b4 26 00 00 00 00 89 34 24 b8 03 00 00 00 89 44
24 04 e8 0f f6 fc ff 89 c2 89 c7 b9 00 04 00 00 89 e8 <f3> ab 89 14 24 b8 03 00
00 00 83 c6 20 89 44 24 04 e8 5e f6 fc
 <6>note: cc1[3049] exited with preempt_count 1

-------------

In this particular case, the guest domain, vm1, is unresponsive.
