
Re: [Xen-devel] blktap2 and CONFIG_XEN_BLKBACK_PAGEMAP


  • To: Kaushik Kumar Ram <kaushik@xxxxxxxx>
  • From: Shriram Rajagopalan <rshriram@xxxxxxxxx>
  • Date: Thu, 15 Jul 2010 11:19:26 -0700
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 15 Jul 2010 11:20:30 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

IIRC, during my early experiments with blkback and blktap2 I hit a similar error. Tracing through the code, I gathered that the pagemap is used to manage page grants handed to the blktap2 kernel driver, so the #else (i.e. !CONFIG_XEN_BLKBACK_PAGEMAP) code path is not going to work.
I suggest you take a look at blkback_pagemap.c and blktap2/device.c (or something like that) to get a better picture.
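Very roughly, the structure being described looks like the sketch below. This is only an illustrative outline with placeholder names (pagemap_record, pagemap_lookup and struct pagemap_entry are made up for this sketch, not the real symbols in blkback_pagemap.c); the point is that the !CONFIG_XEN_BLKBACK_PAGEMAP branch leaves blktap2 with no way to recover the grant behind a page:

struct page;                        /* kernel page descriptor (opaque here)    */
typedef unsigned short domid_t;     /* domain id, as in the Xen interface      */
typedef unsigned int   grant_ref_t; /* grant reference                         */

struct pagemap_entry {              /* placeholder for the real pagemap record */
        domid_t        domid;
        unsigned short busid;
        grant_ref_t    gref;
};

#ifdef CONFIG_XEN_BLKBACK_PAGEMAP
/* blkback side: remember which domain/device/grant a mapped page came from */
void pagemap_record(struct page *pg, domid_t domid,
                    unsigned short busid, grant_ref_t gref);
/* blktap2 side: recover the grant so the page can be remapped for tapdisk */
struct pagemap_entry pagemap_lookup(struct page *pg);
#else
/*
 * With the option compiled out, both halves degenerate to no-ops, so the
 * blktap2 request path ends up installing a PTE the hypervisor refuses --
 * consistent with the "Error getting mfn ... from L1 entry" line in the
 * dump below.
 */
static inline void pagemap_record(struct page *pg, domid_t domid,
                                  unsigned short busid, grant_ref_t gref) { }
static inline struct pagemap_entry pagemap_lookup(struct page *pg)
{
        struct pagemap_entry none = { 0, 0, 0 };
        return none;
}
#endif

In the backtrace below, the mapping attempt that trips over this is the blktap_device_do_request -> blktap_map_uaddr path.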

On Wed, Jul 14, 2010 at 4:59 PM, Kaushik Kumar Ram <kaushik@xxxxxxxx> wrote:
Is it necessary to use blkback_pagemap with blktap2? Since the use of blkback_pagemap is configurable, I tried without it and my system crashed (crash dump attached below). Or is this a bug?
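For reference, "tried without it" here means a dom0 kernel built with the option from the subject line turned off; assuming the usual Kconfig spelling of the symbol, the two configurations would look like:

# configuration under which the crash below was observed
# CONFIG_XEN_BLKBACK_PAGEMAP is not set

# configuration with the pagemap left enabled
CONFIG_XEN_BLKBACK_PAGEMAP=y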

I am using a roughly month-old xen-unstable.hg with linux-2.6.18-xen.hg (both 64-bit).

Thanks.
-Kaushik

(XEN) mm.c:889:d0 Error getting mfn 80765 (pfn 3fba6) from L1 entry 8000000080765027 for l1e_owner=0, pg_owner=0
(XEN) mm.c:5046:d0 ptwr_emulate: could not get_page_from_l1e()
Unable to handle kernel paging request at ffff8800388f6688 RIP:
 [<ffffffff803dc7d6>] blktap_map_uaddr_fn+0xa6/0xc0
PGD 1140067 PUD 1141067 PMD 1306067 PTE 80100000388f6065
Oops: 0003 [1] SMP
CPU 0
Modules linked in: e1000e sd_mod ata_piix libata thermal fan
Pid: 4183, comm: blkback.1.sda1 Not tainted 2.6.18.8-xen0 #40
RIP: e030:[<ffffffff803dc7d6>]  [<ffffffff803dc7d6>] blktap_map_uaddr_fn+0xa6/0xc0
RSP: e02b:ffff880039d01840  EFLAGS: 00010297
RAX: 8000000080765027 RBX: ffff8800388f6688 RCX: ffff880039d01908
RDX: 00002b218a8d1000 RSI: ffff880001fb15d0 RDI: ffff8800388f6688
RBP: ffff880039d01850 R08: 00000000000388f6 R09: 0000000000000000
R10: 0000000000000000 R11: 00000000000002c8 R12: ffff8800388f6688
R13: 00002b218a8d1000 R14: 00002b218a8d2000 R15: ffff88003890e2a0
FS:  00002af9674c06e0(0000) GS:ffffffff8058c000(0000) knlGS:0000000000000000
CS:  e033 DS: 0000 ES: 0000
Process blkback.1.sda1 (pid: 4183, threadinfo ffff880039d00000, task ffff88003e8cf080)
Stack:  ffff8800388f6688 ffff880001fb15d0 ffff880039d018f0 ffffffff80270033
 0000001a00000039 ffff880039d01908 ffffffff803dc730 ffff88003a714080
 ffff8800389802b0 00002b218a8d2000 00002b218a8d2000 ffff88003c03b430
Call Trace:
 [<ffffffff80270033>] apply_to_page_range+0x4e3/0x590
 [<ffffffff803dc730>] blktap_map_uaddr_fn+0x0/0xc0
 [<ffffffff803dac01>] blktap_map_uaddr+0x21/0x30
 [<ffffffff803db70c>] blktap_device_do_request+0x67c/0xfe0
 [<ffffffff8023f36c>] __mod_timer+0xbc/0xe0
 [<ffffffff802088b0>] __switch_to+0x370/0x5b0
 [<ffffffff8023f1dc>] lock_timer_base+0x2c/0x60
 [<ffffffff8023f9c6>] del_timer+0x56/0x70
 [<ffffffff80344715>] __generic_unplug_device+0x25/0x30
 [<ffffffff803459d0>] generic_unplug_device+0x20/0x60
 [<ffffffff803d3196>] unplug_queue+0x26/0x50
 [<ffffffff803d3dea>] blkif_schedule+0x55a/0x690
 [<ffffffff803d3890>] blkif_schedule+0x0/0x690
 [<ffffffff8024b12a>] kthread+0xda/0x110
 [<ffffffff8020a428>] child_rip+0xa/0x12
 [<ffffffff8024b050>] kthread+0x0/0x110



--
perception is but an offspring of its own self
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

