This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Re: [Xen-ia64-devel] Oops from loop driver on IA64

To: Kouya SHIMURA <kouya@xxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [Xen-ia64-devel] Oops from loop driver on IA64
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Tue, 18 Apr 2006 20:37:58 +0900
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 18 Apr 2006 04:38:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200604181041.k3IAfsi07036@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <200604181041.k3IAfsi07036@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/
On Tue, Apr 18, 2006 at 07:41:55PM +0900, Kouya SHIMURA wrote:

> We encountered an Oops from the loop driver when a vbd is used
> in a dom0 kernel built with CONFIG_VIRTUAL_MEM_MAP on ia64.
> I investigated this and may have found a serious bug.
> On x86, flush_dcache_page() does nothing, so there is no problem.
> But on ia64, flush_dcache_page() can access a wrong page struct
> and corrupt kernel memory.
> The attached patch fixes the problem, but modifying a generic
> Linux driver seems like a bad idea. How should we fix it?

Why is an invalid page being passed in the first place?
Shouldn't that be fixed, instead of modifying loop.c?


Xen-devel mailing list
