Re: [Xen-users] 32bit on 64-bit causes guest Kernel oops

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] 32bit on 64-bit causes guest Kernel oops
From: Bastian Blank <bastian@xxxxxxxxxxxx>
Date: Wed, 29 Aug 2007 12:08:23 +0200
Delivery-date: Wed, 29 Aug 2007 03:08:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <db4f7fd10708270705n1c015a62m15b5288cfebb546d@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <db4f7fd10708270705n1c015a62m15b5288cfebb546d@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
On Mon, Aug 27, 2007 at 10:05:01AM -0400, Nathan Widmyer wrote:
> Is the version mismatch between the API and hypervisor versions a problem?

No. But a mismatch in the blkback/blkfront interface is. The protocol is
not compatible between x86_32 and x86_64, and the ability to handle both
was added to blkback in the Xen 3.1 kernel. The Fedora kernel is based on
3.0.4.
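
For context, a simplified sketch of why the two word sizes disagree
(abridged from the real struct in xen/include/public/io/blkif.h, field
names shortened): the uint64_t members of the ring request are 4-byte
aligned on x86_32 but 8-byte aligned on x86_64, so a 32-bit frontend and
a 64-bit backend compute different field offsets in the same shared ring
page.

    /* Sketch of the blkif ring request layout problem. Compile with
     * -m32 and -m64 and compare the output. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    struct blkif_request_sketch {
        uint8_t  operation;      /* BLKIF_OP_READ, BLKIF_OP_WRITE, ... */
        uint8_t  nr_segments;
        uint16_t handle;         /* blkif_vdev_t */
        uint64_t id;             /* offset 4 on x86_32, offset 8 on x86_64 */
        uint64_t sector_number;  /* blkif_sector_t */
        /* the segment array follows in the real struct */
    };

    int main(void)
    {
        /* A backend reading "id" at its native offset misparses requests
         * written by a guest of the other word size -- hence the protocol
         * negotiation added to blkback in the Xen 3.1 kernels. */
        printf("offsetof(id) = %zu, sizeof = %zu\n",
               offsetof(struct blkif_request_sketch, id),
               sizeof(struct blkif_request_sketch));
        return 0;
    }

Built with -m32 this prints offset 4 (size 20); with -m64, offset 8
(size 24). A 3.0.4-based blkback only understands its own native layout.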

Bastian

-- 
        "That unit is a woman."
        "A mass of conflicting impulses."
                -- Spock and Nomad, "The Changeling", stardate 3541.9

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
