WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

[Xen-users] GPLPV Drivers - Problem with Scientificlinux 5.2

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] GPLPV Drivers - Problem with Scientificlinux 5.2
From: Klaus Steinberger <klaus.steinberger@xxxxxxxxxxxxxxxxxxxxxx>
Date: Mon, 09 Feb 2009 18:10:00 +0100
Delivery-date: Mon, 09 Feb 2009 09:10:38 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (X11/20090105)
Hello James,

I'm trying to use the GPLPV drivers (version 0.9.12-pre13), but under
SL 5.2 (the same as CentOS 5.2 and RHEL 5.2) with a Windows 2008 VM I
get the doubled system disk problem.

It looks like the new method for switching off the qemu devices does
not work with the Xen version from Red Hat (xentop reports
xen-3.1.2).

Any idea what we can do? Maybe a compatibility switch that reverts the
xenhide driver to the method used in 0.9.12-pre4 and earlier?

Sincerely,
Klaus

P.S.: I resent my mail without a cryptographic signature, as mailman seems to have problems with it.

Attachment: klaus_steinberger.vcf
Description: Vcard

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users