xen-users

Re: [Xen-users] FreeBSD HVM Guest boots very slow.

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] FreeBSD HVM Guest boots very slow.
From: Mark Williamson <mark.williamson@xxxxxxxxxxxx>
Date: Thu, 21 Aug 2008 19:44:33 +0100
Cc: John Mathews <mathjm@xxxxxxxxx>
Delivery-date: Thu, 21 Aug 2008 11:45:10 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <a31dfe7c0808202018x4f54ac89u11b5de765d8e996e@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <a31dfe7c0808202018x4f54ac89u11b5de765d8e996e@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9

Hi John,

I'd like to clarify a few things to make sure we're on the same page here...

On Thursday 21 August 2008, John Mathews wrote:
> Hello Everyone
>
> I am trying to boot to a FreeBSD HVM guest on an RHEL5 Dom0.
> I have around 2GB memory installed in the system and below is my xen config

<snip>

Looks fine.

> I am able to boot to FreeBSD guest OS with the above configuration. Things
> work well.
>
> But as soon as I add a new line to the above config to hide one pci device,
> [pci=["0000:xx:00.0" ] ]
> the FreeBSD booting becomes almost 10 times slower. Any idea what could be
> the reason ?

Just to be clear, that line doesn't "hide" anything on its own.  It's a 
directive to pass through a PCI device from your *host* system to the guest.  
It's assumed that there is not a driver in dom0 holding access to that PCI 
device - you arrange for that to be true by "hiding" the device from the 
driver in dom0.
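
If you want to check whether a dom0 driver currently holds the device, 
something like this should tell you (substitute your real device address for 
the placeholder; the sysfs symlink works on any recent kernel, and lspci only 
reports the driver if your pciutils is new enough to support -k):

  # which dom0 driver, if any, is bound to the device right now
  ls -l /sys/bus/pci/devices/0000:xx:00.0/driver
  # or, if your lspci supports it:
  lspci -k -s xx:00.0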

You can hide a device on the dom0 kernel command line, but this doesn't work if 
the Xen pciback driver is a module on your system.  In that case you need to 
manually rebind the driver or fiddle around with your dom0 configuration 
files a bit.
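
For reference, the two approaches look roughly like this - treat it as a 
sketch rather than a recipe, since the exact names depend on how your dom0 
kernel was built:

  # pciback built into the dom0 kernel: hide the device on the kernel
  # command line (then reboot dom0)
  pciback.hide=(0000:xx:00.0)

  # pciback built as a module: load it and rebind the device by hand
  # through sysfs, as root, before starting the guest
  modprobe pciback
  echo -n 0000:xx:00.0 > /sys/bus/pci/devices/0000:xx:00.0/driver/unbind
  echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/new_slot
  echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/bind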

So, putting that line down ought to pass through a host device.  I don't 
*actually* know what happens if you try to pass through a device which you 
haven't "hidden" / rebound in dom0.  I doubt it'd be what you intended 
though - it's conceivable (to me) that if the device is in use in dom0 then 
you might be getting some timeouts as it tries (and fails) to talk to the 
device.  I'm assuming (hopefully?) that the code won't let two domains 
actually *fight* over a device! ;-)

> Going through a couple of other responses here, I could figure out that the
> Disk IO reads will be
> pretty slow from an HVM guest without PV drivers. But I am not able to
> understand why this happens
> only when I try to hide a device.

I think, as you say, it's unlikely to be this since it only manifests with the 
PCI passthrough line in the config file.

Could you please clarify what the PCI config line was supposed to do and if 
anything I've said sounds odd or new to you?

> I was wondering if anyone had any information on this. Please post your
> responses if you could
> think of any possible cause for this or you have some suggestion to make my
> FreeBSD guest faster.

I guess the ideal way to make the guest faster would be to get someone to port 
the PV drivers to run under FreeBSD.  There was an existing paravirt FreeBSD 
port which could be drawn upon here but - ironically - one of the things that 
kept it out of the FreeBSD mainline was the need to modify the drivers to 
support FreeBSD's Newbus architecture.  I guess this would still be a problem 
now with respect to mainlining - however, the PV drivers definitely *worked* 
in the PV port at one stage.  We (as a community) still would need to find 
someone who'd take on this work though :-/

Other than that, I'm afraid all I can suggest is that you apply any FreeBSD / 
Xen / virtualisation tuning tips you can find and see what effect they have.  
HVM usually hurts most in networking performance.  HVM also has fairly 
limited GUI performance, so you may find that (despite the network 
limitations) a networked GUI like X11-over-SSH or NoMachine NX (if FreeBSD 
can run it) would work best.
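
For the X11-over-SSH option - assuming sshd is running in the guest with X11 
forwarding enabled ("X11Forwarding yes" in sshd_config) - it's just the usual:

  ssh -X youruser@your-guest-address
  # anything graphical you start in that session displays on your local X server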

Hope that helps,
Cheers,
Mark

-- 
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
