Re: [Xen-users] FreeBSD HVM Guest boots very slow.
Thanks Mark. Please see my comments inline.
On Thu, Aug 21, 2008 at 11:44 AM, Mark Williamson <mark.williamson@xxxxxxxxxxxx> wrote:
Hi John,
I'd like to clarify a few things to make sure we're on the same page here...
On Thursday 21 August 2008, John Mathews wrote:
> Hello Everyone
>
> I am trying to boot a FreeBSD HVM guest on an RHEL5 dom0.
> I have around 2GB of memory installed in the system, and below is my Xen config:
<snip>
Looks fine.
> I am able to boot the FreeBSD guest OS with the above configuration. Things
> work well.
>
> But as soon as I add a new line to the above config to hide one PCI device,
>   pci = [ "0000:xx:00.0" ]
> the FreeBSD boot becomes almost 10 times slower. Any idea what could be
> the reason?
Just to be clear, that line doesn't "hide" anything on its own. It's a
directive to pass through a PCI device from your *host* system to the guest.
It's assumed that there is no driver in dom0 holding access to that PCI
device - you arrange for that to be true by "hiding" the device from its
driver in dom0.
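For context, that directive would sit alongside the rest of the HVM guest
config, something like the sketch below (the memory value, disk path and
hvmloader path here are only placeholders, not the actual config that was
snipped above):

    builder = 'hvm'
    kernel  = '/usr/lib/xen/boot/hvmloader'
    memory  = 1024
    disk    = [ 'file:/var/lib/xen/images/freebsd.img,hda,w' ]
    # pass the host PCI device straight through to the guest; dom0 must
    # not be driving it at the same time
    pci     = [ '0000:xx:00.0' ]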
You can hide a device on the dom0 kernel command line, but this doesn't work if
the Xen pciback driver is built as a module on your system. In that case you
need to manually rebind the driver or fiddle around with your dom0
configuration files a bit.
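As a rough sketch, the module route usually looks something like this (the
"e1000" name below is only a placeholder for whatever dom0 driver currently
claims the device):

    # load pciback, detach the device from its current dom0 driver, then
    # hand it over to pciback via sysfs
    modprobe pciback
    echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/e1000/unbind
    echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/new_slot
    echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/bind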
Sorry, I forgot to mention the pciback module part. I run pciback as a module in dom0, and I run the commands below before I start the guest OS:
modprobe pciback
echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/new_slot
echo -n 0000:xx:00.0 > /sys/bus/pci/drivers/pciback/bind
So I guess this ensures that the device is hidden from dom0.
How can I verify that this device is really hidden from dom0? If I do an lspci from dom0 to dump the PCI config space of this device after it's hidden, should that work?
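(One way I could check, assuming the standard sysfs layout on the dom0 kernel, would be to look at the device's driver symlink rather than relying on lspci, since lspci presumably lists the device either way:

    # shows which dom0 driver currently owns the device; after a successful
    # rebind this should point at .../drivers/pciback
    readlink /sys/bus/pci/devices/0000:xx:00.0/driver
    # the device should also show up under pciback's bound devices
    ls /sys/bus/pci/drivers/pciback/
)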
So, putting that line down ought to pass through a host device. I don't
*actually* know what happens if you try to pass through a device which you
haven't "hidden" / rebound in dom0. I doubt it'd be what you intended
though - it's conceivable (to me) that if the device is in use in dom0 then
you might be getting some timeouts as it tries (and fails) to talk to the
device. I'm assuming (hopefully?) that the code won't let two domains
actually *fight* over a device! ;-)
> Going through a couple of other responses here, I could figure out that
> disk I/O reads will be pretty slow from an HVM guest without PV drivers.
> But I am not able to understand why this happens only when I try to hide
> a device.
I think, as you say, it's unlikely to be this since it only manifests with the
PCI passthrough line in the config file.
Could you please clarify what the PCI config line was supposed to do and if
anything I've said sounds odd or new to you?
My intention is just to make sure that my PCI device is hidden from dom0 and is visible in my FreeBSD guest. So if I hide the device in dom0 and direct Xen to enable PCI passthrough with the pci line in the config file, the FreeBSD guest boots very slowly. And if I comment out the pci line in the config file, it boots very fast.
Do you think the PCI passthrough logic in Xen could by any chance degrade disk I/O performance for HVM guests? (Just a wild guess; I am not that familiar with the Xen code.)
> I was wondering if anyone had any information on this. Please post your
> responses if you can think of any possible cause for this, or if you have
> some suggestion to make my FreeBSD guest faster.
I guess the ideal way to make the guest faster would be to get someone to port
the PV drivers to run under FreeBSD. There was an existing paravirt FreeBSD
port which could be drawn upon here but - ironically - one of the things that
kept it out of the FreeBSD mainline was the need to modify the drivers to
support FreeBSD's Newbus architecture. I guess this would still be a problem
now with respect to mainlining - however, the PV drivers definitely *worked*
in the PV port at one stage. We (as a community) still would need to find
someone who'd take on this work though :-/
Other than that, I'm afraid all I can suggest is that you apply any FreeBSD /
Xen / virtualisation tuning tips you can find and see what effect they have.
HVM usually hurts most in networking performance. HVM also has fairly
limited GUI performance so you may find (despite the network limitations) a
networked GUI like X11-over-SSH or Nomachine X (if FreeBSD can run it) would
work best.
Hope that helps,
Cheers,
Mark
--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users