
[Xen-devel] Re: Back end domains : input desired



On Mon, 2005-01-24 at 12:18, Mark A. Williamson wrote:
> > DOM0:  minimal linux install with LVM2 primarily for backending the ide
> > disks.
> 
> Fine.
> 
> > BE_NIC_0:  Back end NIC_0 domain (bridge) with minimal linux install -
> > no ip address assigned - using ebtables to filter/protect
> > BE_NIC_1:  Same as BE_NIC_0 only for NIC_1
> 
> This should work, although a recent post suggested there was some sort of bug 
> in the multiple backend support...
> 
> > BE_VNIC_2:  Back end for a "virtual nic"/bridge for DomU to DomU
> > communication (DMZ).
> 
> So does this have any connections to the physical network cards at all?

No. Could I possibly use the "dummy" driver to handle this requirement?
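
For the record, here's roughly what I have in mind - untested, and the
bridge/interface names are just placeholders:

    # in BE_VNIC_2: a bridge with no physical NIC behind it
    modprobe dummy                 # provides dummy0, a NIC with no hardware
    brctl addbr dmz-br             # bridge for DomU-to-DomU (DMZ) traffic
    brctl addif dmz-br dummy0
    ifconfig dummy0 0.0.0.0 up     # no IP address assigned, just link up
    ifconfig dmz-br 0.0.0.0 up

And the ebtables side on the BE_NIC domains would start from a
default-deny policy along these lines (just the skeleton, not the real
ruleset):

    ebtables -P FORWARD DROP                 # drop everything by default
    ebtables -A FORWARD -p ARP -j ACCEPT
    ebtables -A FORWARD -p IPv4 -j ACCEPT    # to be refined per protocol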

> The problem is that AFAIK the current code won't allow a domain to run a 
> backend driver unless it's controlling a real physical device.
> 
> > BE_MGMT:  firewall config/mgmt console (X Windows), preferably with X
> > displaying directly through the AGP card on the local console - is that
> > possible? - and ntp/clock sync (can this happen here or does it have to
> > happen on DOM0?).
> 
> Clock sync can probably only occur from dom0 at the moment.  Likewise for AGP 
> access (although one user had some success in giving a graphics card to a 
> domU, it's not fully working yet).

Ok, I can live with that for the moment ... hopefully this will be
addressed in the near future?
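
In the meantime I'll just run ntpd in DOM0 and, if I understand you
right, let the other domains follow dom0's clock. Something as minimal
as this ought to do (the server name is only an example):

    # /etc/ntp.conf in DOM0
    server pool.ntp.org
    driftfile /var/lib/ntp/ntp.drift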


> > 1)   I only seem to be able to compile the actual NIC drivers with DOM0
> > (e100/e1000/3c59x, etc).  Is this where I should be compiling them even
> > though the NICs will be used in another DOM?  If not, how do I go about
> > compiling the drivers for the BE DOMs? (They don't show up as options -
> > yes, I do have XEN_PHYSDEV_ACCESS and XEN_NETDEV_BACKEND enabled.)
> 
> Just stick all the drivers you need into a xen0 kernel, then use that kernel 
> in any domain that's talking to the hardware.  You can use a xen0 kernel 
> anywhere.

Wow, so you can run "multiple" dom0 kernel images (with only one real
dom0) - is there anything I need to add to the .sxp file to
differentiate the non-dom0 domains from the real dom0?
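
For anyone else reading along, I'm guessing the config for a BE domain
is then just an ordinary domU file that points at the xen0 kernel -
something like this (paths and names are made up from my setup):

    # /etc/xen/be_nic_0 - ordinary domU config, but using a xen0 kernel
    kernel = "/boot/vmlinuz-2.6-xen0"    # same xen0 build DOM0 boots
    memory = 64
    name   = "be_nic_0"
    disk   = ['phy:vg0/be_nic_0,sda1,w']
    root   = "/dev/sda1 ro"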

> > 2)  Even with pci_dom0_hide=(01,01,0)(02,00,0) as part of my grub.conf
> > (for the startup of xen.gz), I still see these devices under DOM0 - is
> > this normal? lspci shows the devices as 0000:01:01.0 and 0000:02:00.0
> > respectively.  Are my parameters to pci_dom0_hide correct?
> 
> Try physdev_dom0_hide - pci_dom0_hide is a bug that got introduced to
> the docs at some point (I think it has now been fixed).

Not as of yesterday, at least with regard to the docs available on your
website.
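
Here's the grub.conf stanza I'll retest with - same device tuples as
before, only the parameter name changed (kernel paths are from my box):

    title Xen
        kernel /boot/xen.gz dom0_mem=131072 physdev_dom0_hide=(01,01,0)(02,00,0)
        module /boot/vmlinuz-2.6-xen0 root=/dev/hda1 ro console=tty0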

> > 3)  Should I be using stable, testing or unstable for this?  NOTE:
> > stable and testing both are unable to attach xen console to ttyS whereas
> > unstable works correctly for this.
> 
> In general, use stable for production environments.  Testing is the "next 
> stable release" and so is quite stable itself (and may have additional bug 
> fixes).
> 
> > 4)  It would be preferred to run X in a domain separate from Dom0, but
> > still be accessible for use on the local console without having to
> > install X and a VNC client in DOM0.  Is this possible, or am I just
> > dreaming here?
> 
> Possible in theory; in practice this doesn't quite work yet.

Good to know - I'll try it anyway and see if I'm one of the lucky few,
or if I have to wait.

> HTH,
> Mark
> 

Thanks for the input!

B.

