> I am currently in the planning phase for a new HTPC system I want to
> set up, and there are several questions related to Xen that I was not
> able to find answers to. The goal of that system in the long run is
> to replace not only my server and TV receiver, but also my primary
> desktop and access point for several LAN and Wifi clients. Though
> this might sound rather simple, the solution I am hoping for has to
> separate all these tasks in virtual machines, leaving the Dom0
> strictly as a controller for those and only accessible by SSH from
> one virtual machine (probably the Desktop). This would ensure that I
> can work on one instance (which might need tinkering with the kernel
> for example) while all other services are still available and it
> would help me sandbox all the components, so none of them directly
> influences the other. The system would be an AMD Athlon X2 4850e CPU
> on an Asus M3A-H mainboard (AMD 780G chipset).
As Todd mentioned, the only VT-d capable systems available currently are the
Intel ones, AFAIK. I understand that AMD's IOMMU systems will be along in
the not-too-distant future but they're not with us yet.
> The questions I have now are:
> 1. Is it possible to successfully pass through a graphics adapter, in
> this case probably an onboard ATI HD 3200 PCIe (currently only
> supported with the non-free fglrx driver), so that the Desktop/TV
> virtual machine has exclusive use of the hardware including (future)
> 3D support? I would also like to try other systems that can not be
> paravirtualized, while the other services have to still be running.
> In some of these systems I would like to have 3D hardware
> acceleration support (Desktop VM would have to be shut down as I
> understand it), since they heavily depend on it (like Vista or maybe
> Mac OS X). Is that possible? I already read that it is rather
> problematic to pass through the primary VGA adapter, because Xen
> itself will always claim it for console output at boot time, but
> wouldn't it be possible to use a serial port for that instead? Are
> there any other pitfalls that will not permit me to achieve this?
Funnily enough, quite recently a patch was posted which apparently enables you
to start an HVM domain (on a system with VT-d) and pass the host's primary
graphics card to it.
(xen-devel thread here: http://markmail.org/message/gecwbq4oqttszxvo)
Bear in mind that's development code so it's not guaranteed to work stably /
at all. Still, it's a step in the right direction and hopefully it might
make it into a future release of Xen - it's not even in the -unstable tree
yet though, so if you wanted to try it you'd have to apply the patch yourself
and build Xen.
In principle this would allow you to do what you want to do for your desktop
VM, *if* you were happy to run it in HVM. This would mean a bit of a
performance hit for Linux but probably not too bad if you installed PV
drivers in the guest. For Windows, you'd need to run it in HVM anyhow. For
MacOS, an off-the-shelf guest won't work because HVM doesn't have EFI; the
MacOSx86 project have done work on getting MacOS running on non-EFI systems,
though. Of course, Apple's EULA still requires that you only run MacOS on a
Mac ...
People have also made some progress with using PCI passthrough to give a
non-primary graphics card to a PV domU but I'm not sure if anyone managed to
get a full X server running, so I don't know if you could do this :-/
> 2. Is it possible to have a virtual machine hosting a MythTV backend,
> that has exclusive use of a PCI(e) or PCMCIA DVB-T/S/S2 adapter. This
> backend should not run in the Desktop instance, because I would
> prefer it if the Desktop doesn't have to be running for other clients
> to use it. This would also come in handy, because not running a full
> blown desktop while using the MythTV backend from another client
> could possibly save a bit on the energy bill (and the whole system is
> supposed to help me reduce that at least a bit)
This should be doable.
You've been able to pass through many PCI cards to PV domUs for ages without
needing any special hardware. This always had the consequence that the domU
had to be trusted because by abusing the PCI device it could gain
dom0-equivalent privileges (if a sufficiently clever and determined attacker
took it over, for instance). Support has just gone into the -unstable tree
for restricting PV domUs using VT-d so that they can be less trusted. I
don't know if the intention is to allow them to be completely untrusted but
VT-d certainly closes a big loophole!
People have done this sort of thing before with PV guests and today somebody
posted saying he'd passed a TV card to an HVM guest. Some cards misbehave
when using PCI passthrough so it may be worth searching the mailing lists for
success reports.
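To give an idea of the mechanics, the usual PV passthrough recipe looks roughly like this. This is only a sketch: the BDF address 0000:02:05.0 and the config path are made-up examples, and the details vary a bit between Xen versions.

```shell
# Hide the DVB card from dom0 and hand it to a PV domU (addresses are
# examples -- find yours with lspci).

# 1. In dom0, detach the device from its normal driver and bind it to
#    pciback (alternatively, boot dom0 with pciback.hide=(0000:02:05.0)):
modprobe pciback
echo 0000:02:05.0 > /sys/bus/pci/devices/0000:02:05.0/driver/unbind
echo 0000:02:05.0 > /sys/bus/pci/drivers/pciback/new_slot
echo 0000:02:05.0 > /sys/bus/pci/drivers/pciback/bind

# 2. In the domU's config file (e.g. /etc/xen/mythtv-backend), assign it:
#    pci = [ '02:05.0' ]

# 3. Start the guest; the card should appear on the guest's PCI bus:
xm create /etc/xen/mythtv-backend
```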
> 3. I would like to pass 3 NICs (HP 4 port gbit switch pci card, D-
> Link 4x T100Base card and a Ralink pci 802.11n draft 2.0 adapter)
> directly to an AP/Router virtual machine, so the Dom0 is not
> connected directly to the network, but only to the Desktop VM by a
> separate bridge and the onboard gbit adapter (for backups/failover).
Wow! Again, passing those through to a domU using PCI passthrough ought to
work in principle but it's worth searching for success reports as to whether
the cards / drivers play well with PCI passthrough.
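For what it's worth, once each card is hidden from dom0 with pciback (as for any PCI passthrough), the router domU's config would just list all three devices. A rough sketch, assuming xm-style config files; the path, BDF addresses and bridge name below are invented and would need adjusting for your system:

```shell
# Sketch of a router domU config -- all values are examples only.
cat > /etc/xen/router <<'EOF'
kernel = "/boot/vmlinuz-2.6-xen"
memory = 128
name   = "router"
# HP 4-port, D-Link 4-port and Ralink cards, all hidden from dom0 via pciback
pci    = [ '01:05.0', '01:06.0', '01:07.0' ]
EOF

# The separate dom0 <-> Desktop link would be a private bridge in dom0:
brctl addbr backupbr
ip link set backupbr up
# ...and in the Desktop domU's config:  vif = [ 'bridge=backupbr' ]
```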
> Is that possible and am I really gaining security for the whole
> system or is this just my imagination and doesn't make any sense at
> all? How about the performance, especially for the graphics adapter,
> do I have to factor in bigger losses there (maybe because PCI
> passthrough doesn't support the full PCIe 16x speed)? Has anyone
> tried something similar yet or am I the first to think this might be
> a good idea?
No idea what performance you can expect from the graphics passthrough (once,
that is, it is available in Xen at all). PCI passthrough should be fairly
low overhead but because there are more virtual machines contending for the
CPU (and Xen will potentially have to switch virtual machines each time a PCI
device needs servicing) you may take some performance hit.
Virtualisation always introduces overheads. Xen is fortunate in that these
overheads are often very small but you're definitely going to reduce the
*overall throughput* the machine is capable of by doing this. If the overall
throughput was more than you needed / used anyhow then this may not actually
matter. Still, it's worth bearing in mind.
Instances where you hop over the virtual network within Xen will reduce
network performance - again, going through the virtual NIC (and the Linux
bridging code) adds overheads. You're also adding to the number of context
switches that your CPU has to do in order to get data in / out of the system,
on top of the processing time. Again, this may not actually be that
noticeable in your system but it's worth bearing in mind.
Security judgements are very difficult to make. By introducing Xen you've
actually *increased* the size of the Trusted Computing Base - code which you
rely upon being secure in order to protect your system. On the other hand,
you'll
have put up some harder partitions between components of your system which
may help you. Of course, if you do all your internet browsing and file
editing in your Desktop VM then maybe that's the only one whose security is
really critical - in which case you wouldn't have changed that much!
Splitting up your system like this *may* eventually (when all the support in
Xen is there and you have the appropriate hardware) make things more flexible
for you - e.g. ability to do 3D stuff in Vista whilst running a Linux Myth
backend on the same system. That's something that you just can't do without
some sort of virtual machine layer. And you're able to contain administrator
mistakes, kernel panics, etc that may occur in any one virtual machine
(though typically these are rare, one hopes!).
I guess what I'm saying is "Yes, you can do that, yes it may work well.
Security benefits debatable but you *may* be more robust to certain problems
and able to do some really cool stuff."
Other things you could consider, which may work out simpler / cheaper or not:
1) Run Windows as the primary OS and either:
 - run Myth on Windows, if that's possible;
 - use a separate router box;
 - run the non-hardware-dependent Linux stuff in a conventional virtual
   machine system, or look at using CoLinux (http://colinux.org/)
2) Run Linux as primary OS and maybe look into using OpenVZ
(http://openvz.org/) to partition different Linux-based workloads within the
system. You'd need to dual-boot to Windows, or have a separate machine.
Still, you could shut that machine down entirely when not using it, and the
Linux box could perhaps run at lower power.
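To give a flavour of option 2, carving out one OpenVZ container per workload looks roughly like this (a sketch only: the container ID, template name and IP address are examples, and you'd need an OpenVZ-patched kernel running first):

```shell
# Create and start a container for, say, the Myth backend -- all values
# here are examples, not a recommendation.
vzctl create 101 --ostemplate debian-4.0-i386-minimal
vzctl set 101 --hostname myth-backend --ipadd 192.168.0.10 --save
vzctl start 101
vzctl enter 101   # get a shell inside the container
```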
3) Bear in mind that there are other Open Source virtualisers, although I
don't think any others will do PCI passthrough for you. There are various
approaches - other than PCI passthrough - to accelerated 3D graphics in guest
VMs being worked on or starting to be available currently, both for Xen and
for other virtualisers.
> I hope someone can give me an answer or at least a hint on how far I
> am from reality.
> Thanks,
I hope my comments help a bit. Sorry for rambling :-)
Cheers,
mark
--
Push Me Pull You - Distributed SCM tool (http://www.cl.cam.ac.uk/~maw48/pmpu/)
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users