
Re: [Xen-devel] Questioning the Xen Design of the VMM



On Tue, 2006-08-08 at 17:10 +0300, Al Boldi wrote:

> > There are two flavours of Xen guests:
> > Para-virtual guests. Those are patched kernels, and have (in past
> > versions of Xen) been implemented for Linux 2.4, Linux 2.6, Windows,
> > <some version of>BSD and perhaps other versions that I don't know of.
> > Current Xen is "Linux only" supplied with the Xen kernel. Other kernels
> > are being worked on.
> 
> This is the part I am questioning.
> 
> > HVM guests. These are fully virtualized guests, where the guest contains
> > the same binary as you would use on a non-virtual system. You can run
> > Windows or Linux, or most other OS's on this. It does require "new"
> > hardware that has virtualization support in hardware (AMD's AMDV (SVM)
> > or Intel VT) to use this flavour of guest though, so the older model is
> > still maintained.
> 
> So HVM solves the problem, but why can't this layer be implemented in 
> software?

the short answer at the cpu level is "because of the arcane nature of
the x86 architecture" :/

it can be done, but it requires mechanisms the xen developers have so
far not been willing to adopt. non-paravirtualized guests may perform
operations which on bare x86 hardware are hard or impossible to trap.
one way to work around this is to patch guest code segments before
executing them; that's where systems like e.g. vmware come into play.
xen-style paravirtualization resolves this efficiently at the cpu level
by teaching the guest kernel not to use the critical instructions in
the first place, but to be aware of the vmm and ask it to perform those
operations on its behalf.
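
for illustration (my own example, nothing taken from the xen sources):
the classic kind of "sensitive but unprivileged" x86 instruction that
breaks pure trap-and-emulate. run below the i/o privilege level, popf
silently drops a change to the interrupt flag instead of faulting, so a
vmm that relies on traps never learns the guest tried to disable
interrupts:

/* illustration only: in protected mode below IOPL (ring 3 here, or a
 * guest kernel deprivileged to ring 1), POPF silently ignores writes
 * to the interrupt flag instead of raising a fault. */
#include <stdio.h>

static unsigned long read_flags(void)
{
        unsigned long f;
        __asm__ __volatile__("pushf; pop %0" : "=r" (f));
        return f;
}

int main(void)
{
        unsigned long f = read_flags();

        /* try to clear IF (bit 9), the way a guest kernel would */
        __asm__ __volatile__("push %0; popf" : : "r" (f & ~(1UL << 9)) : "cc");

        printf("IF before: %lu, IF after: %lu\n",
               (f >> 9) & 1, (read_flags() >> 9) & 1);
        /* both print 1: no trap occurred, the change was simply dropped */
        return 0;
}

vmware-style systems catch such sequences by scanning and rewriting the
guest code at runtime; paravirtualization simply removes them from the
guest altogether.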

once the cpu problem is solved, you still need to emulate the hardware
an unmodified guest system attempts to drive, and that again costs
additional cycles. eliminating the emulated peripheral interfaces by
putting the guest's I/O layers on top of an abstract, low-level path
into the VMM is one of the reasons why xen is faster than others. many
systems do this quite successfully even for 'non-modified' guests like
e.g. windows, by installing dedicated, virtualization-aware drivers
once the base installation is complete.
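
as a rough sketch of what such a paravirtual i/o path looks like (a
simplified picture of the idea, not the actual xen ring.h interface):
the frontend driver in the guest places requests on a ring shared with
a backend in the privileged domain and sends a notification; the
backend performs the real hardware access and posts responses the same
way:

/* simplified split-driver sketch (hypothetical, not xen's real ABI):
 * frontend queues requests on a shared ring, backend drives hardware. */
#include <stdint.h>

#define RING_SIZE 32              /* power of two */

struct blk_request  { uint64_t sector; uint32_t nr_sectors; uint32_t gref; };
struct blk_response { uint64_t id; int32_t status; };

struct shared_ring {
        uint32_t req_prod, req_cons;      /* producer/consumer indices */
        uint32_t rsp_prod, rsp_cons;
        struct blk_request  req[RING_SIZE];
        struct blk_response rsp[RING_SIZE];
};

/* frontend side: queue a request instead of touching any device */
static int frontend_submit(struct shared_ring *r, struct blk_request *rq)
{
        if (r->req_prod - r->req_cons == RING_SIZE)
                return -1;                        /* ring full */
        r->req[r->req_prod % RING_SIZE] = *rq;
        __sync_synchronize();                     /* publish before kicking */
        r->req_prod++;
        /* notify_backend();  -- event channel notification in real xen */
        return 0;
}

the point is that the guest never touches the real device; one backend
multiplexes the hardware for all frontends, which is where the
"multiplexed routing" intuition from your question actually applies.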

> I'm sure there can't be a performance issue, as this virtualization doesn't 
> occur on the physical resource level, but is (should be) rather implemented 
> as some sort of a multiplexed routing algorithm, I think :)

few device classes support resource sharing in that manner efficiently.
peripheral devices in commodity platforms are inherently single-hosted
and won't support unfiltered access by multiple driver instances in
several guests.

from the vmm perspective, it always boils down to emulating the device,
though with varying degrees of complexity in translating guest requests
into physical accesses. it depends on the device class: ide, afaik, is
known to work comparatively well; network adapters are an example of an
area where it gets more challenging.
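
to make "emulating the device" concrete, here is a toy fragment (made
up for illustration, not xen's ioemu code) of what a device model does
when the vmm traps guest port i/o to an ide controller: it keeps soft
register state and only turns a complete command into real work through
the host's own driver:

/* toy ide device model: the vmm traps guest writes to 0x1f0-0x1f7 and
 * hands them here; only a full command triggers real i/o elsewhere. */
#include <stdint.h>

struct ide_state {
        uint64_t lba;
        uint8_t  sector_count;
        uint8_t  status;          /* DRDY, BSY, ... */
};

static void ide_port_write(struct ide_state *s, uint16_t port, uint8_t val)
{
        switch (port) {
        case 0x1f2: s->sector_count = val;                                      break;
        case 0x1f3: s->lba = (s->lba & ~0xffULL)     | val;                     break;
        case 0x1f4: s->lba = (s->lba & ~0xff00ULL)   | ((uint64_t)val << 8);    break;
        case 0x1f5: s->lba = (s->lba & ~0xff0000ULL) | ((uint64_t)val << 16);   break;
        case 0x1f7:
                /* command register, e.g. 0x20 = READ SECTORS: the model
                 * would now read from a disk image and raise a virtual irq */
                break;
        }
}

the switch only covers the handful of command-block registers needed to
show the idea, which is roughly why ide is the comparatively easy case.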

this is basically the whole problem when building virtualization layers
for cots platforms: the device/driver landscape spreads to infinity :)
since you'll have a hard time driving every possible combination
yourself, you need something else to do it. one solution is a hosted
vmm, running on top of an existing operating system. the other is what
xen does: offload the drivers to a modified, privileged guest system,
which then carries the I/O load for the additional, nonprivileged
guests as well.

regards,
daniel

-- 
Daniel Stodden
LRR     -      Lehrstuhl für Rechnertechnik und Rechnerorganisation
Institut für Informatik der TU München             D-85748 Garching
http://www.lrr.in.tum.de/~stodden         mailto:stodden@xxxxxxxxxx
PGP Fingerprint: F5A4 1575 4C56 E26A 0B33  3D80 457E 82AE B0D8 735B


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

