
To: "Liang Yang" <multisyncfe991@xxxxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] RE: [Xen-users] Does VT-d make live migration of Xen more difficult by reducing the size of abstraction layer?
From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
Date: Mon, 12 Feb 2007 12:30:47 +0100
Cc: Xen devel list <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 12 Feb 2007 04:31:29 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <BAY125-DAV991F884B1668B2701C66C939C0@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdMf0sJ929ugKDkR6GT1gZNw7QhhACGDOyw
Thread-topic: [Xen-users] Does VT-d make live migration of Xen more difficult by reducing the size of abstraction layer?
 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Liang Yang
> Sent: 09 February 2007 19:18
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Cc: Xen devel list
> Subject: [Xen-users] Does VT-d make live migration of Xen 
> more difficult by reducing the size of abstraction layer?
> 
> Hi,
> 
> I'm just thinking about the pros and cons of VT-d. On one side, it
> improves the performance of guest domains by providing more direct
> access to HW, bypassing the hypervisor; on the other side, it also
> reduces the abstraction layer of the hypervisor, which could make
> live migration more difficult.

Ehm, that's a bit "wrong". VT (or, in the case I know better, AMD-V)
doesn't directly allow the guest to access hardware, because of the
complications of memory addresses. The guest has an illusory view of
physical memory addresses: for example, a guest with 256MB of memory
will believe that memory starts at 0 and ends at 256MB. That can of
course only be true for one guest (at most), and that privilege is
usually reserved for Xen+Dom0. The other guests are loaded at some other
address and given a "false" statement from the hypervisor about where
their memory is, because most OS's don't grasp the concept of being
loaded at some random address in memory (never mind the fact that the
memory for the guest can be non-contiguous). Because the guest is
completely unaware of machine-physical addresses, it will not be able to
correctly tell a piece of hardware where the actual data is located
(within the MACHINE's memory).
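
To make that concrete, here is a rough sketch (not actual Xen code; the
structure and names are made up) of the kind of translation the
hypervisor has to do on the guest's behalf:

/* Hypothetical sketch (not actual Xen code; structure and names are
 * made up) of the translation the hypervisor does for the guest.  The
 * per-domain p2m table maps the guest's "pseudo-physical" frame
 * numbers, which start at 0, to the real machine frame numbers. */

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  ((1ULL << PAGE_SHIFT) - 1)

struct domain {
    uint64_t *p2m;        /* indexed by guest frame number              */
    uint64_t  nr_frames;  /* e.g. 256MB / 4KB = 65536 frames            */
};

/* Turn the address the guest *thinks* it has into the machine address
 * a device would actually need for DMA. */
static uint64_t guest_to_machine(const struct domain *d, uint64_t gpa)
{
    uint64_t gfn = gpa >> PAGE_SHIFT;

    if (gfn >= d->nr_frames)
        return ~0ULL;     /* outside the guest's pseudo-physical memory */

    return (d->p2m[gfn] << PAGE_SHIFT) | (gpa & PAGE_MASK);
}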

The memory abstraction for HVM (fully virtualized) domains is not
particularly different from that for PV domains. It differs in the sense
that we trap into the hypervisor in a different way, and we have to
"reverse engineer" the operations the kernel is doing rather than "know
from the source code" what's going on, plus a few other complications.
But in essence the hypervisor knows all about the memory layout and
hardware settings the guest has set up, and handling this is not much
more difficult than in the PV case.
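
Very roughly, and again with made-up names just to illustrate the shape
of the two paths:

/* Made-up sketch (not actual Xen code) contrasting the two entry paths.
 * A PV guest tells the hypervisor exactly what it wants; for an HVM
 * guest the hypervisor only sees "this instruction trapped" and has to
 * decode it to work out what was intended. */

#include <stdint.h>

/* PV: an explicit hypercall, so the intent arrives with the call. */
void pv_hypercall_update_mapping(uint64_t guest_vaddr, uint64_t new_pte)
{
    (void)guest_vaddr;
    (void)new_pte;
    /* Validate new_pte against the domain's memory, then install it. */
}

/* HVM: a VM exit carries only the raw fault information. */
struct vmexit_info {
    uint64_t fault_gpa;       /* guest-physical address that trapped   */
    uint8_t  insn_bytes[15];  /* instruction the guest was executing   */
    uint8_t  insn_len;
};

void hvm_handle_mmio_exit(const struct vmexit_info *info)
{
    (void)info;
    /* Decode insn_bytes to recover operand size, registers and
     * direction (the "reverse engineering" step), then emulate the
     * access against the virtual device model. */
}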

Whilst there is overhead in accessing hardware in the PV case, the
overhead is actually greater in the HVM case, as the number of
intercepts ("traps to the hypervisor") for any emulated hardware is most
likely larger than the single trap to the HV from the PV guest - only
really trivial hardware can get away with a single memory operation to
complete a HW access. An IDE access, for example, consists of several
IO-writes followed by IO-read/write operations for the data.
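
To give a feel for the numbers, here is roughly what a single emulated
IDE PIO sector read looks like from the guest's side - under HVM every
one of these port accesses is a separate intercept (outb/inw here just
stand in for the real port-I/O instructions):

/* Rough illustration of why emulated IDE is expensive under HVM: every
 * port access below causes a separate intercept into the hypervisor /
 * device model, whereas a PV block driver queues the request in shared
 * memory and issues a single notification. */

#include <stdint.h>

extern void     outb(uint16_t port, uint8_t val);  /* one trap per call */
extern uint16_t inw(uint16_t port);                /* one trap per call */

void ide_pio_read_one_sector(uint32_t lba, uint16_t *buf)
{
    outb(0x1F2, 1);                            /* sector count          */
    outb(0x1F3, lba & 0xFF);                   /* LBA low               */
    outb(0x1F4, (lba >> 8) & 0xFF);            /* LBA mid               */
    outb(0x1F5, (lba >> 16) & 0xFF);           /* LBA high              */
    outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));  /* drive select / LBA    */
    outb(0x1F7, 0x20);                         /* READ SECTORS command  */

    for (int i = 0; i < 256; i++)              /* 256 data-port reads   */
        buf[i] = inw(0x1F0);

    /* Roughly 262 traps for one 512-byte sector, before any status
     * polling on port 0x1F7 is counted. */
}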

> 
> Could someone here give me a balanced view on using VT-d for Xen
> guest domains?

The balanced view is that if a PV kernel is available, use PV. If there
is no PV kernel (and it's non-trivial to get one), then use HVM. The
latter is the case for Windows and other "closed-source" OS's, as well
as OS's for which the kernel patches supplied by XenSource aren't
available (Linux kernel version 2.4.x, for example).
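
For reference, the two cases look roughly like this in the guest config
files used by the xm tools (two separate files shown together here;
paths and names are only examples):

# PV guest: a Xen-aware kernel is booted directly by the hypervisor.
kernel  = "/boot/vmlinuz-2.6.16-xen"            # example path
ramdisk = "/boot/initrd-2.6.16-xen.img"
memory  = 256
name    = "pv-guest"
disk    = ['phy:/dev/vg0/pvguest,xvda,w']
root    = "/dev/xvda1"

# HVM guest: an unmodified OS boots via hvmloader and emulated devices.
kernel       = "/usr/lib/xen/boot/hvmloader"
builder      = 'hvm'
device_model = '/usr/lib/xen/bin/qemu-dm'
memory       = 512
name         = "hvm-guest"
disk         = ['file:/var/images/hvmguest.img,hda,w']
vnc          = 1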

--
Mats
> 
> Thanks,
> 
> Liang
> 
> 
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-users
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
