xen-devel

Re: [Xen-devel] [PATCH] ioemu: directly project all memory on x86_64

To: Samuel Thibault <samuel.thibault@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] ioemu: directly project all memory on x86_64
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Wed, 23 Jan 2008 17:54:40 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 23 Jan 2008 09:55:08 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080123163844.GL4252@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Achd6QagRTe43sncEdyZHgAX8io7RQ==
Thread-topic: [Xen-devel] [PATCH] ioemu: directly project all memory on x86_64
User-agent: Microsoft-Entourage/11.3.6.070618
On 23/1/08 16:38, "Samuel Thibault" <samuel.thibault@xxxxxxxxxxxxx> wrote:

> Keir Fraser, on Wed 23 Jan 2008 16:30:27 +0000, wrote:
>> I don't really want the memory-size parameter back. What if we support
>> memory hotplug in future (e.g., we could do now if the guest decides to
>> balloon in some memory higher up in its memory map)?
> 
> Well, the question holds for ia64 too, which already projects all
> memory.

Sure. I expect ia64 lags behind x86 in this respect.

By the way, I remember the crash caused by not tracking unmapped pages. At
the time Xen was not notifying qemu on increase_reservation, so qemu was
not refreshing its guest memory map and would crash when newly allocated
pages were the source or destination of I/O operations. It looks like this
is now doubly fixed: Xen invalidates the mapcache on both increase and
decrease reservation, and qemu-dm is able to lazily fault in new guest
mappings.
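
For concreteness, a minimal C sketch of that lazy fault-in behaviour. The
bucket size, table layout, and the map_guest_bucket()/unmap_guest_bucket()
helpers are hypothetical stand-ins for the real libxc mapping calls used
by qemu-dm, not the actual implementation:

#include <stdint.h>

#define BUCKET_SHIFT 20                  /* 1 MiB buckets; illustrative */
#define BUCKET_SIZE  (1UL << BUCKET_SHIFT)
#define NR_BUCKETS   1024                /* illustrative table size */

struct map_bucket {
    uint64_t index;   /* which guest bucket is mapped here */
    uint8_t *vaddr;   /* local mapping; NULL until faulted in */
};

static struct map_bucket mapcache[NR_BUCKETS];

/* Hypothetical stand-ins for the real libxc calls that map a bucket
 * of guest frames into qemu-dm's address space. */
extern uint8_t *map_guest_bucket(uint64_t index);
extern void unmap_guest_bucket(uint8_t *vaddr);

/* Translate a guest physical address, mapping its bucket on demand.
 * This is the lazy fault-in: a bucket dropped by invalidation is
 * simply remapped the next time an I/O operation touches it. */
uint8_t *mapcache_lookup(uint64_t guest_paddr)
{
    uint64_t index = guest_paddr >> BUCKET_SHIFT;
    struct map_bucket *b = &mapcache[index % NR_BUCKETS];

    if (b->vaddr == NULL || b->index != index) {
        if (b->vaddr != NULL)
            unmap_guest_bucket(b->vaddr);   /* evict the stale mapping */
        b->vaddr = map_guest_bucket(index); /* fault in the new one */
        b->index = index;
    }
    return b->vaddr + (guest_paddr & (BUCKET_SIZE - 1));
}

/* Called when Xen reports a reservation change (increase or
 * decrease): drop every mapping so the next access refaults. */
void mapcache_invalidate(void)
{
    for (unsigned int i = 0; i < NR_BUCKETS; i++) {
        if (mapcache[i].vaddr != NULL) {
            unmap_guest_bucket(mapcache[i].vaddr);
            mapcache[i].vaddr = NULL;
        }
    }
}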

>> Anyway I don't see what getting rid of the mapcache fixes.
> 
> Well, that was mostly to speed up memory lookup in the usual case; I
> don't really need it.

It's a good aim; I just think the optimisation should be done within the
context of the mapcache subsystem.
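
One way to get that speed-up while keeping the mapcache would be a
last-hit fast path in front of the bucket probe. Again a hedged sketch
with hypothetical names, building on the mapcache_lookup() sketch above:

/* Cache the last bucket hit so repeated accesses to the same region
 * skip the table probe. mapcache_invalidate() would also have to
 * reset last_index so a stale pointer is never reused. */
static uint64_t last_index = (uint64_t)-1;
static uint8_t *last_base;

uint8_t *mapcache_lookup_fast(uint64_t guest_paddr)
{
    uint64_t index  = guest_paddr >> BUCKET_SHIFT;
    uint64_t offset = guest_paddr & (BUCKET_SIZE - 1);

    if (index != last_index) {               /* miss: take the slow path */
        last_base  = mapcache_lookup(index << BUCKET_SHIFT);
        last_index = index;
    }
    return last_base + offset;               /* usual case: one compare */
}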

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel