This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Essay on an important Xen decision (long)

To: "Anthony Liguori" <aliguori@xxxxxxxxxx>, "Mark Williamson" <mark.williamson@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] Essay on an important Xen decision (long)
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 11 Jan 2006 17:20:34 -0000
Cc: "Magenheimer, Dan \(HP Labs Fort Collins\)" <dan.magenheimer@xxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 11 Jan 2006 17:27:05 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcYWzfhVdG6V6SaDQrKOMicJX4cJ0QAAwxwg
Thread-topic: [Xen-devel] Essay on an important Xen decision (long)
> Just to be thorough, was the shadow paging code a "pure" 
> shadow page table where every PTE write trapped to the 
> hypervisor, or were bulk PMD updates sent to the hypervisor?

All of Xen's pagetable options are able to do high-performance bulk
updates (though it's actually typically more important to optimize for
the demand-fault path).
There was some quite extensive benchmarking done ~9 months back, and
we're hoping to write it up and submit it somewhere. The algorithms have
evolved a bit since then, so we need to rerun things.

> I'm surprised there would be a measurable difference with 
> shadow paging as it should only require a potential 
> allocation (which could be fast
> pathed) and in the normal case, a couple extra reads/writes.  
> I would think that cost would be overshadowed by the original 
> cost of the context switch.

Hint: you need to propagate dirty and accessed bits back to the guest.

> Of course, I guess it wouldn't be that much of a shock to me 
> that the overhead is at least measurable...

It's certainly measurable, and certainly dominates the virtualization
overhead of some workloads.

