xen-devel

Re: [Xen-devel] [PATCH] improve x86 page table handling performance

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] improve x86 page table handling performance
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Tue, 27 Mar 2007 07:22:30 +0100
Delivery-date: Mon, 26 Mar 2007 23:42:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4607F9E7.76E4.0078.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4607F9E7.76E4.0078.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> "Jan Beulich" <jbeulich@xxxxxxxxxx> 26.03.07 16:50 >>>
>Where possible,
>- use hypercalls instead of writing to read-only pages
>- fold TLB flushes into page table update hypercalls
>- on PAE, use single-access updates instead of two-access ones
>
>The single change to pte_clear() yields a 25-30% boost for kernel builds
>on a 4x2x2-CPU, 8GB box; the other changes together yield improvements
>of 2-5%.
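
[Editorial note: a minimal, illustrative sketch of what "fold TLB flushes into
page table update hypercalls" looks like from the Linux guest side. It assumes
the Xen 3.x Linux hypercall wrapper HYPERVISOR_update_va_mapping() and the
UVMF_INVLPG flag; the helper names are made up for the example and are not
taken from the patch.]

/* Sketch only, not the patch itself. */
#include <asm/hypervisor.h>        /* HYPERVISOR_update_va_mapping() */
#include <asm/tlbflush.h>          /* __flush_tlb_one() */
#include <asm/pgtable.h>           /* pte_t, set_pte() */

/* Before: store to the PTE (trapped and emulated by Xen because the page
 * table page is kept read-only), then a separate single-page TLB flush --
 * two trips into the hypervisor. */
static void update_and_flush_old(unsigned long va, pte_t *ptep, pte_t val)
{
	set_pte(ptep, val);        /* emulated write fault */
	__flush_tlb_one(va);       /* second trip into the hypervisor */
}

/* After: one explicit hypercall both updates the PTE mapping 'va' and,
 * via UVMF_INVLPG, has Xen invalidate that single TLB entry as part of
 * the same operation. */
static void update_and_flush_new(unsigned long va, pte_t val)
{
	if (HYPERVISOR_update_va_mapping(va, val, UVMF_INVLPG))
		BUG();
}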

I should clarify that this is for the PAE case only (i.e. it results from
folding two page faults into a single hypercall, which is used in the
kmap path).
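
[Editorial note: for context, a rough sketch of what that PAE-only
pte_clear() change amounts to, again assuming the
HYPERVISOR_update_va_mapping() wrapper; the helper names are made up for
illustration and are not the patch's.]

#include <asm/hypervisor.h>        /* HYPERVISOR_update_va_mapping() */
#include <asm/pgtable.h>           /* pte_t, __pte() */
#include <asm/system.h>            /* smp_wmb() */

/* On native i386 PAE a pte_t is two 32-bit words, so clearing it takes
 * two stores; with Xen keeping page tables read-only, each store becomes
 * a trapped-and-emulated page fault. */
static void pae_pte_clear_old(pte_t *ptep)
{
	ptep->pte_low = 0;         /* clears the present bit; first emulated fault */
	smp_wmb();
	ptep->pte_high = 0;        /* second emulated fault */
}

/* Replacing the two stores with one explicit hypercall (workable in the
 * kmap path, where the virtual address 'va' mapped by the PTE is known)
 * turns the two faults into a single hypercall -- the change the quoted
 * mail credits with the 25-30% kernel-build improvement. */
static void pae_pte_clear_new(unsigned long va)
{
	if (HYPERVISOR_update_va_mapping(va, __pte(0), 0))
		BUG();
}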

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
