WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] RFC: Superpage/hugepage performance improvement

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: [Xen-devel] RFC: Superpage/hugepage performance improvement
From: Dave McCracken <dcm@xxxxxxxx>
Date: Mon, 5 Apr 2010 12:52:29 -0500
Cc: Xen Developers List <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Mon, 05 Apr 2010 10:53:11 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.12.4 (Linux/2.6.32; KDE/4.3.4; x86_64; ; )
In our testing we found that the superpage/hugepage mapping code is seriously 
bogged down by the need to maintain the reference count on each of the 
underlying pages every time a hugepage is mapped.  I came up with a fix where a 
guest can call into the hypervisor to mark a set of pages as a superpage, thus 
locking that set of pages to be read/write data pages until the corresponding 
unmark call is made.  To make this work I added two mmuext ops, one to mark 
a superpage and one to unmark it.  This change makes a huge performance 
difference in the hugepage mapping (on the order of 50 times faster).

On the Linux side, the hugepages are marked at the time they are put into the 
hugepage pool, and unmarked when they are taken out of the pool.  In practice 
pages enter and leave the pool very infrequently.

Does this mechanism sound reasonable to you all?  If so, I'd like to make sure 
the numbers we use for the new mmuext ops are reserved upstream so we won't 
have to change them in the future.

I will port the actual patch forward to mainline shortly and send it off, but I 
wanted to get an early indication of how you feel about the design.

Thanks,
Dave McCracken
Oracle Corp.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
