
Re: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB pages


  • To: Keshav Darak <keshav_darak@xxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Mon, 21 Mar 2011 21:31:03 +0000
  • Cc: jeremy@xxxxxxxx
  • Delivery-date: Mon, 21 Mar 2011 14:32:15 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcvoD0diO3WG54L4i0CpF+ItZVdBaQ==
  • Thread-topic: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB pages

Keshav,

There is already optional support for superpage allocations and mappings for
PV guests in the hypervisor and toolstack. See the opt_allow_superpages
boolean flag in the hypervisor, and the 'superpages' domain config option
that can be specified when creating a new domain via xend/xm.
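For reference, a minimal illustrative domain config using that option (the kernel path, memory size, and name below are placeholders; consult the xm documentation of the corresponding Xen release for the exact semantics):

```
# domU.cfg -- xm/xend domain configuration (placeholder values)
kernel     = "/boot/vmlinuz-2.6-xen"
memory     = 1024
name       = "pv-superpages"
superpages = 1        # back the guest's memory with 2MB superpages
```

The hypervisor-side gate Keir refers to (the opt_allow_superpages flag) is controlled from the Xen command line; superpage allocation fails back to 4KB pages when it is disabled.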

 -- Keir

On 21/03/2011 21:01, "Keshav Darak" <keshav_darak@xxxxxxxxx> wrote:

> I have corrected a few mistakes in the previously attached Xen patch file.
> Please review it.
> 
> --- On Sun, 3/20/11, Keshav Darak <keshav_darak@xxxxxxxxx> wrote:
>> 
>> From: Keshav Darak <keshav_darak@xxxxxxxxx>
>> Subject: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB
>> pages
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: jeremy@xxxxxxxx, keir@xxxxxxx
>> Date: Sunday, March 20, 2011, 10:34 PM
>> 
>> We have implemented hugepage support for guests in the following manner.
>> 
>> In our implementation we added a parameter, hugepage_num, which is specified
>> in the config file of the DomU. It is the number of hugepages the guest is
>> guaranteed to receive whenever the kernel asks for hugepages, either via its
>> boot-time parameter or by reserving them after boot (e.g. echo XX >
>> /proc/sys/vm/nr_hugepages). During creation of the domain we reserve MFNs
>> for these hugepages and store them in a list whose listhead, named
>> "hugepage_list", is inside the domain structure. While the domain is
>> booting, the memory seen by the kernel is the allocated memory less the
>> amount required for the hugepages. The function reserve_hugepage_range is
>> called as an initcall; before it runs, xen_extra_mem_start points to this
>> apparent end of memory. In this function we reserve the PFN range for the
>> hugepages that the kernel will allocate, by incrementing
>> xen_extra_mem_start, and we maintain these PFNs as pages on
>> "xen_hugepfn_list" in the kernel.
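The per-domain bookkeeping described above can be modelled in plain C. This is an illustrative sketch, not the actual patch or Xen source: the names hugepage_num and hugepage_list come from the email, but the struct layout and the allocator handing out 2MB-aligned MFN runs are hypothetical.

```c
/* Illustrative model of the patch's bookkeeping: at domain creation,
 * hugepage_num 2MB frames are reserved and chained on a per-domain
 * "hugepage_list"; the guest's visible memory is reduced accordingly. */
#include <stdlib.h>

#define HUGEPAGE_NR_4K 512              /* 4KB pages per 2MB hugepage */

struct hugepage {                        /* stand-in for a reserved MFN run */
    unsigned long base_mfn;
    struct hugepage *next;
};

struct domain {
    unsigned int hugepage_num;           /* config parameter from the patch */
    struct hugepage *hugepage_list;      /* listhead named as in the email */
};

/* Reserve 'num' hugepages at domain-creation time, taking 2MB-aligned
 * MFN runs starting at first_mfn (a pretend allocator for this sketch). */
static int reserve_domain_hugepages(struct domain *d, unsigned int num,
                                    unsigned long first_mfn)
{
    unsigned int i;

    d->hugepage_num = num;
    for (i = 0; i < num; i++) {
        struct hugepage *hp = malloc(sizeof(*hp));
        if (!hp)
            return -1;
        hp->base_mfn = first_mfn + (unsigned long)i * HUGEPAGE_NR_4K;
        hp->next = d->hugepage_list;     /* push onto hugepage_list */
        d->hugepage_list = hp;
    }
    return 0;
}
```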
>> 
>> Before the kernel requests hugepages, it makes a HYPERVISOR_memory_op
>> hypercall to get the count of hugepages allocated to it and reserves the
>> PFN range accordingly. Then, whenever the kernel requests a hugepage, it
>> makes another HYPERVISOR_memory_op hypercall to get a preallocated hugepage
>> and accordingly makes the p2m mapping on both sides (Xen as well as the
>> kernel side).
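The two-step protocol described there (query the count, then claim pages one by one and fill in the p2m) can be sketched as a self-contained C model. This is not the real Xen ABI: the "hypercall" is an ordinary function, and the XENMEM_* command names are hypothetical stand-ins for whatever subcommands the patch adds to HYPERVISOR_memory_op.

```c
/* Self-contained model of the guest/hypervisor protocol described above. */
#define HUGEPAGE_NR_4K 512               /* 4KB frames per 2MB hugepage */

enum { XENMEM_hugepage_count, XENMEM_claim_hugepage };  /* hypothetical */

static unsigned int preallocated = 2;            /* set at domain creation */
static unsigned long next_mfn = 0x200000 >> 12;  /* first reserved MFN */

/* Stand-in for HYPERVISOR_memory_op: returns the remaining hugepage
 * count, or the base MFN of the next preallocated hugepage (-1 if none). */
static long memory_op(int cmd)
{
    switch (cmd) {
    case XENMEM_hugepage_count:
        return preallocated;
    case XENMEM_claim_hugepage:
        if (!preallocated)
            return -1;
        preallocated--;
        long mfn = (long)next_mfn;
        next_mfn += HUGEPAGE_NR_4K;
        return mfn;
    }
    return -1;
}

/* Guest side: claim one hugepage backing guest frames [pfn, pfn+512)
 * and install the corresponding p2m entries. */
static int map_one_hugepage(unsigned long *p2m, unsigned long pfn)
{
    long base_mfn = memory_op(XENMEM_claim_hugepage);
    if (base_mfn < 0)
        return -1;
    for (unsigned long i = 0; i < HUGEPAGE_NR_4K; i++)
        p2m[pfn + i] = (unsigned long)base_mfn + i;
    return 0;
}
```

In the real patch the kernel-side p2m update would go through the usual set_phys_to_machine path and Xen would track the mapping in its own p2m/m2p tables; the array here just makes the idea concrete.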
>> 
>> The approach is explained in more detail in the attached presentation.
>> 
>> --
>> Keshav Darak
>> Kaustubh Kabra
>> Ashwin Vasani 
>> Aditya Gadre
>> 
>>  
>> 
>> -----Inline Attachment Follows-----
>> 
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
> 
> 





 

