
Re: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB pages


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx, Keir Fraser <keir.xen@xxxxxxxxx>
  • From: Keshav Darak <keshav_darak@xxxxxxxxx>
  • Date: Tue, 22 Mar 2011 05:36:33 -0700 (PDT)
  • Cc: jeremy@xxxxxxxx
  • Delivery-date: Tue, 22 Mar 2011 05:37:47 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Keir,
    We are aware of it, and our implementation also has to use the
'opt_allow_superpages' boolean flag. However, when the 'superpages' flag is
used in the domain configuration file, the entire domain boots on hugepages
(superpages). If the memory required to back the domain entirely with
hugepages is not available, the domain does not boot.
    In our implementation, by contrast, we give the domain only as many
hugepages as it actually requires (using the "hugepage_num" option in the
config file), so the entire domain need not be booted on hugepages.
    The goal is to support domains that boot with 4 KB pages and can still
use hugepages, which greatly reduces the number of hugepages a domain needs
just to boot.
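
For comparison, a DomU config fragment using the proposed option might look
like the sketch below (the option name is the one added by our patch; the
values are arbitrary examples):

    # Boot the domain normally on 4 KB pages, but guarantee it 8 hugepages
    # (2 MB each on x86-64) that the kernel can claim later, e.g. via
    #   echo 8 > /proc/sys/vm/nr_hugepages
    memory       = 1024
    hugepage_num = 8
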
--- On Mon, 3/21/11, Keir Fraser <keir.xen@xxxxxxxxx> wrote:

From: Keir Fraser <keir.xen@xxxxxxxxx>
Subject: Re: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB pages
To: "Keshav Darak" <keshav_darak@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Cc: jeremy@xxxxxxxx
Date: Monday, March 21, 2011, 9:31 PM

Keshav,

There is already optional support for superpage allocations and mappings for
PV guests in the hypervisor and toolstack. See the opt_allow_superpages
boolean flag in the hypervisor, and the 'superpages' domain config option
that can be specified when creating a new domain via xend/xm.
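
For reference, enabling that existing path looks roughly like the sketch
below; the hypervisor command-line option name is quoted from memory and
should be checked against the Xen version in use:

    # Xen command line: allow superpage mappings for PV guests
    # (this is what sets the opt_allow_superpages flag)
    xen.gz ... allowsuperpage

    # DomU config for xm/xend: back the whole domain with superpages
    superpages = 1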

-- Keir

On 21/03/2011 21:01, "Keshav Darak" <keshav_darak@xxxxxxxxx> wrote:

> I have corrected a few mistakes in the previously attached Xen patch file.
> Please review it.
>
> --- On Sun, 3/20/11, Keshav Darak <keshav_darak@xxxxxxxxx> wrote:
>>
>> From: Keshav Darak <keshav_darak@xxxxxxxxx>
>> Subject: [Xen-devel] PATCH: Hugepage support for Domains booting with 4KB
>> pages
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: jeremy@xxxxxxxx, keir@xxxxxxx
>> Date: Sunday, March 20, 2011, 10:34 PM
>>
>> We have implemented hugepage support for guests in the following manner.
>>
>> In our implementation we added a parameter, hugepage_num, which is specified
>> in the config file of the DomU. It is the number of hugepages that the guest
>> is guaranteed to receive whenever the kernel asks for hugepages, either via
>> its boot-time parameter or by reserving them after booting (e.g. using
>> echo XX > /proc/sys/vm/nr_hugepages). During creation of the domain we
>> reserve MFNs for these hugepages and store them in a list whose list head,
>> named "hugepage_list", is inside the domain structure. When the domain
>> boots, the memory seen by the kernel is the allocated memory less the
>> amount required for hugepages. The function reserve_hugepage_range is
>> called as an initcall; before it runs, xen_extra_mem_start points to this
>> apparent end of memory. In this function we reserve the PFN range for the
>> hugepages that the kernel will later allocate, by incrementing
>> xen_extra_mem_start, and we maintain these PFNs as pages in
>> "xen_hugepfn_list" in the kernel.
>>
>> Before the kernel first requests hugepages, it makes a HYPERVISOR_memory_op
>> hypercall to get the count of hugepages allocated to it and reserves the
>> PFN range accordingly. Then, whenever the kernel requests a hugepage, it
>> makes another HYPERVISOR_memory_op hypercall to obtain a preallocated
>> hugepage and makes the corresponding p2m mapping on both sides (Xen as
>> well as the kernel).
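>>
>> A minimal sketch of those two hypercalls, from the guest side, might look
>> like this. HYPERVISOR_memory_op is the standard Xen memory hypercall, but
>> the two XENMEM_* sub-command names here are hypothetical stand-ins for the
>> ones the patch actually defines:
>>
>>     #include <xen/interface/memory.h>
>>     #include <asm/xen/hypercall.h>
>>
>>     /* Step 1: how many hugepages were preallocated for this domain? */
>>     static unsigned long xen_hugepage_count(void)
>>     {
>>         return HYPERVISOR_memory_op(XENMEM_hugepage_count /* hypothetical */,
>>                                     NULL);
>>     }
>>
>>     /* Step 2: back the guest PFN 'gpfn' with one preallocated hugepage MFN;
>>      * the p2m update on the kernel side would follow on success. */
>>     static int xen_get_hugepage(unsigned long gpfn)
>>     {
>>         xen_pfn_t frame = gpfn;
>>         struct xen_memory_reservation r = {
>>             .domid        = DOMID_SELF,
>>             .nr_extents   = 1,
>>             .extent_order = HPAGE_SHIFT - PAGE_SHIFT,   /* 9 => 2 MB */
>>         };
>>
>>         set_xen_guest_handle(r.extent_start, &frame);
>>         return HYPERVISOR_memory_op(XENMEM_get_hugepage /* hypothetical */,
>>                                     &r);
>>     }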
>>
>> The approach can be better explained using the presentation attached.
>>
>> --
>> Keshav Darak
>> Kaustubh Kabra
>> Ashwin Vasani
>> Aditya Gadre
>>
>> 
>>
>> -----Inline Attachment Follows-----
>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

