Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page

To: Wei Huang <wei.huang2@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH][3/4] Enable 1GB for Xen HVM host page
From: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Date: Tue, 23 Feb 2010 10:07:24 +0000
Cc: Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>, "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4B82BC78.7080907@xxxxxxx>
References: <4B82BC78.7080907@xxxxxxx>
User-agent: Mutt/1.5.18 (2008-05-17)
At 17:18 +0000 on 22 Feb (1266859128), Wei Huang wrote:
> This patch changes the P2M code to work with 1GB pages.
> 
> Signed-off-by: Wei Huang <wei.huang2@xxxxxxx>
> Acked-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>

 
> @@ -1064,6 +1093,19 @@
>      if ( unlikely(d->is_dying) )
>          goto out_fail;
>  
> +    /* Because PoD does not have a cache list for 1GB pages, it has to remap
> +     * the 1GB region to 2MB chunks for a retry. */
> +    if ( order == 18 )
> +    {
> +        gfn_aligned = (gfn >> order) << order;
> +        for( i = 0; i < (1 << order); i += (1 << 9) )
> +            set_p2m_entry(d, gfn_aligned + i, _mfn(POPULATE_ON_DEMAND_MFN),
> +                          9, p2m_populate_on_demand);

I think you only need one set_p2m_entry call here - it will split the
1GB entry without needing another 511 calls.
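
Roughly this (an untested sketch of the same hunk, assuming the split path
repopulates the remaining 2MB slots as PoD entries, as suggested above):

    if ( order == 18 )
    {
        gfn_aligned = (gfn >> order) << order;
        /* A single order-9 write shatters the 1GB PoD entry; the split
         * leaves the other 511 2MB slots populated as PoD as well. */
        set_p2m_entry(d, gfn_aligned, _mfn(POPULATE_ON_DEMAND_MFN),
                      9, p2m_populate_on_demand);
        audit_p2m(d);
        p2m_unlock(p2md);
        return 0;
    }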

Was the decision not to implement populate-on-demand for 1GB pages based
on not thinking it's a good idea or not wanting to do the work? :)
How much performance do PoD guests lose by not having it?

> +        audit_p2m(d);
> +        p2m_unlock(p2md);
> +        return 0;
> +    }
> +
>      /* If we're low, start a sweep */
>      if ( order == 9 && page_list_empty(&p2md->pod.super) )
>          p2m_pod_emergency_sweep_super(d);
> @@ -1196,6 +1238,7 @@
>      l1_pgentry_t *p2m_entry;
>      l1_pgentry_t entry_content;
>      l2_pgentry_t l2e_content;
> +    l3_pgentry_t l3e_content;
>      int rv=0;
>  
>      if ( tb_init_done )
> @@ -1222,18 +1265,44 @@
>          goto out;
>  #endif
>      /*
> +     * Try to allocate 1GB page table if this feature is supported.
> +     *
>       * When using PAE Xen, we only allow 33 bits of pseudo-physical
>       * address in translated guests (i.e. 8 GBytes).  This restriction
>       * comes from wanting to map the P2M table into the 16MB RO_MPT hole
>       * in Xen's address space for translated PV guests.
>       * When using AMD's NPT on PAE Xen, we are restricted to 4GB.
>       */

Please move this comment closer to the code it describes.  

Also maybe a BUG_ON(CONFIG_PAGING_LEVELS == 3) in the order-18 case
would be useful, since otherwise it looks like order-18 allocations are
exempt from the restriction.
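
Something along these lines in the 1GB branch of the second hunk (untested
sketch; I'm writing "order" for whatever the order variable is called in
that function):

    if ( order == 18 )
    {
        /* 1GB p2m entries need 4-level paging; the PAE restriction
         * described above means a 3-level build should never get here. */
        BUG_ON(CONFIG_PAGING_LEVELS == 3);
        /* ... build and write the l3e ... */
    }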

Actually, I don't see where you enforce that - do you?

Tim.


-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
