[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-devel] Re: [Linux PATCH] Make hugepages work in current git tree



On 04/28/2010 07:08 AM, Dave McCracken wrote:
> Somewhere in the move to the paravirt way of doing things, hugepages
> stopped working.  This patch fixes hugepages.
>   

Looks reasonable.  I rewrote the commit comment:

Subject: [PATCH] x86/hugetlb: use set_pmd for huge pte operations

On x86, a huge pte is logically a pte, but structurally a pmd.  Among
other issues, pmds and ptes overload some flags for multiple uses (PAT
vs PSE), so it is necessary to know at which structural level a
pagetable entry sits in order to interpret it properly.

When huge pages are used within a paravirtualized system, it is therefore
appropriate to use the pmd set of functions to operate on them, so that
the hypervisor can correctly validate the update.

Signed-off-by: Dave McCracken <dave.mccracken@xxxxxxxxxx>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>

Does this look correct?

    J

> Signed-off-by: Dave McCracken <dave.mccracken@xxxxxxxxxx>
>
> --------
>
> --- 2.6-xen/arch/x86/include/asm/hugetlb.h    2009-10-29 17:48:21.000000000 -0500
> +++ 2.6-xen-huge/arch/x86/include/asm/hugetlb.h       2010-04-21 09:50:40.000000000 -0500
> @@ -36,16 +36,24 @@ static inline void hugetlb_free_pgd_rang
>       free_pgd_range(tlb, addr, end, floor, ceiling);
>  }
>  
> +static inline pte_t huge_ptep_get(pte_t *ptep)
> +{
> +     return *ptep;
> +}
> +
>  static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
>                                  pte_t *ptep, pte_t pte)
>  {
> -     set_pte_at(mm, addr, ptep, pte);
> +     set_pmd((pmd_t *)ptep, __pmd(pte_val(pte)));
>  }
>  
>  static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
>                                           unsigned long addr, pte_t *ptep)
>  {
> -     return ptep_get_and_clear(mm, addr, ptep);
> +     pte_t pte = huge_ptep_get(ptep);
> +
> +     set_huge_pte_at(mm, addr, ptep, __pte(0));
> +     return pte;
>  }
>  
>  static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
> @@ -66,19 +74,25 @@ static inline pte_t huge_pte_wrprotect(p
>  static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
>                                          unsigned long addr, pte_t *ptep)
>  {
> -     ptep_set_wrprotect(mm, addr, ptep);
> +     pte_t pte = huge_ptep_get(ptep);
> +
> +     pte = pte_wrprotect(pte);
> +     set_huge_pte_at(mm, addr, ptep, pte);
>  }
>  
>  static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
>                                            unsigned long addr, pte_t *ptep,
>                                            pte_t pte, int dirty)
>  {
> -     return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
> -}
> +     pte_t oldpte = huge_ptep_get(ptep);
> +     int changed = !pte_same(oldpte, pte);
>  
> -static inline pte_t huge_ptep_get(pte_t *ptep)
> -{
> -     return *ptep;
> +     if (changed && dirty) {
> +             set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
> +             flush_tlb_page(vma, addr);
> +     }
> +
> +     return changed;
>  }
>  
>  static inline int arch_prepare_hugepage(struct page *page)
>
>   


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

