Re: [PATCH v3 10/11] mm: introduce and use vm_normal_page_pud()
On Mon, Aug 11, 2025 at 01:26:30PM +0200, David Hildenbrand wrote:
> Let's introduce vm_normal_page_pud(), which ends up being fairly simple
> because of our new common helpers and there not being a PUD-sized zero
> folio.
>
> Use vm_normal_page_pud() in folio_walk_start() to resolve a TODO,
> structuring the code like the other (pmd/pte) cases. Defer
> introducing vm_normal_folio_pud() until really used.
>
> Note that we can so far get PUDs with hugetlb, daxfs and PFNMAP entries.

I guess hugetlb will be handled in a separate way, daxfs will be...
special, I think? and PFNMAP definitely is.

>
> Reviewed-by: Wei Yang <richard.weiyang@xxxxxxxxx>
> Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>

Anyway this is nice, thanks! Nice to resolve the TODO :)

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@xxxxxxxxxx>

> ---
>  include/linux/mm.h |  2 ++
>  mm/memory.c        | 19 +++++++++++++++++++
>  mm/pagewalk.c      | 20 ++++++++++----------
>  3 files changed, 31 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index b626d1bacef52..8ca7d2fa71343 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2360,6 +2360,8 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
>  		unsigned long addr, pmd_t pmd);
>  struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>  		pmd_t pmd);
> +struct page *vm_normal_page_pud(struct vm_area_struct *vma, unsigned long addr,
> +		pud_t pud);
>
>  void zap_vma_ptes(struct vm_area_struct *vma, unsigned long address,
>  		unsigned long size);
> diff --git a/mm/memory.c b/mm/memory.c
> index 78af3f243cee7..6f806bf3cc994 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -809,6 +809,25 @@ struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
>  		return page_folio(page);
>  	return NULL;
>  }
> +
> +/**
> + * vm_normal_page_pud() - Get the "struct page" associated with a PUD
> + * @vma: The VMA mapping the @pud.
> + * @addr: The address where the @pud is mapped.
> + * @pud: The PUD.
> + *
> + * Get the "struct page" associated with a PUD. See __vm_normal_page()
> + * for details on "normal" and "special" mappings.
> + *
> + * Return: Returns the "struct page" if this is a "normal" mapping. Returns
> + *         NULL if this is a "special" mapping.
> + */
> +struct page *vm_normal_page_pud(struct vm_area_struct *vma,
> +		unsigned long addr, pud_t pud)
> +{
> +	return __vm_normal_page(vma, addr, pud_pfn(pud), pud_special(pud),
> +				pud_val(pud), PGTABLE_LEVEL_PUD);
> +}
>  #endif
>
>  /**
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 648038247a8d2..c6753d370ff4e 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -902,23 +902,23 @@ struct folio *folio_walk_start(struct folio_walk *fw,
>  		fw->pudp = pudp;
>  		fw->pud = pud;
>
> -		/*
> -		 * TODO: FW_MIGRATION support for PUD migration entries
> -		 * once there are relevant users.
> -		 */
> -		if (!pud_present(pud) || pud_special(pud)) {
> +		if (pud_none(pud)) {
>  			spin_unlock(ptl);
>  			goto not_found;
> -		} else if (!pud_leaf(pud)) {
> +		} else if (pud_present(pud) && !pud_leaf(pud)) {
>  			spin_unlock(ptl);
>  			goto pmd_table;
> +		} else if (pud_present(pud)) {
> +			page = vm_normal_page_pud(vma, addr, pud);
> +			if (page)
> +				goto found;
>  		}
>  		/*
> -		 * TODO: vm_normal_page_pud() will be handy once we want to
> -		 * support PUD mappings in VM_PFNMAP|VM_MIXEDMAP VMAs.
> +		 * TODO: FW_MIGRATION support for PUD migration entries
> +		 * once there are relevant users.
>  		 */
> -		page = pud_page(pud);
> -		goto found;
> +		spin_unlock(ptl);
> +		goto not_found;
>  	}
>
>  pmd_table:
> --
> 2.50.1
>
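P.S. For anyone reading along in the archive: folding the pagewalk.c hunks
together, the PUD branch of folio_walk_start() ends up reading roughly as
follows with the patch applied. This is reconstructed from the hunks above
(with one orientation comment added), so treat it as a sketch of the result
rather than the authoritative tree state:

		fw->pudp = pudp;
		fw->pud = pud;

		if (pud_none(pud)) {
			spin_unlock(ptl);
			goto not_found;
		} else if (pud_present(pud) && !pud_leaf(pud)) {
			spin_unlock(ptl);
			goto pmd_table;
		} else if (pud_present(pud)) {
			/* Present PUD leaf: only "normal" mappings have a page. */
			page = vm_normal_page_pud(vma, addr, pud);
			if (page)
				goto found;
		}
		/*
		 * TODO: FW_MIGRATION support for PUD migration entries
		 * once there are relevant users.
		 */
		spin_unlock(ptl);
		goto not_found;

Note how "special" entries now simply fall through to not_found together
with the non-present cases (e.g. migration entries, per the TODO), instead
of being filtered up front.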
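And a purely hypothetical sketch of how some other walker might use the new
helper; my_pud_entry() and its calling convention are invented for
illustration, loosely modelled on the pmd/pte cases, and it assumes the
caller serializes against the page table as folio_walk_start() does:

/*
 * Hypothetical sketch, not part of the patch: a walker-style PUD handler
 * that uses vm_normal_page_pud() to skip "special" mappings (e.g. under
 * VM_PFNMAP) that have no struct page backing. Assumes the caller holds
 * the appropriate PUD lock.
 */
static int my_pud_entry(pud_t *pudp, unsigned long addr,
			struct vm_area_struct *vma)
{
	pud_t pud = pudp_get(pudp);
	struct page *page;

	/* Nothing mapped at this level, or a lower page table: skip. */
	if (!pud_present(pud) || !pud_leaf(pud))
		return 0;

	page = vm_normal_page_pud(vma, addr, pud);
	if (!page)
		return 0;	/* "special" mapping: no struct page to touch */

	/* page is the first struct page of the PUD-sized leaf mapping. */
	pr_info("normal PUD leaf at %#lx -> pfn %#lx\n",
		addr, page_to_pfn(page));
	return 0;
}

The nice part is that callers get to treat special PUD mappings uniformly
with the pmd/pte levels instead of open-coding pud_special() checks.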