
[PATCH v1 3/9] mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd()



Just like we do for vmf_insert_page_mkwrite() -> ... ->
insert_page_into_pte_locked() with the shared zeropage, support the
huge zero folio in vmf_insert_folio_pmd().

When (un)mapping the huge zero folio in page tables, we neither
adjust the refcount nor the mapcount, just like for the shared zeropage.

The huge zero folio is not marked as special yet, although
vm_normal_page_pmd() really wants to treat it as special. We'll change
that next.
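
As a usage sketch (not part of this patch), a read fault handler could
hand out the huge zero folio through the updated helper roughly like
below; my_dev_huge_fault() is a made-up name for illustration, while
mm_get_huge_zero_folio() and vmf_insert_folio_pmd() are the existing
helpers:

static vm_fault_t my_dev_huge_fault(struct vm_fault *vmf)
{
        struct folio *zero_folio;

        /* Writes need real memory; let the caller fall back. */
        if (vmf->flags & FAULT_FLAG_WRITE)
                return VM_FAULT_FALLBACK;

        /* Takes the per-mm reference on the huge zero folio. */
        zero_folio = mm_get_huge_zero_folio(vmf->vma->vm_mm);
        if (!zero_folio)
                return VM_FAULT_FALLBACK;

        /*
         * Map it read-only; with this patch, insert_pmd() skips the
         * refcount/mapcount adjustments for the huge zero folio.
         */
        return vmf_insert_folio_pmd(vmf, zero_folio, /* write */ false);
}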

Reviewed-by: Oscar Salvador <osalvador@xxxxxxx>
Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
---
 mm/huge_memory.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1c4a42413042a..9ec7f48efde09 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1429,9 +1429,11 @@ static vm_fault_t insert_pmd(struct vm_area_struct *vma, unsigned long addr,
        if (fop.is_folio) {
                entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
 
-               folio_get(fop.folio);
-               folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
-               add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
+               if (!is_huge_zero_folio(fop.folio)) {
+                       folio_get(fop.folio);
+                       folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
+                       add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
+               }
        } else {
                entry = pmd_mkhuge(pfn_pmd(fop.pfn, prot));
                entry = pmd_mkspecial(entry);
-- 
2.50.1