[Xen-devel] [PATCH v3 41/41] mm/ksm: convert put_page() to put_user_page*()
From: John Hubbard <jhubbard@xxxxxxxxxx>

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
Cc: Daniel Black <daniel@xxxxxxxxxxxxx>
Cc: Jan Kara <jack@xxxxxxx>
Cc: Jérôme Glisse <jglisse@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
Signed-off-by: John Hubbard <jhubbard@xxxxxxxxxx>
---
 mm/ksm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 3dc4346411e4..e10ee4d5fdd8 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -456,7 +456,7 @@ static inline bool ksm_test_exit(struct mm_struct *mm)
  * We use break_ksm to break COW on a ksm page: it's a stripped down
  *
  *	if (get_user_pages(addr, 1, 1, 1, &page, NULL) == 1)
- *		put_page(page);
+ *		put_user_page(page);
  *
  * but taking great care only to touch a ksm page, in a VM_MERGEABLE vma,
  * in case the application has unmapped and remapped mm,addr meanwhile.
@@ -483,7 +483,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr)
 					FAULT_FLAG_WRITE | FAULT_FLAG_REMOTE);
 		else
 			ret = VM_FAULT_WRITE;
-		put_page(page);
+		put_user_page(page);
 	} while (!(ret & (VM_FAULT_WRITE | VM_FAULT_SIGBUS | VM_FAULT_SIGSEGV | VM_FAULT_OOM)));
 	/*
 	 * We must loop because handle_mm_fault() may back out if there's
@@ -568,7 +568,7 @@ static struct page *get_mergeable_page(struct rmap_item *rmap_item)
 		flush_anon_page(vma, page, addr);
 		flush_dcache_page(page);
 	} else {
-		put_page(page);
+		put_user_page(page);
 out:
 		page = NULL;
 	}
@@ -1974,10 +1974,10 @@ struct rmap_item *unstable_tree_search_insert(struct rmap_item *rmap_item,
 
 		parent = *new;
 		if (ret < 0) {
-			put_page(tree_page);
+			put_user_page(tree_page);
 			new = &parent->rb_left;
 		} else if (ret > 0) {
-			put_page(tree_page);
+			put_user_page(tree_page);
 			new = &parent->rb_right;
 		} else if (!ksm_merge_across_nodes &&
 			   page_to_nid(tree_page) != nid) {
@@ -1986,7 +1986,7 @@ struct rmap_item *unstable_tree_search_insert(struct rmap_item *rmap_item,
 			 * it will be flushed out and put in the right unstable
 			 * tree next time: only merge with it when across_nodes.
 			 */
-			put_page(tree_page);
+			put_user_page(tree_page);
 			return NULL;
 		} else {
 			*tree_pagep = tree_page;
@@ -2328,7 +2328,7 @@ static struct rmap_item *scan_get_next_rmap_item(struct page **page)
 						&rmap_item->rmap_list;
 				ksm_scan.address += PAGE_SIZE;
 			} else
-				put_page(*page);
+				put_user_page(*page);
 			up_read(&mm->mmap_sem);
 			return rmap_item;
 		}
--
2.22.0
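For context, the sketch below (not part of this patch; the helper name, buffer
handling, and error paths are illustrative assumptions) shows the general
calling pattern the tree-wide conversion targets: pages pinned with
get_user_pages() are released with put_user_pages()/put_user_page() from
commit fc1d8e7cca2d, rather than with put_page().

#include <linux/mm.h>
#include <linux/slab.h>

/*
 * Hypothetical helper, for illustration only. The caller is assumed to
 * hold mmap_sem for read, as get_user_pages() required at this point in
 * the series.
 */
static int example_pin_and_release(unsigned long addr, unsigned long nr_pages)
{
	struct page **pages;
	long pinned;

	pages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	pinned = get_user_pages(addr, nr_pages, FOLL_WRITE, pages, NULL);
	if (pinned > 0) {
		/* ... operate on the pinned pages ... */

		/*
		 * Old style (pre-conversion):
		 *	while (pinned--)
		 *		put_page(pages[pinned]);
		 * New style, so GUP-pinned pages are released through a
		 * dedicated path:
		 */
		put_user_pages(pages, pinned);
	}

	kfree(pages);
	return pinned < 0 ? pinned : 0;
}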