Re: [Xen-devel] [PATCH v3 2/9] mm: Place unscrubbed pages at the end of pagelist
On 14/04/17 16:37, Boris Ostrovsky wrote:
> ... so that it's easy to find pages that need to be scrubbed (those pages are
> now marked with _PGC_need_scrub bit).
>
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> ---
> Changes in v3:
> * Keep dirty bit per page, add dirty_head to page_info that indicates whether
> the buddy has dirty pages.
> * Make page_list_add_scrub() set buddy's page order
> * Data type adjustments (int -> unsigned)
>
> xen/common/page_alloc.c | 119 +++++++++++++++++++++++++++++++++++++--------
> xen/include/asm-arm/mm.h | 6 ++
> xen/include/asm-x86/mm.h | 6 ++
> 3 files changed, 110 insertions(+), 21 deletions(-)
>
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 6fe55ee..9dcf6ee 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -383,6 +383,8 @@ typedef struct page_list_head
> heap_by_zone_and_order_t[NR_ZONES][MAX_ORDER+1];
> static heap_by_zone_and_order_t *_heap[MAX_NUMNODES];
> #define heap(node, zone, order) ((*_heap[node])[zone][order])
>
> +static unsigned long node_need_scrub[MAX_NUMNODES];
> +
> static unsigned long *avail[MAX_NUMNODES];
> static long total_avail_pages;
>
> @@ -678,6 +680,20 @@ static void check_low_mem_virq(void)
> }
> }
>
> +/* Pages that need scrub are added to tail, otherwise to head. */
> +static void page_list_add_scrub(struct page_info *pg, unsigned int node,
> + unsigned int zone, unsigned int order,
> + bool need_scrub)
> +{
> + PFN_ORDER(pg) = order;
> + pg->u.free.dirty_head = need_scrub;
> +
> + if ( need_scrub )
> + page_list_add_tail(pg, &heap(node, zone, order));
> + else
> + page_list_add(pg, &heap(node, zone, order));
> +}
> +
> /* Allocate 2^@order contiguous pages. */
> static struct page_info *alloc_heap_pages(
> unsigned int zone_lo, unsigned int zone_hi,
> @@ -802,7 +818,7 @@ static struct page_info *alloc_heap_pages(
> while ( j != order )
> {
> PFN_ORDER(pg) = --j;
> - page_list_add_tail(pg, &heap(node, zone, j));
> + page_list_add(pg, &heap(node, zone, j));
> pg += 1 << j;
> }
>
> @@ -851,11 +867,14 @@ static int reserve_offlined_page(struct page_info *head)
> int zone = page_to_zone(head), i, head_order = PFN_ORDER(head), count = 0;
> struct page_info *cur_head;
> int cur_order;
> + bool need_scrub;
>
> ASSERT(spin_is_locked(&heap_lock));
>
> cur_head = head;
>
> + head->u.free.dirty_head = false;
> +
> page_list_del(head, &heap(node, zone, head_order));
>
> while ( cur_head < (head + (1 << head_order)) )
> @@ -892,8 +911,16 @@ static int reserve_offlined_page(struct page_info *head)
> {
> merge:
> /* We don't consider merging outside the head_order. */
> - page_list_add_tail(cur_head, &heap(node, zone, cur_order));
> - PFN_ORDER(cur_head) = cur_order;
> +
> + /* See if any of the pages need scrubbing. */
> + need_scrub = false;
> + for ( i = 0; i < (1 << cur_order); i++ )
> + if ( test_bit(_PGC_need_scrub, &cur_head[i].count_info) )
> + {
> + need_scrub = true;
> + break;
> + }
> + page_list_add_scrub(cur_head, node, zone, cur_order, need_scrub);
This business of clearing dirty_head and then setting it again in
page_list_add_scrub() could use some explanation -- either near one of
these loops, or preferably in mm.h.
> cur_head += (1 << cur_order);
> break;
> }
> @@ -922,10 +949,13 @@ static int reserve_offlined_page(struct page_info *head)
> /* Returns new buddy head. */
> static struct page_info *
> merge_and_free_buddy(struct page_info *pg, unsigned int node,
> - unsigned int zone, unsigned int order)
> + unsigned int zone, unsigned int order,
> + bool need_scrub)
What is the meaning of "need_scrub" here? Does this mean that pg needs
to be scrubbed?
-George
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel