
Re: [Xen-devel] [PATCH v2 2/9] mm: Place unscrubbed pages at the end of pagelist



>>> On 04.04.17 at 17:39, <boris.ostrovsky@xxxxxxxxxx> wrote:
> On 04/04/2017 11:29 AM, Jan Beulich wrote:
>>>>> On 04.04.17 at 17:14, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> On 04/04/2017 10:46 AM, Jan Beulich wrote:
>>>>> @@ -933,6 +952,10 @@ static bool_t can_merge(struct page_info *buddy, unsigned int node,
>>>>>           (phys_to_nid(page_to_maddr(buddy)) != node) )
>>>>>          return false;
>>>>>  
>>>>> +    if ( need_scrub !=
>>>>> +         !!test_bit(_PGC_need_scrub, &buddy->count_info) )
>>>>> +        return false;
>>>> I don't think leaving the tree in a state where larger order chunks
>>>> don't become available for allocation right away is going to be
>>>> acceptable. Hence with this issue being dealt with only in patch 7
>>>> as it seems, you should state clearly and visibly that (at least)
>>>> patches 2...7 should only be committed together.
>>> The dirty pages are available for allocation as a result of this patch,
>>> but they might not be merged into higher orders (which is what this
>>> check is for).
>> The individual chunks are available for allocation, but not the
>> combined one (for a suitably high order request). Or am I
>> missing something?
> 
> 
> Correct, but this is not changed by any later patch (including patch 7).
> We only merge with a buddy with the same level of cleanliness (so to
> speak ;-))

Hmm, that aspect was one of the main things I had objected to
back when one of your colleagues had a first take at this.
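For illustration, the "only merge buddies with the same cleanliness" rule being debated can be modelled roughly as below. This is a minimal sketch, not Xen code: the struct and function names are invented, and the single boolean stands in for the `_PGC_need_scrub` bit in `count_info`.

```c
#include <stdbool.h>

/* Hypothetical stand-in for struct page_info; only the scrub state
 * relevant to this discussion is modelled. */
struct fake_page {
    bool need_scrub;   /* stands in for _PGC_need_scrub in count_info */
};

/* Sketch of the check the patch adds to can_merge(): a buddy is only
 * mergeable when its scrub state matches that of the chunk being
 * freed, so clean and dirty chunks never combine into one buddy. */
static bool can_merge_sketch(const struct fake_page *buddy, bool need_scrub)
{
    return buddy->need_scrub == need_scrub;
}
```

The consequence Jan objects to falls out directly: a clean chunk freed next to a dirty one fails this check, so the higher-order buddy does not form until the dirty side has been scrubbed.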

>>>>> @@ -952,9 +977,10 @@ static struct page_info *merge_chunks(struct page_info *pg, unsigned int node,
>>>>>          {
>>>>>              /* Merge with predecessor block? */
>>>>>              buddy = pg - mask;
>>>>> -            if ( !can_merge(buddy, node, order) )
>>>>> +            if ( !can_merge(buddy, node, order, need_scrub) )
>>>>>                  break;
>>>>>  
>>>>> +            pg->count_info &= ~PGC_need_scrub;
>>>>>              pg = buddy;
>>>>>              page_list_del(pg, &heap(node, zone, order));
>>>>>          }
>>>>> @@ -962,9 +988,10 @@ static struct page_info *merge_chunks(struct page_info *pg, unsigned int node,
>>>>>          {
>>>>>              /* Merge with successor block? */
>>>>>              buddy = pg + mask;
>>>>> -            if ( !can_merge(buddy, node, order) )
>>>>> +            if ( !can_merge(buddy, node, order, need_scrub) )
>>>>>                  break;
>>>>>  
>>>>> +            buddy->count_info &= ~PGC_need_scrub;
>>>>>              page_list_del(buddy, &heap(node, zone, order));
>>>>>          }
>>>> For both of these, how come you can / want to clear the need-scrub
>>>> flag? Wouldn't it be better for each individual page to retain it, so
>>>> when encountering a higher-order one you know which pages need
>>>> scrubbing and which don't? Couldn't that also be used to avoid
>>>> suppressing their merging here right away?
>>> I am trying to avoid having to keep a dirty bit for each page, since a
>>> buddy is either fully clean or fully dirty. That way we shouldn't need
>>> to walk the list and clear the bit. (I, in fact, suspect that there may
>>> be other state bits/fields that we might be able to keep at the buddy
>>> head only.)
>> But as said - at the expense of not being able to merge early. I
>> consider this a serious limitation.
> 
> What do you mean by "early"? At freeing time?

Right.
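The trade-off behind the two positions above can be sketched as follows. Everything here is invented for illustration (none of it is Xen code): a head-only flag makes the clear operation O(1) but forces whole-buddy granularity, while per-page flags cost a walk over the 2^order range when clearing.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical head page carrying the buddy-wide scrub state. */
struct head_page {
    bool need_scrub;
};

/* Boris's scheme: one store on the head, regardless of order. */
static void mark_clean_head_only(struct head_page *head)
{
    head->need_scrub = false;
}

/* Per-page scheme: every page in the 2^order buddy must be visited.
 * Returns the number of pages touched, to make the cost visible. */
static size_t mark_clean_per_page(bool *flags, unsigned int order)
{
    size_t n = (size_t)1 << order, i;

    for ( i = 0; i < n; i++ )
        flags[i] = false;
    return n;
}
```

The head-only variant is what lets the patch skip walking the list, at the price of buddies being either fully clean or fully dirty and hence not mergeable across that boundary.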

> But then we will always have to scan the buddy during allocation to see
> if any pages are dirty.

There could be a summary flag to avoid this for entirely clean
buddies. Plus perhaps some auxiliary indication where the first
unclean part is, to speed up the scanning.
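A rough sketch of that suggestion, with all names invented for illustration (a summary flag plus a first-dirty hint kept alongside hypothetical per-page state):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical buddy descriptor: a buddy-level summary flag plus a
 * hint to where the first dirty page sits, so allocation can skip the
 * scan entirely for clean buddies and start scrubbing at the right
 * offset for mixed ones. */
struct buddy_sketch {
    bool any_dirty;            /* summary: does any page need scrubbing? */
    size_t first_dirty;        /* hint: index of the first dirty page    */
    bool page_dirty[1u << 4];  /* per-page state for a small order-4 buddy */
};

/* Returns the index at which scrubbing should start, or 'pages' if the
 * whole buddy is already clean (the fast path the summary flag buys). */
static size_t scrub_start(const struct buddy_sketch *b, size_t pages)
{
    if ( !b->any_dirty )
        return pages;          /* clean buddy: no scan at all */
    return b->first_dirty;     /* mixed buddy: skip the clean prefix */
}
```

With something like this, clean and dirty chunks could merge eagerly at freeing time, and allocation would still avoid the full scan Boris is worried about.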

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

