Re: [Xen-devel] [PATCH v2 1/3] xen: delay page scrubbing to allocation path
On 06/30/2014 11:56 PM, Jan Beulich wrote:
>>>> On 30.06.14 at 15:39, <lliubbo@xxxxxxxxx> wrote:
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -711,6 +711,12 @@ static struct page_info *alloc_heap_pages(
>>
>> for ( i = 0; i < (1 << order); i++ )
>> {
>> + if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
>> + {
>> + scrub_one_page(&pg[i]);
>> + pg[i].count_info &= ~PGC_need_scrub;
>> + }
>> +
>
> heap_lock is still being held here - scrubbing should be done after it
> was dropped (or else you re-introduce the same latency problem to
> other paths now needing to wait for the scrubbing to complete).
>
I see. As it stands the patch only avoids the pathological case, e.g. scrubbing a whole 4TB chunk when only a single page was requested.
Anyway, I will move the scrubbing out of the spinlock in the next version.
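Roughly what I have in mind for v3 (an untested sketch only, reusing the names from this patch): do the scrubbing right after the spin_unlock(), at which point the pages already belong to this allocation so nothing else can race with us:

    /* After spin_unlock(&heap_lock) in alloc_heap_pages(): the pages
     * are now owned by this allocation, so scrubbing outside the lock
     * cannot block other allocators or the free path. */
    for ( i = 0; i < (1 << order); i++ )
    {
        if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
        {
            scrub_one_page(&pg[i]);
            pg[i].count_info &= ~PGC_need_scrub;
        }
    }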
>> @@ -876,6 +882,15 @@ static void free_heap_pages(
>> midsize_alloc_zone_pages = max(
>> midsize_alloc_zone_pages, total_avail_pages /
>> MIDSIZE_ALLOC_FRAC);
>>
>> + if ( need_scrub )
>> + {
>> + if ( !tainted )
>> + {
>> + for ( i = 0; i < (1 << order); i++ )
>> + pg[i].count_info |= PGC_need_scrub;
>> + }
>> + }
>
> Two if()s like these should be folded into one.
>
Will be fixed.
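I.e. folded into a single condition:

    if ( need_scrub && !tainted )
    {
        for ( i = 0; i < (1 << order); i++ )
            pg[i].count_info |= PGC_need_scrub;
    }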
>> @@ -889,6 +904,17 @@ static void free_heap_pages(
>> (PFN_ORDER(pg-mask) != order) ||
>> (phys_to_nid(page_to_maddr(pg-mask)) != node) )
>> break;
>> + /* If we need scrub, only merge with PGC_need_scrub pages */
>> + if ( need_scrub )
>> + {
>> + if ( !test_bit(_PGC_need_scrub, &(pg-mask)->count_info) )
>> + break;
>> + }
>> + else
>> + {
>> + if ( test_bit(_PGC_need_scrub, &(pg-mask)->count_info) )
>> + break;
>> + }
>
> You're setting PGC_need_scrub on each 4k page anyway (which
> is debatable), hence there's no need to look at the passed in
> need_scrub flag here: Just check whether both chunks have the
> flag set the same. Same below.
>
Right, thanks for your suggestion.
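So the two branches collapse into a single comparison, along these lines (untested; the !! normalizes test_bit()'s return value before comparing):

    /* Only merge with a buddy whose scrub state matches ours. */
    if ( !!test_bit(_PGC_need_scrub, &pg->count_info) !=
         !!test_bit(_PGC_need_scrub, &(pg-mask)->count_info) )
        break;

and the same check against (pg+mask) in the successor-merge case below.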
>> @@ -1535,7 +1571,7 @@ void free_xenheap_pages(void *v, unsigned int order)
>>
>> memguard_guard_range(v, 1 << (order + PAGE_SHIFT));
>>
>> - free_heap_pages(virt_to_page(v), order);
>> + free_heap_pages(virt_to_page(v), order, 1);
>
> Why?
>
Sorry, that was a mistake; it will be fixed.
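That is, this call site should keep the pre-patch behaviour of not scrubbing:

    free_heap_pages(virt_to_page(v), order, 0);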
>> @@ -1588,11 +1624,10 @@ void free_xenheap_pages(void *v, unsigned int order)
>>
>> for ( i = 0; i < (1u << order); i++ )
>> {
>> - scrub_one_page(&pg[i]);
>> pg[i].count_info &= ~PGC_xen_heap;
>> }
>>
>> - free_heap_pages(pg, order);
>> + free_heap_pages(pg, order, 1);
>
> The flag needs to be 1 here, but I don't see why you also pass 1 in
> the other free_xenheap_pages() incarnation above.
>
>> @@ -1745,24 +1780,20 @@ void free_domheap_pages(struct page_info *pg, unsigned int order)
>> * domain has died we assume responsibility for erasure.
>> */
>> if ( unlikely(d->is_dying) )
>> - for ( i = 0; i < (1 << order); i++ )
>> - scrub_one_page(&pg[i]);
>> -
>> - free_heap_pages(pg, order);
>> + free_heap_pages(pg, order, 1);
>> + else
>> + free_heap_pages(pg, order, 0);
>> }
>> else if ( unlikely(d == dom_cow) )
>> {
>> ASSERT(order == 0);
>> - scrub_one_page(pg);
>> - free_heap_pages(pg, 0);
>> + free_heap_pages(pg, 0, 1);
>> drop_dom_ref = 0;
>> }
>> else
>> {
>> /* Freeing anonymous domain-heap pages. */
>> - for ( i = 0; i < (1 << order); i++ )
>> - scrub_one_page(&pg[i]);
>> - free_heap_pages(pg, order);
>> + free_heap_pages(pg, order, 1);
>> drop_dom_ref = 0;
>> }
>>
>
> This hunk is patching no longer existing code (see commit daa4b800
> "slightly consolidate code in free_domheap_pages()").
>
I'll rebase this patch onto a newer tree that includes that commit.
Thanks again.
--
Regards,
-Bob
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel