xen-devel

Re: [Xen-devel] Latency spike during page_scrub_softirq

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Latency spike during page_scrub_softirq
From: Chris Lalancette <clalance@xxxxxxxxxx>
Date: Fri, 03 Jul 2009 09:32:08 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 03 Jul 2009 00:32:47 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C6728D15.88B5%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C6728D15.88B5%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.21 (X11/20090320)
Keir Fraser wrote:
> On 02/07/2009 15:47, "Chris Lalancette" <clalance@xxxxxxxxxx> wrote:
> 
>> There are a couple of solutions that I can think of:
>> 1)  Just clear the pages inside free_domheap_pages().  I tried this with a
>> 64GB guest as mentioned above, and I didn't see any ill effects from doing
>> so.  It seems like this might actually be a valid way to go, although then
>> a single CPU is doing all of the work of freeing the pages (might be a
>> problem on UP systems).
> 
> Now that domain destruction is preemptible all the way back up to libxc, I
> think the page-scrub queue is not so much required. And it seems it never
> worked very well anyway! I will remove it.
> 
> This may make 'xm destroy' operations take a while, but actually this may be
> more sensibly handled by punting the destroy hypercall into another thread
> at dom0 userspace level, rather than doing the shonky 'scheduling' we
> attempt in Xen itself right now.

Yep, agreed, and I see you've committed it as c/s 19886.  Except...

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
...
@@ -1247,10 +1220,7 @@ void free_domheap_pages(struct page_info
             for ( i = 0; i < (1 << order); i++ )
             {
                 page_set_owner(&pg[i], NULL);
-                spin_lock(&page_scrub_lock);
-                page_list_add(&pg[i], &page_scrub_list);
-                scrub_pages++;
-                spin_unlock(&page_scrub_lock);
+                scrub_one_page(&pg[i]);
             }
         }
     }

This hunk actually needs to free the page as well, with free_heap_pages().
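Roughly, I'd expect the dying-domain branch to end up looking something like
this (just a sketch; the actual restructuring may differ):

            for ( i = 0; i < (1 << order); i++ )
            {
                page_set_owner(&pg[i], NULL);
                /* scrub_one_page() only zeroes the page; it still has to be
                 * handed back to the allocator afterwards. */
                scrub_one_page(&pg[i]);
            }
            free_heap_pages(pg, order);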

-- 
Chris Lalancette

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
