WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops

To: Brendan Cully <Brendan@xxxxxxxxx>, "andreas.olsowski@xxxxxxxxxxxxxxx" <andreas.olsowski@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, "edwin.zhai@xxxxxxxxx" <edwin.zhai@xxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Thu, 3 Jun 2010 08:12:17 +0100
Cc:
Delivery-date: Thu, 03 Jun 2010 00:13:20 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100603065542.GC52378@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsC6dTPL4cmIom2TXGl7h4HOlDIvwAAkR6x
Thread-topic: [Xen-devel] slow live migration / xc_restore on xen4 pvops
User-agent: Microsoft-Entourage/12.24.0.100205
On 03/06/2010 07:55, "Brendan Cully" <Brendan@xxxxxxxxx> wrote:

>> kernel, min call time, max call time
>> 2.6.18, 4 us, 72 us
>> pvops, 202 us, 10696 us (!)
>> 
>> It looks like pvops is dramatically slower to perform the
>> xc_domain_memory_populate_physmap call!
> 
> Looking at changeset 20841:
> 
>   Allow certain performance-critical hypercall wrappers to register data
>   buffers via a new interface which allows them to be 'bounced' into a
>   pre-mlock'ed page-sized per-thread data area. This saves the cost of
>   mlock/munlock on every such hypercall, which can be very expensive on
>   modern kernels.
> 
> ...maybe the lock_pages call in xc_memory_op (called from
> xc_domain_memory_populate_physmap) has gotten very expensive?
> Especially considering this hypercall is now issued once per page.

Maybe there are two issues here then. I mean, there's slow, and there's 10ms
for a presumably in-core kernel operation, which is rather mad.

Getting our batching back for 4k allocations is the most critical thing
though, of course.

 -- Keir



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
