WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops

To: keir.fraser@xxxxxxxxxxxxx, andreas.olsowski@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, Ian.Jackson@xxxxxxxxxxxxx, edwin.zhai@xxxxxxxxx, jeremy@xxxxxxxx
Subject: Re: [Xen-devel] slow live migration / xc_restore on xen4 pvops
From: Brendan Cully <Brendan@xxxxxxxxx>
Date: Wed, 2 Jun 2010 23:55:43 -0700
Cc:
Delivery-date: Wed, 02 Jun 2010 23:56:34 -0700
In-reply-to: <20100603064545.GB52378@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Mail-followup-to: keir.fraser@xxxxxxxxxxxxx, andreas.olsowski@xxxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, Ian.Jackson@xxxxxxxxxxxxx, edwin.zhai@xxxxxxxxx, jeremy@xxxxxxxx
References: <20100603010418.GB2028@xxxxxxxxxxxxxxxxx> <C82D0098.168F5%keir.fraser@xxxxxxxxxxxxx> <20100603064545.GB52378@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.20 (2010-04-22)
On Wednesday, 02 June 2010 at 23:45, Brendan Cully wrote:
> On Thursday, 03 June 2010 at 06:47, Keir Fraser wrote:
> > On 03/06/2010 02:04, "Brendan Cully" <Brendan@xxxxxxxxx> wrote:
> > 
> > > I've done a bit of profiling of the restore code and observed the
> > > slowness here too. It looks to me like it's probably related to
> > > superpage changes. The big hit appears to be at the front of the
> > > restore process during calls to allocate_mfn_list, under the
> > > normal_page case. It looks like we're calling
> > > xc_domain_memory_populate_physmap once per page here, instead of
> > > batching the allocation? I haven't had time to investigate further
> > > today, but I think this is the culprit.
> > 
> > Ccing Edwin Zhai. He wrote the superpage logic for domain restore.
> 
> Here's some data on the slowdown going from 2.6.18 to pvops dom0:
> 
> I wrapped the call to allocate_mfn_list in uncanonicalize_pagetable
> to measure the time to do the allocation.
> 
> kernel, min call time, max call time
> 2.6.18, 4 us, 72 us
> pvops, 202 us, 10696 us (!)
> 
> It looks like pvops is dramatically slower to perform the
> xc_domain_memory_populate_physmap call!

Looking at changeset 20841:

  Allow certain performance-critical hypercall wrappers to register data
  buffers via a new interface which allows them to be 'bounced' into a
  pre-mlock'ed page-sized per-thread data area. This saves the cost of
  mlock/munlock on every such hypercall, which can be very expensive on
  modern kernels.

...maybe the lock_pages call in xc_memory_op (called from
xc_domain_memory_populate_physmap) has gotten very expensive?
Especially considering this hypercall is now issued once per page.
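The effect of batching versus per-page hypercalls can be sketched with a toy cost model (not Xen code; the setup and per-page costs below are hypothetical numbers chosen only to show the shape of the effect, with the fixed cost standing in for the lock_pages/mlock overhead):

```python
# Toy model: compares issuing one populate-physmap hypercall per page
# against one batched hypercall for all pages, where each call pays a
# fixed setup cost (mlock/munlock of the argument buffer) plus a small
# marginal cost per page. All constants are hypothetical.

SETUP_US = 200      # assumed fixed per-hypercall overhead (mlock etc.)
PER_PAGE_US = 0.05  # assumed marginal cost per page within one call

def per_page_cost(pages):
    # One hypercall per page: the setup cost is paid every time.
    return pages * (SETUP_US + PER_PAGE_US)

def batched_cost(pages):
    # One hypercall for the whole batch: the setup cost is paid once.
    return SETUP_US + pages * PER_PAGE_US

pages = 262144  # a 1 GiB guest in 4 KiB pages
print(per_page_cost(pages) / batched_cost(pages))  # setup cost dominates
```

Under any model of this shape, the per-page variant scales the fixed setup cost with guest size, which is consistent with the mlock cost in xc_memory_op being the place to look.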

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
