Re: [Xen-devel] Re: Improving domU restore time

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] Re: Improving domU restore time
From: Rafal Wojtczuk <rafal@xxxxxxxxxxxxxxxxxxxxxx>
Date: Wed, 2 Jun 2010 18:24:13 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
In-reply-to: <4C053C99.6010906@xxxxxxxx>
References: <20100525103557.GC23903@xxxxxxxxxxxxxxxxxxx> <C8217820.15199%keir.fraser@xxxxxxxxxxxxx> <20100531094243.GB3374@xxxxxxxxxxxxxxxxxxx> <4C053C99.6010906@xxxxxxxx>
User-agent: Mutt/1.5.17 (2007-11-01)
On Tue, Jun 01, 2010 at 10:00:09AM -0700, Jeremy Fitzhardinge wrote:
> On 05/31/2010 02:42 AM, Rafal Wojtczuk wrote:
> > Hello,
> >   
> >> I would be grateful for comments on possible methods to improve domain
> >> restore performance. Focusing on the PV case, if it matters.
> >>     
> > Continuing the topic; thank you to everyone that responded so far.
> >
> > Focusing on the xen-3.4.3 case for now, dom0/domU still 2.6.32.x pvops x86_64.
> > Let me just reiterate that for our purposes, the domain save time (and
> > possibly related post-processing) is not critical; it
> > is only the restore time that matters. I did some experiments; they involve:
> > 1) before saving a domain, have domU allocate all free memory in a userland
> > process, then fill it with some MAGIC_PATTERN. Save domU, then process the
> > savefile, removing all pfns (and their page content) that refer to a page 
> > containing MAGIC_PATTERN.
> > This reduces the savefile size.
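
(For concreteness, a minimal sketch of the fill step above; the pattern value
and the chunk size are arbitrary placeholders, not the ones we actually use:)

    #include <stdint.h>
    #include <stdlib.h>

    #define MAGIC_PATTERN 0xdeadbeefcafef00dULL  /* placeholder value */
    #define CHUNK (4UL << 20)                    /* 4 MB per allocation */

    int main(void)
    {
        /* Grab memory until allocation fails, stamping every word so the
           savefile post-processor can recognize and drop these pages.
           The chunks are deliberately leaked; the domain is saved next.
           Note: with default overcommit settings the loop may be ended
           by the OOM killer rather than by malloc returning NULL. */
        for (;;) {
            uint64_t *p = malloc(CHUNK);
            if (!p)
                break;
            for (size_t i = 0; i < CHUNK / sizeof(*p); i++)
                p[i] = MAGIC_PATTERN;
        }
        return 0;
    }
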
> Why not just balloon the domain down?
I thought it (or rather the matching balloon-up after restore) would cost
quite a lot of CPU time; it used to, AFAIR. But nowadays it looks sensible, in
the 90 ms range. Yes, that is much cleaner, thank you for the hint.
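
(For reference, ballooning down is essentially a write of the new target, in
KiB, to the domain's memory/target node in xenstore, which the guest balloon
driver watches; a rough sketch using libxenstore, with error handling elided:)

    #include <stdio.h>
    #include <string.h>
    #include <xs.h>   /* libxenstore */

    /* Ask the balloon driver of domain `domid` to shrink the domain to
       `target_kib` KiB, much as `xm mem-set` does. */
    int balloon_down(int domid, unsigned long target_kib)
    {
        char path[64], val[32];
        struct xs_handle *xs = xs_daemon_open();
        if (!xs)
            return -1;
        snprintf(path, sizeof(path), "/local/domain/%d/memory/target", domid);
        snprintf(val, sizeof(val), "%lu", target_kib);
        xs_write(xs, XBT_NULL, path, val, strlen(val));
        xs_daemon_close(xs);
        return 0;
    }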
 
> > should be no disk reads at all). Is the single-threaded nature of xenstored
> > the possible cause for the delays?
> Have you tried oxenstored?  It works well for me, and seems to be a lot
> faster.
Do you mean 
http://xenbits.xensource.com/ext/xen-ocaml-tools.hg
?
After some tweaks to the Makefiles (-fPIC is required on x86_64 for the
library sources) it compiles, but then it bails out during startup with:
fatal error: exception Failure("ioctl bind_interdomain failed")
This happens under xen-3.4.3; does it require 4.0.0?

> >> I would expect IOCTL_PRIVCMD_MMAPBATCH to be the most significant part of
> >> that loop.
> > Let's imagine there is a hypercall do_direct_memcpy_from_dom0_to_mfn(int
> > mfn_count, mfn* mfn_array, char * pages_content).
> The main cost of pagetable manipulations is the tlb flush; if you can
> batch all your setups together to amortize the cost of the tlb flush, it
> should be pretty quick.  But if batching is not being used properly,
> then it could get very expensive.  My own observation of "strace xl
> restore" is that it seems to do a *lot* of ioctls on privcmd, but I
> haven't looked more closely to see what those calls are, and whether
> they're being done in an optimal way.
Well, it looks like xc_restore should _usually_ call
xc_map_foreign_batch once per batch of pages (once per 1024 pages read), which
looks sensible. xc_add_mmu_update also tries to batch requests. There are
432 occurrences of the ioctl syscall in the xc_restore strace output; I am not
sure whether that is damagingly numerous.
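
(To illustrate the pattern: each batch boils down to a single
xc_map_foreign_batch() call, i.e. one IOCTL_PRIVCMD_MMAPBATCH, so the
pagetable setup is amortized over 1024 pages. A rough sketch against the
3.4-era libxc interface; copy_batch() itself is made up for illustration:)

    #include <string.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    #define BATCH 1024   /* pages per batch, as in xc_restore */
    #define PG    4096   /* x86 page size */

    /* Map one batch of guest pfns in one go and copy the saved page
       contents in.  On 3.4, xc_map_foreign_batch() marks entries that
       failed to map in pfns[] itself; that check is omitted here. */
    static int copy_batch(int xc_handle, uint32_t dom,
                          xen_pfn_t pfns[BATCH], const char *data)
    {
        char *region = xc_map_foreign_batch(xc_handle, dom,
                                            PROT_READ | PROT_WRITE,
                                            pfns, BATCH);
        if (region == NULL)
            return -1;
        memcpy(region, data, (size_t)BATCH * PG);
        munmap(region, (size_t)BATCH * PG);
        return 0;
    }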

Regards,
Rafal Wojtczuk
Principal Researcher
Invisible Things Lab, Qubes-os project
