[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH 0/6] [VERY RFC] Migration Stream v2



On Wed, 2014-04-09 at 19:28 +0100, Andrew Cooper wrote:
> Some design decisions have been taken very deliberately (e.g. splitting the
> logic for PV and HVM migration) while others have been more along the lines of
> "I think it's a sensible thing to do given a lack of any evidence/opinion to
> the contrary".

Is there some indication of which is which?

Should we check in the design/spec which was previously posted as part
of this series?

> The error handling is known to be only semi-consistent.  Functions return 0 for
> success and non-zero for failure.  This is typically -1, although errno is not
> always relevant.  However, the logging messages should all be relevant and
> correct.  Making this properly consistent will involve wider effort across all
> of libxc.

It would be useful if the new code were correct at least as far as its
own behaviour goes (meaning no need to fix the functions it calls as
part of this series).
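To make the intent concrete, here is a minimal sketch of the 0-on-success /
-1-plus-errno convention described in the cover letter; the function name,
record type and log messages are purely illustrative, not taken from the
series:

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Illustrative only: a stream-writing helper following the convention
 * of returning 0 on success and -1 on failure with errno set and a
 * relevant log message emitted before errno can be clobbered. */
static int write_record(int fd, const void *buf, size_t len)
{
    ssize_t rc = write(fd, buf, len);

    if (rc < 0) {
        /* errno was set by write(); log immediately. */
        fprintf(stderr, "write_record: %s\n", strerror(errno));
        return -1;
    }
    if ((size_t)rc != len) {
        /* No errno from a short write, so choose a meaningful one. */
        errno = EIO;
        fprintf(stderr, "write_record: short write (%zd of %zu)\n",
                rc, len);
        return -1;
    }
    return 0;
}
```

The point being that every failure path sets a meaningful errno and logs
before returning, so callers can rely on both.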

> An area needing discussion is how to do v1 -> v2 transformations for a 
> one-time
> upgrade.  There is a (very basic currently) python script which can pick a v1
> stream, and a separate python library to write v2 streams.
> 
> One option would be to combine these two into a program which takes two fds,
> which libxc can exec() out to.  There is deliberate flexibility in the v2
> restore code which allows a v1 -> v2 transformation on a stream without 
> seeking.

Forking/exec'ing in libxc might be problematic; fitting it into libxl
might be easier, since libxl already has infrastructure for that sort of
thing.

Or perhaps, since most of this already happens in a process which libxl
spawns for that purpose anyway, libxc can safely fork there: the
application in that case is under our control.

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel