
Re: Live migration and PV device handling



On Tue, Apr 7, 2020 at 1:57 AM Paul Durrant <xadimgnik@xxxxxxxxx> wrote:
>
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Tamas 
> > K Lengyel
> > Sent: 06 April 2020 18:31
> > To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> > Cc: Xen-devel <xen-devel@xxxxxxxxxxxxx>; Anastassios Nanos 
> > <anastassios.nanos@xxxxxxxxxxx>
> > Subject: Re: Live migration and PV device handling
> >
> > On Mon, Apr 6, 2020 at 11:24 AM Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
> > wrote:
> > >
> > > On 06/04/2020 18:16, Tamas K Lengyel wrote:
> > > > On Fri, Apr 3, 2020 at 6:44 AM Andrew Cooper 
> > > > <andrew.cooper3@xxxxxxxxxx> wrote:
> > > >> On 03/04/2020 13:32, Anastassios Nanos wrote:
> > > >>> Hi all,
> > > >>>
> > > >>> I am trying to understand how live migration happens in Xen. I am
> > > >>> looking in the HVM guest case and I have dug into the relevant parts
> > > >>> of the toolstack and the hypervisor regarding memory, vCPU context
> > > >>> etc.
> > > >>>
> > > >>> In particular, I am interested in how PV device migration happens. I
> > > >>> assume that the guest is not aware of any suspend/resume operations
> > > >>> being done
> > > >> Sadly, this assumption is not correct.  HVM guests with PV drivers
> > > >> currently have to be aware in exactly the same way as PV guests.
> > > >>
> > > >> Work is in progress to try and address this.  See
> > > >> https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=775a02452ddf3a6889690de90b1a94eb29c3c732
> > > >> (sorry - for some reason that doc isn't being rendered properly in
> > > >> https://xenbits.xen.org/docs/ )
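
To illustrate the awareness Andrew describes: the toolstack requests a
suspend by writing "suspend" to the guest's xenstore node
control/shutdown, and PV-aware guests watch that node so they can
quiesce their frontends before the save. A minimal userspace sketch of
the watch side, assuming libxenstore is usable inside the guest (real
frontends do the equivalent in kernel code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    if (!xsh)
        return 1;

    /* The toolstack writes "suspend" here when saving/migrating. */
    xs_watch(xsh, "control/shutdown", "shutdown-token");

    for (;;) {
        unsigned int num, len;
        char **vec = xs_read_watch(xsh, &num); /* blocks for the next event */
        if (!vec)
            break;

        char *val = xs_read(xsh, XBT_NULL, vec[XS_WATCH_PATH], &len);
        if (val && !strcmp(val, "suspend"))
            /* A real frontend would now quiesce its rings and revoke
             * its grants before the kernel suspends. */
            printf("suspend requested by toolstack\n");

        free(val);
        free(vec);
    }
    xs_close(xsh);
    return 0;
}
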
> > > > That proposal is very interesting - it's the first time it has crossed
> > > > my radar - but I dislike the idea that domain IDs need to be preserved
> > > > for non-cooperative migration to work.
> > >
> > > The above restriction is necessary to work with existing guests, which
> > > is an implementation requirement of the folks driving the work.
> > >
> > > > Ideally I would be able to take
> > > > advantage of the same plumbing to perform forking of VMs with PV
> > > > drivers, where preserving the domain ID is impossible since it's still
> > > > in use.
> > >
> > > We would of course like to make changes to remove the above restriction
> > > in the long term.  The problem is that it is not a trivial thing to fix.
> > > Various things were discussed in Chicago, but I don't recall whether any
> > > of the plans made their way onto xen-devel.
> >
> > Yeah, I imagine trying to get this to work with existing PV drivers is
> > not possible in any other way.
>
> No, as the doc says, the domid forms part of the protocol and is hence 
> visible to the guest, and the guest may sample and use the value when making 
> certain hypercalls (only some enforce use of DOMID_SELF). Thus faking it 
> without risking a guest crash is going to be difficult.
>
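
To make that visibility concrete: the guest reads its own domid and
each backend's domid out of xenstore, and frontends cache the backend
value for grant entries and event-channel binds. A rough sketch, again
assuming libxenstore inside the guest (the vif path is illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <xenstore.h>

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    unsigned int len;

    if (!xsh)
        return 1;

    /* The guest's own domid, published by the toolstack. */
    char *domid = xs_read(xsh, XBT_NULL, "domid", &len);

    /* The peer domid a frontend passes to EVTCHNOP_bind_interdomain
     * and writes into its grant entries. */
    char *backend = xs_read(xsh, XBT_NULL, "device/vif/0/backend-id", &len);

    printf("my domid: %s, backend domid: %s\n",
           domid ? domid : "?", backend ? backend : "?");

    /* If either value changes across a migration, whatever the
     * frontend cached from them is stale - hence the requirement
     * to preserve domids for non-cooperative migration. */
    free(domid);
    free(backend);
    xs_close(xsh);
    return 0;
}
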
> > But if we can update the PV driver code
> > such that in the long term it can work without preserving the domain
> > ID, that would be worthwhile.
> >
>
> I think that ship has sailed. It would probably be simpler and cheaper to 
> just get virtio working with Xen.

That would certainly make sense to me; converging on a single
standard would reduce the maintenance overhead considerably.

Tamas
