
Re: [Xen-devel] Strange (???) xl behavior for save, migrate and migrate-receive



On Mon, Oct 17, 2011 at 11:44:51AM -0700, Dan Magenheimer wrote:
> > From: Daniel Kiper [mailto:dkiper@xxxxxxxxxxxx]
> > Subject: [Xen-devel] Strange (???) xl behavior for save, migrate and 
> > migrate-receive
> >
> > While working on memory hotplug for Xen I received some reports
> > that it breaks machine migration. I had some time and ran some
> > tests a few days ago. It looks like the source of this problem is
> > the xl command itself. I discovered that the generic save/restore
> > mechanism is used for machine migration. xl save stores the machine
> > config which was used at machine startup together with the current
> > machine state. This means that it does not take into account any
> > config changes made while the machine was running. As a result, a
> > domain on which memory hotplug was used cannot be restored on the
> > destination host after migration, because the amount of memory
> > currently allocated to the machine is larger than the amount
> > allocated at startup via the memory option. Yes, it is the memory
> > option, not the maxmem option. However, that is not important here,
> > because I think that the generic behavior of xl save, migrate and
> > migrate-receive should be changed (a fix for the memory hotplug
> > case would only be a workaround for a generic problem that will
> > return sooner or later). I think that xl save, migrate and
> > migrate-receive should use the current machine state and the
> > __CURRENT__ config (from xenstore ???) to do their tasks. However,
> > I am aware that this change could have a large impact on current
> > users. That is why I decided to ask for your opinions and suggested
> > solutions in this case (in general, not for memory hotplug only).
> >
> > Currently, these problems can be worked around by passing the
> > path of a config file containing the current config to the xl
> > command.
> >
> > I ran the tests on Xen 4.1.2-rc3. I have not tested the xm
> > command, but I suppose that it behaves similarly.
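
For reference, the workaround mentioned above looks roughly like this
(a sketch only; the domain, host and file names are placeholders, and
the exact option spelling may differ between xl versions):

    # current-guest.cfg is a copy of the guest config with memory=
    # raised to match the domain's current (post-hotplug) size.

    # save/restore, passing the updated config explicitly:
    xl save guest /var/lib/xen/save/guest.chk
    xl restore current-guest.cfg /var/lib/xen/save/guest.chk

    # live migration, sending the updated config instead of the one
    # recorded at domain creation:
    xl migrate -C current-guest.cfg guest dst-host
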
>
> Hi Daniel --
>
> In a recent internal discussion at Oracle, we were thinking about
> whether to enable hotplug functionality in a guest kernel and it
> raised some concerns about manageability.  I think right now
> the system administrator of the guest can arbitrarily increase
> memory size beyond maxmem... that is really the whole point
> of your implementation, right?  But this may be unacceptable to
> the "data center administrator" (the admin who runs the "cloud"
> and determines such things as vcpus and maxmem across all guests)
> since multiple guests may try to do this semi-maliciously to grab
> as much RAM as they can. And Xen has no way to discourage this,
> so it will just hand out the RAM first-come, first-served, right?
>
> I was thinking one way to handle this problem would be
> to have a new vm.cfg parameter, e.g. "maxmem_hotplug".
> If unspecified (or zero), there are no constraints placed
> on the guest.  If specified (in MB), Xen/xl will disallow
> hotplug memory requests beyond this maximum.
>
> I suspect, if implemented properly, this might also eliminate
> your live migration issue.
>
> Apologies if something like this was previously discussed or
> is already working in your implementation.

Please look at the e-mails that Ian and I posted earlier.
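
For what it is worth, as I read the suggestion it would amount to a new
guest config knob along these lines (hypothetical name and semantics,
nothing like this is implemented yet; the numbers are only examples):

    # Example guest config fragment (illustrative values only)
    memory         = 1024   # MB given to the guest at boot
    maxmem         = 2048   # MB the guest may balloon up to
    maxmem_hotplug = 4096   # hypothetical: hard cap that Xen/xl would
                            # enforce on memory added via hotplug;
                            # unset or 0 = no constraint, as proposed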

> Dan
>
> P.S. Also FYI, selfballooning is implemented in Oracle's kernel
> so we should work to ensure that selfballooning and hotplug
> work properly together.

I am happy to do that; however, I am very busy right now.
Could we postpone this for 2-3 months?

Daniel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

