xen-devel mailing list archive

Subject: Re: [Xen-devel] Strange (???) xl behavior for save, migrate and migrate-receive
From: Daniel Kiper <dkiper@xxxxxxxxxxxx>
To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Cc: jeremy@xxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, Konrad Wilk <konrad.wilk@xxxxxxxxxx>, ian.jackson@xxxxxxxxxxxxx, v.tolstov@xxxxxxxxx, ian.campbell@xxxxxxxxxxxxx, Daniel Kiper <dkiper@xxxxxxxxxxxx>
Date: Tue, 18 Oct 2011 17:22:02 +0200
In-reply-to: <4ffd9c88-88d2-437f-9af1-f3f0149334d9@default>
References: <20111017174036.GD29445@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4ffd9c88-88d2-437f-9af1-f3f0149334d9@default>
On Mon, Oct 17, 2011 at 11:44:51AM -0700, Dan Magenheimer wrote:
> > From: Daniel Kiper [mailto:dkiper@xxxxxxxxxxxx]
> > Subject: [Xen-devel] Strange (???) xl behavior for save, migrate and
> > migrate-receive
> >
> > While working on memory hotplug for Xen I received reports that it
> > breaks machine migration. I found some time and ran tests a few
> > days ago. It appears that the source of the problem is the xl
> > command itself. I discovered that the generic save/restore mechanism
> > is used for machine migration. xl save stores the machine config
> > that was used at machine startup together with the current machine
> > state. This means it does not take into account any config changes
> > made while the machine was running. As a result, a domain on which
> > memory hotplug was used cannot be restored on the destination host,
> > because the current amount of memory allocated to the machine is
> > larger than the amount allocated at startup via the memory option.
> > Yes, it is the memory option, not the maxmem option. However, that
> > is not the important point here, because I think the generic
> > behavior of xl save, migrate and migrate-receive should be changed
> > (a fix for the memory hotplug case would only be a workaround for a
> > generic problem that will return sooner or later). I think that xl
> > save, migrate and migrate-receive should use the current machine
> > state and the __CURRENT__ config (from xenstore ???) to do their
> > tasks. However, I am aware that this change could have a large
> > impact on current users. That is why I decided to ask for your
> > opinions and suggested solutions for this case (in general, not
> > only for memory hotplug).
> >
> > Currently, these problems can be worked around by passing the
> > path of a config file containing the current config to the xl command.
> >
> > I have done my tests on Xen 4.1.2-rc3. I have not tested the
> > xm command; however, I suppose that it behaves similarly.
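To make the workaround concrete, here is a minimal sketch using the
xl save/restore syntax as of Xen 4.1; the domain name and paths are
illustrative only:

```shell
# Save the domain, explicitly passing a config file which reflects the
# CURRENT state (e.g. the post-hotplug memory size) instead of relying
# on the config recorded at domain startup:
xl save guest1 /var/lib/xen/save/guest1.chk /etc/xen/guest1.current.cfg

# On restore, the explicitly given config is used instead of the
# stale startup config stored in the checkpoint image:
xl restore /etc/xen/guest1.current.cfg /var/lib/xen/save/guest1.chk
```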
>
> Hi Daniel --
>
> In a recent internal discussion at Oracle, we were thinking about
> whether to enable hotplug functionality in a guest kernel and it
> raised some concerns about manageability. I think right now
> the system administrator of the guest can arbitrarily increase
> memory size beyond maxmem... that is really the whole point
> of your implementation, right? But this may be unacceptable to
> the "data center administrator" (the admin who runs the "cloud"
> and determines such things as vcpus and maxmem across all guests)
> since multiple guests may try to do this semi-maliciously to grab
> as much RAM as they can. And Xen has no way to discourage this,
> so it will just hand out the RAM first-come, first-served, right?
>
> I was thinking one way to handle this problem would be
> to have a new vm.cfg parameter, e.g. "maxmem_hotplug".
> If unspecified (or zero), there are no constraints placed
> on the guest. If specified (in MB), Xen/xl will disallow
> hotplug memory requests beyond this maximum.
>
> I suspect, if implemented properly, this might also eliminate
> your live migration issue.
>
> Apologies if something like this was previously discussed or
> is already working in your implementation.
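As an illustration only (maxmem_hotplug is the proposed, hypothetical
vm.cfg parameter from the paragraph above, not an existing xl option),
the idea might look like:

```
# vm.cfg fragment -- maxmem_hotplug is hypothetical, all values illustrative
name   = "guest1"
memory = 1024           # memory handed to the guest at boot (MB)
maxmem = 2048           # ballooning ceiling (MB)
maxmem_hotplug = 4096   # proposed hard cap on hotplug requests (MB);
                        # 0 or unset = no constraint
```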
Please look at Ian's and my e-mails posted earlier.
> Dan
>
> P.S. Also FYI, selfballooning is implemented in Oracle's kernel
> so we should work to ensure that selfballooning and hotplug
> work properly together.
I am happy to do that; however, I am very busy now.
Could we postpone this for 2-3 months ???
Daniel
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel