This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH] xen: update machine_to_phys_order on resume

To: <Ian.Campbell@xxxxxxxxxx>, <konrad.wilk@xxxxxxxxxx>, <keir@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH] xen: update machine_to_phys_order on resume
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Fri, 15 Jul 2011 18:30:22 +0100
Cc: olaf@xxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx
Delivery-date: Fri, 15 Jul 2011 10:36:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> "Jan Beulich" <JBeulich@xxxxxxxxxx> 07/15/11 6:07 PM >>>
>>>> On 13.07.11 at 11:12, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>> It's not so much an objection to this patch but this issue seems to have
>> been caused by Xen cset 20892:d311d1efc25e which looks to me like a
>> subtle ABI breakage for guests. Perhaps we should introduce a feature
>> flag to indicate that a guest can cope with the m2p changing size over
>> migration like this?
>That's actually not straightforward, as the hypervisor can't see the ELF
>note specified features of a DomU kernel. Passing this information
>down from the tools or from the guest kernel itself, on the other hand, doesn't
>necessarily seem worth it. Instead a guest that can deal with the
>upper bound of the M2P table changing can easily obtain the
>desired information through XENMEM_maximum_ram_page. So I
>think on the hypervisor side we're good with the patch I sent
>earlier today.

Actually, one more thought: what is the purpose of this hypercall if the
values it returns are set in stone? Isn't a guest that uses it (supposed
to be) advertising that it can deal with the values being variable? It
was simply overlooked so far that "variable" means not only varying from
boot to boot, but also across migration. In other words, if we ever found
a need to relocate the M2P table or grow its static maximum size, it
would be impossible to migrate guests from an old to a new hypervisor.
