This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] xm mem-max and mem-set

To: Ky Srinivasan <ksrinivasan@xxxxxxxxxx>
Subject: Re: [Xen-devel] xm mem-max and mem-set
From: Anthony Liguori <aliguori@xxxxxxxxxx>
Date: Thu, 13 Apr 2006 20:48:48 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 13 Apr 2006 18:49:33 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <443EB5A6.E57C.0030.0@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <443EB5A6.E57C.0030.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mail/News 1.5 (X11/20060309)
Ky Srinivasan wrote:
It appears that these commands (and the code backing them) do no
sanity checking and could potentially crash the system if the
values picked are not appropriate. For instance, one can set the mem-max
value to something that appears reasonable and effectively render the
machine unusable. Consider the case where the max value being set is less
than what is currently allocated to the domain. All subsequent
allocations will fail, and these failures are considered fatal in Linux
(look at hypervisor.c). Once the domain is up, does it even make sense
to lower the max_mem parameter without risking crashing the system?
Minimally, we should ensure that the mem_max value is at least equal to
the current domain allocation. I have a trivial patch to Xen
that implements this logic. This patch fixes a bug we have in our
bugzilla against SLES10. Would there be interest in such a patch?

I'm slightly concerned about the subtle race condition this would introduce. If there is no reason to set max-mem below the current reservation (assuming it really does cause crashes, though I don't fully understand why it would), then I think the check would be best enforced within the hypervisor.

Why, exactly, would setting max-mem below the current reservation cause problems in the guest? I guess allocations may fail because of grant transfer ops; in that case, we really ought to enforce the limit at the hypervisor level.
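The hypervisor-level check being discussed could be sketched roughly as follows. This is a minimal illustration only, not the actual patch: the struct fields and function name (`set_max_mem`) are hypothetical stand-ins for Xen's real domain bookkeeping, which tracks current and maximum allocations as page counts on each domain.

```c
#include <stdint.h>

/* Hypothetical stand-in for the relevant fields of Xen's struct domain. */
struct domain {
    uint64_t tot_pages;  /* pages currently allocated to the domain */
    uint64_t max_pages;  /* ceiling that future allocations must respect */
};

/*
 * Sketch of the proposed sanity check: refuse to lower the ceiling
 * below the domain's current allocation, so that subsequent memory
 * reservation calls from the guest cannot suddenly start failing
 * (which the guest kernel may treat as fatal).
 */
static int set_max_mem(struct domain *d, uint64_t new_max_pages)
{
    if (new_max_pages < d->tot_pages)
        return -1;  /* reject: would strand the domain above its max */
    d->max_pages = new_max_pages;
    return 0;
}
```

Doing this inside the hypervisor (rather than in the xm tools) avoids the race noted above, since the comparison against the current allocation happens atomically with respect to the domain's own allocation requests.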


Anthony Liguori



Xen-devel mailing list

