This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] xm mem-max and mem-set

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] xm mem-max and mem-set
From: "Ky Srinivasan" <ksrinivasan@xxxxxxxxxx>
Date: Thu, 13 Apr 2006 18:33:42 -0600
Delivery-date: Thu, 13 Apr 2006 17:34:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
It appears that these commands (and the code backing them) do no
sanity checking and can crash the system if inappropriate values are
chosen. For instance, one can set mem-max to a value that appears
reasonable and render the machine unusable. Consider the case where
the max value being set is less than what is currently allocated to
the domain: all subsequent allocations will fail, and these failures
are treated as fatal in Linux (see hypervisor.c). Once the domain is
up, does it even make sense to lower the max_mem parameter without
risking a system crash? At a minimum, we should ensure that the
max_mem value is at least equal to the domain's current allocation. I
have a trivial patch to Xen that implements this check; it fixes a bug
filed in our bugzilla against SLES10. Would there be interest in such
a patch?


K. Y

Xen-devel mailing list
