This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] xm mem-max and mem-set

To: "Anthony Liguori" <aliguori@xxxxxxxxxx>
Subject: Re: [Xen-devel] xm mem-max and mem-set
From: "Ky Srinivasan" <ksrinivasan@xxxxxxxxxx>
Date: Fri, 14 Apr 2006 07:30:10 -0600
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 14 Apr 2006 06:31:06 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <443EFF80.7060005@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <443EB5A6.E57C.0030.0@xxxxxxxxxx> <443EFF80.7060005@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
The patch I have is for the hypervisor. The reason for the crash is that the
current Linux code cannot deal with a failure when it tries to give up a
contiguous region and repopulate the region with potentially
non-contiguous pages (refer to hypervisor.c for this).


K. Y 
>>> Anthony Liguori <aliguori@xxxxxxxxxx> 04/13/06 9:48 pm >>> 
Ky Srinivasan wrote:
> It appears that these commands (and the code backing these commands) do
> no sanity checking and could potentially get the system to crash if the
> values picked are not appropriate. For instance, one can set the mem-max
> value to a value that appears reasonable and basically render the
> machine unusable. Consider the case where the max value being set is lower
> than what is currently allocated to the domain. All subsequent
> allocations will fail, and these failures are considered fatal in Linux
> (look at hypervisor.c). Once the domain is up, does it even make sense
> to lower the max_mem parameter without risking crashing the system?
> Minimally, we should ensure that the mem-max value is at least equal to
> what the current domain allocation is. I have a trivial patch to xen
> that implements this logic. This patch fixes a bug we have in our
> bugzilla against SLES10. Would there be interest in such a patch?

I'm slightly concerned about the subtle race condition it would 
introduce.  If there's no reason to set max-mem below the current 
reservation (if it causes crashes, which I don't really understand why it
would) then I think it would be something best enforced within the tools.

Why, exactly, would setting max-mem below the current reservation cause
problems in the guest?  I guess it may fail because of grant transfer 
ops (in which case, we really ought to enforce it at the hypervisor level).


Anthony Liguori

> Regards,
> K. Y
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel

Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
