
To: chris <tknchris@xxxxxxxxx>
Subject: Re: [Xen-devel] Feature idea
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Tue, 25 Oct 2011 09:04:55 +0100
Cc: Xen-Devel List <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <CAKnNFz_iB8yGcV-CWcg1is=KnfHjrrHqcvZGGeTF0YyLDUxbNg@xxxxxxxxxxxxxx>
Organization: Citrix Systems, Inc.
References: <CAKnNFz_iB8yGcV-CWcg1is=KnfHjrrHqcvZGGeTF0YyLDUxbNg@xxxxxxxxxxxxxx>
On Tue, 2011-10-25 at 01:30 +0100, chris wrote:
> Is there any mechanism to give a domU memory from dom0 swap? It would
> be neat/useful if we could use that to test things in a VM with
> more RAM than is physically available. Obviously performance wouldn't
> be stellar, but it would still have some usefulness.

The xenpaging feature, which some folks are working on, allows guest RAM
to be swapped to a file in dom0.

It lives in tools/xenpaging. There's been loads of work on it since 4.1,
mainly by Olaf Hering.
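
If you want to experiment, here is a rough sketch of how the 4.1-era tool
is started from dom0. The exact arguments and the pagefile location are
assumptions; tools/xenpaging/README in your tree is the authoritative
reference.

    # Run xenpaging from a directory with enough free space, since the
    # pagefile is (as far as I recall) created in the working directory.
    cd /path/with/enough/space

    # Page out up to <number_of_pages> pages of the given HVM guest.
    # Assumed invocation; check the tool's usage output to confirm.
    xenpaging <domain_id> <number_of_pages>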

Ian.


