This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Memory overhead of HVM domains

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] Memory overhead of HVM domains
From: "Charles Coffing" <ccoffing@xxxxxxxxxx>
Date: Tue, 11 Apr 2006 15:42:30 -0400
Delivery-date: Tue, 11 Apr 2006 12:43:00 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

I was trying to find a solution for bug #521 ("video ram for hvm guests not 
properly accounted for when ballooning").  The trivial (although ugly) answer 
is to allocate an extra (hard-coded) 1026 pages in the getDomainMemory() 
function to account for the increase_reservation call that qemu-dm makes.
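To make the idea concrete, here is a rough sketch of that hard-coded pad.  The 
names and units are illustrative only, not the real xend code; the 1026-page 
figure is simply the extra allocation qemu-dm is observed to request:

```python
PAGE_SIZE_KB = 4        # x86 page size in KB
HVM_EXTRA_PAGES = 1026  # hard-coded pad for qemu-dm's increase_reservation

def get_domain_memory_kb(nominal_mb, is_hvm):
    """Memory to reserve for a domain, in KB, padding HVM guests."""
    kb = nominal_mb * 1024
    if is_hvm:
        kb += HVM_EXTRA_PAGES * PAGE_SIZE_KB
    return kb

# A 128 MB HVM guest: 131072 KB nominal + 4104 KB pad = 135176 KB
print(get_domain_memory_kb(128, True))   # 135176
```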

However, ugly or not, this doesn't work.  In reality, an HVM domain requires 
some extra memory beyond its nominal memory size.  Here are some measurements 
I took (all values in MB; the overhead is approximate, measured by comparing 
the memory remaining in Xen's DMA and DOM memory zones before and after 
creating the HVM domU):

Nominal    Overhead
-------    --------
   16        14.2
  128        16.3
  256        16.6
  512        17.1
 1024        18.4

4 MB of this is due to the VM's video memory.  I expect additional state is 
stored in the qemu-dm process, but that consumes already-allocated dom0 
memory, and so wouldn't show up above.  I also see references to VMCBs / 
VMCSs, but those are allocated on Xen's heap, and so also wouldn't show up 
above.
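For what it's worth, a back-of-envelope least-squares fit through the table 
above (my own rough estimate, not part of the measurements) suggests the 
overhead is a fixed ~15 MB plus roughly 0.3% of the nominal size, i.e. a 
constant chunk plus something that scales with guest memory:

```python
# Least-squares line through (nominal, overhead) pairs, values in MB.
data = [(16, 14.2), (128, 16.3), (256, 16.6), (512, 17.1), (1024, 18.4)]

n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # MB overhead per MB nominal
base = (sy - slope * sx) / n                       # fixed overhead in MB

print("overhead ~= %.1f MB + %.2f%% of nominal" % (base, slope * 100))
# overhead ~= 15.2 MB + 0.34% of nominal
```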

So several questions:

1. Where's the extra memory going?

2. Should we even try to calculate it for auto-ballooning?  It seems like many 
factors could affect it, and any such calculation would be very brittle.

I'll gladly code up and test a patch to auto-balloon for HVM domains, but I 
first want to understand what's going on.


