I think the difference between, say, 256 ~ 512 ~ 1024 is reasonable; for example,
512M of RAM requires about a 512K p2m table.
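The p2m arithmetic above can be sketched as follows (a minimal sketch, assuming 4 KB pages and 4-byte p2m entries; these constants are my assumptions, not stated in the thread):

```python
# Rough p2m table size: one entry per guest page.
# Assumed constants (not from the thread): 4 KB pages, 4-byte entries.
PAGE_SIZE = 4096
P2M_ENTRY_BYTES = 4

def p2m_table_kb(ram_mb):
    """Size of the p2m table, in KB, for a guest with ram_mb MB of RAM."""
    pages = ram_mb * 1024 * 1024 // PAGE_SIZE
    return pages * P2M_ENTRY_BYTES // 1024

print(p2m_table_kb(512))  # 512, i.e. a 512M guest needs ~512K of p2m table
```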
So the point is the size for the 16M vmx domain. If we need to investigate that
size, maybe we can calculate each item one by one, including: the 1:1 mapping
table, the shadow page tables, the p2m table, and the shadow cache. Anything else left?
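As a cross-check on which items scale with domain size, here is a back-of-the-envelope linear fit over the measurements quoted further down in Charles's mail (my own arithmetic, taking the quoted numbers at face value):

```python
# Overhead measurements from the thread: domain size (MB) -> overhead (MB).
measurements = {16: 14.2, 128: 16.3, 256: 16.6, 512: 17.1, 1024: 18.4}

# Fit overhead ~= fixed + slope * size, using the 128 MB and 1024 MB rows.
slope_mb_per_mb = (measurements[1024] - measurements[128]) / (1024 - 128)
fixed_mb = measurements[128] - slope_mb_per_mb * 128

print(f"size-dependent part: ~{slope_mb_per_mb * 1024:.1f} KB per MB of guest RAM")
print(f"size-independent part: ~{fixed_mb:.1f} MB")
```

This gives roughly 2.4 KB of overhead per MB of guest RAM (comparable to the p2m table's 1 KB per MB plus shadow structures) and about 16 MB that does not scale with size; notably, the 16 MB row sits well below that line, which is part of the puzzle.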
>From: Charles Coffing [mailto:ccoffing@xxxxxxxxxx]
>Sent: April 11, 2006 15:17
>To: Jiang, Yunhong; xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [Xen-devel] Memory overhead of HVM domains
>On Tue, Apr 11, 2006 at 3:58 PM, in message
><FFEFE1749526634699CD3AC2EDB7627A0184B6E7@pdsmsx406>, "Jiang, Yunhong" wrote:
>> From your definition of overhead, I think your overhead should include
>> the shadow page tables, the p2m table and the shadow cache, am I right?
>Right. But 10 to 14 MB** for just a 16 MB domU seems excessive for
>these things, doesn't it?
>** my numbers below minus 4 MB for video
>> Not sure if there are any other sources.
>> Also, I just found a bug in qemu which may occupy double the memory
>> size if you are using X Windows.
>That might help explain it, thanks.
>> Yunhong Jiang
>> ----- Original Message -----
>>>From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>>[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Charles Coffing
>>>Sent: April 11, 2006 12:43
>>>To: xen-devel@xxxxxxxxxxxxxxxxxxx
>>>Subject: [Xen-devel] Memory overhead of HVM domains
>>>I was trying to find a solution for bug #521 ("video ram for hvm not
>>>properly accounted for when ballooning"). The trivial (although ugly)
>>>fix is to allocate an extra (hard-coded) 1026 pages in the [...]
>>>function to account for the increase_reservation that qemu-dm will
>>>issue. However, ugly or not, this doesn't work. In reality, an HVM
>>>domain uses some extra memory in addition to its nominal memory size.
>>>Here are some measurements I did (everything in MB; overhead is
>>>approximate, from looking at memory remaining in Xen's DMA and DOM
>>>memory zones before and after creating the HVM domU):
>>>  Domain size    Overhead
>>>       16          14.2
>>>      128          16.3
>>>      256          16.6
>>>      512          17.1
>>>     1024          18.4
>>>4 MB of this is due to the VM's video memory. I expect additional
>>>memory to be used by the qemu-dm process, but that would consume
>>>already-allocated (dom0) memory, and so wouldn't be represented above.
>>>I also see references to VMCBs / VMCSs, but those are getting
>>>allocated on Xen's heap, and so also wouldn't show up here.
>>>So several questions:
>>>1. Where's the extra memory going?
>>>2. Should we even try to calculate it for auto-ballooning? It seems
>>>many factors could affect it, and any such calculation would be very
>>>fragile. I'll gladly code up and test a patch to auto-balloon for HVM
>>>domains, but I first want to understand what's going on.
Xen-devel mailing list