
RE: [Xen-devel] Memory overhead of HVM domains


  • To: "Charles Coffing" <ccoffing@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • Date: Wed, 12 Apr 2006 07:02:56 +0800
  • Delivery-date: Tue, 11 Apr 2006 16:03:27 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcZdtaWC0YJ9o7VNRs20Znk47GN+SQAAi68g
  • Thread-topic: [Xen-devel] Memory overhead of HVM domains

I think the difference between, say, 256 ~ 512 ~ 1024 is reasonable; for example, 
512M of RAM requires about a 512K p2m table.
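That rule of thumb can be sketched as a quick computation. This is a minimal sketch, assuming one p2m entry per 4 KiB guest page; the 4-byte entry size is an assumption (32-bit frame numbers), not a value quoted in this thread:

```python
# Rough p2m table size estimate: one entry per 4 KiB guest page.
# ENTRY_SIZE of 4 bytes is an assumption (32-bit frame numbers).
PAGE_SIZE = 4096   # bytes per guest page
ENTRY_SIZE = 4     # bytes per p2m entry (assumed)

def p2m_table_bytes(guest_ram_bytes):
    pages = guest_ram_bytes // PAGE_SIZE
    return pages * ENTRY_SIZE

# 512 MiB of guest RAM -> 512 KiB of p2m table, matching the figure above.
print(p2m_table_bytes(512 * 2**20) // 1024, "KiB")
```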

So the real question is the size for the 16M VMX domain. If we need to investigate 
that size, maybe we can calculate each item one by one, including: the 1:1 mapping 
table, shadow page tables, the p2m table, and the shadow cache. Is anything else left?
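A minimal sketch of that item-by-item tally might look like the following. Only the p2m rule (about 1 KiB per MiB of guest RAM) and the 4 MiB of video memory come from this thread; the shadow and 1:1 mapping figures are placeholders for illustration, not values taken from the Xen source:

```python
# Itemized overhead tally for an HVM guest, in KiB. The shadow and 1:1
# mapping numbers below are placeholders/assumptions, not measured values.
def estimate_overhead_kib(guest_mib,
                          shadow_kib=8 * 1024,   # placeholder guess
                          mapping_kib=None):     # placeholder rule below
    p2m_kib = guest_mib          # ~1 KiB per MiB of guest RAM
    video_kib = 4 * 1024         # 4 MiB video memory
    if mapping_kib is None:
        mapping_kib = guest_mib  # assume same order as the p2m table
    return p2m_kib + video_kib + shadow_kib + mapping_kib

print(estimate_overhead_kib(16) // 1024, "MiB")
```

Refining each placeholder against the actual allocators in the hypervisor would turn this from a guess into the per-item accounting suggested above.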

Yunhong Jiang

>-----Original Message-----
>From: Charles Coffing [mailto:ccoffing@xxxxxxxxxx]
>Sent: April 11, 2006 15:17
>To: Jiang, Yunhong; xen-devel@xxxxxxxxxxxxxxxxxxx
>Subject: RE: [Xen-devel] Memory overhead of HVM domains
>
>On Tue, Apr 11, 2006 at  3:58 PM, in message
><FFEFE1749526634699CD3AC2EDB7627A0184B6E7@pdsmsx406>, "Jiang, Yunhong"
><yunhong.jiang@xxxxxxxxx> wrote:
>> From your definition of overhead, I think your overhead should include
>the
>> shadow page table, the p2m table, and the shadow cache, am I right?
>
>Right.  But 10 to 14 MB** for just a 16 MB domU seems excessive for
>these things, doesn't it?
>
>** my numbers below minus 4 MB for video
>
>
>> Not sure if there are any other sources.
>>
>> Also I just found a bug in qemu, which may occupy double the size of the
>video
>> memory if you are using the X Window System.
>
>That might help explain it, thanks.
>
>
>> Thanks
>> Yunhong Jiang
>>
>>>-----Original Message-----
>>>From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>>[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Charles
>Coffing
>>>Sent: April 11, 2006 12:43
>>>To: xen-devel@xxxxxxxxxxxxxxxxxxx
>>>Subject: [Xen-devel] Memory overhead of HVM domains
>>>
>>>Hi,
>>>
>>>I was trying to find a solution for bug #521 ("video ram for hvm
>guests not
>>>properly accounted for when ballooning").  The trivial (although
>ugly) answer
>>>is to allocate an extra (hard-coded) 1026 pages in the
>getDomainMemory()
>>>function to account for the increase_reservation that qemu-dm will
>do.
>>>
>>>However, ugly or not, this doesn't work.  In reality, an HVM domain
>requires
>>>some extra memory in addition to its nominal memory size.  Here are
>some
>>>measurements I did (everything in MB; overhead is approximate and
>measured by
>>>looking at memory remaining in Xen's DMA and DOM memory zones before
>and
>> after
>>>creating the HVM domU):
>>>
>>>Nominal    Overhead
>>>-------    --------
>>>   16        14.2
>>>  128        16.3
>>>  256        16.6
>>>  512        17.1
>>> 1024        18.4
>>>
>>>4 MB of this is due to the VM's video memory.  I expect additional
>state
>> would
>>>be stored in the qemu-dm process, but that would consume
>already-allocated dom0
>>>memory, and so wouldn't be represented above.  I also see references
>to VMCBs
>>>/ VMCSs, but those are getting allocated on Xen's heap, and so also
>not
>>>represented above.
>>>
>>>So several questions:
>>>
>>>1. Where's the extra memory going?
>>>
>>>2. Should we even try to calculate it for auto-ballooning?  It seems
>like many
>>>factors could affect it, and any such calculation would be very
>brittle.
>>>
>>>I'll gladly code up and test a patch to auto-balloon for HVM
>domains, but I
>> first
>>>want to understand what's going on.
>>>
>>>Thanks,
>>>Chuck
>>>
>>>
>>>
>>>_______________________________________________
>>>Xen-devel mailing list
>>>Xen-devel@xxxxxxxxxxxxxxxxxxx
>>>http://lists.xensource.com/xen-devel

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

