WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ia64-devel

Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation

To: Jarod Wilson <jwilson@xxxxxxxxxx>
Subject: Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, don't panic on over-allocation
From: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
Date: Thu, 2 Aug 2007 11:12:00 +0900
Cc: Alex Williamson <alex.williamson@xxxxxx>, xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 01 Aug 2007 19:09:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <46B0D5AF.1050309@xxxxxxxxxx>
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
References: <46AFF7F6.5090105@xxxxxxxxxx> <1185943424.6802.98.camel@bling> <20070801052434.GC14448%yamahata@xxxxxxxxxxxxx> <46B08EE2.5020106@xxxxxxxxxx> <46B0ACEB.3080200@xxxxxxxxxx> <46B0C21C.9010605@xxxxxxxxxx> <46B0D5AF.1050309@xxxxxxxxxx>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.4.2.1i
On Wed, Aug 01, 2007 at 02:49:19PM -0400, Jarod Wilson wrote:

> > Rather than that approach, a simple 'max_dom0_pages =
> > avail_domheap_pages()' is working just fine on both my 4G and 16G boxes,
> > with the 4G box now getting ~260MB more memory for dom0 and the 16G box
> > getting ~512MB more. Are there potential pitfalls here? 

Hi Jarod. Sorry for the delayed reply.
Reviewing Alex's mail, it seems xenheap may have been exhausted at that time.
However, now that the p2m table is allocated from domheap,
memory for the p2m table needs to be counted.
It can be estimated very roughly as dom0_pages / PTRS_PER_PTE.
Here PTRS_PER_PTE = 2048 with a 16KB page size, 1024 with an 8KB page size...

The p2m table needs about  2MB for  4GB of dom0 with a 16KB page size,
                    about  8MB for 16GB,
                    about 43MB for 86GB,
                    about 48MB for 96GB.

(This counts only PTE pages and assumes that dom0 memory is contiguous.
A more precise calculation would also count PMD and PGD pages and account
for sparseness, but their size would be only on the order of KB; even for
a 1TB dom0 it would be about 1MB, so I ignored them.)

With max_dom0_pages = avail_domheap_pages() as you proposed,
I suppose we would end up using xenheap for the p2m table.
Xenheap is at most 64MB in size, and so is precious.

How about this heuristic?
max_dom0_pages = avail_domheap_pages() - avail_domheap_pages() / PTRS_PER_PTE;

-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel
