WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-ia64-devel

Re: [Xen-ia64-devel] [PATCH] Use saner dom0 memory and vcpu defaults, d

Jarod Wilson wrote:
> Jarod Wilson wrote:
>> Jarod Wilson wrote:
>>> Isaku Yamahata wrote:
>>>> On Wed, Aug 01, 2007 at 02:49:19PM -0400, Jarod Wilson wrote:
>>>>
>>>>>> Rather than that approach, a simple 'max_dom0_pages =
>>>>>> avail_domheap_pages()' is working just fine on both my 4G and 16G
>>>>>> boxes,
>>>>>> with the 4G box now getting ~260MB more memory for dom0 and the
>>>>>> 16G box
>>>>>> getting ~512MB more. Are there potential pitfalls here? 
>>>> Hi Jarod. Sorry for the delayed reply.
>>>> Reviewing Alex's mail, it might have used up xenheap at that time.
>>>> However, now that the p2m table is allocated from the domheap,
>>>> memory for the p2m table should be counted.
>>>> It can be calculated very roughly as dom0_pages / PTRS_PER_PTE.
>>>> Here PTRS_PER_PTE = 2048 with a 16KB page size, 1024 with an 8KB
>>>> page size...
>>>>
>>>> The p2m table needs about  2MB for  4GB of dom0 with a 16KB page size,
>>>>                     about  8MB for 16GB,
>>>>                     about 43MB for 86GB,
>>>>                     about 48MB for 96GB.
>>>> (It counts only PTE pages and assumes that dom0 memory is
>>>> contiguous. A more precise calculation would also count PMD and PGD
>>>> pages and account for sparseness, but their memory size would only
>>>> be on the order of KB. Even for a 1TB dom0, it would be about 1MB,
>>>> so I ignored them.)
>>>>
>>>> With max_dom0_pages = avail_domheap_pages() as you proposed,
>>>> we would use xenheap for the p2m table, I suppose.
>>>> Xenheap is at most 64MB in size, and so precious.
>>>>
>>>> How about this heuristic?
>>>> max_dom0_pages = avail_domheap_pages() - avail_domheap_pages() /
>>>> PTRS_PER_PTE;
>>> Sounds quite reasonable to me. I'm build- and boot-testing an updated
>>> patch, which, assuming all goes well, I'll ship off to the list a bit
>>> later today...
>>>
>>> Ah, one more thing I'm adding: if one specifies dom0_mem=0 on the xen
>>> command line, that will now allocate all available memory.
>>
>> ...and here it is. I shuffled a few things around in the max_dom0_size
>> calculation for better readability and to avoid multiple calls to
>> avail_domheap_pages() (my assumption being that it's increasingly
>> costly on larger and larger systems).
>>
>> Indeed, on my 16GB system, it's only 8MB less than the v2 incantation,
>> and the dom0_mem=0 option does properly allocate all available memory
>> to dom0. I'm quite happy with this version if everyone else is...
> 
> Eep! I retract my happiness... Seems with all memory allocated like
> this, I can't get a guest to actually boot. I get this when I try to
> bring one up via xm create:
> 
> Error: (4, 'Out of memory', "xc_dom_boot_mem_init: can't allocate low
> memory for domain\n")
> 
> I *think* with the same fudge-factor swiped from x86, I actually could
> get guests to boot when allocating as much as possible, but I'm not
> certain anymore. I'll prod it some more tomorrow... :\

Okay, I'm happy with these changes again. They're not at fault -- ia64
xen dom0's inability to automatically balloon down is. If I give my 16G
box only 12G of RAM and try to spin up a 4G guest, I get the same
failure. However, if I do an 'xm mem-set 0 8G' to manually balloon it
down, then start up the guest, all is well.
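[The manual workaround described above, as a sketch; the domain ID, size, and guest config path are illustrative:

```shell
# Balloon dom0 down by hand before starting the guest, since ia64 dom0
# does not auto-balloon here. Domain 0, target given in MB (the mail's
# "8G" suffix form also worked on this xen 3.1 codebase).
xm mem-set 0 8192
xm create /etc/xen/guest.cfg   # guest config path is illustrative
```
-ed.]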

NB: this is with the RHEL5 xen 3.1-based codebase, not full
xen-ia64-unstable, so I don't know if perhaps this has already been
addressed. If so, I'd certainly appreciate pointers to changesets. :)

-- 
Jarod Wilson
jwilson@xxxxxxxxxx



_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel