WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ppc-devel

Re: [XenPPC] [RFC] 64mb Chunk Allocator


On Thu, 2006-06-22 at 16:56 -0400, Michal Ostrowski wrote:
> How is this expected to be integrated into existing memory allocators in Xen?

As I understand Jimi's approach, all of physical memory would be divided into 64MB chunks. The first chunk,
starting at address 0x00000000, belongs to the hypervisor. When a domain is started, at least one chunk is
allocated, with the RMOR set so the domain sees real address 0x00000000. A domain whose maxmem is 128MB would
get two chunks, and so on. When the domain is destroyed, its chunks are freed.

While this approach seems straightforward, memory utilization could be better, especially given the 64MB
chunk size. On the other hand, allowing multiple domains to share a chunk leads to the fragmentation issues
Hollis raised.
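For what it's worth, the scheme above could be sketched roughly as follows. This is only an illustration of the idea, not code from the Xen tree; all names (chunk_owner, alloc_chunk, and so on) and the 1GB memory size are made up for the example.

```c
/* Illustrative sketch of a 64MB chunk allocator: physical memory is
 * divided into 64MB chunks, chunk 0 is reserved for the hypervisor,
 * and whole chunks are handed to domains and freed on destroy. */
#include <assert.h>
#include <stdint.h>

#define CHUNK_SHIFT 26                    /* 64MB = 2^26 bytes */
#define CHUNK_SIZE  (1UL << CHUNK_SHIFT)
#define NR_CHUNKS   16                    /* e.g. 1GB of physical memory */

static int chunk_owner[NR_CHUNKS];        /* -1 = free, 0 = hypervisor, >0 = domid */

static void chunks_init(void)
{
    for (int i = 0; i < NR_CHUNKS; i++)
        chunk_owner[i] = -1;
    chunk_owner[0] = 0;                   /* chunk at 0x00000000 belongs to Xen */
}

/* Give one free 64MB-aligned chunk to a domain; returns its real
 * address, or (uint64_t)-1 if no chunk is free. */
static uint64_t alloc_chunk(int domid)
{
    for (int i = 1; i < NR_CHUNKS; i++) {
        if (chunk_owner[i] == -1) {
            chunk_owner[i] = domid;
            return (uint64_t)i << CHUNK_SHIFT;
        }
    }
    return (uint64_t)-1;
}

/* On domain destroy, reclaim every chunk it owned. */
static void free_domain_chunks(int domid)
{
    for (int i = 1; i < NR_CHUNKS; i++)
        if (chunk_owner[i] == domid)
            chunk_owner[i] = -1;
}
```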




From: hollisb@xxxxxxxxxxxxxxxxxxxxxxx
Date: 06/23/2006 11:22 AM
To: Michal Ostrowski <mostrows@xxxxxxxxxxxxxx>
Cc: Dan E Poff/Watson/IBM@IBMUS, xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [XenPPC] [RFC] 64mb Chunk Allocator

On Thu, 2006-06-22 at 16:56 -0400, Michal Ostrowski wrote:
> How is this expected to be integrated into existing memory allocators in
> Xen?  Would the other allocators always allocate their own chunks to
> work with and divide them up into smaller pieces or would other
> allocators compete with the chunk allocator?

Yeah, I think this is the key question.

Once you've given a chunk to the heap allocator, I don't think you can
ever expect it to be defragmented and given back. I worry about the
classic fragmentation problem:
"Create domain." -> -ENOMEM
"How much memory is free?" -> lots and lots
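A toy model of that failure mode, assuming a 4KB page size and a 256MB machine (all names and sizes here are invented for the example): if the heap allocator leaves even one page pinned in each 64MB region, total free memory stays huge while no whole chunk can ever be handed to a new domain.

```c
/* Toy illustration of the fragmentation problem: lots of free pages in
 * total, yet no 64MB-aligned contiguous run available for a domain. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE        4096UL
#define PAGES_PER_CHUNK  (64UL * 1024 * 1024 / PAGE_SIZE)  /* 16384 pages */
#define NR_PAGES         (4 * PAGES_PER_CHUNK)             /* 256MB of RAM */

static bool page_free[NR_PAGES];

/* Is any 64MB-aligned run of PAGES_PER_CHUNK pages entirely free? */
static bool chunk_available(void)
{
    for (size_t base = 0; base + PAGES_PER_CHUNK <= NR_PAGES;
         base += PAGES_PER_CHUNK) {
        size_t i;
        for (i = 0; i < PAGES_PER_CHUNK; i++)
            if (!page_free[base + i])
                break;
        if (i == PAGES_PER_CHUNK)
            return true;
    }
    return false;
}

static size_t total_free_pages(void)
{
    size_t n = 0;
    for (size_t i = 0; i < NR_PAGES; i++)
        if (page_free[i])
            n++;
    return n;
}
```

With one allocated page per chunk, total_free_pages() reports nearly all of memory free while chunk_available() is false: "lots and lots" free, yet -ENOMEM for a new domain.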

> Whereas the allocation strategies chosen by Linux encourage
> fragmentation and make memory hot-unplug difficult, this could be an
> opportunity to introduce a generic large-granularity memory-management
> layer that makes it easier to deal with hot-plug and large pages.

That's an interesting point.

--
Hollis Blanchard
IBM Linux Technology Center


_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel