Re: [XenPPC] [RFC] 64mb Chunk Allocator
hollis wrote on 06/26/2006 12:31:58 PM:
> On Sat, 2006-06-24 at 16:31 -0400, Dan E Poff wrote:
> >
> > On Thu, 2006-06-22 at 16:56 -0400, Michal Ostrowski wrote:
> > > How is this expected to be integrated into existing memory
> > > allocators in Xen?
> >
> > As I understand Jimi's approach, all of physical memory would be
> > divided into 64MB chunks. The first chunk, starting at address
> > 0x00000000, belongs to the hypervisor. When a domain is started, at
> > least one chunk is allocated, with the RMOR set to provide real
> > address 0x00000000. A domain whose maxmem is 128MB would have two
> > chunks, and so on. When the domain is destroyed, its chunk(s) are
> > freed.
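For concreteness, here is a minimal sketch of how I read that design
(a plain bitmap over 64MB chunks; all names and sizes are mine for
illustration, not actual Xen code):

#include <stdint.h>
#include <string.h>

#define CHUNK_SHIFT  26                    /* 64MB = 2^26 bytes */
#define CHUNK_SIZE   (1UL << CHUNK_SHIFT)
#define MAX_CHUNKS   1024                  /* enough for 64GB */

static uint8_t chunk_used[MAX_CHUNKS];     /* 1 = allocated */
static unsigned long nr_chunks;            /* set at boot */

void chunk_init(unsigned long total_mem)
{
    nr_chunks = total_mem >> CHUNK_SHIFT;
    memset(chunk_used, 0, sizeof(chunk_used));
    chunk_used[0] = 1;  /* chunk 0 (real address 0) belongs to Xen */
}

/* Hand out one free chunk; returns its physical base, or -1 if none.
 * A domain's first chunk would have the RMOR pointed at it so the
 * domain sees it at real address 0x00000000. */
long chunk_alloc(void)
{
    unsigned long i;
    for (i = 1; i < nr_chunks; i++) {
        if (!chunk_used[i]) {
            chunk_used[i] = 1;
            return (long)(i << CHUNK_SHIFT);
        }
    }
    return -1;
}

void chunk_free(unsigned long phys_base)
{
    chunk_used[phys_base >> CHUNK_SHIFT] = 0;
}

Under this reading, a domain with maxmem of 128MB calls chunk_alloc()
twice (i.e., ceil(maxmem / 64MB) chunks), and destroying the domain
just frees them again.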
>
> Again, I see no reason why an ordinary domain would need more than
> one chunk.
Hollis, do you think that, beyond the first real-mode area, all memory
should be allocated in individual page frames? Would it be a bad idea
to go whole hog and allocate all memory used by OSes in 64MB chunks,
and then do something special for pages that are flipped?
> > While this approach seems straightforward, memory utilization
> > could be better, especially considering the chunk size. On the
> > other hand, allowing multiple domains access to a chunk leads to
> > the fragmentation issues raised by Hollis.
>
> I'm not sure what you mean by this. Fragmentation will exist for all
> chunks used by the heap allocator, for example. If you're planning to
> allow the smaller allocators access to only a single chunk, then that
> would prevent fragmentation elsewhere but would limit hypervisor
> operations.
Yes, I think the assumption is that most memory is used by the guest
OSes, and that the hypervisor's own heap uses relatively little. Do
you think this is wrong?
> I would be interested to see allocator usage statistics from a
> fully-running system (especially with stressed virtual IO devices).
> If it's a relatively fixed amount of memory then I'm OK with
> restricting the page/heap allocators to a fixed number of chunks. How
> to determine that number is another story though...
That would be interesting. Ian asserted to me before that the amount
of memory used for flipping, i.e., the memory allocated for
networking, is very tiny.
I guess you could start off at one of two extremes: 1) all allocations
in big chunks, except for a few specialized cases (e.g., buffers for
page flipping), or 2) all allocations in individual page frames except
for what you need for the RMA. Presumably you can start off from
either extreme and figure out where things break, right?
> >
> > hollisb@xxxxxxxxxxxxxxxxxxxxxxx wrote on 06/23/2006 11:22 AM:
> >
> > On Thu, 2006-06-22 at 16:56 -0400, Michal Ostrowski wrote:
> > > How is this expected to be integrated into existing memory
> > > allocators in Xen? Would the other allocators always allocate
> > > their own chunks to work with and divide them up into smaller
> > > pieces, or would other allocators compete with the chunk
> > > allocator?
> >
> > Yeah, I think this is the key question.
> >
> > Once you've given a chunk to the heap allocator, I don't think you
> > can ever expect it to be defragmented and given back. I worry about
> > the classic fragmentation problem:
> > "Create domain." -> -ENOMEM
> > "How much memory is free?" -> lots and lots
> >
> > > Whereas the allocation strategies chosen by Linux encourage
> > > fragmentation and make memory hot-unplug difficult, this could
> > > be an opportunity to introduce a generic large-granularity
> > > memory-management layer that makes it easier to deal with
> > > hot-plug and large pages.
> >
> > That's an interesting point.
> >
> > --
> > Hollis Blanchard
> > IBM Linux Technology Center
> >
> >
> --
> Hollis Blanchard
> IBM Linux Technology Center
>
>
_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel