Re: [PATCH DO NOT APPLY] docs: Document allocator properties and the rubric for using them
> On Feb 16, 2021, at 10:55 AM, Julien Grall <julien@xxxxxxx> wrote:
>
> Hi George,
>
> On 16/02/2021 10:28, George Dunlap wrote:
>> Document the properties of the various allocators and lay out a clear
>> rubric for when to use each.
>>
>> Signed-off-by: George Dunlap <george.dunlap@xxxxxxxxxx>
>> ---
>> This doc is my understanding of the properties of the current
>> allocators (alloc_xenheap_pages, xmalloc, and vmalloc), and of Jan's
>> proposed new wrapper, xvmalloc.
>>
>> xmalloc, vmalloc, and xvmalloc were designed more or less to mirror
>> similar functions in Linux (kmalloc, vmalloc, and kvmalloc
>> respectively).
>>
>> CC: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> CC: Jan Beulich <jbeulich@xxxxxxxx>
>> CC: Roger Pau Monne <roger.pau@xxxxxxxxxx>
>> CC: Stefano Stabellini <sstabellini@xxxxxxxxxx>
>> CC: Julien Grall <julien@xxxxxxx>
>> ---
>>  .../memory-allocation-functions.rst          | 118 ++++++++++++++++++
>>  1 file changed, 118 insertions(+)
>>  create mode 100644 docs/hypervisor-guide/memory-allocation-functions.rst
>>
>> diff --git a/docs/hypervisor-guide/memory-allocation-functions.rst b/docs/hypervisor-guide/memory-allocation-functions.rst
>> new file mode 100644
>> index 0000000000..15aa2a1a65
>> --- /dev/null
>> +++ b/docs/hypervisor-guide/memory-allocation-functions.rst
>> @@ -0,0 +1,118 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Xenheap memory allocation functions
>> +===================================
>> +
>> +In general Xen contains two pools (or "heaps") of memory: the *xen
>> +heap* and the *dom heap*.  Please see the comment at the top of
>> +``xen/common/page_alloc.c`` for the canonical explanation.
>> +
>> +This document describes the various functions available to allocate
>> +memory from the xen heap: their properties and rules for when they
>> +should be used.
>> +
>> +
>> +TLDR guidelines
>> +---------------
>> +
>> +* By default, ``xvmalloc`` (or its helper cognates) should be used
>> +  unless you know you have specific properties that need to be met.
>> +
>> +* If you need memory which needs to be physically contiguous, and may
>> +  be larger than ``PAGE_SIZE``...
>> +
>> +  - ...and is order 2, use ``alloc_xenheap_pages``.
>> +
>> +  - ...and is not order 2, use ``xmalloc`` (or its helper cognates).
>> +
>> +* If you don't need memory to be physically contiguous, and know the
>> +  allocation will always be larger than ``PAGE_SIZE``, you may use
>> +  ``vmalloc`` (or one of its helper cognates).
>> +
>> +* If you know that the allocation will always be less than
>> +  ``PAGE_SIZE``, you may use ``xmalloc``.
>
> AFAICT, the determining factor is PAGE_SIZE. This is a single value on
> x86 (e.g. 4KB), but on other architectures there may be multiple values.
>
> For instance, on Arm, this could be 4KB, 16KB, or 64KB (note that only
> the former is so far supported on Xen).
>
> For Arm and common code, it feels to me we can't make a clear decision
> based on PAGE_SIZE. Instead, I continue to think that the decision
> should only be based on physically vs virtually contiguous.
>
> We can then add further rules for x86-specific code if the maintainers
> want.

Sorry, my second mail was somewhat delayed -- my intent was: 1) post the
document I'd agreed to write, and 2) say why I think the proposal is a
bad idea. :-)

Re page size -- the vast majority of the time, we're talking about
"knowing" that the size is less than 4k.  If we're confident that no
architecture will ever have a page size less than 4k, then we know that
all allocations less than 4k will always be less than PAGE_SIZE.
Obviously larger page sizes then become an issue.  But in any case --
unless we have BUG_ON(size > PAGE_SIZE), we're going to need a fallback,
which is going to cost one precious conditional, making the whole
exercise pointless.

 -George
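[Editor's note: for illustration, below is a minimal sketch -- not the
actual Xen implementation -- of the kind of size-based fallback an
xvmalloc-style wrapper implies, i.e. the "one precious conditional"
discussed above.  The helper name xvmalloc_bytes_sketch and the exact
header choices are assumptions made for the example; the real interfaces
live in the Xen tree.]

    /*
     * Illustrative sketch only: pick the physically contiguous path for
     * small requests and fall back to the virtually contiguous path
     * otherwise.  Header choices below are assumptions.
     */
    #include <xen/mm.h>        /* PAGE_SIZE */
    #include <xen/xmalloc.h>   /* xmalloc_bytes() */
    #include <xen/vmap.h>      /* vmalloc() */

    static void *xvmalloc_bytes_sketch(size_t size)
    {
        /* Small request: the physically contiguous xmalloc path suffices. */
        if ( size <= PAGE_SIZE )
            return xmalloc_bytes(size);

        /* Larger request: fall back to the virtually contiguous path. */
        return vmalloc(size);
    }

Whatever its exact shape, the point made in the reply above is that
unless the size is statically known (or bounded by a BUG_ON), some such
conditional has to exist somewhere in the wrapper.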