
Re: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation





On 20/05/2021 06:36, Penny Zheng wrote:
Hi Julien

Hi Penny,

+
+Later, when a domain gets destroyed and its memory relinquished, only pages
+in `page_list` go back to the heap; pages in `reserved_page_list` shall not.

While going through the series, I could not find any code implementing this.
However, this is not enough to prevent a page from going to the heap allocator,
because a domain can release memory at runtime using hypercalls like
XENMEM_remove_from_physmap.

One of the use cases is when the guest decides to balloon out some memory.
This will call free_domheap_pages().

Effectively, you are treating static memory as domheap pages. So I think it
would be better if you hooked into free_domheap_pages() to decide which
allocator is used.

Now, if a guest can balloon out memory, it can also balloon in memory.
There are two cases:
     1) The region used to be a statically allocated RAM region.
     2) The region used to be unallocated.

I think for 1), we need to be able to re-use the pages previously allocated.
For 2), it is not clear to me whether a guest with statically allocated memory
should be allowed to allocate "dynamic" pages.


Yeah, I share your view on hooking into free_domheap_pages(). I'm thinking that
for pages with PGC_reserved set, we may create a new function,
free_staticmem_pages(), to free them.
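
As a strawman, the dispatch could look roughly like this (free_staticmem_pages()
and PGC_reserved are the new additions proposed here; the real
free_domheap_pages() also does scrubbing and per-domain accounting, all
omitted from this sketch):

    void free_domheap_pages(struct page_info *pg, unsigned int order)
    {
        /* Statically allocated pages carry PGC_reserved and must not
         * be handed back to the buddy allocator. */
        if ( pg->count_info & PGC_reserved )
            free_staticmem_pages(pg, 1UL << order, false /* need_scrub */);
        else
            free_heap_pages(pg, order, false /* need_scrub */);
    }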

As for ballooning out or in, it is not supported here.

It is fine that the series doesn't implement it yet. However, I think the design document should take ballooning into account. This is because even if...

Domains on Static Allocation and 1:1 direct-map are all based on dom0less right
now, so no PV, grant table, event channel, etc. are considered.

... there is no PV support & co, a guest is still able to issue hypercalls (they are not hidden). Therefore your guest will be able to disturb your static allocation.


Right now, it only supports devices being passed through into the guest.

+### Memory Allocation for Domains on Static Allocation
+
+RAM regions pre-defined as static memory for one specific domain shall
+be parsed and reserved from the beginning. And they shall never go to
+any memory allocator for any use.

Technically, you are introducing a new allocator. So do you mean they should
be given to neither the buddy allocator nor the boot allocator?


Yes. These pre-defined RAM regions will not be given to any current
memory allocator. If they were, there would be no guarantee that they
would not be allocated for other uses.

And right now, in my current design, these pre-defined RAM regions are either
for one specific domain as guest RAM or for the Xen heap.
+
+Later when allocating static memory for this specific domain, after
+acquiring those reserved regions, users need to do a set of
+verifications before assigning.
+For each page there, it at least includes the following steps:
+1. Check if it is in free state and has zero reference count.
+2. Check if the page is reserved (`PGC_reserved`).
+
+Then, these pages are assigned to this specific domain, and all pages go
+to one new linked page list, `reserved_page_list`.
+
+At last, set up the guest P2M mapping. By default, it shall be mapped to
+the fixed guest RAM addresses `GUEST_RAM0_BASE` and `GUEST_RAM1_BASE`,
+just like for normal domains. But later in the 1:1 direct-map design, if
+`direct-map` is set, the guest physical address will equal the physical
+address.
+
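
A minimal sketch of the two checks above could look as follows (the helper name
is hypothetical; PGC_reserved is the new flag proposed in this series):

    static bool page_ok_for_static_alloc(const struct page_info *pg)
    {
        /* 1. The page must be free with a zero reference count. */
        if ( !page_state_is(pg, free) ||
             (pg->count_info & PGC_count_mask) != 0 )
            return false;

        /* 2. The page must have been reserved at boot. */
        return pg->count_info & PGC_reserved;
    }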
+### Static Allocation for Xen itself
+
+### New Deivce Tree Node: `xen,reserved_heap`

s/Deivce/Device/


Thx.

+
+Static memory for Xen heap refers to parts of RAM reserved in the
+beginning for Xen heap only. The memory is pre-defined through XEN
+configuration using physical address ranges.
+
+The reserved memory for Xen heap is an optional feature and can be
+enabled by adding a device tree property in the `chosen` node.
+Currently, this feature is only supported on AArch64.
+
+Here is one example:
+
+
+        chosen {
+            xen,reserved-heap = <0x0 0x30000000 0x0 0x40000000>;
+            ...
+        };
+
+The 1GB of RAM at 0x30000000 will be reserved as heap memory. Later, the
+heap allocator will allocate memory only from this specific region.
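
For illustration, the property could be parsed along these lines, assuming
#address-cells = #size-cells = 2 as in the example above (the helper name and
where the range is recorded are placeholders; fdt_get_property() and
device_tree_get_reg() are existing helpers):

    static int __init process_reserved_heap(const void *fdt, int node,
                                            u32 address_cells, u32 size_cells)
    {
        const struct fdt_property *prop =
            fdt_get_property(fdt, node, "xen,reserved-heap", NULL);
        const __be32 *cell;
        u64 start, size;

        if ( !prop )
            return -ENOENT;

        /* With the example above: start = 0x30000000, size = 0x40000000. */
        cell = (const __be32 *)prop->data;
        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);

        /* Record the range, e.g. in bootinfo, for the heap setup later. */
        return 0;
    }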

This section is quite confusing. I think we need to clearly differentiate heap
vs allocator.

In Xen we have two heaps:
     1) Xen heap: It is always mapped in Xen's virtual address space. This is
mainly used for Xen internal allocations.
     2) Domain heap: It may not always be mapped in Xen's virtual address
space. This is mainly used for domain memory and mapped on-demand.

For Arm64 (and x86), the two heaps are allocated from the same region. But on
Arm32, they are different.

We also have two allocators:
     1) Boot allocator: This is used during boot only. There is no concept of
heap at this time.
     2) Buddy allocator: This is the current runtime allocator. It can
allocate from either heap.

AFAICT, this design is introducing a 3rd allocator that will return domain heap
pages.

Now, back to this section, are you saying you will separate the two heaps and
force the buddy allocator to allocate Xen heap pages from a specific region?
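
As a reference point, the two runtime entry points built on top of the buddy
allocator already exist in xen/common/page_alloc.c; a trivial sketch of the
difference:

    static void heap_example(struct domain *d)
    {
        /* Xen heap: always mapped, so a usable pointer is returned. */
        void *buf = alloc_xenheap_pages(0 /* order */, 0 /* memflags */);

        /* Domain heap: possibly not mapped in Xen's address space, so
         * only the page_info is returned; map it on demand before use. */
        struct page_info *pg = alloc_domheap_pages(d, 0 /* order */, 0);

        if ( pg )
            free_domheap_pages(pg, 0);
        if ( buf )
            free_xenheap_pages(buf, 0);
    }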

[...]

I will try to explain clearly here.
The intention behind this reserved heap is that, to support a totally static
system, we want to pre-define memory resources not only for guests, but also
for Xen runtime allocation. Any runtime behavior is then more predictable.

Right now, on AArch64, all RAM, except reserved memory, will be given to the
buddy allocator as heap. Like you said, guest RAM for normal domains will be
allocated from there, xmalloc eventually gets memory from there, etc. So we
want to refine the heap here: not iterating through bootinfo.mem to set up the
Xen heap, but instead iterating through bootinfo.reserved_heap.
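
In other words, the setup could be refined along these lines
(bootinfo.reserved_heap is the hypothetical new meminfo this series would
introduce; init_xenheap_pages() is the existing interface):

    static void __init setup_reserved_heap(void)
    {
        unsigned int i;

        /* Hand only the regions listed in "xen,reserved-heap" to the
         * heap, instead of every bank in bootinfo.mem. */
        for ( i = 0; i < bootinfo.reserved_heap.nr_banks; i++ )
        {
            const struct membank *bank = &bootinfo.reserved_heap.bank[i];

            init_xenheap_pages(bank->start, bank->start + bank->size);
        }
    }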

So effectively, you want to move to a split heap like on Arm32. Is that correct?

But let's take a step back from the actual code (these are implementation details). If the Device-Tree describes all the regions statically allocated to domains, why can't the memory used by the Xen heap be the leftover?


True, on Arm32, the Xen heap and domain heap are separately mapped, which is
more complicated. That's why I am only talking about implementing these
features on AArch64 as a first step.

Cheers,

--
Julien Grall



 

