
Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)



Hi Elliott,

On 14/05/2021 03:42, Elliott Mitchell wrote:
> Upon thinking about it, this seems appropriate to bring to the attention
> of the Xen development list since it seems to have wider implications.
>
> On Wed, May 12, 2021 at 11:08:39AM +0100, Julien Grall wrote:
>> On 12/05/2021 03:37, Elliott Mitchell wrote:
>>> What about the approach to the grant-table/xenpv memory situation?
>>>
>>> As stated, for a 768MB VM Xen suggested a 16MB range.  I'm unsure whether
>>> that is strictly meant for grant-table use or is meant for any foreign
>>> memory mappings (Julien?).
>>
>> An OS is free to use it as it wants. However, there is no promise that:
>>     1) The region will not shrink
>>     2) The region will stay where it is
>
> The issue is: what is the intended use of the memory range allocated to
> /hypervisor in the device-tree on ARM?  What do the Xen developers plan
> for?  What is expected?

From docs/misc/arm/device-tree/guest.txt:

"
- reg: specifies the base physical address and size of a region in
  memory where the grant table should be mapped to, using an
  HYPERVISOR_memory_op hypercall. The memory region is large enough to map
  the whole grant table (it is larger or equal to gnttab_max_grant_frames()).
  This property is unnecessary when booting Dom0 using ACPI.
"

Effectively, this is a known region of memory that is unallocated. Not all guests will use it if they have a better way to find unallocated space.
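
To illustrate (a minimal sketch, not from guest.txt): a guest can place grant-table frames into that region with XENMEM_add_to_physmap. `gnttab_base_gpfn` is a made-up name for the base frame parsed out of the "reg" property, and header paths and the hypercall wrapper differ per OS:

#include <xen/xen.h>
#include <xen/memory.h>

/*
 * Place grant-table frame `idx` at a guest frame inside the region
 * advertised by the /hypervisor "reg" property.
 */
static int map_grant_frame(xen_pfn_t gnttab_base_gpfn, unsigned long idx)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_grant_table, /* the domain's grant table */
        .idx   = idx,                     /* which frame of the table */
        .gpfn  = gnttab_base_gpfn + idx,  /* guest frame to place it at */
    };

    return HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
}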



> With FreeBSD, Julien Grall's attempt 5 years ago at getting Xen/ARM
> support treated the grant table as distinct from other foreign memory
> mappings.  Yet for the current code (which is oriented towards x86) it is
> rather easier to treat all foreign mappings the same.
>
> Limiting foreign mappings to a total of 16MB for a 768MB domain is tight.

It is not clear to me whether you are referring to the frontend or the backend domain.

However, there is no relation between the size of a domain and how many foreign pages it will map. You can have a tiny backend domain (say, 128MB of RAM) that handles a large domain (e.g. 2GB).

Instead, it depends on the maximum number of pages that will be mapped at any given point. If you are running a device emulator, it is more convenient to keep as many foreign pages as possible mapped.
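
To illustrate from the toolstack side, a minimal sketch (assuming libxenforeignmemory; error handling trimmed) of an emulator-style mapping of a batch of foreign frames:

#include <sys/mman.h>            /* PROT_READ, PROT_WRITE */
#include <xenforeignmemory.h>

/*
 * Map, use and unmap `nr` frames of domain `domid`.  A long-running
 * emulator would keep the handle and the mapping alive instead.
 */
int with_foreign_frames(domid_t domid, const xen_pfn_t *gfns, size_t nr)
{
    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    void *va;

    if (!fmem)
        return -1;

    va = xenforeignmemory_map(fmem, domid, PROT_READ | PROT_WRITE,
                              nr, gfns, NULL /* per-page errors */);
    if (va) {
        /* ... access the foreign pages through `va` ... */
        xenforeignmemory_unmap(fmem, va, nr);
    }

    xenforeignmemory_close(fmem);
    return va ? 0 : -1;
}

Each mapped page occupies address space in the mapping domain for as long as it stays mapped, which is why the bound is the maximum mapped at once rather than the size of either domain.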

PV backends (e.g. block, net) tend to use grant mappings. Most of the time these are ephemeral (they last for the duration of a request), but in some cases they will be kept mapped for longer (for instance, the block backend may support persistent grants).
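
For completeness, a rough sketch of the per-request operation a kernel backend performs (header paths and the hypercall wrapper differ per OS; `map_va` is a hypothetical host address the backend reserved from unallocated space):

#include <xen/xen.h>
#include <xen/grant_table.h>

/* Map one page granted by `otherend` at host address `map_va`. */
static int map_one_grant(domid_t otherend, grant_ref_t ref,
                         uint64_t map_va, grant_handle_t *handle)
{
    struct gnttab_map_grant_ref op = {
        .host_addr = map_va,
        .flags     = GNTMAP_host_map,  /* kernel mapping of the page */
        .ref       = ref,              /* grant reference from the frontend */
        .dom       = otherend,         /* domid of the granting domain */
    };

    if (HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref, &op, 1))
        return -1;                     /* hypercall itself failed */
    if (op.status != GNTST_okay)
        return op.status;              /* grant could not be mapped */

    *handle = op.handle;               /* needed for GNTTABOP_unmap_grant_ref */
    return 0;
}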

> Was the /hypervisor range intended *strictly* for mapping grant-tables?

It was introduced to tell the OS a place where the grant-table could be conveniently mapped.

> Was it intended for the /hypervisor range to dynamically scale with the
> size of the domain?

As per above, this doesn't depend on the size of the domain. Instead, it depends on what sort of backends will be present in the domain.

> Was it intended for /hypervisor to grow over the
> years as hardware got cheaper?

I don't understand this question.

> Might it be better to deprecate the /hypervisor range and have domains
> allocate any available address space for foreign mappings?

It may be easy for FreeBSD to find available address space, but so far this has not been the case in Linux (I haven't checked the latest version, though).

To be clear, an OS is free to not use the range provided in /hypervisor (maybe this is not clear enough in the spec?). This was mostly introduced to overcome some issues we saw in Linux when Xen on Arm was introduced.


> Should the FreeBSD implementation be treating grant tables as distinct
> from other foreign mappings?

Both require unallocated address space to work. IIRC, FreeBSD is able to find unallocated address space easily, so I would recommend using that.

> (is treating them the same likely to
> induce buggy behavior on x86?)

I will leave this answer to Roger.

Cheers,

--
Julien Grall



 

