
Re: [RFC PATCH] xen/memory: Introduce a hypercall to provide unallocated space



Hi Jan,

On 03/08/2021 13:49, Jan Beulich wrote:
>>>>> Once a safe range (or ranges) has been chosen, any subsequent action
>>>>> which overlaps with the ranges must be rejected, as it will violate
>>>>> the guarantees provided.
>>>>>
>>>>> Furthermore, the ranges should be made available to the guest via
>>>>> normal memory map means.  On x86, this is via the E820 table, and on
>>>>> Arm I presume the DTB.  There is no need for a new hypercall.

>>>> Device-Tree only works if you have a guest using it. How about ACPI?

>>> ACPI inherits the E820 from x86 (it's a trivial format), and UEFI was
>>> also based on it.
>>>
>>> But whichever...  All firmware interfaces have a memory map.

>> This will be the UEFI memory map. However, I am a bit confused how we
>> can tell the OS the region will be used for grant/foreign mapping. Is
>> it possible to reserve a new type?

> As with about any non-abandoned specification, it is in principle
> possible to define/reserve new types. The question is how practical
> that is, i.e. in particular how long it may take to get to the point
> where we have a firmly reserved type. Short of this, I wonder whether
> you, Andrew, were thinking to re-use an existing type (in which case
> the question of disambiguation arises).

Copying/pasting the IRC discussion related to this:

[11:32:19] <Diziet> julieng: I have skimread the thread "[RFC PATCH] xen/memory: Introduce a hypercall to provide unallocated space"
[11:32:56] <Diziet> My impression is that it is converging on a workable solution but I am not sure. Does it need any help?
[12:20:32] <julieng> Diziet: I think we have a solution for Arm and DT. We are waiting on andyhhp for the ACPI part. He suggested to use the memory map but it is unclear how we could describe the safe region.
[13:01:49] <andyhhp> that's easy, seeing as we already have a hypercall to convey that information, but feel free to skip the x86 side for v1 if that helps
[13:02:15] <andyhhp> it wants an extension to the PVH spec to define a new memory type
[13:04:09] <julieng> andyhhp: This doesn't really address the question of how we can define the memory type, because this is not a spec we own. See 5176e91c-1971-9004-af65-7a4aefc7eb78@xxxxxxxx for more details.
[13:04:27] <andyhhp> after which it wants to appear in XENMEM_memory_map
[13:04:32] <andyhhp> this is a spec we own
[13:04:43] <julieng> We don't own the E820.
[13:05:19] <julieng> Nor the UEFI memory map.
[13:05:24] <andyhhp> no, we don't, but that's not the spec in question
[13:06:06] <andyhhp> the spec in question is the PVH start info and/or XENMEM_memory_map, both of which are "in the format of the E820 table", not "an E820 table"
[13:06:27] <andyhhp> with almost 4 billion type identifiers available
[13:07:03] <julieng> So what you are saying is let's pick a random number and hope no-one will use it?
[13:07:34] <julieng> Because neither XENMEM_memory_map nor PVH start info exist for ACPI on Arm.
[13:08:17] <andyhhp> we (xen) are the source of this information, via a Xen-specified API/ABI
[13:08:41] <andyhhp> we are entirely within our rights to document an extension, which has defined meaning under Xen
[13:09:03] <andyhhp> and yeah - you choose some Xen-specific constant to go in the high bits of the type id or something
[13:09:04] <julieng> I agree for a domU. But for dom0, the memory map is the same as the host's. So we have to make sure the number doesn't clash.
[13:09:49] <andyhhp> the chance of actually having a collision is 0, because in 30 years or so, about 10 types have been declared and handed out
[13:10:15] <andyhhp> and if a collision does occur, we add a second hypercall saying "this is a raw non-xenified E820 table, and here is the xenified one"


> As a result I wonder whether a "middle" approach wouldn't be better:
> have the range be determined up front (by the tool stack or Xen), but
> communicate it to the guest by PV means (hypercall, shared info,
> start info, or yet some other table).
>
> Jan


--
Julien Grall



 

