
Re: [PATCH] design: design doc for shared memory on a dom0less system



Hi Bertrand,

On 26/01/2022 11:14, Bertrand Marquis wrote:
On 26 Jan 2022, at 10:58, Julien Grall <julien@xxxxxxx> wrote:

Hi,

On 26/01/2022 10:09, Penny Zheng wrote:
This commit provides a design doc for static shared memory
on a dom0less system.
Signed-off-by: Penny Zheng <penny.zheng@xxxxxxx>
---
  design/shm-dom0less.md | 182 +++++++++++++++++++++++++++++++++++++++++
  1 file changed, 182 insertions(+)
  create mode 100644 design/shm-dom0less.md
diff --git a/design/shm-dom0less.md b/design/shm-dom0less.md
new file mode 100644
index 0000000..b46199d
--- /dev/null
+++ b/design/shm-dom0less.md
@@ -0,0 +1,182 @@
+# Static Shared Memory between domains on a dom0less system
+
+This design aims to provide an overview of the new feature: setting up static
+shared memory between domains on a dom0less system, through device tree
+configuration.
+
+The new feature is driven by the need to find a way to build up
+communication channels on a dom0less system, since the legacy mechanisms,
+such as the grant table, are all absent there.

Stefano has a series to add support for grant-table [2]. So I think you want to 
justify it differently.

+
+It was inspired by the patch series "xl/libxl-based shared memory", see
+[1] for more details.
+
+# Static Shared Memory Device Tree Configuration
+
+The static shared memory device tree nodes allow users to statically set up
+shared memory among a group of dom0less DomUs and Dom0, enabling domains
+to do shm-based communication.
+
+- compatible
+
+    "xen,domain-shared-memory-v1"
+
+- xen,shm-id

 From the document, it is not clear to me what is the purpose of the 
identifier. Could you clarify it?

+
+    A u32 value representing the unique identifier of the shared memory
+    region. Identifiers shall be assigned per shared memory region in
+    ascending order, starting from xen,shm-id = <0x0>, up to the maximum
+    identifier xen,shm-id = <0x126>.

Why is it limited to 0x126? And also, why do they have to be allocated in 
ascending order?

+    The special xen,shm-id = <0x127> is reserved for INVALID_SHMID.

Why do we need to reserve invalid?

+
+- xen,shared-mem
+
+    An array containing a host physical address (the base address of the
+    shared memory region in host physical address space), a size, and a
+    guest physical address (the target address of the mapping).

I think shared memory is useful without static allocation. So I think we want 
to make the host physical address optional.

+
+- role (Optional)
+
+    A string property specifying the ownership of a shared memory region;
+    the value must be either "owner" or "borrower".
+    A shared memory region can be explicitly backed by one domain, called
+    the "owner domain", while all the other domains sharing the region are
+    called "borrower domains".
+    If not specified, the default value is "borrower" and the owner is
+    "dom_shared", a system domain.
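To make the bindings above concrete, here is a hypothetical dom0less device tree fragment sketching one region shared between two DomUs. All node names, addresses, sizes, and the xen,shm-id value are illustrative, and the cell order in xen,shared-mem follows the description in this document (host physical address, size, guest physical address):

```dts
chosen {
    domU1 {
        compatible = "xen,domain";
        /* ... other dom0less domain properties ... */

        /* domU1 explicitly backs the region with host memory */
        domU1-shared-mem@50000000 {
            compatible = "xen,domain-shared-memory-v1";
            role = "owner";
            xen,shm-id = <0x0>;
            xen,shared-mem = <0x50000000 0x200000 0x60000000>;
        };
    };

    domU2 {
        compatible = "xen,domain";
        /* ... other dom0less domain properties ... */

        /* same xen,shm-id and host address, mapped at a different
         * guest physical address in domU2 */
        domU2-shared-mem@50000000 {
            compatible = "xen,domain-shared-memory-v1";
            role = "borrower";
            xen,shm-id = <0x0>;
            xen,shared-mem = <0x50000000 0x200000 0x70000000>;
        };
    };
};
```

If both role properties were omitted, per the text above both domains would default to "borrower" and the region would be owned by the "dom_shared" system domain.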

I don't particularly like adding another system domain. Instead, it would be 
better to always specify the owner.

Having an owner which is not Xen creates a dependency: to restart the owner 
you would need to restart the borrowers.

You don't necessarily have to. You can keep the "struct domain" and the shared pages in place and wipe everything else.

To remove this dependency and allow use cases where any domain having access 
can be restarted without the other side needing to restart as well, having 
Xen as the owner is required.


The initial discussion between Penny and Stefano went the way you said, and 
I asked to modify it like this to have something looking more like standard 
shared memory, with only users but no "owner".

My main concern with dom_shared is the permissions. How do you make sure that a given page is only shared with the proper domain?

Also it fits to some of our use cases.

Would you mind briefly describing them?

Cheers,

--
Julien Grall


