
Re: [Xen-devel] [PATCH v8 8/8] xen/arm: map reserved-memory regions as normal memory in dom0



Hi,

On 11/7/18 7:01 PM, Stefano Stabellini wrote:
On Wed, 7 Nov 2018, Julien Grall wrote:
On 07/11/2018 12:18, Julien Grall wrote:
Hi Stefano,

On 07/11/2018 00:32, Stefano Stabellini wrote:
On Mon, 22 Oct 2018, Julien Grall wrote:
Hi,

On 09/10/2018 00:37, Stefano Stabellini wrote:
reserved-memory regions should be mapped as normal memory.

This is already the case with p2m_mmio_direct_c. The hardware domain should have full control over the resulting attributes via its stage-1 mappings. So what's wrong with that p2m type?

Shared mappings are prevented for any types other than p2m_ram_rw, see
the p2m_is_ram checks in the implementation of XENMAPSPACE_gmfn_share.

This does not make any sense. This series is about mapping between domains other than dom0. So if you end up mapping the reserved-memory region in dom0, why are you using XENMAPSPACE_gmfn_share?

To clarify my question, what are you trying to achieve? Are you trying to share memory between Dom0 and a guest? Or are you trying to share memory between an external entity (i.e. an R* core/device) and the guest?

Both of the goals you mentioned are on my TODO list. However, with this patch I am trying to enable shared cacheable memory between Dom0 and a guest. Specifically, I am setting up Dom0 as the "owner" (with the new terminology, formerly "master") and a DomU as the "borrower".

A lot of the steps automated by libxl have to be done manually, such as
advertising the memory region as "reserved-memory" on the Dom0 device
tree and adding the "owner" entries to xenstore, but once that is done,
it works just fine.

Thank you for explaining what you are trying to achieve.



The alternative is to remove or extend the p2m_is_ram check at
xen/arch/arm/mm.c:1283

p2m_ram_* means the page is managed by Xen and accounting will be done. Similarly, XENMAPSPACE_gmfn_share will do accounting on the pages mapped through it.
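
For reference, the check in question is roughly of the following shape (a sketch only, not the exact code at xen/arch/arm/mm.c:1283; p2m_is_ram(), get_page_from_gfn() and put_page() are existing helpers, the wrapper is made up for illustration):

/*
 * Illustrative sketch only -- not the actual code at xen/arch/arm/mm.c:1283.
 * p2m_is_ram(), get_page_from_gfn() and put_page() are existing Xen helpers;
 * the wrapper below is hypothetical and only shows the shape of the check.
 */
static int gmfn_share_check(struct domain *d, gfn_t gfn)
{
    p2m_type_t t;
    struct page_info *page = get_page_from_gfn(d, gfn_x(gfn), &t, P2M_ALLOC);

    if ( !page )
        return -EINVAL;

    /* A reserved-memory region mapped with p2m_mmio_direct_c fails here. */
    if ( !p2m_is_ram(t) )
    {
        put_page(page);
        return -EINVAL;
    }

    put_page(page);
    return 0;
}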

In the case of reserved-memory, we never handled them properly in Xen (see
[1]).

There are 2 types of reserved-memory regions: static and dynamic. The dynamic ones are not a concern, as no addresses are specified for them.

In the case of the static ones, they are backed by a page in Xen because we didn't update the code to carve them out of the xenheap. This means that you are mapping those pages in Dom0, yet they may not be assigned to Dom0 and may get allocated for Xen internal use or to another domain.

As such, this patch is just a workaround for already broken code. So the first step is fixing the brokenness of reserved-memory regions. Then we can discuss whether this patch is relevant for any of your use cases.

By fixing the brokenness of reserved-memory regions, you mean removing them from the xenheap? Anything else you can think of that doesn't work right?

I will try to summarize the discussion we had today.

From my understanding of the device-tree binding for reserved-memory, any regions described under that node will be a subset of the regions described in the /memory node.

A reserved-memory region can either be dynamic or static. Dynamic means the region will be allocated by the OS at boot. In the static case, the region is fixed by the HW vendor.

The main concern is static regions, because Xen must not allocate those regions for another purpose (e.g. internal memory or guest memory).
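
To make the distinction concrete: a static region carries a "reg" property, while a dynamic one only carries "size" (and optionally "alignment"/"alloc-ranges"). A rough sketch of how the host device tree could be walked with the libfdt helpers Xen already embeds (the function itself is made up, not existing Xen code):

#include <libfdt.h>

/*
 * Sketch: count the static children of /reserved-memory. A child with a
 * "reg" property is a static (fixed-address) region that Xen must not hand
 * out for any other purpose; a child with only "size" is a dynamic region
 * placed by the OS itself. Illustrative only, not existing Xen code.
 */
static unsigned int count_static_reserved_regions(const void *fdt)
{
    unsigned int nr_static = 0;
    int parent = fdt_path_offset(fdt, "/reserved-memory");
    int node;

    if ( parent < 0 )
        return 0;

    for ( node = fdt_first_subnode(fdt, parent);
          node >= 0;
          node = fdt_next_subnode(fdt, node) )
    {
        /* Static regions have a fixed address range in "reg". */
        if ( fdt_getprop(fdt, node, "reg", NULL) )
            nr_static++;
    }

    return nr_static;
}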

I can see two ways to handle reserved-memory regions in Xen:
1) The regions are not treated as RAM from Xen's PoV. They will need to be excluded from the xenheap in early boot. Those regions will not be backed by a struct page_info and therefore could not be mapped using the foreign mapping interface. For guests, they would need to be mapped using XEN_DOMCTL_memory_mapping (i.e. iomem= from xl). The interface would need to be extended with memory attributes (e.g. caching, shareability), as we map MMIO regions with strict attributes today (see the sketch after 2) below for one possible shape).

2) The regions are treated as RAM from Xen's PoV. They will need to be registered in the xenheap, while also ensuring at an early stage that they cannot be allocated from the xenheap. As they will be backed by a struct page_info, we would need to do proper reference counting and make sure they can never be re-allocated (e.g. if the guest ever decides to balloon those pages out). The pages could then be mapped into another guest using the foreign mapping interface.
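
For 1), one possible shape of the extended interface (the existing fields below match today's XEN_DOMCTL_memory_mapping layout; the attribute field and its values are hypothetical):

/*
 * Hypothetical extension of XEN_DOMCTL_memory_mapping: the first four fields
 * are the existing ones, the "attribute" field (replacing the current padding
 * field) and its values are only a sketch of how caching/shareability could
 * be conveyed by the toolstack.
 */
struct xen_domctl_memory_mapping {
    uint64_aligned_t first_gfn;      /* first page (guest address space) */
    uint64_aligned_t first_mfn;      /* first page (machine address space) */
    uint64_aligned_t nr_mfns;        /* number of pages */
    uint32_t add_mapping;            /* add or remove mapping */
/* Hypothetical attribute values */
#define XEN_MEMORY_ATTR_DEVICE    0  /* strict device attributes (today's behaviour) */
#define XEN_MEMORY_ATTR_NORMAL_WB 1  /* normal cacheable memory, as wanted here */
    uint32_t attribute;              /* new: stage-2 memory attributes */
};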

In both cases, we also need to ensure that for each reserved-memory node, we have a corresponding range in /memory.
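
The containment check itself is straightforward; something along these lines would do (a standalone sketch, the structure and helper are hypothetical):

#include <stdbool.h>
#include <stdint.h>

/* One address range, e.g. one "reg" entry from /memory or from a
 * reserved-memory child node. */
struct mem_range {
    uint64_t start;
    uint64_t size;
};

/*
 * Sketch of the sanity check: a static reserved-memory range must fall
 * entirely inside one of the ranges advertised in /memory.
 */
static bool reserved_range_in_memory(const struct mem_range *rsv,
                                     const struct mem_range *mem,
                                     unsigned int nr_mem)
{
    unsigned int i;

    for ( i = 0; i < nr_mem; i++ )
    {
        if ( rsv->start >= mem[i].start &&
             rsv->start + rsv->size <= mem[i].start + mem[i].size )
            return true;
    }

    return false;
}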

Option 1) is probably the easiest. It involves fewer changes in the core code. It also has the advantage of allowing a reserved-memory region to be hidden from Dom0 (i.e. with xen,passthrough) and assigned directly to the guest (via iomem). We may need to investigate the implications on the kernel side (some of the reserved-memory regions could be marked as re-usable).

Finally, regarding sharing memory between dom0 and the guest, I would look at using a dynamic reserved-memory region. This would allow dom0 to allocate the region at boot. However, I don't know whether it is easy to retrieve the allocated region from userspace.

Cheers,

--
Julien Grall
