
RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation


  • To: Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien.grall.oss@xxxxxxxxx>
  • From: Penny Zheng <Penny.Zheng@xxxxxxx>
  • Date: Fri, 4 Jun 2021 04:00:47 +0000
  • Accept-language: en-US
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>, Wei Chen <Wei.Chen@xxxxxxx>, nd <nd@xxxxxxx>
  • Delivery-date: Fri, 04 Jun 2021 04:01:24 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 01/10] xen/arm: introduce domain on Static Allocation

Hi Stefano and Julien,

> -----Original Message-----
> From: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> Sent: Friday, June 4, 2021 7:56 AM
> To: Julien Grall <julien.grall.oss@xxxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Penny Zheng
> <Penny.Zheng@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; Bertrand Marquis
> <Bertrand.Marquis@xxxxxxx>; Wei Chen <Wei.Chen@xxxxxxx>; nd
> <nd@xxxxxxx>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> On Thu, 3 Jun 2021, Julien Grall wrote:
> > On Thu, 3 Jun 2021 at 22:33, Stefano Stabellini <sstabellini@xxxxxxxxxx>
> wrote:
> > > On Thu, 3 Jun 2021, Julien Grall wrote:
> > > > On 02/06/2021 11:09, Penny Zheng wrote:
> > > > > I could not think of a way to fix it properly in code. Do you
> > > > > have any suggestion? Or should we just put a warning in the
> > > > > docs/commit message?
> > > >
> > > > The correct approach is to find the parent of staticmemdomU1 (i.e.
> > > > reserved-memory) and use the #address-cells and #size-cells from there.
> > >
> > > Julien is right about how to parse the static-memory.
> > >
> > > But I have a suggestion on the new binding. The /reserved-memory
> > > node is a weird node: it is one of the very few nodes (the only one
> > > aside from /chosen) which is about software configuration rather
> > > than hardware description.
> > >
> > > For this reason, in a device tree with multiple domains
> > > /reserved-memory doesn't make a lot of sense: for which domain is
> > > the memory reserved?
> >
> > IMHO, /reserved-memory refers to the memory that the hypervisor should
> > not touch. It is just a coincidence that most of the domains are then
> > passed through to dom0.
> >
> > This also matches the fact that the GIC and /memory are consumed by
> > the hypervisor directly and not by the domain.
> 
> In system device tree one of the key principles is to distinguish between
> hardware description and domains configuration. The domains configuration
> is under /domains (originally it was under /chosen then the DT maintainers
> requested to move it to its own top-level node), while everything else is for
> hardware description.
> 
> /chosen and /reserved-memory are exceptions. They are top-level nodes but
> they are for software configurations. In system device tree configurations go
> under the domain node. This makes sense: Xen, dom0 and domU can all have
> different reserved-memory and chosen configurations.
> 
> /domains/domU1/reserved-memory gives us a clear way to express reserved-
> memory configurations for domU1.
> 
> Which leaves us with /reserved-memory. Who is that for? It is for the default
> domain.
> 
> The default domain is the one receiving all devices by default. In a
> Xen setting, it is probably Dom0. In this case, we don't want to add
> reserved-memory regions for DomUs to Dom0's list: Dom0's
> reserved-memory list is for its own drivers. We could also make an
> argument that the default domain is Xen itself. From a spec
> perspective, that would be fine too. In this case, /reserved-memory is
> a list of memory regions reserved for Xen drivers. Either way, I don't
> think it is a great fit for domain memory allocations.
> 
> 
> > > This was one of the first points raised by Rob Herring in reviewing
> > > system device tree.
> > >
> > > So the solution we went for is the following: if there is a default
> > > domain /reserved-memory applies to the default domain. Otherwise,
> > > each domain is going to have its own reserved-memory. Example:
> > >
> > >         domU1 {
> > >             compatible = "xen,domain";
> > >             #address-cells = <0x1>;
> > >             #size-cells = <0x1>;
> > >             cpus = <2>;
> > >
> > >             reserved-memory {
> > >                 #address-cells = <2>;
> > >                 #size-cells = <2>;
> > >
> > >                 static-memory@30000000 {
> > >                     compatible = "xen,static-memory-domain";
> > >                     reg = <0x0 0x30000000 0x0 0x20000000>;
> > >                 };
> > >             };
> > >         };
> >
> > This sounds wrong to me because the memory is reserved from the
> > hypervisor PoV not from the domain. IOW, when I read this, I think the
> > memory will be reserved in the domain.
> 
> It is definitely very wrong to place the static-memory allocation under
> /chosen/domU1/reserved-memory. Sorry if I caused confusion. I only
> meant it as an example of how reserved-memory (an actual
> reserved-memory list of driver-specific memory ranges) is used.
> 
> 
> > >
> > > So I don't think we want to use reserved-memory for this, either
> > > /reserved-memory or /chosen/domU1/reserved-memory. Instead it would
> > > be good to align it with system device tree and define it as a new
> > > property under domU1.
> >
> > Do you have any formal documentation of the system device-tree?
> 
> It lives here:
> https://github.com/devicetree-org/lopper/tree/master/specification
> 
> Start from specification.md. It is the oldest part of the spec, so it
> is not yet written in a formal specification language.
> 
> FYI there are a number of things in flight with regard to domains that
> we discussed in the last call, but they are not yet settled and thus
> not yet committed (access flags definitions and hierarchical domains).
> However, they don't affect domain memory allocations, so from that
> perspective nothing has changed.
> 
> 
> > > In system device tree we would use a property called "memory" to
> > > specify one or more ranges, e.g.:
> > >
> > >     domU1 {
> > >         memory = <0x0 0x500000 0x0 0x7fb00000>;
> > >
> > > Unfortunately for xen,domains we have already defined "memory" to
> > > specify an amount, rather than a range. That's too bad because the
> > > most natural way to do this would be:
> > >
> > >     domU1 {
> > >         size = <amount>;
> > >         memory = <ranges>;
> > >
> > > When we introduce native system device tree support in Xen, we'll
> > > be able to do that. For now, we need to come up with a different
> > > property, for instance "static-memory" (other names are welcome if
> > > you have a better suggestion).
> > >
> > > We use a new property called "static-memory" together with
> > > #static-memory-address-cells and #static-memory-size-cells to define
> > > how many cells to use for address and size.
> > >
> > > Example:
> > >
> > >     domU1 {
> > >         #static-memory-address-cells = <2>;
> > >         #static-memory-size-cells = <2>;
> > >         static-memory = <0x0 0x500000 0x0 0x7fb00000>;
> >
> > This is pretty similar to what Penny suggested. But I dislike it
> > because of the amount of code that needs to be duplicated with the
> > reserved memory.
> 
> Where is the code duplication? In the parsing itself?
> 
> If there is code duplication, can we find a way to share some of the code to
> avoid the duplication?
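To double-check that I follow Julien's earlier point about taking #address-cells / #size-cells from the parent node, the parsing would look roughly like the sketch below. It is a minimal standalone illustration, not Xen code: next_cell stands in for dt_next_cell, and the cells are assumed to be already in host byte order (real code would apply fdt32_to_cpu to each cell).

```c
#include <stdint.h>

/*
 * Read one address or size made of 'cells' 32-bit cells, advancing the
 * cursor. Simplification: cells are taken in host byte order, whereas a
 * real flattened device tree stores them big-endian.
 */
static uint64_t next_cell(int cells, const uint32_t **cellp)
{
    const uint32_t *p = *cellp;
    uint64_t r = 0;

    while ( cells-- > 0 )
        r = (r << 32) | *p++;
    *cellp = p;
    return r;
}

/*
 * Parse one (base, size) pair from a "reg"-style array, using the cell
 * counts inherited from the parent node (e.g. /reserved-memory), as
 * Julien suggested.
 */
static void parse_region(const uint32_t *reg, int address_cells,
                         int size_cells, uint64_t *base, uint64_t *size)
{
    const uint32_t *cell = reg;

    *base = next_cell(address_cells, &cell);
    *size = next_cell(size_cells, &cell);
}
```

With #address-cells = #size-cells = 2 and reg = <0x0 0x30000000 0x0 0x20000000>, this yields base 0x30000000 and size 0x20000000, matching the example above.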

Both your opinions are so convincing... :/

Correct me if I am wrong:
I think the duplication Julien means is here, see commit
https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@xxxxxxx/20210518052113.725808-3-penny.zheng@xxxxxxx/
where I added another similar loop in dt_unreserved_regions to unreserve
static memory.
For this part, I could try to extract the common code.
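For illustration only, the shared helper could look something like the sketch below (a standalone toy, with names and types that are mine, not Xen's): walk the reserved banks and collect every sub-range of [s, e) that overlaps none of them, which both the existing module loop and the new static-memory loop could then call.

```c
#include <stdint.h>

/* Illustrative types, not Xen's. A bank is the half-open range
 * [start, end). */
struct bank {
    uint64_t start, end;
};

/*
 * Split [s, e) around the reserved banks: write every sub-range that
 * overlaps no reserved bank into out[], and return how many sub-ranges
 * were written. Mirrors the first-overlap recursion the existing
 * dt_unreserved_regions loop uses.
 */
static unsigned split_unreserved(uint64_t s, uint64_t e,
                                 const struct bank *banks, unsigned nr,
                                 struct bank *out)
{
    unsigned i;

    for ( i = 0; i < nr; i++ )
    {
        uint64_t r_s = banks[i].start, r_e = banks[i].end;

        if ( s < r_e && r_s < e )   /* bank i overlaps [s, e) */
        {
            unsigned n = 0;

            /* Recurse on the pieces before and after the bank; banks
             * 0..i cannot overlap them, so skip ahead to bank i+1. */
            if ( s < r_s )
                n += split_unreserved(s, r_s, banks + i + 1, nr - i - 1,
                                      out);
            if ( r_e < e )
                n += split_unreserved(r_e, e, banks + i + 1, nr - i - 1,
                                      out + n);
            return n;
        }
    }

    /* No reserved bank overlaps [s, e): emit it whole. */
    out[0].start = s;
    out[0].end = e;
    return 1;
}
```

With one reserved bank [0x2000, 0x3000) inside [0x1000, 0x4000), this yields the two unreserved pieces [0x1000, 0x2000) and [0x3000, 0x4000).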

But the other part, I think, is just this commit, where I added another
check for static memory in early_scan_node:

+    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
+        process_static_memory(fdt, node, "xen,static-mem", address_cells,
+                              size_cells, &bootinfo.static_mem);

TBH, I don't know how to fix this part...
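One idea, just a sketch with made-up names and the fdt lookup stubbed out (real code would use fdt_get_property() on the actual tree), would be to replace the growing else-if chain with a small dispatch table keyed on depth and the property that identifies the node type:

```c
#include <string.h>

/* Handler invoked when a node of the matching type is found. */
typedef void (*node_handler)(const char *prop);

/* One table entry per node type early_scan_node should recognise. */
struct node_match {
    int depth;           /* depth at which the node may appear */
    const char *prop;    /* property identifying the node type */
    node_handler handle;
};

/* Records which property last triggered a handler (for demonstration). */
static const char *last_handled;

static void handle_static_mem(const char *prop) { last_handled = prop; }
static void handle_reserved(const char *prop)   { last_handled = prop; }

/* Illustrative entries; "xen,reserved" is a placeholder, not a real
 * binding. */
static const struct node_match matches[] = {
    { 2, "xen,static-mem", handle_static_mem },
    { 1, "xen,reserved",   handle_reserved   },
};

/*
 * Stand-in for early_scan_node: 'props' lists the node's property
 * names. Returns 1 if some handler claimed the node, 0 otherwise.
 */
static int scan_node(int depth, const char **props, unsigned nr_props)
{
    unsigned i, j;

    for ( i = 0; i < sizeof(matches) / sizeof(matches[0]); i++ )
    {
        if ( matches[i].depth != depth )
            continue;
        for ( j = 0; j < nr_props; j++ )
            if ( strcmp(props[j], matches[i].prop) == 0 )
            {
                matches[i].handle(matches[i].prop);
                return 1;
            }
    }
    return 0;
}
```

That way, adding a new node type is one table entry plus one handler, instead of another branch in the scan loop.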

I've already finished patch v2. We could continue discussing how to
define it in the device tree here, and the outcome will be included in
patch v3~~~ 😉

Cheers
Penny Zheng


 

