
Re: [Xen-devel] [PATCH v4 3/7] xen/arm: keep track of reserved-memory regions



On Sat, 10 Aug 2019, Julien Grall wrote:
> On Fri, 9 Aug 2019, 23:21 Stefano Stabellini, <sstabellini@xxxxxxxxxx> wrote:
>       On Wed, 7 Aug 2019, Julien Grall wrote:
>       > Hi Stefano,
>       >
>       > On 06/08/2019 22:49, Stefano Stabellini wrote:
>       > > As we parse the device tree in Xen, keep track of the reserved-memory
>       > > regions as they need special treatment (follow-up patches will make use
>       > > of the stored information.)
>       > >
>       > > Reuse process_memory_node to add reserved-memory regions to the
>       > > bootinfo.reserved_mem array.
>       > >
>       > > Refuse to continue once we reach the max number of reserved memory
>       > > regions to avoid accidentally mapping any portions of them into a VM.
>       > >
>       > > Signed-off-by: Stefano Stabellini <stefanos@xxxxxxxxxx>
>       > >
>       > > ---
>       > > Changes in v4:
>       > > - depth + 1 in process_reserved_memory_node
>       >
>       > Ah, you fixed it in this patch. But then, this does not match the
>       > documentation in patch #1.
> 
>       Yes good point, see below
> 
> 
>       > > - pass address_cells and size_cells to device_tree_for_each_node
>       > > - pass struct meminfo * instead of a boolean to process_memory_node
>       > > - improve in-code comment
>       >
>       > I can't see any comment, is that an improvement? :)
> 
>       It got lost with the refactoring of the code, but I don't think we need
>       it anymore
> 
> 
>       > > - use a separate process_reserved_memory_node (separate from
>       > >    process_memory_node) function wrapper to have different error handling
>       > >
>       > > Changes in v3:
>       > > - match only /reserved-memory
>       > > - put the warning back in place for reg not present on a normal
>       > >    memory region
>       > > - refuse to continue once we reach the max number of reserved memory
>       > >    regions
>       > >
>       > > Changes in v2:
>       > > - call process_memory_node from process_reserved_memory_node to
>       > >    avoid duplication
>       > > ---
>       > >   xen/arch/arm/bootfdt.c      | 43 +++++++++++++++++++++++++++++++------
>       > >   xen/include/asm-arm/setup.h |  1 +
>       > >   2 files changed, 38 insertions(+), 6 deletions(-)
>       > >
>       > > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
>       > > index c22d57cd72..3e6fd63b16 100644
>       > > --- a/xen/arch/arm/bootfdt.c
>       > > +++ b/xen/arch/arm/bootfdt.c
>       > > @@ -144,6 +144,7 @@ static int __init process_memory_node(const void *fdt, int node,
>       > >       const __be32 *cell;
>       > >       paddr_t start, size;
>       > >       u32 reg_cells = address_cells + size_cells;
>       > > +    struct meminfo *mem = (struct meminfo *)data;
>       >
>       > The cast is unnecessary.
>       >
>       > The rest of the code looks good. Pending the discussion about
>       > device_tree_for_each_node:
>       >
>       > Acked-by: Julien Grall <julien.grall@xxxxxxx>
> 
>       Thank you. I removed the cast. Also, I think that it makes more sense to
>       do the depth increase (depth + 1) inside the implementation of
>       device_tree_for_each_node instead of at the caller site, like it is done
>       in this patch. This would match the documentation better and is cleaner
>       from an interface point of view. So I'll remove the depth increase from
>       this patch and move it to the first patch (min_depth = depth + 1).
> 
> 
> Well, you don't need to pass the depth at all. It is just an artificial
> number for libfdt to know where to stop.
> 
> We also don't need the absolute depth in any of the early FDT code. The
> relative depth is sufficient.

Yes, you are right, good suggestion
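
For context, here is a minimal sketch of the wrapper approach discussed above:
process_reserved_memory_node reusing process_memory_node, with the caller
passing bootinfo.reserved_mem as the data argument, and refusing to continue
once the array is full. The prototype follows the device_tree_node_func style
used in xen/arch/arm/bootfdt.c, and the -ENOSPC convention for a full array is
assumed here; this is an illustration rather than the exact code in the patch.
Note also that assigning the void * data argument to a struct meminfo * needs
no cast in C, which is the point of the review comment above.

    static int __init process_reserved_memory_node(const void *fdt, int node,
                                                   const char *name, int depth,
                                                   u32 address_cells,
                                                   u32 size_cells, void *data)
    {
        /* 'data' points at bootinfo.reserved_mem, so the parsed regions land
         * in the reserved_mem array rather than in bootinfo.mem. */
        int rc = process_memory_node(fdt, node, name, depth, address_cells,
                                     size_cells, data);

        /* Stop the boot rather than risk handing part of a reserved region
         * to a VM once the array is full. */
        if ( rc == -ENOSPC )
            panic("Max number of supported reserved-memory regions reached.");

        return rc;
    }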
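
And a minimal sketch of the relative-depth point: libfdt's fdt_next_node()
adjusts the caller's depth counter relative to whatever value it starts with,
so a walk can begin at /reserved-memory with depth = 0 and stop once the
counter drops back to 0, without ever knowing the node's absolute depth. The
loop below is illustrative only, not the device_tree_for_each_node
implementation.

    int depth = 0;
    int node = fdt_path_offset(fdt, "/reserved-memory");

    if ( node >= 0 )
    {
        /* Visit every descendant of /reserved-memory; direct children are at
         * relative depth 1. Once depth falls back to 0 we have stepped past
         * the last descendant. */
        for ( node = fdt_next_node(fdt, node, &depth);
              node >= 0 && depth > 0;
              node = fdt_next_node(fdt, node, &depth) )
        {
            /* process 'node' here */
        }
    }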