Re: [PATCH v2] xen/arm: Fully initialise struct membanks_hdr fields
>>
>>>
>>>> ---
>>>> ---
>>>> xen/arch/arm/domain_build.c | 13 ++++---------
>>>> xen/arch/arm/include/asm/kernel.h | 5 ++++-
>>>> xen/arch/arm/static-shmem.c | 3 ++-
>>>> xen/include/xen/bootfdt.h | 16 ++++++++++++++++
>>>> 4 files changed, 26 insertions(+), 11 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>>> index b072a16249fe..9e3132fb21d8 100644
>>>> --- a/xen/arch/arm/domain_build.c
>>>> +++ b/xen/arch/arm/domain_build.c
>>>> @@ -1039,7 +1039,7 @@ void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
>>>>       */
>>>>      if ( is_hardware_domain(d) )
>>>>      {
>>>> -        struct membanks *gnttab = xzalloc_flex_struct(struct membanks, bank, 1);
>>>> +        struct membanks *gnttab = membanks_xzalloc(1, RESERVED_MEMORY);
>>>>          /*
>>>>           * Exclude the following regions:
>>>>           * 1) Remove reserved memory
>>>> @@ -1057,13 +1057,10 @@ void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
>>>>          gnttab->bank[0].start = kinfo->gnttab_start;
>>>>          gnttab->bank[0].size = kinfo->gnttab_size;
>>>>
>>>> -        hwdom_free_mem = xzalloc_flex_struct(struct membanks, bank,
>>>> -                                             NR_MEM_BANKS);
>>>> +        hwdom_free_mem = membanks_xzalloc(NR_MEM_BANKS, MEMORY);
>>>>          if ( !hwdom_free_mem )
>>>>              goto fail;
>>>>
>>>> -        hwdom_free_mem->max_banks = NR_MEM_BANKS;
>>>> -
>>>>          if ( find_unallocated_memory(kinfo, mem_banks, ARRAY_SIZE(mem_banks),
>>>>                                       hwdom_free_mem, add_hwdom_free_regions) )
>>>>              goto fail;
>>>> @@ -1293,7 +1290,7 @@ static int __init find_host_extended_regions(const struct kernel_info *kinfo,
>>>>                                               struct membanks *ext_regions)
>>>>  {
>>>>      int res;
>>>> -    struct membanks *gnttab = xzalloc_flex_struct(struct membanks, bank, 1);
>>>> +    struct membanks *gnttab = membanks_xzalloc(1, RESERVED_MEMORY);
>>>>
>>>>      /*
>>>>       * Exclude the following regions:
>>>> @@ -1374,12 +1371,10 @@ int __init make_hypervisor_node(struct domain *d,
>>>>      }
>>>>      else
>>>>      {
>>>> -        ext_regions = xzalloc_flex_struct(struct membanks, bank, NR_MEM_BANKS);
>>>> +        ext_regions = membanks_xzalloc(NR_MEM_BANKS, RESERVED_MEMORY);
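[The helper introduced by this patch lives in the xen/include/xen/bootfdt.h hunk, which is not quoted above. A minimal standalone sketch of what such a helper could look like, using calloc() in place of Xen's xzalloc_flex_struct() and simplified stand-in types (the struct layout here is an illustrative assumption, not the real Xen definition):]

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the Xen types (illustrative only). */
enum region_type { MEMORY, RESERVED_MEMORY, STATIC_SHARED_MEMORY };

struct membank {
    unsigned long start;
    unsigned long size;
};

struct membanks {
    unsigned int nr_banks;
    unsigned int max_banks;
    enum region_type type;
    struct membank bank[];          /* flexible array member */
};

/*
 * Sketch of membanks_xzalloc(): allocate zeroed storage for nr banks and
 * fully initialise the header, so callers no longer have to set max_banks
 * (and now the type) by hand after a bare flexible-struct allocation.
 */
static struct membanks *membanks_xzalloc(unsigned int nr,
                                         enum region_type type)
{
    struct membanks *banks =
        calloc(1, sizeof(*banks) + nr * sizeof(banks->bank[0]));

    if ( !banks )
        return NULL;

    banks->max_banks = nr;
    banks->type = type;

    return banks;
}
```

[The point of the consolidation shown in the diff is exactly this: the removed "hwdom_free_mem->max_banks = NR_MEM_BANKS;" line moves into the helper, and the new type field is set once at allocation time.]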
>>> I'm a bit confused about the expectations behind using the different
>>> enum region_type values, mostly because a struct membanks can describe
>>> different address spaces depending on the context. Above, you marked
>>> gnttab as RESERVED_MEMORY (I guess because this region has already
>>> been found - but in fact it is still unused) and hwdom_free_mem as
>>> MEMORY. So I would at least expect ext_regions to be of MEMORY type as
>>> well. After all, both hwdom_free_mem and ext_regions contain banks of
>>> unused/free memory (although the former lists host memory while the
>>> latter can also contain guest physical memory). Could you please
>>> clarify the intended use?
>>
>> You are right, that should be MEMORY, my bad! Could it be addressed on
>> commit, or should I send another version?
> I can do that on commit, but first, can you please explain the intended
> use of enum region_type?
> At first I was expecting gnttab to be MEMORY as well. I don't see a
> difference between a region prepared by Xen for a domain to store the
> gnttab and the extended regions: both describe regions of unused address
> space. It's just that the region for the gnttab must always be present,
> while extended regions don't have to be.
Right, at first I thought the gnttab region could be reserved memory, but
now that you've pointed out it's not the right thing to do, I remember
that this type reflects the device-tree grouping of the memory banks:
RESERVED_MEMORY is only for regions under the /reserved-memory node,
STATIC_SHARED_MEMORY is for 'xen,domain-shared-memory-v1' compatible
nodes, and MEMORY is for /memory regions.
Now I would say that we could also use MEMORY for regions prepared by
Xen, as long as we don't need to differentiate them from /memory regions.
Please let me know your thoughts.
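[The device-tree grouping described in the paragraph above can be summarised as a type-to-origin lookup. The function name below is hypothetical and purely illustrative; only the mapping itself comes from the explanation in this thread:]

```c
#include <assert.h>
#include <string.h>

enum region_type { MEMORY, RESERVED_MEMORY, STATIC_SHARED_MEMORY };

/*
 * Illustrative mapping only: each region_type mirrors the device-tree
 * grouping the memory banks were parsed from.
 */
static const char *region_type_source(enum region_type type)
{
    switch ( type )
    {
    case MEMORY:
        return "/memory nodes";
    case RESERVED_MEMORY:
        return "/reserved-memory nodes";
    case STATIC_SHARED_MEMORY:
        return "xen,domain-shared-memory-v1 compatible nodes";
    }
    return "unknown";
}
```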
Cheers,
Luca