Re: [Xen-devel] [RFC PATCH v3 07/12] arch/arm: create device tree nodes for hwdom cpufreq cpu driver



On Thu, Oct 23, 2014 at 6:49 PM, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> Hi Oleksandr,
>
> On 10/23/2014 04:07 PM, Oleksandr Dmytryshyn wrote:
>> This patch copies all cpu@N nodes (with their properties) from the
>> input device tree to the /hypervisor/pcpus node of the device tree
>> built for hwdom. This way the pcpus node carries the information
>> about all physical CPUs. The driver in hwdom should parse the
>> /hypervisor/pcpus path instead of the /cpus path.
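To illustrate (not part of the patch itself): the hwdom device tree
then gains roughly the following layout, with the exact cpu@N nodes
depending on the host device tree:

    /hypervisor
        pcpus
            cpu@0   (copy of /cpus/cpu@0 with all its properties)
            cpu@1   (copy of /cpus/cpu@1 with all its properties)
            ...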
>>
>> Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@xxxxxxxxxxxxxxx>
>> ---
>>  xen/arch/arm/domain_build.c | 67 +++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 67 insertions(+)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 2db0236..2186514 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -326,6 +326,69 @@ static int make_memory_node(const struct domain *d,
>>      return res;
>>  }
>>
>> +#ifdef HAS_CPUFREQ
>> +static int fdt_copy_phys_cpus_nodes(void *fdt)
>> +{
>> +    int res;
>> +    const struct dt_device_node *cpus = dt_find_node_by_path("/cpus");
>> +    const struct dt_device_node *npcpu;
>> +    const struct dt_property *pp;
>> +    char *node_name;
>> +
>> +    if ( !cpus )
>> +    {
>> +        dprintk(XENLOG_ERR, "Missing /cpus node in the device tree?\n");
>> +        return -ENOENT;
>> +    }
>> +
>> +    /*
>> +     * Create the pcpus node and copy into it the original
>> +     * cpu@N nodes with their properties. This is needed for
>> +     * the cpufreq cpu driver in Dom0.
>> +     */
>> +    DPRINT("Create pcpus node\n");
>> +
>> +    res = fdt_begin_node(fdt, "pcpus");
>
> This new binding has to be documented somewhere. The best place would
> be Documentation/devicetree/bindings/arm/xen.txt (see the Linux repo).
I'll do this in the next patch-set.
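As a first sketch (wording to be refined), the addition to
Documentation/devicetree/bindings/arm/xen.txt could describe the node
like this:

  - pcpus
    Present when Xen is built with cpufreq support. A node under the
    hypervisor node containing a copy of each cpu node (with all its
    properties) from the host /cpus node, so that the cpufreq driver
    in hwdom can enumerate the physical CPUs.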

>> +    if ( res )
>> +        return res;
>> +
>> +    dt_for_each_child_node( cpus, npcpu )
>> +    {
>> +        if ( dt_device_type_is_equal(npcpu, "cpu") )
>> +        {
>> +            node_name = strrchr(dt_node_full_name(npcpu), '/');
>> +            node_name++;
>> +
>> +            ASSERT(node_name && *node_name);
>
> The first check on node_name is pointless because of the node_name++ above.
>
> I would divide the ASSERT into 2 parts, and do the first check before
> the node_name++;
I'll do this in the next patch-set.
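Something like this, I suppose (untested):

            node_name = strrchr(dt_node_full_name(npcpu), '/');
            ASSERT(node_name != NULL);
            node_name++;
            ASSERT(*node_name != '\0');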

>> +
>> +            DPRINT("Copy %s node to the pcpus\n", node_name);
>> +
>> +            res = fdt_begin_node(fdt, node_name);
>> +            if ( res )
>> +                return res;
>> +
>> +            dt_for_each_property_node( npcpu, pp )
>> +            {
>> +                if ( pp->length )
>> +                {
>> +                    res = fdt_property(fdt, pp->name, pp->value,
>> +                                        pp->length);
>> +                    if ( res )
>> +                        return res;
>> +                }
>> +            }
>> +
>
> You can use write_properties to replace this loop.
I'll try to do this in the next patch set.
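If I read write_properties correctly, that would mean passing the
domain and kernel_info down to fdt_copy_phys_cpus_nodes (since
write_properties writes into kinfo->fdt rather than a bare fdt), and
replacing the loop with something like (untested, assuming
write_properties keeps its current signature):

            res = write_properties(d, kinfo, npcpu);
            if ( res )
                return res;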

>> +            res = fdt_end_node(fdt);
>> +            if ( res )
>> +                return res;
>> +        }
>> +    }
>> +
>> +    res = fdt_end_node(fdt);
>> +    return res;
>> +}
>> +#endif /* HAS_CPUFREQ */
>> +
>>  static int make_hypervisor_node(struct domain *d,
>>                                  void *fdt, const struct dt_device_node *parent)
>>  {
>> @@ -386,6 +449,10 @@ static int make_hypervisor_node(struct domain *d,
>>      if ( res )
>>          return res;
>>
>> +    #ifdef HAS_CPUFREQ
>> +    fdt_copy_phys_cpus_nodes(fdt);
>
> You forgot to check the return value of the function.
I'll add checking in the next patch-set.
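I.e. something like this (res is already declared in
make_hypervisor_node):

    #ifdef HAS_CPUFREQ
    res = fdt_copy_phys_cpus_nodes(fdt);
    if ( res )
        return res;
    #endif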

> Regards,
>
> --
> Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel