
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC



On Wed, Sep 21, 2016 at 11:09:11AM +0100, Julien Grall wrote:
>
>
>On 20/09/16 21:17, Stefano Stabellini wrote:
>>On Tue, 20 Sep 2016, Julien Grall wrote:
>>>Hi Stefano,
>>>
>>>On 20/09/2016 20:09, Stefano Stabellini wrote:
>>>>On Tue, 20 Sep 2016, Julien Grall wrote:
>>>>>Hi,
>>>>>
>>>>>On 20/09/2016 12:27, George Dunlap wrote:
>>>>>>On Tue, Sep 20, 2016 at 11:03 AM, Peng Fan <van.freenix@xxxxxxxxx>
>>>>>>wrote:
>>>>>>>On Tue, Sep 20, 2016 at 02:54:06AM +0200, Dario Faggioli wrote:
>>>>>>>>On Mon, 2016-09-19 at 17:01 -0700, Stefano Stabellini wrote:
>>>>>>>>>On Tue, 20 Sep 2016, Dario Faggioli wrote:
>>>>>>>I'd like to add a computing capability in xen/arm, like this:
>>>>>>>
>>>>>>>struct compute_capability
>>>>>>>{
>>>>>>>   char *core_name;
>>>>>>>   uint32_t rank;
>>>>>>>   uint32_t cpu_partnum;
>>>>>>>};
>>>>>>>
>>>>>>>struct compute_capability cc[] =
>>>>>>>{
>>>>>>>  {"A72", 4, 0xd08},
>>>>>>>  {"A57", 3, 0xxxx},
>>>>>>>  {"A53", 2, 0xd03},
>>>>>>>  {"A35", 1, ...},
>>>>>>>};
>>>>>>>
>>>>>>>Then, when identifying a CPU, we decide which CPUs are big and which are
>>>>>>>little according to the computing rank.
>>>>>>>
>>>>>>>Any comments?
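
For reference, a minimal sketch of the lookup such a table would imply; the
struct is restated so the snippet stands alone, the helper name is
illustrative, and only the entries whose part numbers appear above are filled
in:

#include <stddef.h>
#include <stdint.h>

/* Same shape as the struct proposed above (const added for string literals). */
struct compute_capability {
    const char *core_name;
    uint32_t rank;        /* higher rank = bigger core */
    uint32_t cpu_partnum; /* MIDR PartNum field */
};

static const struct compute_capability cc_table[] = {
    { "A72", 4, 0xd08 },
    { "A53", 2, 0xd03 },
};

/* Return the computing rank for a MIDR part number, or 0 if unknown, so
 * CPUs can be ordered without hardcoding "big" and "LITTLE" names. */
static uint32_t rank_of_partnum(uint32_t partnum)
{
    size_t i;

    for ( i = 0; i < sizeof(cc_table) / sizeof(cc_table[0]); i++ )
        if ( cc_table[i].cpu_partnum == partnum )
            return cc_table[i].rank;
    return 0;
}
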
>>>>>>
>>>>>>I think we definitely need to have Xen have some kind of idea of the
>>>>>>ordering between processors, so that the user doesn't need to figure out
>>>>>>which class / pool is big and which pool is LITTLE.  Whether this sort
>>>>>>of enumeration is the best way to do that, I'll let Julien and Stefano
>>>>>>give their opinion.
>>>>>
>>>>>I don't think a hardcoded list of processors in Xen is the right solution.
>>>>>There are many existing processors and combinations for big.LITTLE, so it
>>>>>will be nearly impossible to keep such a list up to date.
>>>>>
>>>>>I would expect the firmware tables (device tree, ACPI) to provide relevant
>>>>>data for each processor and to differentiate big cores from LITTLE cores.
>>>>>Note that I haven't looked into it yet. A good place to start is looking at
>>>>>how Linux does it.
>>>>
>>>>That's right, see Documentation/devicetree/bindings/arm/cpus.txt. It is
>>>>trivial to identify the two different CPU classes and which cores belong
>>>>to which class.
>>>
>>>The class of the CPU can be found from the MIDR; there is no need to use the
>>>device tree/ACPI for that. Note that I don't think there is an easy way in
>>>ACPI (i.e. not in AML) to find out the class.
>>>
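
For context, a minimal sketch of how the part number could be read from the
MIDR on AArch64 (field layout per the architecture; the macro and helper names
below are illustrative, not taken from the Xen tree):

#include <stdint.h>

/* MIDR_EL1 layout (AArch64): Implementer [31:24], Variant [23:20],
 * Architecture [19:16], PartNum [15:4], Revision [3:0]. */
#define MIDR_IMPLEMENTER(midr)  (((midr) >> 24) & 0xff)
#define MIDR_PARTNUM(midr)      (((midr) >> 4) & 0xfff)

static inline uint32_t read_midr(void)
{
    uint64_t midr;

    asm volatile("mrs %0, midr_el1" : "=r" (midr));
    return (uint32_t)midr;
}
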
>>>>It is harder to figure out which one is supposed to be
>>>>big and which one LITTLE. Regardless, we could default to using the
>>>>first cluster (usually big), which is also the cluster of the boot cpu,
>>>>and utilize the second cluster only when the user demands it.
>>>
>>>Why do you think the boot CPU will usually be a big one? In the case of the
>>>Juno platform it is configurable, and the boot CPU is a little core on r2 by
>>>default.
>>>
>>>In any case, what we care about is differentiating between two sets of CPUs.
>>>I don't think Xen should care about migrating a guest vCPU between big and
>>>LITTLE CPUs. So I am not sure why we would want to know that.
>>
>>No, it is not about migrating (at least not yet). It is about giving useful
>>information to the user. It would be nice if the user had to choose
>>between "big" and "LITTLE" rather than "class 0x1" and "class 0x100", or
>>even "A7" and "A15".
>
>I don't think it is wise to assume that we may have only 2 kinds of CPUs on
>the platform. We may have more in the future; if so, how would you name them?

Considering more than 2 kinds of physical CPUs, something like
vcpuclass=["0-1:A35","2-5:A53","6-7:A72"] seems easier to handle.

Regards,
Peng.

>
>IMHO, asking the user to specify the type of CPUs he wants would be the
>easiest way (though a bit difficult for the user) and would avoid relying on
>non-upstreamed bindings.
>
>Regards,
>
>-- 
>Julien Grall


 

