
Re: [Xen-devel] [RFC 0/5] xen/arm: support big.little SoC



On 19/09/16 12:06, Julien Grall wrote:
> Hi George,
> 
> On 19/09/2016 11:45, George Dunlap wrote:
>> On Mon, Sep 19, 2016 at 9:53 AM, Julien Grall <julien.grall@xxxxxxx>
>> wrote:
>>>>> As mentioned in the mail you pointed to above, this series is not
>>>>> enough to make big.LITTLE work on Xen. Xen always uses the boot
>>>>> CPU to detect the list of features, and with big.LITTLE the
>>>>> features may not be the same across cores.
>>>>>
>>>>> And I would prefer to see Xen supporting big.LITTLE correctly
>>>>> before we start thinking about exposing big.LITTLE to userspace
>>>>> (via cpupools) automatically.
>>>>
>>>>
>>>> Do you mean vCPUs being scheduled freely between big and little
>>>> CPUs?
>>>
>>>
>>> By supporting big.LITTLE correctly, I meant that Xen assumes all
>>> the cores have the same set of features, so feature detection is
>>> only done on the boot CPU. See processor_setup for instance...
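
To illustrate the point, here is a minimal sketch of boot-CPU-only
feature detection. It is simplified, not the actual Xen code path:
READ_SYSREG64() is Xen's ARM system-register accessor, while the
struct and function names are made up for illustration.

    /* Sketch: the ID registers are sampled once, on the boot CPU,
     * so differing features or errata on the other (big or LITTLE)
     * cores go unnoticed. Types/macros as in Xen's ARM headers. */
    struct cpu_features {
        uint64_t midr;          /* MIDR_EL1: implementer, part number */
        uint64_t id_aa64pfr0;   /* ID_AA64PFR0_EL1: feature bits */
    };

    static struct cpu_features boot_cpu_features;

    void detect_features_on_boot_cpu(void)
    {
        /* Runs on the boot CPU only. */
        boot_cpu_features.midr        = READ_SYSREG64(MIDR_EL1);
        boot_cpu_features.id_aa64pfr0 = READ_SYSREG64(ID_AA64PFR0_EL1);
        /* Secondary CPUs are simply assumed to match. */
    }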
>>>
>>> Moving vCPUs between big and little cores would be a hard task
>>> (cache line issues, and possibly feature differences) and I don't
>>> expect us to ever cross that bridge in Xen. However, I do expect to
>>> see big.LITTLE exposed to the guest (i.e. having big and little
>>> vCPUs).
>>
>> So it sounds like the big and LITTLE cores are architecturally
>> different enough that software must be aware of which one it's running
>> on?
> 
> That's correct. The big and LITTLE cores may each have different
> errata, different features...
> 
> It also has the advantage of letting the guest handle its own power
> efficiency itself, without introducing a specific Xen interface.
> 
>>
>> Exposing varying numbers of big and LITTLE vcpus to guests seems like
>> a sensible approach.  But at the moment cpupools only allow a domain
>> to be in exactly one pool -- meaning if we use cpupools to control the
>> big.LITTLE placement, you won't be *able* to have guests with both big
>> and LITTLE vcpus.
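
As a concrete illustration of that restriction: in Xen's common code a
domain carries a single cpupool pointer, roughly like the (heavily
simplified) sketch below.

    /* Simplified: a domain belongs to exactly one cpupool, so all of
     * its vCPUs are scheduled on that pool's pCPUs. With one pool of
     * big cores and one of LITTLE cores, a single guest could not
     * span both. */
    struct cpupool;

    struct domain {
        /* ... */
        struct cpupool *cpupool;   /* the one pool this domain is in */
        /* ... */
    };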
>>
>>>> If we need to create all the pools, we need to decide how many
>>>> pools to create. I thought about this, but I have not come up with
>>>> a good idea.
>>>>
>>>> cpupool0 is defined in xen/common/cpupool.c; if we need to create
>>>> many pools, we need to allocate cpupools dynamically during boot.
>>>> I would not like to change the common code a lot.
>>>
>>>
>>> Why? We should avoid choosing a specific design just because the
>>> common code does not allow you to do it without heavy changes.
>>>
>>> We never came across the big.LITTLE problem on x86, so it is normal
>>> that the code needs to be modified.
>>
>> Julien is correct; there's no reason we couldn't have multiple pools
>> set up by default at boot.
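
A rough sketch of what such default pools could look like at boot,
assuming pCPUs are classified by their MIDR part number. Here
for_each_online_cpu() is a real Xen iterator, but every other helper
is a hypothetical placeholder, not an existing Xen interface.

    /* Hypothetical boot-time sketch: one cpupool per CPU class. */
    void cpupools_split_big_little(void)
    {
        unsigned int cpu;

        for_each_online_cpu ( cpu )
        {
            /* Derive a class from MIDR_EL1's part number, e.g.
             * Cortex-A53 vs Cortex-A57 (placeholder). */
            unsigned int class = classify_cpu(cpu);
            struct cpupool *pool = pool_for_class(class);  /* placeholder */

            if ( pool == NULL )
                pool = create_pool_for_class(class);       /* placeholder */

            move_cpu_to_pool(cpu, pool);                   /* placeholder */
        }
    }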
>>
>>>> I think the implementation in this patchset is an easy way to let
>>>> the big and little CPUs all run.
>>>
>>>
>>> I care about having a design that allows easy use of big.LITTLE on
>>> Xen. Your solution requires the administrator to know the underlying
>>> platform and create the pools themselves.
>>>
>>> In the solution I suggested, the pools would be created by Xen (and
>>> the info exposed to userspace for the admin).
>>
>> FWIW another approach could be the one taken by "xl
>> cpupool-numa-split": you could have "xl cpupool-bigLITTLE-split" or
>> something that would automatically set up the pools.
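
By analogy, the core of such a split command might just enumerate the
pCPU types and build one pool per type. All helper names below are
hypothetical, not existing xl/libxl interfaces.

    /* Hypothetical "xl cpupool-bigLITTLE-split" core loop. */
    static void cpupool_biglittle_split(void)
    {
        int ncpus = physical_cpu_count();              /* placeholder */

        for ( int cpu = 0; cpu < ncpus; cpu++ )
        {
            /* Bucket each pCPU by its reported core type. */
            const char *type = physical_cpu_type(cpu); /* "big"/"LITTLE" */
            add_cpu_to_named_pool(type, cpu);          /* placeholder */
        }
    }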
>>
>> But expanding the schedulers to know about different classes of cpus,
>> and having vcpus specified as running only on specific types of pcpus,
>> seems like a more flexible approach.
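
One way to picture that class-aware approach, similar in spirit to
hard affinity; none of these fields or helpers exist in Xen today:

    /* Hypothetical: each vCPU carries a fixed class, and the
     * scheduler only considers pCPUs of the matching class. */
    enum cpu_class { CPU_CLASS_BIG, CPU_CLASS_LITTLE };

    struct vcpu_sched_info {
        enum cpu_class class;          /* fixed at vCPU creation */
    };

    static bool vcpu_can_run_on(const struct vcpu_sched_info *v,
                                unsigned int pcpu)
    {
        /* pcpu_class() would map a pCPU to its class, derived e.g.
         * from MIDR at boot (placeholder). */
        return v->class == pcpu_class(pcpu);
    }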
> 
> So, if I understand correctly, you would not recommend extending the
> number of cpupools per domain, correct?

Before deciding which direction to go (multiple cpupools, sub-pools,
some kind of implicit cpu pinning) I think we should think about the
implications for today's interfaces:

- Do we want to be able to use different schedulers for big/little
  (this would mean some cpupool-related solution)? I'd prefer to
  have only one scheduler type for each domain. :-)

- What about scheduling parameters like weight and cap? How would
  those apply? (The answer will probably influence any pinning
  solution; a rough sketch follows after this list.) Remember that
  it was especially the downsides of pinning that led to the
  introduction of cpupools.

- Is big.LITTLE to be expected to be combined with NUMA?

- Do we need to support live migration for domains containing both
  types of cpus?
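
On the weight/cap question above, one conceivable (purely hypothetical)
direction would be to split the parameters per class, e.g.:

    /* Hypothetical: per-class scheduling parameters for a domain
     * that has both big and LITTLE vCPUs. Nothing like this exists
     * in Xen today. */
    struct sched_params {
        unsigned int weight[2];   /* indexed by big/LITTLE class */
        unsigned int cap[2];      /* CPU cap (percent) per class */
    };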


Juergen


 

