Re: [Xen-devel] [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum
>>> On 08.05.19 at 16:36, <jgross@xxxxxxxx> wrote:
> On 06/05/2019 12:01, Jan Beulich wrote:
>>>>> On 06.05.19 at 11:23, <jgross@xxxxxxxx> wrote:
>>> And that was mentioned in the cover letter: cpu hotplug is not yet
>>> handled (hence the RFC status of the series).
>>>
>>> When cpu hotplug is being added it might be appropriate to switch the
>>> scheme as you suggested. Right now the current solution is much
>>> simpler.
>>
>> I see (I did notice the cover letter remark, but managed to not
>> honor it when writing the reply), but I'm unconvinced that incurring
>> more code churn by not dealing with things the "dynamic" way
>> right away is indeed the "more simple" (overall) solution.
>
> I have started to address cpu on/offlining now.
>
> There are multiple design decisions to take.
>
> 1. Interaction between sched-gran and smt boot parameters
> 2. Interaction between sched-gran and xen-hptool smt switching
> 3. Interaction between sched-gran and single cpu on/offlining
>
> Right now no guest will see a difference regarding the sched-gran
> selection. This means we don't have to think about potential migration
> restrictions. This might change in the future when we want to enable
> guests to e.g. use core scheduling themselves in order to mitigate
> side channel attacks within the guest.
>
> The simplest solution would be (and I'd like to send out V1 of my
> series with that implemented):
>
> sched-gran=core and sched-gran=socket don't allow dynamic switching
> of smt via xen-hptool.
>
> With sched-gran=core or sched-gran=socket, offlining a single cpu
> results in moving the complete core or socket to cpupool_free_cpus and
> then offlining it from there. Only complete cores/sockets can be moved
> to any cpupool. When onlining a cpu it is added to cpupool_free_cpus,
> and once the core/socket is completely online it will automatically be
> added to Pool-0 (as is the case today for any single onlined cpu).
Well, this is in line with what was discussed on the call yesterday, so
I think it's an acceptable initial state to end up in. Although, just
for completeness: I'm not convinced there's no use for "smt-{dis,en}able"
anymore with core-aware scheduling implemented just in Xen - it may
still be considered useful as long as we don't expose proper topology
to guests, which they would need in order to do something similar
themselves.
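
Purely for illustration, here is a standalone sketch of the proposed
on-/offlining policy as described above (this is not code from the
series; all names, the toy topology, and the Pool-0 handling are
invented):

/*
 * Standalone sketch (not Xen code; all names invented) modelling the
 * proposed policy for sched-gran=core: offlining one CPU first evicts
 * the whole core from Pool-0 into the free set, and a core only
 * returns to Pool-0 once every sibling is online again.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS          8
#define THREADS_PER_CORE 2

enum cpu_state { CPU_OFFLINE, CPU_FREE, CPU_IN_POOL0 };
static enum cpu_state state[NR_CPUS];

static int core_of(int cpu) { return cpu / THREADS_PER_CORE; }

/* True if every sibling of 'cpu' (including itself) is online. */
static bool core_fully_online(int cpu)
{
    int first = core_of(cpu) * THREADS_PER_CORE;

    for ( int i = first; i < first + THREADS_PER_CORE; i++ )
        if ( state[i] == CPU_OFFLINE )
            return false;
    return true;
}

/* Offline one CPU: first evict the whole core from Pool-0. */
static void cpu_offline(int cpu)
{
    int first = core_of(cpu) * THREADS_PER_CORE;

    for ( int i = first; i < first + THREADS_PER_CORE; i++ )
        if ( state[i] == CPU_IN_POOL0 )
            state[i] = CPU_FREE;          /* move sibling to free set */
    state[cpu] = CPU_OFFLINE;             /* then offline the one CPU  */
}

/* Online one CPU: goes to the free set; promote the core if complete. */
static void cpu_online(int cpu)
{
    state[cpu] = CPU_FREE;
    if ( core_fully_online(cpu) )
    {
        int first = core_of(cpu) * THREADS_PER_CORE;

        for ( int i = first; i < first + THREADS_PER_CORE; i++ )
            state[i] = CPU_IN_POOL0;      /* whole core joins Pool-0   */
    }
}

int main(void)
{
    for ( int i = 0; i < NR_CPUS; i++ )
        state[i] = CPU_IN_POOL0;

    cpu_offline(3);                       /* evicts CPUs 2 and 3       */
    printf("cpu2=%d cpu3=%d\n", state[2], state[3]);  /* 1 (free), 0   */
    cpu_online(3);                        /* core 1 complete again     */
    printf("cpu2=%d cpu3=%d\n", state[2], state[3]);  /* 2, 2 (Pool-0) */
    return 0;
}

The only point being modelled is that membership of Pool-0 changes at
the granularity of a full core, while the free set may hold partially
online cores.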
> The next steps (for future patches) could be:
>
> - per-cpupool smt settings (static at cpupool creation, moving a domain
> between cpupools with different smt settings not supported)
>
> - support moving domains between cpupools with different smt settings
> (a guest started with smt=0 would only ever use 1 thread per core)
Yes, in the most general terms: such movement may be wasteful, but it
should be possible to carry it out safely in all cases.
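
To make the "wasteful, but safe" rule concrete, here is a hypothetical
sketch (none of the structures or names below mirror actual Xen code):
the smt setting would be fixed at pool creation, a domain would carry
the setting of the pool it was started in, and the stricter of the two
would win.

/*
 * Hypothetical sketch (not Xen code) of the per-cpupool smt idea:
 * the smt setting is fixed when the pool is created, and a domain
 * keeps the setting of the pool it was started in, so one started
 * with smt=0 never uses more than one thread per core even after
 * being moved to an smt-enabled pool.
 */
#include <stdbool.h>
#include <stdio.h>

struct cpupool {
    unsigned int id;
    bool smt;                 /* fixed at pool creation */
};

struct domain {
    unsigned int domid;
    bool started_with_smt;    /* inherited from the creating pool */
};

/* How many threads per core a domain may use in a given pool. */
static unsigned int usable_threads_per_core(const struct domain *d,
                                            const struct cpupool *c,
                                            unsigned int threads_per_core)
{
    /* Wasteful but safe: the stricter of the two settings wins. */
    if ( !d->started_with_smt || !c->smt )
        return 1;
    return threads_per_core;
}

int main(void)
{
    struct cpupool pool_smt   = { .id = 0, .smt = true  };
    struct cpupool pool_nosmt = { .id = 1, .smt = false };
    struct domain dom = { .domid = 1, .started_with_smt = false };

    /* Domain created with smt=0, later moved into an smt pool: */
    printf("threads usable: %u\n",
           usable_threads_per_core(&dom, &pool_smt, 2));   /* 1 */
    printf("threads usable: %u\n",
           usable_threads_per_core(&dom, &pool_nosmt, 2)); /* 1 */
    return 0;
}

Used that way, a domain started with smt=0 keeps using a single thread
per core even after being moved into an smt-enabled pool, which matches
the restriction sketched above.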
Jan