
Re: [Xen-devel] [PATCH v2 08/11] xen: sched: allow for choosing credit2 runqueues configuration at boot



On 07/04/16 06:04, Juergen Gross wrote:
> On 06/04/16 19:23, Dario Faggioli wrote:
>> In fact, credit2 uses CPU topology to decide how to arrange
>> its internal runqueues. Before this change, only 'one runqueue
>> per socket' was allowed. However, experiments have shown that,
>> for instance, having one runqueue per physical core improves
>> performance, especially in case hyperthreading is available.
>>
>> In general, it makes sense to allow users to pick one runqueue
>> arrangement at boot time, so that:
>>  - more experiments can be easily performed to even better
>>    assess and improve performance;
>>  - one can select the best configuration for his specific
>>    use case and/or hardware.
>>
>> This patch enables the above.
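[ For reference: the arrangement is selected via a new Xen boot parameter
  added by this patch (documented in the xen-command-line.markdown hunk,
  which is not quoted below). Assuming that option is spelled
  "credit2_runqueue", usage would look like:

      credit2_runqueue=core

  on the hypervisor command line, with "socket", "node" and "all" as the
  other accepted values. ]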
>>
>> Note that, for correctly arranging runqueues to be per-core,
>> just checking cpu_to_core() on the host CPUs is not enough.
>> In fact, cores (and hyperthreads) on different sockets can
>> have the same core (and thread) IDs! We therefore need to
>> check whether the full topology of two CPUs matches, for
>> them to be put in the same runqueue.
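[ The same_core()/same_socket()/same_node() helpers used in the hunk
  further down are outside the quoted context; a minimal sketch of what
  the "full topology" check amounts to (the socket ID is compared before
  the core ID precisely because core IDs repeat across sockets) could be:

static inline bool_t same_node(unsigned int cpua, unsigned int cpub)
{
    /* NUMA node IDs are unique host-wide, so a direct compare is enough. */
    return cpu_to_node(cpua) == cpu_to_node(cpub);
}

static inline bool_t same_socket(unsigned int cpua, unsigned int cpub)
{
    /* Socket IDs are unique host-wide as well. */
    return cpu_to_socket(cpua) == cpu_to_socket(cpub);
}

static inline bool_t same_core(unsigned int cpua, unsigned int cpub)
{
    /* Core IDs repeat across sockets: check the socket first. */
    return same_socket(cpua, cpub) &&
           cpu_to_core(cpua) == cpu_to_core(cpub);
}

  The helpers actually introduced by the patch may differ in detail. ]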
>>
>> Note also that the default (although not functional) for
>> credit2 has, until now, been per-socket runqueues. This patch
>> leaves things that way, to avoid mixing policy and technical
>> changes.
>>
>> Finally, it would be a nice feature to be able to select
>> a particular runqueue arrangement, even when creating a
>> Credit2 cpupool. This is left as future work.
>>
>> Signed-off-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>
>> Signed-off-by: Uma Sharma <uma.sharma523@xxxxxxxxx>
> 
> With the one comment below addressed:
> 
> Reviewed-by: Juergen Gross <jgross@xxxxxxxx>
> 
>> ---
>> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>> Cc: Uma Sharma <uma.sharma523@xxxxxxxxx>
>> Cc: Juergen Gross <jgross@xxxxxxxx>
>> ---
>> Changes from v1:
>>  * fix bug in parameter parsing, and start using strcmp()
>>    for that, as requested during review.
>> ---
>>  docs/misc/xen-command-line.markdown |   19 +++++++++
>>  xen/common/sched_credit2.c          |   76 +++++++++++++++++++++++++++++++++--
>>  2 files changed, 90 insertions(+), 5 deletions(-)
>>
> 
> ...
> 
>> @@ -2006,7 +2067,10 @@ cpu_to_runqueue(struct csched2_private *prv, unsigned int cpu)
>>          BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
>>                 cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
>>  
>> -        if ( cpu_to_socket(cpumask_first(&rqd->active)) == cpu_to_socket(cpu) )
>> +        if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
>> +             (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
>> +             (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
>> +             (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu)) )
>>              break;
>>      }
>>  
>> @@ -2170,6 +2234,8 @@ csched2_init(struct scheduler *ops)
>>      printk(" load_window_shift: %d\n", opt_load_window_shift);
>>      printk(" underload_balance_tolerance: %d\n", opt_underload_balance_tolerance);
>>      printk(" overload_balance_tolerance: %d\n", opt_overload_balance_tolerance);
>> +    printk(" runqueues arrangement: per-%s\n",
>> +           opt_runqueue == OPT_RUNQUEUE_CORE ? "core" : "socket");
> 
> I asked this before: shouldn't the options "node" and "all" be
> respected here, too?

Dario, would it make sense to put the string names ("core", "socket",
&c) in an array, then have parse_credit2_runqueue() iterate over the
array to find the appropriate numeric value, and have this printk use
the same array to convert from the numeric value back to a string?
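
Something along these lines, just as a sketch (identifier names are
illustrative, not taken from the current patch):

static const char *const opt_runqueue_str[] = {
    [OPT_RUNQUEUE_CORE]   = "core",
    [OPT_RUNQUEUE_SOCKET] = "socket",
    [OPT_RUNQUEUE_NODE]   = "node",
    [OPT_RUNQUEUE_ALL]    = "all",
};

static void __init parse_credit2_runqueue(const char *s)
{
    unsigned int i;

    /* Single source of truth: match the user's string against the table. */
    for ( i = 0; i < ARRAY_SIZE(opt_runqueue_str); i++ )
        if ( opt_runqueue_str[i] && !strcmp(s, opt_runqueue_str[i]) )
        {
            opt_runqueue = i;
            return;
        }

    printk("WARNING, unrecognized value of credit2_runqueue option!\n");
}

and then csched2_init() would simply do:

    printk(" runqueues arrangement: %s\n", opt_runqueue_str[opt_runqueue]);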

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
