
Re: [Xen-devel] [PATCH 19/19] xen: credit2: use cpumask_first instead of cpumask_any when choosing cpu



On Mon, 2016-06-20 at 02:30 -0600, Jan Beulich wrote:
> > > On 18.06.16 at 01:13, <dario.faggioli@xxxxxxxxxx> wrote:
> > because it is cheaper, and there is not much point in
> > randomizing which cpu gets selected anyway, as that
> > choice will be overridden shortly after, in runq_tickle().
> If it will always be overridden, why fill it in the first place?
> And if there are cases where it won't get overridden, you're
> re-introducing a preference towards lower CPU numbers, which I
> think is not a good idea.
>
It will never be used directly as the actual target CPU --at least
according to my analysis of the code.

runq_tickle() will consider it, but only as a hint, and will actually
use it only if it satisfies all the other load balancing conditions
(being part of a fully idle core, being idle, being within hard
affinity, being preemptible, etc.).

As I said in the rest of the changelog, if we really fear, or start to
observe, that lower CPU numbers are being preferred, we can add
countermeasures (stashing the CPU we chose last time and using
cpumask_cycle(), as we do in Credit1 for a similar purpose).
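
For reference, a sketch of that countermeasure (again with made-up
64-bit bitmaps; in Xen proper this would be cpumask_cycle() plus a
field somewhere to stash the last pick):

    /* Sketch of the cpumask_cycle()-style countermeasure: remember
     * the CPU picked last time and start the search just after it,
     * so the choice rotates instead of always landing on the lowest
     * CPU. Representation and names invented for this example;
     * assumes 'mask' is non-empty. */
    #include <stdint.h>

    typedef uint64_t cpumask_t;

    static unsigned int cycle_pick(cpumask_t mask, unsigned int last)
    {
        /* Bits strictly above 'last' first... */
        cpumask_t above = mask & ~((2ULL << last) - 1);
        if ( above )
            return __builtin_ctzll(above);
        /* ...then wrap around to the lowest set bit. */
        return __builtin_ctzll(mask);
    }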

My feeling is that lower CPU numbers won't actually end up being
preferred, as the load balancing logic in runq_tickle() makes that
unlikely enough.

> Can the code perhaps be rearranged to avoid the cpumask_any()
> when another value will subsequently get stored anyway?
> 
I thought about it, and although there certainly are alternatives,
none of the ones I could come up with looked better than the present
situation.

Fact is, when the pick_cpu() hook is called in vcpu_migrate(), what
vcpu_migrate() wants back from it is indeed a CPU number. Then (through
vcpu_move_locked()) it either just sets v->processor equal to that CPU,
or calls the migrate() hook.

In Credit1, the CPU returned by pick_cpu() is indeed the CPU where we
want the vcpu to run, and setting v->processor to it is all we need
to do for migrating a vcpu (and in fact, migrate() is not even
defined).

In Credit2, we (ab?)use pick_cpu() to actually select not really a CPU,
but a runqueue. Returning some CPU from the runqueue we decided we
want is the (pretty clever, IMO) way with which we avoid having to
teach schedule.c about runqueues. Then, in migrate() (which is defined
for Credit2), we go the other way round: schedule.c hands a CPU to
Credit2, and Credit2 translates that back into a runqueue (the
runqueue where that CPU sits).
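
Schematically, the round trip looks like this (a sketch with invented
field and function names, not the real sched_credit2.c):

    /* Sketch of the Credit2 round trip (invented names, not the
     * real sched_credit2.c): pick_cpu() chooses a *runqueue* but
     * reports it as one of its CPUs; migrate() maps that CPU back
     * to its runqueue. */
    #include <stdint.h>

    typedef uint64_t cpumask_t;   /* bit n set => CPU n in the mask */

    struct csched2_runqueue_data {
        int id;
        cpumask_t active;         /* CPUs sitting on this runqueue */
    };

    /* Map from CPU number to the runqueue it belongs to. */
    static struct csched2_runqueue_data *runq_of_cpu[64];

    static int csched2_cpu_pick(struct csched2_runqueue_data *chosen)
    {
        /* This is the point of the patch: any CPU of the chosen
         * runqueue will do as a token, so cpumask_first() (the
         * lowest set bit here; 'active' assumed non-empty) is as
         * good as cpumask_any(), and cheaper. */
        return __builtin_ctzll(chosen->active);
    }

    static struct csched2_runqueue_data *
    csched2_migrate_target(unsigned int cpu)
    {
        /* Translate the token CPU back into its runqueue. */
        return runq_of_cpu[cpu];
    }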

Archaeology confirms that the migrate() hook was introduced (in
ff38d3faa7d "credit2: add a callback to migrate to a new cpu")
specifically for Credit2.

The main difference, wrt all the above, between Credit1 and Credit2 is
that in Credit1 there is one runqueue per CPU, while in Credit2
multiple CPUs share the same runqueue. The current pick_cpu()/migrate()
approach lets both schedulers, despite this difference, achieve what
they want. Note also how such an approach targets the simplest case
(<<hey, sched_*.c, give me a CPU!>>), which helps when reading and
trying to understand schedule.c. It's then the responsibility of any
scheduler that wants to play fancy tricks --like Credit2 does with
runqueues-- to take care of that itself, without making anyone else
pay the price in terms of complexity.

Every alternative I thought of always involved making things less
straightforward in schedule.c, which is something I'd rather avoid. If
anyone has better alternatives, I'm all ears. :-)

I can certainly add more comments in sched_credit2.c to explain the
situation.

Thanks and Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


