Re: [Xen-devel] [PATCH] xen: sched: Credit2: during scheduling, update the idle mask before using it
>>> On 11.10.18 at 15:44, <dfaggioli@xxxxxxxx> wrote:
> Load balancing, which happens at the end of a "scheduler epoch", can
> trigger vcpu migration, which in turn may call runq_tickle(). If the
> cpu where this happens was idle, but we're now going to schedule a vcpu
> on it, let's update the runq's idle cpus mask accordingly _before_ doing
> the load balancing.
>
> Not doing that may cause runq_tickle() to think that the cpu is still
> idle, and tickle it to go pick up a vcpu from the runqueue, which might
> be wrong, or at least suboptimal.
Makes sense to me; I seem to vaguely recall that something
along these lines was done years ago for credit1 as well.
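To spell the ordering out (paraphrasing the surrounding code from memory,
so the exact context, and names such as reset_credit(), balance_load()
and CSCHED2_CREDIT_RESET, may not match the tree precisely), the idea is:

        /* ... snext has been chosen and marked as scheduled ... */

        /* the moved hunk: take this cpu out of the idle masks right away */
        if ( cpumask_test_cpu(cpu, &rqd->idle) )
        {
            __cpumask_clear_cpu(cpu, &rqd->idle);
            smt_idle_mask_clear(cpu, &rqd->smt_idle);
        }

        /* end of a "scheduler epoch": credit reset plus load balancing */
        if ( snext->credit <= CSCHED2_CREDIT_RESET )
        {
            reset_credit(ops, cpu, now, snext);
            balance_load(ops, cpu, now); /* may migrate a vcpu and hence
                                          * call runq_tickle(); with the
                                          * masks already updated, this
                                          * cpu is no longer (wrongly)
                                          * seen as idle */
        }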
> Backporting: this does not fix a system crash. However, it fixes the
> behavior of the scheduler, which I'd call wrong rather than just suboptimal.
>
> Therefore, I'd be inclined to ask for this to be backported. It should
> be fairly straightforward, but as usual, I'm up for helping with that.
I'll try to remember to pick it up when I've seen it go in.
> --- a/xen/common/sched_credit2.c
> +++ b/xen/common/sched_credit2.c
> @@ -3554,6 +3554,13 @@ csched2_schedule(
>              __set_bit(__CSFLAG_scheduled, &snext->flags);
>          }
>
> +        /* Clear the idle mask if necessary */
> +        if ( cpumask_test_cpu(cpu, &rqd->idle) )
> +        {
> +            __cpumask_clear_cpu(cpu, &rqd->idle);
> +            smt_idle_mask_clear(cpu, &rqd->smt_idle);
> +        }
I realize you're merely moving code, but is there a reason to do
the test-and-clear in two steps rather than one? It being the
non-atomic variant, this can't be contended shared memory, and hence
the cache-line ping-pong consideration applicable in other cases is
irrelevant here afaict.
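I.e., purely as an illustration (this assumes open-coding via
__test_and_clear_bit() on the mask's bits is acceptable here; I don't
think we have a dedicated non-atomic cpumask helper for the combined
operation), the hunk could shrink to something like:

        /* Clear the idle mask if necessary */
        if ( __test_and_clear_bit(cpu, cpumask_bits(&rqd->idle)) )
            smt_idle_mask_clear(cpu, &rqd->smt_idle);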
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel