Re: [Xen-devel] Scheduler regression in 4.7
On Thu, 2016-08-11 at 16:42 +0100, Andrew Cooper wrote:
> On 11/08/16 15:28, Dario Faggioli wrote:
> > On Thu, 2016-08-11 at 14:39 +0100, Andrew Cooper wrote:
> > > It will be IS_RUNQ_IDLE() which is the problem.
> > >
> > Ok, that does one step of list traversal (the runq). What I didn't
> > understand from your report is what crashed when.
> IS_RUNQ_IDLE() was traversing a list, and it encountered an element
> which was being concurrently deleted on a different pcpu.
>
Yes, I figured it was a race like this; I was asking whether it
happened at boot, domain creation, domain shutdown, etc. (to which you
replied below, so thanks :-) ).
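For reference, IS_RUNQ_IDLE() is a macro in sched_credit.c that peeks
at a pcpu's runqueue to see whether only the idle vcpu lives there.
Quoting from memory (so the exact details may differ), it looks roughly
like this:
    /* Is the runq of _cpu empty, or is its first element the idle vcpu?
     * Note that this dereferences RUNQ(_cpu), i.e., it reads another
     * pcpu's runqueue, and the macro itself takes no lock while doing
     * so. */
    #define IS_RUNQ_IDLE(_cpu)  (list_empty(RUNQ(_cpu)) || \
                                 is_idle_vcpu(__runq_elem(RUNQ(_cpu)->next)->vcpu))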
> > AFAICR, during domain destruction we basically move the domain to
> > cpupool0, and without a patch that I sent recently, that is always
> > done as a full-fledged cpupool movement, even if the domain is
> > _already_ in cpupool0. So, even if you are not using cpupools, and
> > since you mention domain shutdown, we probably are looking at 2).
> XenServer doesn't use any cpupools, so all pcpus and vcpus are in
> cpupool0.
>
Right. But since you say the issue manifests during domain destruction,
this may still be triggered by this callchain:
domain_kill(d)
  cpupool_move_domain(d, cpupool0)
    cpupool_move_domain_locked()
      sched_move_domain()
        SCHED_OP(insert_vcpu)
          csched_vcpu_insert()
            csched_cpu_pick()
That is because, as I was saying, without f6bde162c4 (which is not in
4.7), sched_move_domain() is called even if the domain is already in
cpupool0, and it then calls insert_vcpu(), etc.
Of course, f6bde162c4 is *not* the solution. It mitigates the issue
in such a way that it won't show up if you don't actually use cpupools,
but if there is a race, it can still happen when using cpupools and
destroying a domain in a pool other than cpupool0.
I have to say that, on staging-4.7, I am able to create, reboot and
destroy a domain without seeing this issue... but again, if it's a
race, that certainly does not mean it's not there!
Also, below you say that you think we're in domain construction
(which, I agree, is what the stack trace seems to indicate).
> It is a VM reboot of an HVM domU (CentOS 7 64-bit, although I doubt
> that is relevant).
>
> The testcase is VM lifecycle ops on a 32-vcpu VM, on a host which
> happens to have 32 pcpus.
>
FWIW, I tried 16 vcpus on a host with 16 pcpus (and a 16-vcpu dom0).
> > The questions I'm asking above have the aim of figuring out what the
> > status of the runq could be, and why adding a call to
> > csched_cpu_pick() from insert_vcpu() is making things explode...
> It turns out that the stack trace is rather less stack rubble than I
> first thought. We are in domain construction, and specifically the
> XEN_DOMCTL_max_vcpus hypercall. All other pcpus are in idle.
>
> for ( i = 0; i < max; i++ )
> {
>     if ( d->vcpu[i] != NULL )
>         continue;
>
>     cpu = (i == 0) ?
>         cpumask_any(online) :
>         cpumask_cycle(d->vcpu[i-1]->processor, online);
>
>     if ( alloc_vcpu(d, i, cpu) == NULL )
>         goto maxvcpu_out;
> }
>
> The cpumask_cycle() call is complete and execution has moved into
> alloc_vcpu().
>
> Unfortunately, none of the code around here spills i or cpu onto the
> stack, so I can't see which values they have from the stack dump.
>
> However, I see that csched_vcpu_insert() plays with vc->processor,
> which surely invalidates the cycle logic behind this loop?
>
Yes, but I don't think that is a problem. The purpose of calling
csched_cpu_pick() from insert_vcpu() is to find a (the best?) placement
for the vcpu. That will have to be put in v->processor (and the vcpu
queued in the runq of that processor).
But the point is that _csched_cpu_pick(), in order to come up with a
(potentially) new (and better!) cpu for the vcpu, *reads* v->processor,
to know where v is now, and takes that into account in its load
balancing calculations (mostly, idleness/SMT stuff). So, those calls to
cpumask_any() and cpumask_cycle() are useful in that they give
csched_cpu_pick() something to start with, and it's fine for that
initial value to be overridden.
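To make that concrete, the relevant part of _csched_cpu_pick() looks
roughly like this (again quoting from memory, so treat the details as
approximate):
    /* Start from the vcpu's current processor, if suitable... */
    cpu = cpumask_test_cpu(vc->processor, &cpus)
            ? vc->processor
            : cpumask_cycle(vc->processor, &cpus);

    /* ...and consider it idle also if its runq holds only the idle
     * vcpu. This is the runq access we are talking about. */
    if ( vc->processor == cpu && IS_RUNQ_IDLE(cpu) )
        cpumask_set_cpu(cpu, &idlers);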
But the fact is, IS_RUNQ_IDLE() *does* access the runq of that initial
cpu and, as the snippet below shows, for Credit1 this happens without
holding the proper spinlock:
    /* This is safe because vc isn't yet being scheduled */
    vc->processor = csched_cpu_pick(ops, vc);

    lock = vcpu_schedule_lock_irq(vc);

    if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
        __runq_insert(svc);

    vcpu_schedule_unlock_irq(lock, vc);
Which I think may well explain the race.
So, the solution seems to me to be moving the call to
csched_cpu_pick() inside the critical section. As a matter of fact, for
Credit2 it is already like that (while for RTDS, that may indeed not be
necessary).
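Something along these lines (just a sketch of the idea, not the actual
patch I'll send; since csched_cpu_pick() may change vc->processor, and
with it the lock that protects the vcpu, the lock has to be dropped and
re-taken before queueing):
    lock = vcpu_schedule_lock_irq(vc);

    /* Holding vc->processor's lock serialises the runq peeking done by
     * IS_RUNQ_IDLE(), inside csched_cpu_pick(), against runq updates
     * performed by other pcpus on that same runq. */
    vc->processor = csched_cpu_pick(ops, vc);

    /* vc->processor may have changed, and with it the lock that
     * protects it: drop the old one and take the new one. */
    spin_unlock_irq(lock);
    lock = vcpu_schedule_lock_irq(vc);

    if ( !__vcpu_on_runq(svc) && vcpu_runnable(vc) && !vc->is_running )
        __runq_insert(svc);

    vcpu_schedule_unlock_irq(lock, vc);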
I guess the fact that the runq is actually accessed in the Credit1
case as well was hidden deep enough inside IS_RUNQ_IDLE() that neither
George noticed it when writing the patch, nor I when reviewing...
sorry for that.
I'll send a patch for staging (which then will have to be backported).
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)