
Re: [Xen-devel] [PATCH 2/3] xen: Have schedulers revise initial placement



On Fri, 2016-07-15 at 19:07 +0100, Andrew Cooper wrote:
> On 15/07/16 19:02, George Dunlap wrote:
> > 
> > diff --git a/xen/common/sched_credit2.c
> > b/xen/common/sched_credit2.c
> > index 3b9aa27..5a04985 100644
> > --- a/xen/common/sched_credit2.c
> > +++ b/xen/common/sched_credit2.c
> > @@ -1620,15 +1620,23 @@ csched2_vcpu_insert(const struct scheduler
> > *ops, struct vcpu *vc)
> >  
> >      BUG_ON(is_idle_vcpu(vc));
> >  
> > +    /* Locks in cpu_pick expect irqs to be disabled */
> > +    local_irq_disable();
> This doesn't make the problem much worse, but is there a plan to fix
> this issue?
> 
There's a little bit more than a plan.

I've got a proof-of-concept implementation that was working (it now
needs refreshing), but whose performance impact I never really managed
to evaluate as accurately as I wanted.

In fact, I have a couple of variants implemented, which I was
comparing against each other, as well as against 'vanilla'. The
problem was that I really was not seeing any impact at all, which
looked strange (I was expecting an improvement, at least on some
workloads), and I wanted to investigate further.

I'm leaving here the links to two branches where I stashed some of the
code I have come up with so far. As I said, it's WIP and needs
refreshing and reworking.

> None of the scheduler-accounting functions should be disabling
> interrupts.
> 
They don't. But you can't keep IRQs disabled for some operations and
enabled for others on the same lock (because of the irq-safety
spinlock checks/enforcement).

So you have to keep IRQs enabled for all scheduling operations, which
is fine for _almost_ all of them, the only exception being the wakeup
of a vcpu.

So, the idea was to treat that one case specially, i.e., put the
waking vcpus in a queue, and then drain the queue somehow. The
insertion into the queue needs to be done with interrupts disabled,
but the draining --which is where the actual scheduling hooks and
operations run-- can be done with IRQs on, which is what we want.

What I was experimenting with was different ways of managing such a
queue, e.g., one queue for all CPUs versus per-CPU queues; whether to
always drain the queue fully, or only pick a couple of vcpus and defer
the rest again; whether to allow concurrent draining of the queue, or
only have one CPU (at a time) doing that; etc.

The "1 queue for all" and "per-CPU queues" is what's in the following
two branches:

git://xenbits.xen.org/people/dariof/xen.git  wip/sched/irq-enabled
http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/wip/sched/irq-enabled

git://xenbits.xen.org/people/dariof/xen.git  wip/sched/irq-enabled-percpu
http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/wip/sched/irq-enabled-percpu

I'll get back to this soon. In the meanwhile, feel free to comment,
toss ideas, criticize, whatever. :-D

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
