
Re: [Xen-devel] [PATCH 06/24] xen: credit2: implement yield()



On Tue, 2016-09-13 at 14:33 +0100, George Dunlap wrote:
> On 17/08/16 18:18, Dario Faggioli wrote:
> > Alternatively, we can actually _subtract_ some credits from a
> > yielding vcpu.
> > That will sort of make the effect of a call to yield persist over
> > time.
> 
> But normally we want the yield to be temporary, right?  The kinds of
> places it typically gets called is when the vcpu is waiting for a
> spinlock held by another (probably pre-empted) vcpu.  Doing a permanent
> credit subtraction will bias the credit algorithm against vcpus that
> have a high amount of spinlock contention (since probably all the
> vcpus will be calling yield pretty regularly).
> 
Yes, indeed. Good point, actually. However, one can also think of a
scenario where:
 - A yields, and is descheduled in favour of B as a consequence of
   that;
 - B runs for just a little while and blocks;
 - C and A are in the runqueue, and A has more credit than C (the
   yield bias no longer counts, as it is not A that is running and
   yielding now). So A will be picked up again, even if it yielded
   very recently, and it may well still be in the spinlock wait (or
   whatever place it was yielding from in a tight loop).

Well, in this case, A will yield again, and C will be picked, i.e.,
what would have happened in the first place, had we subtracted credits
from A. (I.e., functionally, this works the same way, just with more
overhead.)

So, again, can this happen? How frequently, both in absolute and
relative terms? Very hard to tell! So, really...
> 
> Yes, this is simple and should be effective for now.  We can look at
> improving it later.
> 
...glad you also think this. Let's go for it. :-)

> > diff --git a/docs/misc/xen-command-line.markdown b/docs/misc/xen-command-line.markdown
> > @@ -1389,6 +1389,16 @@ Choose the default scheduler.
> >  ### sched\_credit2\_migrate\_resist
> >  > `= <integer>`
> >  
> > +### sched\_credit2\_yield\_bias
> > +> `= <integer>`
> > +
> > +> Default: `1000`
> > +
> > +Set how much a yielding vcpu will be penalized, in order to actually
> > +give a chance to run to some other vcpu. This is basically a bias, in
> > +favour of the non-yielding vcpus, expressed in microseconds (default
> > +is 1ms).
> 
> Probably add _us to the end to indicate that the number is in
> microseconds.
> 
Good idea, although right now we have "sched_credit2_migrate_resist",
which does not have the suffix.

Still, I'm doing as you suggest because I like it better, and we'll fix
"migrate_resist" later, if we want consistency.

> > @@ -2247,10 +2267,22 @@ runq_candidate(struct csched2_runqueue_data *rqd,
> >      struct list_head *iter;
> >      struct csched2_vcpu *snext = NULL;
> >      struct csched2_private *prv = CSCHED2_PRIV(per_cpu(scheduler, cpu));
> > +    int yield_bias = 0;
> >  
> >      /* Default to current if runnable, idle otherwise */
> >      if ( vcpu_runnable(scurr->vcpu) )
> > +    {
> > +        /*
> > +         * The way we actually take yields into account is like this:
> > +         * if scurr is yielding, when comparing its credits with other
> > +         * vcpus in the runqueue, act like those other vcpus had
> > +         * yield_bias more credits.
> > +         */
> > +        if ( unlikely(scurr->flags & CSFLAG_vcpu_yield) )
> > +            yield_bias = CSCHED2_YIELD_BIAS;
> > +
> >          snext = scurr;
> > +    }
> >      else
> >          snext = CSCHED2_VCPU(idle_vcpu[cpu]);
> >  
> > @@ -2268,6 +2300,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
> >      list_for_each( iter, &rqd->runq )
> >      {
> >          struct csched2_vcpu * svc = list_entry(iter, struct csched2_vcpu, runq_elem);
> > +        int svc_credit = svc->credit + yield_bias;
> 
> Just curious, why did you decide to add yield_bias to everyone else,
> rather than just subtracting it from snext->credit?
> 
I honestly don't recall. :-)

It indeed feels more natural to subtract it from snext. I've done it
that way now; let me give it a test spin and resend...
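
FWIW, here's a rough sketch of what that looks like (untested, and
with the hard affinity and migration resistance checks elided; exact
names may differ in the respin):

    int snext_credit;

    /* Default to current if runnable, idle otherwise. */
    if ( vcpu_runnable(scurr->vcpu) )
        snext = scurr;
    else
        snext = CSCHED2_VCPU(idle_vcpu[cpu]);

    snext_credit = snext->credit;

    /*
     * Deflate the yielder's credits, rather than inflating everyone
     * else's. Only scurr can have the yield flag set, and it only
     * matters when scurr is what the runqueue is compared against.
     */
    if ( snext == scurr && unlikely(scurr->flags & CSFLAG_vcpu_yield) )
        snext_credit -= CSCHED2_YIELD_BIAS;

    list_for_each( iter, &rqd->runq )
    {
        struct csched2_vcpu *svc = list_entry(iter, struct csched2_vcpu, runq_elem);

        /* The runqueue is credit-ordered: first vcpu beating snext wins. */
        if ( svc->credit > snext_credit )
            snext = svc;
        break;
    }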

> > @@ -2918,6 +2957,14 @@ csched2_init(struct scheduler *ops)
> >      printk(XENLOG_INFO "load tracking window length %llu ns\n",
> >             1ULL << opt_load_window_shift);
> >  
> > +    if ( opt_yield_bias < CSCHED2_YIELD_BIAS_MIN )
> > +    {
> > +        printk("WARNING: %s: opt_yield_bias %d too small,
> > resetting\n",
> > +               __func__, opt_yield_bias);
> > +        opt_yield_bias = 1000; /* 1 ms */
> > +    }
> 
> Why do we need a minimum bias?  And why reset it to 1ms rather than
> CSCHED2_YIELD_BIAS_MIN?
> 
You know what, I don't think we need that. I was probably thinking
that we may always want to force yield to have _some_ effect, but
there may be (or may well be) someone who just wants to disable it
altogether... and in that case, this check would be in their way.

I'll kill it.
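
If we want to retain any sanity checking at all, a minimal sketch
(assuming CSCHED2_YIELD_BIAS stays the 1ms default) could just guard
against negative values, with 0 meaning yield becomes a no-op:

    /* No minimum any longer: 0 just disables the bias entirely. */
    if ( opt_yield_bias < 0 )
    {
        printk("WARNING: %s: opt_yield_bias %d is negative, resetting\n",
               __func__, opt_yield_bias);
        opt_yield_bias = CSCHED2_YIELD_BIAS;
    }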

Thanks and regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
