
Re: [Xen-devel] [PATCH v3 1/4] sched: credit2: respect per-vcpu hard affinity



On Tue, 2015-03-31 at 15:37 +0100, George Dunlap wrote:
> On 03/26/2015 09:48 AM, Justin T. Weaver wrote:

> > --- a/xen/common/sched_credit2.c
> > +++ b/xen/common/sched_credit2.c

> > @@ -194,6 +195,12 @@ int opt_overload_balance_tolerance=-3;
> >  integer_param("credit2_balance_over", opt_overload_balance_tolerance);
> >  
> >  /*
> > + * Use this to avoid having too many cpumask_t structs on the stack
> > + */
> > +static cpumask_t **scratch_mask = NULL;
> > +#define csched2_cpumask scratch_mask[smp_processor_id()]
> 
> Since I'm asking for changes below:  It's not clear to me when I'm
> scanning the code that csched2_cpumask is a scratch variable.  What
> about calling the actual variable _scratch_cpumask and having the
> #define be scratch_cpumask?
> 
+1
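
FWIW, a quick sketch of what the rename would look like (purely
illustrative, names as per the suggestion above; the comment is
carried over from the patch):

    /*
     * Scratch space, one cpumask per pCPU, to avoid having too many
     * cpumask_t structs on the stack.
     */
    static cpumask_t **_scratch_cpumask = NULL;
    #define scratch_cpumask _scratch_cpumask[smp_processor_id()]

That way, anyone grepping the code sees 'scratch' right away at every
use site.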

> >      /* Find the runqueue with the lowest instantaneous load */
> >      for_each_cpu(i, &prv->active_queues)
> >      {
> >          struct csched2_runqueue_data *rqd;
> > -        s_time_t rqd_avgload;
> > +        s_time_t rqd_avgload = MAX_LOAD;
> >  
> >          rqd = prv->rqd + i;
> >  
> >          /* If checking a different runqueue, grab the lock,
> > -         * read the avg, and then release the lock.
> > +         * check hard affinity, read the avg, and then release the lock.
> >           *
> >           * If on our own runqueue, don't grab or release the lock;
> >           * but subtract our own load from the runqueue load to simulate
> > -         * impartiality */
> > +         * impartiality.
> > +         *
> > +         * svc's hard affinity may have changed; this function is the
> > +         * credit 2 scheduler's first opportunity to react to the change,
> > +         * so it is possible here that svc does not have hard affinity
> > +         * with any of the pcpus of svc's currently assigned run queue.
> > +         */
> >          if ( rqd == svc->rqd )
> >          {
> > -            rqd_avgload = rqd->b_avgload - svc->avgload;
> > +            if ( cpumask_intersects(vc->cpu_hard_affinity, &rqd->active) )
> > +                rqd_avgload = rqd->b_avgload - svc->avgload;
> >          }
> >          else if ( spin_trylock(&rqd->lock) )
> >          {
> > -            rqd_avgload = rqd->b_avgload;
> > +            if ( cpumask_intersects(vc->cpu_hard_affinity, &rqd->active) )
> > +                rqd_avgload = rqd->b_avgload;
> > +
> >              spin_unlock(&rqd->lock);
> >          }
> >          else
> 
> Since we're already potentially falling through and doing the comparison
> below with an unchanged rqd_avgload (which has now been set to MAX_LOAD
> above), I wonder if it makes more sense to remove this "else continue"
> here, just to avoid confusing people.
>
Agreed! It was me who suggested to Justin how to reorganize this block
of code during v2's review... my bad for not spotting that the
'else / continue' was useless then! :-P
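
Just to make the end result concrete, dropping it would leave the loop
looking roughly like this (a sketch only; the trailing min-load
comparison is assumed from the surrounding code, it's not part of the
hunk quoted above):

    /* Find the runqueue with the lowest instantaneous load */
    for_each_cpu(i, &prv->active_queues)
    {
        struct csched2_runqueue_data *rqd;
        s_time_t rqd_avgload = MAX_LOAD;

        rqd = prv->rqd + i;

        if ( rqd == svc->rqd )
        {
            if ( cpumask_intersects(vc->cpu_hard_affinity, &rqd->active) )
                rqd_avgload = rqd->b_avgload - svc->avgload;
        }
        else if ( spin_trylock(&rqd->lock) )
        {
            if ( cpumask_intersects(vc->cpu_hard_affinity, &rqd->active) )
                rqd_avgload = rqd->b_avgload;

            spin_unlock(&rqd->lock);
        }
        /*
         * No 'else / continue' needed: if we failed to take the lock
         * (or hard affinity doesn't intersect), rqd_avgload is still
         * MAX_LOAD, so this runqueue just loses the comparison below.
         */

        if ( rqd_avgload < min_avgload )
        {
            min_avgload = rqd_avgload;
            min_rqi = i;
        }
    }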

Regards,
Dario

