
Re: [Xen-devel] [PATCH v10]xen: sched: convert RTDS from time to event driven model



On Thu, 2016-03-17 at 00:17 -0400, Tianyang Chen wrote:
> The current RTDS code has several problems:
>  - The scheduler, although the algorithm is event driven by
>    nature, follows a time driven model (is invoked periodically!),
>    making the code looks unnatural;
                     ^look

>  - Budget replenishment logic, budget enforcement logic and
> scheduling
>    decisions are mixed and entangled, making the code hard to
> understand;
>  - The various queues of vcpus are scanned various times, making the
>    code inefficient;
> 
Un-capitalize all the items ("the scheduler...", "budget
replenishment...", "the various").

> This patch separates budget replenishment and enforcement by adding
> a replenishment timer, which fires at the next most imminent
> release time of all runnable vcpus.
> 
> A replenishment queue has been added to keep track of all vcpus that
> are runnable.
> 
Too specific and not specific enough at the same time:

"This patch separates budget replenishment and enforcement. It does that by 
handling the former with a dedicated timer, and a queue of pending 
replenishment events."

> In the main scheduling function, we now return when a scheduling
> decision should be made next time, such as when the budget of the
> currently running vcpu runs out.
> 
I suggested this, but I actually don't like it that much any longer. In
fact, this sort of implies that everyone reading this knows what kind
of information is actually returned by the "main scheduling function":

"We also make sure that the main scheduling function is called when a
scheduling decision is necessary, such as when the currently running
vcpu runs out of budget."
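
For the record, the information the "main scheduling function"
(rt_schedule()) hands back is a struct task_slice, and it's the .time
field that tells the scheduler core when to call us back. Roughly (a
sketch, not the exact patch code):

    static struct task_slice
    rt_schedule(const struct scheduler *ops, s_time_t now,
                bool_t tasklet_work_scheduled)
    {
        struct task_slice ret = { .migrated = 0 };

        /* ... pick the next vcpu to run, snext ... */
        ret.task = snext->vcpu;
        /* Call us back when snext's remaining budget is consumed. */
        ret.time = snext->cur_budget;

        return ret;
    }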

> Finally, when waking up a vcpu, it tickles various vcpus
> appropriately,
> like all other schedulers do.
> 
it who?

In this case, I guess I like what I suggested yesterday:

"Finally, when waking up a vcpu, it is now enough to tickle the various
CPUs appropriately, like all other schedulers also do."

> --- a/xen/common/sched_rt.c
> +++ b/xen/common/sched_rt.c
> @@ -142,6 +156,13 @@ static cpumask_var_t *_cpumask_scratch;
>   */
>  static unsigned int nr_rt_ops;
>  
> +static void repl_timer_handler(void *data);
> +
> +#define deadline_runq_insert(...) \
> +  deadline_queue_insert(&__q_elem, ##__VA_ARGS__)
> +#define deadline_replq_insert(...) \
> +  deadline_queue_insert(&replq_elem, ##__VA_ARGS__)
> +
Why move these up here? It's easier to figure out what they do, and why
they are useful, if you leave them close to deadline_queue_insert().
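
After all, all the wrappers do is fix the accessor argument; e.g., a
call site like:

    deadline_runq_insert(svc, &svc->q_elem, runq);

is just:

    deadline_queue_insert(&__q_elem, svc, &svc->q_elem, runq);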

> @@ -380,11 +430,80 @@ rt_update_deadline(s_time_t now, struct rt_vcpu
> *svc)
>      return;
>  }
>  
> +/*
> + * Helpers for removing and inserting a vcpu in a queue
> + * that is being kept ordered by the vcpus' deadlines (as EDF
> + * mandates).
> + *
> + * For callers' convenience, the vcpu removing helper returns
> + * true if the vcpu removed was the one at the front of the
> + * queue; similarly, the inserting helper returns true if the
> + * inserted ended at the front of the queue (i.e., in both
> + * cases, if the vcpu with the earliest deadline is what we
> + * are dealing with).
> + */
> +static inline bool_t
> +deadline_queue_remove(struct list_head *queue, struct list_head
> *elem)
> +{
> +    int pos = 0;
> +
> +    if ( queue->next != elem )
> +        pos = 1;
> +
> +    list_del_init(elem);
> +    return !pos;
> +}
> +
> +static inline bool_t
> +deadline_queue_insert(struct rt_vcpu * (*_get_q_elem)(struct
> list_head *elem),
>
You can avoid specifying 'elem' here. It makes the line shorter (I know
it's within 80 characters anyway, but shorter is still better).
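
I.e., something like this (keeping your names):

    static inline bool_t
    deadline_queue_insert(struct rt_vcpu * (*_get_q_elem)(struct list_head *),
                          struct rt_vcpu *svc, struct list_head *elem,
                          struct list_head *queue)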

> +                      struct rt_vcpu *svc, struct list_head *elem,
> +                      struct list_head *queue)
> +{
> +    struct list_head *iter;
> +    int pos = 0;
> +
> +    list_for_each ( iter, queue )
> +    {
> +        struct rt_vcpu * iter_svc = (*_get_q_elem)(iter);
> +        if ( svc->cur_deadline <= iter_svc->cur_deadline )
> +            break;
> +        pos++;
> +    }
> +    list_add_tail(elem, iter);
> +    return !pos;
> +}
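
BTW, a typical call site shows why the return value matters; something
like this (a sketch only, using the patch's wrappers and fields):

    /*
     * If svc ends up at the front of the replenishment queue, its
     * release time is now the earliest one, so the replenishment
     * timer must be reprogrammed to fire at it.
     */
    if ( deadline_replq_insert(svc, &svc->replq_elem, replq) )
        set_timer(prv->repl_timer, svc->cur_deadline);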
> +
>  static inline void
>  __q_remove(struct rt_vcpu *svc)
>  {
> -    if ( __vcpu_on_q(svc) )
> -        list_del_init(&svc->q_elem);
> +    ASSERT( __vcpu_on_q(svc) );
> +    list_del_init(&svc->q_elem);
> +}
> +
> +static inline void
> +__replq_remove(const struct scheduler *ops, struct rt_vcpu *svc)
> +{
>
replq_remove() (no double underscores)!!

> @@ -1150,6 +1309,86 @@ rt_dom_cntl(
>      return rc;
>  }
>  
> +/*
> + * The replenishment timer handler picks vcpus
> + * from the replq and does the actual replenishment.
> + */
> +static void repl_timer_handler(void *data){
> +    s_time_t now = NOW();
> +    struct scheduler *ops = data;
> +    struct rt_private *prv = rt_priv(ops);
> +    struct list_head *replq = rt_replq(ops);
> +    struct list_head *runq = rt_runq(ops);
> +    struct timer *repl_timer = prv->repl_timer;
> +    struct list_head *iter, *tmp;
> +    struct rt_vcpu *svc;
> +
No need for this blank line...

> +    LIST_HEAD(tmp_replq);
> +
... you're still defining local variables.
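
That is, LIST_HEAD() is a declaration with an initializer, so it
should simply sit together with the other locals:

    struct list_head *iter, *tmp;
    struct rt_vcpu *svc;
    /* LIST_HEAD(x) expands to: struct list_head x = LIST_HEAD_INIT(x) */
    LIST_HEAD(tmp_replq);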

Ok. It looks like we've reached the point where all the comments I have
remaining (i.e., the ones I just put in this mail) are really, really
minor.

If you take care of all of them in the next version, the patch can also
come with:

Reviewed-by: Dario Faggioli <dario.faggioli@xxxxxxxxxx>

Thanks for all this work!

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
