
Re: [Xen-devel] [PATCH v2] Modified RTDS scheduler to use an event-driven model instead of polling.



[I agree with most of Dario's comments and only reply to those
that I didn't quite understand or agree with.]

>> > @@ -411,15 +422,77 @@ __runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
>> >              struct rt_vcpu * iter_svc = __q_elem(iter);
>> >              if ( svc->cur_deadline <= iter_svc->cur_deadline )
>> >                      break;
>> > +            else
>> > +                ++inserted_index;
>> >           }
>> >          list_add_tail(&svc->q_elem, iter);
>> > +        return inserted_index;
>> >      }
>> >      else
>> >      {
>> > -        list_add(&svc->q_elem, &prv->depletedq);
>> > +        // If we are inserting into a previously empty depletedq, rearm
>> > +        //   the replenishment timer. It will handle further scheduling
>> > +        //   until the depletedq is empty again (if ever)
>> > +        if( list_empty(depletedq) )
>> > +            set_timer( &prv->replenishment_timer, svc->cur_deadline );
>>
>> Hmm, can you handle the set_timer outside of this function? It should
>> be possible, and it would make this runq_insert() function serve only
>> one purpose.
>>
> TBH, I think I like it being here. I mean, __runq_insert is called in
> many places (at least right now), and I wouldn't like to have the check
> and the arming repeated everywhere.
>
> Then, that being said, I'm still hoping that some of these calls could
> go away, and maybe there are call sites where we know that the timer
> won't be armed in any case... So, we'll have to see about this.
>
> So, just take this comment as "I don't dislike the timer machinery being
> in here". :-)

I see and agree. :-)
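
Just to spell out the trade-off: if we did move it out, the only way to
avoid repeating the check at every call site would be something like the
thin wrapper below. This is only a sketch on my side -- the wrapper name
is made up (not from the patch), and it assumes the set_timer() call is
then dropped from __runq_insert() itself:

    /* Hypothetical wrapper: insert svc, and arm the replenishment timer
     * only when the insert caused the depletedq to go empty -> non-empty.
     * Callers would use this instead of calling __runq_insert() directly. */
    static int
    rt_queue_insert(const struct scheduler *ops, struct rt_vcpu *svc)
    {
        struct rt_private *prv = rt_priv(ops);
        bool_t depletedq_was_empty = list_empty(rt_depletedq(ops));
        int pos;

        pos = __runq_insert(ops, svc);   /* pure list manipulation */

        /* svc went onto a previously empty depletedq: rearm the timer. */
        if ( depletedq_was_empty && !list_empty(rt_depletedq(ops)) )
            set_timer(&prv->replenishment_timer, svc->cur_deadline);

        return pos;
    }

Since the check is written only once either way, whether it lives in a
wrapper or inside __runq_insert() itself is mostly a matter of taste, so
keeping the arming inside __runq_insert() sounds fine to me.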


>
>> > +static void replenishment_handler(void* data)
>> > +{
>> > +    const struct scheduler *ops = data;
>> > +    struct rt_private *prv = rt_priv(ops);
>> > +    struct list_head *depletedq = rt_depletedq(ops);
>> > +    struct list_head *iter;
>> > +    struct list_head *tmp;
>> > +    struct rt_vcpu *svc = NULL;
>> > +    s_time_t now = NOW();
>> > +    int new_position = 0;
>> > +    unsigned long flags;
>> > +
>> > +    spin_lock_irqsave(&prv->lock, flags);
>> > +
>> > +    // Replenish the vCPU that triggered this, and resort it into runq
>> > +    list_for_each_safe(iter, tmp, depletedq)
>> > +    {
>> > +        svc = __q_elem(iter);
>> > +        if ( now >= svc->cur_deadline )
>> > +        {
>> > +            rt_update_deadline(now, svc);
>> > +            __q_remove(svc); /* remove from depleted queue */
>> > +            new_position = __runq_insert(ops, svc); /* add to runq */
>> > +        }
>> > +        else break; // leave list_for_each_safe
>> > +    }
>> > +
>> > +    // If we become one of top [# CPUs] in the runq, tickle it
>> > +    // TODO: make this work when multiple tickles are required
>>
>> Why do you need multiple tickles?
>>
> Because the loop above may have done more than one replenishment, which
> makes sense.

Ah-ha, I got the reason. I should have read it twice to figure it out. :-)
>
> While we are here: I've said that I don't like the fact that we have one
> big spinlock for the whole scheduler. However, we do have it right now,
> so let's make good use of it.

OK.

>
> So (but please double check what I'm about to say, and provide your
> view), it should be safe to access the various svc->vc->processor from
> inside here, since we're holding the big lock. If that's the case, then
> it should be fairly easy to figure out which are the pCPUs that need to
> reschedule (playing with the index, etc), and then use their
> vc->processor to decide what pCPU to actually tickle, isn't this the
> case?

Hmm, I don't quite get what you meant here, although I read it three
times. :-(

Did you mean we can decide which pCPUs to tickle for the several
"highest"-priority vCPUs after their budgets get replenished?
If so, doesn't that mean that we will "duplicate" (part of) the
runq_tickle code to find the pCPUs to preempt? Is that better than the
approach of just calling runq_tickle whenever a high-priority
"ready" vCPU is found?

>
> (let me know if I've explained myself...)

Sure. I'm still not sure I really got the idea of how to handle the
multiple-tickle scenario you mentioned above. ;-(

>
> Allow me to say this: thanks (to both Daegan and Meng) for doing this
> work. It's been a pleasure to have such an interesting architectural
> discussion, and it's great to see results already. :-D

Thank you very much for your always very helpful suggestions/advice! :-D

Best regards,

Meng


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania
http://www.cis.upenn.edu/~mengxu/

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

