
Re: [Xen-devel] [PATCH v7]xen: sched: convert RTDS from time to event driven model



On Thu, Mar 10, 2016 at 11:43 AM, Dario Faggioli
<dario.faggioli@xxxxxxxxxx> wrote:
> On Thu, 2016-03-10 at 10:28 -0500, Meng Xu wrote:
>> On Thu, Mar 10, 2016 at 5:38 AM, Dario Faggioli
>> <dario.faggioli@xxxxxxxxxx> wrote:
>> >
>> > I don't think we really need to count anything. In fact, what I
>> > had in mind and tried to put down in pseudocode is that we
>> > traverse the list of replenishment events twice. During the first
>> > traversal, we do not remove the elements that we replenish (i.e.,
>> > the ones that we call rt_update_deadline() on). Therefore, we can
>> > just do the second traversal, find them all in there, handle the
>> > tickling, and --in this case-- remove and re-insert them. Wouldn't
>> > this work?
>> My concern is that: once we run rt_update_deadline() in the first
>> traversal of the list, we have already updated cur_deadline and
>> cur_budget. Since the replenishment queue is sorted by cur_deadline,
>> how can we know, during the second traversal, which vcpus were
>> updated in the first one and need to be reinserted? We can't just
>> reinsert every vcpu in the whole replq, since some of them haven't
>> been replenished yet.
>>
> Ah, you're right: doing all the rt_update_deadline() calls in the
> first loop screws up the stop condition of the second one.
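Right. To make it concrete: the stop condition in question is the
deadline check from the pseudo-code below,

    if ( now < svc->cur_deadline )
        break;

and once rt_update_deadline(now, svc) has run in place, every
replenished vcpu has cur_deadline > now, so a second traversal of
replq would break on its very first entry, before reaching any of the
vcpus that need tickling.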
>
> I still don't like counting, it looks fragile. :-/
>
> What you propose here...
>> If we want to avoid the counting, we can add a flag like
>>
>>   #define __RTDS_delayed_reinsert_replq  4
>>   #define RTDS_delayed_reinsert_replq    (1 << __RTDS_delayed_reinsert_replq)
>>
>> so that we know when we should stop at the second traversal.
>>
> ...seems like it could work, but I'm also not super happy about it,
> as it does not look to me like such a generic piece of state as a
> flag should be needed for this very specific purpose.
>
> I mean, I know we have plenty of free bits in flags, but this is
> something that happens *all* *inside* one function (the replenishment
> timer handler).

OK, agreed. The internal-list idea had flashed through my mind before,
but I didn't catch it. ;-)

>
> What about an internal (to the replenishment timer function),
> temporary list? Something along the lines of:

I think the pseudo-code makes sense. I just need to add some more
logic to make it complete: it forgets to handle the runq.

>
>   ...
>   LIST_HEAD(tmp_replq);
>
>   list_for_each_safe(iter, tmp, replq)
>   {
>       svc = replq_elem(iter);
>
>       if ( now < svc->cur_deadline )
>           break;
>
>       list_del(&svc->replq_elem);
>       rt_update_deadline(now, svc);
>       list_add(&svc->replq_elem, &tmp_replq);

          /*
           * If svc is on the runq, it also needs to move to the
           * place its new (just replenished) deadline calls for.
           */
          if ( __vcpu_on_q(svc) )
          {
              __q_remove(svc);
              __runq_insert(ops, svc);
          }
>   }
>
>   list_for_each_safe(iter, tmp, tmp_replq)
>   {
>       svc = replq_elem(iter);
>
>       < tickling logic >
>
>       list_del(&svc->replq_elem);
>       deadline_queue_insert(&replq_elem, svc, &svc->replq_elem, replq);
>   }
>   ...
>
> So, basically, the idea is:
>  - first, we fetch all the vcpus that need a replenishment, remove
>    them from the replenishment queue, do the replenishment and stash
>    them in a temp list;
>  - second, for all the vcpus that we replenished (and we know exactly
>    which ones they are: all the ones in the temp list!), we apply the
>    proper tickling logic, remove them from the temp list and queue
>    their new replenishment event.
>
> It may look a bit convoluted, all this moving between lists, but I do
> like the fact that it is super self-contained.
>
> How does that sound / What did I forget this time? :-)

Apart from needing to "re-sort" the runq when the to-be-replenished
vcpu is currently on it (as in the snippet I added above), I think
it's good.
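
For concreteness, here is roughly what the whole handler could look
like with the runq handling folded in. This is just a sketch to make
sure we mean the same thing: it reuses the helpers from the snippets
above (replq_elem(), rt_update_deadline(), __vcpu_on_q(), __q_remove(),
__runq_insert(), deadline_queue_insert()), and it assumes runq_tickle()
as a stand-in for the tickling step, plus the rt_priv()/rt_replq()
accessors and prv->lock / prv->repl_timer as in the rest of the
series; names and locking details may well differ in the actual patch.

    static void repl_timer_handler(void *data)
    {
        s_time_t now = NOW();
        struct scheduler *ops = data;
        struct rt_private *prv = rt_priv(ops);
        struct list_head *replq = rt_replq(ops);
        struct list_head *iter, *tmp;
        struct rt_vcpu *svc;
        LIST_HEAD(tmp_replq);

        spin_lock_irq(&prv->lock);

        /*
         * First traversal: take every vcpu whose replenishment time
         * has come off the replenishment queue, replenish it, and
         * stash it on the temporary list. If it is on the runq, it
         * also has to move to the place its new deadline calls for.
         */
        list_for_each_safe ( iter, tmp, replq )
        {
            svc = replq_elem(iter);

            if ( now < svc->cur_deadline )
                break;

            list_del(&svc->replq_elem);
            rt_update_deadline(now, svc);
            list_add(&svc->replq_elem, &tmp_replq);

            if ( __vcpu_on_q(svc) )
            {
                __q_remove(svc);
                __runq_insert(ops, svc);
            }
        }

        /*
         * Second traversal: the temporary list holds exactly the
         * vcpus we just replenished, so tickle for each of them, and
         * queue their next replenishment event back on the (sorted)
         * replenishment queue.
         */
        list_for_each_safe ( iter, tmp, &tmp_replq )
        {
            svc = replq_elem(iter);

            /* <tickling logic>, runq_tickle() used as a placeholder */
            runq_tickle(ops, svc);

            list_del(&svc->replq_elem);
            deadline_queue_insert(&replq_elem, svc, &svc->replq_elem,
                                  replq);
        }

        /* Re-arm the timer for the earliest pending event, if any. */
        if ( !list_empty(replq) )
            set_timer(prv->repl_timer,
                      replq_elem(replq->next)->cur_deadline);

        spin_unlock_irq(&prv->lock);
    }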

>
> BTW, I hope I got the code snippet right, but please, let's focus on
> discussing the idea.

Right. :-)

Thanks and Best Regards,

Meng


 

