Re: [Xen-devel] [PATCH v2 1/4] xen: add real time scheduler rt



Hi Dario,

Thank you very much for your comments! I will comment only on the
points that need clarification, and I will address all of the comments
in the next version.

>> +/*
>> + * Debug related code, dump vcpu/cpu information
>> + */
>> +static void
>> +rt_dump_vcpu(const struct scheduler *ops, const struct rt_vcpu *svc)
>> +{
>> +    struct rt_private *prv = RT_PRIV(ops);
>> +    char cpustr[1024];
>> +    cpumask_t *cpupool_mask;
>> +
>> +    ASSERT(svc != NULL);
>> +    /* flag vcpu */
>> +    if( svc->sdom == NULL )
>> +        return;
>> +
>> +    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
>> +    printk("[%5d.%-2u] cpu %u, (%"PRI_stime", %"PRI_stime"),"
>> +           " cur_b=%"PRI_stime" cur_d=%"PRI_stime" last_start=%"PRI_stime
>> +           " onR=%d runnable=%d cpu_hard_affinity=%s ",
>>
> How does this come up in the console? Should we break it with a '\n'
> somewhere? It looks rather long...

Some of this information is not very useful here, such as the vcpu's
period and budget, which can be displayed via the toolstack. I will
remove svc->budget, svc->period and prv->cpus to make the line shorter.
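
Something like this is what I have in mind for the trimmed dump (just a
sketch, the exact set of fields is still open): drop svc->period,
svc->budget and the prv->cpus line, and terminate the output after the
cpupool mask:

    cpumask_scnprintf(cpustr, sizeof(cpustr), svc->vcpu->cpu_hard_affinity);
    printk("[%5d.%-2u] cpu %u, cur_b=%"PRI_stime" cur_d=%"PRI_stime
           " last_start=%"PRI_stime" onR=%d runnable=%d"
           " cpu_hard_affinity=%s ",
           svc->vcpu->domain->domain_id,
           svc->vcpu->vcpu_id,
           svc->vcpu->processor,
           svc->cur_budget,
           svc->cur_deadline,
           svc->last_start,
           __vcpu_on_runq(svc),
           vcpu_runnable(svc->vcpu),
           cpustr);
    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
    cpumask_scnprintf(cpustr, sizeof(cpustr), cpupool_mask);
    printk("cpupool=%s\n", cpustr);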

>> +            svc->vcpu->domain->domain_id,
>> +            svc->vcpu->vcpu_id,
>> +            svc->vcpu->processor,
>> +            svc->period,
>> +            svc->budget,
>> +            svc->cur_budget,
>> +            svc->cur_deadline,
>> +            svc->last_start,
>> +            __vcpu_on_runq(svc),
>> +            vcpu_runnable(svc->vcpu),
>> +            cpustr);
>> +    memset(cpustr, 0, sizeof(cpustr));
>> +    cpupool_mask = cpupool_scheduler_cpumask(svc->vcpu->domain->cpupool);
>> +    cpumask_scnprintf(cpustr, sizeof(cpustr), cpupool_mask);
>> +    printk("cpupool=%s ", cpustr);
>> +    memset(cpustr, 0, sizeof(cpustr));
>> +    cpumask_scnprintf(cpustr, sizeof(cpustr), &prv->cpus);
>> +    printk("prv->cpus=%s\n", cpustr);
>> +
>> +    /* TRACE */
>> +    {
>> +        struct {
>> +            unsigned dom:16,vcpu:16;
>> +            unsigned processor;
>> +            unsigned cur_budget_lo, cur_budget_hi;
>> +            unsigned cur_deadline_lo, cur_deadline_hi;
>> +            unsigned is_vcpu_on_runq:16,is_vcpu_runnable:16;
>> +        } d;
>> +        d.dom = svc->vcpu->domain->domain_id;
>> +        d.vcpu = svc->vcpu->vcpu_id;
>> +        d.processor = svc->vcpu->processor;
>> +        d.cur_budget_lo = (unsigned) svc->cur_budget;
>> +        d.cur_budget_hi = (unsigned) (svc->cur_budget >> 32);
>> +        d.cur_deadline_lo = (unsigned) svc->cur_deadline;
>> +        d.cur_deadline_hi = (unsigned) (svc->cur_deadline >> 32);
>> +        d.is_vcpu_on_runq = __vcpu_on_runq(svc);
>> +        d.is_vcpu_runnable = vcpu_runnable(svc->vcpu);
>> +        trace_var(TRC_RT_VCPU_DUMP, 1,
>> +                  sizeof(d),
>> +                  (unsigned char *)&d);
>> +    }
>> +}
>> +

>> + */
>> +static void
>> +__runq_insert(const struct scheduler *ops, struct rt_vcpu *svc)
>> +{
>> +    struct rt_private *prv = RT_PRIV(ops);
>> +    struct list_head *runq = RUNQ(ops);
>> +    struct list_head *iter;
>> +    spinlock_t *schedule_lock;
>> +
> This empty line above seems to be actually empty, but looking more
> carefully, it does contain 4 spaces, doesn't it?
>
> If that's the case, avoid doing this, i.e., make sure that empty lines
> are actually empty. :-D
>
> Looking at each patch with `git show' should highlight occurrences of
> this phenomenon, as well  as of any trailing white space, by marking
> them in red.
>
>> +    schedule_lock = per_cpu(schedule_data, svc->vcpu->processor).schedule_lock;
>> +    ASSERT( spin_is_locked(schedule_lock) );
>> +
> As of now, the only lock around is prv->lock, isn't it? So this
> per_cpu(xxx) is a complex way to get to prv->lock, or am I missing
> something.

Yes, it's the only lock right now. When I split the RunQ into two
queues, RunQ and DepletedQ, I can still use the single lock (though two
locks would probably be more efficient?).
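
Just to show what I mean (only a sketch, and the field names runq and
depletedq are tentative):

    struct rt_private {
        spinlock_t lock;             /* single global lock guarding both queues */
        struct list_head runq;       /* vcpus with budget left, ordered by deadline */
        struct list_head depletedq;  /* vcpus that have used up their current budget */
        /* ... other fields (cpus, tickled, ...) unchanged ... */
    };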

>
> In credit, the pre-inited set of locks are actually used "as they are",
> while in credit2, there is some remapping going on, but there is more
> than one lock anyway. That's why you find things like the above in those
> two schedulers. Here, you should not need anything like that, (as you do
> everywhere else) just go ahead and use prv->lock.
>
> Of course, that does not mean you don't need the lock remapping in
> rt_alloc_pdata(). That code looks ok to me, just adapt this bit above,
> as, like this, it makes things harder to understand.
>
> Or am I overlooking something?

I don't think you overlooked anything. I will look at how credit2 uses
multiple locks, since it's likely we will end up with two locks here.
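
For this particular spot, I will simply assert on prv->lock directly,
along these lines (sketch):

    struct rt_private *prv = RT_PRIV(ops);

    /* prv->lock is the only scheduler-wide lock for now, so assert on it
     * directly instead of going through per_cpu(schedule_data). */
    ASSERT( spin_is_locked(&prv->lock) );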

>> +/*
>> + * Burn budget in nanosecond granularity
>> + */
>> +static void
>> +burn_budgets(const struct scheduler *ops, struct rt_vcpu *svc, s_time_t now)
>> +{
>>
> burn_budget()? I mean, why the trailing 's'?
>
> (yes, this is a very minor thing.)
>
>> +    s_time_t delta;
>> +
>> +    /* don't burn budget for idle VCPU */
>> +    if ( is_idle_vcpu(svc->vcpu) )
>> +        return;
>> +
>> +    rt_update_helper(now, svc);
>> +
>> +    /* not burn budget when vcpu miss deadline */
>> +    if ( now >= svc->cur_deadline )
>> +        return;
>> +
> How this can be true?

You are right! After rt_update_helper(), the deadline is always ahead
of now, so this check can never trigger. I will change it as you said
and preferred. :-)

>
> Unless I'm missing something, in rt_update_helper(), if the deadline is
> behind now, you move it ahead of it (and replenish the budget). Here you
> check again whether the deadline is behind now, which should not be
> possible, as you just took care of that... Isn't it so?
>
> Considering both mine and George's suggestion, if you rework the helper
> and move the check out of it, then this one is fine (and you just call
> the helper if the condition is verified). If you don't want to do that,
> then I guess you can have the helper returning 0|1 depending on whether
> or not the update happened, and use such value here, for deciding
> whether to bail or not.
>
> I think I'd prefer the former (pulling the check out of the helper).
>


>> +/*
>> + * schedule function for rt scheduler.
>> + * The lock is already grabbed in schedule.c, no need to lock here
>> + */
>> +static struct task_slice
>> +rt_schedule(const struct scheduler *ops, s_time_t now, bool_t tasklet_work_scheduled)
>> +{
>> +    const int cpu = smp_processor_id();
>> +    struct rt_private * prv = RT_PRIV(ops);
>> +    struct rt_vcpu * const scurr = RT_VCPU(current);
>> +    struct rt_vcpu * snext = NULL;
>> +    struct task_slice ret = { .migrated = 0 };
>> +
>> +    /* clear ticked bit now that we've been scheduled */
>> +    if ( cpumask_test_cpu(cpu, &prv->tickled) )
>> +        cpumask_clear_cpu(cpu, &prv->tickled);
>> +
> Is the test important? cpumask operations may be quite expensive, and I
> think always clearing is better than always testing and sometimes
> (rather often, I think) clear.
>
> I'm open to other views on this, though. :-)

I think we can just clear it unconditionally, unless clearing is much
more expensive than testing. If there are no objections, I will simply
clear it in the next version.
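
That is, simply (sketch):

    /* Clear the tickled bit for this cpu unconditionally; no need to
     * test it first. */
    cpumask_clear_cpu(cpu, &prv->tickled);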

> Right. For this round, I tried, while looking at the patch, as hard as I
> could to concentrate on the algorithm, and on how the Xen scheduling
> framework is being used here. As a result, I confirm my previous
> impression that this code is in a fair state and that, as an
> experimental and in-development feature, it could well be checked in
> soon (as far as comments are addressed, of course :-D ).
>

I will address all of these comments this week and do my best to send
out the next version over the weekend. (If not, it should be early next
week; most of the changes are simple. :-) )

Thank you very much!

Best,

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel