
Re: [Xen-devel] [PATCH v3 2/6] x86/time: implement tsc as clocksource



On 08/29/2016 10:36 AM, Jan Beulich wrote:
>>>> On 26.08.16 at 17:11, <joao.m.martins@xxxxxxxxxx> wrote:
>> On 08/25/2016 11:06 AM, Jan Beulich wrote:
>>>>>> On 24.08.16 at 14:43, <joao.m.martins@xxxxxxxxxx> wrote:
>>>> This patch introduces support for using TSC as platform time source
>>>> which is the highest resolution time and most performant to get (~20
>>>> nsecs).
>>>
>>> Is this indeed still the case with the recently added fences?
>> Hmm, maybe not as fast with the added fences, but definitely the fastest
>> time source. If you prefer I can remove the mention of the time taken.
> 
> I'd say either you re-measure it with the added fences, or remove it
> as potentially being stale/wrong.
OK.
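
(For reference, since the cost of the fences came up: below is a minimal sketch
of a fenced TSC read - not the exact in-tree rdtsc_ordered() - just to show
where the extra latency comes from.)

#include <stdint.h>

/* Sketch only: order the TSC read after earlier instructions, then read it. */
static inline uint64_t rdtsc_fenced(void)
{
    uint32_t lo, hi;

    asm volatile ( "lfence; rdtsc" : "=a" (lo), "=d" (hi) : : "memory" );
    return ((uint64_t)hi << 32) | lo;
}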

>>>>  - Change init_tsctimer to skip all the tests and assume it's called
>>>>  only on reliable TSC conditions and no warps observed. Tidy
>>>>  initialization on verify_tsc_reliability as suggested by Konrad.
>>>
>>> Which means that contrary to what you say in the cover letter
>>> there's nothing to prevent it from being used when CPU hotplug
>>> is possible.
>> Probably the cover letter wasn't completely clear on this. I mention that I
>> split it between Patch 2 and 5 (the intent was easier review), and you can
>> see that I add the check in patch 5, plus preventing onlining of CPUs too.
>>
>>> I don't think you should skip basic sanity checks, and
>>> place entirely on the admin the burden of knowing whether the
>>> option is safe to use.
>> Neither do I think it should be skipped. We mainly don't duplicate these
>> checks. In the end I am just running init_tsctimer in the TSC_RELIABLE block
>> and only if no warps are observed. So putting that in init_tsctimer would
>> just duplicate the previously done checks. Or would you still prefer it as
>> done in the previous version, i.e. all checks in both init_tsctimer and
>> verify_tsc_reliability?
> 
> I'm not sure they're needed in both places; what you need to make
> certain is that they're reliably gone through, and that this happens
> early enough.
They are reliably gone through, and we get to avoid duplicating the checks.
Unless there's a preference to re-add these checks in init_tsctimer, I'll keep
them as is: verify_tsc_reliability(...) needs to perform these checks, and
init_tsctimer is only called under these reliable TSC conditions.
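
To make the intended flow concrete, roughly (simplified sketch, not the exact
patch; tsc_warp_observed() is just a placeholder for the existing warp check):

static void __init verify_tsc_reliability(void)
{
    if ( !boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
        return;

    /* Warp/writability checks run here first ... */
    if ( !tsc_warp_observed() )
        /* ... and only then do we switch over, which calls init_tsctimer(). */
        try_platform_timer(&plt_tsc);
}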

> As to breaking this off into the later patch - please don't forget
> that this (as any) series may get applied in pieces, so deferring a
> crucial check to a later patch is not acceptable. If you mean things
> to be split for easier review you may check whether you can find
> a way to add the check in a prereq patch.
OK, note taken. I'll get this check moved from patch 5 to here, as it's the
place where it should be.


>>>> +            {
>>>> +                struct cpu_time *t = &per_cpu(cpu_time, cpu);
>>>> +
>>>> +                t->stamp.local_tsc = boot_tsc_stamp;
>>>> +                t->stamp.local_stime = 0;
>>>> +                t->stamp.local_stime = get_s_time_fixed(boot_tsc_stamp);
>>>> +                t->stamp.master_stime = t->stamp.local_stime;
>>>> +            }
>>>
>>> Without any synchronization, how "good" are the chances that
>>> this updating would race with something the other CPUs do?
>>
>> I assumed that at this stage init calls are still being invoked and that we
>> update all CPUs' time infos then, but maybe that's a misconception. I can
>> have this part in one function, run it with on_selected_cpus(), and wait
>> until all CPU time structures get updated. Perhaps that would be better?
> 
> I think so - even if the risk of there being a race right now is rather
> low, that way you'd avoid this becoming a problem if secondary
> CPUs get made to do something other than idling at this point in time.
Indeed, I'll do it that way then.
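
Roughly along these lines (sketch only; the field names follow the hunk quoted
above, and on_selected_cpus() is the usual cross-call, here with wait=1):

static void __init reset_percpu_time(void *unused)
{
    struct cpu_time *t = &this_cpu(cpu_time);

    t->stamp.local_tsc = boot_tsc_stamp;
    t->stamp.local_stime = 0;
    t->stamp.local_stime = get_s_time_fixed(boot_tsc_stamp);
    t->stamp.master_stime = t->stamp.local_stime;
}

    ...
    /* Run on every online CPU and wait until they have all updated. */
    on_selected_cpus(&cpu_online_map, reset_percpu_time, NULL, 1);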

>>>> @@ -1528,6 +1607,7 @@ void __init early_time_init(void)
>>>>  
>>>>      preinit_pit();
>>>>      tmp = init_platform_timer();
>>>> +    plt_tsc.frequency = tmp;
>>>
>>> How can this be the TSC frequency? init_platform_timer()
>>> specifically skips the TSC. And I can see no other place where
>>> plt_tsc.frequency gets initialized. Am I overlooking something?
>>>
>> That's the calibrated TSC frequency. Before, I was setting pts->frequency in
>> init_tsctimer through a temporary variable called tsc_freq, so I thought I
>> could just drop the variable and set plt_tsc directly. The difference from
>> previous versions, though, is that since commit 9334029 this value is
>> returned from the platform time source's init() and calibrated against the
>> platform timer, instead of always against the PIT.
> 
> This doesn't seem to answer my primary question: Where does
> plt_tsc.frequency get initialized now? try_platform_timer() and
> init_platform_timer() don't - PIT and ACPI timer use static
> initializers, and HPET gets taken care of in init_hpet(), and hence so
> should init_tsctimer() do (instead of just returning this apparently
> never initialized value). And that's unrelated to what ->init() returns.

plt_tsc.frequency is certainly initialized in early_time_init(). And then in
try_platform_timer() we have plt_src = *pts (pts would be a pointer to plt_tsc
when called from verify_tsc_reliability()).

IOW, effectively I changed from this:

#v2

static u64 tsc_freq;

static s64 __init init_tsctimer(struct platform_timesource *pts)
{
    ...
    pts->frequency = tsc_freq;
    return 1;
}

...

void __init early_time_init(void)
{
    u64 tmp = init_pit_and_calibrate_tsc();

    tsc_freq = tmp;
}

*To:*

#v3

static s64 __init init_tsctimer(struct platform_timesource *pts)
{
    return pts->frequency;
}


void __init early_time_init(void)
{
    ...
    tmp = init_platform_timer();
    plt_tsc.frequency = tmp;
}

Does this answer your question? Note that my purpose with the change was to
remove the tsc_freq temporary variable. If it makes things less clear (as in
doing things differently from the other platform timers), I can go back to v2
in this aspect.

Joao
