
Re: [Xen-devel] [PATCH v4 2/5] x86/time: implement tsc as clocksource




On 09/20/2016 05:17 PM, Joao Martins wrote:
> On 09/20/2016 02:55 PM, Jan Beulich wrote:
>>>>> On 20.09.16 at 12:15, <joao.m.martins@xxxxxxxxxx> wrote:
>>> On 09/20/2016 08:13 AM, Jan Beulich wrote:
>>>>>>> On 19.09.16 at 19:54, <joao.m.martins@xxxxxxxxxx> wrote:
>>>>> On 09/19/2016 05:25 PM, Jan Beulich wrote:
>>>>>>>>> On 19.09.16 at 18:11, <joao.m.martins@xxxxxxxxxx> wrote:
>>>>>>> On 09/19/2016 11:13 AM, Jan Beulich wrote:
>>>>>>>>>>> On 14.09.16 at 19:37, <joao.m.martins@xxxxxxxxxx> wrote:
>>>>>>>>> Since b64438c7c ("x86/time: use correct (local) time stamp in
>>>>>>>>> constant-TSC calibration fast path") updates to cpu time use local
>>>>>>>>> stamps, which means the platform timer is only used to seed the
>>>>>>>>> initial cpu time.  With clocksource=tsc there is no need to be in
>>>>>>>>> sync with another clocksource, so we reseed the local/master stamps
>>>>>>>>> to values of the TSC and update the platform time stamps
>>>>>>>>> accordingly.  Time calibration is set to fire 1 second after we
>>>>>>>>> switch to TSC, so these stamps are reseeded to also ensure
>>>>>>>>> monotonically increasing values right after the point we switch to
>>>>>>>>> TSC.  This is also to avoid the possibility of inconsistent
>>>>>>>>> readings in this short period (i.e. until calibration fires).
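
(To make the reseeding above concrete, here is a minimal standalone
sketch; struct time_stamps and reseed_for_tsc loosely follow the shape
of xen/arch/x86/time.c's bookkeeping, but the fields, the fake TSC and
the flow are invented for illustration and are not the actual patch:)

#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for Xen's per-CPU and platform time stamps. */
struct time_stamps {
    uint64_t tsc_stamp;    /* TSC value at the last (re)seed */
    uint64_t stime_stamp;  /* system time (ns) at the last (re)seed */
};

static uint64_t fake_tsc = 1000000;   /* stand-in for reading the TSC */

/*
 * On switching clocksource to TSC, reseed the local/master stamps from
 * the TSC itself, so subsequent readings extrapolate from a
 * TSC-consistent base rather than from stamps seeded by the previous
 * platform timer (HPET, ACPI PM, ...).
 */
static void reseed_for_tsc(struct time_stamps *local,
                           struct time_stamps *master,
                           uint64_t stime_now)
{
    uint64_t tsc_now = fake_tsc;
    local->tsc_stamp   = master->tsc_stamp   = tsc_now;
    local->stime_stamp = master->stime_stamp = stime_now;
}

int main(void)
{
    struct time_stamps local = { 0, 0 }, master = { 0, 0 };
    reseed_for_tsc(&local, &master, 5000000000ull);
    printf("local: tsc=%llu stime=%llu\n",
           (unsigned long long)local.tsc_stamp,
           (unsigned long long)local.stime_stamp);
    return 0;
}
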
>>>>>>>>
>>>>>>>> And within this one second, which may cover some of Dom0's
>>>>>>>> booting up, it is okay to have inconsistencies?
>>>>>>> It's not okay, which is why I am removing this possibility when
>>>>>>> switching to TSC. The inconsistencies in those readings (if I
>>>>>>> weren't adjusting) would arise because, in that 1 second, we would
>>>>>>> be using the cpu time tuples calculated by the previous calibration
>>>>>>> or platform time initialization (while HPET, ACPI, etc. was still
>>>>>>> the clocksource). Would you prefer me to drop the "avoid" and
>>>>>>> instead say "remove the possibility" in that last sentence?
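
(To spell out where those inconsistencies would come from, a rough
sketch of the extrapolation; get_s_time_sketch and its scaling are
invented for illustration, not Xen's actual code:)

#include <stdint.h>
#include <stdio.h>

struct cpu_time_tuple {
    uint64_t tsc_stamp;   /* TSC at the last calibration/seed */
    uint64_t stime_stamp; /* system time (ns) at the last calibration/seed */
};

/*
 * System time is extrapolated from the per-CPU tuple.  If the tuple was
 * seeded while HPET/ACPI was still the clocksource, every reading in the
 * window before the first calibration extrapolates from that stale base.
 */
static uint64_t get_s_time_sketch(const struct cpu_time_tuple *t,
                                  uint64_t tsc_now, uint64_t tsc_khz)
{
    uint64_t delta = tsc_now - t->tsc_stamp;
    return t->stime_stamp + delta * 1000000ull / tsc_khz;  /* ns */
}

int main(void)
{
    /* Tuple seeded at some earlier point; numbers purely illustrative. */
    struct cpu_time_tuple t = { 1000000, 5000000000ull };
    uint64_t khz = 2000000; /* pretend a 2 GHz TSC */
    printf("now = %llu ns\n",
           (unsigned long long)get_s_time_sketch(&t, 2400000, khz));
    return 0;
}
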
>>>>>>
>>>>>> Let's not do the 2nd step before the 1st, which is the question of
>>>>>> what happens prior to and what actually changes at this first
>>>>>> calibration (after 1 sec).
>>>>> The first calibration won't change much - the 1-sec remark was meant
>>>>> for the case of having nop_rendezvous, which is the first time the
>>>>> platform timer would be used to set the local cpu_time (I will adjust
>>>>> the mention above, as it's misleading for the reader since it doesn't
>>>>> refer to this patch).
>>>>
>>>> So what makes it that it actually _is_ nop_rendezvous after that one
>>>> second? (And yes, part of this may indeed be just bad placement of
>>>> the description, as iirc nop_rendezvous gets introduced only in a later
>>>> patch.)
>>> Because with nop_rendezvous we will be using the platform timer to get
>>> a monotonic time tuple and *set* cpu_time, as opposed to just adding up
>>> a plain TSC delta, as was the case prior to b64438c7c. Thus the
>>> reseeding of the cpu times solves both ends of the problem: with
>>> nop_rendezvous it covers the window until the first calibration fixes
>>> things up, and without nop_rendezvous it removes the latch adjustment
>>> inherited from the initial platform timer.
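
(Schematically, with invented names rather than the actual Xen
functions, the difference between *setting* the tuple from the platform
timer and merely accumulating TSC deltas is:)

#include <stdint.h>
#include <stdio.h>

struct tuple { uint64_t tsc; uint64_t stime; };

/* nop_rendezvous-style: overwrite the tuple with a single monotonic
 * reading derived from the platform timer. */
static void calibrate_set(struct tuple *t, struct tuple platform_now)
{
    *t = platform_now;
}

/* delta-accumulation style: advance stime by the local TSC delta only,
 * never re-reading an external reference. */
static void calibrate_accumulate(struct tuple *t, uint64_t tsc_now,
                                 uint64_t tsc_khz)
{
    t->stime += (tsc_now - t->tsc) * 1000000ull / tsc_khz;
    t->tsc = tsc_now;
}

int main(void)
{
    struct tuple t = { 1000000, 5000000000ull };
    calibrate_accumulate(&t, 3000000, 2000000);   /* 2 GHz, +1 ms */
    printf("accumulated: %llu ns\n", (unsigned long long)t.stime);
    calibrate_set(&t, (struct tuple){ 3000000, 5001000000ull });
    printf("set:         %llu ns\n", (unsigned long long)t.stime);
    return 0;
}
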
>>
>> So am I getting you right (together with the second part of your reply
>> further down) that you escape answering the question raised by saying
>> that it doesn't really matter which rendezvous function gets used, when
>> TSC is the clock source?
> Correct, and in my defense I wasn't escaping the question: despite the
> unfortunate mis-mention in the patch (or my bad English), I think the
> above explains it. During that time window, we now just need to ensure
> that we get monotonic results based solely on the individual CPU time
> (i.e. calls to get_s_time or anything that uses cpu_time). Unless the
> calibration function is doing something wrong/fishy, I don't see a
> reason for this to go wrong.
> 
>> I.e. the introduction of nop_rendezvous is
>> really just to avoid unnecessary overhead?
> Yes, but note that this is only the case since recent commit b64438c7c,
> where cpu_time stime is now incremented with TSC-based deltas and a
> matching TSC stamp; before that it wasn't. The main difference of
> nop_rendezvous versus std_rendezvous (other than avoiding the
> significant overhead) is that we use a single global tuple propagated
> to all cpus, whereas with std_rendezvous each tuple is different and
> will vary according to when each cpu rendezvouses with cpu 0.
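
(In sketch form, with NCPUS and the propagation helpers invented for
illustration:)

#include <stdint.h>
#include <stdio.h>

#define NCPUS 4
struct tuple { uint64_t tsc; uint64_t stime; };

/* nop_rendezvous-style: one globally computed tuple is written to every
 * CPU, so all CPUs extrapolate from an identical base. */
static void propagate_global(struct tuple percpu[NCPUS], struct tuple global)
{
    for (int i = 0; i < NCPUS; i++)
        percpu[i] = global;
}

/* std_rendezvous-style: each CPU records its own tuple at the moment it
 * meets CPU 0, so the bases differ slightly from CPU to CPU. */
static void propagate_per_cpu(struct tuple percpu[NCPUS],
                              const struct tuple at_rendezvous[NCPUS])
{
    for (int i = 0; i < NCPUS; i++)
        percpu[i] = at_rendezvous[i];
}

int main(void)
{
    struct tuple percpu[NCPUS];
    struct tuple seen[NCPUS] = {
        { 1000100, 5000000050ull }, { 1000180, 5000000090ull },
        { 1000260, 5000000130ull }, { 1000340, 5000000170ull },
    };
    propagate_global(percpu, (struct tuple){ 1000000, 5000000000ull });
    printf("global: cpu0=%llu cpu3=%llu\n",
           (unsigned long long)percpu[0].stime,
           (unsigned long long)percpu[3].stime);
    propagate_per_cpu(percpu, seen);
    printf("percpu: cpu0=%llu cpu3=%llu\n",
           (unsigned long long)percpu[0].stime,
           (unsigned long long)percpu[3].stime);
    return 0;
}
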
> 
>> In which case it should
>> probably be a separate patch, saying so in its description.
> OK, I will move that out of Patch 4 into its own patch while keeping
> the same logic.
I have to take back my comment: having double-checked on a test run
overnight with std_rendezvous and the stable bit set, I saw time going
backwards a few times (~100ns), but only after a few hours (initially
there were none, which is probably why I was led into error). This is
in contrast to nop_rendezvous, where I have seen none in weeks.
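
(For completeness, the overnight check is nothing fancy; roughly a loop
of this shape, with clock_gettime() here as a stand-in for whatever
source the actual guest-side harness read:)

#include <stdio.h>
#include <stdint.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    uint64_t prev = now_ns();
    unsigned long backwards = 0;
    for (long i = 0; i < 100000000L; i++) {
        uint64_t cur = now_ns();
        if (cur < prev) {
            backwards++;
            printf("time went backwards by %llu ns\n",
                   (unsigned long long)(prev - cur));
        }
        prev = cur;
    }
    printf("%lu backward steps observed\n", backwards);
    return 0;
}
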

Joao

P.S. My mailer was misbehaving, so if you received a similar earlier
response, please ignore it - sorry for the noise.
