
Re: [PATCH 1/5] x86/time: deal with negative deltas in get_s_time_fixed()



> That calls on_selected_cpus(), but send_IPI_mask() may then still resort to
> all-but-self. In that case all IPIs are sent in one go.
> Plus as said, how IPIs are sent doesn't matter for the invocation of
> time_calibration_rendezvous_tail(). They'll all run at the same time, not
> one after the other.
At the hardware level, nothing guarantees that all processors respond to the signal simultaneously and start executing the handler within the same nanosecond of the IPI being sent, especially on NUMA configurations. What the laws of physics do or do not allow here is, I'm afraid, beyond the scope of this thread.

> Since further down you build upon that "IPI lag", I fear we first need to
> settle on this aspect of your theory.
I've already provided the delay logs; it's not hard for me to repeat them.

The patch:

@@ -1732,6 +1753,8 @@ static void cf_check local_time_calibration(void)
     if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
     {
         /* Atomically read cpu_calibration struct and write cpu_time struct. */
+        printk("update stime on time calibrate %d, %lu -> %lu (%lu, %lu)\n", smp_processor_id(), t->stamp.local_stime, c->local_stime,
+               t->last_seen_ns, t->last_seen_tsc);
         local_irq_disable();
         t->stamp = *c;
         local_irq_enable();

2 hours of uptime:

(XEN) update stime on time calibrate 0, 8564145820102 -> 8565145861597 (8565145862216, 0)
(XEN) update stime on time calibrate 1, 8564145820129 -> 8565145861609 (8565145863957, 0)
(XEN) update stime on time calibrate 3, 8564145819996 -> 8565145861491 (8565145864800, 0)
(XEN) update stime on time calibrate 2, 8564145820099 -> 8565145861609 (8565145865372, 0)

8565145861609 - 8565145861491 = 118 (ns) * 3 (3.00 GHz) = 354 TSC cycles of lag


3 hours of uptime:

(XEN) update stime on time calibrate 0, 22914730829200 -> 22915730869993 (22915730870665, 0)
(XEN) update stime on time calibrate 1, 22914730829073 -> 22915730869889 (22915730870693, 0)
(XEN) update stime on time calibrate 2, 22914730829052 -> 22915730869841 (22915730872231, 0)
(XEN) update stime on time calibrate 3, 22914730828892 -> 22915730869696 (22915730872096, 0)

22915730869993 - 22915730869696 = 297 (ns) * 3 (3.00 GHz) = 891 TSC cycles of lag


2-3 days of uptime:
(XEN) update stime on time calibrate 0, 254477161980127 -> 254478162020920 (254478162021549, 0)
(XEN) update stime on time calibrate 2, 254477161977638 -> 254478162018429 (254478162022187, 0)
(XEN) update stime on time calibrate 1, 254477161978192 -> 254478162018972 (254478162022776, 0)
(XEN) update stime on time calibrate 3, 254477161976832 -> 254478162017636 (254478162021394, 0)

254478162020920 - 254478162017636 = 3284 (ns) * 3 (3.00 GHz) = 9852 TSC cycles of lag
 
As you can see, the per-core lag is strictly determined by the core's sequence number. I won't argue about what share of this delay is due to rounding error and what share is due to IPI lag. To reproduce this, simply apply the patch above (dropping t->last_seen_ns and t->last_seen_tsc, which I only added for my own understanding), then boot the hypervisor with cpufreq=xen:performance max_cstate=1. The clocksource is left at its default (i.e. hpet).
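
For anyone wanting to reproduce: on a Debian-style dom0 install the two options can usually be added via the Xen GRUB snippet. This is only a sketch of one possible setup (file locations and variable names vary by distro and bootloader):

  # /etc/default/grub.d/xen.cfg (or /etc/default/grub on some systems)
  GRUB_CMDLINE_XEN_DEFAULT="cpufreq=xen:performance max_cstate=1"

  # regenerate the boot config, then reboot
  update-grub

No clocksource= option is passed, so the hypervisor keeps its default (hpet here).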

On Mon, Jan 12, 2026 at 7:08 PM Jan Beulich <jbeulich@xxxxxxxx> wrote:
On 12.01.2026 15:51, Anton Markov wrote:
>> That's if IPIs are sent sequentially. In the most common case, they aren't,
>> though - we use the all-but-self shorthand.
>
>> Actually, even if IPIs are sent sequentially, I can't see where you spot
>> this effect: Both callers of time_calibration_rendezvous_tail() signal all
>> secondary CPUs to continue at the same time. Hence they'll all execute
>> time_calibration_rendezvous_tail() in parallel.
>
> In parallel, but with a slight delay.
>
>> Are they? I fear I don't know which part of the code you're talking about.
>
> In the function "time_calibration" (xen/arch/x86/time.c) Sorry, I don't
> take into account that you don't stay in context, being distracted by other
> threads.

That calls on_selected_cpus(), but send_IPI_mask() may then still resort to
all-but-self. In that case all IPIs are sent in one go.

Plus as said, how IPIs are sent doesn't matter for the invocation of
time_calibration_rendezvous_tail(). They'll all run at the same time, not
one after the other.

Since further down you build upon that "IPI lag", I fear we first need to
settle on this aspect of your theory.

Jan

>> One of the reasons we (iirc) don't do that is that since the scaling factor
>> is also slightly imprecise, we'd prefer to avoid scaling very big values.
>> IOW by changing as you suggest we'd trade one accumulating error for
>> another.
>
> As I wrote above, there will be an error when using scale_delta, but it
> won't accumulate between calls to time_calibration_rendezvous_tail. In the
> current version, the old error (ipi lag + rounding error) persists due to
> the use of the old local_stime in the get_s_time_fixed function, and it's
> added to the new error and accumulates with each call.
> If
>
> c->local_stime = get_s_time_fixed(old_tsc ?: new_tsc);
>
> is replaced with:
>
> c->local_stime = scale_delta(old_tsc ?: new_tsc);
>
> Then we'll only be dealing with the error due to the current ipi lag and
> rough rounding, within a single call.
>
> If I understand you correctly, you fixed the rough rounding of scale_delta
> by reducing the values using get_s_time_fixed . But the problem is, that
> didn't help. The error now accumulates gradually because we're relying on
> old calculations. Furthermore, while the old rounding error was limited,
> now the error accumulates infinitely, albeit very slowly. If this is so,
> then the solution to the problem becomes less obvious.
>
> Roughly speaking, my servers start to go crazy after a week of continuous
> operation, as the time lag between cores reaches 1 millisecond or more.
>
> On Mon, Jan 12, 2026 at 5:13 PM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>
>> On 12.01.2026 13:49, Anton Markov wrote:
>>>> Btw, your prior response was too hard to properly read, due to excess blank
>>>> lines with at the same time squashed leading blanks. Together with your
>>>> apparent inability to avoid top-posting, I think you really want to adjust
>>>> your mail program's configuration.
>>>
>>> I suggest we skip the discussion of formatting and the number of spaces in
>>> messages and instead focus on the topic of the thread. I have a very
>>> difficult time troubleshooting difficult-to-reproduce bugs, and the fact
>>> that their descriptions are difficult to read due to the number of spaces
>>> is probably the least of the difficulties.
>>
>> Perhaps, yet it still makes dealing with things more difficult.
>>
>>>> That invocation of get_s_time_fixed() reduces to scale_delta() (without
>>>> further rdtsc_ordered()), as non-zero at_tsc is passed in all cases. IOW
>>>> it's not quite clear to me what change you are suggesting (that would
>>>> actually make a functional difference).
>>>
>>> Replacing get_s_time_fixed with scale_delta will remove the calculation
>>> dependency on the previous local_stime value, which accumulates lag between
>>> cores. This is because: rdtsc_ordered is not called synchronously on the
>>> cores, but by the difference offset by the ipi speed. Therefore, we get:
>>>
>>> core0: current_rdtsc;
>>> core1: current_rdtsc + ipi speed;
>>> coreN: current_rdtsc + ipi speed * N;
>>
>> That's if IPIs are sent sequentially. In the most common case, they aren't,
>> though - we use the all-but-self shorthand.
>>
>> Actually, even if IPIs are sent sequentially, I can't see where you spot
>> this effect: Both callers of time_calibration_rendezvous_tail() signal all
>> secondary CPUs to continue at the same time. Hence they'll all execute
>> time_calibration_rendezvous_tail() in parallel.
>>
>>> Since ipi values are sent alternately in a loop to core0,
>>
>> Are they? I fear I don't know which part of the code you're talking about.
>>
>>> in the version
>>> with get_s_time_fixed, we get the following local_stime calculation format.
>>>
>>> coreN: local_stime = local_stime + scale_delta((current_rdtsc + (ipi_speed * N)) - local_rdtsc);
>>
>> One of the reasons we (iirc) don't do that is that since the scaling factor
>> is also slightly imprecise, we'd prefer to avoid scaling very big values.
>> IOW by changing as you suggest we'd trade one accumulating error for
>> another.
>>
>> Jan
>>
>>> This means the time on each core will differ by ipi_speed * N. And since
>>> we're using the values of the previous local_stime, the difference will
>>> accumulate because the previous local_stime was also offset. In the version
>>> with scale_delta, we get:
>>>
>>> coreN: local_stime = scale_delta(current_rdtsc + (ipi_speed * N));
>>>
>>> This means there will still be a difference, but it won't accumulate, and
>>> the offsets will remain within normal limits.
>>>
>>> If it's still unclear: If your local_stime in get_s_time_fixed is offset
>>> relative to other cores, then the fact that rdtsc_ordered and local_tsc are
>>> not offset doesn't change anything, since you're using the delta relative
>>> to local_stime.
>>>
>>> core0_local_stime + (rdtsc_ordered() - local_tsc) != core1_local_stime +
>>> (rdtsc_ordered() - local_tsc); // Even if rdtsc_ordered() and local_tsc are
>>> equal across cores.
>>>
>>> On 96-core configurations, up to a millisecond of latency can accumulate in
>>> local_stime over a week of operation, and this is a significant
>>> difference. This
>>> is due to the fact that I use cpufreq=xen:performance max_cstate=1 ,
>>> meaning that in my configuration, local_stime is never overwritten by
>>> master_stime.
>>>
>>> Thanks.
>>
>


 

