search for: cycles_last

Displaying 15 results from an estimated 15 matches for "cycles_last".

2018 Sep 18
3
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...cenario under which we need it? >> >> So my memory is probably a bit foggy, but I recall that as we >> accelerated gettimeofday, we found that even on systems that claimed >> to have synced TSCs, they were actually just slightly out of sync. >> Enough that right after cycles_last had been updated, a read on >> another cpu could come in just behind cycles_last, resulting in a >> negative interval causing lots of havoc. >> >> So the sanity check is needed to avoid that case. > > Your memory serves you right. That's indeed observable on CPUs...
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...io under which we need it? > > > > So my memory is probably a bit foggy, but I recall that as we > > accelerated gettimeofday, we found that even on systems that claimed > > to have synced TSCs, they were actually just slightly out of sync. > > Enough that right after cycles_last had been updated, a read on > > another cpu could come in just behind cycles_last, resulting in a > > negative interval causing lots of havoc. > > > > So the sanity check is needed to avoid that case. > > Your memory serves you right. That's indeed observable on...
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...hn, what's the scenario under which we need it? > > So my memory is probably a bit foggy, but I recall that as we > accelerated gettimeofday, we found that even on systems that claimed > to have synced TSCs, they were actually just slightly out of sync. > Enough that right after cycles_last had been updated, a read on > another cpu could come in just behind cycles_last, resulting in a > negative interval causing lots of havoc. > > So the sanity check is needed to avoid that case. Your memory serves you right. That's indeed observable on CPUs which lack TSC_ADJUST. @...
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...? > > > > > > So my memory is probably a bit foggy, but I recall that as we > > > accelerated gettimeofday, we found that even on systems that claimed > > > to have synced TSCs, they were actually just slightly out of sync. > > > Enough that right after cycles_last had been updated, a read on > > > another cpu could come in just behind cycles_last, resulting in a > > > negative interval causing lots of havoc. > > > > > > So the sanity check is needed to avoid that case. > > > > Your memory serves you right. Th...
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...ed it? > >> > >> So my memory is probably a bit foggy, but I recall that as we > >> accelerated gettimeofday, we found that even on systems that claimed > >> to have synced TSCs, they were actually just slightly out of sync. > >> Enough that right after cycles_last had been updated, a read on > >> another cpu could come in just behind cycles_last, resulting in a > >> negative interval causing lots of havoc. > >> > >> So the sanity check is needed to avoid that case. > > > > Your memory serves you right. That...
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...>> >>>> So my memory is probably a bit foggy, but I recall that as we >>>> accelerated gettimeofday, we found that even on systems that claimed >>>> to have synced TSCs, they were actually just slightly out of sync. >>>> Enough that right after cycles_last had been updated, a read on >>>> another cpu could come in just behind cycles_last, resulting in a >>>> negative interval causing lots of havoc. >>>> >>>> So the sanity check is needed to avoid that case. >>> >>> Your memory serves...
2018 Sep 14
24
[patch 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime() implementation, which extended the clockid switch case and added yet another slightly different copy of the same code. The extended switch case is especially problematic, as the compiler tends to generate a jump table, which in turn requires the use of retpolines. If jump tables are disabled, it adds yet another conditional to the existing
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote: > On Tue, 18 Sep 2018, Peter Zijlstra wrote: > > > Your memory serves you right. That's indeed observable on CPUs which > > > lack TSC_ADJUST. > > > > But, if the gtod code can observe this, then why doesn't the code that > > checks the sync? > > Because it depends where the involved CPUs are in the
2018 Sep 18
3
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...instead. I don't barrier() because
 * we don't actually need a barrier, and if this function
 * ever gets inlined it will generate worse code. */
 asm volatile (""); return last; }

That does:

    lfence
    rdtsc
    load gtod->cycle_last

Which obviously allows us to observe a cycles_last that is later than the rdtsc itself, and thus time can trivially go backwards.

The new code:

    last = gtod->cycle_last;
    cycles = vgetcyc(gtod->vclock_mode);
    if (unlikely((s64)cycles < 0))
        return vdso_fallback_gettime(clk, ts);
    if (cycles > last)
        ns += (cycles - last) * gto...