Thomas Gleixner
2018-Sep-18 10:06 UTC
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> On Tue, 18 Sep 2018, Peter Zijlstra wrote:
> > > Your memory serves you right. That's indeed observable on CPUs which
> > > lack TSC_ADJUST.
> >
> > But, if the gtod code can observe this, then why doesn't the code that
> > checks the sync?
>
> Because it depends where the involved CPUs are in the topology. The sync
> code might just run on the same package and simply not see it. Yes, w/o
> TSC_ADJUST the TSC sync code can just fail to see the havoc.

Even with TSC adjust the TSC can be slightly off by design on multi-socket
systems.

Thanks,

	tglx
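[On CPUs that do have TSC_ADJUST, the per-CPU adjust value can be inspected
from userspace, which makes cross-package skew visible even when the kernel's
sync check happened to run on the wrong package. A minimal sketch, assuming
the msr driver (/dev/cpu/N/msr) is loaded and root privileges; illustrative
tooling only, not part of the patch under discussion:]

/* Sketch: dump IA32_TSC_ADJUST (MSR 0x3b) for one CPU via the msr
 * char device, so values can be compared across packages by hand. */
#include <fcntl.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define MSR_IA32_TSC_ADJUST	0x3b

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;
	char path[64];
	uint64_t val;
	int fd;

	snprintf(path, sizeof(path), "/dev/cpu/%d/msr", cpu);
	fd = open(path, O_RDONLY);
	/* pread() at offset == MSR number reads that MSR (8 bytes) */
	if (fd < 0 || pread(fd, &val, sizeof(val), MSR_IA32_TSC_ADJUST) != sizeof(val)) {
		perror(path);
		return 1;
	}
	printf("cpu%d: TSC_ADJUST = %" PRId64 "\n", cpu, (int64_t)val);
	close(fd);
	return 0;
}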
Thomas Gleixner
2018-Sep-18 10:41 UTC
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> > On Tue, 18 Sep 2018, Peter Zijlstra wrote:
> > > > Your memory serves you right. That's indeed observable on CPUs which
> > > > lack TSC_ADJUST.
> > >
> > > But, if the gtod code can observe this, then why doesn't the code that
> > > checks the sync?
> >
> > Because it depends where the involved CPUs are in the topology. The sync
> > code might just run on the same package and simply not see it. Yes, w/o
> > TSC_ADJUST the TSC sync code can just fail to see the havoc.
>
> Even with TSC adjust the TSC can be slightly off by design on multi-socket
> systems.

Here are the gory details:

  https://lore.kernel.org/lkml/3c1737210708230408i7a8049a9m5db49e6c4d89ab62@mail.gmail.com/

The changelog has an explanation as well.

  d8bb6f4c1670 ("x86: tsc prevent time going backwards")

I still have one of the machines which is affected by this.

Thanks,

	tglx
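[For readers without the archive at hand: the approach that commit's
changelog describes is to clamp the raw TSC readout against the cycle value
recorded at the last timekeeping update, so a slightly-behind TSC cannot
make time go backwards. A minimal sketch of the idea, with cycle_last as a
stand-in for the real timekeeping state; not the commit's actual code:]

#include <stdint.h>

static uint64_t cycle_last;	/* updated by the timekeeping code */

/* Never return a value below the last update point: if this CPU's TSC
 * is slightly behind the one that did the update, hand back the update
 * point instead of a smaller (earlier) value. */
static uint64_t read_tsc_clamped(uint64_t cycles)
{
	uint64_t last = cycle_last;

	return cycles >= last ? cycles : last;
}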
Peter Zijlstra
2018-Sep-18 12:48 UTC
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, Sep 18, 2018 at 12:41:57PM +0200, Thomas Gleixner wrote:
> On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> > On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> > > On Tue, 18 Sep 2018, Peter Zijlstra wrote:
> > > > > Your memory serves you right. That's indeed observable on CPUs which
> > > > > lack TSC_ADJUST.
> > > >
> > > > But, if the gtod code can observe this, then why doesn't the code that
> > > > checks the sync?
> > >
> > > Because it depends where the involved CPUs are in the topology. The sync
> > > code might just run on the same package and simply not see it. Yes, w/o
> > > TSC_ADJUST the TSC sync code can just fail to see the havoc.
> >
> > Even with TSC adjust the TSC can be slightly off by design on multi-socket
> > systems.
>
> Here are the gory details:
>
>   https://lore.kernel.org/lkml/3c1737210708230408i7a8049a9m5db49e6c4d89ab62@mail.gmail.com/
>
> The changelog has an explanation as well.
>
>   d8bb6f4c1670 ("x86: tsc prevent time going backwards")
>
> I still have one of the machines which is affected by this.

Are we sure this isn't a load vs rdtsc reorder? Because if I look at the
current code:

notrace static u64 vread_tsc(void)
{
	u64 ret = (u64)rdtsc_ordered();
	u64 last = gtod->cycle_last;

	if (likely(ret >= last))
		return ret;

	/*
	 * GCC likes to generate cmov here, but this branch is extremely
	 * predictable (it's just a function of time and the likely is
	 * very likely) and there's a data dependence, so force GCC
	 * to generate a branch instead.  I don't barrier() because
	 * we don't actually need a barrier, and if this function
	 * ever gets inlined it will generate worse code.
	 */
	asm volatile ("");
	return last;
}

That does:

	lfence
	rdtsc
	load gtod->cycle_last

Which obviously allows us to observe a cycle_last that is later than the
rdtsc itself, and thus time can trivially go backwards.

The new code:

	last = gtod->cycle_last;
	cycles = vgetcyc(gtod->vclock_mode);
	if (unlikely((s64)cycles < 0))
		return vdso_fallback_gettime(clk, ts);
	if (cycles > last)
		ns += (cycles - last) * gtod->mult;

looks like:

	load gtod->cycle_last
	lfence
	rdtsc

which avoids that possibility; the cycle_last load must have completed
before the rdtsc.
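[To make the two orderings concrete, here is a standalone sketch using
compiler intrinsics. fake_gtod is a stand-in for the real vvar data, and
the lfence placement is the only point being illustrated; this is not the
kernel's actual code:]

#include <stdint.h>
#include <x86intrin.h>		/* __rdtsc(), _mm_lfence() */

struct fake_gtod { volatile uint64_t cycle_last; };

/* Old order: lfence + rdtsc first, cycle_last load second. A
 * concurrent timekeeping update between the two can make the observed
 * cycle_last later than the rdtsc value. */
static uint64_t read_old_order(struct fake_gtod *g)
{
	_mm_lfence();			/* order rdtsc after earlier loads */
	uint64_t cycles = __rdtsc();
	uint64_t last = g->cycle_last;	/* may see a *newer* cycle_last */

	return cycles >= last ? cycles : last;
}

/* New order: cycle_last load first. lfence does not execute until all
 * prior instructions complete, so the load is done before rdtsc runs
 * and cycles can only be >= the snapshot it is compared against. */
static uint64_t read_new_order(struct fake_gtod *g)
{
	uint64_t last = g->cycle_last;

	_mm_lfence();			/* complete the load before rdtsc */
	uint64_t cycles = __rdtsc();

	return cycles > last ? cycles : last;
}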