search for: vclock

Displaying 15 results from an estimated 42 matches for "vclock".

2018 Sep 14
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
The code flow for the vclocks is convoluted: vclocks that can be invalidated separately from the vsyscall_gtod_data sequence have to store that fact in a separate variable, which is inefficient. Restructure the code so the vclock readout returns cycles and the conversion to nanoseconds is handled at the call si...
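
A minimal sketch of the flow that patch describes, as a stand-alone C translation unit (gtod_data, vclock_cycles and cycles_to_ns are illustrative names, not the kernel's vsyscall_gtod_data/vgetcyc identifiers):

    #include <stdint.h>
    #include <x86intrin.h>          /* __rdtsc() */

    /* Illustrative stand-ins; the real code lives in the x86 vDSO. */
    enum { VCLOCK_NONE, VCLOCK_TSC };
    struct gtod_data { int vclock_mode; uint64_t cycle_last; uint32_t mult; };

    /* The readout returns raw cycles, or a negative value for an invalid
     * vclock, so callers need exactly one validity check instead of a
     * separately maintained "invalid" variable. */
    static int64_t vclock_cycles(const struct gtod_data *gtod)
    {
        if (gtod->vclock_mode == VCLOCK_TSC)
            return (int64_t)__rdtsc();
        return -1;                  /* invalid: caller falls back to syscall */
    }

    /* Conversion to nanoseconds happens at the call site, in one place. */
    static uint64_t cycles_to_ns(const struct gtod_data *gtod, int64_t cycles)
    {
        if (cycles > (int64_t)gtod->cycle_last)
            return ((uint64_t)cycles - gtod->cycle_last) * gtod->mult;
        return 0;
    }
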
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Mon, 17 Sep 2018, John Stultz wrote: > On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto at kernel.org> wrote: > > Also, I'm not entirely convinced that this "last" thing is needed at > > all. John, what's the scenario under which we need it? > > So my memory is probably a bit foggy, but I recall that as we > accelerated gettimeofday, we
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Peter Zijlstra wrote: > On Tue, Sep 18, 2018 at 09:52:26AM +0200, Thomas Gleixner wrote: > > On Mon, 17 Sep 2018, John Stultz wrote: > > > On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto at kernel.org> wrote: > > > > Also, I'm not entirely convinced that this "last" thing is needed at > > > > all. John,
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote: > On Tue, 18 Sep 2018, Thomas Gleixner wrote: > > On Tue, 18 Sep 2018, Peter Zijlstra wrote: > > > > Your memory serves you right. That's indeed observable on CPUs which > > > > lack TSC_ADJUST. > > > > > > But, if the gtod code can observe this, then why doesn't the code that > > >
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Peter Zijlstra wrote: > On Tue, Sep 18, 2018 at 12:41:57PM +0200, Thomas Gleixner wrote: > > I still have one of the machines which is affected by this. > > Are we sure this isn't a load vs rdtsc reorder? Because if I look at the > current code: The load order of last vs. rdtsc does not matter at all.

    CPU0                        CPU1
    ....
    now0 = rdtsc_ordered();
    ...
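
A sketch of the interleaving being described (hypothetical cycle values, assuming CPU1's TSC lags CPU0's by a few cycles with no TSC_ADJUST to correct it):

    #include <stdint.h>

    /* Hypothetical interleaving:
     *
     *   CPU0                              CPU1
     *   now0 = rdtsc_ordered();  (1000)
     *   cycle_last = now0;
     *                                     now1 = rdtsc_ordered();  (998)
     *
     * now1 is read later in real time yet is below cycle_last, so an
     * unsigned (now1 - cycle_last) would wrap to a huge value.  Reordering
     * the last/rdtsc loads on CPU1 cannot change that.  Hence the clamp: */
    static uint64_t delta_cycles(uint64_t now, uint64_t cycle_last)
    {
        return (now > cycle_last) ? now - cycle_last : 0;
    }
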
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Andy Lutomirski wrote: > > On Sep 18, 2018, at 12:52 AM, Thomas Gleixner <tglx at linutronix.de> wrote: > > > >> On Mon, 17 Sep 2018, John Stultz wrote: > >>> On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto at kernel.org> wrote: > >>> Also, I'm not entirely convinced that this "last" thing is
2018 Sep 19
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Wed, 19 Sep 2018, Rasmus Villemoes wrote: > On 2018-09-19 00:46, Thomas Gleixner wrote: > > On Tue, 18 Sep 2018, Andy Lutomirski wrote: > >>> > >> > >> Do we do better if we use signed arithmetic for the whole calculation? > >> Then a small backwards movement would result in a small backwards result. > >> Or we could offset everything so
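
A sketch of the signed-arithmetic variant being floated in this subthread (illustrative only, not presented here as the merged fix; delta_ns_signed is a made-up name):

    #include <stdint.h>

    /* With unsigned math, now < cycle_last wraps to a delta of almost
     * 2^64 and time jumps far forward.  With signed math a small
     * backwards TSC movement yields a correspondingly small backwards
     * result, which can then be clamped or tolerated. */
    static int64_t delta_ns_signed(uint64_t now, uint64_t cycle_last,
                                   uint32_t mult)
    {
        int64_t delta = (int64_t)(now - cycle_last); /* small and negative
                                                        if the TSC lags */
        return delta * (int64_t)mult;
    }
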
2018 Sep 18
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Andy Lutomirski wrote: > > On Sep 18, 2018, at 3:46 PM, Thomas Gleixner <tglx at linutronix.de> wrote: > > On Tue, 18 Sep 2018, Andy Lutomirski wrote: > >> Do we do better if we use signed arithmetic for the whole calculation? > >> Then a small backwards movement would result in a small backwards result. > >> Or we could offset
2018 Sep 18
1
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote: > So if the TSC on CPU1 is slightly behind the TSC on CPU0 then now1 can be > smaller than cycle_last. The TSC sync stuff does not catch the small delta > for unknown raisins. I'll go and find that machine and test that again. Of course it does not trigger anymore. We accumulated code between the point in timekeeping_advance() where the TSC
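
A toy user-space probe, not the kernel's TSC sync check, illustrating why a skew of a few cycles can hide: the cache-line round trip of the measurement itself dwarfs the skew being hunted (compile with -pthread; thread placement is left to the scheduler, so treat the output as illustrative):

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>

    static _Atomic uint64_t published;

    static void *other_cpu(void *arg)
    {
        (void)arg;
        uint64_t t0;
        while ((t0 = atomic_load(&published)) == 0)
            ;                        /* wait for the first TSC value */
        uint64_t t1 = __rdtsc();
        /* The cache-line transfer alone costs on the order of hundreds
         * of cycles, so a hardware skew of a few cycles never makes t1
         * look "behind" here -- the probe cannot see it. */
        printf("observed delta: %lld cycles\n", (long long)(t1 - t0));
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, other_cpu, NULL);
        atomic_store(&published, __rdtsc());
        pthread_join(&t, NULL);
        return 0;
    }
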
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, Sep 18, 2018 at 09:52:26AM +0200, Thomas Gleixner wrote: > On Mon, 17 Sep 2018, John Stultz wrote: > > On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto at kernel.org> wrote: > > > Also, I'm not entirely convinced that this "last" thing is needed at > > > all. John, what's the scenario under which we need it? > > > > So
2018 Sep 27
1
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Wed, 19 Sep 2018, Thomas Gleixner wrote: > On Tue, 18 Sep 2018, Andy Lutomirski wrote: > > > On Sep 18, 2018, at 3:46 PM, Thomas Gleixner <tglx at linutronix.de> wrote: > > > On Tue, 18 Sep 2018, Andy Lutomirski wrote: > > >> Do we do better if we use signed arithmetic for the whole calculation? > > >> Then a small backwards movement would
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote: > On Tue, 18 Sep 2018, Peter Zijlstra wrote: > > > Your memory serves you right. That's indeed observable on CPUs which > > > lack TSC_ADJUST. > > > > But, if the gtod code can observe this, then why doesn't the code that > > checks the sync? > > Because it depends where the involved CPUs are in the
2018 Oct 03
2
[patch 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
...w TSC frequency) and >> then tell L0 that we're done and it can stop emulating TSC accesses. > > That's delightful! Does the emulation magic also work for L1 user > mode? As far as I understand - yes, all rdtsc* calls will trap into L0. > If so, couldn't we drop the HyperV vclock entirely and just > fold the adjustment into the core timekeeping data? (Preferably the > actual core data, which would require core changes, but it could > plausibly be done in arch code, too.) Not all Hyper-V hosts support reenlightenment notifications (and, if I'm not mistaken, yo...
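
For context, the Hyper-V vclock in question reads a reference-TSC page and applies a scale and offset; a minimal sketch of that conversion, with a simplified struct layout rather than the real ms_hyperv_tsc_page:

    #include <stdint.h>

    /* Simplified reference-TSC page layout (illustrative). */
    struct tsc_ref_page {
        volatile uint32_t sequence;          /* 0 means: fall back to MSR */
        volatile uint64_t scale;
        volatile int64_t  offset;
    };

    /* Documented Hyper-V conversion: time = ((tsc * scale) >> 64) + offset.
     * The host rewrites scale/offset (or, during migration, emulates RDTSC)
     * so the result stays continuous across a changed TSC frequency. */
    static uint64_t hv_read_time(const struct tsc_ref_page *p, uint64_t tsc)
    {
        return (uint64_t)(((unsigned __int128)tsc * p->scale) >> 64)
               + (uint64_t)p->offset;
    }
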
2018 Sep 18
3
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...volatile (""); return last; } That does:

    lfence
    rdtsc
    load gtod->cycle_last

Which obviously allows us to observe a cycle_last that is later than the rdtsc itself, and thus time can trivially go backwards. The new code:

    last = gtod->cycle_last;
    cycles = vgetcyc(gtod->vclock_mode);
    if (unlikely((s64)cycles < 0))
        return vdso_fallback_gettime(clk, ts);
    if (cycles > last)
        ns += (cycles - last) * gtod->mult;

looks like:

    load gtod->cycle_last
    lfence
    rdtsc

which avoids that possibility: the cycle_last load must have completed before the rdtsc.
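
A compilable user-space approximation of the new ordering laid out above; rdtsc_ordered_sketch models the kernel's rdtsc_ordered() as lfence-then-rdtsc, and delta_new_order is an illustrative name for the load-cycle_last-first sequence:

    #include <stdint.h>
    #include <x86intrin.h>           /* __rdtsc(), _mm_lfence() */

    /* Models rdtsc_ordered(): the lfence keeps the rdtsc from executing
     * before earlier instructions (including loads) have completed. */
    static inline uint64_t rdtsc_ordered_sketch(void)
    {
        _mm_lfence();
        return __rdtsc();
    }

    /* New order: load cycle_last first, then lfence;rdtsc.  The load has
     * completed before the TSC is read, so cycle_last can never be newer
     * than cycles, and time cannot go backwards from this reordering. */
    static uint64_t delta_new_order(const volatile uint64_t *cycle_last,
                                    uint32_t mult)
    {
        uint64_t last = *cycle_last;             /* load before the fence */
        uint64_t cycles = rdtsc_ordered_sketch();
        return (cycles > last) ? (cycles - last) * mult : 0;
    }
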
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
> On Sep 18, 2018, at 3:46 PM, Thomas Gleixner <tglx at linutronix.de> wrote: > > On Tue, 18 Sep 2018, Andy Lutomirski wrote: >>> On Sep 18, 2018, at 12:52 AM, Thomas Gleixner <tglx at linutronix.de> wrote: >>> >>>>> On Mon, 17 Sep 2018, John Stultz wrote: >>>>> On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto at