search for: vgetcyc

Displaying 10 results from an estimated 10 matches for "vgetcyc".

2018 Sep 14
0
[patch 10/11] x86/vdso: Move cycle_last handling into the caller
...s a data dependence, so force GCC
- * to generate a branch instead.  I don't barrier() because
- * we don't actually need a barrier, and if this function
- * ever gets inlined it will generate worse code.
- */
-	asm volatile ("");
-	return last;
-}
-
 notrace static inline u64 vgetcyc(int mode)
 {
 	if (mode == VCLOCK_TSC)
-		return vread_tsc();
+		return (u64)rdtsc_ordered();
 #ifdef CONFIG_PARAVIRT_CLOCK
 	else if (mode == VCLOCK_PVCLOCK)
 		return vread_pvclock();
@@ -168,17 +141,19 @@ notrace static inline u64 vgetcyc(int mo
 notrace static int do_hres(clockid_t clk, struct...
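For readability, here is vgetcyc() as it reads once this hunk is applied, a sketch reconstructed from the excerpt above; the excerpt cuts off before the remaining clock modes, so the tail of the function and the U64_MAX fallback are assumptions based on the rest of the series:

notrace static inline u64 vgetcyc(int mode)
{
	if (mode == VCLOCK_TSC)
		return (u64)rdtsc_ordered();	/* fenced TSC read, now inlined here */
#ifdef CONFIG_PARAVIRT_CLOCK
	else if (mode == VCLOCK_PVCLOCK)
		return vread_pvclock();
#endif
	/* Excerpt cuts off here; remaining modes assumed to fall through. */
	return U64_MAX;			/* invalid-mode sentinel from patch 09 */
}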
2018 Sep 14
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...sc_pg);
-
-	if (current_tick != U64_MAX)
-		return current_tick;
-
-	*mode = VCLOCK_NONE;
-	return 0;
+	return hv_read_tsc_page(tsc_pg);
 }
 #endif
@@ -182,47 +150,42 @@ notrace static u64 vread_tsc(void)
 	return last;
 }
-notrace static inline u64 vgetsns(int *mode)
+notrace static inline u64 vgetcyc(int mode)
 {
-	u64 v;
-	cycles_t cycles;
-
-	if (gtod->vclock_mode == VCLOCK_TSC)
-		cycles = vread_tsc();
+	if (mode == VCLOCK_TSC)
+		return vread_tsc();
 #ifdef CONFIG_PARAVIRT_CLOCK
-	else if (gtod->vclock_mode == VCLOCK_PVCLOCK)
-		cycles = vread_pvclock(mode);
+	else if (mode == VCLOCK_...
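The caller-side effect of this simplification, pieced together from the discussion later in this thread (a sketch, not the full hunk): every read helper reports an invalid vclock by returning the U64_MAX sentinel, so do_hres() checks the sign bit once instead of threading a *mode error pointer through the helpers:

	cycles = vgetcyc(gtod->vclock_mode);

	/* U64_MAX has the sign bit set; one test covers the invalid case. */
	if (unlikely((s64)cycles < 0))
		return vdso_fallback_gettime(clk, ts);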
2018 Sep 17
11
[patch V2 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime() implementation, which extended the clockid switch case and added yet another slightly different copy of the same code. The extended switch case is especially problematic, as the compiler tends to generate a jump table, which then requires the use of retpolines. If jump tables are disabled it adds yet another conditional to the existing
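To see why the switch case matters, compare the two code shapes (illustrative only, not code from the series; the basetime_real/mono/tai field names are invented for the illustration): a switch over clockids is typically compiled into an indirect jump through a table, which under retpolines becomes a slow thunk, while indexing a single array with the clockid stays a plain load:

	/* Jump-table shape: indirect branch, needs a retpoline. */
	switch (clk) {
	case CLOCK_REALTIME:	base = &gtod->basetime_real; break;
	case CLOCK_MONOTONIC:	base = &gtod->basetime_mono; break;
	case CLOCK_TAI:		base = &gtod->basetime_tai;  break;
	}

	/* Array shape used by the series: no indirect branch at all. */
	base = &gtod->basetime[clk];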
2018 Sep 14
24
[patch 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime() implementation, which extended the clockid switch case and added yet another slightly different copy of the same code. The extended switch case is especially problematic, as the compiler tends to generate a jump table, which then requires the use of retpolines. If jump tables are disabled it adds yet another conditional to the existing
2018 Sep 14
0
[patch 11/11] x86/vdso: Add CLOCK_TAI support
...2 +-
 arch/x86/entry/vsyscall/vsyscall_gtod.c |    4 ++++
 arch/x86/include/asm/vgtod.h            |    6 +++++-
 3 files changed, 10 insertions(+), 2 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -140,7 +140,7 @@ notrace static inline u64 vgetcyc(int mo

 notrace static int do_hres(clockid_t clk, struct timespec *ts)
 {
-	struct vgtod_ts *base = &gtod->basetime[clk];
+	struct vgtod_ts *base = &gtod->basetime[clk & VGTOD_HRES_MASK];
 	u64 cycles, last, ns;
 	unsigned int seq;

--- a/arch/x86/entry/vsyscall/vsyscall_gtod.c...
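The vgtod.h hunk is truncated in the excerpt, so the constants below are assumptions based on the follow-up discussion of this patch: with VGTOD_TAI occupying basetime[] slot 3 and VGTOD_HRES_MASK == 3, the masking folds CLOCK_TAI onto that spare slot without widening the switch case:

	/*
	 * clk                   binary   clk & VGTOD_HRES_MASK
	 * CLOCK_REALTIME  =  0  0000     0
	 * CLOCK_MONOTONIC =  1  0001     1
	 * CLOCK_TAI       = 11  1011     3   (the assumed VGTOD_TAI slot)
	 */
	struct vgtod_ts *base = &gtod->basetime[clk & VGTOD_HRES_MASK];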
2018 Sep 19
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...serve time going backwards.
> >
> > I'll have a look into that. It needs some thought vs. the fractional part
> > of the base time, but it should be not rocket science to get that
> > correct. Famous last words...
>
> Does the sentinel need to be U64_MAX?  What if vgetcyc and its minions
> returned gtod->cycle_last-1 (for some value of 1), and the caller just
> does "if ((s64)cycles - (s64)last < 0) return fallback; ns +=
> (cycles-last)* ...".  That should just be a "sub ; js ; ".  It's an extra
> load of ->cycle_last, but...
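Spelled out, the suggestion above would look roughly like this (hypothetical, never a posted patch): the failing helper returns gtod->cycle_last - 1 rather than U64_MAX, so a single signed subtraction catches both the invalid-mode case and a TSC reading behind cycle_last, compiling down to the "sub ; js" pair mentioned:

	u64 cycles = vgetcyc(gtod->vclock_mode);	/* cycle_last - 1 on error */
	s64 delta  = (s64)cycles - (s64)gtod->cycle_last;

	if (delta < 0)					/* sub ; js */
		return vdso_fallback_gettime(clk, ts);

	ns += delta * gtod->mult;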
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> On Tue, 18 Sep 2018, Peter Zijlstra wrote:
> > > Your memory serves you right. That's indeed observable on CPUs which
> > > lack TSC_ADJUST.
> >
> > But, if the gtod code can observe this, then why doesn't the code that
> > checks the sync?
>
> Because it depends where the involved CPUs are in the
2018 Sep 14
2
[patch 11/11] x86/vdso: Add CLOCK_TAI support
...call/vsyscall_gtod.c |    4 ++++
>  arch/x86/include/asm/vgtod.h            |    6 +++++-
>  3 files changed, 10 insertions(+), 2 deletions(-)
>
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -140,7 +140,7 @@ notrace static inline u64 vgetcyc(int mo
>
>  notrace static int do_hres(clockid_t clk, struct timespec *ts)
>  {
> -	struct vgtod_ts *base = &gtod->basetime[clk];
> +	struct vgtod_ts *base = &gtod->basetime[clk & VGTOD_HRES_MASK];
>  	u64 cycles, last, ns;
>  	unsigned int seq;
> &...
2018 Sep 18
3
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...code.
	 */
	asm volatile ("");
	return last;
}

That does:

	lfence
	rdtsc
	load gtod->cycle_last

Which obviously allows us to observe a cycles_last that is later than
the rdtsc itself, and thus time can trivially go backwards.

The new code:

	last = gtod->cycle_last;
	cycles = vgetcyc(gtod->vclock_mode);

	if (unlikely((s64)cycles < 0))
		return vdso_fallback_gettime(clk, ts);

	if (cycles > last)
		ns += (cycles - last) * gtod->mult;

looks like:

	load gtod->cycle_last
	lfence
	rdtsc

which avoids that possibility, the cycle_last load must have completed befor...
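For reference, rdtsc_ordered() at the time read roughly as below (a from-memory sketch of arch/x86/include/asm/msr.h, comments abridged); the point is that the fence orders the TSC read only against *earlier* loads, which is why cycle_last has to be loaded first, as the new code does:

static __always_inline unsigned long long rdtsc_ordered(void)
{
	/*
	 * RDTSC is not ordered against preceding loads, so issue a
	 * fence first: lfence, or mfence on CPUs where lfence is not
	 * load-serializing.
	 */
	alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC,
		      "lfence", X86_FEATURE_LFENCE_RDTSC);
	return rdtsc();
}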
2018 Sep 18
3
[patch 09/11] x86/vdso: Simplify the invalid vclock case
> On Sep 18, 2018, at 12:52 AM, Thomas Gleixner <tglx at linutronix.de> wrote:
>
>> On Mon, 17 Sep 2018, John Stultz wrote:
>>> On Mon, Sep 17, 2018 at 12:25 PM, Andy Lutomirski <luto at kernel.org> wrote:
>>> Also, I'm not entirely convinced that this "last" thing is needed at
>>> all.  John, what's the scenario under which we