Displaying 16 results from an estimated 16 matches for "vdso_fallback_gettim".
2018 Sep 14
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...leixner <tglx at linutronix.de>
---
arch/x86/entry/vdso/vclock_gettime.c | 81 +++++++++--------------------------
1 file changed, 21 insertions(+), 60 deletions(-)
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -48,16 +48,6 @@ notrace static long vdso_fallback_gettim
 	return ret;
 }
-notrace static long vdso_fallback_gtod(struct timeval *tv, struct timezone *tz)
-{
-	long ret;
-
-	asm("syscall" : "=a" (ret) :
-	    "0" (__NR_gettimeofday), "D" (tv), "S" (tz) : "memory");
-	return ret;
-}
-
-
 #else...
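For readers who want to experiment with the pattern being removed above, here is a stand-alone user-space sketch of the same x86-64 "syscall" fallback idiom. It is not the kernel source: the function names, the routing of gettimeofday through the clock_gettime fallback, and the timeval conversion are illustrative assumptions only; the inline-asm constraints mirror the hunk above.

/*
 * User-space sketch of the x86-64 "syscall" fallback idiom used in the
 * vDSO. fallback_clock_gettime() mirrors the constraints visible in the
 * hunk above; fallback_gettimeofday() is a hypothetical illustration of
 * serving gettimeofday via the clock_gettime fallback plus a timeval
 * conversion -- not a copy of the kernel's code.
 */
#include <stdio.h>
#include <sys/syscall.h>	/* __NR_clock_gettime */
#include <sys/time.h>		/* struct timeval */
#include <time.h>		/* struct timespec, CLOCK_REALTIME */

static long fallback_clock_gettime(long clock, struct timespec *ts)
{
	long ret;

	/* rax = syscall nr, rdi = clock, rsi = ts; syscall clobbers rcx/r11 */
	asm volatile ("syscall" : "=a" (ret), "=m" (*ts) :
		      "0" (__NR_clock_gettime), "D" (clock), "S" (ts) :
		      "rcx", "r11", "memory");
	return ret;
}

/* Hypothetical: gettimeofday expressed on top of the clock_gettime fallback. */
static long fallback_gettimeofday(struct timeval *tv)
{
	struct timespec ts;
	long ret = fallback_clock_gettime(CLOCK_REALTIME, &ts);

	if (ret == 0 && tv) {
		tv->tv_sec = ts.tv_sec;
		tv->tv_usec = ts.tv_nsec / 1000;
	}
	return ret;
}

int main(void)
{
	struct timeval tv;

	if (fallback_gettimeofday(&tv) == 0)
		printf("%lld.%06ld\n", (long long)tv.tv_sec, (long)tv.tv_usec);
	return 0;
}

Build with e.g. gcc -O2 on an x86-64 Linux box; the raw syscall bypasses both the vDSO and the libc wrappers.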
2018 Sep 17
11
[patch V2 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime()
implementation, which extended the clockid switch case and added yet
another slightly different copy of the same code.
Especially the extended switch case is problematic, as the compiler tends to
generate a jump table, which then requires the use of retpolines. If jump tables
are disabled, it adds yet another conditional to the existing
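The jump-table concern from the excerpt above is easier to see side by side. Below is a stand-alone sketch (struct layout and function names are made up, not the kernel's vDSO data) contrasting a per-clock switch, which the compiler may lower to a jump table and hence an indirect branch needing a retpoline, with the data-driven shape in which the clock id only indexes a per-clock base time.

/*
 * Sketch of the two code shapes being contrasted. The struct layout and
 * names are made up for illustration; they are not the kernel's vDSO
 * data structures.
 */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

struct clock_base { int64_t sec; uint64_t nsec; };

struct vdso_data {
	struct clock_base basetime[4];	/* slot per supported clock id */
};

/*
 * Shape 1: one case per clock id. Every new clock (CLOCK_TAI, ...) adds
 * another near-identical case, and with enough cases the compiler may
 * lower the switch to a jump table, i.e. an indirect branch that needs
 * a retpoline when retpolines are enabled.
 */
static void read_clock_switch(const struct vdso_data *vd, int clk,
			      struct timespec *ts)
{
	switch (clk) {
	case CLOCK_REALTIME:
		*ts = (struct timespec){ .tv_sec = vd->basetime[0].sec,
					 .tv_nsec = vd->basetime[0].nsec };
		break;
	case CLOCK_MONOTONIC:
		*ts = (struct timespec){ .tv_sec = vd->basetime[1].sec,
					 .tv_nsec = vd->basetime[1].nsec };
		break;
	/* CLOCK_TAI would become yet another copy here. */
	}
}

/*
 * Shape 2: the clock id only selects a base-time slot, so one
 * straight-line code path serves all high resolution clocks. (How the
 * real vDSO maps clock ids to slots is out of scope for this sketch.)
 */
static void read_clock_indexed(const struct vdso_data *vd, int clk,
			       struct timespec *ts)
{
	const struct clock_base *base = &vd->basetime[clk];

	ts->tv_sec = base->sec;
	ts->tv_nsec = base->nsec;
}

int main(void)
{
	const struct vdso_data vd = { .basetime = { { 1, 500 }, { 2, 750 } } };
	struct timespec a, b;

	read_clock_switch(&vd, CLOCK_REALTIME, &a);
	read_clock_indexed(&vd, CLOCK_MONOTONIC, &b);
	printf("%lld.%09ld %lld.%09ld\n",
	       (long long)a.tv_sec, a.tv_nsec, (long long)b.tv_sec, b.tv_nsec);
	return 0;
}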
2018 Sep 14
24
[patch 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime()
implementation, which extended the clockid switch case and added yet
another slightly different copy of the same code.
Especially the extended switch case is problematic, as the compiler tends to
generate a jump table, which then requires the use of retpolines. If jump tables
are disabled, it adds yet another conditional to the existing
2018 Sep 14
0
[patch 10/11] x86/vdso: Move cycle_last handling into the caller
...>basetime[clk];
+	u64 cycles, last, ns;
 	unsigned int seq;
-	u64 cycles, ns;
 	do {
 		seq = gtod_read_begin(gtod);
 		ts->tv_sec = base->sec;
 		ns = base->nsec;
+		last = gtod->cycle_last;
 		cycles = vgetcyc(gtod->vclock_mode);
 		if (unlikely((s64)cycles < 0))
 			return vdso_fallback_gettime(clk, ts);
-		ns += (cycles - gtod->cycle_last) * gtod->mult;
+		if (cycles > last)
+			ns += (cycles - last) * gtod->mult;
 		ns >>= gtod->shift;
 	} while (unlikely(gtod_read_retry(gtod, seq)));
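The hunk above is the heart of the change: the cycle_last snapshot is taken inside the seqcount read loop, before the counter is read, and the delta is only added when the counter has actually advanced past it. A self-contained user-space sketch of that read loop follows; the struct layout and helper names are illustrative, the invalid-vclock fallback is omitted, and __rdtsc() stands in for vgetcyc().

/*
 * Self-contained user-space sketch of the read loop in the hunk above:
 * a seqcount-protected snapshot of the base time plus a counter delta
 * that is only added when the counter has moved past the snapshotted
 * cycle_last.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>			/* __rdtsc() */

struct vclock_data {
	atomic_uint seq;		/* even: stable, odd: update in flight */
	uint64_t cycle_last;		/* counter value at the last update */
	uint32_t mult, shift;		/* cycles -> shifted nanoseconds */
	int64_t  sec;			/* base seconds at the last update */
	uint64_t nsec;			/* base nanoseconds, left-shifted by 'shift' */
};

static unsigned int read_begin(struct vclock_data *vd)
{
	unsigned int seq;

	/* Wait until no update is in flight (sequence is even). */
	while ((seq = atomic_load_explicit(&vd->seq, memory_order_acquire)) & 1)
		;
	return seq;
}

static int read_retry(struct vclock_data *vd, unsigned int seq)
{
	/* Order the data reads above before re-checking the sequence. */
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&vd->seq, memory_order_relaxed) != seq;
}

static void do_hres(struct vclock_data *vd, int64_t *sec, uint64_t *nsec)
{
	uint64_t cycles, last, ns;
	unsigned int seq;

	do {
		seq = read_begin(vd);
		*sec = vd->sec;
		ns   = vd->nsec;
		last = vd->cycle_last;	/* snapshot before reading the counter */
		cycles = __rdtsc();
		if (cycles > last)	/* drop a delta that would go backwards */
			ns += (cycles - last) * vd->mult;
		ns >>= vd->shift;
	} while (read_retry(vd, seq));

	*nsec = ns;
}

int main(void)
{
	struct vclock_data vd = { .mult = 1, .shift = 0, .sec = 1000, .nsec = 0 };
	int64_t sec;
	uint64_t nsec;

	vd.cycle_last = __rdtsc();
	do_hres(&vd, &sec, &nsec);
	printf("sec=%lld nsec=%llu\n", (long long)sec, (unsigned long long)nsec);
	return 0;
}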
2018 Sep 18
2
[patch 09/11] x86/vdso: Simplify the invalid vclock case
On Tue, 18 Sep 2018, Thomas Gleixner wrote:
> On Tue, 18 Sep 2018, Peter Zijlstra wrote:
> > > Your memory serves you right. That's indeed observable on CPUs which
> > > lack TSC_ADJUST.
> >
> > But, if the gtod code can observe this, then why doesn't the code that
> > checks the sync?
>
> Because it depends where the involved CPUs are in the
2018 Sep 18
3
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...e
	rdtsc
	load gtod->cycle_last
Which obviously allows us to observe a cycle_last that is later than
the rdtsc itself, and thus time can trivially go backwards.
The new code:
	last = gtod->cycle_last;
	cycles = vgetcyc(gtod->vclock_mode);
	if (unlikely((s64)cycles < 0))
		return vdso_fallback_gettime(clk, ts);
	if (cycles > last)
		ns += (cycles - last) * gtod->mult;
looks like:
	load gtod->cycle_last
	lfence
	rdtsc
which avoids that possibility: the cycle_last load must have completed
before the rdtsc.
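The ordering argument above can be reproduced in a few lines of user-space C. The sketch below assumes the _mm_lfence() and __rdtsc() compiler intrinsics; the kernel itself uses its own rdtsc_ordered()/barrier helpers instead, and the variable here is only a stand-in for gtod->cycle_last.

#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>		/* _mm_lfence(), __rdtsc() */

/* Stand-in for gtod->cycle_last; in reality the timekeeper updates it. */
static volatile uint64_t cycle_last;

static uint64_t delta_since_last_update(void)
{
	uint64_t last, cycles;

	last = cycle_last;	/* load the snapshot first ...            */
	_mm_lfence();		/* ... and fence it against the RDTSC ... */
	cycles = __rdtsc();	/* ... so RDTSC cannot execute early.     */

	/* With the load ordered before RDTSC, cycles < last can only mean
	 * the counter itself moved backwards, so the delta is dropped
	 * instead of wrapping around to a huge unsigned value. */
	return cycles > last ? cycles - last : 0;
}

int main(void)
{
	cycle_last = __rdtsc();
	printf("delta: %llu cycles\n",
	       (unsigned long long)delta_since_last_update());
	return 0;
}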
2017 Feb 09
0
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
...>
 #include <linux/kernel.h>
@@ -32,6 +33,11 @@ extern u8 pvclock_page
 	__attribute__((visibility("hidden")));
 #endif
+#ifdef CONFIG_HYPERV_TSCPAGE
+extern u8 hvclock_page
+	__attribute__((visibility("hidden")));
+#endif
+
 #ifndef BUILD_VDSO32
 notrace static long vdso_fallback_gettime(long clock, struct timespec *ts)
@@ -141,6 +147,32 @@ static notrace u64 vread_pvclock(int *mode)
 	return last;
 }
 #endif
+#ifdef CONFIG_HYPERV_TSCPAGE
+static notrace u64 vread_hvclock(int *mode)
+{
+	const struct ms_hyperv_tsc_page *tsc_pg =
+		(const struct ms_hyperv_tsc_page *)&hvclock_p...
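For reference, the read protocol that a vread_hvclock() implementation follows (per the published Hyper-V TSC page layout: a sequence field, a 64.64 fixed-point scale, a signed offset, and reference time = ((tsc * scale) >> 64) + offset) can be sketched in plain user-space C. The field names follow that layout, but treating a sequence of 0 as "do not use the page", the lfence-before-rdtsc ordering, and the demo values are this sketch's assumptions, not a copy of the patch; the exact invalid-sequence value was one of the points discussed in this thread.

/*
 * User-space sketch of the Hyper-V TSC page read protocol. The real
 * vDSO maps the hypervisor-provided page and uses kernel helpers.
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>			/* _mm_lfence(), __rdtsc() */

struct ms_hyperv_tsc_page {
	volatile uint32_t tsc_sequence;	/* 0 is treated as "page invalid" here */
	uint32_t reserved;
	volatile uint64_t tsc_scale;	/* 64.64 fixed point multiplier */
	volatile int64_t  tsc_offset;
};

/* reference time = ((tsc * scale) >> 64) + offset */
static int read_hvclock(const struct ms_hyperv_tsc_page *pg, uint64_t *time)
{
	uint32_t seq;
	uint64_t scale, tsc;
	int64_t offset;

	do {
		seq = pg->tsc_sequence;
		if (seq == 0)
			return -1;	/* caller must fall back to another clocksource */

		scale  = pg->tsc_scale;
		offset = pg->tsc_offset;
		_mm_lfence();		/* order the loads against RDTSC */
		tsc = __rdtsc();

		*time = (uint64_t)(((unsigned __int128)tsc * scale) >> 64) + offset;

		/* Retry if the hypervisor updated the page meanwhile. */
	} while (pg->tsc_sequence != seq);

	return 0;
}

int main(void)
{
	/* Fake page for the demo: a scale of 1.0 would be 1 << 64, which does
	 * not fit in 64 bits, so use 0.5 (1 << 63) and an offset of 0. */
	struct ms_hyperv_tsc_page pg = {
		.tsc_sequence = 1,
		.tsc_scale    = 1ULL << 63,
		.tsc_offset   = 0,
	};
	uint64_t t;

	if (read_hvclock(&pg, &t) == 0)
		printf("hvclock: %llu\n", (unsigned long long)t);
	return 0;
}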
2017 Feb 09
4
[PATCH 0/2] x86/vdso: Add Hyper-V TSC page clocksource support
Hi,
The Hyper-V TSC page clocksource is suitable for the vDSO; however, the protocol
defined by the hypervisor differs from VCLOCK_PVCLOCK, so this series implements
the required support. A simple sysbench test shows the following results:
Before:
# time sysbench --test=memory --max-requests=500000 run
...
real 1m22.618s
user 0m50.193s
sys 0m32.268s
After:
# time sysbench --test=memory
2017 Mar 03
4
[PATCH v3 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi,
merge window is about to close so I hope it's OK to make another try here.
Changes since v2:
- Add explicit READ_ONCE() to not rely on 'volatile' [Andy Lutomirski]
- rdtsc() -> rdtsc_ordered() [Andy Lutomirski]
- virt_rmb() -> smp_rmb() [Thomas Gleixner, Andy Lutomirski]
Thomas, Andy, it seems the only blocker for the series was the ambiguity with the
TSC page read algorithm.
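To make the first v3 change above concrete: READ_ONCE() forces each field of the shared page to be loaded exactly once per loop iteration instead of relying on volatile-qualified struct members. A user-space stand-in for the kernel macro, applied to a TSC-page-style snapshot, could look like this (the struct and the snapshot() helper are illustrative):

#include <stdint.h>
#include <stdio.h>

#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))

struct tsc_page {
	uint32_t tsc_sequence;
	uint32_t reserved;
	uint64_t tsc_scale;
	int64_t  tsc_offset;
};

/* Every field is loaded exactly once; the compiler may not re-read it later. */
static void snapshot(const struct tsc_page *pg, uint32_t *seq,
		     uint64_t *scale, int64_t *offset)
{
	*seq    = READ_ONCE(pg->tsc_sequence);
	*scale  = READ_ONCE(pg->tsc_scale);
	*offset = READ_ONCE(pg->tsc_offset);
}

int main(void)
{
	struct tsc_page pg = { .tsc_sequence = 1, .tsc_scale = 1ULL << 63 };
	uint32_t seq;
	uint64_t scale;
	int64_t offset;

	snapshot(&pg, &seq, &scale, &offset);
	printf("seq=%u scale=%llu offset=%lld\n",
	       seq, (unsigned long long)scale, (long long)offset);
	return 0;
}

The other two changes, rdtsc() -> rdtsc_ordered() and virt_rmb() -> smp_rmb(), address CPU-level ordering; READ_ONCE() is about what the compiler is allowed to do with the loads.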
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi,
While we're still waiting for a definitive ACK from Microsoft that the
algorithm is good for the SMP case (as we can't prevent the code in the vDSO from
migrating between CPUs), I'd like to send v2 with some modifications to keep
the discussion going.
Changes since v1:
- Document the TSC page reading protocol [Thomas Gleixner].
- Separate the TSC page reading code from