search for: pvti

Displaying 20 results from an estimated 21 matches for "pvti".

2017 Feb 08
2
[PATCH RFC 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
...pecial_mapping *sm,
> 		ret = vm_insert_pfn(vma, vmf->address,
> 				__pa_symbol(&__vvar_page) >> PAGE_SHIFT);
> 	} else if (sym_offset == image->sym_pvclock_page) {
> -		struct pvclock_vsyscall_time_info *pvti =
> -			pvclock_pvti_cpu0_va();
> -		if (pvti && vclock_was_used(VCLOCK_PVCLOCK)) {
> -			ret = vm_insert_pfn(
> -				vma,
> -				vmf->address,
> -...
2017 Feb 08
3
[PATCH RFC 0/2] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, Hyper-V TSC page clocksource is suitable for vDSO, however, the protocol defined by the hypervisor is different from VCLOCK_PVCLOCK. I implemented the required support re-using pvclock_page VVAR. Simple sysbench test shows the following results: Before: # time sysbench --test=memory --max-requests=500000 run ... real 1m22.618s user 0m50.193s sys 0m32.268s After: # time sysbench
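For context: the protocol referred to here is the Hyper-V reference TSC page, a single shared page holding a sequence counter, a 64.64 fixed-point scale and an offset. The sketch below shows only the layout and the conversion formula, with field names following the Hyper-V TLFS; it is an illustration of the protocol, not code from the patch.

#include <stdint.h>

/* Hyper-V reference TSC page layout (per the TLFS). */
struct hv_ref_tsc_page {
	volatile uint32_t tsc_sequence;	/* 0 => page contents not valid */
	uint32_t reserved1;
	volatile uint64_t tsc_scale;	/* 64.64 fixed-point multiplier */
	volatile int64_t  tsc_offset;
};

/* Reference time (100ns units) = ((tsc * scale) >> 64) + offset.
 * A real reader must also re-check tsc_sequence and retry if it changed. */
static uint64_t hv_ref_time(uint64_t tsc, uint64_t scale, int64_t offset)
{
	return (uint64_t)(((unsigned __int128)tsc * scale) >> 64) + (uint64_t)offset;
}

This layout differs from the per-vCPU pvclock_vcpu_time_info that VCLOCK_PVCLOCK reads, which is why the series adds a separate vDSO read method rather than extending vread_pvclock().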
2018 Sep 14
0
[patch 09/11] x86/vdso: Simplify the invalid vclock case
...endif
 #ifdef CONFIG_PARAVIRT_CLOCK
@@ -98,7 +73,7 @@ static notrace const struct pvclock_vsys
 	return (const struct pvclock_vsyscall_time_info *)&pvclock_page;
 }
-static notrace u64 vread_pvclock(int *mode)
+static notrace u64 vread_pvclock(void)
 {
 	const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
 	u64 ret;
@@ -130,10 +105,8 @@ static notrace u64 vread_pvclock(int *mo
 	do {
 		version = pvclock_read_begin(pvti);
-		if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT))) {
-			*mode = VCLOCK_NONE;
-			return 0;
-		}
+		if (unlikely(!(pvti->flags &...
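Pieced together from the hunks above, the simplified reader ends up looking roughly like this. It is a sketch of the shape of the function, not the full patch; in particular, returning U64_MAX for the unstable-TSC case is an assumption based on the fact that the series drops the *mode out-parameter.

/*
 * Sketch only, reconstructed from the hunks above. pvclock_read_begin(),
 * pvclock_read_retry() and __pvclock_read_cycles() are the existing pvclock
 * helpers; U64_MAX as the error value is an assumption, see the note above.
 */
static notrace u64 vread_pvclock(void)
{
	const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
	u32 version;
	u64 ret;

	do {
		version = pvclock_read_begin(pvti);

		/* Previously this set *mode = VCLOCK_NONE and returned 0. */
		if (unlikely(!(pvti->flags & PVCLOCK_TSC_STABLE_BIT)))
			return U64_MAX;

		ret = __pvclock_read_cycles(pvti, rdtsc_ordered());
	} while (pvclock_read_retry(pvti, version));

	return ret;
}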
2017 Feb 08
0
[PATCH RFC 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
...so64_enabled = 1;
@@ -112,13 +113,24 @@ static int vvar_fault(const struct vm_special_mapping *sm,
 		ret = vm_insert_pfn(vma, vmf->address,
 				__pa_symbol(&__vvar_page) >> PAGE_SHIFT);
 	} else if (sym_offset == image->sym_pvclock_page) {
-		struct pvclock_vsyscall_time_info *pvti =
-			pvclock_pvti_cpu0_va();
-		if (pvti && vclock_was_used(VCLOCK_PVCLOCK)) {
-			ret = vm_insert_pfn(
-				vma,
-				vmf->address,
-				__pa(pvti) >> PAGE_SHIFT);
+		if (vclock_was_used(VCLOCK_PVCLOCK)) {
+			struct pvclock_vsyscall_time_info *pvti =
+				pvclock_pvti_cpu0_va();...
2018 Sep 14
0
[patch 10/11] x86/vdso: Move cycle_last handling into the caller
...---------------------
 1 file changed, 7 insertions(+), 32 deletions(-)
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -76,9 +76,8 @@ static notrace const struct pvclock_vsys
 static notrace u64 vread_pvclock(void)
 {
 	const struct pvclock_vcpu_time_info *pvti = &get_pvti0()->pvti;
-	u64 ret;
-	u64 last;
 	u32 version;
+	u64 ret;
 	/*
 	 * Note: The kernel and hypervisor must guarantee that cpu ID
@@ -111,13 +110,7 @@ static notrace u64 vread_pvclock(void)
 		ret = __pvclock_read_cycles(pvti, rdtsc_ordered());
 	} while (pvclock_read_retry(pvti,...
2018 Sep 17
11
[patch V2 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime() implementation, which extended the clockid switch case and added yet another slightly different copy of the same code. The extended switch case is especially problematic, as the compiler tends to generate a jump table, which then requires the use of retpolines. If jump tables are disabled, it adds yet another conditional to the existing
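To illustrate the jump-table concern with a stand-alone example (plain userspace C, not the kernel code): a dense switch over clock IDs is often lowered to an indirect jump through a table, which is exactly the construct retpolines make expensive, whereas bounds-checking the ID and indexing a common data structure avoids the indirect branch.

#include <stdint.h>
#include <time.h>

/* Illustrative only: with several dense cases the compiler may emit a jump
 * table, i.e. an indirect branch that retpolines turn into a slow path. */
static uint64_t read_base_switch(clockid_t clk, const uint64_t *bases)
{
	switch (clk) {
	case CLOCK_REALTIME:		return bases[0];
	case CLOCK_MONOTONIC:		return bases[1];
	case CLOCK_MONOTONIC_RAW:	return bases[2];
	case CLOCK_BOOTTIME:		return bases[3];
	case CLOCK_TAI:			return bases[4];
	default:			return 0;
	}
}

/* The alternative direction such cleanups take: validate the ID and index a
 * table, so there is no indirect branch and no per-clock copy of the code. */
static uint64_t read_base_indexed(clockid_t clk, const uint64_t *bases, int nbases)
{
	if (clk < 0 || clk >= nbases)
		return 0;
	return bases[clk];
}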
2018 Sep 14
24
[patch 00/11] x86/vdso: Cleanups, simplifications and CLOCK_TAI support
Matt attempted to add CLOCK_TAI support to the VDSO clock_gettime() implementation, which extended the clockid switch case and added yet another slightly different copy of the same code. The extended switch case is especially problematic, as the compiler tends to generate a jump table, which then requires the use of retpolines. If jump tables are disabled, it adds yet another conditional to the existing
2024 Jan 22
2
[PATCH] mm: Remove double faults once write a device pfn
...>
> diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
> index 7645730dc228..dd2431c2975f 100644
> --- a/arch/x86/entry/vdso/vma.c
> +++ b/arch/x86/entry/vdso/vma.c
> @@ -185,7 +185,8 @@ static vm_fault_t vvar_fault(const struct vm_special_mapping *sm,
> 		if (pvti && vclock_was_used(VDSO_CLOCKMODE_PVCLOCK)) {
> 			return vmf_insert_pfn_prot(vma, vmf->address,
> 					__pa(pvti) >> PAGE_SHIFT,
> -					pgprot_decrypted(vma->vm_page_prot));
> +					pgprot_decrypted(vma->vm_page_prot),
> +					true);
> 		}
> 	}...
2024 Jan 24
1
[PATCH] mm: Remove double faults once write a device pfn
...gt;>>> --- a/arch/x86/entry/vdso/vma.c >> >>>>> +++ b/arch/x86/entry/vdso/vma.c >> >>>>> @@ -185,7 +185,8 @@ static vm_fault_t vvar_fault(const struct >> >>>> vm_special_mapping *sm, >> >>>>> if (pvti && vclock_was_used(VDSO_CLOCKMODE_PVCLOCK)) >> >>>> { >> >>>>> return vmf_insert_pfn_prot(vma, vmf->address, >> >>>>> __pa(pvti) >> PAGE_SHIFT, >> >>&...
2024 Jan 23
2
[PATCH] mm: Remove double faults once write a device pfn
...6/entry/vdso/vma.c
>>> index 7645730dc228..dd2431c2975f 100644
>>> --- a/arch/x86/entry/vdso/vma.c
>>> +++ b/arch/x86/entry/vdso/vma.c
>>> @@ -185,7 +185,8 @@ static vm_fault_t vvar_fault(const struct
>> vm_special_mapping *sm,
>>> 	if (pvti && vclock_was_used(VDSO_CLOCKMODE_PVCLOCK))
>> {
>>> 		return vmf_insert_pfn_prot(vma, vmf->address,
>>> 			__pa(pvti) >> PAGE_SHIFT,
>>> -			pgprot_decrypted(vma-...
2024 Jan 24
2
[PATCH] mm: Remove double faults once write a device pfn
...5730dc228..dd2431c2975f 100644
>>>>> --- a/arch/x86/entry/vdso/vma.c
>>>>> +++ b/arch/x86/entry/vdso/vma.c
>>>>> @@ -185,7 +185,8 @@ static vm_fault_t vvar_fault(const struct
>>>> vm_special_mapping *sm,
>>>>> 	if (pvti && vclock_was_used(VDSO_CLOCKMODE_PVCLOCK))
>>>> {
>>>>> 		return vmf_insert_pfn_prot(vma, vmf->address,
>>>>> 			__pa(pvti) >> PAGE_SHIFT,
>>>>> -...
2017 Feb 09
0
[PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method
...m/page.h>
 #include <asm/desc.h>
 #include <asm/cpufeature.h>
+#include <asm/mshyperv.h>
 #if defined(CONFIG_X86_64)
 unsigned int __read_mostly vdso64_enabled = 1;
@@ -120,6 +121,12 @@ static int vvar_fault(const struct vm_special_mapping *sm,
 				vmf->address,
 				__pa(pvti) >> PAGE_SHIFT);
 		}
+	} else if (sym_offset == image->sym_hvclock_page) {
+		struct ms_hyperv_tsc_page *tsc_pg = hv_get_tsc_page();
+
+		if (tsc_pg && vclock_was_used(VCLOCK_HVCLOCK))
+			ret = vm_insert_pfn(vma, vmf->address,
+					vmalloc_to_pfn(tsc_pg));
 	}
 	if (ret...
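The hunk above only shows the fault-handler side that maps the TSC page into the vDSO; the read method the subject refers to would pair with it roughly as below. This is an approximation for illustration only: hvclock_page stands for the vDSO mapping created by the fault handler, and hv_read_tsc_page() stands for a helper implementing the TSC-page read protocol, assumed to return U64_MAX when the page is invalid. Whether the patch open-codes the read or uses such a helper is not visible in this excerpt.

/* Approximate shape of the VCLOCK_HVCLOCK read side; not a verbatim copy. */
static notrace u64 vread_hvclock(int *mode)
{
	const struct ms_hyperv_tsc_page *tsc_pg =
		(const struct ms_hyperv_tsc_page *)&hvclock_page;
	u64 current_tick = hv_read_tsc_page(tsc_pg);

	if (current_tick != U64_MAX)
		return current_tick;

	*mode = VCLOCK_NONE;
	return 0;
}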
2017 Feb 09
4
[PATCH 0/2] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, Hyper-V TSC page clocksource is suitable for vDSO, however, the protocol defined by the hypervisor is different from VCLOCK_PVCLOCK. Implemented the required support. Simple sysbench test shows the following results: Before: # time sysbench --test=memory --max-requests=500000 run ... real 1m22.618s user 0m50.193s sys 0m32.268s After: # time sysbench --test=memory
2017 Mar 03
4
[PATCH v3 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, merge window is about to close so I hope it's OK to make another try here. Changes since v2: - Add explicit READ_ONCE() to not rely on 'volatile' [Andy Lutomirski] - rdtsc() -> rdtsc_ordered() [Andy Lutomirski] - virt_rmb() -> smp_rmb() [Thomas Gleixner, Andy Lutomirski] Thomas, Andy, it seems the only blocker for the series was the ambiguity with TSC page read algorithm.
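All three v3 changes concern ordering inside the TSC-page read loop. The sketch below is illustrative only, kernel-style code mirroring the structure of such a loop rather than quoting the patch, and marks where each listed change would land.

/* Illustrative only: where the v3 changes sit in a TSC-page read loop. */
static u64 read_tsc_page(const struct ms_hyperv_tsc_page *tsc_pg)
{
	u64 scale, offset, tsc;
	u32 sequence;

	while (1) {
		/* READ_ONCE() instead of relying on 'volatile' qualifiers. */
		sequence = READ_ONCE(tsc_pg->tsc_sequence);
		if (!sequence)
			return U64_MAX;		/* page invalid: caller falls back */

		scale  = READ_ONCE(tsc_pg->tsc_scale);
		offset = READ_ONCE(tsc_pg->tsc_offset);

		/* rdtsc() -> rdtsc_ordered(): keep the TSC read from being
		 * speculated ahead of the loads above. */
		tsc = rdtsc_ordered();

		/* virt_rmb() -> smp_rmb(): order the loads above against the
		 * re-read of the sequence number below. */
		smp_rmb();

		if (READ_ONCE(tsc_pg->tsc_sequence) == sequence)
			break;
	}

	return mul_u64_u64_shr(tsc, scale, 64) + offset;
}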
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, while we're still waiting for a definitive ACK from Microsoft that the algorithm is good for SMP case (as we can't prevent the code in vdso from migrating between CPUs) I'd like to send v2 with some modifications to keep the discussion going. Changes since v1: - Document the TSC page reading protocol [Thomas Gleixner]. - Separate the TSC page reading code from