search for: pvop_callee1

Displaying results from an estimated 49 matches for "pvop_callee1".

2020 Aug 09
2
[PATCH v3 4/7] x86/paravirt: remove 32-bit support from PARAVIRT_XXL
On 8/7/20 4:38 AM, Juergen Gross wrote: > @@ -377,10 +373,7 @@ static inline pte_t __pte(pteval_t val) > { > pteval_t ret; > > - if (sizeof(pteval_t) > sizeof(long)) > - ret = PVOP_CALLEE2(pteval_t, mmu.make_pte, val, (u64)val >> 32); > - else > - ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); > + ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); > > return (pte_t) { .pte = ret }; Can this now simply return (pte_t) ret? -boris
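
For context on the question above: pte_t on x86 is a wrapper struct, so a plain cast of a pteval_t would not compile; the value has to go through a compound literal or native_make_pte(). A minimal sketch of the relevant definitions, assuming the usual x86 forms (not quoted in this thread):

    typedef unsigned long pteval_t;           /* 64 bits wide on 64-bit kernels */
    typedef struct { pteval_t pte; } pte_t;   /* wrapper struct for type safety */

    static inline pte_t native_make_pte(pteval_t val)
    {
        return (pte_t) { .pte = val };        /* compound literal, not a cast */
    }
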
2020 Aug 10
1
[PATCH v3 4/7] x86/paravirt: remove 32-bit support from PARAVIRT_XXL
...inline pte_t __pte(pteval_t val) >>> { >>> pteval_t ret; >>> - if (sizeof(pteval_t) > sizeof(long)) >>> - ret = PVOP_CALLEE2(pteval_t, mmu.make_pte, val, (u64)val >> >>> 32); >>> - else >>> - ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); >>> + ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); >>> return (pte_t) { .pte = ret }; >> >> Can this now simply return (pte_t) ret? > I don't think so, but I can turn it into > return native_ma...
2020 Aug 07
0
[PATCH v3 4/7] x86/paravirt: remove 32-bit support from PARAVIRT_XXL
...ine void write_ldt_entry(struct desc_struct *dt, int entry, const void *desc) @@ -377,10 +373,7 @@ static inline pte_t __pte(pteval_t val) { pteval_t ret; - if (sizeof(pteval_t) > sizeof(long)) - ret = PVOP_CALLEE2(pteval_t, mmu.make_pte, val, (u64)val >> 32); - else - ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); + ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); return (pte_t) { .pte = ret }; } @@ -389,11 +382,7 @@ static inline pteval_t pte_val(pte_t pte) { pteval_t ret; - if (sizeof(pteval_t) > sizeof(long)) - ret = PVOP_CALLEE2(pteval_t, mmu.pte_val, - p...
2020 Aug 15
0
[PATCH v4 1/6] x86/paravirt: remove 32-bit support from PARAVIRT_XXL
...const void *desc) @@ -375,52 +371,22 @@ static inline void paravirt_release_p4d(unsigned long pfn) static inline pte_t __pte(pteval_t val) { - pteval_t ret; - - if (sizeof(pteval_t) > sizeof(long)) - ret = PVOP_CALLEE2(pteval_t, mmu.make_pte, val, (u64)val >> 32); - else - ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); - - return (pte_t) { .pte = ret }; + return (pte_t) { PVOP_CALLEE1(pteval_t, mmu.make_pte, val) }; } static inline pteval_t pte_val(pte_t pte) { - pteval_t ret; - - if (sizeof(pteval_t) > sizeof(long)) - ret = PVOP_CALLEE2(pteval_t, mmu.pte_val, - pte.pt...
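
The v4 excerpt above compresses both accessors to a single PVOP_CALLEE1 call each. Reconstructed from the truncated diff, the simplified pair plausibly reads as follows (a sketch; the pte_val() body is inferred from the visible pattern, since the excerpt cuts off mid-hunk):

    static inline pte_t __pte(pteval_t val)
    {
        /* 64-bit only now, so the whole value fits in one register */
        return (pte_t) { PVOP_CALLEE1(pteval_t, mmu.make_pte, val) };
    }

    static inline pteval_t pte_val(pte_t pte)
    {
        return PVOP_CALLEE1(pteval_t, mmu.pte_val, pte.pte);
    }
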
2020 Aug 07
4
[PATCH v3 0/7] Remove 32-bit Xen PV guest support
The long-term plan has been to replace Xen PV guests with PVH. The first victims of that plan are 32-bit PV guests, as those are only rarely used these days. Xen on x86 requires 64-bit support, and with Grub2 officially supporting PVH since version 2.04 there is no need to keep 32-bit PV guest support alive in the Linux kernel. Additionally, Meltdown mitigation is not available in the
2019 Jul 15
5
[PATCH 0/2] Remove 32-bit Xen PV guest support
The long-term plan has been to replace Xen PV guests with PVH. The first victims of that plan are 32-bit PV guests, as those are only rarely used these days. Xen on x86 requires 64-bit support, and with Grub2 officially supporting PVH since version 2.04 there is no need to keep 32-bit PV guest support alive in the Linux kernel. Additionally, Meltdown mitigation is not available in the
2020 Jul 01
5
[PATCH v2 0/4] Remove 32-bit Xen PV guest support
The long-term plan has been to replace Xen PV guests with PVH. The first victims of that plan are 32-bit PV guests, as those are only rarely used these days. Xen on x86 requires 64-bit support, and with Grub2 officially supporting PVH since version 2.04 there is no need to keep 32-bit PV guest support alive in the Linux kernel. Additionally, Meltdown mitigation is not available in the
2020 Aug 15
6
[PATCH v4 0/6] x86/paravirt: cleanup after 32-bit PV removal
A lot of cleanup after removal of 32-bit Xen PV guest support in paravirt code. Changes in V4: - dropped patches 1-3, as already committed - addressed comments to V3 - added new patches 5+6 Changes in V3: - addressed comments to V2 - split patch 1 into 2 patches - new patches 3 and 7 Changes in V2: - rebase to 5.8 kernel - addressed comments to V1 - new patches 3 and 4 Juergen Gross (6):
2020 Aug 10
0
[PATCH v3 4/7] x86/paravirt: remove 32-bit support from PARAVIRT_XXL
...oss wrote: >> @@ -377,10 +373,7 @@ static inline pte_t __pte(pteval_t val) >> { >> pteval_t ret; >> >> - if (sizeof(pteval_t) > sizeof(long)) >> - ret = PVOP_CALLEE2(pteval_t, mmu.make_pte, val, (u64)val >> 32); >> - else >> - ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); >> + ret = PVOP_CALLEE1(pteval_t, mmu.make_pte, val); >> >> return (pte_t) { .pte = ret }; > > > Can this now simply return (pte_t) ret? I don't think so, but I can turn it into return native_make_pte(PVOP_CALLEE1(...)); Ju...
2017 Sep 05
3
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...ch/x86/include/asm/paravirt.h >>> index c25dd22f7c70..d9e954fb37df 100644 >>> --- a/arch/x86/include/asm/paravirt.h >>> +++ b/arch/x86/include/asm/paravirt.h >>> @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) >>> return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); >>> } >>> >>> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock) >>> +{ >>> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock); >>> +} >>> + >>> #end...
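
The hunk quoted above adds pv_virt_spin_lock() next to the existing pv_vcpu_is_preempted() helper. A sketch of how a call site would then reach it, assuming the series routes the existing virt_spin_lock() hook through pvops (the wrapper placement below is an assumption, not quoted from the patch):

    /* e.g. in arch/x86/include/asm/qspinlock.h (hypothetical placement) */
    #define virt_spin_lock virt_spin_lock
    static inline bool virt_spin_lock(struct qspinlock *lock)
    {
        /* Indirect pvops call; rewritten at boot, patched away on bare metal */
        return pv_virt_spin_lock(lock);
    }
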
2017 Sep 05
2
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...t a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h > index c25dd22f7c70..d9e954fb37df 100644 > --- a/arch/x86/include/asm/paravirt.h > +++ b/arch/x86/include/asm/paravirt.h > @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) > return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); > } > > +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock) > +{ > + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock); > +} > + > #endif /* SMP && PARAVIRT_SPINLOCKS */ > > #ifdef C...
2019 Mar 25
2
[PATCH] x86/paravirt: Guard against invalid cpu # in pv_vcpu_is_preempted()
...avirt.h @@ -671,6 +671,12 @@ static __always_inline void pv_kick(int cpu) static __always_inline bool pv_vcpu_is_preempted(long cpu) { + /* + * Guard against invalid cpu number or the kernel might panic. + */ + if (WARN_ON_ONCE((unsigned long)cpu >= nr_cpu_ids)) + return false; + return PVOP_CALLEE1(bool, lock.vcpu_is_preempted, cpu); } -- 2.18.1
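
The guard matters because the pvops backend indexes per-CPU data with the caller-supplied cpu number. For illustration, KVM's backend looks roughly like the following (a sketch from memory, not quoted in this thread); an out-of-range cpu would make per_cpu() read past the per-CPU areas:

    __visible bool __kvm_vcpu_is_preempted(long cpu)
    {
        struct kvm_steal_time *src = &per_cpu(steal_time, cpu); /* no bounds check here */

        return !!(src->preempted & KVM_VCPU_PREEMPTED);
    }
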
2018 Aug 13
11
[PATCH v2 00/11] x86/paravirt: several cleanups
This series removes some no longer needed stuff from paravirt infrastructure and puts large quantities of paravirt ops under a new config option PARAVIRT_XXL which is selected by XEN_PV only. A pvops kernel without XEN_PV being configured is about 2.5% smaller with this series applied. tip commit 5800dc5c19f34e6e03b5adab1282535cb102fafd ("x86/paravirt: Fix spectre-v2 mitigations for
2017 Sep 05
7
[PATCH 0/4] make virt_spin_lock() a pvops function
With virt_spin_lock() being a pvops function, the bare-metal case can be optimized by patching the call away completely. When the kernel is running as a guest, it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare-metal behavior. Juergen Gross (4): paravirt: add generic _paravirt_false() function paravirt: switch
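
The cover letter's "patching the call away" works because pvops call sites are indirect calls whose targets the kernel rewrites at boot. On bare metal the slot can point at a trivial false-returning helper (patch 1/4's _paravirt_false(); the body below is a hypothetical reconstruction, not quoted here), which the patching machinery can then collapse into an inlined "xor %eax,%eax":

    /* Hypothetical sketch of the generic helper added by patch 1/4 */
    bool _paravirt_false(void)
    {
        return false;
    }
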