search for: pv_kick

Displaying 20 results from an estimated 95 matches for "pv_kick".

2020 Jul 23
2
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
On Thu, Jul 09, 2020 at 12:06:13PM -0400, Waiman Long wrote:
> We don't really need to do a pv_spinlocks_init() if pv_kick() isn't
> supported.
Waiman, if you cannot explain how not having kick is a sane thing, what are you saying here?
2020 Jul 23
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
On 7/23/20 10:00 AM, Peter Zijlstra wrote:
> On Thu, Jul 09, 2020 at 12:06:13PM -0400, Waiman Long wrote:
>> We don't really need to do a pv_spinlocks_init() if pv_kick() isn't
>> supported.
> Waiman, if you cannot explain how not having kick is a sane thing, what
> are you saying here?
The current PPC paravirt spinlock code doesn't do any cpu kick. It does an equivalent of pv_wait by yielding the cpu to the lock holder only. The pv_spi...
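To make the "wait by yielding to the holder" behaviour described in this reply concrete, here is a minimal C sketch of a wait loop in that spirit. It is illustrative only, not the actual pseries implementation: lock_holder_cpu() and yield_to_cpu() are hypothetical stand-ins for whatever the hypervisor interface provides (on pseries, a directed H_CONFER-style yield).

	/*
	 * Illustrative sketch only: spin until the lock is released, but
	 * donate our timeslice to the holder instead of halting and
	 * waiting for a kick.  lock_holder_cpu() and yield_to_cpu() are
	 * hypothetical helpers, not real kernel interfaces.
	 */
	static void pv_wait_by_yield(u32 *lock)
	{
		while (READ_ONCE(*lock)) {
			int holder = lock_holder_cpu(lock);	/* hypothetical */

			if (holder >= 0)
				yield_to_cpu(holder);		/* hypothetical */
			cpu_relax();
		}
	}

Because the waiter only ever yields and never sleeps waiting for a wakeup, no pv_kick() counterpart is required on the unlock side.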
2015 Apr 02
3
[PATCH 8/9] qspinlock: Generic paravirt support
On Thu, Apr 02, 2015 at 12:28:30PM -0400, Waiman Long wrote:
> On 04/01/2015 05:03 PM, Peter Zijlstra wrote:
> >On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
> >>On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
> >>I am sorry that I don't quite get what you mean here. My point is that in
> >>the hashing step, a cpu will need to scan an empty
2015 Mar 18
2
[PATCH 8/9] qspinlock: Generic paravirt support
...code for queue_spin_unlock_slowpath(); provide NOPs for
> + * all the PV callbacks.
> + */
> +
> +static __always_inline void __pv_init_node(struct mcs_spinlock *node) { }
> +static __always_inline void __pv_wait_node(struct mcs_spinlock *node) { }
> +static __always_inline void __pv_kick_node(struct mcs_spinlock *node) { }
> +
> +static __always_inline void __pv_wait_head(struct qspinlock *lock) { }
> +
> +#define pv_enabled() false
> +
> +#define pv_init_node __pv_init_node
> +#define pv_wait_node __pv_wait_node
> +#define pv_kick_node __pv_kick_node
...
2020 Jul 08
2
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
...; be able to change that to also support directed yield. Though I'm
>> not sure if this is actually the cause of the slowdown yet.
>
> Regarding the paravirt lock, I have taken a further look into the
> current PPC spinlock code. There is an equivalent of pv_wait() but no
> pv_kick(). Maybe PPC doesn't really need that.
So powerpc has two types of wait, either undirected "all processors" or directed to a specific processor which has been preempted by the hypervisor. The simple spinlock code does a directed wait, because it knows the CPU which is holding the...
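The distinction drawn in this message can be sketched as two primitives. The helper names below follow the ones the powerpc series introduces, but their exact signatures here are assumptions, not verified interfaces:

	/*
	 * Sketch of the two wait flavours described above; helper names
	 * follow the powerpc series but are assumed, not verified.
	 */
	static void wait_undirected(void)
	{
		/* confer our timeslice to any preempted vCPU of the partition */
		yield_to_any();
	}

	static void wait_directed(int holder_cpu, u32 yield_count)
	{
		/* confer specifically to the (preempted) lock holder */
		yield_to_preempted(holder_cpu, yield_count);
	}

The directed form is only possible when the waiter can identify the holder, which is why the simple spinlock (owner CPU encoded in the lock word) can use it directly.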
2015 Mar 19
0
[PATCH 8/9] qspinlock: Generic paravirt support
...ock can be freed/reused,
> >+ * however we can still use the pointer value to search in our cpu
> >+ * array.
> >+ *
> >+ * XXX: get rid of this loop
> >+ */
> >+	for_each_possible_cpu(cpu) {
> >+		if (per_cpu(__pv_lock_wait, cpu) == lock)
> >+			pv_kick(cpu);
> >+	}
> >+}
>
> I do want to get rid of this loop too. On average, we have to scan about
> half the number of CPUs available. So it isn't that different
> performance-wise compared with my original idea of following the list from
> tail to head.
And how about...
2015 Apr 02
0
[PATCH 8/9] qspinlock: Generic paravirt support
...ck_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -2,6 +2,8 @@
 #error "do not include this file"
 #endif

+#include <linux/hash.h>
+
 /*
  * Implement paravirt qspinlocks; the general idea is to halt the vcpus instead
  * of spinning them.
@@ -107,7 +109,84 @@ static void pv_kick_node(struct mcs_spin
 	pv_kick(pn->cpu);
 }

-static DEFINE_PER_CPU(struct qspinlock *, __pv_lock_wait);
+/*
+ * Hash table using open addressing with a linear probe sequence.
+ *
+ * Since we should not be holding locks from NMI context (very rare indeed) the
+ * max load factor is 0.75, whi...
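Since the excerpt cuts off, here is a small self-contained sketch of the open-addressing scheme it describes: lock addresses are hashed into a fixed table, collisions are resolved by linear probing, and keeping the table sparse (the patch caps the load factor at 0.75) guarantees that a probe terminates. This is illustrative C only, not the kernel's pv_hash()/pv_unhash(); the table size is an assumption, and the concurrency the real code handles with atomics is ignored here:

	#define NSLOTS 256			/* power of two, assumed size */

	struct pv_hash_slot {
		struct qspinlock *lock;		/* key: lock address, NULL = empty */
		int cpu;			/* value: CPU to kick on unlock */
	};
	static struct pv_hash_slot slots[NSLOTS];

	static unsigned int slot_of(struct qspinlock *lock)
	{
		/* drop the low zero bits of the pointer, mask into the table */
		return ((unsigned long)lock >> 4) & (NSLOTS - 1);
	}

	static void hash_insert(struct qspinlock *lock, int cpu)
	{
		unsigned int i = slot_of(lock);

		while (slots[i].lock)		/* linear probe to a free slot */
			i = (i + 1) & (NSLOTS - 1);
		slots[i].lock = lock;
		slots[i].cpu = cpu;
	}

	static int hash_lookup(struct qspinlock *lock)
	{
		unsigned int i = slot_of(lock);

		while (slots[i].lock != lock)	/* terminates: load factor < 1 */
			i = (i + 1) & (NSLOTS - 1);
		return slots[i].cpu;
	}

Compared with the for_each_possible_cpu() scan criticized earlier in the thread, the unlocker now probes only a short run of slots instead of half the CPU array on average.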
2019 Mar 25
2
[PATCH] x86/paravirt: Guard against invalid cpu # in pv_vcpu_is_preempted()
...paravirt.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index c25c38a05c1c..4cfb465dcde4 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -671,6 +671,12 @@ static __always_inline void pv_kick(int cpu)

 static __always_inline bool pv_vcpu_is_preempted(long cpu)
 {
+	/*
+	 * Guard against invalid cpu number or the kernel might panic.
+	 */
+	if (WARN_ON_ONCE((unsigned long)cpu >= nr_cpu_ids))
+		return false;
+
 	return PVOP_CALLEE1(bool, lock.vcpu_is_preempted, cpu);
 }
--
2.18.1
2020 Jul 08
0
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
...nk we might actually
> be able to change that to also support directed yield. Though I'm
> not sure if this is actually the cause of the slowdown yet.
Regarding the paravirt lock, I have taken a further look into the current PPC spinlock code. There is an equivalent of pv_wait() but no pv_kick(). Maybe PPC doesn't really need that. Attached are two additional qspinlock patches that add a CONFIG_PARAVIRT_QSPINLOCKS_LITE option to not require pv_kick(). There is also a fixup patch to be applied after your patchset. I don't have access to a PPC LPAR with shared processor at the...
2020 Jul 08
1
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
...
> Date: Tue, 7 Jul 2020 22:29:16 -0400
> Subject: [PATCH 2/9] locking/pvqspinlock: Introduce
>  CONFIG_PARAVIRT_QSPINLOCKS_LITE
>
> Add a new PARAVIRT_QSPINLOCKS_LITE config option that allows
> architectures to use the PV qspinlock code without the need to use or
> implement a pv_kick() function, thus eliminating the atomic unlock
> overhead. The non-atomic queued_spin_unlock() can be used instead.
> The pv_wait() function will still be needed, but it can be a dummy
> function.
>
> With that option set, the hybrid PV queued/unfair locking code should
> still b...
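Reading the patch description, the "lite" contract boils down to two things: unlock stays the native non-atomic release store of the locked byte, and pv_wait() becomes optional. A hedged C sketch of what that implies; the bodies below are illustrative, not taken from the patch:

	/*
	 * Sketch of the "lite" contract described above; bodies are
	 * illustrative, not the actual patch.
	 */
	static __always_inline void pv_wait_lite(u8 *ptr, u8 val)
	{
		/*
		 * No pv_kick() exists to wake us, so never halt; a dummy
		 * wait (or a plain yield to the hypervisor) is enough.
		 */
		cpu_relax();
	}

	static __always_inline void queued_spin_unlock(struct qspinlock *lock)
	{
		/* non-atomic unlock: a plain release store, no kick needed */
		smp_store_release(&lock->locked, 0);
	}

This is what "eliminating the atomic unlock overhead" refers to: the full PV qspinlock must hook the unlock path with an atomic operation so it can detect halted waiters to kick, which the lite variant avoids entirely.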
2015 Mar 16
0
[PATCH 8/9] qspinlock: Generic paravirt support
...+
+/*
+ * Generate the native code for queue_spin_unlock_slowpath(); provide NOPs for
+ * all the PV callbacks.
+ */
+
+static __always_inline void __pv_init_node(struct mcs_spinlock *node) { }
+static __always_inline void __pv_wait_node(struct mcs_spinlock *node) { }
+static __always_inline void __pv_kick_node(struct mcs_spinlock *node) { }
+
+static __always_inline void __pv_wait_head(struct qspinlock *lock) { }
+
+#define pv_enabled() false
+
+#define pv_init_node __pv_init_node
+#define pv_wait_node __pv_wait_node
+#define pv_kick_node __pv_kick_node
+
+#define pv_wait_head __pv_wait_head
+...
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...Ah yes, clever that.
>
>
> >>+	/*
> >>+	 * At this point the memory pointed at by lock can be freed/reused,
> >>+	 * however we can still use the PV node to kick the CPU.
> >>+	 */
> >>+	if (READ_ONCE(node->state) == vcpu_halted)
> >>+		pv_kick(node->cpu);
> >>+}
> >>+PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_spin_unlock);
> >However I feel the PV_CALLEE_SAVE_REGS_THUNK thing belongs in the x86
> >code.
>
> That is why I originally put my version of the qspinlock_paravirt.h header
> file under arch/x8...