search for: pv_link_and_wait_node

Displaying 20 results from an estimated 23 matches for "pv_link_and_wait_node".

2014 Oct 27
2
[PATCH v12 09/11] pvqspinlock, x86: Add para-virtualization support
...BUILD_BUG_ON(sizeof(struct pv_qnode) > 5*sizeof(struct mcs_spinlock)); - if (!pv_enabled()) - return; - pn->cpustate = PV_CPU_ACTIVE; pn->mayhalt = false; pn->mycpu = smp_processor_id(); @@ -132,9 +129,6 @@ static inline bool pv_link_and_wait_node(u32 old, struct mcs struct pv_qnode *ppn, *pn = (struct pv_qnode *)node; unsigned int count; - if (!pv_enabled()) - return false; - if (!(old & _Q_TAIL_MASK)) { node->locked = true; /* At queue head now */...
2014 Oct 29
1
[PATCH v13 09/11] pvqspinlock, x86: Add para-virtualization support
...enabled. If yes, it has to do an atomic cmpxchg to clear the lock bit or call the slowpath function to kick the queue head cpu. Tracking the head is done in two parts, firstly the pv_wait_head will store its cpu number in whichever node is pointed to by the tail part of the lock word. Secondly, pv_link_and_wait_node() will propagate the existing head from the old to the new tail node. Signed-off-by: Waiman Long <Waiman.Long at hp.com> --- arch/x86/include/asm/paravirt.h | 19 ++ arch/x86/include/asm/paravirt_types.h | 20 ++ arch/x86/include/asm/pvqspinlock.h | 411 +++++++++++++++++++++++...
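The two-part head tracking described in this excerpt can be sketched roughly as follows. This is an illustrative reconstruction based only on the fragments quoted in these results (the head field, INVALID_HEAD, NO_HEAD and decode_tail() appear in the quoted code; the plain spin loop is an assumption, and the real function adds halt/kick handling after a spin threshold and, in the v12/v13 series, returns bool so the caller can skip the native linking branch):

/*
 * Illustrative sketch, not the patch itself.  Assumes the qspinlock
 * context: struct mcs_spinlock, decode_tail(), ACCESS_ONCE(),
 * cpu_relax() and _Q_TAIL_MASK.
 */
#define INVALID_HEAD	-1	/* head cpu not yet known */
#define NO_HEAD		-2	/* this node is the queue head */

struct pv_node {
	struct mcs_spinlock	mcs;
	int			cpu;
	int			head;	/* cpu number of the queue head */
};

static void pv_link_and_wait_node_sketch(u32 old, struct mcs_spinlock *node)
{
	struct pv_node *ppn, *pn = (struct pv_node *)node;

	if (!(old & _Q_TAIL_MASK)) {
		/* Queue was empty: this node becomes the head. */
		pn->head = NO_HEAD;
		return;
	}

	/* Link this node behind the previous tail ... */
	ppn = (struct pv_node *)decode_tail(old);
	ACCESS_ONCE(ppn->mcs.next) = node;

	/* ... wait until the old tail knows who the head is ... */
	while (ACCESS_ONCE(ppn->head) == INVALID_HEAD)
		cpu_relax();

	/* ... and propagate that head cpu to the new tail. */
	pn->head = ppn->head;
}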
2014 Oct 16
2
[PATCH v12 09/11] pvqspinlock, x86: Add para-virtualization support
...enabled. If yes, it has to do an atomic cmpxchg to clear the lock bit or call the slowpath function to kick the queue head cpu. Tracking the head is done in two parts, firstly the pv_wait_head will store its cpu number in whichever node is pointed to by the tail part of the lock word. Secondly, pv_link_and_wait_node() will propagate the existing head from the old to the new tail node. Signed-off-by: Waiman Long <Waiman.Long at hp.com> --- arch/x86/include/asm/paravirt.h | 20 ++ arch/x86/include/asm/paravirt_types.h | 20 ++ arch/x86/include/asm/pvqspinlock.h | 403 +++++++++++++++++++++++...
2014 Oct 24
3
[PATCH v12 09/11] pvqspinlock, x86: Add para-virtualization support
On 10/24/2014 04:47 AM, Peter Zijlstra wrote: > On Thu, Oct 16, 2014 at 02:10:38PM -0400, Waiman Long wrote: >> +static inline void pv_init_node(struct mcs_spinlock *node) >> +{ >> + struct pv_qnode *pn = (struct pv_qnode *)node; >> + >> + BUILD_BUG_ON(sizeof(struct pv_qnode)> 5*sizeof(struct mcs_spinlock)); >> + >> + if (!pv_enabled()) >> +
2014 Jun 15
0
[PATCH 10/11] qspinlock: Paravirt support
...resp. before/after the paired MCS ops. - wait_head/queue_unlock; the interesting part here is finding the head node to kick. Tracking the head is done in two parts, firstly the pv_wait_head will store its cpu number in whichever node is pointed to by the tail part of the lock word. Secondly, pv_link_and_wait_node() will propagate the existing head from the old to the new tail node. Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- arch/x86/include/asm/paravirt.h | 39 +++++++ arch/x86/include/asm/paravirt_types.h | 15 ++ arch/x86/include/asm/qspinlock.h | 25 ++++ arch/x8...
2014 Jun 16
4
[PATCH 10/11] qspinlock: Paravirt support
...f(struct pv_node) > 5*sizeof(struct mcs_spinlock)); > + > + pn->cpu = smp_processor_id(); > + pn->head = INVALID_HEAD; > +} > + > +static inline struct pv_node *pv_decode_tail(u32 tail) > +{ > + return (struct pv_node *)decode_tail(tail); > +} > + > +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) > +{ > + struct pv_node *ppn, *pn = (struct pv_node *)node; > + unsigned int count; > + > + if (!(old & _Q_TAIL_MASK)) { > + pn->head = NO_HEAD; > + return; > + } > + > + ppn = pv_decode_tail(old); > + ACCESS_ONCE(ppn->...
2014 Oct 27
0
[PATCH v12 09/11] pvqspinlock, x86: Add para-virtualization support
...> EXPORT_SYMBOL(queue_spin_lock_slowpath); > + > +#if !defined(_GEN_PV_LOCK_SLOWPATH) && defined(CONFIG_PARAVIRT_SPINLOCKS) > +/* > + * Generate the PV version of the queue_spin_lock_slowpath function > + */ > +#undef pv_init_node > +#undef pv_wait_check > +#undef pv_link_and_wait_node > +#undef pv_wait_head > +#undef EXPORT_SYMBOL > +#undef in_pv_code > + > +#define _GEN_PV_LOCK_SLOWPATH > +#define EXPORT_SYMBOL(x) > +#define in_pv_code return_true > +#define pv_enabled return_false > + > +#include "qspinlock.c" > + > +#endif...
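The hunk quoted above builds a second, PV flavour of queue_spin_lock_slowpath by redefining the pv_* hook macros and then re-including qspinlock.c. A minimal standalone illustration of that self-include technique follows; every name in it (pv_gen_demo.c, _GEN_PV_VARIANT, pv_hook, native_slowpath, pv_slowpath) is invented for the example and is not the kernel's:

/* pv_gen_demo.c -- compile this file on its own; it includes itself once
 * to emit a native and a PV variant of the same function body. */
#include <stdio.h>

#ifndef _GEN_PV_VARIANT
/* First pass: native variant, PV hook compiled out. */
#define pv_hook()	((void)0)
#define SLOWPATH	native_slowpath
#else
/* Second pass: PV variant, hook enabled. */
#undef pv_hook
#undef SLOWPATH
#define pv_hook()	puts("  pv hook called")
#define SLOWPATH	pv_slowpath
#endif

void SLOWPATH(void)
{
	pv_hook();
	puts("  spinning...");
}

#ifndef _GEN_PV_VARIANT
/* Re-include this file to generate the PV flavour of SLOWPATH. */
#define _GEN_PV_VARIANT
#include "pv_gen_demo.c"

int main(void)
{
	puts("native:");
	native_slowpath();
	puts("pv:");
	pv_slowpath();
	return 0;
}
#endif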
2014 Oct 29
15
[PATCH v13 00/11] qspinlock: a 4-byte queue spinlock with PV support
v12->v13: - Change patch 9 to generate separate versions of the queue_spin_lock_slowpath functions for bare metal and PV guest. This reduces the performance impact of the PV code on bare metal systems. v11->v12: - Based on PeterZ's version of the qspinlock patch (https://lkml.org/lkml/2014/6/15/63). - Incorporated many of the review comments from Konrad Wilk and Paolo
2014 Nov 03
0
[PATCH v13 09/11] pvqspinlock, x86: Add para-virtualization support
...(lock); > +} Idem, that static key stuff is wrong, use PV ops to switch between unlock paths. > @@ -354,7 +394,7 @@ queue: > * if there was a previous node; link it and wait until reaching the > * head of the waitqueue. > */ > - if (old & _Q_TAIL_MASK) { > + if (!pv_link_and_wait_node(old, node) && (old & _Q_TAIL_MASK)) { > prev = decode_tail(old); > ACCESS_ONCE(prev->next) = node; > @@ -369,9 +409,11 @@ queue: > * > * *,x,y -> *,0,0 > */ > - while ((val = smp_load_acquire(&lock->val.counter)) & > - _Q_LOCKED_...
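The pattern in the quoted hunk is that pv_link_and_wait_node() is a stub returning false in the native build, so the original linking branch runs unchanged, while the PV build links the node and waits itself and returns true. A hypothetical, self-contained rendering of just that branch (all types and names below are stand-ins for illustration, not the kernel's):

#include <stdbool.h>
#include <stdint.h>

#define _Q_TAIL_MASK	0xffff0000u

struct node { struct node *next; int locked; };

/* Native stub: always false, so the branch below is taken as before. */
static inline bool pv_link_and_wait_node(uint32_t old, struct node *n)
{
	(void)old; (void)n;
	return false;
}

void link_and_wait(uint32_t old, struct node *node, struct node *prev)
{
	/*
	 * In the PV build the hook links the node, waits (possibly halting
	 * the vcpu) and returns true, so this native branch is skipped.
	 */
	if (!pv_link_and_wait_node(old, node) && (old & _Q_TAIL_MASK)) {
		prev->next = node;
		while (!node->locked)	/* stand-in for the MCS spin */
			;
	}
}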
2015 Jan 20
13
[PATCH v14 00/11] qspinlock: a 4-byte queue spinlock with PV support
v13->v14: - Patches 1 & 2: Add queue_spin_unlock_wait() to accommodate commit 78bff1c86 from Oleg Nesterov. - Fix the system hang problem when using PV qspinlock in an over-committed guest due to a race condition in the pv_set_head_in_tail() function. - Increase the MAYHALT_THRESHOLD from 10 to 1024. - Change kick_cpu into a regular function pointer instead of a
2014 Jun 20
2
[PATCH 10/11] qspinlock: Paravirt support
...ops. > > - wait_head/queue_unlock; the interesting part here is finding the > head node to kick. > > Tracking the head is done in two parts, firstly the pv_wait_head will > store its cpu number in whichever node is pointed to by the tail part > of the lock word. Secondly, pv_link_and_wait_node() will propagate the > existing head from the old to the new tail node. I dug in the code and I have some comments about it, but before I post them I was wondering if you have any plans to run any performance tests against the PV ticketlock with normal and over-committed scenarios? Looking at...
2014 Oct 16
15
[PATCH v12 00/11] qspinlock: a 4-byte queue spinlock with PV support
v11->v12: - Based on PeterZ's version of the qspinlock patch (https://lkml.org/lkml/2014/6/15/63). - Incorporated many of the review comments from Konrad Wilk and Paolo Bonzini. - The pvqspinlock code is largely from my previous version with PeterZ's way of going from queue tail to head and his idea of using callee saved calls to KVM and XEN codes. v10->v11: - Use a