search for: queue_get_lock_qcode


2014 Mar 02
1
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...(atomic_cmpxchg(..., _QSPINLOCK_LOCKED) == my_qcode)
        goto release_node;
goto notify_next;
}
if (prev_qcode & _QSPINLOCK_LOCKED)
        prev_qcode &= ~_QSPINLOCK_LOCKED;
else
        queue_spin_unlock(lock);

> +        while (true) {
> +                u32 qcode;
> +                int retval;
> +
> +                retval = queue_get_lock_qcode(lock, &qcode, my_qcode);
> +                if (retval > 0)
> +                        ;        /* Lock not available yet */
> +                else if (retval < 0)
> +                        /* Lock taken, can release the node & return */
> +                        goto release_node;

I guess this is for 3/8 which adds the optimized version of
queue_get_lock_qcode()...
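Reassembled, the loop above relies on a three-way contract from
queue_get_lock_qcode(): a positive return while the lock is still held, zero
when the lock is free (the caller may then try to set the lock bit), and a
negative return when the lock has been taken on the caller's behalf so the
queue node can be released. A minimal userspace model of that dispatch using
C11 atomics follows; the names mirror the patch, but the word layout and
constant values here are illustrative only.

    #include <stdatomic.h>
    #include <stdio.h>

    #define _QSPINLOCK_LOCKED  1U   /* assumed bit layout, for illustration */
    #define _QCODE_OFFSET      8

    struct qspinlock { _Atomic unsigned int qlcode; };

    /* Generic flavor: report the lock state; mycode is unused here. */
    static int queue_get_lock_qcode(struct qspinlock *lock,
                                    unsigned int *qcode, unsigned int mycode)
    {
        unsigned int qlcode = atomic_load(&lock->qlcode);

        (void)mycode;
        *qcode = qlcode >> _QCODE_OFFSET;
        return qlcode & _QSPINLOCK_LOCKED;      /* > 0 busy, 0 free */
    }

    int main(void)
    {
        /* lock held, one task with qcode 3 queued behind the holder */
        struct qspinlock lock = {
            (3U << _QCODE_OFFSET) | _QSPINLOCK_LOCKED
        };
        unsigned int qcode;
        int retval = queue_get_lock_qcode(&lock, &qcode, 3);

        if (retval > 0)
            puts("lock not available yet");         /* keep spinning */
        else if (retval < 0)
            puts("lock taken, release the node");   /* goto release_node */
        else
            puts("lock is free, try to set the lock bit");
        return 0;
    }
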
2014 Feb 26
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...osedly current qcode value
+ * Return: true if successful, false otherwise
+ */
+static inline int
+queue_spin_trylock_and_clr_qcode(struct qspinlock *lock, u32 qcode)
+{
+        qcode <<= _QCODE_OFFSET;
+        return atomic_cmpxchg(&lock->qlcode, qcode, _QSPINLOCK_LOCKED) == qcode;
+}
+
+#define queue_get_lock_qcode queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value
+ * Return : > 0 if lock is not available
+ *          = 0 if lock is free
+ *          < 0 if...
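The kernel-doc above is cut off at the < 0 case. Read together with the
slowpath loop earlier on this page, a plausible sketch of the optimized
function is: if the stored qcode equals mycode, the caller is the only queued
task, so a free lock can be grabbed with queue_spin_trylock_and_clr_qcode()
in a single cmpxchg, and < 0 then tells the slowpath to release its queue
node. This is a hedged reconstruction, not the patch's exact body.

    /* Hedged sketch of the three-way contract; not the patch's exact body. */
    static inline int
    queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
    {
            u32 qlcode = (u32)atomic_read(&lock->qlcode);

            *qcode = qlcode >> _QCODE_OFFSET;
            if (qlcode & _QSPINLOCK_LOCKED)
                    return 1;       /* > 0: lock not available */
            if ((*qcode == mycode) &&
                queue_spin_trylock_and_clr_qcode(lock, mycode))
                    return -1;      /* < 0: lock taken, release the node */
            return 0;               /* = 0: lock is free */
    }
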
2014 Mar 02
1
[PATCH v5 2/8] qspinlock, x86: Enable x86-64 to use queue spinlock
...ecific queue spinlock union structure
> + */
> +union arch_qspinlock {
> +        struct qspinlock slock;
> +        u8 lock;        /* Lock bit */
> +};

And this enables the optimized version of queue_spin_setlock(). But why does
it check ACCESS_ONCE(qlock->lock) == 0? This is called right after
queue_get_lock_qcode() returns 0, so this lock should likely be unlocked.

Oleg.
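For context, the construct Oleg is questioning looks like the following;
this is reconstructed from the discussion, and the exact patch body may
differ. Since queue_spin_setlock() is called right after
queue_get_lock_qcode() returned 0, the lock byte is expected to be 0, which
is why the extra ACCESS_ONCE() read before the cmpxchg looks redundant.

    /* Reconstructed sketch; the _QSPINLOCK_LOCKED value is assumed. */
    static inline int queue_spin_setlock(struct qspinlock *lock)
    {
            union arch_qspinlock *qlock = (union arch_qspinlock *)lock;

            if (!ACCESS_ONCE(qlock->lock) &&        /* the check in question */
                (cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0))
                    return 1;       /* lock acquired */
            return 0;
    }
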
2014 Feb 26
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...*
+ ************************************************************************
+ * Inline functions used by the queue_spin_lock_slowpath() function     *
+ * that may get superseded by a more optimized version.                 *
+ ************************************************************************
+ */
+
+#ifndef queue_get_lock_qcode
+/**
+ * queue_get_lock_qcode - get the lock & qcode values
+ * @lock : Pointer to queue spinlock structure
+ * @qcode : Pointer to the returned qcode value
+ * @mycode: My qcode value (not used)
+ * Return : > 0 if lock is not available, = 0 if lock is free
+ */
+static inline int
+queue_g...
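The snippet truncates the body at queue_g..., but the v6 hunk later in these
results ([PATCH v6 04/11]) quotes the same function as diff context, so the
generic fallback can be restored:

    static inline int
    queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
    {
            u32 qlcode = (u32)atomic_read(&lock->qlcode);

            *qcode = qlcode >> _QCODE_OFFSET;
            return qlcode & _QSPINLOCK_LOCKED;      /* > 0 locked, 0 free */
    }
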
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
 - Move the optimized 2-task contending code to the generic file to enable
   more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring that
   the lock holder and queue
2014 Mar 12
0
[PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks
...Use cmpxchg to set the lock bit & clear the waiting bit
+         */
+        if (cmpxchg(&qlock->lock_wait, _QSPINLOCK_WAITING,
+                    _QSPINLOCK_LOCKED) == _QSPINLOCK_WAITING)
+                return 1;        /* Got the lock */
+        arch_mutex_cpu_relax();
+        goto retry_lock;
+        }
         return 0;
 }
@@ -172,7 +294,7 @@ queue_get_lock_qcode(struct qspinlock *lock, u32 *qcode, u32 mycode)
         u32 qlcode = (u32)atomic_read(&lock->qlcode);

         *qcode = qlcode >> _QCODE_OFFSET;
-        return qlcode & _QSPINLOCK_LOCKED;
+        return qlcode & _QSPINLOCK_LWMASK;
 }
 #endif /* _Q_MANY_CPUS */
@@ -185,7 +307,7 @@ static __always_in...
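The hunk shows two coupled changes: the second contender hands itself the
lock by flipping the combined lock/wait bytes from _QSPINLOCK_WAITING to
_QSPINLOCK_LOCKED with a single cmpxchg, and queue_get_lock_qcode() now
reports "not available" when either byte is set (_QSPINLOCK_LWMASK). Below
is a runnable two-thread model of that handoff using C11 atomics; the 16-bit
lock_wait layout is assumed from the patch, not confirmed by it.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define LOCKED   0x001U     /* low byte: lock (assumed layout) */
    #define WAITING  0x100U     /* high byte: wait (assumed layout) */

    static _Atomic unsigned int lock_wait = LOCKED | WAITING;

    static void *owner(void *arg)   /* current lock holder */
    {
        (void)arg;
        atomic_fetch_and(&lock_wait, ~LOCKED);  /* unlock: clear lock byte */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, owner, NULL);
        for (;;) {                      /* the waiter's retry_lock loop */
            unsigned int expected = WAITING;
            if (atomic_compare_exchange_weak(&lock_wait, &expected, LOCKED))
                break;                  /* WAITING -> LOCKED: got the lock */
            /* stand-in for arch_mutex_cpu_relax() */
        }
        pthread_join(t, NULL);
        printf("handoff done, lock_wait = %#x\n", atomic_load(&lock_wait));
        return 0;
    }
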
2014 Mar 12
17
[PATCH v6 00/11] qspinlock: a 4-byte queue spinlock with PV support
v5->v6:
 - Change the optimized 2-task contending code to make it fairer at the
   expense of a bit of performance.
 - Add a patch to support unfair queue spinlock for Xen.
 - Modify the PV qspinlock code to follow what was done in the PV ticketlock.
 - Add performance data for the unfair lock as well as the PV support code.

v4->v5:
 - Move the optimized 2-task contending code to the
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7:
 - Remove an atomic operation from the 2-task contending code.
 - Shorten the names of some macros.
 - Make the queue waiter attempt to steal the lock when the unfair lock is
   enabled.
 - Remove lock holder kick from the PV code and fix a race condition.
 - Run the unfair lock & PV code on overcommitted KVM guests to collect
   performance data.

v5->v6:
 - Change the optimized
2014 Mar 12
0
[PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support
...Next queue node addr */
 };
@@ -341,6 +367,11 @@ static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
 { return 0; }
 #endif

+#ifndef queue_get_qcode
+#define queue_get_qcode(lock)        (atomic_read(&(lock)->qlcode) & \
+                                      ~_QSPINLOCK_LOCKED)
+#endif
+
 #ifndef queue_get_lock_qcode
 /**
  * queue_get_lock_qcode - get the lock & qcode values
@@ -496,6 +527,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
         unsigned int cpu_nr, qn_idx;
         struct qnode *node, *next;
         u32 prev_qcode, my_qcode;
+        PV_SET_VAR(int, hcnt, 0);

         /*
          * Try the quick spinning...
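PV_SET_VAR() is not expanded anywhere in these results. A plausible
definition, assumed for illustration rather than taken from the patch, is a
declare-and-initialize macro that compiles away when paravirtualization is
off, so the extra slowpath variable (hcnt) only exists in PV builds:

    /* Assumed definition, for illustration only. */
    #ifdef CONFIG_PARAVIRT_SPINLOCKS
    #define PV_SET_VAR(type, var, val)      type var = (val)
    #else
    #define PV_SET_VAR(type, var, val)
    #endif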