Displaying 8 results from an estimated 8 matches for "mcs_spin_unlock".
2014 Jun 17
2
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
> + * The basic principle of a queue-based spinlock can best be understood
> + * by studying a classic queue-based spinlock implementation called the
> + * MCS lock. The paper below provides a good description for this kind
> + * of lock.
> + *
> + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
> + *
> + * This queue spinlock implementation is based on the MCS lock,
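As a companion to the description above, here is a minimal userspace sketch of an MCS lock (hypothetical code using C11 atomics; the kernel uses its own atomic and barrier primitives). Each waiter spins on a flag in its own queue node, so contention does not bounce a single shared cache line:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct mcs_node {
            struct mcs_node *_Atomic next;
            atomic_bool locked;          /* true while this waiter must spin */
    };

    void mcs_lock(struct mcs_node *_Atomic *tail, struct mcs_node *node)
    {
            struct mcs_node *prev;

            atomic_store(&node->next, NULL);
            atomic_store(&node->locked, true);

            /* Atomically make ourselves the new tail of the queue. */
            prev = atomic_exchange(tail, node);
            if (!prev)
                    return;              /* queue was empty: lock acquired */

            /* Link in behind the old tail, then spin on our own flag. */
            atomic_store(&prev->next, node);
            while (atomic_load_explicit(&node->locked, memory_order_acquire))
                    ;                    /* local spinning, no line bouncing */
    }

    void mcs_unlock(struct mcs_node *_Atomic *tail, struct mcs_node *node)
    {
            struct mcs_node *next = atomic_load(&node->next);

            if (!next) {
                    /* No visible successor: try to swing tail back to empty. */
                    struct mcs_node *expected = node;
                    if (atomic_compare_exchange_strong(tail, &expected, NULL))
                            return;
                    /* A new waiter won the race; wait for it to link in. */
                    while (!(next = atomic_load(&node->next)))
                            ;
            }
            /* Hand the lock directly to the successor. */
            atomic_store_explicit(&next->locked, false, memory_order_release);
    }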
2014 Mar 02
1
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
Forgot to ask...
On 02/26, Waiman Long wrote:
>
> +notify_next:
> + /*
> +	 * Wait, if needed, until the next one in the queue sets up the next field
> + */
> + while (!(next = ACCESS_ONCE(node->next)))
> + arch_mutex_cpu_relax();
> + /*
> + * The next one in queue is now at the head
> + */
> + smp_store_release(&next->wait, false);
Do we really need
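For context on the question being raised: the wait is needed because enqueueing a new waiter is a two-step operation. A hedged sketch (illustrative, not the patch's exact code) of the window the loop guards against:

    prev = xchg(&lock->tail, node);     /* step 1: become the new tail */
    /*
     * Window: this node is already visible through the tail pointer,
     * but prev->next is still NULL, so an unlocker that observes it is
     * not the tail must wait for step 2 before it can pass the lock on.
     */
    ACCESS_ONCE(prev->next) = node;     /* step 2: link behind the old tail */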
2014 Jun 23
0
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
...know what you mean.. So that is actually implied by the last
paragraph, but I suppose I can make it explicit; something like:
 *
 * Another way to look at it is:
 *
 *   lock(tail,locked)
 *     struct mcs_spinlock node;
 *     mcs_spin_lock(tail, &node);
 *     test-and-set locked;
 *     mcs_spin_unlock(tail, &node);
 *
 *   unlock(tail,locked)
 *     clear locked
 *
 * Where we have compressed (tail,locked) into a single u32 word.
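A hedged sketch of that compression (bit positions are illustrative; the actual _Q_* layout in the series differs in detail). With the locked byte in the low bits and the encoded MCS tail in the high bits, the uncontended fast path becomes a single cmpxchg on the whole word:

    #define _Q_LOCKED_VAL   (1U << 0)       /* locked byte: low bits (illustrative) */
    #define _Q_TAIL_MASK    (0xffffU << 16) /* encoded tail: high bits (illustrative) */

    u32 val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
    if (val == 0)
            return;                          /* word was (0,0): lock acquired */
    queue_spin_lock_slowpath(lock, val);     /* tail or locked set: queue up */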
2016 Oct 25
0
[GIT PULL v2 4/5] processor.h: Remove cpu_relax_lowlatency users
...ng/mcs_spinlock.h
+++ b/kernel/locking/mcs_spinlock.h
@@ -28,7 +28,7 @@ struct mcs_spinlock {
#define arch_mcs_spin_lock_contended(l) \
do { \
while (!(smp_load_acquire(l))) \
- cpu_relax_lowlatency(); \
+ cpu_relax(); \
} while (0)
#endif
@@ -108,7 +108,7 @@ void mcs_spin_unlock(struct mcs_spinlock **lock, struct mcs_spinlock *node)
return;
/* Wait until the next pointer is set */
while (!(next = READ_ONCE(node->next)))
- cpu_relax_lowlatency();
+ cpu_relax();
}
/* Pass lock to next waiter. */
diff --git a/kernel/locking/mutex.c b/kernel/locking/mute...
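For reference, a short usage sketch (hypothetical caller, with an assumed shared tail variable) of the helpers this hunk touches; the queue node lives on the caller's stack and is passed to both the lock and unlock side:

    /* Hypothetical caller of the kernel/locking/mcs_spinlock.h helpers. */
    static struct mcs_spinlock *queue_tail;     /* shared tail pointer (assumed) */

    void critical_work(void)
    {
            struct mcs_spinlock node;           /* per-acquisition queue node */

            mcs_spin_lock(&queue_tail, &node);  /* may spin via cpu_relax() */
            /* critical section */
            mcs_spin_unlock(&queue_tail, &node);
    }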
2016 Oct 25
7
[GIT PULL v2 0/5] cpu_relax: drop lowlatency, introduce yield
Peter,
here is v2 with some improved patch descriptions and some fixes. The
previous version has survived one day of linux-next and I only changed
small parts.
So unless there is some other issue, feel free to pull (or to apply
the patches) to tip/locking.
The following changes since commit 07d9a380680d1c0eb51ef87ff2eab5c994949e69:
Linux 4.9-rc2 (2016-10-23 17:10:14 -0700)
are available in