Displaying 3 results from an estimated 3 matches for "123adcefd40f".
2020 Jul 24 | 0 | [PATCH v4 6/6] powerpc: implement smp_cond_load_relaxed
...busy loop using the preferred SMT priority pattern.
Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
---
arch/powerpc/include/asm/barrier.h | 14 ++++++++++++++
1 file changed, 14 insertions(+)
diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index 123adcefd40f..9b4671d38674 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -76,6 +76,20 @@ do { \
___p1; \
})
+#define smp_cond_load_relaxed(ptr, cond_expr) ({ \
+ typeof(ptr) __PTR = (ptr); \
+ __unqual_scalar_typeof(*ptr) VAL; \
+ VAL = REA...
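The archive cuts the hunk off at "VAL = REA...". Given the commit message's reference to the preferred SMT priority pattern (the powerpc spin_begin()/spin_end() helpers that lower and restore SMT thread priority around a busy-wait), the rest of the macro plausibly continues along these lines; this is a sketch reconstructed from context, not the verbatim patch:

#define smp_cond_load_relaxed(ptr, cond_expr) ({			\
	typeof(ptr) __PTR = (ptr);					\
	__unqual_scalar_typeof(*ptr) VAL;				\
	VAL = READ_ONCE(*__PTR);					\
	if (unlikely(!(cond_expr))) {					\
		/* slow path: drop SMT priority while we spin */	\
		spin_begin();						\
		do {							\
			VAL = READ_ONCE(*__PTR);			\
		} while (!(cond_expr));					\
		/* condition met: restore normal SMT priority */	\
		spin_end();						\
	}								\
	(typeof(*ptr))VAL;						\
})

The shape matches the generic fallback in include/asm-generic/barrier.h (read, test cond_expr, re-read until it holds, return the last value), with the loop bracketed by the powerpc priority hints.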
2020 Jul 24 | 8 | [PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance
results.
What I've found is I might have been measuring the worst load point for
the paravirt case, and by looking at a range of loads it's clear that
queued spinlocks are overall better even on PV, doubly so when you look
at the generally much improved worst case latencies.
I have defaulted it to N even though
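To connect this cover letter with patch 6/6 above: the generic queued spinlock and rwlock slow paths spin on the lock word through the atomic_cond_read_acquire()/_relaxed() helpers, which expand to smp_cond_load_acquire()/_relaxed(), so the powerpc smp_cond_load_relaxed() override is what lets those spin loops pick up the SMT priority pattern. Roughly, as a sketch of the upstream generic code rather than anything in this series:

/* atomic_cond_read_*() are thin wrappers over smp_cond_load_*() */
#define atomic_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
#define atomic_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))

/* e.g. the queue-head wait in the generic qspinlock slow path spins with
 * (simplified from kernel/locking/qspinlock.c): */
val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));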
2019 Nov 08 | 15 | [PATCH 00/13] Finish off [smp_]read_barrier_depends()
Hi all,
Although [smp_]read_barrier_depends() became part of READ_ONCE() in
commit 76ebbe78f739 ("locking/barriers: Add implicit
smp_read_barrier_depends() to READ_ONCE()"), it still limps on in the
Linux memory model with the sinister hope of attracting innocent new
users so that it becomes impossible to remove altogether.
Let's strike before it's too late: there's only
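For background on the barrier this series removes: since commit 76ebbe78f739, READ_ONCE() itself supplies the address-dependency ordering (which only has teeth on DEC Alpha), so the old two-step pattern collapses to a plain READ_ONCE(). A rough before/after illustration, not taken from the series itself:

struct foo { int field; };
struct foo *gp;

/* Old pattern: explicit barrier between loading the pointer and
 * dereferencing it, needed only on Alpha. */
struct foo *p = READ_ONCE(gp);
smp_read_barrier_depends();
int a = p->field;

/* Pattern after 76ebbe78f739: READ_ONCE() already implies the
 * dependency ordering, so the explicit barrier is redundant. */
struct foo *q = READ_ONCE(gp);
int b = q->field;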