Displaying 20 results from an estimated 20 matches for "__smp_store_release".
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...structions from being
> > > @@ -67,18 +59,18 @@
> > > #define data_barrier(x) \
> > > asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> > >
> > > -#define smp_store_release(p, v) \
> > > +#define __smp_store_release(p, v) \
> > > do { \
> > > compiletime_assert_atomic_type(*p); \
> > > - smp_lwsync(); \
> > > + __smp_lwsync(); \
> >
> > , therefore this will emit an lwsync no matter SMP or UP.
>
> Absolutely. But smp_store_r...
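The exchange above contrasts __smp_store_release(), which always emits an
lwsync on powerpc, with the smp_store_release() wrapper, which only does so on
SMP builds. A minimal sketch of that split - simplified, not the exact patched
tree (the real powerpc header spells the instruction via the kernel's LWSYNC
macro rather than a literal mnemonic) - looks like this:

/* arch/powerpc/include/asm/barrier.h (sketch): ordering is unconditional */
#define __smp_lwsync()	__asm__ __volatile__ ("lwsync" : : : "memory")

#define __smp_store_release(p, v)					\
do {									\
	compiletime_assert_atomic_type(*p);				\
	__smp_lwsync();	/* order prior accesses before the store */	\
	WRITE_ONCE(*p, v);						\
} while (0)

/* include/asm-generic/barrier.h (sketch): only SMP builds pay for lwsync */
#ifdef CONFIG_SMP
#define smp_store_release(p, v)	__smp_store_release(p, v)
#else
#define smp_store_release(p, v)						\
do {									\
	compiletime_assert_atomic_type(*p);				\
	barrier();	/* a compiler barrier suffices on UP */		\
	WRITE_ONCE(*p, v);						\
} while (0)
#endif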
2016 Jan 06
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...ave another user,
please see this mail:
http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
in definition of PPC's __atomic_op_release().
But I think removing smp_lwsync() is a good idea and actually I think we
can go further to remove __smp_lwsync() and let __smp_load_acquire and
__smp_store_release call __lwsync() directly, but that is another thing.
Anyway, I will modify my patch.
Regards,
Boqun
>
> > > > > WRITE_ONCE(*p, v); \
> > > > > } while (0)
> > > > >
> > > > > -#define smp_load_acquire(p) \
> > &...
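The other user Boqun points at is, per the gmane link, PPC's
__atomic_op_release(). That definition is not quoted in this thread; a rough
reconstruction of the shape being discussed (a relaxed atomic op preceded by
the release barrier, here still spelled with smp_lwsync()) would be:

/* reconstructed for illustration only; see the linked mail for the real patch */
#define __atomic_op_release(op, args...)				\
({									\
	smp_lwsync();	/* release ordering before the relaxed op */	\
	op##_relaxed(args);						\
})

Dropping smp_lwsync() from barrier.h would force this caller to switch to
__smp_lwsync() or __lwsync(), which is why Boqun offers to modify his patch.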
2016 Jan 10
0
[PATCH v3 27/41] x86: define __smp_xxx
...mp_rmb() dma_rmb()
+#define __smp_wmb() barrier()
+#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
#if defined(CONFIG_X86_PPRO_FENCE)
@@ -50,31 +43,31 @@
* model and we should fall back to full barriers.
*/
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_...
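The hunk above only shows the CONFIG_X86_PPRO_FENCE case, where the weaker
Pentium Pro ordering model makes the comment's point: release/acquire must
fall back to a full __smp_mb(). On the default x86 (TSO) side the definitions
presumably take the same barrier()-only shape as the ia64 and sparc entries
further down, roughly:

/* default x86, no PPRO_FENCE: TSO already orders the accesses, so only
 * the compiler needs to be constrained.
 */
#define __smp_store_release(p, v)					\
do {									\
	compiletime_assert_atomic_type(*p);				\
	barrier();							\
	WRITE_ONCE(*p, v);						\
} while (0)

#define __smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = READ_ONCE(*p);				\
	compiletime_assert_atomic_type(*p);				\
	barrier();							\
	___p1;								\
})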
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...; /*
> * This is a barrier which prevents following instructions from being
> @@ -67,18 +59,18 @@
> #define data_barrier(x) \
> asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
>
> -#define smp_store_release(p, v) \
> +#define __smp_store_release(p, v) \
> do { \
> compiletime_assert_atomic_type(*p); \
> - smp_lwsync(); \
> + __smp_lwsync(); \
, therefore this will emit an lwsync no matter SMP or UP.
Another thing is that smp_lwsync() may have a third user (other than
smp_load_acquire() and smp_st...
2016 Jan 05
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...lized my mistake after rereading the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?
It isn't, because as far as I could tell it is not used
outside of smp_store_release and smp_load_acquire
in arch/powerpc/include/asm/barrier.h.
And these are now gone.
Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().
> > > > This reduces the amount of arch-specific boiler-plate code.
> > > >
> > > > Signed-off-by: Michael S. Tsirkin &...
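The layering described above ("only used for virt and on SMP", with generic UP
fallbacks) is the core of the series. A sketch of how asm-generic/barrier.h
consumes the arch-provided __smp_xxx definitions after the rename, assuming
the virt_ wrappers added in v2 of the series:

/* include/asm-generic/barrier.h (sketch) */
#ifdef CONFIG_SMP
#ifndef smp_load_acquire
#define smp_load_acquire(p)	__smp_load_acquire(p)
#endif
#else	/* !CONFIG_SMP: UP only needs a compiler barrier */
#ifndef smp_load_acquire
#define smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = READ_ONCE(*p);				\
	compiletime_assert_atomic_type(*p);				\
	barrier();							\
	___p1;								\
})
#endif
#endif

/* guests talking to an SMP host always need the SMP form */
#define virt_load_acquire(p)		__smp_load_acquire(p)
#define virt_store_release(p, v)	__smp_store_release(p, v)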
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
...s on this architecture.
Finally, the following patches put the __smp_XXX APIs to work for virt:
-. Patches 29-31 convert virtio and xen drivers to use the __smp_XXX APIs
xen patches are untested
virtio ones have been tested on x86
-. Patches 33-34 teach virtio to use
__smp_load_acquire/__smp_store_release/__smp_store_mb
This is what started all this work.
tested on x86
The patchset has been in linux-next for a bit, so far without issues.
Michael S. Tsirkin (34):
Documentation/memory-barriers.txt: document __smp_mb()
asm-generic: guard smp_store_release/load_acquire
ia64: rename nop->...
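Patches 33-34 teach virtio to use the acquire/release forms directly. The
snippet below is purely illustrative - the struct and helpers are invented
here, and it assumes the virt_ prefixed wrappers
(virt_store_release()/virt_load_acquire()) that v2 of the series adds on top
of the __smp_xxx primitives:

/* toy producer/consumer index handoff between a guest and an SMP host */
struct demo_ring {
	unsigned short entries[256];
	unsigned short idx;	/* written by producer, read by consumer */
};

static void demo_publish(struct demo_ring *r, unsigned short val)
{
	r->entries[r->idx & 255] = val;
	/* make the entry visible before the index that advertises it */
	virt_store_release(&r->idx, (unsigned short)(r->idx + 1));
}

static int demo_consume(struct demo_ring *r, unsigned short last,
			unsigned short *val)
{
	/* pairs with the release above: read the index before the entry */
	unsigned short idx = virt_load_acquire(&r->idx);

	if (idx == last)
		return 0;	/* nothing new */
	*val = r->entries[last & 255];
	return 1;
}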
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches
teaching it to warn about incorrect usage of barriers
(__smp_xxx barriers are for use by asm-generic code only),
should help prevent misuse by arch code
to address comments by Russell King
- patched more instances of xen to use virt_ barriers
as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation,
as suggested by David Miller
- dropped XXX in patch names as this makes vger choke, Cc all relevant
mailing lists on all patches (not personal email, as the list becomes
too long then)
I parked this in vhost tree for now, but the
2015 Dec 30
0
[PATCH 20/34] ia64: define __smp_XXX
...#define smp_mb__after_atomic() barrier()
+#define __smp_mb__before_atomic() barrier()
+#define __smp_mb__after_atomic() barrier()
/*
* IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq no
* need for asm trickery!
*/
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*...
2015 Dec 31
0
[PATCH v2 24/32] sparc: define __smp_xxx
...c/include/asm/barrier_64.h
@@ -37,14 +37,14 @@ do { __asm__ __volatile__("ba,pt %%xcc, 1f\n\t" \
#define rmb() __asm__ __volatile__("":::"memory")
#define wmb() __asm__ __volatile__("":::"memory")
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*...
2015 Dec 31
0
[PATCH v2 22/32] s390: define __smp_xxx
...b() mb()
-#define smp_rmb() rmb()
-#define smp_wmb() wmb()
-
-#define smp_store_release(p, v) \
+#define __smp_mb() mb()
+#define __smp_rmb() rmb()
+#define __smp_wmb() wmb()
+#define smp_mb() __smp_mb()
+#define smp_rmb() __smp_rmb()
+#define smp_wmb() __smp_wmb()
+
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p...
2016 Jan 06
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
>
> in definition of PPC's __atomic_op_release().
>
>
> But I think removing smp_lwsync() is a good idea and actually I think we
> can go further to remove __smp_lwsync() and let __smp_load_acquire and
> __smp_store_release call __lwsync() directly, but that is another thing.
>
> Anyway, I will modify my patch.
>
> Regards,
> Boqun
Thanks!
Could you send an ack then please?
> >
> > > > > > WRITE_ONCE(*p, v); \
> > > > > > } while (0)
> > >...
2015 Dec 31
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...c(SMPWMB) : : :"memory")
/*
* This is a barrier which prevents following instructions from being
@@ -67,18 +59,18 @@
#define data_barrier(x) \
asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_lwsync(); \
+ __smp_lwsync(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
com...
2016 Jan 05
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...er which prevents following instructions from being
> > @@ -67,18 +59,18 @@
> > #define data_barrier(x) \
> > asm volatile("twi 0,%0,0; isync" : : "r" (x) : "memory");
> >
> > -#define smp_store_release(p, v) \
> > +#define __smp_store_release(p, v) \
> > do { \
> > compiletime_assert_atomic_type(*p); \
> > - smp_lwsync(); \
> > + __smp_lwsync(); \
>
> , therefore this will emit an lwsync no matter SMP or UP.
Absolutely. But smp_store_release (without __) will not.
Please no...