Displaying 20 results from an estimated 32 matches for "__smp_load_acquire".
2020 Jul 02
2
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...t;: : :"memory")
> #define rmb() __asm__ __volatile__("mb": : :"memory")
> #define wmb() __asm__ __volatile__("wmb": : :"memory")
> -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p) \
> +({ \
> + __unqual_scalar_typeof(*p) ___p1 = \
> + (*(volatile typeof(___p1) *)(p)); \
> + compiletime_assert_atomic_type(*p); \
> + ___p1; \
> +})
Sorry if I'm being thick, but doesn't this need a barrier after the
volatile access to p...
2020 Jul 02
2
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...;will at kernel.org> wrote:
> On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> > On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > > +#define __smp_load_acquire(p) \
> > > +({ \
> > > + __unqual_scalar_typeof(*p) ___p1 = \
> > > + (*(volatile typeof(___p1) *)(p));...
2020 Jul 02
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > +#define __smp_load_acquire(p) \
> > +({ \
> > + __unqual_scalar_typeof(*p) ___p1 = \
> > + (*(volatile typeof(___p1) *)(p)); \
> > + compiletime_assert_atomic_type(*p); \
> > + ___p1; \
> > +})
>
> Sorry if I'm being thick, but doesn't this need...
2020 Jul 02
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...gt; wrote:
> > On Thu, Jul 02, 2020 at 10:32:39AM +0100, Mark Rutland wrote:
> > > On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > > > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > > > +#define __smp_load_acquire(p) \
> > > > +({ \
> > > > + __unqual_scalar_typeof(*p) ___p1 = \
> > > > + (*(volatile typeof(___p1) *)(p));...
2016 Jan 06
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...sync(), this makes it have another user,
please see this mail:
http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
in definition of PPC's __atomic_op_release().
But I think removing smp_lwsync() is a good idea and actually I think we
can go further to remove __smp_lwsync() and let __smp_load_acquire and
__smp_store_release call __lwsync() directly, but that is another thing.
Anyway, I will modify my patch.
Regards,
Boqun
>
> > > > > WRITE_ONCE(*p, v); \
> > > > > } while (0)
> > > > >
> > > > > -#define smp_load_acqui...
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...>
I think deleting smp_lwsync() is fine, though I need to change atomic
variants patches on PPC because of it ;-/
Regards,
Boqun
> > > WRITE_ONCE(*p, v); \
> > > } while (0)
> > >
> > > -#define smp_load_acquire(p) \
> > > +#define __smp_load_acquire(p) \
> > > ({ \
> > > typeof(*p) ___p1 = READ_ONCE(*p); \
> > > compiletime_assert_atomic_type(*p); \
> > > - smp_lwsync(); \
> > > + __smp_lwsync(); \
> > > ___p1; \
> > > })
> > >...
2016 Jan 06
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...please see this mail:
>
> http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
>
> in definition of PPC's __atomic_op_release().
>
>
> But I think removing smp_lwsync() is a good idea and actually I think we
> can go further to remove __smp_lwsync() and let __smp_load_acquire and
> __smp_store_release call __lwsync() directly, but that is another thing.
>
> Anyway, I will modify my patch.
>
> Regards,
> Boqun
Thanks!
Could you send an ack then please?
> >
> > > > > > WRITE_ONCE(*p, v); \
> > > > > >...
2016 Jan 10
0
[PATCH v3 27/41] x86: define __smp_xxx
...back to full barriers.
*/
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
___p1; \
})
#else /* regular x86 TSO memory ordering */
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v)...
2020 Jun 30
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...ad of "b". Therefore, on some CPUs, such
- * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
- * in cases like this where there are no data dependencies.
- */
-#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
+#define __smp_load_acquire(p) \
+({ \
+ __unqual_scalar_typeof(*p) ___p1 = \
+ (*(volatile typeof(___p1) *)(p)); \
+ compiletime_assert_atomic_type(*p); \
+ ___p1; \
+})
#ifdef CONFIG_SMP
#define __ASM_SMP_MB "\tmb\n"
diff --git a/arch/alpha/include/asm/rwonce.h b/arch/alpha/include...
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...mp_lwsync() get involved in this cleanup? If I understand you
correctly, this cleanup focuses on external API like smp_{r,w,}mb(),
while smp_lwsync() is internal to PPC.
Regards,
Boqun
> WRITE_ONCE(*p, v); \
> } while (0)
>
> -#define smp_load_acquire(p) \
> +#define __smp_load_acquire(p) \
> ({ \
> typeof(*p) ___p1 = READ_ONCE(*p); \
> compiletime_assert_atomic_type(*p); \
> - smp_lwsync(); \
> + __smp_lwsync(); \
> ___p1; \
> })
>
> --
> MST
>
> --
> To unsubscribe from this list: send the...
2016 Jan 05
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...eread the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?
It isn't because as far as I could tell it is not used
outside arch/powerpc/include/asm/barrier.h
smp_store_release and smp_load_acquire.
And these are now gone.
Instead there are __smp_store_release and __smp_load_acquire
which call __smp_lwsync.
These are only used for virt and on SMP.
UP variants are generic - they just call barrier().
> > > > This reduces the amount of arch-specific boiler-plate code.
> > > >
> > > > Signed-off-by: Michael S. Tsirkin <mst at redhat.com>...
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
...Hoping for some acks on this architecture.
Finally, the following patches put the __smp_XXX APIs to work for virt:
-. Patches 29-31 convert virtio and xen drivers to use the __smp_XXX APIs
xen patches are untested
virtio ones have been tested on x86
-. Patches 33-34 teach virtio to use
__smp_load_acquire/__smp_store_release/__smp_store_mb
This is what started all this work.
tested on x86
The patchset has been in linux-next for a bit, so far without issues.
Michael S. Tsirkin (34):
Documentation/memory-barriers.txt: document __smb_mb()
asm-generic: guard smp_store_release/load_acquire...
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation,
as suggested by David Miller
- dropped XXX in patch names as this makes vger choke, Cc all relevant
mailing lists on all patches (not personal email, as the list becomes
too long then)
I parked this in vhost tree for now, but the