Displaying 20 results from an estimated 45 matches for "___p1".
2020 Jul 02
2
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...: : :"memory")
> #define wmb() __asm__ __volatile__("wmb": : :"memory")
> -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> +#define __smp_load_acquire(p) \
> +({ \
> + __unqual_scalar_typeof(*p) ___p1 = \
> + (*(volatile typeof(___p1) *)(p)); \
> + compiletime_assert_atomic_type(*p); \
> + ___p1; \
> +})
Sorry if I'm being thick, but doesn't this need a barrier after the
volatile access to provide the acquire semantic?
IIUC prior to this commit alpha would h...
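[For context: the concern raised above is that a plain volatile load gives no ordering guarantee on Alpha, so acquire semantics need a full memory barrier after the read. A minimal sketch of the shape the review comment is asking for, assuming mb() expands to the Alpha "mb" instruction as in the hunk quoted above; this is a sketch, not the verbatim patch:

#define __smp_load_acquire(p)						\
({									\
	__unqual_scalar_typeof(*p) ___p1 =				\
		(*(volatile typeof(___p1) *)(p));			\
	compiletime_assert_atomic_type(*p);				\
	mb();	/* order the load before all later accesses */		\
	___p1;								\
})]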
2020 Jul 02
2
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...rrier_depends() __asm__ __volatile__("mb": : :"memory")
> > > +#define __smp_load_acquire(p) \
> > > +({ \
> > > + __unqual_scalar_typeof(*p) ___p1 = \
> > > + (*(volatile typeof(___p1) *)(p)); \
> > > + compiletime_assert_atomic_type(*p); \
> > > + ___p1; \
> > ...
2020 Jul 02
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...__asm__ __volatile__("mb": : :"memory")
> > > > +#define __smp_load_acquire(p) \
> > > > +({ \
> > > > + __unqual_scalar_typeof(*p) ___p1 = \
> > > > + (*(volatile typeof(___p1) *)(p)); \
> > > > + compiletime_assert_atomic_type(*p); \
> > > > + ___p1;...
2020 Jul 02
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...Mark Rutland wrote:
> On Tue, Jun 30, 2020 at 06:37:20PM +0100, Will Deacon wrote:
> > -#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > +#define __smp_load_acquire(p) \
> > +({ \
> > + __unqual_scalar_typeof(*p) ___p1 = \
> > + (*(volatile typeof(___p1) *)(p)); \
> > + compiletime_assert_atomic_type(*p); \
> > + ___p1; \
> > +})
>
> Sorry if I'm being thick, but doesn't this need a barrier after the
> volatile access to provide the acquire semantic?
>...
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...ectly, this cleanup focuses on external API like smp_{r,w,}mb(),
while smp_lwsync() is internal to PPC.
Regards,
Boqun
> WRITE_ONCE(*p, v); \
> } while (0)
>
> -#define smp_load_acquire(p) \
> +#define __smp_load_acquire(p) \
> ({ \
> typeof(*p) ___p1 = READ_ONCE(*p); \
> compiletime_assert_atomic_type(*p); \
> - smp_lwsync(); \
> + __smp_lwsync(); \
> ___p1; \
> })
>
> --
> MST
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the...
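[For readers who have not met the PPC primitive being renamed here: lwsync is PowerPC's lightweight sync, which orders every access pairing except a prior store against a later load, exactly enough for acquire/release. A sketch of the barrier underneath, assuming a 64-bit CPU where lwsync is implemented (older 32-bit parts would fall back to a full sync):

/* sketch only: the primitive __smp_load_acquire()/__smp_store_release() sit on */
#define __smp_lwsync()	__asm__ __volatile__ ("lwsync" : : : "memory")]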
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...patches on PPC because of it ;-/
Regards,
Boqun
> > > WRITE_ONCE(*p, v); \
> > > } while (0)
> > >
> > > -#define smp_load_acquire(p) \
> > > +#define __smp_load_acquire(p) \
> > > ({ \
> > > typeof(*p) ___p1 = READ_ONCE(*p); \
> > > compiletime_assert_atomic_type(*p); \
> > > - smp_lwsync(); \
> > > + __smp_lwsync(); \
> > > ___p1; \
> > > })
> > >
> > > --
> > > MST
> > >
> > > -...
2016 Jan 10
0
[PATCH v3 27/41] x86: define __smp_xxx
...(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
WRITE_ONCE(*p, v); \
} while (0)
-#define smp_load_acquire(p) \
+#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
- smp_mb(); \
+ __smp_mb(); \
___p1; \
})
#else /* regular x86 TSO memory ordering */
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
do { \
compiletime_assert_ato...
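[The snippet above is cut off right at the TSO branch. On regular x86, loads are not reordered against loads and stores are not reordered against stores, so acquire/release need only a compiler barrier; a sketch of how that branch plausibly continues, following the same s/smp_/__smp_/ pattern as the rest of the hunk (not the verbatim patch):

#define __smp_store_release(p, v)					\
do {									\
	compiletime_assert_atomic_type(*p);				\
	barrier();	/* compiler-only; TSO orders the rest */	\
	WRITE_ONCE(*p, v);						\
} while (0)

#define __smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = READ_ONCE(*p);				\
	compiletime_assert_atomic_type(*p);				\
	barrier();	/* compiler-only; TSO orders the rest */	\
	___p1;								\
})]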
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation,
as suggested by David Miller
- dropped XXX in patch names as this makes vger choke, Cc all relevant
mailing lists on all patches (not personal email, as the list becomes
too long then)
I parked this in vhost tree for now, but the
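[The virt_ wrappers mentioned above are thin aliases: a guest must order its accesses against the host even in a !SMP build, so it wants the __smp_ variants under a name that documents that intent. A sketch of the pattern, using the asm-generic names as described in the cover letter (illustrative, not an exhaustive wrapper list):

/* virt_ barriers alias the SMP variants unconditionally */
#define virt_mb()	__smp_mb()
#define virt_rmb()	__smp_rmb()
#define virt_wmb()	__smp_wmb()]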
2020 Jul 02
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...ine read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
> > +#define __smp_load_acquire(p) \
> > +({ \
> > + __unqual_scalar_typeof(*p) ___p1 = \
> > + (*(volatile typeof(___p1) *)(p)); \
> > + compiletime_assert_atomic_type(*p); \
> > + ___p1; \
> > +})...
2016 Jan 06
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...> WRITE_ONCE(*p, v); \
> > > > > } while (0)
> > > > >
> > > > > -#define smp_load_acquire(p) \
> > > > > +#define __smp_load_acquire(p) \
> > > > > ({ \
> > > > > typeof(*p) ___p1 = READ_ONCE(*p); \
> > > > > compiletime_assert_atomic_type(*p); \
> > > > > - smp_lwsync(); \
> > > > > + __smp_lwsync(); \
> > > > > ___p1; \
> > > > > })
> > > > >
> >...
2020 Jun 30
0
[PATCH 04/18] alpha: Override READ_ONCE() with barriered implementation
...y" could be set to 3 and "x" to 0. Use rmb()
- * in cases like this where there are no data dependencies.
- */
-#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
+#define __smp_load_acquire(p) \
+({ \
+ __unqual_scalar_typeof(*p) ___p1 = \
+ (*(volatile typeof(___p1) *)(p)); \
+ compiletime_assert_atomic_type(*p); \
+ ___p1; \
+})
#ifdef CONFIG_SMP
#define __ASM_SMP_MB "\tmb\n"
diff --git a/arch/alpha/include/asm/rwonce.h b/arch/alpha/include/asm/rwonce.h
new file mode 100644
index 000000000000..83a9...
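[The diff is truncated just as the new arch/alpha/include/asm/rwonce.h is created. Given the patch title, the override it introduces presumably follows the same pattern as the hunk above: a volatile load followed by a full barrier, so that address-dependent loads cannot be satisfied out of order on Alpha. A hedged sketch of such an override (not the verbatim file contents):

#define __READ_ONCE(x)							\
({									\
	__unqual_scalar_typeof(x) __x =					\
		(*(volatile typeof(__x) *)&(x));			\
	mb();	/* dependent loads must not pass this point */		\
	(typeof(x))__x;							\
})]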
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
This is really trying to cleanup some virt code, as suggested by Peter, who
said
> You could of course go fix that instead of mutilating things into
> sort-of functional state.
This work is needed for virtio, so it's probably easiest to
merge it through my tree - is this fine by everyone?
Arnd, if you agree, could you ack this please?
Note to arch maintainers: please don't
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches
teaching it to warn about incorrect usage of barriers
(__smp_xxx barriers are for use by asm-generic code only),
should help prevent misuse by arch code
to address comments by Russell King
- patched more instances of xen to use virt_ barriers
as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh