Displaying 8 results from an estimated 8 matches for "config_weak_ord".
2015 Dec 31
0
[PATCH v2 11/32] mips: reuse asm-generic/barrier.h
...do { } while(0)
-#define smp_read_barrier_depends() do { } while(0)
-
#ifdef CONFIG_CPU_HAS_SYNC
#define __sync() \
__asm__ __volatile__( \
@@ -87,8 +84,6 @@
#define wmb() fast_wmb()
#define rmb() fast_rmb()
-#define dma_wmb() fast_wmb()
-#define dma_rmb() fast_rmb()
#if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
# ifdef CONFIG_CPU_CAVIUM_OCTEON
@@ -112,9 +107,6 @@
#define __WEAK_LLSC_MB " \n"
#endif
-#define smp_store_mb(var, value) \
- do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
#define smp_llsc_mb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"...
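
The definitions removed above duplicate the generic fallbacks: asm-generic/barrier.h only supplies a barrier macro when the architecture header has not already defined its own, so mips can drop these copies and pick up the defaults. A minimal sketch of that fallback pattern follows (the shape of the generic header, not its exact upstream text; details vary by kernel version):

/* Sketch of the #ifndef-fallback pattern in include/asm-generic/barrier.h:
 * a default is provided only when the arch header did not define the
 * macro first, so arch copies identical to the default can be deleted. */
#ifndef dma_rmb
#define dma_rmb()	rmb()
#endif

#ifndef dma_wmb
#define dma_wmb()	wmb()
#endif

#ifndef smp_store_mb
#define smp_store_mb(var, value) \
	do { WRITE_ONCE(var, value); smp_mb(); } while (0)
#endif
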
2016 Jan 10
0
[PATCH v3 11/41] mips: reuse asm-generic/barrier.h
...do { } while(0)
-#define smp_read_barrier_depends() do { } while(0)
-
#ifdef CONFIG_CPU_HAS_SYNC
#define __sync() \
__asm__ __volatile__( \
@@ -87,8 +84,6 @@
#define wmb() fast_wmb()
#define rmb() fast_rmb()
-#define dma_wmb() fast_wmb()
-#define dma_rmb() fast_rmb()
#if defined(CONFIG_WEAK_ORDERING) && defined(CONFIG_SMP)
# ifdef CONFIG_CPU_CAVIUM_OCTEON
@@ -112,9 +107,6 @@
#define __WEAK_LLSC_MB " \n"
#endif
-#define smp_store_mb(var, value) \
- do { WRITE_ONCE(var, value); smp_mb(); } while (0)
-
#define smp_llsc_mb() __asm__ __volatile__(__WEAK_LLSC_MB : : :"...
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
This is really trying to clean up some virt code, as suggested by Peter, who
said:
> You could of course go fix that instead of mutilating things into
> sort-of functional state.
This work is needed for virtio, so it's probably easiest to
merge it through my tree - is this fine by everyone?
Arnd, if you agree, could you ack this please?
Note to arch maintainers: please don't
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended the checkpatch tests for barriers and added patches teaching it
  to warn about incorrect usage (the __smp_xxx barriers are for use by
  asm-generic code only); this should help prevent misuse by arch code,
  addressing comments by Russell King
- patched more instances of xen to use virt_ barriers
as suggested by Stefano Stabellini
- implemented a 2-byte xchg on sh
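
The virt_ wrappers and __smp_xxx barriers referred to in this changelog are tied together by a simple rule in the generic header: code that synchronizes with a hypervisor or another guest must order its accesses even on a !CONFIG_SMP kernel, because the peer may be running on another CPU, so the virt_ barriers always expand to the SMP flavour. A sketch of that mapping (inferred from the series description, not the exact upstream text):

/* virt_ barriers: for drivers synchronizing with a host or another guest.
 * They always use the SMP variants, since the peer may be on another CPU
 * even if this kernel is built without CONFIG_SMP. */
#define virt_mb()	__smp_mb()
#define virt_rmb()	__smp_rmb()
#define virt_wmb()	__smp_wmb()
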
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- added wrappers with a virt_ prefix for better code annotation,
  as suggested by David Miller
- dropped XXX in patch names as it makes vger choke; Cc'd all relevant
  mailing lists on all patches (not personal email addresses, as the Cc
  list would become too long otherwise)
I parked this in the vhost tree for now, but the
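
The annotation benefit of the virt_ prefix is that a reader can tell at a glance that a barrier orders accesses against the other side of a paravirtualized ring rather than against other CPUs of the local kernel. A purely hypothetical fragment in that style (the struct, fields, and helper name are invented for illustration only):

/* Hypothetical ring-publish helper; 'struct demo_ring' and its fields are
 * invented for illustration. virt_wmb() documents that the ordering is
 * against the host/peer, not merely other local CPUs. */
struct demo_ring {
	u32 desc_ready;
	u16 avail_idx;
};

static inline void demo_ring_publish(struct demo_ring *r, u16 idx)
{
	WRITE_ONCE(r->desc_ready, 1);
	virt_wmb();		/* descriptor contents visible before the index */
	WRITE_ONCE(r->avail_idx, idx);
}
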