search for: wmb

Displaying results from an estimated 433 matches for "wmb".

2012 Feb 08
18
[PATCH 0 of 4] Prune outdated/impossible preprocessor symbols, and update VIOAPIC emulation
Patch 1 removes CONFIG_SMP. Patch 2 removes the separate smp_{,r,w}mb()s as a result of patch 1. Patch 4 removes __ia64__ defines from the x86 arch tree. Patch 3 is related to patch 4 and changes the VIOAPIC to emulate version 0x20 as a performance gain; it precedes patch 4 so as to make the functional change clearer. Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
2016 Jan 12
7
[PATCH v2 0/3] x86: faster mb()+other barrier.h tweaks
...performance) from comment changes approved by Linus, from (so far unreviewed) comment change I came up with myself. Lightly tested on my system. Michael S. Tsirkin (3): x86: drop mfence in favor of lock+addl x86: drop a comment left over from X86_OOSTORE x86: tweak the comment about use of wmb for IO arch/x86/include/asm/barrier.h | 10 +++------- 1 file changed, 3 insertions(+), 7 deletions(-) -- MST
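The "drop mfence in favor of lock+addl" change in this series trades the dedicated fence instruction for a locked read-modify-write, which is also a full barrier on x86 but typically cheaper. A minimal sketch of the two flavors (the names mb_mfence/mb_lock_addl are made up for this example, not kernel API):

```c
#include <assert.h>

/* Illustrative sketch (not kernel code) of the trade-off in this series:
 * mb() as mfence vs. mb() as a locked add below the stack pointer. */
#if defined(__x86_64__)
static inline void mb_mfence(void)
{
    __asm__ __volatile__("mfence" ::: "memory");
}

/* A locked read-modify-write is a full barrier on x86 and is typically
 * cheaper than mfence; v3 of the series also adds the "cc" clobber,
 * since addl modifies the flags. */
static inline void mb_lock_addl(void)
{
    __asm__ __volatile__("lock; addl $0,-4(%%rsp)" ::: "memory", "cc");
}
#else
/* Non-x86 fallback so the sketch still compiles: compiler barrier only. */
static inline void mb_mfence(void)    { __asm__ __volatile__("" ::: "memory"); }
static inline void mb_lock_addl(void) { __asm__ __volatile__("" ::: "memory"); }
#endif
```

The addl targets -4(%rsp) (inside the red zone on x86-64) so it touches a cache line that is almost certainly already hot and exclusive, minimizing the cost of the locked operation.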
2008 Jun 10
1
[PATCH] xen: Use wmb instead of rmb in xen_evtchn_do_upcall().
This patch is a port of 534:77db69c38249 from linux-2.6.18-xen.hg. Use wmb instead of rmb to enforce ordering between the evtchn_upcall_pending and evtchn_pending_sel stores in xen_evtchn_do_upcall(). Cc: Samuel Thibault <samuel.thibault at eu.citrix.com> Signed-off-by: Isaku Yamahata <yamahata at valinux.co.jp> --- drivers/xen/events.c | 2 +- 1 files change...
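The fix above is about ordering two *stores*, which is exactly what a write barrier provides and a read barrier does not. A portable C11 model of the pattern (illustrative only, not the actual Xen code; a release fence stands in for wmb()):

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative C11 model (not the actual Xen code) of the ordering the
 * patch enforces: the store clearing evtchn_upcall_pending must become
 * visible before the store clearing evtchn_pending_sel.  A release
 * fence acts as a store-store barrier here, which is why wmb, not rmb,
 * is the right primitive between two stores. */
static _Atomic int evtchn_upcall_pending = 1;
static _Atomic int evtchn_pending_sel = 1;

static void upcall_prologue(void)
{
    atomic_store_explicit(&evtchn_upcall_pending, 0, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);  /* wmb(): order the two stores */
    atomic_store_explicit(&evtchn_pending_sel, 0, memory_order_relaxed);
}
```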
2016 Jan 13
6
[PATCH v3 0/4] x86: faster mb()+documentation tweaks
...self. Changes from v2: add patch adding cc clobber for addl tweak commit log for patch 2 use addl at SP-4 (as opposed to SP) to reduce data dependencies Michael S. Tsirkin (4): x86: add cc clobber for addl x86: drop a comment left over from X86_OOSTORE x86: tweak the comment about use of wmb for IO x86: drop mfence in favor of lock+addl arch/x86/include/asm/barrier.h | 20 +++++++++----------- 1 file changed, 9 insertions(+), 11 deletions(-) -- MST
2016 Jan 04
4
[PATCH 1/3] checkpatch.pl: add missing memory barriers
...it a/scripts/checkpatch.pl b/scripts/checkpatch.pl > index 2b3c228..0245bbe 100755 > --- a/scripts/checkpatch.pl > +++ b/scripts/checkpatch.pl > @@ -5116,7 +5116,14 @@ sub process { > } > } > # check for memory barriers without a comment. > - if ($line =~ /\b(mb|rmb|wmb|read_barrier_depends|smp_mb|smp_rmb|smp_wmb|smp_read_barrier_depends)\(/) { > + > + my @barriers = ('mb', 'rmb', 'wmb', 'read_barrier_depends'); > + my @smp_barriers = ('smp_store_release', 'smp_load_acquire', 'smp_store_mb'); >...
2011 Aug 23
1
Testing Specific Hypothesis
....Equity 91 -0.009786651 TGT.UN.Equity 92 -0.002091613 UNH.UN.Equity 93 -0.007545588 UNH.UN.Equity 94 0.018162619 UNH.UN.Equity 95 0.018460900 UNH.UN.Equity 96 0.002647658 UNH.UN.Equity 97 0.013203331 UNH.UN.Equity 98 -0.004009623 UNH.UN.Equity 99 0.009640957 WMB.UN.Equity 100 -0.016134343 WMB.UN.Equity 101 0.000669344 WMB.UN.Equity 102 -0.005685619 WMB.UN.Equity 103 0.017827111 WMB.UN.Equity 104 0.003304693 WMB.UN.Equity 105 -0.011198946 WMB.UN.Equity > aov(values~ind,data=otestme)Call: aov(formula = values ~ ind, data = otest...
2016 Jan 12
1
[PATCH 3/4] x86,asm: Re-work smp_store_mb()
...ry ordering, because for pure CPU > memory ordering stores and loads are already ordered. > > The only reason to use lfence/sfence is after you've used nontemporal > stores for IO. By the way, the comment in barrier.h says: /* * Some non-Intel clones support out of order store. wmb() ceases to be * a nop for these. */ and while the 1st sentence may well be true, if you have an SMP system with out of order stores, making wmb not a nop will not help. Additionally as you point out, wmb is not a nop even for regular intel CPUs because of these weird use-cases. Drop this comm...
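The "weird use-case" referred to above is non-temporal stores: even on x86 they are weakly ordered, so a real sfence is the one place where wmb() must not be a nop. A hedged sketch using SSE intrinsics (buf and flag are illustrative names, not from the thread):

```c
#include <assert.h>

#if defined(__x86_64__)
#include <emmintrin.h>  /* SSE2: _mm_stream_si32, _mm_sfence */
#endif

/* Sketch of the non-temporal-store case: movnt stores bypass the usual
 * x86 store ordering, so an sfence -- a wmb() that is not a nop -- must
 * drain them before the flag store can serve as a "data ready" signal. */
static int buf[16];
static volatile int flag;

static void publish_nontemporal(void)
{
    for (int i = 0; i < 16; i++)
#if defined(__x86_64__)
        _mm_stream_si32(&buf[i], i);  /* NT store: weakly ordered */
#else
        buf[i] = i;                   /* portable fallback for the sketch */
#endif
#if defined(__x86_64__)
    _mm_sfence();                     /* wmb(): drain NT stores first */
#endif
    flag = 1;                         /* safe to signal after the fence */
}
```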
2016 Jan 05
1
[PATCH v2 22/32] s390: define __smp_xxx
...el S. Tsirkin wrote: > > arch/s390/kernel/vdso.c: smp_mb(); > > Looking at > Author: Christian Borntraeger <borntraeger at de.ibm.com> > Date: Fri Sep 11 16:23:06 2015 +0200 > > s390/vdso: use correct memory barrier > > By definition smp_wmb only orders writes against writes. (Finish all > previous writes, and do not start any future write). To protect the > vdso init code against early reads on other CPUs, let's use a full > smp_mb at the end of vdso init. As right now smp_wmb is implemented > as fu...
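The quoted commit message makes a precise point: smp_wmb orders writes only against writes, so protecting init code against early *reads* on other CPUs needs a full barrier. A C11 model of that distinction (illustrative, not the s390 code; a seq_cst fence stands in for smp_mb):

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative C11 model of the argument quoted above: a write barrier
 * would order the two stores below, but only a full barrier also
 * constrains loads issued after it, which is what the vdso init path
 * needs.  atomic_thread_fence(memory_order_seq_cst) stands in for
 * smp_mb(). */
static _Atomic int vdso_datum;
static _Atomic int vdso_ready;

static void vdso_init_model(void)
{
    atomic_store_explicit(&vdso_datum, 42, memory_order_relaxed);
    /* smp_mb(): full fence; unlike a release (write) fence it also
     * orders the store above against any later loads */
    atomic_thread_fence(memory_order_seq_cst);
    atomic_store_explicit(&vdso_ready, 1, memory_order_relaxed);
}
```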
2016 Jan 04
2
[PATCH v2 22/32] s390: define __smp_xxx
On Thu, Dec 31, 2015 at 09:08:38PM +0200, Michael S. Tsirkin wrote: > This defines __smp_xxx barriers for s390, > for use by virtualization. > > Some smp_xxx barriers are removed as they are > defined correctly by asm-generic/barriers.h > > Note: smp_mb, smp_rmb and smp_wmb are defined as full barriers > unconditionally on this architecture. > > Signed-off-by: Michael S. Tsirkin <mst at redhat.com> > Acked-by: Arnd Bergmann <arnd at arndb.de> > --- > arch/s390/include/asm/barrier.h | 15 +++++++++------ > 1 file changed, 9 insertions...
2016 Jan 28
10
[PATCH v5 0/5] x86: faster smp_mb()+documentation tweaks
...tead. Changes from v2: add patch adding cc clobber for addl tweak commit log for patch 2 use addl at SP-4 (as opposed to SP) to reduce data dependencies Michael S. Tsirkin (5): x86: add cc clobber for addl x86: drop a comment left over from X86_OOSTORE x86: tweak the comment about use of wmb for IO x86: use mb() around clflush x86: drop mfence in favor of lock+addl arch/x86/include/asm/barrier.h | 21 ++++++++++++--------- arch/x86/kernel/process.c | 4 ++-- 2 files changed, 14 insertions(+), 11 deletions(-) -- MST
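The "use mb() around clflush" patch in this series reflects that clflush is only guaranteed to be ordered by mfence, not by ordinary stores to other cache lines. A minimal sketch of the bracketing (line[] is an illustrative buffer, not kernel code):

```c
#include <assert.h>

#if defined(__x86_64__)
#include <emmintrin.h>  /* SSE2: _mm_clflush, _mm_mfence */
#endif

/* Sketch of "mb() around clflush": the flush is bracketed with full
 * barriers because clflush is only ordered by mfence with respect to
 * accesses to other cache lines. */
static char line[64];

static void flush_line(void)
{
    line[0] = 1;        /* the store we want written back */
#if defined(__x86_64__)
    _mm_mfence();       /* mb(): order earlier accesses before the flush */
    _mm_clflush(line);  /* evict/write back the cache line */
    _mm_mfence();       /* mb(): order the flush before later accesses */
#endif
}
```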
2016 Jan 27
6
[PATCH v4 0/5] x86: faster smp_mb()+documentation tweaks
...tead. Changes from v2: add patch adding cc clobber for addl tweak commit log for patch 2 use addl at SP-4 (as opposed to SP) to reduce data dependencies Michael S. Tsirkin (5): x86: add cc clobber for addl x86: drop a comment left over from X86_OOSTORE x86: tweak the comment about use of wmb for IO x86: use mb() around clflush x86: drop mfence in favor of lock+addl arch/x86/include/asm/barrier.h | 17 ++++++++--------- arch/x86/kernel/process.c | 4 ++-- 2 files changed, 10 insertions(+), 11 deletions(-) -- MST
2016 Jan 28
0
[PATCH v5 3/5] x86: tweak the comment about use of wmb for IO
On x86, we *do* still use the non-nop rmb/wmb for IO barriers, but even that is generally questionable. Leave them around as historial unless somebody can point to a case where they care about the performance, but tweak the comment so people don't think they are strictly required in all cases. Signed-off-by: Michael S. Tsirkin <mst at...