search for: set_mb

Displaying 8 results for "set_mb".

2015 Apr 29
4
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote: > In the pv_scan_next() function, the slow cmpxchg atomic operation is > performed even if the other CPU is not even close to being halted. This > extra cmpxchg can harm slowpath performance. > > This patch introduces the new mayhalt flag to indicate if the other > spinning CPU is close to being halted or not. The
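As a rough sketch of the optimization being discussed (hypothetical stand-in code, not the actual pvqspinlock patch; pv_node, state, and mayhalt mirror names from the thread), the idea is to pay for the atomic compare-exchange only after a cheap flag read says the peer may actually halt:

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Hypothetical sketch of the pattern discussed above, not the kernel
     * code: read a cheap flag first and only fall back to the expensive
     * compare-exchange when the spinning vCPU signalled it may halt. */
    enum vcpu_state { VCPU_RUNNING, VCPU_HALTED };

    struct pv_node {
        _Atomic int state;     /* VCPU_RUNNING or VCPU_HALTED */
        _Atomic bool mayhalt;  /* set by the spinner shortly before halting */
    };

    static void scan_next_sketch(struct pv_node *pn)
    {
        /* Cheap read: if no halt was signalled, skip the cmpxchg entirely. */
        if (!atomic_load_explicit(&pn->mayhalt, memory_order_acquire))
            return;

        /* Expensive path: atomically claim the halted state so that the
         * halted vCPU is kicked at most once. */
        int expected = VCPU_HALTED;
        if (atomic_compare_exchange_strong(&pn->state, &expected,
                                           VCPU_RUNNING)) {
            /* kick the halted vCPU here (hypervisor-specific call) */
        }
    }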
2016 Jan 12
1
[PATCH v3 01/41] lcoking/barriers, arch: Use smp barriers in smp_store_release()
On Sun, Jan 10, 2016 at 04:16:32PM +0200, Michael S. Tsirkin wrote: > From: Davidlohr Bueso <dave at stgolabs.net> > > With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()") > it was made clear that the context of this call (and thus set_mb) > is strictly for CPU ordering, as opposed to IO. As such all archs > should use the smp variant of mb(), respecting the semantics and > saving a mandatory barrier on UP. > > Signed-off...
2015 Apr 29
0
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
...it goes beyond this particular patch. Patterns like this: xchg(&pn->mayhalt, true); are just evil and disgusting. Even before this patch, that code had (void)xchg(&pn->state, vcpu_halted); which is *wrong* and should never be done. If you want it to be "set_mb()" (which sets a value and has a memory barrier), then use set_mb(). Yes, it happens to use a "xchg()" to do so, but dammit, it documents that whole "this is a memory barrier" in the name. Also, anybody who does this should damn well document why the memory barrier is need...
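For context, the classic definitions set_mb() had at the time (the same shape appears in the UML header excerpt further down; shown as a sketch, not the authoritative header):

    /* set_mb(): store a value and imply a full memory barrier -- the
     * barrier is documented right in the name, which is the point of the
     * complaint above.  On SMP it rides on xchg(), which is implicitly a
     * full barrier; on UP a compiler barrier suffices. */
    #ifdef CONFIG_SMP
    #define set_mb(var, value) do { xchg(&var, value); } while (0)
    #else
    #define set_mb(var, value) do { var = value; barrier(); } while (0)
    #endif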
2016 Jan 10
0
[PATCH v3 01/41] lcoking/barriers, arch: Use smp barriers in smp_store_release()
From: Davidlohr Bueso <dave at stgolabs.net> With commit b92b8b35a2e ("locking/arch: Rename set_mb() to smp_store_mb()") it was made clear that the context of this call (and thus set_mb) is strictly for CPU ordering, as opposed to IO. As such all archs should use the smp variant of mb(), respecting the semantics and saving a mandatory barrier on UP. Signed-off-by: Davidlohr Bueso <dbues...
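A sketch of the shape of that change (asm-generic style, not the literal patch hunks; smp_store_mb() stores a value and then orders it against later accesses):

    /* Before (what the patch removes): a mandatory barrier, even on UP.
     *
     *   #define smp_store_mb(var, value) \
     *           do { WRITE_ONCE(var, value); mb(); } while (0)
     *
     * After: the smp variant, which the build collapses to a plain
     * compiler barrier on !CONFIG_SMP, saving the mandatory barrier. */
    #define smp_store_mb(var, value) \
            do { WRITE_ONCE(var, value); smp_mb(); } while (0)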
2007 Apr 18
1
[RFC] [PATCH] Split host arch headers for UML's benefit
...005-08-16 11:32:28.000000000 -0400
@@ -0,0 +1,190 @@
+#ifndef __SYSTEM_ABI__
+#define __SYSTEM_ABI__
+
+#include <asm/cpufeature.h>
+
+#ifdef CONFIG_SMP
+#define smp_mb() mb()
+#define smp_rmb() rmb()
+#define smp_wmb() wmb()
+#define smp_read_barrier_depends() read_barrier_depends()
+#define set_mb(var, value) do { xchg(&var, value); } while (0)
+#else
+#define smp_mb() barrier()
+#define smp_rmb() barrier()
+#define smp_wmb() barrier()
+#define smp_read_barrier_depends() do { } while(0)
+#define set_mb(var, value) do { var = value; barrier(); } while (0)
+#endif
+
+#define xchg(ptr,v) ((...
2012 Feb 08
18
[PATCH 0 of 4] Prune outdated/impossible preprocessor symbols, and update VIOAPIC emulation
Patch 1 removes CONFIG_SMP
Patch 2 removes separate smp_{,r,w}mb()s as a result of patch 1
Patch 4 removes __ia64__ defines from the x86 arch tree
Patch 3 is related to patch 4 and changes the VIOAPIC to emulate version 0x20 as a performance gain. It precedes Patch 4 so as to be more clear about the functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches teaching it to warn about incorrect usage of barriers (__smp_xxx barriers are for use by asm-generic code only); should help prevent misuse by arch code, to address comments by Russell King
- patched more instances of xen to use virt_ barriers as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
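The virt_ wrappers mentioned above always expand to the SMP forms, because a guest shares memory with a hypervisor even when the guest kernel itself is UP. Roughly, in the asm-generic style this series uses (a sketch; check include/asm-generic/barrier.h in the tree for the authoritative definitions):

    /* virt_* barriers: for guests talking to hypervisors.  They must be
     * real SMP barriers even in a !CONFIG_SMP guest, so they map to the
     * __smp_* forms instead of smp_*, which collapse to barrier() on UP. */
    #define virt_mb()  __smp_mb()
    #define virt_rmb() __smp_rmb()
    #define virt_wmb() __smp_wmb()
    #define virt_store_mb(var, value) __smp_store_mb(var, value)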
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation, as suggested by David Miller
- dropped XXX in patch names as this makes vger choke, Cc all relevant mailing lists on all patches (not personal email, as the list becomes too long then)
I parked this in vhost tree for now, but the