2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...> > > for use by virtualization.
> > >
> > > smp_xxx barriers are removed as they are
> > > defined correctly by asm-generic/barriers.h
>
> I think this is the part that was missed in review.
>
Yes, I realized my mistake after rereading the series. But smp_lwsync() is
not defined in asm-generic/barriers.h, right?
> > > This reduces the amount of arch-specific boiler-plate code.
> > >
> > > Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
> > > Acked-by: Arnd Bergmann <arnd at arndb.de>
> > > --...
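The question raised here, whether smp_lwsync() belongs in asm-generic/barrier.h, is easier to follow with the generic fallback pattern in view. A minimal sketch, assuming it mirrors the include/asm-generic/barrier.h layout this series relies on (simplified to two barriers): the generic header builds smp_mb() and friends on top of the arch-provided __smp_ variants, falling back to barrier() on !CONFIG_SMP, but it knows nothing about the powerpc-only smp_lwsync().

#ifdef CONFIG_SMP
#ifndef smp_mb
#define smp_mb()	__smp_mb()	/* arch-provided SMP barrier */
#endif
#ifndef smp_rmb
#define smp_rmb()	__smp_rmb()
#endif
#else	/* !CONFIG_SMP */
#ifndef smp_mb
#define smp_mb()	barrier()	/* a compiler barrier suffices on UP */
#endif
#ifndef smp_rmb
#define smp_rmb()	barrier()
#endif
#endif	/* CONFIG_SMP */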
2016 Jan 06
2
[PATCH v2 15/32] powerpc: define __smp_xxx
On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
[snip]
> > > > Another thing is that smp_lwsync() may have a third user (other than
> > > > smp_load_acquire() and smp_store_release()):
> > > >
> > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > >
> > > > I'm OK to change my patch accordingly, but do we re...
2016 Jan 05
2
[PATCH v2 15/32] powerpc: define __smp_xxx
...c 100644
> --- a/arch/powerpc/include/asm/barrier.h
> +++ b/arch/powerpc/include/asm/barrier.h
> @@ -44,19 +44,11 @@
> #define dma_rmb() __lwsync()
> #define dma_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
>
> -#ifdef CONFIG_SMP
> -#define smp_lwsync() __lwsync()
> +#define __smp_lwsync() __lwsync()
>
so __smp_lwsync() is always mapped to lwsync, right?
> -#define smp_mb() mb()
> -#define smp_rmb() __lwsync()
> -#define smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> -#else
> -#define s...
2016 Jan 06
0
[PATCH v2 15/32] powerpc: define __smp_xxx
On Wed, Jan 06, 2016 at 09:51:52AM +0800, Boqun Feng wrote:
> On Tue, Jan 05, 2016 at 06:16:48PM +0200, Michael S. Tsirkin wrote:
> [snip]
> > > > > Another thing is that smp_lwsync() may have a third user (other than
> > > > > smp_load_acquire() and smp_store_release()):
> > > > >
> > > > > http://article.gmane.org/gmane.linux.ports.ppc.embedded/89877
> > > > >
> > > > > I'm OK to change my patch...
2016 Jan 05
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...n.
> > > >
> > > > smp_xxx barriers are removed as they are
> > > > defined correctly by asm-generic/barriers.h
> >
> > I think this is the part that was missed in review.
> >
>
> Yes, I realized my mistake after rereading the series. But smp_lwsync() is
> not defined in asm-generic/barriers.h, right?
It isn't, because as far as I could tell it is not used outside of the
smp_store_release and smp_load_acquire definitions in
arch/powerpc/include/asm/barrier.h. And these are now gone.
Instead there are __smp_store_release and __smp_load_acquire
which call __...
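The "which call __..." truncation refers to the powerpc acquire/release helpers. A minimal sketch of what those look like, assuming they match arch/powerpc/include/asm/barrier.h after this series; these two macros are the remaining users of __smp_lwsync():

#define __smp_store_release(p, v)					\
do {									\
	compiletime_assert_atomic_type(*p);				\
	__smp_lwsync();	/* order earlier accesses before the store */	\
	WRITE_ONCE(*p, v);						\
} while (0)

#define __smp_load_acquire(p)						\
({									\
	typeof(*p) ___p1 = READ_ONCE(*p);				\
	compiletime_assert_atomic_type(*p);				\
	__smp_lwsync();	/* order the load before later accesses */	\
	___p1;								\
})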
2016 Jan 05
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...clude/asm/barrier.h
> > +++ b/arch/powerpc/include/asm/barrier.h
> > @@ -44,19 +44,11 @@
> > #define dma_rmb() __lwsync()
> > #define dma_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
> >
> > -#ifdef CONFIG_SMP
> > -#define smp_lwsync() __lwsync()
> > +#define __smp_lwsync() __lwsync()
> >
>
> so __smp_lwsync() is always mapped to lwsync, right?
Yes.
> > -#define smp_mb() mb()
> > -#define smp_rmb() __lwsync()
> > -#define smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"...
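To make the "Yes" above concrete, a minimal sketch of what __lwsync() amounts to on powerpc; the real definition is spelled via stringify_in_c() and may degrade to a full sync on cores without the lwsync instruction, so treat this as an assumption-laden simplification:

/* lwsync orders loads and stores against everything except a later
 * load passing an earlier store, which is why it suits rmb/wmb and
 * acquire/release but cannot implement a full smp_mb(). */
#define __lwsync()	__asm__ __volatile__ ("lwsync" : : : "memory")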
2015 Dec 31
0
[PATCH v2 15/32] powerpc: define __smp_xxx
...lude/asm/barrier.h
index 980ad0c..c0deafc 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -44,19 +44,11 @@
#define dma_rmb() __lwsync()
#define dma_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
-#ifdef CONFIG_SMP
-#define smp_lwsync() __lwsync()
+#define __smp_lwsync() __lwsync()
-#define smp_mb() mb()
-#define smp_rmb() __lwsync()
-#define smp_wmb() __asm__ __volatile__ (stringify_in_c(SMPWMB) : : :"memory")
-#else
-#define smp_lwsync() barrier()
-
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define...
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- added wrappers with virt_ prefix for better code annotation,
as suggested by David Miller
- dropped XXX in patch names as this makes vger choke; Cc'd all relevant
mailing lists on all patches (not personal emails, as the Cc list becomes
too long otherwise)
I parked this in vhost tree for now, but the
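The virt_ wrappers mentioned in this changelog are assumed to reduce unconditionally to the __smp_ variants, since a uniprocessor guest still has to order its accesses against the hypervisor and other VCPUs; a minimal sketch:

/* Sketch, assumed to mirror asm-generic/barrier.h after this series:
 * virt_ barriers always use the SMP flavour, even on !CONFIG_SMP. */
#define virt_mb()	__smp_mb()
#define virt_rmb()	__smp_rmb()
#define virt_wmb()	__smp_wmb()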
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
This is really trying to clean up some virt code, as suggested by Peter, who
said
> You could of course go fix that instead of mutilating things into
> sort-of functional state.
This work is needed for virtio, so it's probably easiest to
merge it through my tree - is this fine by everyone?
Arnd, if you agree, could you ack this please?
Note to arch maintainers: please don't
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches
teaching it to warn about incorrect usage of barriers
(__smp_xxx barriers are for use by asm-generic code only);
this should help prevent misuse by arch code and addresses
comments by Russell King
- patched more instances of xen to use virt_ barriers
as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
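To illustrate the usage split the v3 changelog enforces (__smp_xxx only inside asm-generic code, virt_ barriers in guest-visible driver code), here is a hypothetical vring-style notify path; the struct, its fields, and the function name are invented for the example:

#include <linux/io.h>		/* writel() */
#include <asm/barrier.h>	/* virt_wmb() */

struct my_vring {			/* hypothetical layout */
	u16 next_idx;
	u16 *avail_idx;			/* index slot shared with the device */
	u32 queue_index;
	void __iomem *notify_addr;	/* doorbell register */
};

static void my_vring_kick(struct my_vring *vr)
{
	*vr->avail_idx = vr->next_idx;	/* publish new descriptors */
	virt_wmb();			/* visible before the kick, even on UP guests */
	writel(vr->queue_index, vr->notify_addr);	/* kick the device */

	/* Using __smp_wmb() here instead would be flagged by the extended
	 * checkpatch rules: __smp_xxx is reserved for asm-generic code. */
}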