
Displaying 20 results from an estimated 5000 matches similar to: "[PATCH v3 02/38] virtio_ring: sparse warning fixup"

2020 Jul 10
1
[PATCH] virtio_ring: sparse warning fixup
virtio_store_mb was built with split ring in mind so it accepts __virtio16 arguments. Packed ring uses __le16 values, so sparse complains. It's just a store with some barriers so let's convert it to a macro, we don't lose too much type safety by doing that. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- include/linux/virtio_ring.h | 19 +++++++++---------- 1 file
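The diff itself is cut off in the excerpt above. A minimal sketch of the macro form it describes, built on the existing kernel primitives virt_store_mb(), WRITE_ONCE() and mb() (the exact upstream body may differ):

#define virtio_store_mb(weak_barriers, p, v)			\
do {								\
	if (weak_barriers) {					\
		/* store, then full SMP-style barrier */	\
		virt_store_mb(*p, v);				\
	} else {						\
		/* strong barriers: plain store plus mb() */	\
		WRITE_ONCE(*p, v);				\
		mb();						\
	}							\
} while (0)

Because a macro carries no fixed parameter type, callers can pass either a __virtio16 * (split ring) or an __le16 * (packed ring) without sparse flagging a type mismatch, which is the point of the conversion.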
2020 Aug 03
0
[PATCH v2 02/24] virtio_ring: sparse warning fixup
virtio_store_mb was built with split ring in mind so it accepts __virtio16 arguments. Packed ring uses __le16 values, so sparse complains. It's just a store with some barriers so let's convert it to a macro, we don't lose too much type safety by doing that. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- include/linux/virtio_ring.h | 19 +++++++++---------- 1 file
2015 Dec 31
0
[PATCH v2 32/32] virtio_ring: use virt_store_mb
We need a full barrier after writing out event index, using virt_store_mb there seems better than open-coding. As usual, we need a wrapper to account for strong barriers. It's tempting to use this in vhost as well, for that, we'll need a variant of smp_store_mb that works on __user pointers. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- include/linux/virtio_ring.h |
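For context, the wrapper referred to here distinguishes the weak-barrier case (ordinary virtio devices, where virt_store_mb() suffices) from devices needing strong barriers; a sketch consistent with the description, not the quoted hunk itself:

static inline void virtio_store_mb(bool weak_barriers,
				   __virtio16 *p, __virtio16 v)
{
	if (weak_barriers) {
		/* store the event index, then a full virt_* barrier */
		virt_store_mb(*p, v);
	} else {
		/* strong-barrier devices: plain store plus mandatory mb() */
		WRITE_ONCE(*p, v);
		mb();
	}
}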
2016 Jan 01
1
[PATCH v2 32/32] virtio_ring: use virt_store_mb
Hello. On 12/31/2015 10:09 PM, Michael S. Tsirkin wrote: > We need a full barrier after writing out event index, using > virt_store_mb there seems better than open-coding. As usual, we need a > wrapper to account for strong barriers. > > It's tempting to use this in vhost as well, for that, we'll > need a variant of smp_store_mb that works on __user pointers. > >
2015 Dec 17
2
[PATCH] virtio_ring: use smp_store_mb
On Thu, Dec 17, 2015 at 12:22:22PM +0100, Peter Zijlstra wrote: > On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote: > > Seems to give a speedup on my box but I'm less sure about this one. E.g. is xchng faster than mfence on all/most intel CPUs? Anyone has an opinion? > > Would help if you Cc people who would actually know this :-) Good point. Glad
2015 Dec 17
4
[PATCH] virtio_ring: use smp_store_mb
We need a full barrier after writing out event index, using smp_store_mb there seems better than open-coding. As usual, we need a wrapper to account for strong barriers/non smp. It's tempting to use this in vhost as well, for that, we'll need a variant of smp_store_mb that works on __user pointers. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- Seems to give a speedup
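At the call site, the change described amounts to folding an open-coded "store the event index, then issue a full barrier" sequence into one helper. Illustrative only; the variable and field names below are guesses, not quoted from the patch:

	/* before: open-coded store followed by a separate full barrier */
	vring_used_event(&vq->vring) = cpu_to_virtio16(vdev, last_used_idx);
	virtio_mb(vq->weak_barriers);

	/* after: one store-with-full-barrier helper */
	virtio_store_mb(vq->weak_barriers,
			&vring_used_event(&vq->vring),
			cpu_to_virtio16(vdev, last_used_idx));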
2018 Apr 19
4
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
virtio is using barriers to order memory accesses, thus dma_wmb/rmb is a good match. Build-tested on x86: Before [mst at tuck linux]$ size drivers/virtio/virtio_ring.o text data bss dec hex filename 11392 820 0 12212 2fb4 drivers/virtio/virtio_ring.o After [mst at tuck linux]$ size drivers/virtio/virtio_ring.o text data bss dec hex filename
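The excerpt truncates before the diff; what the commit message implies is a change to the strong-barrier (!weak_barriers) branch of the virtio barrier helpers, roughly (a sketch, not the actual hunk):

static inline void virtio_wmb(bool weak_barriers)
{
	if (weak_barriers)
		virt_wmb();
	else
		/* was wmb(); dma_wmb() still orders writes to the
		 * DMA-coherent ring and is cheaper on several arches */
		dma_wmb();
}

virtio_rmb() would change the same way, with rmb() becoming dma_rmb().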
2018 Apr 19
0
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
On 19/04/2018 19:35, Michael S. Tsirkin wrote: > virtio is using barriers to order memory accesses, thus > dma_wmb/rmb is a good match. > > Build-tested on x86: Before > > [mst at tuck linux]$ size drivers/virtio/virtio_ring.o > text data bss dec hex filename > 11392 820 0 12212 2fb4 drivers/virtio/virtio_ring.o > > After > mst
2015 Dec 17
2
[PATCH] virtio_ring: use smp_store_mb
On Thu, Dec 17, 2015 at 11:52:38AM +0100, Peter Zijlstra wrote: > On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote: > > +static inline void virtio_store_mb(bool weak_barriers, > > + __virtio16 *p, __virtio16 v) > > +{ > > +#ifdef CONFIG_SMP > > + if (weak_barriers) > > + smp_store_mb(*p, v); > > + else > > +#endif >
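The quoted hunk breaks off at the #endif; completing the function it sketches (the final strong-barrier leg is reconstructed from the surrounding discussion and is not visible in the excerpt):

static inline void virtio_store_mb(bool weak_barriers,
				   __virtio16 *p, __virtio16 v)
{
#ifdef CONFIG_SMP
	if (weak_barriers)
		smp_store_mb(*p, v);
	else
#endif
	{
		/* !SMP or strong barriers: plain store plus mandatory mb() */
		WRITE_ONCE(*p, v);
		mb();
	}
}

This #ifdef CONFIG_SMP construct is the part being quoted back for discussion in this subthread.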
2018 Apr 19
4
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
On Thu, Apr 19, 2018 at 07:39:21PM +0200, Paolo Bonzini wrote: > On 19/04/2018 19:35, Michael S. Tsirkin wrote: > > virtio is using barriers to order memory accesses, thus > > dma_wmb/rmb is a good match. > > > > Build-tested on x86: Before > > > > [mst at tuck linux]$ size drivers/virtio/virtio_ring.o > > text data bss dec hex
2015 Dec 17
3
[PATCH] virtio_ring: use smp_store_mb
On Thu, Dec 17, 2015 at 02:57:26PM +0100, Peter Zijlstra wrote: > On Thu, Dec 17, 2015 at 03:16:20PM +0200, Michael S. Tsirkin wrote: > > On Thu, Dec 17, 2015 at 11:52:38AM +0100, Peter Zijlstra wrote: > > > On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote: > > > > +static inline void virtio_store_mb(bool weak_barriers, > > > > +
2015 Dec 17
0
[PATCH] virtio_ring: use smp_store_mb
On Thu, Dec 17, 2015 at 03:16:20PM +0200, Michael S. Tsirkin wrote: > On Thu, Dec 17, 2015 at 11:52:38AM +0100, Peter Zijlstra wrote: > > On Thu, Dec 17, 2015 at 12:32:53PM +0200, Michael S. Tsirkin wrote: > > > +static inline void virtio_store_mb(bool weak_barriers, > > > + __virtio16 *p, __virtio16 v) > > > +{ > > > +#ifdef CONFIG_SMP > >
2015 Apr 08
0
[PATCH] virtio_ring: Update weak barriers to use dma_wmb/rmb
On Tue, Apr 07, 2015 at 05:47:42PM -0700, Alexander Duyck wrote: > This change makes it so that instead of using smp_wmb/rmb which varies > depending on the kernel configuration we can can use dma_wmb/rmb which for > most architectures should be equal to or slightly more strict than > smp_wmb/rmb. > > The advantage to this is that these barriers are available to uniprocessor
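In this earlier (2015) proposal it is the weak-barrier branch that moves from smp_* to dma_* barriers, which are described as equal to or slightly stricter on most architectures and, unlike the smp_* forms, remain real barriers on uniprocessor builds. A sketch against the helpers as they looked then (the actual diff is not shown in the excerpt):

static inline void virtio_wmb(bool weak_barriers)
{
	if (weak_barriers)
		/* was smp_wmb(); dma_wmb() stays a real barrier on !SMP */
		dma_wmb();
	else
		wmb();
}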