search for: virt_wmb

Displaying 20 results from an estimated 20 matches for "virt_wmb".

2015 Dec 31
0
[PATCH v2 34/34] xen/io: use virt_xxx barriers
...ace/io/ring.h index 7dc685b..21f4fbd 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -208,12 +208,12 @@ struct __name##_back_ring { \
 #define RING_PUSH_REQUESTS(_r) do { \
-    wmb(); /* back sees requests /before/ updated producer index */ \
+    virt_wmb(); /* back sees requests /before/ updated producer index */ \
     (_r)->sring->req_prod = (_r)->req_prod_pvt; \
 } while (0)

 #define RING_PUSH_RESPONSES(_r) do { \
-    wmb(); /* front sees responses /before/ updated producer index */ \
+    virt_wmb(); /* front sees responses /...
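The hunk above is the canonical publish half of a shared ring: fill the slot, write barrier, then expose the new producer index. A minimal userspace analogue of that ordering, using a C11 release fence in place of the kernel's virt_wmb() (the ring layout is a hypothetical stand-in, not Xen's shared ring struct):

#include <stdatomic.h>

#define RING_SIZE 16

struct ring {
	int slots[RING_SIZE];
	_Atomic unsigned int prod;	/* shared: read by the consumer */
	unsigned int prod_pvt;		/* producer-private count */
};

/* Fill the slot first, then publish the index.  The release fence
 * plays the virt_wmb() role: the consumer must never observe the
 * advanced index before it can observe the data behind it. */
static void ring_push(struct ring *r, int value)
{
	r->slots[r->prod_pvt % RING_SIZE] = value;
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&r->prod, ++r->prod_pvt, memory_order_relaxed);
}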
2018 Apr 19
4
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
...ux/virtio_ring.h
@@ -35,7 +35,7 @@ static inline void virtio_rmb(bool weak_barriers)
 	if (weak_barriers)
 		virt_rmb();
 	else
-		rmb();
+		dma_rmb();
 }

 static inline void virtio_wmb(bool weak_barriers)
@@ -43,7 +43,7 @@ static inline void virtio_wmb(bool weak_barriers)
 	if (weak_barriers)
 		virt_wmb();
 	else
-		wmb();
+		dma_wmb();
 }

 static inline void virtio_store_mb(bool weak_barriers,

-- 
MST
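The point of the switch is that weak_barriers chooses between two cheap barriers rather than falling back to the expensive full wmb(): virt_wmb() orders against a hypervisor (SMP-style), dma_wmb() against a real device reached by DMA. A userspace rendering of the patched helper, under the stated assumption that both flavours can be approximated by a C11 release fence:

#include <stdatomic.h>
#include <stdbool.h>

/* Stand-ins for the kernel macros; mapping both to a release fence
 * is an assumption made so the sketch compiles outside the kernel. */
static inline void virt_wmb(void) { atomic_thread_fence(memory_order_release); }
static inline void dma_wmb(void) { atomic_thread_fence(memory_order_release); }

/* Mirror of the helper after this patch: hypervisor-backed queues
 * take the SMP-style barrier, hardware queues the DMA barrier. */
static inline void virtio_wmb(bool weak_barriers)
{
	if (weak_barriers)
		virt_wmb();
	else
		dma_wmb();
}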
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- add wrappers with virt_ prefix for better code annotation, as suggested by David Miller
- dropped XXX in patch names as this makes vger choke; Cc all relevant mailing lists on all patches (not personal email, as the list becomes too long then)
I parked this in vhost tree for now, but the
2016 Jan 01
0
[PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_xxx
...eak_barriers)
 static inline void virtio_rmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_rmb();
+		virt_rmb();
 	else
 		rmb();
 }
@@ -41,26 +40,10 @@ static inline void virtio_rmb(bool weak_barriers)
 static inline void virtio_wmb(bool weak_barriers)
 {
 	if (weak_barriers)
-		smp_wmb();
+		virt_wmb();
 	else
 		wmb();
 }
-#else
-static inline void virtio_mb(bool weak_barriers)
-{
-	mb();
-}
-
-static inline void virtio_rmb(bool weak_barriers)
-{
-	rmb();
-}
-
-static inline void virtio_wmb(bool weak_barriers)
-{
-	wmb();
-}
-#endif

 struct virtio_device;
 struct virtqueue;

-- 
MST
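virtio_rmb() is the consumer half of the handshake: the producer index must be read before the data it guards. A sketch of that pairing, reusing the hypothetical struct ring from the first result above (a C11 acquire fence stands in for virt_rmb()):

#include <stdatomic.h>
#include <stdbool.h>

/* Consumer pairing for ring_push() above: snapshot the producer
 * index, acquire fence (the virt_rmb() role), only then read the
 * slot.  Returns false when the ring is empty. */
static bool ring_pop(struct ring *r, unsigned int *cons_pvt, int *out)
{
	unsigned int prod = atomic_load_explicit(&r->prod, memory_order_relaxed);

	if (prod == *cons_pvt)
		return false;			/* nothing published yet */
	atomic_thread_fence(memory_order_acquire);
	*out = r->slots[(*cons_pvt)++ % RING_SIZE];
	return true;
}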
2015 Dec 31
0
[PATCH v2 33/34] xenbus: use virt_xxx barriers
...@@ int xb_write(const void *data, unsigned len)
 			avail = len;

 		/* Must write data /after/ reading the consumer index. */
-		mb();
+		virt_mb();

 		memcpy(dst, data, avail);
 		data += avail;
 		len -= avail;

 		/* Other side must not see new producer until data is there. */
-		wmb();
+		virt_wmb();
 		intf->req_prod += avail;

 		/* Implies mb(): other side will see the updated producer. */
@@ -180,14 +180,14 @@ int xb_read(void *data, unsigned len)
 			avail = len;

 		/* Must read data /after/ reading the producer index. */
-		rmb();
+		virt_rmb();
 		memcpy(data, src, avail);...
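Worth spelling out why xb_write's first barrier is the full virt_mb() and not virt_wmb(): it orders a load (the consumer index, checked to find free space) against later stores (the memcpy), and a write barrier orders only stores against stores. A hedged C11 sketch of that shape (hypothetical ring, not the real xenstore interface struct; the caller is assumed to loop for data that did not fit):

#include <stdatomic.h>
#include <string.h>

#define XB_SIZE 1024

struct xb_ring {
	char buf[XB_SIZE];
	_Atomic unsigned int prod, cons;	/* free-running indices */
};

/* Returns how many bytes were written (0 when full). */
static unsigned int xb_put(struct xb_ring *r, const char *data,
			   unsigned int len)
{
	unsigned int prod = atomic_load_explicit(&r->prod, memory_order_relaxed);
	unsigned int cons = atomic_load_explicit(&r->cons, memory_order_relaxed);
	unsigned int off = prod % XB_SIZE;
	unsigned int avail = XB_SIZE - (prod - cons);	/* free bytes */

	if (avail > XB_SIZE - off)
		avail = XB_SIZE - off;		/* stop at the buffer end */
	if (avail > len)
		avail = len;
	if (avail == 0)
		return 0;

	/* Must write data /after/ reading the consumer index: only a
	 * full fence (the virt_mb() role) orders that load against the
	 * stores below. */
	atomic_thread_fence(memory_order_seq_cst);
	memcpy(&r->buf[off], data, avail);

	/* Other side must not see the new producer until the data is
	 * there (the virt_wmb() role). */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&r->prod, prod + avail, memory_order_relaxed);
	return avail;
}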
2016 Jan 20
0
[PATCH] tools/virtio: use virt_xxx barriers
...ile("" ::: "memory")
-#define mb() __sync_synchronize()
-
-#define smp_mb() mb()
-# define dma_rmb() barrier()
-# define dma_wmb() barrier()
-# define smp_rmb() barrier()
-# define smp_wmb() barrier()
+#define virt_mb() __sync_synchronize()
+#define virt_rmb() barrier()
+#define virt_wmb() barrier()
+/* Atomic store should be enough, but gcc generates worse code in that case. */
+#define virt_store_mb(var, value) do { \
+	typeof(var) virt_store_mb_value = (value); \
+	__atomic_exchange(&(var), &virt_store_mb_value, &virt_store_mb_value, \
+			  __ATOMIC_SEQ_CST); \
+	b...
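virt_store_mb() exists for the store-then-full-barrier idiom, whose classic use is the sleep/wake handshake: publish "notify me", full barrier, then re-check for work that may have raced in. A hypothetical userspace use, with C11 atomic_exchange playing the same store-plus-full-barrier trick as the __atomic_exchange above:

#include <stdatomic.h>
#include <stdbool.h>

static _Atomic int need_notify;		/* read by the other side */
static _Atomic int work_pending;	/* written by the other side */

/* Returns true when it is safe to sleep.  The exchange is a store
 * plus a full barrier in one step, so the re-check of work_pending
 * cannot be reordered before the publication of need_notify. */
static bool prepare_to_sleep(void)
{
	atomic_exchange(&need_notify, 1);	/* the virt_store_mb() role */
	return atomic_load(&work_pending) == 0;
}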
2018 Apr 19
0
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
...irtio_rmb(bool weak_barriers)
>  	if (weak_barriers)
>  		virt_rmb();
>  	else
> -		rmb();
> +		dma_rmb();
>  }
> 
>  static inline void virtio_wmb(bool weak_barriers)
> @@ -43,7 +43,7 @@ static inline void virtio_wmb(bool weak_barriers)
>  	if (weak_barriers)
>  		virt_wmb();
>  	else
> -		wmb();
> +		dma_wmb();
>  }
> 
>  static inline void virtio_store_mb(bool weak_barriers,
> 
2019 Jun 05
0
[PATCH 4/4] drm/virtio: Add memory barriers for capset cache.
...-583,6 +583,8 @@ static void virtio_gpu_cmd_capset_cb(struct virtio_gpu_device *vgdev,
 		    cache_ent->id == le32_to_cpu(cmd->capset_id)) {
 			memcpy(cache_ent->caps_cache, resp->capset_data,
 			       cache_ent->size);
+			/* Copy must occur before is_valid is signalled. */
+			virt_wmb();
 			atomic_set(&cache_ent->is_valid, 1);
 			break;
 		}

-- 
2.22.0.rc1.311.g5d7573a151-goog
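A virt_wmb() on the write side is only half the contract: the reader has to order its load of is_valid before its reads of caps_cache, which is the concern of patch 1/4 in this series. A hedged sketch of the reader-side pairing in C11 spelling (simplified entry layout, not the real driver struct):

#include <stdatomic.h>
#include <stddef.h>
#include <string.h>

struct cap_entry {
	char caps_cache[64];
	_Atomic int is_valid;
};

/* Only copy out after is_valid is observed set; the acquire fence
 * (the virt_rmb() role) keeps the caps_cache loads from being
 * speculated ahead of the flag check. */
static int read_caps(struct cap_entry *ent, char *out, size_t len)
{
	if (!atomic_load_explicit(&ent->is_valid, memory_order_relaxed))
		return -1;			/* callback not finished yet */
	atomic_thread_fence(memory_order_acquire);
	memcpy(out, ent->caps_cache,
	       len < sizeof(ent->caps_cache) ? len : sizeof(ent->caps_cache));
	return 0;
}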
2018 Apr 19
4
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
...iers)
> > 		virt_rmb();
> > 	else
> > -		rmb();
> > +		dma_rmb();
> > }
> > 
> > static inline void virtio_wmb(bool weak_barriers)
> > @@ -43,7 +43,7 @@ static inline void virtio_wmb(bool weak_barriers)
> > 	if (weak_barriers)
> > 		virt_wmb();
> > 	else
> > -		wmb();
> > +		dma_wmb();
> > }
> > 
> > static inline void virtio_store_mb(bool weak_barriers,
> > 
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches teaching it to warn about incorrect usage of barriers (__smp_xxx barriers are for use by asm-generic code only); this should help prevent misuse by arch code, addressing comments by Russell King
- patched more instances of xen to use virt_ barriers, as suggested by Stefano Stabellini
- implemented a 2 byte xchg on sh
2019 Jun 05
10
[PATCH 1/4] drm/virtio: Ensure cached capset entries are valid before copying.
From: David Riley <davidriley at chromium.org>

virtio_gpu_get_caps_ioctl could return success with invalid data if a second caller to the function raced in after the entry was created in virtio_gpu_cmd_get_capset but before the virtio_gpu_cmd_capset_cb callback ran. This could also leak memory contents, since the caps_cache allocation is done without zeroing.
2019 Nov 08
15
[PATCH 00/13] Finish off [smp_]read_barrier_depends()
Hi all,

Although [smp_]read_barrier_depends() became part of READ_ONCE() in commit 76ebbe78f739 ("locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()"), it still limps on in the Linux memory model with the sinister hope of attracting innocent new users so that it becomes impossible to remove altogether. Let's strike before it's too late: there's only
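The dependency being discussed is the pointer-chase idiom: load a pointer, then load through it. Since the cited commit, the barrier Alpha needs lives inside READ_ONCE() itself, so callers never spell read_barrier_depends() by hand. A sketch of the idiom in portable C11; since C11 has no usable dependency ordering in practice (memory_order_consume is promoted to acquire by compilers), acquire stands in as the strictly stronger choice:

#include <stdatomic.h>

struct foo { int data; };
static struct foo *_Atomic gp;

/* Publisher: initialise the object, then release-store the pointer. */
static void publish(struct foo *p)
{
	p->data = 42;
	atomic_store_explicit(&gp, p, memory_order_release);
}

/* Consumer: the kernel relies on the address dependency from the
 * READ_ONCE()-style pointer load to p->data; here the acquire load
 * provides that ordering. */
static int consume_foo(void)
{
	struct foo *p = atomic_load_explicit(&gp, memory_order_acquire);
	return p ? p->data : -1;
}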
2020 Jul 10
24
[PATCH 00/18] Allow architectures to override __READ_ONCE()
Hi all,

This is version three of the patches I previously posted here:

  v1: https://lore.kernel.org/lkml/20191108170120.22331-1-will at kernel.org/
  v2: https://lore.kernel.org/r/20200630173734.14057-1-will at kernel.org

Changes since v2 include:

  * Actually add the barrier in READ_ONCE() for Alpha!
  * Implement Alpha's smp_load_acquire() using __READ_ONCE(), rather than the other
2020 Jun 30
32
[PATCH 00/18] Allow architectures to override __READ_ONCE()
Hi everyone,

This is the long-awaited version two of the patches I previously posted in November last year:

  https://lore.kernel.org/lkml/20191108170120.22331-1-will at kernel.org/

I ended up parking the series while the READ_ONCE() implementation was being overhauled, but with that merged during the recent merge window and LTO patches being posted again [1], it was time for a refresh. The