search for: virt_rmb

Displaying 20 results from an estimated 30 matches for "virt_rmb".

2018 Apr 19
4
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
...nged, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h index bbf3252..fab0213 100644 --- a/include/linux/virtio_ring.h +++ b/include/linux/virtio_ring.h @@ -35,7 +35,7 @@ static inline void virtio_rmb(bool weak_barriers) if (weak_barriers) virt_rmb(); else - rmb(); + dma_rmb(); } static inline void virtio_wmb(bool weak_barriers) @@ -43,7 +43,7 @@ static inline void virtio_wmb(bool weak_barriers) if (weak_barriers) virt_wmb(); else - wmb(); + dma_wmb(); } static inline void virtio_store_mb(bool weak_barriers, -- MST
2016 Jan 10
0
[PATCH v3 39/41] xen/events: use virt_xxx barriers
drivers/xen/events/events_fifo.c uses rmb() to communicate with the other side. For guests compiled with CONFIG_SMP, smp_rmb would be sufficient, so rmb() here is only needed if a non-SMP guest runs on an SMP host. Switch to the virt_rmb barrier which serves this exact purpose. Pull in asm/barrier.h here to make sure the file is self-contained. Suggested-by: David Vrabel <david.vrabel at citrix.com> Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- drivers/xen/events/events_fifo.c | 3 ++- 1 file changed, 2 i...
2017 Feb 14
2
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
...ould ensure that other reads from the TSC page are completed before the > second read of the sequence counter. I am working with the Windows team to correctly > reflect this algorithm in the Hyper-V specification. Thank you, do I get it right that combining the above I only need to replace virt_rmb() barriers with plain rmb() to get 'lfence' in hv_read_tsc_page (PATCH 2)? As members of struct ms_hyperv_tsc_page are volatile we don't need READ_ONCE(), compilers are not allowed to merge accesses. The resulting code looks good to me: (gdb) disassemble read_hv_clock_tsc Dump of asse...
2019 Jun 05
10
[PATCH 1/4] drm/virtio: Ensure cached capset entries are valid before copying.
From: David Riley <davidriley at chromium.org> virtio_gpu_get_caps_ioctl could return success with invalid data if a second caller to the function occurred after the entry was created in virtio_gpu_cmd_get_capset but prior to the virtio_gpu_cmd_capset_cb callback being called. This could leak contents of memory as well since the caps_cache allocation is done without zeroing.
2017 Feb 14
6
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
Hi, while we're still waiting for a definitive ACK from Microsoft that the algorithm is good for SMP case (as we can't prevent the code in vdso from migrating between CPUs) I'd like to send v2 with some modifications to keep the discussion going. Changes since v1: - Document the TSC page reading protocol [Thomas Gleixner]. - Separate the TSC page reading code from
2016 Jan 01
0
[PATCH v2 30/32] virtio_ring: update weak barriers to use __smp_xxx
...-#ifdef CONFIG_SMP static inline void virtio_mb(bool weak_barriers) { if (weak_barriers) - smp_mb(); + virt_mb(); else mb(); } @@ -33,7 +32,7 @@ static inline void virtio_mb(bool weak_barriers) static inline void virtio_rmb(bool weak_barriers) { if (weak_barriers) - smp_rmb(); + virt_rmb(); else rmb(); } @@ -41,26 +40,10 @@ static inline void virtio_rmb(bool weak_barriers) static inline void virtio_wmb(bool weak_barriers) { if (weak_barriers) - smp_wmb(); + virt_wmb(); else wmb(); } -#else -static inline void virtio_mb(bool weak_barriers) -{ - mb(); -} - -static i...
2015 Dec 31
0
[PATCH v2 33/34] xenbus: use virt_xxx barriers
...until data is there. */ - wmb(); + virt_wmb(); intf->req_prod += avail; /* Implies mb(): other side will see the updated producer. */ @@ -180,14 +180,14 @@ int xb_read(void *data, unsigned len) avail = len; /* Must read data /after/ reading the producer index. */ - rmb(); + virt_rmb(); memcpy(data, src, avail); data += avail; len -= avail; /* Other side must not see free space until we've copied out */ - mb(); + virt_mb(); intf->rsp_cons += avail; pr_debug("Finished read of %i bytes (%i to go)\n", avail, len); -- MST
2016 Jan 20
0
[PATCH] tools/virtio: use virt_xxx barriers
...) #define barrier() asm volatile("" ::: "memory") -#define mb() __sync_synchronize() - -#define smp_mb() mb() -# define dma_rmb() barrier() -# define dma_wmb() barrier() -# define smp_rmb() barrier() -# define smp_wmb() barrier() +#define virt_mb() __sync_synchronize() +#define virt_rmb() barrier() +#define virt_wmb() barrier() +/* Atomic store should be enough, but gcc generates worse code in that case. */ +#define virt_store_mb(var, value) do { \ + typeof(var) virt_store_mb_value = (value); \ + __atomic_exchange(&(var), &virt_store_mb_value, &virt_store_mb_value, \...
2018 Apr 19
0
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
...t; > diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h > index bbf3252..fab0213 100644 > --- a/include/linux/virtio_ring.h > +++ b/include/linux/virtio_ring.h > @@ -35,7 +35,7 @@ static inline void virtio_rmb(bool weak_barriers) > if (weak_barriers) > virt_rmb(); > else > - rmb(); > + dma_rmb(); > } > > static inline void virtio_wmb(bool weak_barriers) > @@ -43,7 +43,7 @@ static inline void virtio_wmb(bool weak_barriers) > if (weak_barriers) > virt_wmb(); > else > - wmb(); > + dma_wmb(); > } >...
2019 Jun 05
0
[PATCH 4/4] drm/virtio: Add memory barriers for capset cache.
...7a3c5..502f5f7c2298 100644 --- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c +++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c @@ -542,6 +542,9 @@ static int virtio_gpu_get_caps_ioctl(struct drm_device *dev, if (!ret) return -EBUSY; + /* is_valid check must precede copy of the cache entry. */ + virt_rmb(); + ptr = cache_ent->caps_cache; if (copy_to_user((void __user *)(unsigned long)args->addr, ptr, size)) diff --git a/drivers/gpu/drm/virtio/virtgpu_vq.c b/drivers/gpu/drm/virtio/virtgpu_vq.c index dd5ead2541c2..b974eba4fe7d 100644 --- a/drivers/gpu/drm/virtio/virtgpu_vq.c +++ b/drivers...
2020 Feb 12
5
[PATCH 0/5] x86/vmware: Steal time accounting support
Hello, This patchset introduces steal time accounting support for the VMware guest. The idea and implementation of guest steal time support is similar to KVM ones and it is based on steal clock. The steal clock is a per CPU structure in a shared memory between hypervisor and guest, initialized by each CPU through hypercall. Steal clock is got updated by the hypervisor and read by the guest. The
2017 Feb 14
0
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
...he TSC page are completed before the >> second read of the sequence counter. I am working with the Windows team to correctly >> reflect this algorithm in the Hyper-V specification. > > > Thank you, > > do I get it right that combining the above I only need to replace > virt_rmb() barriers with plain rmb() to get 'lfence' in hv_read_tsc_page > (PATCH 2)? As members of struct ms_hyperv_tsc_page are volatile we don't > need READ_ONCE(), compilers are not allowed to merge accesses. The > resulting code looks good to me: No, on multiple counts, unfortunat...
2018 Apr 19
4
[PATCH] virtio_ring: switch to dma_XX barriers for rpmsg
...linux/virtio_ring.h b/include/linux/virtio_ring.h > > index bbf3252..fab0213 100644 > > --- a/include/linux/virtio_ring.h > > +++ b/include/linux/virtio_ring.h > > @@ -35,7 +35,7 @@ static inline void virtio_rmb(bool weak_barriers) > > if (weak_barriers) > > virt_rmb(); > > else > > - rmb(); > > + dma_rmb(); > > } > > > > static inline void virtio_wmb(bool weak_barriers) > > @@ -43,7 +43,7 @@ static inline void virtio_wmb(bool weak_barriers) > > if (weak_barriers) > > virt_wmb(); > > els...
2017 Feb 15
2
[PATCH v2 0/3] x86/vdso: Add Hyper-V TSC page clocksource support
...the >>> second read of the sequence counter. I am working with the Windows team to correctly >>> reflect this algorithm in the Hyper-V specification. >> >> >> Thank you, >> >> do I get it right that combining the above I only need to replace >> virt_rmb() barriers with plain rmb() to get 'lfence' in hv_read_tsc_page >> (PATCH 2)? As members of struct ms_hyperv_tsc_page are volatile we don't >> need READ_ONCE(), compilers are not allowed to merge accesses. The >> resulting code looks good to me: > > No, on multip...