Messages similar to: [PATCH net-next 12/12] tools/virtio: fix smp_mb on x86


2018 Jan 26
0
[PATCH net-next 12/12] tools/virtio: fix smp_mb on x86
On 2018/01/26 07:36, Michael S. Tsirkin wrote: > Offset 128 overlaps the last word of the redzone. > Use 132 which is always beyond that. > > Signed-off-by: Michael S. Tsirkin <mst at redhat.com> > --- > tools/virtio/ringtest/main.h | 2 +- > 1 file changed, 1 insertion(+), 1 deletion(-) > > diff --git a/tools/virtio/ringtest/main.h
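
For reference, the barrier in question is a lock-prefixed dummy add below the stack pointer. A minimal sketch of the idea (not the exact main.h macro): on x86-64 the 128 bytes below %rsp are the red zone, which the compiler may use for live data, so offset -128 still touches its last word while -132 is always beyond it.

	/* Sketch of a full memory barrier built from a locked no-op add.
	 * -128(%rsp) would still overlap the last word of the x86-64 red
	 * zone; -132(%rsp) is always below it, so no red-zone data can be
	 * clobbered.
	 */
	#define smp_mb() \
		asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
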
2017 Oct 27
1
[PATCH v6] x86: use lock+addl for smp_mb()
mfence appears to be way slower than a locked instruction - let's use lock+add unconditionally, as we always did on old 32-bit. Results: perf stat -r 10 -- ./virtio_ring_0_9 --sleep --host-affinity 0 --guest-affinity 0 Before: 0.922565990 seconds time elapsed ( +- 1.15% ) After: 0.578667024 seconds time elapsed
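
A rough way to reproduce the comparison outside the ringtest harness (entirely hypothetical code, not part of the patch): time a large number of back-to-back barriers implemented with mfence versus a locked add and compare the totals, much as the perf stat numbers above do for the ring benchmark.

	#include <stdio.h>
	#include <time.h>

	#define N 100000000UL

	static void mb_mfence(void)
	{
		asm volatile("mfence" ::: "memory");
	}

	static void mb_lock_add(void)
	{
		asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc");
	}

	/* Time N back-to-back barriers issued through a function pointer. */
	static double bench(void (*barrier)(void))
	{
		struct timespec a, b;

		clock_gettime(CLOCK_MONOTONIC, &a);
		for (unsigned long i = 0; i < N; i++)
			barrier();
		clock_gettime(CLOCK_MONOTONIC, &b);
		return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
	}

	int main(void)
	{
		printf("mfence:    %.3f s\n", bench(mb_mfence));
		printf("lock;addl: %.3f s\n", bench(mb_lock_add));
		return 0;
	}
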
2016 Jan 21
1
[PATCH] tools/virtio: add ringtest utilities
This adds micro-benchmarks useful for tuning virtio ring layouts. Three layouts are currently implemented: - virtio 0.9 compatible one - an experimental extension bypassing the ring index, polling the ring itself instead - an experimental extension bypassing avail and used ring completely Typical use: sh run-on-all.sh perf stat -r 10 --log-fd 1 -- ./ring It doesn't depend on the kernel
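
The --host-affinity/--guest-affinity options pin the two sides of the benchmark to fixed CPUs so results are comparable across runs. A hypothetical sketch of how such pinning is typically done with pthreads (illustrative only, not the actual main.c code):

	#define _GNU_SOURCE
	#include <pthread.h>
	#include <sched.h>

	/* Pin 'thread' to a single CPU so producer and consumer stay put. */
	static int pin_to_cpu(pthread_t thread, int cpu)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		return pthread_setaffinity_np(thread, sizeof(set), &set);
	}
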
2016 Jan 12
5
[PATCH 3/4] x86,asm: Re-work smp_store_mb()
On Tue, Jan 12, 2016 at 12:54 PM, Linus Torvalds <torvalds at linux-foundation.org> wrote: > On Tue, Jan 12, 2016 at 12:30 PM, Andy Lutomirski <luto at kernel.org> wrote: >> >> I recall reading somewhere that lock addl $0, 32(%rsp) or so (maybe even 64) >> was better because it avoided stomping on very-likely-to-be-hot write >> buffers. > > I suspect it
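
For context, one way to get a store followed by a full barrier on x86 without any explicit fence or dummy add at all is an atomic exchange, since implicitly locked read-modify-write instructions are fully ordered. A minimal sketch of that idea, not necessarily the exact macro the series ends up with:

	/* Store 'val' into 'var' and act as a full memory barrier: the
	 * exchange is an implicitly locked RMW, so it orders everything
	 * before and after it; the old value is simply discarded.
	 */
	#define smp_store_mb(var, val)						\
		do {								\
			(void)__atomic_exchange_n(&(var), (val),		\
						  __ATOMIC_SEQ_CST);		\
		} while (0)
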
2017 Oct 26
3
[PATCH] virtio/ringtest: fix up need_event math
last kicked event index must be updated unconditionally: even if we don't need to kick, we do not want to re-check the same entry for events. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- tools/virtio/ringtest/ring.c | 24 +++++++++++++++--------- 1 file changed, 15 insertions(+), 9 deletions(-) diff --git a/tools/virtio/ringtest/ring.c b/tools/virtio/ringtest/ring.c index
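
The check being fixed follows the standard virtio event-index convention. A sketch of the helper and of the pattern described in the commit message, with illustrative names rather than the actual ringtest code:

	#include <stdint.h>
	#include <stdbool.h>

	void notify_host(void);	/* hypothetical kick primitive (e.g. an eventfd write) */

	/* Standard virtio test: kick iff new_idx is the first index to pass
	 * event_idx since old_idx, all in modulo-2^16 arithmetic.
	 */
	static inline bool vring_need_event(uint16_t event_idx, uint16_t new_idx,
					    uint16_t old_idx)
	{
		return (uint16_t)(new_idx - event_idx - 1) <
		       (uint16_t)(new_idx - old_idx);
	}

	static uint16_t kicked_idx;	/* last index the event check was done against */

	static void maybe_kick(uint16_t event_idx, uint16_t new_idx)
	{
		uint16_t old = kicked_idx;

		kicked_idx = new_idx;	/* updated unconditionally, as the fix requires */
		if (vring_need_event(event_idx, new_idx, old))
			notify_host();
	}
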
2016 Jan 27
6
[PATCH v4 0/5] x86: faster smp_mb()+documentation tweaks
mb() typically uses mfence on modern x86, but a micro-benchmark shows that it's 2 to 3 times slower than the lock; addl that we use on older CPUs. So we really should use the locked variant everywhere, except that the Intel manual says that clflush is only ordered by mfence, so we can't. Note: some callers of clflush seem to assume sfence will order it, so there could be existing bugs around
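
A hedged sketch of the clflush caveat mentioned above: per the manual text the cover letter cites, only mfence is guaranteed to order clflush, so a flush that must become globally visible needs an explicit mfence rather than sfence or a locked instruction (function names here are illustrative):

	/* Flush one cache line; by itself this gives no ordering guarantee. */
	static inline void clflush(volatile void *p)
	{
		asm volatile("clflush %0" : "+m" (*(volatile char *)p));
	}

	static inline void flush_and_fence(void *p)
	{
		clflush(p);
		asm volatile("mfence" ::: "memory");	/* the only fence documented to order clflush */
	}
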
2016 Jan 28
10
[PATCH v5 0/5] x86: faster smp_mb()+documentation tweaks
mb() typically uses mfence on modern x86, but a micro-benchmark shows that it's 2 to 3 times slower than the lock; addl that we use on older CPUs. So we really should use the locked variant everywhere, except that the Intel manual says that clflush is only ordered by mfence, so we can't. Note: some callers of clflush seem to assume sfence will order it, so there could be existing bugs around
2016 Aug 22
5
[PATCH] CodingStyle: add some more error handling guidelines
commit ea04036032edda6f771c1381d03832d2ed0f6c31 ("CodingStyle: add some more error handling guidelines") suggests never naming goto labels after the goto location, i.e. after the error that is handled. But it's actually pretty common and IMHO it's a reasonable style, provided each error gets its own label and each label comes after the matching cleanup: foo
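
A hypothetical example of the label style being defended, with made-up names: every failure site gets its own label, and each label is placed immediately after the cleanup of the resource it is named for, so jumping to err_X skips X's own cleanup and falls through the cleanups for everything acquired earlier:

	#include <stdlib.h>
	#include <errno.h>

	struct ctx {
		void *foo;
		void *bar;
	};

	static int start_io(struct ctx *c)
	{
		(void)c;
		return 0;	/* hypothetical third setup step */
	}

	int ctx_init(struct ctx *c)
	{
		int ret = -ENOMEM;

		c->foo = malloc(64);
		if (!c->foo)
			goto err_foo;

		c->bar = malloc(64);
		if (!c->bar)
			goto err_bar;

		if (start_io(c) < 0)
			goto err_io;

		return 0;

	err_io:
		free(c->bar);
	err_bar:
		free(c->foo);
	err_foo:
		return ret;
	}
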
2018 Jan 25
0
[PATCH net-next 11/12] tools/virtio: copy READ/WRITE_ONCE
This is to make ptr_ring test build again. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- tools/virtio/ringtest/main.h | 57 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 57 insertions(+) diff --git a/tools/virtio/ringtest/main.h b/tools/virtio/ringtest/main.h index 5706e07..593a328 100644 --- a/tools/virtio/ringtest/main.h +++ b/tools/virtio/ringtest/main.h @@
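
The core idea behind the copied macros is simply to force each access through a volatile-qualified lvalue so the compiler cannot tear, merge, or re-issue it. A simplified sketch of that trick (the kernel's real header also dispatches on access size):

	/* Each use emits exactly one load/store of the full object. */
	#define READ_ONCE(x)							\
		(*(const volatile __typeof__(x) *)&(x))

	#define WRITE_ONCE(x, val)						\
		do {								\
			*(volatile __typeof__(x) *)&(x) = (val);		\
		} while (0)
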
2017 Jan 16
7
[PULL 0/5] virtio/s390 patches for -next
Michael, the following patches have all been posted in the past. I've collected them on top of your vhost branch -- please let me know whether this works for you. The following changes since commit 6bdf1e0efb04a1716373646cb6f35b73addca492: Makefile: drop -D__CHECK_ENDIAN__ from cflags (2016-12-16 00:13:43 +0200) are available in the git repository at:
2017 Oct 27
1
[PATCH] virtio/ringtest: virtio_ring: fix up need_event math
last kicked event index must be updated unconditionally: even if we don't need to kick, we do not want to re-check the same entry for events. Reported-by: Cornelia Huck <cohuck at redhat.com> Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- tools/virtio/ringtest/virtio_ring_0_9.c | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git