Displaying results from an estimated 14 matches for "5600u".
2017 Sep 01
2
[PATCH net] vhost_net: correctly check tx avail during rx busy polling
...ing. Fix this by calling
vhost_vq_avail_empty() instead.
This issue can be noticed by running the netperf TCP_RR benchmark as a
client from the guest (but not the host). With this fix, TCP_RR from guest to
localhost recovers from 1375.91 trans per sec to 55235.28 trans per
sec on my laptop (Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz).
Fixes: 030881372460 ("vhost_net: basic polling support")
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
- The patch is needed for -stable
---
drivers/vhost/net.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/vhost/net.c b/drive...
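For readers who don't want to open the full patch, here is a minimal user-space sketch of the control flow the excerpt describes: busy polling on the rx side should stop as soon as the tx virtqueue has pending available buffers. The three helpers are stand-ins for the kernel's vhost_can_busy_poll(), sk_has_rx_data() and vhost_vq_avail_empty(); this illustrates the loop shape only, it is not the drivers/vhost/net.c change itself.

/* Stand-in predicates; the real checks live in drivers/vhost/. */
#include <stdbool.h>
#include <stdio.h>

static bool can_busy_poll(void)  { return true;  }  /* time budget left?    */
static bool rx_has_data(void)    { return false; }  /* socket has packets?  */
static bool tx_avail_empty(void) { return false; }  /* guest queued tx yet? */

int main(void)
{
    /* Poll until rx data shows up, tx work appears, or the budget runs out. */
    while (can_busy_poll() && !rx_has_data() && tx_avail_empty())
        ;   /* the kernel would cpu_relax() here */

    if (!tx_avail_empty())
        printf("tx buffers pending: stop rx busy polling and service tx\n");
    return 0;
}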
2017 Sep 05
1
[PATCH net V2] vhost_net: correctly check tx avail during rx busy polling
...ing. Fix this by calling
vhost_vq_avail_empty() instead.
This issue can be noticed by running the netperf TCP_RR benchmark as a
client from the guest (but not the host). With this fix, TCP_RR from guest to
localhost recovers from 1375.91 trans per sec to 55235.28 trans per
sec on my laptop (Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz).
Fixes: 030881372460 ("vhost_net: basic polling support")
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
- The patch is needed for -stable
- Changes from V1: enable vq notification when needed
---
drivers/vhost/net.c | 7 ++++++-
1 file changed, 6 insertions...
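The V2 changelog only says "enable vq notification when needed". As a rough, hedged model of the usual virtio re-arm pattern that wording suggests (re-enable guest notification once polling finds nothing, then re-check for buffers that raced in), here is a small stand-alone sketch; the helper names are invented stand-ins, not the actual vhost calls used by the patch.

#include <stdbool.h>
#include <stdio.h>

static bool tx_avail_empty(void) { return true;  }  /* nothing pending now   */
static bool enable_notify(void)  { return true;  }  /* true: work raced in   */
static void disable_notify(void) { }
static void queue_tx_work(void)  { printf("schedule the tx handler\n"); }

int main(void)
{
    if (!tx_avail_empty()) {
        /* Buffers showed up while busy polling: handle them right away. */
        queue_tx_work();
    } else if (enable_notify()) {
        /* Guest added buffers in the window between the last check and
         * re-arming notification: disable again and schedule the handler
         * instead of waiting for a kick that may never come. */
        disable_notify();
        queue_tx_work();
    }
    return 0;
}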
2018 Nov 23
1
[PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
...d, using all of the descriptor buffer might
cause a slowdown.
Rather, you should be able to get
about the same speedup, but from skipping the check of
the used ring in virtio.
> Virtio-user + vhost_kernel + XDP_DROP gives about 10% improvement on
> TX, from 4.8 Mpps to 5.3 Mpps, on an Intel(R) Core(TM) i7-5600U CPU @
> 2.60GHz.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
> ---
> drivers/vhost/vhost.c | 19 ++++++++++++-------
> 1 file changed, 12 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 3a5f81a66d34....
2018 Nov 23
5
[PATCH net-next 0/3] basic in order support for vhost_net
Hi:
This series implements basic in-order feature support for
vhost_net. This feature requires both the driver and the device to use
descriptors in order, which can simplify the implementation and
optimization on both sides. The series also implements a simple
optimization that avoids reading the available ring. Tests show a 10%
performance improvement.
More optimizations could be done on top.
Jason Wang (3):
2017 Sep 01
0
[PATCH net] vhost_net: correctly check tx avail during rx busy polling
...vhost_vq_avail_empty() instead.
>
> This issue can be noticed by running the netperf TCP_RR benchmark as a
> client from the guest (but not the host). With this fix, TCP_RR from guest to
> localhost recovers from 1375.91 trans per sec to 55235.28 trans per
> sec on my laptop (Intel(R) Core(TM) i7-5600U CPU @ 2.60GHz).
>
> Fixes: 030881372460 ("vhost_net: basic polling support")
> Signed-off-by: Jason Wang <jasowang at redhat.com>
> ---
> - The patch is needed for -stable
> ---
> drivers/vhost/net.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
> ...
2018 Nov 23
0
[PATCH net-next 3/3] vhost: don't touch avail ring if in_order is negotiated
The device uses the descriptor table in order, so there is no need to read
the index from the available ring. This eliminates the cache contention on the
avail ring completely.
Virtio-user + vhost_kernel + XDP_DROP gives about 10% improvement on
TX, from 4.8 Mpps to 5.3 Mpps, on an Intel(R) Core(TM) i7-5600U CPU @
2.60GHz.
Signed-off-by: Jason Wang <jasowang at redhat.com>
---
drivers/vhost/vhost.c | 19 ++++++++++++-------
1 file changed, 12 insertions(+), 7 deletions(-)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 3a5f81a66d34..c8be151bc897 100644
--- a/drivers/vhost/vhos...
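The in-order idea described above can be shown in a few lines. The sketch below uses a simplified stand-in for the real split-ring layout: when the guest is known to use descriptors strictly in order, the device can derive the next head from its own counter instead of reading avail->ring[], which is the cache-line access the excerpt says it eliminates.

#include <stdint.h>
#include <stdio.h>

#define QSZ 8   /* toy queue size, not a real vhost parameter */

struct avail { uint16_t idx; uint16_t ring[QSZ]; };

/* Legacy path: fetch the head from the avail ring (extra cache line). */
static uint16_t next_head_legacy(const struct avail *a, uint16_t last_avail)
{
    return a->ring[last_avail % QSZ];
}

/* In-order path: the head is implied by the device's own counter. */
static uint16_t next_head_in_order(uint16_t last_avail)
{
    return last_avail % QSZ;
}

int main(void)
{
    struct avail a = { .idx = 3, .ring = { 0, 1, 2 } };  /* in-order guest */
    uint16_t last_avail = 2;

    printf("legacy head:   %u\n", (unsigned)next_head_legacy(&a, last_avail));
    printf("in-order head: %u\n", (unsigned)next_head_in_order(last_avail));
    return 0;
}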
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batched updating of the used ring for TX. This helps to
reduce the cache contention on the used ring. The idea is to first split the
datacopy path from zerocopy, and do batching only for datacopy. This
is because zerocopy already supports its own batching.
TX PPS increased by 25.8%, and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for
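The batching idea in the cover letter above can be modeled outside the kernel. The toy program below only counts how often a used-index publish would happen when completions are flushed in batches rather than one by one; the batch size and the plain counters are illustrative assumptions, not values from the series.

#include <stdio.h>

#define BATCH 16   /* assumed batch size, for illustration only */

int main(void)
{
    int completed = 0, pending = 0, publishes = 0;

    for (int i = 0; i < 64; i++) {     /* 64 completed tx buffers        */
        completed++;
        if (++pending == BATCH) {      /* flush the used index per batch */
            publishes++;
            pending = 0;
        }
    }
    if (pending)                       /* final partial flush            */
        publishes++;

    /* One cache-line bounce per publish instead of one per completion. */
    printf("%d completions, %d used-index updates\n", completed, publishes);
    return 0;
}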
2016 Mar 20
14
[PATCH v2 0/7] tests/qemu: Add program for tracing and analyzing boot times.
v1 was here:
https://www.redhat.com/archives/libguestfs/2016-March/thread.html#00157
Not running the 'hwclock' command reduces boot times considerably.
However, I'm not sure if it is safe. See the question I posted on
qemu-devel:
http://thread.gmane.org/gmane.comp.emulators.qemu/402194
At the moment, about 50% of the time is consumed by SeaBIOS. Of this,
about a third is SGABIOS
2017 Dec 02
0
Re: [nbdkit PATCH] nbd: Fix memory leak
...[ 0.031307] smpboot: Max logical packages: 1
[ 0.031963] x2apic enabled
nbdkit: debug: starting worker thread nbd.14
[ 0.032006] Switched APIC routing to physical x2apic.
[ 0.034000] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.034000] smpboot: CPU0: Intel(R) Core(TM) i7-5600U nbdkit: debug: starting worker thread nbd.15
CPU @ 2.60GHz (family: 0x6, model: 0x3d, stepping: 0x4)
[ 0.034071] Performance Events: Broadwell events, Intel PMU driver.
[ 0.035023] ... version: 2
[ 0.035529] ... bit width: 48
[ 0.036007] ... generic registers...
2017 Dec 02
2
[nbdkit PATCH] nbd: Fix memory leak
When converting from a single transaction to a linked list, I
forgot to free the storage for each member of the list.
Reported-by: Richard W.M. Jones <rjones at redhat.com>
Fixes: 7f5bb9bf13f041ea7702bda557d9dd668bc3423a
Signed-off-by: Eric Blake <eblake at redhat.com>
---
I'm still not sure why 'make check' passes while 'make check-valgrind'
fails for
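The leak described above is of a common shape: a linked list whose members were never freed individually. Below is a small self-contained C example of that class of fix, walking the list and saving the next pointer before each free(); the struct name is hypothetical and is not nbdkit's actual type.

#include <stdlib.h>

struct trans {              /* hypothetical list node */
    int cookie;
    struct trans *next;
};

static void free_transactions(struct trans *head)
{
    while (head) {
        struct trans *next = head->next;  /* save before freeing */
        free(head);
        head = next;
    }
}

int main(void)
{
    /* Build a short list, then free every member of it. */
    struct trans *head = NULL;
    for (int i = 0; i < 3; i++) {
        struct trans *t = calloc(1, sizeof(*t));
        if (!t)
            return 1;
        t->cookie = i;
        t->next = head;
        head = t;
    }
    free_transactions(head);
    return 0;
}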
2016 Mar 22
19
[PATCH v3 0/11] tests/qemu: Add program for tracing and analyzing boot times.
Lots of changes since v2, too many to remember or summarize.
Please ignore patch 11/11, it's just for my testing.
Rich.