search for: iperf3

Displaying 20 results from an estimated 77 matches for "iperf3".

2018 Jun 27
2
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
...For avoiding deadlock, change the code to lock the vq one > > by one and use the VHOST_NET_VQ_XX as a subclass for > > mutex_lock_nested. With the patch, qemu can set differently > > the busyloop_timeout for rx or tx queue. > > > > We set the poll-us=100us and use the iperf3 to test > > its throughput. The iperf3 command is shown as below. > > > > on the guest: > > iperf3 -s -D > > > > on the host: > > iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 > > > > * With the patch: 23.1 Gbits/sec > > * With...
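
Reconstructed from the excerpt, a minimal sketch of that benchmark; the guest address 192.168.1.100 and the flags are as quoted, so adjust them to the actual setup:

    # on the guest: iperf3 server, daemonized
    guest$ iperf3 -s -D
    # on the host: 10 parallel TCP streams for 10 seconds, MSS clamped to 1400
    host$ iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400
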
2023 Nov 14
2
emulate ARM ?
Hi guys. How do you emulate the ARM arch - I mean, with what's in the distro &| SIGs repos, as opposed to do-it-yourself? many thanks, L.
2023 Nov 16
1
emulate ARM ?
On Nov 14, 2023, at 13:44, lejeczek via CentOS <centos at centos.org> wrote: > > How do you emulate ARM arch With QEMU: $ uname -r 5.14.0-284.30.1.el9_2.x86_64 $ sudo dnf install qemu-user-static-aarch64 $ docker pull --platform=linux/arm64 tangentsoft/iperf3 $ docker export $(docker create --name iperf3 tangentsoft/iperf3) > iperf3.tar $ tar xf iperf3.tar $ file bin/iperf3 bin/iperf3: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, BuildID[sha1]=254575ed4ae36c21c317691a8008f3382eb7225e, stripped $ bin/iperf3 -s --------...
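
Spelled out, the steps quoted in that reply look roughly like this (package and image names are taken verbatim from the post; the qemu-user-static-aarch64 package name is the EL9 one, other distros may package it differently):

    # install the aarch64 user-mode emulator (registers a binfmt_misc handler)
    $ sudo dnf install qemu-user-static-aarch64
    # pull an arm64 image and extract its statically linked iperf3 binary
    $ docker pull --platform=linux/arm64 tangentsoft/iperf3
    $ docker export $(docker create --name iperf3 tangentsoft/iperf3) > iperf3.tar
    $ tar xf iperf3.tar
    # confirm the binary is aarch64, then run it through qemu-user emulation
    $ file bin/iperf3
    $ bin/iperf3 -s
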
2018 Jun 30
1
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
...> From: Tonghao Zhang <xiangxia.m.yue at gmail.com> > > This patch improves the guest receive and transmit performance. > On the handle_tx side, we poll the sock receive queue at the > same time. handle_rx do that in the same way. > > We set the poll-us=100us and use the iperf3 to test Where/how do you configure poll-us=100us ? Are you talking about /proc/sys/net/core/busy_poll ? p.s. Nice performance boost! :-) > its bandwidth, use the netperf to test throughput and mean > latency. When running the tests, the vhost-net kthread of > that VM, is always 100% CP...
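
For reference, the host-side busy-poll sysctls mentioned in that question can be set as below; whether this is what the patch's poll-us=100us actually refers to is exactly what the reply is asking, so treat the values as illustrative only:

    # socket busy polling, values in microseconds
    $ sudo sysctl -w net.core.busy_poll=100
    $ sudo sysctl -w net.core.busy_read=100
    # equivalent to: echo 100 > /proc/sys/net/core/busy_poll
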
2018 Jun 26
3
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
...he same time. handle_rx do that in the same way. For avoiding deadlock, change the code to lock the vq one by one and use the VHOST_NET_VQ_XX as a subclass for mutex_lock_nested. With the patch, qemu can set differently the busyloop_timeout for rx or tx queue. We set the poll-us=100us and use the iperf3 to test its throughput. The iperf3 command is shown as below. on the guest: iperf3 -s -D on the host: iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 * With the patch: 23.1 Gbits/sec * Without the patch: 12.7 Gbits/sec Signed-off-by: Tonghao Zhang <zhangtonghao at didichuxing.com>...
2020 Apr 27
4
[PATCH net-next 0/3] vsock: support network namespace
...img,if=virtio --nographic \ > -device vhost-vsock-pci,guest-cid=4 > l1_vm$ ip netns exec ns2 qemu-system-x86_64 -m 1G -M accel=kvm -smp 2 \ > -drive file=/tmp/vsockvm2.img,if=virtio --nographic \ > -device vhost-vsock-pci,guest-cid=4 > > # all iperf3 listen on CID_ANY and port 5201, but in different netns > l1_vm$ ./iperf3 --vsock -s # connection from l0 or guests started > # on default netns (init_net) > l1_vm$ ip netns exec ns1 ./iperf3 --vsock -s > l1_vm$ ip netns exec ns1 ./iperf3 --vsock -s > >...
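
A condensed sketch of the topology described in that cover letter; the CIDs, namespace names, port, and the --vsock flag are as quoted, and --vsock only exists in the VSOCK-enabled iperf3 fork the series uses:

    # each nested VM runs inside its own namespace on the L1 host, e.g.:
    l1_vm$ ip netns exec ns2 qemu-system-x86_64 -m 1G -M accel=kvm -smp 2 \
               -drive file=/tmp/vsockvm2.img,if=virtio --nographic \
               -device vhost-vsock-pci,guest-cid=4
    # VSOCK iperf3 servers all bind CID_ANY:5201, but in different namespaces
    l1_vm$ ./iperf3 --vsock -s                    # default netns (init_net)
    l1_vm$ ip netns exec ns1 ./iperf3 --vsock -s  # ns1
    l1_vm$ ip netns exec ns2 ./iperf3 --vsock -s  # ns2
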
2018 Jun 20
1
[PATCH] net: vhost: improve performance when enable busyloop
This patch improves the guest receive performance from host. On the handle_tx side, we poll the sock receive queue at the same time. handle_rx do that in the same way. We set the poll-us=100us and use the iperf3 to test its throughput. The iperf3 command is shown as below. iperf3 -s -D iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 --bandwidth 100000M * With the patch: 21.1 Gbits/sec * Without the patch: 12.7 Gbits/sec Signed-off-by: Tonghao Zhang <zhangtonghao at didichuxing.com> --- driver...
2015 Mar 13
3
Network throughput testing software available for CentOS/Linux
...m > > The source is there, and I would be surprised if it didn't build > easily on EL7. > > https://alteeve.ca/an-repo/el6/SRPMS/iperf-2.0.5-11.el6.anvil.src.rpm +1 for iperf, and it's available on EPEL also https://dl.fedoraproject.org/pub/epel/6/x86_64/ EPEL6 has iperf and iperf3 while EPEL7 has just iperf3. netperf is also very good, but it's more complex to use and I'm not aware of packages for it. Marcelo
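
On a CentOS 7 machine that amounts to the following (per the post, EPEL 7 carries only iperf3, while EPEL 6 has both iperf and iperf3):

    # enable EPEL, then install iperf3
    $ sudo yum install epel-release
    $ sudo yum install iperf3
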
2020 Apr 27
0
[PATCH net-next 0/3] vsock: support network namespace
...t; > -device vhost-vsock-pci,guest-cid=4 > > l1_vm$ ip netns exec ns2 qemu-system-x86_64 -m 1G -M accel=kvm -smp 2 \ > > -drive file=/tmp/vsockvm2.img,if=virtio --nographic \ > > -device vhost-vsock-pci,guest-cid=4 > > > > # all iperf3 listen on CID_ANY and port 5201, but in different netns > > l1_vm$ ./iperf3 --vsock -s # connection from l0 or guests started > > # on default netns (init_net) > > l1_vm$ ip netns exec ns1 ./iperf3 --vsock -s > > l1_vm$ ip netns exec ns1 ./iperf3 -...
2020 Apr 28
0
[PATCH net-next 0/3] vsock: support network namespace
...>> -device vhost-vsock-pci,guest-cid=4 >> l1_vm$ ip netns exec ns2 qemu-system-x86_64 -m 1G -M accel=kvm -smp 2 \ >> -drive file=/tmp/vsockvm2.img,if=virtio --nographic \ >> -device vhost-vsock-pci,guest-cid=4 >> >> # all iperf3 listen on CID_ANY and port 5201, but in different netns >> l1_vm$ ./iperf3 --vsock -s # connection from l0 or guests started >> # on default netns (init_net) >> l1_vm$ ip netns exec ns1 ./iperf3 --vsock -s >> l1_vm$ ip netns exec ns1 ./iperf3 --vs...
2018 Jun 30
0
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> This patch improves the guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time. handle_rx do that in the same way. We set the poll-us=100us and use the iperf3 to test its bandwidth, use the netperf to test throughput and mean latency. When running the tests, the vhost-net kthread of that VM, is always 100% CPU. The commands are shown as below. iperf3 -s -D iperf3 -c IP -i 1 -P 1 -t 20 -M 1400 or netserver netperf -H IP -t TCP_RR -l 20 -- -O "THRO...
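
Fleshed out, the quoted commands look like this; the -O output-selector list is truncated in the excerpt, so it is left off, and IP stands for the guest's address:

    # throughput: iperf3 server on one end, single-stream client on the other
    $ iperf3 -s -D
    $ iperf3 -c IP -i 1 -P 1 -t 20 -M 1400
    # transaction rate / mean latency: netperf TCP_RR for 20 seconds
    $ netserver
    $ netperf -H IP -t TCP_RR -l 20
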
2018 Jun 30
9
[PATCH net-next v3 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve the guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time. handle_rx do that in the same way. These patches are split from the previous big patch: http://patchwork.ozlabs.org/patch/934673/ For more performance report, see patch 4. Tonghao Zhang (4): net: vhost:
2018 Jun 27
0
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
...n the same way. > > For avoiding deadlock, change the code to lock the vq one > by one and use the VHOST_NET_VQ_XX as a subclass for > mutex_lock_nested. With the patch, qemu can set differently > the busyloop_timeout for rx or tx queue. > > We set the poll-us=100us and use the iperf3 to test > its throughput. The iperf3 command is shown as below. > > on the guest: > iperf3 -s -D > > on the host: > iperf3 -c 192.168.1.100 -i 1 -P 10 -t 10 -M 1400 > > * With the patch: 23.1 Gbits/sec > * Without the patch: 12.7 Gbits/sec > > Signed-off-b...
2017 May 17
2
Improving packets/sec and data rate - v1.0.24
...pher and seeing ~5% increases across the board. Each Tinc node does have AES-NI. I've also read through/found https://github.com/gsliepen/tinc/issues/110 which is very interesting. The Tinc nodes are all on CentOS 6 AWS EC2 instances as c3.large's w/ EIP's. I've been testing with iperf3 and am able to get around 510Mb/s on the raw network. Over the tun interface/Tinc network, I'm only able to max it out to around 120Mb/s. Anyone have any suggestions on settings or system changes that might be able to assist here? I'm also curious if upgrading to 1.0.31 would help and pla...
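
One way to reproduce that comparison with iperf3, using placeholder addresses (10.0.0.2 for the raw EC2 path, 172.16.0.2 for the tinc tun address; neither is from the post):

    # on the remote node
    $ iperf3 -s -D
    # raw network path (~510 Mb/s in the post)
    $ iperf3 -c 10.0.0.2 -t 30
    # same test across the tinc tun interface (~120 Mb/s in the post)
    $ iperf3 -c 172.16.0.2 -t 30
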
2019 Jul 30
1
[PATCH net-next v5 0/5] vsock/virtio: optimizations to increase the throughput
...> > v4: https://patchwork.kernel.org/cover/11047717 > > v3: https://patchwork.kernel.org/cover/10970145 > > v2: https://patchwork.kernel.org/cover/10938743 > > v1: https://patchwork.kernel.org/cover/10885431 > > > > Below are the benchmarks step by step. I used iperf3 [1] modified with VSOCK > > support. As Michael suggested in the v1, I booted host and guest with 'nosmap'. > > > > A brief description of patches: > > - Patches 1: limit the memory usage with an extra copy for small packets > > - Patches 2+3: reduce the num...
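
For context, 'nosmap' is a kernel command-line parameter; one way to add it persistently on host and guest before the benchmark runs (grubby is an assumption here, editing GRUB_CMDLINE_LINUX by hand works just as well):

    # disable SMAP for the measurement, then reboot
    $ sudo grubby --update-kernel=ALL --args="nosmap"
    $ sudo reboot
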
2019 Jul 25
2
SMB Direct support?
...orkstations machine that supports SMB Direct natively and I also have 2 Mellanox ConnectX-3 dual SFP+ cards (one in the Windows machine, one in a local server) that both support RDMA and RoCE (RDMA Over Converged Ethernet). These systems are connected with a 10G SFP+ switch and fiber optic cables. iperf3 performance tests max out the connections, and I have enabled jumbo frames (MTU 9000). The server is currently running Debian 10 with kernel 5.0.15, with Samba version 4.9.5. I have only SMB3 enabled in the smb.conf and I want to try out SMB Direct and see how much it improves performance in our l...
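
The jumbo-frame and baseline-throughput checks mentioned there might look like the following; the interface name enp1s0 and the address 192.168.10.2 are placeholders:

    # jumbo frames on the 10G interface (must match both ends and the switch)
    $ sudo ip link set dev enp1s0 mtu 9000
    # confirm the raw link still saturates before testing SMB Direct
    $ iperf3 -c 192.168.10.2 -t 30    # the other end runs: iperf3 -s
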
2018 Mar 06
2
[OT] Load testing with SIPp
...urrent calls/50 CAPS limit I would like to improve, if possible. Tests are done with both signaling and media like this: SIPp <---> SUT (asterisk 13) <---> Asterisk box echoing media I checked bandwidth first and got 930 Mb/s on each leg (from SIPp to SUT or SUT to echoing box) using iperf3 TCP testing, though my target relies on UDP. My questions are: 1. Have you ever noticed better scalability using UDP or TCP? 2. Where do the retransmissions I'm observing on the SIPp console most probably come from? Network issues? My SIPp not being correctly tuned? Lack of resources somewhere...
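
Since that bandwidth check used TCP while the SIP media is UDP, a closer match would be iperf3 in UDP mode (SUT_IP is a placeholder):

    # UDP at roughly line rate; reports loss and jitter as well as throughput
    $ iperf3 -c SUT_IP -u -b 930M -t 30
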
2018 Jul 02
5
[PATCH net-next v4 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve the guest receive and transmit performance. On the handle_tx side, we poll the sock receive queue at the same time. handle_rx do that in the same way. For more performance report, see patch 4. v3 -> v4: fix some issues v2 -> v3: These patches are split from the previous big patch: