Hi,

I am testing tinc for a very large-scale deployment, currently using tinc-1.1. The test results below are for tinc in switch mode; all other settings are at their defaults. The test is performed in a LAN environment between two different hosts. I am getting only 24.6 Mbits/sec when tinc is used; without tinc, on the same hosts and link, I get 95 to 100 Mbits/sec using iperf.

Over tinc:

iperf -c 192.168.9.9 -b 100m -l 32k -w 128k
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  29.4 MBytes  24.6 Mbits/sec
[  3] Sent 940 datagrams
[  3] Server Report:
[  3]  0.0-10.4 sec  6.72 MBytes  5.40 Mbits/sec  22.602 ms  724/  939 (77%)
[  3]  0.0-10.4 sec  1 datagrams received out-of-order

Without tinc:

iperf -c 10.206.131.254 -b 100m -l 32k -w 128k
[  3] local 10.172.241.254 port 53809 connected with 10.206.131.254 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   119 MBytes   100 Mbits/sec
[  3] Sent 3817 datagrams
[  3] Server Report:
[  3]  0.0-10.3 sec  51.5 MBytes  42.2 Mbits/sec   8.436 ms 2168/ 3817 (57%)

Using tinc:

iperf -c 192.168.9.9 -u -b 100M
[  3] local 192.168.9.1 port 58384 connected with 192.168.9.9 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  33.5 MBytes  28.1 Mbits/sec
[  3] Sent 23897 datagrams
read failed: Connection refused
[  3] WARNING: did not receive ack of last datagram after 5 tries.

Without tinc:

iperf -c 10.206.131.254 -u -b 100M
[  3] local 10.172.241.254 port 56376 connected with 10.206.131.254 port 5001
read failed: Connection refused
[  3] WARNING: did not receive ack of last datagram after 1 tries.
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   114 MBytes  95.4 Mbits/sec
[  3] Sent 81124 datagrams

I have disabled the cipher and digest, but it has no impact. I also tried TCP only, and increasing the MTU to 1500, without any success; a sketch of the relevant configuration is at the end of this mail. Please let me know if I am missing something. I tried tinc-1.0.19 and tinc-1.0.24 as well, with the same results. With tinc-1.1 I do see that CPU consumption is reduced, from 95% (with tinc-1.0) to 45% (with tinc-1.1).

Any help or pointers to increase the throughput will be much appreciated.

Thanks,
Anil

Here are some additional details from the server for the first test above:
iperf -s -u -l 32k -w 128k -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 32768 byte datagrams
UDP buffer size:  256 KByte (WARNING: requested  128 KByte)
------------------------------------------------------------

[With tinc]
[  3] local 192.168.9.9 port 5001 connected with 192.168.9.1 port 37737
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[  3]  0.0- 1.0 sec   736 KBytes  6.03 Mbits/sec  14.079 ms   12/   35 (34%)
[  3]  1.0- 2.0 sec  0.00 Bytes   0.00 bits/sec   14.079 ms    0/    0 (-nan%)
[  3]  2.0- 3.0 sec  0.00 Bytes   0.00 bits/sec   14.079 ms    0/    0 (-nan%)
[  3]  3.0- 4.0 sec  0.00 Bytes   0.00 bits/sec   14.079 ms    0/    0 (-nan%)
[  3]  4.0- 5.0 sec  0.00 Bytes   0.00 bits/sec   14.079 ms    0/    0 (-nan%)
[  3]  5.0- 6.0 sec  0.00 Bytes   0.00 bits/sec   14.079 ms    0/    0 (-nan%)
[  3]  6.0- 7.0 sec  0.00 Bytes   0.00 bits/sec   14.079 ms    0/    0 (-nan%)
[  3]  7.0- 8.0 sec  1.38 MBytes  11.5 Mbits/sec  21.598 ms  713/  757 (94%)
[  3]  8.0- 9.0 sec  1.88 MBytes  15.7 Mbits/sec  21.999 ms    0/   60 (0%)
[  3]  9.0-10.0 sec  1.88 MBytes  15.7 Mbits/sec  22.190 ms    0/   60 (0%)
[  3]  0.0-10.4 sec  6.72 MBytes  5.40 Mbits/sec  22.603 ms  724/  939 (77%)
[  3]  0.0-10.4 sec  1 datagrams received out-of-order
read failed: Connection refused

[Without tinc]
[  4] local 10.206.131.254 port 5001 connected with 10.172.241.254 port 53809
[  4]  0.0- 1.0 sec  11.4 MBytes  95.9 Mbits/sec   0.281 ms    0/  366 (0%)
[  4]  1.0- 2.0 sec  11.4 MBytes  95.9 Mbits/sec   0.412 ms    1/  367 (0.27%)
[  4]  2.0- 3.0 sec  11.4 MBytes  95.9 Mbits/sec   1.136 ms    0/  366 (0%)
[  4]  3.0- 4.0 sec  3.16 MBytes  26.5 Mbits/sec   0.332 ms  264/  365 (72%)
[  4]  4.0- 5.0 sec  2.47 MBytes  20.7 Mbits/sec   0.876 ms  312/  391 (80%)
[  4]  5.0- 6.0 sec  2.84 MBytes  23.9 Mbits/sec   0.505 ms  287/  378 (76%)
[  4]  6.0- 7.0 sec  2.00 MBytes  16.8 Mbits/sec   0.420 ms  309/  373 (83%)
[  4]  7.0- 8.0 sec  2.12 MBytes  17.8 Mbits/sec   0.446 ms  321/  389 (83%)
[  4]  8.0- 9.0 sec  2.16 MBytes  18.1 Mbits/sec   0.259 ms  303/  372 (81%)
[  4]  9.0-10.0 sec  2.12 MBytes  17.8 Mbits/sec   0.794 ms  321/  389 (83%)
[  4]  0.0-10.3 sec  51.5 MBytes  42.2 Mbits/sec   8.437 ms 2168/ 3817 (57%)

Debug output of tincd while the above test is run (the key material has been removed here and from the lines below):

Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
Writing packet of 96 bytes to Linux tun/tap device (tap mode)
Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
Received packet of 1450 bytes from server (10.172.241.254 port 655)
Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
Received packet of 96 bytes from server (10.172.241.254 port 655)
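
For reference, here is a sketch of the relevant configuration I was testing with. It assumes the usual layout under /etc/tinc/<netname>/ (the netname is a placeholder), and the commented-out lines are the variations I tried one at a time, with everything else left at its default:

# /etc/tinc/<netname>/tinc.conf on hostA (sketch)
Name = hostA
Mode = switch
ConnectTo = server

# /etc/tinc/<netname>/hosts/server (sketch; public key omitted)
Address = 10.172.241.254
#Cipher = none    # one run: disabled encryption (no impact)
#Digest = none    # one run: disabled the digest (no impact)
#TCPOnly = yes    # one run: forced TCP-only transport (no improvement)
#PMTU = 1500      # one run: raised the MTU to 1500 (no improvement)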
On Fri, Nov 28, 2014 at 01:24:04PM +0530, Anil Moris wrote:

> I am using tinc-1.1 for testing. test results below are for tinc in
> switch mode. all other settings are default. test is performed in LAN
> env. 2 different hosts.

[...]

> Got REQ_KEY from server (10.172.241.254 port 655): 15 server hostA 21
> Received packet of 1450 bytes from server (10.172.241.254 port 655)
> Writing packet of 1450 bytes to Linux tun/tap device (tap mode)
> Received packet of 96 bytes from server (10.172.241.254 port 655)
> Writing packet of 96 bytes to Linux tun/tap device (tap mode)

This is a problem in tinc-1.1pre10, where it tunnels some packets
needlessly over TCP, which causes the drop in bandwidth. You can try out
the latest code from our git repository, or disable the new protocol with
ExperimentalProtocol = no.

-- 
Met vriendelijke groet / with kind regards,
     Guus Sliepen <guus at tinc-vpn.org>
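
For example, assuming the usual layout under /etc/tinc/<netname>/ (netname and node name are placeholders), the option goes in tinc.conf, roughly like this:

# /etc/tinc/<netname>/tinc.conf (sketch)
Name = hostA
Mode = switch
# Fall back to the legacy (tinc-1.0 style) protocol, so that packets are
# not needlessly tunnelled over TCP by 1.1pre10:
ExperimentalProtocol = no

It is simplest to set this on both nodes and restart tincd afterwards, so that both sides speak the same protocol.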