Displaying 20 results from an estimated 5000 matches similar to: "poor throughput with tinc"
2010 May 29
1
IFB0 throughput 3-4% lower than expected
I have two boxes for the purpose of testing traffic control and
my knowledge thereof (which is at the inkling stage). The boxes are
connected by 100Mbit ethernet cards via a switch.
For egress traffic via eth0 I achieve a throughput that is close to the
specified CEILing, particularly for values above 1mbit. Ingress traffic
does not seem so well behaved. Above about 1mbit rates achieved are
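The usual way to shape ingress as described above is to redirect incoming eth0 traffic to ifb0 and attach a normal egress qdisc there. A minimal sketch of that setup, assuming the device names and a 1mbit ceiling taken from the post (the actual configuration is not shown in the excerpt):

```shell
# Sketch: redirect eth0 ingress to ifb0 so a normal egress qdisc can shape it.
# Device names and the 1mbit rate are assumptions for illustration.
modprobe ifb numifbs=1
ip link set dev ifb0 up

# Attach an ingress qdisc to eth0 and mirror all incoming packets to ifb0.
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
    action mirred egress redirect dev ifb0

# Shape the redirected traffic on ifb0 as if it were egress.
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
```

The extra redirect step, plus the fact that ingress shaping only acts after packets have already crossed the wire, can plausibly account for throughput landing a few percent below the configured ceiling.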
2006 Feb 01
0
prio test results
Hi, below are some test results from implementing a prio qdisc (the qdisc is also below).
The qdisc is attached to a vlan interface for my external network. Both tests were run
at the same time.
The links are policed at 6.0M by our provider.
192.168.70.1 --> 192.168.30.1
My question is: if using a prio qdisc, shouldn't the iperf run with a tos of b8
have
2010 Sep 20
0
No subject
connection will remain a TCP connection unless it is broken and restarted.
Usually if I stop the client and wait for about 30 seconds to reconnect,
there is a much greater chance that the MTU probes work fine, and in about
30 seconds MTU is fixed to 1416.
Every time when the MTU probing fails, I see latency between 700 - 1000 ms
with 32 byte pings over a LAN.
Every time when the MTU probing does
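The 1416 figure reported above is what is left of a 1500-byte physical MTU after the tunnel's encapsulation. The exact breakdown below is an assumption chosen so the numbers match the thread; tinc's real per-packet overhead depends on cipher and protocol options:

```python
# Rough sketch of why a tunnel MTU ends up below the physical 1500: every
# tunneled packet is wrapped in outer IP/UDP headers plus the VPN's own
# framing/MAC overhead. The 56-byte figure is an assumption picked so the
# result matches the 1416 reported in the thread.
PHYS_MTU = 1500
OUTER_IP = 20        # outer IPv4 header
OUTER_UDP = 8        # outer UDP header
VPN_OVERHEAD = 56    # assumed: sequence number, padding, MAC, etc.

tunnel_mtu = PHYS_MTU - OUTER_IP - OUTER_UDP - VPN_OVERHEAD
print(tunnel_mtu)  # 1416
```

Until probing settles on this value, oversized packets have to be fragmented or bounced, which is consistent with the high latencies seen while probing fails.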
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded, gigabit interfaces. Bonding mode is
802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf,
I never get more than a total of about 3Gbps throughput. Is there anything
to tweak to get better throughput? Or am I running into other limits? (e.g. I
was reading about TCP retransmit limits for mode 0.)
The iperf test was run with iperf -s on the
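With 802.3ad and xmit_hash_policy=layer3+4, each flow is hashed to exactly one slave, so a single TCP stream can never exceed one link's line rate; you need several parallel streams (e.g. iperf -P) to load all four links, and an unlucky hash distribution can still leave one link idle. A simplified sketch of the idea (this is not the exact kernel hash function):

```python
# Simplified sketch of layer3+4 slave selection in Linux bonding: one
# (src ip, dst ip, src port, dst port) tuple always maps to the same slave,
# so a single TCP stream is capped at one link's capacity. This is NOT the
# exact kernel hash, just an illustration of the mechanism.

def select_slave(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                 num_slaves: int) -> int:
    return (src_ip ^ dst_ip ^ src_port ^ dst_port) % num_slaves

# One flow: always the same slave, no matter how often it sends.
flow = (0x0A000001, 0x0A000002, 43210, 5001)
assert len({select_slave(*flow, 4) for _ in range(100)}) == 1

# Many flows (e.g. iperf -P 8 with consecutive source ports): spread out.
slaves = {select_slave(0x0A000001, 0x0A000002, 43210 + i, 5001, 4)
          for i in range(8)}
print(sorted(slaves))  # [0, 1, 2, 3]
```

This is one plausible reason the bond tops out near 3Gbps: the per-flow cap plus an uneven hash spread, independent of any TCP retransmit tuning.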
2010 Nov 24
1
slow network throughput, how to improve?
I would like some input on this one, please.
Two CentOS 5.5 XEN servers with 1Gbit NICs, connected to a 1Gbit switch,
transfer files to each other at about 30MB/s.
Both servers have the following setup:
CentOS 5.5 x64
XEN
1Gbit NICs
7200rpm SATA HDDs
The hardware configuration can't change, I need to use these servers
as they are. They are both used in production
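A quick sanity check on the numbers in that post, using the figures given there (the ~6% framing overhead is an assumption):

```python
# A 1 Gbit/s link tops out at 125 MB/s raw, roughly 118 MB/s after typical
# TCP/IP framing overhead (assumed ~6% here). 30 MB/s is well below that,
# which points at something other than the wire -- e.g. the 7200rpm SATA
# disks or Xen dom0 network/CPU overhead.
line_rate_MBps = 1000 / 8            # 1 Gbit/s expressed in MB/s
usable_MBps = line_rate_MBps * 0.94  # assumed framing/protocol overhead
observed_MBps = 30                   # figure from the post

print(round(line_rate_MBps), round(usable_MBps),
      round(100 * observed_MBps / line_rate_MBps))  # 125 118 24
```

At roughly a quarter of line rate, the disks and the Xen network path are the more likely suspects than the NICs or the switch.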
2010 Aug 03
1
performance with libvirt and kvm
Hi,
I am seeing a performance degradation while using libvirt to start my
vm (kvm). vm is fedora 12 and host is also fedora 12, both with
2.6.32.10-90.fc12.i686. Here are the statistics from iperf :
From VM:   [ 3]  0.0-30.0 sec  199 MBytes  55.7 Mbits/sec
From host: [ 3]  0.0-30.0 sec  331 MBytes  92.6 Mbits/sec
libvirt command as seen from ps output :
/usr/bin/qemu-kvm -S -M
2003 Dec 01
0
No subject
2.4.18 kernel) using 3 x 60GB WD 7200 IDE drives on a 7500-4 controller I
could get peak I/O of 452 MBytes/sec, and a sustainable I/O rate of over
100 MBytes/sec. That is not exactly a 'dunno' performance situation. These
tests were done using dbench and RAID5.
Let's get that right:
100 MBytes/sec == 800 Mbits/sec, which is just a tad over 100 Mbits/sec
(the bottleneck if you use
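The conversion quoted in that post is worth spelling out, since mixing up MBytes and Mbits is exactly the confusion several threads here hinge on:

```python
# Disk throughput is usually quoted in MBytes/sec, network links in
# Mbits/sec. 100 MBytes/sec is 800 Mbits/sec -- eight times what a
# 100 Mbit/s link can carry, so in that setup the network, not the
# RAID array, is the bottleneck.
disk_MBps = 100
disk_Mbps = disk_MBps * 8
link_Mbps = 100  # Fast Ethernet

print(disk_Mbps, disk_Mbps // link_Mbps)  # 800 8
```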
2018 May 10
0
Tinc 1.1pre15 double-crash
Hello,
this morning I apparently had tinc crash on me.
In 2 independent tinc clusters of 3 nodes each (but located in the same datacenter), one tinc process crashed in each of the clusters.
One process apparently with `status=6/ABRT`, the other with `status=11/SEGV`.
Interestingly, they crashed with only 5 minutes difference.
The only thing I can come up with that might explain this correlation
2017 Dec 10
0
Problems with packets being dropped between nodes in the vpn
Hi
I have some problems with my vpn. I'm running version 1.1pre15 on all nodes.
I have four nodes in my network.
Node1 -> connects to Node2
Node2 -> connects to Node1
Node3 -> connects to Node1 and Node2
Node4 -> connects to Node1 and Node2
The problem is the connection between Node3 and Node4. The traffic goes via Node1 and Node2. It's unstable, with packet drops almost all the time
2010 Nov 15
5
Poor performance on bandwidth, Xen 4.0.1 kernel pvops 2.6.32.24
Hello list,
I have two differents installation Xen Hypervisor on two identical
physical server, on the same switch :
The problem is on my new server (Xen 4.0.1 with pvops kernel 2.6.32.24), I
have bad performance on bandwidth
I have tested with a file copy and iperf.
Average iperf results:
Transfer
Bandwidth
XEN-A -> Windows
2015 Jul 31
0
Indirect routing issue?
Hi there,
I am experiencing an annoying but not critical issue with (I think)
tinc's internal routing. My setup is this:
HostA (local. ConnectTo = HostC)
HostB (geographically close. ConnectTo = HostC)
HostC (far away. ConnectTo = nothing)
Without tinc, pings from HostA to HostB take around 10ms, and from
HostA/B to HostC around 200ms.
With tinc, pings from HostA to HostB take nearly
2015 Jul 02
0
Samba server read issues
Hi all,
I set up a Samba server on Debian (kernel 3.2.0-4-amd64). It runs as a guest OS in a VirtualBox machine on an OS X host.
Connection seems pretty good:
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.21 port 5001
2007 Jan 02
2
strange speed issue
Hello,
I'm trying to connect some windows machines together using tinc 1.0.6.
The basic connectivity (ping) works fine as expected. But I'm getting
really poor speeds over the tinc tunnel. The test machines are on the
same switch and get values ranging from 6-9 MBytes/s speaking directly to
each other. However over the tinc tunnel the speeds are in the range of
20-40 kbytes/s. The machines
2008 Feb 05
0
Need help in analyzing ntop data
Hi,
I want to do some analysis of NTOP data. Currently I have installed
NTOP on CentOS 5.1 and I am able to see some network data being
graphed. But there is no documentation on whether NTOP is showing
network throughput in MBytes or Mbits. For example, I am getting
Throughput Min: 163.7k, Max: 3.0M and Last 859.4k, and there are
some options like anomalia, upper, lower and trend (30min).
Under
2006 Jun 21
1
Expected network throughput
Hi,
I have just started to work with Xen and have a question regarding the
expected network throughput. Here is my configuration:
Processor: 2.8 GHz Intel Celeron (Socket 775)
Motherboard: Gigabyte 8I865GVMF-775
Memory: 1.5 GB
Basic system: Kubuntu 6.06 Dapper Drake
Xen version: 3.02 (Latest 3.0 stable download)
I get the following iperf results:
Src Dest Throughput
Dom0 Dom0
2016 Jun 02
0
Bug report/compiling with debug.
Hi All,
Looking for some guidance on debugging and also reporting a bug.
First off I can't seem to find any information on reproducing the ubuntu
16.04 build with debug symbols, I do have a core file I'd like to
introspect to potentially provide a patch.
Secondly the bug report itself:
I am receiving a segfault (signal 11) when one or more machines share an
*Address*-specified IP.
For
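For the "debug symbols on Ubuntu" part of that question, one standard approach is to rebuild the package unstripped and point gdb at the existing core file. This is a sketch of the generic Debian/Ubuntu recipe, not something tested against the 16.04 tinc package specifically; the core file path is a placeholder:

```shell
# Rebuild the distro package with symbols kept (nostrip) and optimisation
# off (noopt), then install it and inspect the core file with gdb.
apt-get source tinc
sudo apt-get build-dep tinc
cd tinc-*/
DEB_BUILD_OPTIONS="nostrip noopt" dpkg-buildpackage -us -uc -b
sudo dpkg -i ../tinc_*.deb

# Load the existing core against the unstripped binary:
gdb /usr/sbin/tincd /path/to/core
# then at the (gdb) prompt: bt full
```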
2012 Dec 03
1
Strange QoS behavior
Hi,
I'm having some weird problem with the setup of QoS on a bridged network.
As the docs state, outbound/inbound average speed should be expressed in KBps (KBytes per second), but in order to get a maximum speed of 10Mbps (megabits per second), surprisingly enough, I have to use 2560 on the guest (not 1280 as expected).
Using 1280 units I get a speed of 5Mbps.
I'm aware of peak and
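The arithmetic behind that post, made explicit (treating "10 Mbit" as 10 x 1024 x 1024 bits, which is how the poster's expected 1280 comes out; why a factor of exactly 2 is needed is not explained in the excerpt):

```python
# libvirt's QoS average bandwidth is given in KBytes/sec. Converting the
# 10 Mbit/s target with 1024-based units reproduces the 1280 the poster
# expected; the value that actually worked was exactly twice that.
target_mbit = 10
expected_KBps = target_mbit * 1024 * 1024 // 8 // 1024
observed_KBps = 2560  # value the poster actually needed

print(expected_KBps, observed_KBps // expected_KBps)  # 1280 2
```

The clean factor of 2 (and 1280 yielding exactly half the target speed) suggests the configured value is being halved or applied per direction somewhere in the stack, but that is an inference, not something the thread confirms.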
2006 Jan 16
1
Periodic routing problem
Hi, I've been running tinc for a couple of months and it's great, but I
have a periodic problem which maybe you guys can figure out. I operate a
3-node tinc VPN, let's say A, B and C.
   A
  / \
 B --- C
The problem is that after a while, node C can't exchange data with node
B. It works fine (ping and other traffic) for about 10 minutes, then
fails. Here is some debug
2012 Dec 03
1
Strange behavior of QoS
Hi,
I'm having some weird problem with the setup of QoS on a bridged network.
As the docs state, outbound/inbound average speed should be expressed in KBps (KBytes per second), but in order to get a maximum speed of 10Mbps (megabits per second), surprisingly enough, I have to use 2560 on the guest (not 1280 as expected).
Using 1280 units I get a speed of 5Mbps.
I'm aware of peak and
2018 Apr 11
0
Route certain trafic via a tinc node that is not directly connected.
Hello again :)
Thank you all for your replies. Below are the config files of the 3 hosts.
I use tinc in router mode. I do not have any Mode config lines anywhere,
so tinc must be using the default settings here.
I added ipaddressx to the Subnets on hostc and this works. Traffic to
that IP is now routed via hostc.
But since this ipaddressx address changes often I need to resolve it
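In router mode, tinc forwards traffic to whichever node announces a matching Subnet, so the change described amounts to something like the host file sketched below. The addresses are placeholders (ipaddressx is kept as the poster's own placeholder, not filled in), and the real host files from the thread are not shown in the excerpt:

```text
# hosts/hostc -- minimal sketch, addresses are illustrative only
Address = hostc.example.com
Subnet = 10.0.0.3/32     # hostc's own VPN address
Subnet = ipaddressx/32   # the external address to be routed via hostc
```

Since Subnet entries are static, a frequently changing ipaddressx would indeed need the host file updated (and the daemon reloaded) each time it changes, which matches the problem the poster goes on to describe.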