similar to: UDPbuffer adjustment

Displaying 20 results from an estimated 800 matches similar to: "UDPbuffer adjustment"

2010 Dec 10
1
UDP buffer overflows?
Hi, On one of our Asterisk systems that is quite busy, we are seeing the following from 'netstat -s':
Udp:
    17725210 packets received
    36547 packets to unknown port received.
    44017 packet receive errors
    17101174 packets sent
    RcvbufErrors: 44017 <--- this
When this number increases, we see SIP errors, and in particular Qualify packets are lost, and
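RcvbufErrors counts UDP datagrams dropped because the socket's receive buffer was already full when they arrived. A common first step on a busy SIP box is to raise the kernel's default and maximum socket receive buffer sizes; a minimal sketch, with purely illustrative values:

    # inspect the current limits
    sysctl net.core.rmem_default net.core.rmem_max
    # raise them at runtime (tune the numbers to your traffic)
    sysctl -w net.core.rmem_default=1048576
    sysctl -w net.core.rmem_max=4194304
    # persist across reboots
    echo "net.core.rmem_default = 1048576" >> /etc/sysctl.conf
    echo "net.core.rmem_max = 4194304" >> /etc/sysctl.conf

If the change helps, the RcvbufErrors line in 'netstat -s' stops climbing.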
2011 Mar 11
1
UDP Performance tuning
Hi, We are running 5.5 on an HP ProLiant DL360 G6. The kernel version is 2.6.18-194.17.1.el5 (we have also tested with the latest available kernel, kernel-2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the "iperf" utility. We are seeing very poor and inconsistent performance in the UDP tests. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
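For reference, a typical iperf (v2) UDP test pair looks like the sketch below; the target rate and duration are illustrative, and note that iperf defaults to roughly 1 Mbit/s for UDP unless -b is given, which is a common source of misleadingly low numbers:

    # receiver
    iperf -s -u
    # sender: UDP test at ~900 Mbit/s for 30 seconds
    iperf -c <receiver-ip> -u -b 900M -t 30

The server-side report shows loss and jitter; heavy loss at higher -b values usually points at undersized socket buffers or NIC/driver limits rather than the path itself.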
2010 Mar 16
2
What kernel params to use with KVM hosts??
Hi all, In order to reach maximum performance on my CentOS KVM hosts I have used these params:
- On /etc/grub.conf:
    kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet
- On sysctl.conf:
    # Special network params
    net.core.rmem_default = 8388608
    net.core.wmem_default = 8388608
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
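As a generic aside (not KVM-specific), sysctl.conf changes like these can be applied without a reboot and then verified:

    sysctl -p                                     # load /etc/sysctl.conf
    sysctl net.core.rmem_max net.core.wmem_max    # confirm the running values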
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
Currently I'm running CentOS 4.4 on a Dell PowerEdge 850 with an Intel PRO/1000 quad-port adapter. I seem to be able to achieve only 80% utilization on the adapter, while on the same box running Fedora Core 5 I was able to reach 99% utilization. I am using iSCSI Enterprise Target as my application, with the nullio feature enabled: it just discards any write and sends back random data for
2005 May 23
0
problem in speeds [Message from superlinux]
I have been assigned a network where I must replace its "Windows server with ISA caching proxy" with a "Debian Linux with Squid proxy", with both the "Linux" and "ISA" machines being completely different boxes. I am using a Linux 2.6 kernel since the Linux server has SATA hard disks. The network has a downlink via a penta@net DVB card; then it's connected
2016 Jan 07
0
Samba over slow connections
On 07.01.2016 at 11:58, Sébastien Le Ray wrote:
> Hi list (and happy new year),
>
> I'm experiencing some troubles using Samba (4.1.17 debian version) over
> VPN. Basically we've following setup :
>
> PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file Server
>
> Copying big (say > 1MiB) files from PC to Samba file server almost
> always ends up with a
2016 Jan 07
1
Samba over slow connections
On 07/01/2016 at 12:22, Reindl Harald wrote:
>
> /usr/sbin/ifconfig eth0 txqueuelen 100
> ______________________________________________
>
> ifcfg-eth0:
>
> ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
> ______________________________________________
>
> sysctl.conf:
>
> net.core.rmem_max = 65536
> net.core.wmem_max = 65536
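The ETHTOOL_OPTS line is the Red Hat/CentOS ifcfg hook for running ethtool at interface bring-up; the equivalent one-off commands, sketched here with eth0 standing in for ${DEVICE}, would be:

    ethtool -K eth0 tso on lro off        # enable TCP segmentation offload, disable large receive offload
    ethtool -G eth0 rx 128 tx 128         # shrink the RX/TX ring buffers to 128 descriptors
    ip link set dev eth0 txqueuelen 100   # same effect as the ifconfig line above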
2012 Apr 17
1
Help needed with NFS issue
I have four NFS servers running on Dell hardware (PE2900) under CentOS 5.7, x86_64. The number of NFS clients is about 170. A few days ago, one of the four, with no apparent changes, stopped responding to NFS requests for two minutes every half an hour (approx). Let's call this "the hang". It has been doing this for four days now. There are no log messages of any kind pertaining
2004 Dec 31
1
SMBFS mounts slow across gigabit connection
I'm using Samba & smbfs to make directories on a Linux file server available across a switched Gigabit network. Unfortunately, when mounting the shares on another Linux system with smbfs, the performance is terrible. To test the setup, I created both a 100 MB and a 650 MB file and transferred them with ftp, smbclient, and smbfs (mounted share). I also used iperf to send each file, just out of
2016 Feb 22
2
Tinc 1.0 - Limit Bandwidth
Hi team, I am trying to limit one of my tinc nodes to consuming only 100 Kbps of bandwidth (send and receive). I am not sure if I am setting the right options in its main configuration file (tinc.conf). Will these options serve my purpose? I have set them, but the tunnel adapter (running on Windows) still consumes all the bandwidth available.
================
UDPRcvBuf = 12500
UDPSndBuf = 12500
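For what it's worth, UDPRcvBuf and UDPSndBuf only set the size (in bytes) of tinc's UDP socket buffers; they are not a rate limit, which would explain why the tunnel still uses all available bandwidth. On a Linux node an actual cap can be applied to the VPN interface with tc; a hedged sketch, assuming the interface is named tun0:

    # token-bucket filter: limit egress on the tinc interface to ~100 kbit/s
    tc qdisc add dev tun0 root tbf rate 100kbit burst 10kb latency 70ms

This only shapes what that node sends into the tunnel; ingress shaping and Windows nodes would need a different mechanism.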
2007 Dec 28
7
Xen and networking.
I have a beefy machine (Intel dual quad-core, 16 GB memory, 2 x GigE). I have loaded RHEL5.1-xen on the hardware and have created two logical systems, each with 4 CPUs, 7.5 GB memory, and 1 x GigE. Following RHEL guidelines, I have it set up so that eth0->xenbr0 and eth1->xenbr1. Each of the two RHEL5.1 guests uses one of the interfaces, and this is verified at the switch by seeing the unique MAC addresses.
2016 Feb 22
0
Tinc 1.0 - Limit Bandwidth
On Mon, Feb 22, 2016 at 09:17:41AM +0300, Yazeed Fataar wrote:
> I am trying to limit one of my tinc nodes to only consume 100Kbps bandwidth
> (Send and Receive). I am not sure if I am setting the right option under
> its main configuration file (tinc.conf) . Will these options serve my
> purpose? I have set them up but the Tunnel Adapter (running on Windows)
> still consumes all
2015 Dec 31
0
Self-DoS
On Wed, Dec 30, 2015 at 05:26:38PM +0000, Pierre Beck wrote: > I have successfully connected a network of about 60 nodes (many of which are virtual machines) with tinc 1.0 but encounter a severe bug when physical connectivity between two major locations is lost and then reconnected. From what I gathered, many nodes attempt to connect to many other nodes, causing 100% CPU load on all nodes,
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controler Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to
2015 Dec 30
2
Self-DoS
Hi, I have successfully connected a network of about 60 nodes (many of which are virtual machines) with tinc 1.0, but I encounter a severe bug when physical connectivity between two major locations is lost and then reconnected. From what I gathered, many nodes attempt to connect to many other nodes, causing 100% CPU load on all nodes and taking down the whole network, with no node succeeding in connecting
2005 May 13
4
Gigabit Throughput too low
Hi, I was wondering if you ever got better performance out of your Gigabit/IDE/FC2 setup? I am facing a similar situation. I am running FC2 with Samba 3.x. My problem is that I am limited to 10 MBytes per second sustained. I think it's related to pdflush and how its buffers are set up. (I have been doing some research, and before the 2.6 kernels bdflush was the method that was used, and
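On 2.6 kernels the old bdflush tunables were replaced by the pdflush writeback knobs under vm.*; if the stalls really are writeback-related, these are the usual values to experiment with (a sketch only, the numbers are illustrative):

    sysctl -w vm.dirty_background_ratio=5     # start background writeback earlier
    sysctl -w vm.dirty_ratio=10               # throttle writers sooner
    sysctl -w vm.dirty_expire_centisecs=1500  # expire dirty pages after 15s instead of the default 30s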
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year), I'm experiencing some troubles using Samba (4.1.17, Debian version) over a VPN. Basically we have the following setup: PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file server. Copying big (say > 1 MiB) files from the PC to the Samba file server almost always ends up with a NT_STATUS_IO_TIMEOUT error (or "a network error occurred" if trying to copy from
2013 Nov 07
0
GlusterFS with NFS client hangs up sometimes
I have the following setup with GlusterFS.
Servers: 4
- CPU: Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
- RAM: 32G
- HDD: 1T, 7200 RPM (x 10)
- Network card: 1G x 4 (bonding)
OS: CentOS 6.4
- File system: XFS
> Disk /dev/sda: 1997.1 GB, 1997149306880 bytes
> 255 heads, 63 sectors/track, 242806 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
2010 May 03
0
TCP Tuning/Apache question (possibly OT)
Hello all: I've been requested to add some TCP tuning parameters to some CentOS 5.4 systems. These tunings set the maximum TCP receive and send buffer sizes:
net.core.rmem_max
net.core.wmem_max
Information on this tuning is broadly available:
http://fasterdata.es.net/TCP-tuning/linux.html
http://www.speedguide.net/read_articles.php?id=121
Potential downsides are available:
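Besides the two net.core maximums, the per-connection TCP autotuning limits are usually raised together with them; a hedged example using the same values that appear in the 2009 sysctl listing above (adjust to your bandwidth-delay product):

    # /etc/sysctl.conf
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_window_scaling = 1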
2017 May 17
0
Improving packets/sec and data rate - v1.0.24
Hi, Terribly sorry about the duplicated message. I've completed the upgrade to Tinc 1.0.31 but have not seen much of a performance increase. The change looks to be similar to switching to aes-256-cbc with sha256 (which are now the defaults, so that makes sense). Our tinc.conf is reasonably simple:
Name = $hostname_for_node
Device = /dev/net/tun
PingTimeout = 60
ReplayWindow = 625
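If the intent is to pin or verify the cipher and digest, in tinc 1.0 these are host-level options, so (if the manual is remembered correctly) they belong in the hosts/$hostname_for_node file rather than in tinc.conf; an illustrative sketch, matching the defaults mentioned above:

    Cipher = aes-256-cbc
    Digest = sha256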