search for: tcp_wmem

Displaying 20 results from an estimated 34 matches for "tcp_wmem".

2009 Mar 11
3
Intermittent NFS problems with NetApp server
...ease the TCP receive buffer size for NFS on the client. Some googling around got me to check these values for TCP: # sysctl net.ipv4.tcp_mem net.ipv4.tcp_mem = 98304 131072 196608 # sysctl net.ipv4.tcp_rmem net.ipv4.tcp_rmem = 4096 87380 4194304 # sysctl net.ipv4.tcp_wmem net.ipv4.tcp_wmem = 4096 16384 4194304 So these seem fine to me (i.e., the max is greater than 32768). Is there an NFS (as opposed to TCP) setting I should be tweaking? Any ideas why the NetApp is issuing those warnings? Any other suggestions on how to debug this problem? Tha...
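A note on the numbers above: the third tcp_rmem field is the per-socket maximum the kernel may autotune up to, so 4194304 already comfortably covers a 32768-byte window. If the NetApp keeps warning anyway, a minimal sketch of raising the ceiling (values illustrative, not a recommendation):

  sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
  sysctl -w net.core.rmem_max=8388608           # caps explicit setsockopt() requests
  sysctl net.ipv4.tcp_rmem net.core.rmem_max    # confirm what the kernel applied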
2007 Mar 19
1
sysctl errors
...Custom Settings: net.ipv4.tcp_max_syn_backlog=2048 net.ipv4.tcp_fin_timeout=30 net.ipv4.tcp_keepalive_intvl=10 net.ipv4.tcp_keepalive_probes=7 net.ipv4.tcp_keepalive_time=1800 net.ipv4.tcp_max_tw_buckets=360000 net.ipv4.tcp_synack_retries=3 net.ipv4.tcp_rmem="4096 87380 16777216" net.ipv4.tcp_wmem="4096 87380 16777216" net.ipv4.tcp_mem="8388608 8388608 8388608" ---------errors----------- # sysctl -p [errors] error: unknown error 22 setting key 'net.ipv4.tcp_rmem' error: unknown error 22 setting key 'net.ipv4.tcp_wmem' error: unknown error 22 setting key...
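Error 22 is EINVAL, and the likely cause here is the quoting: values in /etc/sysctl.conf are taken literally, so older sysctl builds hand the quote characters to the kernel, which rejects them. The same entries in the unquoted form sysctl.conf expects:

  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 87380 16777216
  net.ipv4.tcp_mem = 8388608 8388608 8388608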
2007 Dec 28
7
Xen and networking.
...a large burst like each can do 100MB/sec, but then in a couple of seconds, tapers off to the 15-40MB/sec until the dd finishes. Things I have tried (installed on the host and the guests) net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 # recommended to increase this for 1000 BT or higher net.core.netdev_max_backlog = 2500 sysctl -w net.ipv4.tcp_congestion_control=cubic Any ideas? -- --tmac RedHat Certified Engineer #804006984323821...
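Worth ruling out before blaming Xen: sysctl -w settings are lost at reboot, and a fast burst that tapers off is also what slow-start followed by a buffer or window clamp looks like. A small sketch for persisting the congestion-control choice and watching live per-connection windows (assumes a reasonably recent iproute2):

  echo "net.ipv4.tcp_congestion_control = cubic" >> /etc/sysctl.conf
  sysctl -p    # apply now rather than at next boot
  ss -ti       # shows cwnd, ssthresh and rtt per connection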
2010 Mar 16
2
What kernel params to use with KVM hosts??
....18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet - On sysctl.conf # Special network params net.core.rmem_default = 8388608 net.core.wmem_default = 8388608 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 1048576 4194304 16777216 net.ipv4.tcp_wmem = 1048576 4194304 16777216 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 # Virtual machines special kernel params vm.swappiness = 0 Do I need to configure something more?...
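One caveat with that file, offered as an assumption worth checking: the net.bridge.* keys only exist once the bridge module is loaded, so a sysctl -p run early in boot can fail on those three lines while the rest apply. A sketch:

  modprobe bridge                             # the keys appear with the module
  sysctl -p
  sysctl net.bridge.bridge-nf-call-iptables   # expect 0 after the above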
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year), I'm experiencing some trouble using Samba (4.1.17 Debian version) over VPN. Basically we have the following setup: PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file server Copying big (say > 1MiB) files from PC to the Samba file server almost always ends up with an NT_STATUS_IO_TIMEOUT error (or "a network error occurred" if trying to copy from
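Not from the thread, but a common culprit for exactly this symptom is broken path-MTU discovery across the VPN: full-size segments are silently dropped in the tunnel and large writes stall until SMB times out. A frequently suggested sketch, clamping TCP MSS on the gateway carrying the tunnel:

  # run on the VPN router; rewrites the MSS of forwarded SYNs to fit the path MTU
  iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
           -j TCPMSS --clamp-mss-to-pmtu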
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values ------------------------------------------- net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_window_scaling = 1 # vm.max-readahead = ? # vm.min-readahead = ? # HW Controler Off # max-readahead = 1024 # min-readahead = 256 # Memory over-commit # vm.overcommit_memory=2 # Memory to activate bdflush # vm.bdflush="40 500 0 0 500 3000 60 20 0" ---...
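The commented-out entries above are 2.4-era keys (vm.bdflush, vm.max-readahead) that no longer exist on 2.6 kernels, which is presumably why they are disabled. Their rough 2.6 counterparts, sketched only as an approximation:

  sysctl -w vm.dirty_ratio=40              # bdflush-style writeback thresholds
  sysctl -w vm.dirty_background_ratio=10
  blockdev --setra 1024 /dev/sda           # readahead is now per-device; /dev/sda illustrative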
2003 Jul 05
13
HTB doesn't respect rate values
Hi, machine: AMD K6 200 MHz Linux distribution: Mandrake 8.1 kernel: compiled 2.4.21 applied this: #define PSCHED_CLOCK_SOURCE PSCHED_CPU in file linux/include/net/pkt_sched.h before compiling the kernel (described on http://www.docum.org/stef.coene/qos/faq/cache/40.html) bandwidth on eth0: 128kbit The simplest configuration - 122kbit guaranteed for WWW (sport 80) and
2006 Sep 20
5
Transfer rates faster than 23MBps?
We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4 Content Storage servers to Windows clients with 6 DVD burners and robotic arms and other cool gadgets. The servers used to be Windows based, but we're migrating to RedHat for a host of reasons. Unfortunately, the RedHat Samba servers are about 2.5 times slower than the Windows servers. Windows will copy a 1GB file
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
...ls IP maximum send buffer size (bytes) net.core.wmem_max = 262144 # Controls TCP memory utilization (pages) net.ipv4.tcp_mem = 49152 65536 98304 # Controls TCP sliding receive window buffer (bytes) net.ipv4.tcp_rmem = 4096 87380 16777216 # Controls TCP sliding send window buffer (bytes) net.ipv4.tcp_wmem = 4096 65536 16777216 Ross S. W. Walker Information Systems Manager Medallion Financial, Corp. 437 Madison Avenue 38th Floor New York, NY 10022 Tel: (212) 328-2165 Fax: (212) 328-2125 WWW: http://www.medallion.com <http://www.medallion.com> ______________________________________________...
2012 Apr 17
1
Help needed with NFS issue
...= 1000 vm.dirty_writeback_centisecs = 100 vm.min_free_kbytes = 65536 net.core.rmem_default = 262144 net.core.rmem_max = 262144 net.core.wmem_default = 262144 net.core.wmem_max = 262144 net.core.netdev_max_backlog = 25000 net.ipv4.tcp_reordering = 127 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_max_syn_backlog = 8192 net.ipv4.tcp_no_metrics_save = 1 The {r,w}mem_{max,default} values are twice what they were previously; changing these had no effect. The number of dirty pages is nowhere near the dirty_ratio when the hangs occur; there may be only 50MB...
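Since the diagnosis above leans on dirty-page counts, a small sketch for watching them live while reproducing the hang (both counters are reported in kB):

  watch -n1 "grep -E 'Dirty|Writeback' /proc/meminfo"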
2005 Mar 20
4
I/O descriptor ring size bottleneck?
...P topologies. Right now the configuration is quite simple -- two Xen boxes connected via a dummynet router. The dummynet router is set to limit bandwidth to 500Mbps and simulate an RTT of 80ms. I'm using the following sysctl values: net.ipv4.tcp_rmem = 4096 87380 4194304 net.ipv4.tcp_wmem = 4096 65536 4194304 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 net.ipv4.tcp_bic = 0 (tcp westwood and vegas are also turned off for now) Now if I run 50 netperf flows lasting 80 seconds (1000RTTs) from inside a VM on one box talking to the netserver on the VM on the other
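The arithmetic behind the question: at 500 Mbit/s and 80 ms, the bandwidth-delay product is (500e6 / 8) * 0.080 = 5,000,000 bytes, so the 4194304-byte tcp_rmem/tcp_wmem maximum is just under one pipe's worth for a single flow (with 50 parallel flows, each needs only about 100 KB). A sketch of lifting the ceiling for single-flow tests:

  sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"   # max now above the ~5 MB BDP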
2016 Jan 07
0
Samba over slow connections
...THTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128" ______________________________________________ sysctl.conf: net.core.rmem_max = 65536 net.core.wmem_max = 65536 net.core.rmem_default = 32768 net.core.wmem_default = 32768 net.ipv4.tcp_rmem = 4096 32768 65536 net.ipv4.tcp_wmem = 4096 32768 65536 net.ipv4.tcp_mem = 4096 32768 65536 net.ipv4.udp_mem = 4096 32768 65536 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_sack = 1 net.ipv4.tcp_dsack = 1 ______________________________________________ smb.conf: socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE max xmit = 327...
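A unit note on that block, since the same triple is copied into four keys: tcp_rmem and tcp_wmem take per-socket byte values (min, default, max), while tcp_mem and udp_mem take whole-stack page counts, so 65536 means 64 KB in the first pair but 256 MB (at 4 KB pages) in the second:

  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_mem   # bytes vs. pages, respectively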
2016 Jan 07
1
Samba over slow connections
...G ${DEVICE} rx 128 tx 128" > ______________________________________________ > > sysctl.conf: > > net.core.rmem_max = 65536 > net.core.wmem_max = 65536 > net.core.rmem_default = 32768 > net.core.wmem_default = 32768 > net.ipv4.tcp_rmem = 4096 32768 65536 > net.ipv4.tcp_wmem = 4096 32768 65536 > net.ipv4.tcp_mem = 4096 32768 65536 > net.ipv4.udp_mem = 4096 32768 65536 > net.ipv4.tcp_moderate_rcvbuf = 1 > net.ipv4.tcp_sack = 1 > net.ipv4.tcp_dsack = 1 > ______________________________________________ > > smb.conf: > > socket options = TCP_NO...
2007 Nov 04
1
Bandwidth optimisation
OS: CentOS 5.0 x86. Hi, I am using CentOS 5.0 at home, ADSL ~16 Mbps/~1 Mbps Internet connection and my ping time to my ISP is 160-170 msec. When downloading something with Firefox, I am getting download speeds of about 100-180 KB/sec (for example when downloading SP2 of XP from MS server). Are the CentOS networking settings OK for this kind of latency, or do I have to change some settings?
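A worked check for this path: sustained TCP throughput is bounded by window/RTT, and 180 KB/s * 0.165 s is roughly a 30 KB effective window, while filling 16 Mbit/s at 165 ms needs about (16e6 / 8) * 0.165 = 330 KB. So the first things to confirm, sketched below, are that window scaling is on and the receive maximum covers that (on CentOS 5 both usually do by default, which would point at the sending server instead):

  sysctl net.ipv4.tcp_window_scaling   # expect 1
  sysctl net.ipv4.tcp_rmem             # third value should exceed ~330 KB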
2007 Oct 11
0
How to set MTU
Hello, I made some tests with HTB, and I usually set MTU to the default value of TCP_WMEM (which is 65536 for me). I have this case: 50 Mbits/s 10 Mbits/s A----------------------------------B---------------------------------------C if A sends to B, bandwidth is about 50 Mbits/s; if B sends to C, bandwidth is about 10 Mbits/s. But if A sends t...
2007 Mar 19
3
net.ipv4 TCP/IP Optimizations = sysctl.conf?
...alive_intvl=10 /sbin/sysctl -w net.ipv4.tcp_keepalive_probes=7 /sbin/sysctl -w net.ipv4.tcp_keepalive_time=1800 /sbin/sysctl -w net.ipv4.tcp_max_tw_buckets=360000 /sbin/sysctl -w net.ipv4.tcp_synack_retries=3 /sbin/sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216" /sbin/sysctl -w net.ipv4.tcp_wmem="4096 87380 16777216" /sbin/sysctl -w net.ipv4.tcp_mem="8388608 8388608 8388608" --------------snip--------------- p.s. these are meant for a specific technology we use, so not sure if everyone reading would be best served by using them, mileage may vary! : / -karlski
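These one-shot commands succeed because the shell strips the quotes before sysctl sees the values; pasted into /etc/sysctl.conf the quotes survive and fail with error 22, as the "sysctl errors" thread above found. The persisted form, for reference:

  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 87380 16777216
  net.ipv4.tcp_mem = 8388608 8388608 8388608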
2007 Oct 11
2
udp question
Hi all, I use Linux as a GigE router and have 6 NICs on it. These days the NIC interrupts take around 100% CPU, even though the system has 4G memory and 8 CPUs. I can't see any error packets on the NIC interfaces either. After I block the UDP, the %CPU drops, but the UDP traffic only amounts to around 8M in general. We use UDP traffic for voice. Do you have any suggestions? Increase a kernel parameter? Thank you so much
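Not from the thread, but the standard first checks for an interrupt-bound forwarder of that era, sketched with an illustrative IRQ number:

  cat /proc/interrupts                         # which CPU services each NIC IRQ?
  echo 4 > /proc/irq/24/smp_affinity           # hex CPU mask; pins (hypothetical) IRQ 24 to CPU2
  sysctl -w net.core.netdev_max_backlog=2500   # deeper input queue for packet bursts
  sysctl -w net.core.rmem_max=8388608          # headroom for bursty UDP receivers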
2013 Sep 05
0
Windows guest network keeps going down automatically when several Windows guests are running on one KVM host
...6736 # Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 ########### add by operation V1.0 begin ############ net.ipv4.ip_local_port_range = 32768 65000 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 net.ipv4.tcp_rmem = 4096 87380 8388608 net.ipv4.tcp_wmem = 4096 65536 8388608 net.ipv4.tcp_max_syn_backlog = 8192 net.ipv4.tcp_window_scaling = 0 net.ipv4.tcp_sack = 0 net.ipv4.tcp_timestamps = 0 kernel.panic = 5 vm.swappiness = 51 ########### add by operation V1.0 end ############ net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptab...
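One entry in that block deserves a flag independent of the guest problem: net.ipv4.tcp_window_scaling = 0 caps every TCP window at 64 KB, which throttles the host's own traffic on fast or long paths. Unless a broken middlebox forces it off:

  sysctl -w net.ipv4.tcp_window_scaling=1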
2004 Dec 31
1
SMBFS mounts slow across gigabit connection
.../proc/sys/net/core/rmem_max echo 262144 > /proc/sys/net/core/wmem_max echo 163840 > /proc/sys/net/core/rmem_default echo 163840 > /proc/sys/net/core/wmem_default echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_rmem echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_wmem echo "49152 163840 262144" > /proc/sys/net/ipv4/tcp_mem These, however, have only helped each of the transfer types performance-wise (FTP especially, smbfs wasn't really affected at all). Does anybody have any idea why I'm seeing such a huge difference between the smbfs and s...
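For reference, those /proc echoes are one-shot and lost at reboot; the persistent equivalent in /etc/sysctl.conf would be:

  net.core.rmem_max = 262144
  net.core.wmem_max = 262144
  net.core.rmem_default = 163840
  net.core.wmem_default = 163840
  net.ipv4.tcp_rmem = 4096 163840 262144
  net.ipv4.tcp_wmem = 4096 163840 262144
  net.ipv4.tcp_mem = 49152 163840 262144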
2004 Jun 10
6
Shaping incoming traffic on the other interface
Hi, I have a typical configuration for my firewall/gateway box: single network card, with a pppoe connection to the DSL modem. I'm already successfully shaping the uplink (how come wondershaper.htb doesn't use the ceil parameter? It should implement bandwidth borrowing!) but I found the ingress policy a little bit rough. I'd like to keep the traffic categories