Displaying 20 results from an estimated 31 matches for "tcp_rmem".
2009 Mar 11
3
Intermittent NFS problems with NetApp server
...88.
This is less than the recommended value of 32768 bytes.
You should increase the TCP receive buffer size for NFS on the
client.
Some googling around got me to check these values for TCP:
# sysctl net.ipv4.tcp_mem
net.ipv4.tcp_mem = 98304 131072 196608
# sysctl net.ipv4.tcp_rmem
net.ipv4.tcp_rmem = 4096 87380 4194304
# sysctl net.ipv4.tcp_wmem
net.ipv4.tcp_wmem = 4096 16384 4194304
So these seem fine to me (i.e., the max is greater than 32768). Is
there an NFS (as opposed to TCP) setting I should be tweaking? Any
ideas why the NetApp is is...
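One hedged possibility: the NetApp message may be checking the socket buffer actually negotiated for the mount rather than the sysctl ceilings, so it is worth confirming what the mount is really using. A sketch (assumes nfs-utils is installed on the client):
# Show the rsize/wsize and transport options actually in effect:
nfsstat -m
# Or read them straight from the mount table:
grep nfs /proc/mounts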
2007 Mar 19
1
sysctl errors
...e following settings in /etc/sysctl.conf file:
# Custom Settings:
net.ipv4.tcp_max_syn_backlog=2048
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_keepalive_intvl=10
net.ipv4.tcp_keepalive_probes=7
net.ipv4.tcp_keepalive_time=1800
net.ipv4.tcp_max_tw_buckets=360000
net.ipv4.tcp_synack_retries=3
net.ipv4.tcp_rmem="4096 87380 16777216"
net.ipv4.tcp_wmem="4096 87380 16777216"
net.ipv4.tcp_mem="8388608 8388608 8388608"
---------errors-----------
# sysctl -p [errors]
error: unknown error 22 setting key 'net.ipv4.tcp_rmem'
error: unknown error 22 setting key 'net.ipv4...
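Error 22 is EINVAL, and the quotes are the likely culprit: /etc/sysctl.conf takes multi-value keys as plain space-separated lists, with no quoting. A minimal corrected fragment:
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 87380 16777216
net.ipv4.tcp_mem = 8388608 8388608 8388608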
2007 Dec 28
7
Xen and networking.
...e 25MB/sec from each.
It starts off with a large burst, as if each could do 100MB/sec, but then
in a couple of seconds tapers off to 15-40MB/sec until the dd finishes.
Things I have tried (set on both the host and the guests):
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
sysctl -w net.ipv4.tcp_congestion_control=cubic
Any ideas?
--
--tmac
Re...
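Since the test pipeline is dd over the network, one way to separate a network ceiling from a disk ceiling is a memory-to-memory run. A sketch, assuming iperf is available on host and guests and "guest1" stands in for a real guest name:
iperf -s                      # on the receiving side
iperf -c guest1 -t 60 -i 5    # on the sender; watch whether the rate decays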
2010 Mar 16
2
What kernel params to use with KVM hosts??
...- On /etc/grub.conf:
kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet
- On sysctl.conf
# Special network params
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 1048576 4194304 16777216
net.ipv4.tcp_wmem = 1048576 4194304 16777216
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Virtual machines special kernel params
vm.swappine...
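One caveat with the net.bridge.* keys: they only exist while the bridge module is loaded, so sysctl -p can fail with "unknown key" early at boot. A common workaround (sketch):
modprobe bridge    # make the net.bridge.* keys available
sysctl -p          # then re-apply /etc/sysctl.conf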
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year),
I'm experiencing some trouble using Samba (4.1.17, Debian version) over
VPN. Basically we have the following setup:
PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file server
Copying big (say > 1MiB) files from the PC to the Samba file server almost
always ends up with an NT_STATUS_IO_TIMEOUT error (or "a network error
occurred" if trying to copy from
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controler Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to activate bdflush
# vm.bdflush="...
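The commented-out vm.max-readahead/vm.min-readahead keys were 2.4-era sysctls; on a 2.6 kernel like this one, readahead is set per block device instead. A sketch, with /dev/sda standing in for the real device:
blockdev --setra 1024 /dev/sda   # readahead in 512-byte sectors
blockdev --getra /dev/sda        # verify the new value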
2006 Sep 20
5
Transfer rates faster than 23MBps?
We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4
Content Storage servers to Windows clients with 6 DVD burners and
robotic arms and other cool gadgets. The servers used to be Windows
based, but we're migrating to RedHat for a host of reasons.
Unfortunately, the RedHat Samba servers are about 2.5 times slower than
the Windows servers. Windows will copy a 1GB file
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
...144
# Controls IP maximum receive buffer size (bytes)
net.core.rmem_max = 262144
# Controls IP maximum send buffer size (bytes)
net.core.wmem_max = 262144
# Controls TCP memory utilization (pages)
net.ipv4.tcp_mem = 49152 65536 98304
# Controls TCP sliding receive window buffer (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
# Controls TCP sliding send window buffer (bytes)
net.ipv4.tcp_wmem = 4096 65536 16777216
Ross S. W. Walker
Information Systems Manager
Medallion Financial, Corp.
437 Madison Avenue
38th Floor
New York, NY 10022
Tel: (212) 328-2165
Fax: (212) 328-2125
WWW: http://www.medall...
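One subtlety in the listing above: the tcp_rmem/tcp_wmem maxima (16 MiB) are not capped by net.core.rmem_max/wmem_max (256 KiB here). TCP autotuning is limited only by the tcp_* maxima, while the core limits cap explicit setsockopt(SO_RCVBUF/SO_SNDBUF) calls, so the two pairs are usually raised together (sketch):
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216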
2012 Apr 17
1
Help needed with NFS issue
...und_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_no_metrics_save = 1
The {r,w}mem_{max,default} values are twice what they were previously;
changing these had no effect.
The number of dirty pages is nowhere near the dirty_ratio when t...
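A quick way to confirm the dirty-page claim while the transfer runs, using standard procfs counters:
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'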
2005 Mar 20
4
I/O descriptor ring size bottleneck?
...m doing some networking experiments over high BDP topologies. Right
now the configuration is quite simple -- two Xen boxes connected via a
dummynet router. The dummynet router is set to limit bandwidth to
500Mbps and simulate an RTT of 80ms.
I'm using the following sysctl values:
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_bic = 0
(TCP Westwood and Vegas are also turned off for now)
Now if I run 50 netperf flows lasting 80 seconds (1000 RTTs) from
inside a VM on one box...
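The arithmetic is worth checking here: at 500 Mbit/s and 80 ms, the bandwidth-delay product is 500,000,000 / 8 × 0.08 = 5,000,000 bytes, about 4.8 MiB, which already exceeds the 4 MiB tcp_rmem/tcp_wmem ceilings above, so a single flow could be window-limited before any descriptor-ring effect shows up:
# BDP for 500 Mbit/s x 80 ms, in bytes:
echo $(( 500000000 / 8 * 80 / 1000 ))   # 5000000, vs. a 4194304 ceiling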
2019 Dec 12
4
Controlling SO_RCVBUF
I have a customer who is complaining about slow SFTP transfers over a long-haul connection. The current transfer rate is limited by the TCP window size and the RTT. I looked at HPN-SSH, but that won't work because we don't control what software the peer is using. I was thinking about coding a much more modest enhancement that just sets SO_RCVBUF for specific subsystems. In the interest
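One constraint on that approach: setsockopt(SO_RCVBUF) is silently capped at net.core.rmem_max (and the kernel doubles the requested value for bookkeeping), so the receiving host's limit has to be raised too before large windows are possible (sketch):
sysctl -w net.core.rmem_max=16777216   # allow SO_RCVBUF requests up to 16 MiB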
2016 Jan 07
0
Samba over slow connections
..._____________________
ifcfg-eth0:
ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
______________________________________________
sysctl.conf:
net.core.rmem_max = 65536
net.core.wmem_max = 65536
net.core.rmem_default = 32768
net.core.wmem_default = 32768
net.ipv4.tcp_rmem = 4096 32768 65536
net.ipv4.tcp_wmem = 4096 32768 65536
net.ipv4.tcp_mem = 4096 32768 65536
net.ipv4.udp_mem = 4096 32768 65536
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_dsack = 1
______________________________________________
smb.conf:
socket options = TCP_NODELAY IPTOS...
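One thing to double-check in this fragment: tcp_mem and udp_mem are measured in pages (typically 4 KiB), not bytes, so "4096 32768 65536" here means roughly 16 MiB / 128 MiB / 256 MiB of total TCP memory, which may not be what was intended. The kernel autotunes sensible defaults at boot, so comparing first is cheap:
sysctl net.ipv4.tcp_mem net.ipv4.udp_mem   # boot-time autotuned values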
2016 Jan 07
1
Samba over slow connections
..._OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
> ______________________________________________
>
> sysctl.conf:
>
> net.core.rmem_max = 65536
> net.core.wmem_max = 65536
> net.core.rmem_default = 32768
> net.core.wmem_default = 32768
> net.ipv4.tcp_rmem = 4096 32768 65536
> net.ipv4.tcp_wmem = 4096 32768 65536
> net.ipv4.tcp_mem = 4096 32768 65536
> net.ipv4.udp_mem = 4096 32768 65536
> net.ipv4.tcp_moderate_rcvbuf = 1
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_dsack = 1
> ______________________________________________
>
> s...
2007 Nov 04
1
Bandwidth optimisation
OS: CentOS 5.0 x86.
Hi, I am using CentOS 5.0 at home with an ADSL ~16 Mbps/~1 Mbps Internet
connection, and my ping time to my ISP is 160-170 msec.
When downloading something with Firefox, I am getting download speeds of
about 100-180 KB/sec (for example, when downloading SP2 for XP from the MS
servers).
Are the CentOS networking settings OK for this kind of latency, or do I
have to change some settings?
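A rough sanity check on those numbers: at ~165 ms RTT, 180 KB/s implies an effective window of only about 180 × 0.165 ≈ 30 KB, so a small receive window is at least consistent with the symptom (assuming the bottleneck is not the remote server):
# Effective window in KB for 180 KB/s at 165 ms RTT:
echo $(( 180 * 165 / 1000 ))   # ~29 KB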
2007 Mar 19
3
net.ipv4 TCP/IP Optimizations = sysctl.conf?
...l -w net.ipv4.tcp_fin_timeout=30
/sbin/sysctl -w net.ipv4.tcp_keepalive_intvl=10
/sbin/sysctl -w net.ipv4.tcp_keepalive_probes=7
/sbin/sysctl -w net.ipv4.tcp_keepalive_time=1800
/sbin/sysctl -w net.ipv4.tcp_max_tw_buckets=360000
/sbin/sysctl -w net.ipv4.tcp_synack_retries=3
/sbin/sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
/sbin/sysctl -w net.ipv4.tcp_wmem="4096 87380 16777216"
/sbin/sysctl -w net.ipv4.tcp_mem="8388608 8388608 8388608"
--------------snip---------------
P.S. These are meant for a specific technology we use, so I'm not sure if
everyone reading would be...
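To answer the subject line: yes, each /sbin/sysctl -w command maps to one sysctl.conf line, but the quotes must be dropped there; quoted values are what trigger the "unknown error 22" seen in the thread above. For example:
net.ipv4.tcp_rmem = 4096 87380 16777216
sysctl -p   # re-read /etc/sysctl.conf without rebooting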
2007 Oct 11
2
udp question
Hi all,
I use Linux as a GigE router and have 6 NICs in it.
These days the NIC interrupts take around 100% CPU, even though the system has 4GB of memory and 8 CPUs. I can't see any error packets on the NIC interfaces either.
After I block the UDP traffic, the %CPU drops, but UDP only accounts for around 8M in general.
We use UDP traffic for voice.
Do you have any suggestions? Should I increase a kernel parameter?
Thank you so much
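A first diagnostic step for the interrupt load is to see whether a single CPU is taking all of the NIC interrupts, then spread them. A sketch, where the IRQ number 24 and the CPU mask are placeholders to look up on the real box:
grep eth /proc/interrupts            # per-CPU interrupt counts per NIC
echo f > /proc/irq/24/smp_affinity   # hypothetical IRQ; hex mask f = CPUs 0-3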
2013 Sep 05
0
windows guest network kept down automatically when several windows guest running in one KVM host,
...size, in bytes
kernel.shmmax = 68719476736
# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 4294967296
########### add by operation V1.0 begin ############
net.ipv4.ip_local_port_range = 32768 65000
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_window_scaling = 0
net.ipv4.tcp_sack = 0
net.ipv4.tcp_timestamps = 0
kernel.panic = 5
vm.swappiness = 51
########### add by operation V1.0 end ############
net.bridge.bridge-nf-call-ip6tab...
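One side effect worth flagging in this config: tcp_window_scaling = 0 caps every TCP window at 64 KiB, so the 8 MiB tcp_rmem/tcp_wmem maxima above can never actually be reached. Unless scaling was disabled to work around broken middleboxes, re-enabling it is the usual advice:
sysctl -w net.ipv4.tcp_window_scaling=1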
2004 Dec 31
1
SMBFS mounts slow across gigabit connection
...ew adjustments to the TCP settings on each system:
echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max
echo 163840 > /proc/sys/net/core/rmem_default
echo 163840 > /proc/sys/net/core/wmem_default
echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_wmem
echo "49152 163840 262144" > /proc/sys/net/ipv4/tcp_mem
These, however, have helped only some of the transfer types
performance-wise (FTP especially; smbfs wasn't really affected at all).
Does anybody have any i...
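These echo writes reset at reboot; the persistent form goes in /etc/sysctl.conf, with the /proc/sys/ prefix dropped and the slashes turned into dots, e.g.:
net.core.rmem_max = 262144
net.ipv4.tcp_rmem = 4096 163840 262144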
2018 Jul 11
4
UDP for data?
Hi,
I'm very interested in making SSH use UDP for large data chunks. Maybe
you know FASP
(https://en.wikipedia.org/wiki/Fast_and_Secure_Protocol), but that is
proprietary, although the website says it's based upon open source
methods.
Is it possible to make openssh work with UDP for this purpose?
Thanks in advance,
Stef Bon
2005 May 13
4
Gigabit Throughput too low
...if you had improved your situation, and if you did, what you did.
BTW, here are some network-stack tweaks for GigE.
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.core.rmem_default=65536
sysctl -w net.core.wmem_default=65536
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
sysctl -w net.ipv4.route.flush=1
Brian M. Duncan
Katten Muchin Rosenman LLP
525 West Monroe Street
Chicago IL 60661-3693
312-577-8045
brian.duncan@k...
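The route.flush=1 at the end is there because, on kernels of this era, per-route TCP metrics (cached cwnd/ssthresh from earlier connections) live in the routing cache; flushing drops stale entries so new connections start from the tuned values. Preventing the metrics from being re-cached is a common companion setting (sketch):
sysctl -w net.ipv4.tcp_no_metrics_save=1   # don't cache per-route TCP metrics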