search for: rmem_max

Displaying 20 results from an estimated 44 matches for "rmem_max".

2011 Mar 11
1
UDP Performance tuning
...1902 packets to unknown port received. 109709802 packet receive errors 7239 packets sent We had checked all the kernel configurations and set them as recommended. These are the settings and tests we have done. Server: net.core.rmem_default = 2097152 net.core.wmem_default = 2097152 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 strace -fttt -o /tmp/server.trace iperf -s -u -p 2222 -w 6m -i 5 -l 1k Client: net.core.rmem_default = 2097152 net.core.wmem_default = 2097152 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 strace -fttt -o /tmp/client.trace iperf -p 2222 -u -w 6m -i 5...
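The interplay between the iperf `-w 6m` request and the `rmem_max = 8388608` setting above can be observed directly from any program: on Linux, a `setsockopt(SO_RCVBUF, ...)` request is doubled by the kernel (to account for bookkeeping overhead) and capped at `net.core.rmem_max`. A minimal sketch, assuming a Linux host:

```python
import socket

# Request a 4 MiB receive buffer on a UDP socket. On Linux the kernel
# doubles the requested value and caps it at net.core.rmem_max, so
# reading the option back shows the size actually granted.
REQUESTED = 4 * 1024 * 1024

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()

print(f"requested {REQUESTED} bytes, kernel granted {effective} bytes")
```

If the granted size comes back smaller than twice the request, the ceiling was hit and `rmem_max` needs raising before a larger `-w` can take effect.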
2007 Dec 28
7
Xen and networking.
...ec When I run two simultaneously, I cannot seem to get above 25MB/sec from each. It starts off with a large burst like each can do 100MB/sec, but then in a couple of seconds tapers off to 15-40MB/sec until the dd finishes. Things I have tried (installed on the host and the guests) net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_no_metrics_save = 1 net.ipv4.tcp_moderate_rcvbuf = 1 # recommended to increase this for 1000 BT or higher net.core.netdev_max_backlog = 2500 sysctl -w net.ip...
2010 Dec 10
1
UDP buffer overflows?
...rors 17101174 packets sent RcvbufErrors: 44017 <--- this When this number increases, we see SIP errors; in particular, Qualify packets are lost and handsets are temporarily disabled, causing all sorts of minor chaos. I have already tuned from the defaults of: net.core.rmem_max = 131071 net.core.wmem_max = 131071 net.core.rmem_default = 111616 net.core.wmem_default = 111616 net.core.optmem_max = 10240 net.core.netdev_max_backlog = 1000 up to: net.core.rmem_max = 1048575 net.core.wmem_max = 1048575 net.core.rmem_default = 1048575 net.core.wmem_default = 1048575...
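The `RcvbufErrors` counter the poster is watching comes from the kernel's UDP statistics, exposed as a header row/value row pair in `/proc/net/snmp`. A small sketch for monitoring it programmatically, assuming a Linux system whose kernel exports that field:

```python
# The "Udp:" lines in /proc/net/snmp pair a row of field names with a
# row of values, including RcvbufErrors: datagrams dropped because the
# socket's receive buffer was already full.
def udp_counters(path="/proc/net/snmp"):
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Udp:")]
    names, values = rows[0][1:], rows[1][1:]
    return dict(zip(names, map(int, values)))

counters = udp_counters()
print("RcvbufErrors:", counters.get("RcvbufErrors"))
```

Polling this in a loop (or graphing it) makes it easy to correlate buffer drops with the SIP errors described above.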
2010 Mar 16
2
What kernel params to use with KVM hosts??
...mum performance on my centos kvm hosts I have use these params: - On /etc/grub.conf: kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet - On sysctl.conf # Special network params net.core.rmem_default = 8388608 net.core.wmem_default = 8388608 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 1048576 4194304 16777216 net.ipv4.tcp_wmem = 1048576 4194304 16777216 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables...
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year), I'm experiencing some troubles using Samba (4.1.17 debian version) over VPN. Basically we have the following setup : PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file Server Copying big (say > 1MiB) files from PC to Samba file server almost always ends up with an NT_STATUS_IO_TIMEOUT error (or "a network error occurred" if trying to copy from
2020 Jan 13
0
UDPbuffer adjustment
...the operating system. UDPSndBuf = bytes (OS default) Sets the socket send buffer size for the UDP socket, in bytes. If unset, the default buffer size will be used by the operating system. OS is CentOS. I would just like to know: if I increase /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max to 10MB with the commands below (in order to increase the UDP buffer size): sysctl -w net.core.rmem_max=10485760 ; sysctl -w net.core.wmem_max=10485760 but keep /proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default unchanged, which is 200KB, and I don’t have...
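The distinction the poster is asking about matters: raising `rmem_max` only lifts the ceiling available to applications that explicitly call `setsockopt(SO_RCVBUF, ...)`; a socket that never asks still starts at `rmem_default`. A minimal sketch demonstrating this, assuming a Linux host:

```python
import socket

def sysctl(name):
    # sysctl values are exposed under /proc/sys with dots as slashes.
    with open("/proc/sys/" + name.replace(".", "/")) as f:
        return int(f.read())

rmem_default = sysctl("net.core.rmem_default")
rmem_max = sysctl("net.core.rmem_max")

# A freshly created socket that makes no setsockopt call gets
# rmem_default, no matter how high rmem_max has been raised.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
initial = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
sock.close()

print(f"rmem_default={rmem_default} rmem_max={rmem_max} fresh socket={initial}")
```

So for an application whose buffer size cannot be configured, raising only `rmem_max` changes nothing; `rmem_default` (or the application's own `setsockopt` call) is what determines the buffer it actually runs with.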
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values ------------------------------------------- net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_window_scaling = 1 # vm.max-readahead = ? # vm.min-readahead = ? # HW Controler Off # max-readahead = 1024 # min-readahead = 256 # Memory over-commit # vm.overcom...
2006 Sep 20
5
Transfer rates faster than 23MBps?
We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4 Content Storage servers to Windows clients with 6 DVD burners and robotic arms and other cool gadgets. The servers used to be Windows based, but we're migrating to RedHat for a host of reasons. Unfortunately, the RedHat Samba servers are about 2.5 times slower than the Windows servers. Windows will copy a 1GB file
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
...my needs (strictly block io). Here are the TCP/IP tunables from my sysctl.conf: # Controls default receive buffer size (bytes) net.core.rmem_default = 262144 # Controls IP default send buffer size (bytes) net.core.wmem_default = 262144 # Controls IP maximum receive buffer size (bytes) net.core.rmem_max = 262144 # Controls IP maximum send buffer size (bytes) net.core.wmem_max = 262144 # Controls TCP memory utilization (pages) net.ipv4.tcp_mem = 49152 65536 98304 # Controls TCP sliding receive window buffer (bytes) net.ipv4.tcp_rmem = 4096 87380 16777216 # Controls TCP sliding send window buffe...
2016 Sep 06
2
No increased throughput with SMB Multichannel and two NICs
On Tue, Sep 06, 2016 at 11:53:04PM +0200, Daniel Vogelbacher via samba wrote: > > Delete all the crap above first :-). > > > > Then start trying to copy locally to the tmpfs share to see what > > the max local copy speed is. > > > > Now I have: > > server multi channel support = yes > vfs objects = aio_pthread,recycle > aio read size = 1 > aio
2008 Feb 06
3
nic poor performance after upgrade to xen 3.2
Hi, I'm doing some tests on a network with 10 Gb NICs and Xen. With version 3.1 I'm measuring 2.5 Gb/sec from domU to an external physical machine with iperf. Switching to 3.2 has reduced the measured performance to 40-50 Mb/sec. Did anything change in the network interface? Can someone help me? Thanks
2012 Apr 17
1
Help needed with NFS issue
...lance-alb mode. Turning off one interface in the bond made no difference. Relevant /etc/sysctl.conf parameters: vm.dirty_ratio = 50 vm.dirty_background_ratio = 1 vm.dirty_expire_centisecs = 1000 vm.dirty_writeback_centisecs = 100 vm.min_free_kbytes = 65536 net.core.rmem_default = 262144 net.core.rmem_max = 262144 net.core.wmem_default = 262144 net.core.wmem_max = 262144 net.core.netdev_max_backlog = 25000 net.ipv4.tcp_reordering = 127 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_max_syn_backlog = 8192 net.ipv4.tcp_no_metrics_save = 1 The {r,w}mem_{ma...
2017 May 17
2
Improving packets/sec and data rate - v1.0.24
Hi, We've been running tinc for a while now but, have started hitting a bottleneck where the number of packets/sec able to be processed by our Tinc nodes is maxing out around 4,000 packets/sec. Right now, we are using the default cipher and digest settings (so, blowfish and sha1). I've been testing using aes-256-cbc for the cipher and seeing ~5% increases across the board. Each Tinc node
2008 Jan 06
4
Increasing throughput on xen bridges
Hi all, I have a RHEL 5.1 Xen server with two RHEL 3 ES HVM guests installed. Both RHEL 3 guests use an internal Xen bridge (xenbr1) which isn't bound to any physical host interface. Throughput on this bridge is very poor, only 2.5 Mb/s. How can I increase this throughput??? Many thanks. -- CL Martinez carlopmart {at} gmail {d0t} com
2005 Oct 01
7
Updated presentation of Asterisk 1.2
Friends, I have updated my Asterisk 1.2 presentation with the latest information. It is still available in the same place as before: http://www.astricon.net/asterisk1-2/ Please continue to test the beta of Asterisk 1.2, available at ftp.digium.com. We need all the feedback we can get. If you are a developer and have some time for community work, please check in with the bug tracker and help us
2016 Jan 07
0
Samba over slow connections
...not waiting for WAN link to > consume them) /usr/sbin/ifconfig eth0 txqueuelen 100 ______________________________________________ ifcfg-eth0: ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128" ______________________________________________ sysctl.conf: net.core.rmem_max = 65536 net.core.wmem_max = 65536 net.core.rmem_default = 32768 net.core.wmem_default = 32768 net.ipv4.tcp_rmem = 4096 32768 65536 net.ipv4.tcp_wmem = 4096 32768 65536 net.ipv4.tcp_mem = 4096 32768 65536 net.ipv4.udp_mem = 4096 32768 65536 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_sack = 1 net....
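The `tcp_rmem` line in the excerpt above takes three values with distinct meanings: the minimum each TCP socket is guaranteed, the initial size for new connections, and the ceiling that receive-buffer autotuning (`tcp_moderate_rcvbuf = 1`) may grow a connection to. Reading the triple back, assuming a Linux host:

```python
# net.ipv4.tcp_rmem holds three space-separated values:
#   min      - floor guaranteed to every TCP socket under memory pressure
#   default  - initial receive buffer for new TCP connections
#   max      - ceiling autotuning may grow a connection's buffer to
with open("/proc/sys/net/ipv4/tcp_rmem") as f:
    tcp_min, tcp_default, tcp_max = map(int, f.read().split())

print(f"min={tcp_min} default={tcp_default} max={tcp_max}")
```

Note that for TCP the autotuning ceiling is this third `tcp_rmem` value, not `net.core.rmem_max`; the latter only caps explicit `setsockopt(SO_RCVBUF)` requests.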
2016 Jan 07
1
Samba over slow connections
...; /usr/sbin/ifconfig eth0 txqueuelen 100 > ______________________________________________ > > ifcfg-eth0: > > ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128" > ______________________________________________ > > sysctl.conf: > > net.core.rmem_max = 65536 > net.core.wmem_max = 65536 > net.core.rmem_default = 32768 > net.core.wmem_default = 32768 > net.ipv4.tcp_rmem = 4096 32768 65536 > net.ipv4.tcp_wmem = 4096 32768 65536 > net.ipv4.tcp_mem = 4096 32768 65536 > net.ipv4.udp_mem = 4096 32768 65536 > net.ipv4.tcp_modera...
2009 Apr 26
0
FW: issue with sip 180 responses
...seems like an O/S issue, because at the Asterisk level I can see everything going correctly via logs (invite is accepted => packet is generated => and 180 is sent immediately to the initiator?). Which tools can help me check kernel issues? Also, I tried to increase the UDP buffer (sysctl -w net.core.rmem_max=8388608), but the problem still persists. Also, here is a screenshot of a typical dump from the network interface; you can clearly see what's going on. http://img7.imageshack.us/img7/6578/sip.png Thanks in advance, Nir. *C. Savinovich*** Did you isolate the issue? , checked fi...
2007 Nov 04
1
Bandwidth optimisation
OS: CentOS 5.0 x86. Hi, I am using CentOS 5.0 at home, ADSL ~16 Mbps/~1 Mbps Internet connection and my ping time to my ISP is 160-170 msec. When downloading something with Firefox, I am getting download speeds of about 100-180 KB/sec (for example when downloading SP2 of XP from MS server). Are the CentOS networking settings OK for this kind of latency, or do I have to change some settings?
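Whether the default settings suit a high-latency link comes down to the bandwidth-delay product: the amount of data that must be in flight to keep the pipe full, which the TCP receive window (and thus the rmem settings) must accommodate. A quick calculation with the poster's numbers (16 Mbit/s nominal downlink, 165 ms as the midpoint of the reported 160-170 ms ping):

```python
# Bandwidth-delay product for the link described above. If the receive
# window is smaller than this, throughput is window-limited regardless
# of the link's nominal speed.
bandwidth_bps = 16_000_000   # 16 Mbit/s downlink (nominal)
rtt_s = 0.165                # midpoint of the reported 160-170 ms

bdp_bytes = bandwidth_bps * rtt_s / 8
print(f"BDP = {bdp_bytes:.0f} bytes (~{bdp_bytes / 1024:.0f} KiB)")
```

A window of only ~100-180 KB would cap throughput at roughly the 100-180 KB/sec range the poster observes at this RTT, which is consistent with a too-small receive window rather than a link problem.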
2010 May 03
0
TCP Tuning/Apache question (possibly OT)
Hello All: I've been requested to add some TCP tuning parameters to some CentOS 5.4 systems. These tunings are for the TCP receive buffer windows: net.core.rmem_max net.core.wmem_max Information on this tuning is broadly available: http://fasterdata.es.net/TCP-tuning/linux.html http://www.speedguide.net/read_articles.php?id=121 Potential downsides are available: http://www.29west.com/docs/THPM/udp-buffer-sizing.html From the above, the rmem size is a fu...
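The sizing rule the linked pages describe is straightforward arithmetic: the buffer ceiling should be at least the bandwidth-delay product of the path a single connection is expected to fill. A hedged sketch of that rule (the function name and 2x headroom factor are illustrative, not from any of the cited pages):

```python
# Illustrative helper: size the buffer ceiling from the path's
# bandwidth-delay product, with a safety factor for RTT variation.
def suggested_rmem_max(bandwidth_bps, rtt_s, headroom=2.0):
    """Return a buffer ceiling in bytes: BDP times a headroom factor."""
    bdp = bandwidth_bps * rtt_s / 8
    return int(bdp * headroom)

# Example: fill a 1 Gbit/s path with 80 ms RTT.
size = suggested_rmem_max(1_000_000_000, 0.080)
print(f"net.core.rmem_max = {size}")
print(f"net.core.wmem_max = {size}")
```

The downside the 29west link warns about follows from the same arithmetic: buffers sized for a long fat path waste memory (and can add latency) on every socket that explicitly requests the maximum.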