search for: rmem_default

Displaying 20 results from an estimated 31 matches for "rmem_default".

2011 Mar 11
1
UDP Performance tuning
...f dropped UDP packets. Udp: 551522838 packets received 1902 packets to unknown port received. 109709802 packet receive errors 7239 packets sent We had checked all the kernel configurations and set them as recommended. These are the settings and tests we have done Server: net.core.rmem_default = 2097152 net.core.wmem_default = 2097152 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 strace -fttt -o /tmp/server.trace iperf -s -u -p 2222 -w 6m -i 5 -l 1k Client: net.core.rmem_default = 2097152 net.core.wmem_default = 2097152 net.core.rmem_max = 8388608 net.core.wmem_max = 8388608 s...
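
A quick way to check whether the receiver is still dropping datagrams after tuning like this is to watch the kernel's UDP counters while iperf runs; a minimal sketch (the counter names match netstat's output above):

    # Snapshot UDP statistics before and after the test run; a growing
    # "packet receive errors" count means the socket receive buffer is
    # still overflowing
    netstat -su | grep -A 4 '^Udp:'
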
2010 Dec 10
1
UDP buffer overflows?
...<--- this When this number increases, we see SIP errors, and in particular Qualify packets are lost, which temporarily disables handsets, causing all sorts of minor chaos. I have already tuned from the defaults of: net.core.rmem_max = 131071 net.core.wmem_max = 131071 net.core.rmem_default = 111616 net.core.wmem_default = 111616 net.core.optmem_max = 10240 net.core.netdev_max_backlog = 1000 up to: net.core.rmem_max = 1048575 net.core.wmem_max = 1048575 net.core.rmem_default = 1048575 net.core.wmem_default = 1048575 net.core.optmem_max = 1048575 net.core.netdev_max_backlog...
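
When that overflow counter climbs, it can help to see which socket is actually filling up; a hedged sketch, assuming a recent enough iproute2 and SIP on the standard port 5060:

    # Per-socket memory details; the skmem field shows the receive
    # buffer limit and how full the receive queue currently is
    ss -u -a -m 'sport = :5060'

    # System-wide UDP RcvbufErrors counter
    grep '^Udp:' /proc/net/snmp
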
2010 Mar 16
2
What kernel params to use with KVM hosts??
Hi all, in order to reach maximum performance on my CentOS KVM hosts I have used these params: - On /etc/grub.conf: kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet - On sysctl.conf # Special network params net.core.rmem_default = 8388608 net.core.wmem_default = 8388608 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 1048576 4194304 16777216 net.ipv4.tcp_wmem = 1048576 4194304 16777216 # Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge....
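
A hedged footnote on configs like this: /etc/sysctl.conf is only read at boot, and the bridge-nf keys exist only once the bridge module is loaded, so it is worth re-applying and spot-checking after changes:

    # Re-apply everything from /etc/sysctl.conf
    sysctl -p

    # Spot-check what is actually in effect
    sysctl net.core.rmem_max net.core.wmem_max
    cat /proc/sys/net/bridge/bridge-nf-call-ip6tables    # expect 0
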
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year), I'm experiencing some trouble using Samba (4.1.17 Debian version) over VPN. Basically we have the following setup: PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file server. Copying big (say > 1MiB) files from the PC to the Samba file server almost always ends up with an NT_STATUS_IO_TIMEOUT error (or "a network error occurred" if trying to copy from
2020 Jan 13
0
UDPbuffer adjustment
...ystem. OS: CentOS. Just would like to know: if I increase the /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max to 10MB with the commands below (in order to increase the UDP buffer size): sysctl -w net.core.rmem_max=10485760 ; sysctl -w net.core.wmem_max=10485760 but keep /proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default unchanged, which is 200KB, and I don’t have any UDPRcvBuf or UDPSndBuf config in tinc.conf, what’s the UDP buffer size for tinc? Should I change /proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default as well? Or prefer to set that by UDPRcvB...
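
The usual semantics here (worth verifying on your kernel): rmem_default is what a UDP socket gets when the application never calls setsockopt(SO_RCVBUF), while rmem_max merely caps what an application may request. So raising rmem_max alone changes nothing for tinc unless it asks for a larger buffer, e.g. via UDPRcvBuf:

    # The buffer a socket gets by default vs. the most an app may ask for
    sysctl net.core.rmem_default net.core.rmem_max

    # With no UDPRcvBuf in tinc.conf, tinc's UDP socket stays at
    # rmem_default; an explicit UDPRcvBuf request is capped at rmem_max
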
2010 Jun 21
3
Increasing NFS Performance
...al'. Here are the items I've found so far that are said to increase performance. Since this is a production system, I have yet to try these: 1. Increase the number of instances of NFS running. (As found in /etc/sysconfig/nfs) 2. Try sync vs async behavior in mount parameters on clients 3. rmem_default and wmem_default parameters 4. rsize and wsize parameters (Dependent on MTU. Currently, mine is default at 1500) These are the items I'm planning to try but before I dive in (especially during a late night maintenance period...), I was hoping the list members brighter than me could give some...
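
For item 4, rsize/wsize are per-mount options, so they can be trialled on one client without touching the server; a hedged sketch (server name and paths are placeholders):

    # Mount with explicit transfer sizes; the server may negotiate them down
    mount -t nfs -o rsize=32768,wsize=32768 nfsserver:/export /mnt/test

    # Confirm what was actually negotiated
    grep nfs /proc/mounts
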
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
...umped up the default IP send/receive buffer size for improved UDP transmission over 1Gbps. The CPU is a P4 Dual Core 3GHz, not top of the line but adequate for my needs (strictly block io). Here are the TCP/IP tunables from my sysctl.conf: # Controls default receive buffer size (bytes) net.core.rmem_default = 262144 # Controls IP default send buffer size (bytes) net.core.wmem_default = 262144 # Controls IP maximum receive buffer size (bytes) net.core.rmem_max = 262144 # Controls IP maximum send buffer size (bytes) net.core.wmem_max = 262144 # Controls TCP memory utilization (pages) net.ipv4.tcp_me...
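
Before raising the buffers further, it can be worth ruling out drops at the NIC itself; a sketch using ethtool (interface name assumed):

    # Driver-level drop/error counters for the e1000
    ethtool -S eth0 | grep -iE 'drop|err'

    # RX/TX ring sizes; small RX rings can drop bursts before the
    # socket buffer is ever involved
    ethtool -g eth0
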
2008 Feb 06
3
nic poor performance after upgrade to xen 3.2
Hi, I'm doing some tests on a network with 10 Gb NICs and Xen. With version 3.1 I'm measuring 2.5 Gb/sec from domU to an external physical machine with iperf. Switching to 3.2 has reduced the measured performance to 40-50 Mb/sec. Did anything change in the network interface? Can someone help me? Thanks
2012 Apr 17
1
Help needed with NFS issue
...dual bonded gigabit links in balance-alb mode. Turning off one interface in the bond made no difference. Relevant /etc/sysctl.conf parameters: vm.dirty_ratio = 50 vm.dirty_background_ratio = 1 vm.dirty_expire_centisecs = 1000 vm.dirty_writeback_centisecs = 100 vm.min_free_kbytes = 65536 net.core.rmem_default = 262144 net.core.rmem_max = 262144 net.core.wmem_default = 262144 net.core.wmem_max = 262144 net.core.netdev_max_backlog = 25000 net.ipv4.tcp_reordering = 127 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_max_syn_backlog = 8192 net.ipv4.tcp_no_metrics...
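
With a setup like this, one useful follow-up is separating NFS-level retransmissions from TCP-level trouble; a hedged check from the client side:

    # NFS client RPC statistics; a high retrans count implicates the transport
    nfsstat -rc

    # TCP retransmission counters, one layer down
    netstat -s | grep -i retrans
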
2008 Jan 06
4
Increasing throughput on xen bridges
Hi all, I have a rhel 5.1 xen server with two rhel 3 ES hvm guests installed. Both rhel3 guests use an internal xen bridge (xenbr1) which isn't bound to any physical host interface. Throughput on this bridge is very poor, only 2.5 Mbs. How can I increase this throughput? Many thanks. -- CL Martinez carlopmart {at} gmail {d0t} com
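
A first step is usually to measure the bridge path in isolation with iperf between the two guests (addresses below are placeholders):

    # On the first rhel3 guest (receiver)
    iperf -s

    # On the second guest, sending across xenbr1 for 30 seconds
    iperf -c 10.0.0.1 -t 30 -i 5

If the number stays low even here, disabling TX checksum offload inside the guests (ethtool -K eth0 tx off) has sometimes helped with HVM guests, though that is anecdotal.
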
2005 Oct 01
7
Updated presentation of Asterisk 1.2
Friends, I have updated my Asterisk 1.2 presentation with the latest information. It is still available in the same place as before: http://www.astricon.net/asterisk1-2/ Please continue to test the beta of Asterisk 1.2, available at ftp.digium.com. We need all the feedback we can get. If you are a developer and have some time for community work, please check in with the bug tracker and help us
2016 Jan 07
0
Samba over slow connections
.../sbin/ifconfig eth0 txqueuelen 100 ______________________________________________ ifcfg-eth0: ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128" ______________________________________________ sysctl.conf: net.core.rmem_max = 65536 net.core.wmem_max = 65536 net.core.rmem_default = 32768 net.core.wmem_default = 32768 net.ipv4.tcp_rmem = 4096 32768 65536 net.ipv4.tcp_wmem = 4096 32768 65536 net.ipv4.tcp_mem = 4096 32768 65536 net.ipv4.udp_mem = 4096 32768 65536 net.ipv4.tcp_moderate_rcvbuf = 1 net.ipv4.tcp_sack = 1 net.ipv4.tcp_dsack = 1 _____________________________________...
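
One thing worth verifying with ETHTOOL_OPTS lines like this is that the settings actually stuck once the interface came up:

    # Offload state as currently applied (tso on, lro off expected)
    ethtool -k eth0 | grep -E 'tcp-segmentation-offload|large-receive-offload'

    # Current vs. pre-set maximum ring sizes after the -G change
    ethtool -g eth0
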
2016 Jan 07
1
Samba over slow connections
...______________________________ > > ifcfg-eth0: > > ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128" > ______________________________________________ > > sysctl.conf: > > net.core.rmem_max = 65536 > net.core.wmem_max = 65536 > net.core.rmem_default = 32768 > net.core.wmem_default = 32768 > net.ipv4.tcp_rmem = 4096 32768 65536 > net.ipv4.tcp_wmem = 4096 32768 65536 > net.ipv4.tcp_mem = 4096 32768 65536 > net.ipv4.udp_mem = 4096 32768 65536 > net.ipv4.tcp_moderate_rcvbuf = 1 > net.ipv4.tcp_sack = 1 > net.ipv4.tcp_dsack =...
2007 Nov 04
1
Bandwidth optimisation
OS: CentOS 5.0 x86. Hi, I am using CentOS 5.0 at home, ADSL ~16 Mbps/~1 Mbps Internet connection and my ping time to my ISP is 160-170 msec. When downloading something with Firefox, I am getting download speeds of about 100-180 KB/sec (for example when downloading SP2 of XP from MS server). Are the CentOS networking settings OK for this kind of latency, or do I have to change some settings?
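
The arithmetic that decides this is the bandwidth-delay product: the link can hold (bits per second ÷ 8) × RTT bytes in flight, and the TCP receive window must be at least that large to fill the pipe. At roughly 16 Mbit/s and 170 ms:

    # Bandwidth-delay product in bytes
    echo $(( 16000000 / 8 * 170 / 1000 ))    # 340000, i.e. about 340 KB

    # Compare against the maximum TCP receive window (third value)
    sysctl net.ipv4.tcp_rmem
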
2007 Aug 11
1
disable TCP slowstart ?
I'm trying to improve my internal Apache proxy. It has to deliver a lot of little/medium-sized files. But every transfer starts with the usual small window size. While this is good for internet connections, it is not as good for internal-only connections where the environment is sane. I have tried to tune the initial window size via /proc/sys/net/ipv4/tcp_congestion_control; tried already
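
A hedged note on the path being tried: /proc/sys/net/ipv4/tcp_congestion_control selects the congestion-control algorithm, not the initial window size. On much newer kernels (2.6.39 and later) the initial congestion window can instead be raised per route (gateway and device below are placeholders):

    # What the proc file actually controls
    cat /proc/sys/net/ipv4/tcp_congestion_control

    # Raise the initial congestion window on the default route
    # (iproute2 on kernel >= 2.6.39; not available in 2007-era kernels)
    ip route change default via 192.168.1.1 dev eth0 initcwnd 10
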
2005 May 23
0
problem in speeds [Message from superlinux]
...ally at its 52 kbyte/sec. So what is happening, FOR GOD'S SAKE!!??? In addition, after all that, I thought it was slow because of the RCV and SEND buffer settings in the kernel and the TTL=64. So I set RCV and SEND to twice the kernel's default in /etc/sysctl.conf as: net.core.rmem_default = 262140 net.core.wmem_default = 262140 net.core.rmem_max = 262140 net.core.wmem_max = 262140 The RESULT IS THE SAME!!! So what is happening, FOR GOD'S SAKE!!???
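
One plausible reason the doubling had no effect: for TCP sockets, net.ipv4.tcp_rmem and net.ipv4.tcp_wmem override net.core.rmem_default and wmem_default, so changing the latter does not touch TCP transfers at all. A quick check:

    # min / default / max for TCP sockets; the middle value is what a
    # TCP connection starts with, regardless of net.core.rmem_default
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
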
2020 Apr 01
0
[ANNOUNCE] conntrack-tools 1.4.6
...nload it from: http://www.netfilter.org/projects/libnftnl/downloads.html ftp://ftp.netfilter.org/pub/libnftnl/ Happy firewalling. -------------- next part -------------- Arturo Borrero Gonzalez (2): conntrackd.conf.8: fix state filter example docs: refresh references to /proc/net/core/rmem_default Ash Hughes (2): conntrackd: search for RPC headers conntrackd: Use strdup in lexer Brian Haley (1): conntrack: Allow protocol number zero Jan-Martin Raemer (1): conntrackd: UDP IPv6 destination address not usable (Bug 1378) Jose M. Guisado Gomez (1): src: fix strnc...
2007 Oct 11
2
udp question
Hi all, I use Linux as a GigE router and have 6 NICs in it. These days the NIC interrupts take around 100% CPU, but the system has 4G memory and 8 CPUs. I can't see any error packets on the NIC interfaces either. After I block the UDP, the %CPU drops, but the UDP traffic is only around 8M in general. We use UDP traffic for voice. Do you have any suggestions? Increase a kernel parameter? Thank you so much
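
With 8 CPUs and six NICs, the first thing to check is whether all the NIC interrupts land on a single core; a sketch (the IRQ number and mask below are examples):

    # Per-CPU interrupt counts for each NIC
    grep -E 'eth[0-9]' /proc/interrupts

    # Pin one NIC's IRQ to CPU1 (bitmask 2); repeat per NIC/IRQ
    echo 2 > /proc/irq/24/smp_affinity
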
2004 Dec 31
1
SMBFS mounts slow across gigabit connection
...tu (Debian) with the 2.6.8 kernel, while the host is a Gentoo box running the 2.6.10-rc3 (nitro2) kernel. I have made a few adjustments to the TCP settings on each system: echo 262144 > /proc/sys/net/core/rmem_max echo 262144 > /proc/sys/net/core/wmem_max echo 163840 > /proc/sys/net/core/rmem_default echo 163840 > /proc/sys/net/core/wmem_default echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_rmem echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_wmem echo "49152 163840 262144" > /proc/sys/net/ipv4/tcp_mem These, however, have only helped each of t...
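
Worth noting that echo-into-/proc settings vanish at reboot; the persistent equivalent of the commands above, in /etc/sysctl.conf syntax (same values), would be:

    net.core.rmem_max = 262144
    net.core.wmem_max = 262144
    net.core.rmem_default = 163840
    net.core.wmem_default = 163840
    net.ipv4.tcp_rmem = 4096 163840 262144
    net.ipv4.tcp_wmem = 4096 163840 262144
    net.ipv4.tcp_mem = 49152 163840 262144

applied with sysctl -p on both machines.
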
2005 May 13
4
Gigabit Throughput too low
...lush is bouncing around with Samba, not sure what is going on. Was curious if you had improved your situation, and if so, what you did. BTW here are some tweaks for network stack related stuff for Gig. sysctl -w net.core.rmem_max=8388608 sysctl -w net.core.wmem_max=8388608 sysctl -w net.core.rmem_default=65536 sysctl -w net.core.wmem_default=65536 sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608' sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608' sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608' sysctl -w net.ipv4.route.flush=1 Brian M. Duncan Katten Muchin Rosenman LLP 5...