Displaying 20 results from an estimated 44 matches for "wmem_max".
2011 Mar 11
1
UDP Performance tuning
...received.
109709802 packet receive errors
7239 packets sent
We had checked all the kernel configurations and set them as recommended.
This is the settings and tests we have done
Server:
net.core.rmem_default = 2097152
net.core.wmem_default = 2097152
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
strace -fttt -o /tmp/server.trace iperf -s -u -p 2222 -w 6m -i 5 -l 1k
Client:
net.core.rmem_default = 2097152
net.core.wmem_default = 2097152
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
strace -fttt -o /tmp/client.trace iperf -p 2222 -u -w 6m -i 5 -t 60 -c
192.168.1.2 -b 990...
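The receive-error counter the poster quotes comes from the Udp section of /proc/net/snmp (netstat -su reads the same counters). A minimal sketch of pulling those fields out, using a fabricated sample built from the figures above:

```python
# Parse the two "Udp:" lines of /proc/net/snmp (header line + value line)
# into a dict of counters such as InErrors, the "packet receive errors"
# figure that netstat reports.
def udp_counters(snmp_text):
    udp_lines = [l for l in snmp_text.splitlines() if l.startswith("Udp:")]
    header = udp_lines[0].split()[1:]
    values = udp_lines[1].split()[1:]
    return dict(zip(header, map(int, values)))

# Fabricated sample using the numbers quoted in the post above.
sample = (
    "Udp: InDatagrams NoPorts InErrors OutDatagrams\n"
    "Udp: 7239 0 109709802 7239\n"
)
print(udp_counters(sample)["InErrors"])  # 109709802
```

On a real system, pass `open("/proc/net/snmp").read()` instead of the sample string.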
2007 Dec 28
7
Xen and networking.
...sly, I cannot seem to get above 25MB/sec from each.
It starts off with a large burst as if each can do 100MB/sec, but then in a
couple of seconds tapers off to 15-40MB/sec until the dd finishes.
Things I have tried (installed on the host and the guests)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
sysctl -w net.ipv4.tcp_congestion_control=cubi...
2010 Dec 10
1
UDP buffer overflows?
...nt
RcvbufErrors: 44017 <--- this
When this number increases we see SIP errors; in particular, Qualify
packets are lost, which temporarily disables handsets, causing all sorts
of minor chaos.
I have already tuned from the defaults of:
net.core.rmem_max = 131071
net.core.wmem_max = 131071
net.core.rmem_default = 111616
net.core.wmem_default = 111616
net.core.optmem_max = 10240
net.core.netdev_max_backlog = 1000
up to:
net.core.rmem_max = 1048575
net.core.wmem_max = 1048575
net.core.rmem_default = 1048575
net.core.wmem_default = 1048575
net.core.optmem_max = 10485...
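One thing worth checking alongside these sysctls: a socket only gets a bigger buffer if the application asks for one, and the kernel silently caps the request at rmem_max. A small sketch (Linux books double the requested value, capped at the max, and reports that via getsockopt):

```python
import socket

# Request a 4 MiB receive buffer on a UDP socket; the kernel caps the
# request at net.core.rmem_max, so the granted size may be much smaller
# than what was asked for.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
s.close()
```

If `granted` stays well below the request, rmem_max is still the limiting factor for this application.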
2010 Mar 16
2
What kernel params to use with KVM hosts??
...hosts I have use these params:
- On /etc/grub.conf:
kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet
- On sysctl.conf
# Special network params
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 1048576 4194304 16777216
net.ipv4.tcp_wmem = 1048576 4194304 16777216
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
# Virtual machines spec...
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year),
I'm experiencing some trouble using Samba (4.1.17 Debian version) over
VPN. Basically we have the following setup:
PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file server
Copying big (say > 1MiB) files from the PC to the Samba file server almost
always ends up with a NT_STATUS_IO_TIMEOUT error (or "a network error
occurred" if trying to copy from
2020 Jan 13
0
UDPbuffer adjustment
...ndBuf = bytes (OS default)
Sets the socket send buffer size for the UDP socket, in bytes. If unset, the default buffer size will be used by
the operating system.
OS: CentOS. I would just like to know: if I increase /proc/sys/net/core/rmem_max and /proc/sys/net/core/wmem_max to 10MB with the command below (in order to increase the UDP buffer size):
sysctl -w net.core.rmem_max=10485760 ; sysctl -w net.core.wmem_max=10485760
but keep /proc/sys/net/core/rmem_default and /proc/sys/net/core/wmem_default unchanged, which is 200KB,
and I don’t have any UDPRcvBuf and UDPSndBuf conf...
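For what it's worth: with rmem_default left at ~200KB, a socket that never calls setsockopt keeps the default size, so raising only rmem_max/wmem_max changes nothing unless the application (here, the UDPRcvBuf/UDPSndBuf options) requests a larger buffer itself. A quick way to see what a fresh socket actually gets:

```python
import socket

# A freshly created UDP socket is sized by net.core.rmem_default, not
# rmem_max; only an explicit SO_RCVBUF request can grow it up to the max.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
default_rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(default_rcvbuf)
s.close()
```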
2009 Jul 07
1
Sysctl on Kernel 2.6.18-128.1.16.el5
Sysctl Values
-------------------------------------------
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_window_scaling = 1
# vm.max-readahead = ?
# vm.min-readahead = ?
# HW Controller Off
# max-readahead = 1024
# min-readahead = 256
# Memory over-commit
# vm.overcommit_memory=2
# Memory to acti...
2009 Dec 14
4
Provider Augeas not functional on Sles10?
...s looks something like this:
augeas {"sysctl.conf":
provider => "augeas",
context => "/files/etc/sysctl.conf",
changes => [
"set net.core.wmem_default 262144",
"set net.core.wmem_max 262144",
"set kernel.sem 250 32000 100 128",
],
}
When I do a 'puppetd -v -d --no-daemonize --onetime' on my node, I get
the following error:
"err: //Node...Failed to retrieve current state of resource: Provider
au...
2006 Sep 20
5
Transfer rates faster than 23MBps?
We use SMB to transfer large files (between 1GB and 5GB) from RedHat AS4
Content Storage servers to Windows clients with 6 DVD burners and
robotic arms and other cool gadgets. The servers used to be Windows
based, but we're migrating to RedHat for a host of reasons.
Unfortunately, the RedHat Samba servers are about 2.5 times slower than
the Windows servers. Windows will copy a 1GB file
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
....conf:
# Controls default receive buffer size (bytes)
net.core.rmem_default = 262144
# Controls IP default send buffer size (bytes)
net.core.wmem_default = 262144
# Controls IP maximum receive buffer size (bytes)
net.core.rmem_max = 262144
# Controls IP maximum send buffer size (bytes)
net.core.wmem_max = 262144
# Controls TCP memory utilization (pages)
net.ipv4.tcp_mem = 49152 65536 98304
# Controls TCP sliding receive window buffer (bytes)
net.ipv4.tcp_rmem = 4096 87380 16777216
# Controls TCP sliding send window buffer (bytes)
net.ipv4.tcp_wmem = 4096 65536 16777216
Ross S. W. Walker
Infor...
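Note the unit mismatch the comments above call out: tcp_mem is in pages while the other knobs are in bytes. A quick conversion, assuming the usual 4 KiB page:

```python
PAGE_SIZE = 4096  # typical x86 page size; confirm with `getconf PAGESIZE`

# Middle ("pressure") value of net.ipv4.tcp_mem above, converted to bytes.
tcp_mem_pressure_pages = 65536
pressure_bytes = tcp_mem_pressure_pages * PAGE_SIZE
print(pressure_bytes)  # 268435456 bytes = 256 MiB
```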
2007 Apr 18
1
[Bridge] [BUG/PATCH/RFC] bridge: locally generated broadcast traffic may block sender
....0.0.0 up
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
# brctl stp br0 on/off doesn't matter
# configure bridge interface
ifconfig br0 10.20.30.40 up
route add -net 224.0.0.0 netmask 240.0.0.0 dev br0
# try to send a fixed amount of multicast UDP traffic
nbytes=`cat /proc/sys/net/core/wmem_max`
nbytes=$(( $nbytes * 2 ))
dd if=/dev/zero bs=$nbytes count=1 | nc -nuvw1 224.0.0.123 1234
# arguments to nc:
# -w1 "wait 1sec" causes nc to exit after sending the _complete_ amount
# -n no names, -v verbose, -u UDP to multicast 224.0.0.123 port 1234
If both links are connected, the dd...
2008 Feb 06
3
nic poor performance after upgrade to xen 3.2
Hi,
I'm doing some tests on a network with 10 Gb NICs and Xen.
With version 3.1 I'm measuring 2.5 Gb/sec from domU to an external physical machine with iperf.
Switching to 3.2 has reduced the measured performance to 40-50 Mb/sec.
Did anything change in the network interface?
Can someone help me?
thanks
_______________________________________________
Xen-users mailing list
2012 Apr 17
1
Help needed with NFS issue
...e no difference.
Relevant /etc/sysctl.conf parameters:
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_no_metrics_save = 1
The {r,w}mem_{max,default} values are twice what they were previously;
ch...
2017 May 17
2
Improving packets/sec and data rate - v1.0.24
Hi,
We've been running tinc for a while now but, have started hitting a
bottleneck where the number of packets/sec able to be processed by our
Tinc nodes is maxing out around 4,000 packets/sec.
Right now, we are using the default cipher and digest settings (so,
blowfish and sha1). I've been testing using aes-256-cbc for the cipher
and seeing ~5% increases across the board. Each Tinc node
2008 Jan 06
4
Increasing throughput on xen bridges
Hi all,
I have a RHEL 5.1 Xen server with two RHEL 3 ES HVM guests installed. Both
RHEL 3 guests use an internal Xen bridge (xenbr1) which isn't bound to any
physical host interface. Throughput on this bridge is very poor, only
2.5 Mbs. How can I increase this throughput?
Many thanks.
--
CL Martinez
carlopmart {at} gmail {d0t} com
2016 Jan 07
0
Samba over slow connections
...o
> consume them)
/usr/sbin/ifconfig eth0 txqueuelen 100
______________________________________________
ifcfg-eth0:
ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
______________________________________________
sysctl.conf:
net.core.rmem_max = 65536
net.core.wmem_max = 65536
net.core.rmem_default = 32768
net.core.wmem_default = 32768
net.ipv4.tcp_rmem = 4096 32768 65536
net.ipv4.tcp_wmem = 4096 32768 65536
net.ipv4.tcp_mem = 4096 32768 65536
net.ipv4.udp_mem = 4096 32768 65536
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_dsack = 1
_______...
2016 Jan 07
1
Samba over slow connections
...uelen 100
> ______________________________________________
>
> ifcfg-eth0:
>
> ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
> ______________________________________________
>
> sysctl.conf:
>
> net.core.rmem_max = 65536
> net.core.wmem_max = 65536
> net.core.rmem_default = 32768
> net.core.wmem_default = 32768
> net.ipv4.tcp_rmem = 4096 32768 65536
> net.ipv4.tcp_wmem = 4096 32768 65536
> net.ipv4.tcp_mem = 4096 32768 65536
> net.ipv4.udp_mem = 4096 32768 65536
> net.ipv4.tcp_moderate_rcvbuf = 1
> net.ipv4.tcp...
2007 Nov 04
1
Bandwidth optimisation
OS: CentOS 5.0 x86.
Hi, I am using CentOS 5.0 at home, ADSL ~16 Mbps/~1 Mbps Internet
connection and my ping time to my ISP is 160-170 msec.
When downloading something with Firefox, I am getting download speeds of
about 100-180 KB/sec (for example when downloading SP2 of XP from MS
server).
Are the CentOS networking settings OK for this kind of latency, or do I
have to change some settings?
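A rough answer can be computed from the bandwidth-delay product: the receive buffer needs to be at least bandwidth × RTT to keep the link full. Using the figures from the post (~16 Mbit/s down, ~165 ms ping):

```python
# Bandwidth-delay product for the ADSL link described above.
bandwidth_bps = 16_000_000   # ~16 Mbit/s download
rtt_s = 0.165                # ~165 ms ping time
bdp_bytes = int(bandwidth_bps / 8 * rtt_s)
print(bdp_bytes)  # 330000 bytes, ~322 KiB
```

If tcp_rmem's max is already above this (it usually is on 2.6 kernels), buffer sizing is unlikely to be the bottleneck; the latency itself limits per-connection throughput during slow start.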
2010 May 03
0
TCP Tuning/Apache question (possibly OT)
Hello All:
I've been requested to add some TCP tuning parameters to some CentOS
5.4 systems. These tunings are for the TCP send and receive buffer maxima:
net.core.rmem_max
net.core.wmem_max
Information on this tuning is broadly available:
http://fasterdata.es.net/TCP-tuning/linux.html
http://www.speedguide.net/read_articles.php?id=121
Potential downsides are available:
http://www.29west.com/docs/THPM/udp-buffer-sizing.html
From the above, the rmem size is a function of the numb...
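The truncated sentence is presumably heading toward the usual sizing rule from the guides linked above: the maximum buffer should cover the bandwidth-delay product of the path. An illustrative calculation with hypothetical numbers, not taken from the post:

```python
# Hypothetical long-fat path: 1 Gbit/s with a 50 ms RTT.
bandwidth_bps = 1_000_000_000
rtt_s = 0.050
bdp = int(bandwidth_bps / 8 * rtt_s)
print(bdp, 2 * bdp)  # BDP 6250000 bytes; 2x BDP ~ 12.5 MB fits a 16 MiB rmem_max
```

The commonly cited downside is the flip side of the same arithmetic: every such connection may pin that much kernel memory.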
2007 Aug 11
1
disable TCP slowstart ?
I'm trying to improve my internal Apache proxy. It has to deliver a lot of
small/medium-sized files. But every transfer starts with the usual small
window size. While this is good for Internet connections, it is not as good
for internal-only connections where the environment is sane.
I have tried to tune the initial window size via
/proc/sys/net/ipv4/tcp_congestion_control
tried already