Displaying 20 results from an estimated 30 matches for "wmem_default".
2011 Mar 11
1
UDP Performance tuning
...551522838 packets received
1902 packets to unknown port received.
109709802 packet receive errors
7239 packets sent
We had checked all the kernel configurations and set them as recommended.
These are the settings and tests we have done:
Server:
net.core.rmem_default = 2097152
net.core.wmem_default = 2097152
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
strace -fttt -o /tmp/server.trace iperf -s -u -p 2222 -w 6m -i 5 -l 1k
Client:
net.core.rmem_default = 2097152
net.core.wmem_default = 2097152
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
strace -fttt -o /tmp/client.trace...
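A minimal sketch, not part of the original post, for watching whether the receive-error counter above keeps climbing while the iperf test runs:
netstat -su | grep -i 'receive errors'
# Per-socket drops (on reasonably recent kernels) appear in the last column of:
cat /proc/net/udp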
2010 Dec 10
1
UDP buffer overflows?
...en this number increases, we see SIP errors; in particular,
Qualify packets are lost, which temporarily disables handsets, causing
all sorts of minor chaos.
I have already tuned from the defaults of:
net.core.rmem_max = 131071
net.core.wmem_max = 131071
net.core.rmem_default = 111616
net.core.wmem_default = 111616
net.core.optmem_max = 10240
net.core.netdev_max_backlog = 1000
up to:
net.core.rmem_max = 1048575
net.core.wmem_max = 1048575
net.core.rmem_default = 1048575
net.core.wmem_default = 1048575
net.core.optmem_max = 1048575
net.core.netdev_max_backlog = 10000
with no luck.
Any more...
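One additional check, an assumption rather than something from the thread: confirm the SIP socket really gets the larger buffer and watch its per-socket drop counter (5060 is the usual SIP port):
ss -u -m -p | grep -A 1 ':5060'
# In the skmem output, rb is the receive-buffer limit and d the drop count.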
2010 Mar 16
2
What kernel params to use with KVM hosts??
Hi all,
In order to reach maximum performance on my CentOS KVM hosts I have used these params:
- On /etc/grub.conf:
kernel /vmlinuz-2.6.18-164.11.1.el5 ro root=LABEL=/ elevator=deadline quiet
- On sysctl.conf
# Special network params
net.core.rmem_default = 8388608
net.core.wmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 1048576 4194304 16777216
net.ipv4.tcp_wmem = 1048576 4194304 16777216
# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net....
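A small sketch of a common pitfall with the settings above: the net.bridge.* lines silently fail if the bridge module is not yet loaded when sysctl.conf is processed, so it is worth verifying after boot:
lsmod | grep -E 'bridge|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
sysctl -p /etc/sysctl.conf   # re-apply if the values did not stick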
2016 Jan 07
3
Samba over slow connections
Hi list (and happy new year),
I'm experiencing some trouble using Samba (4.1.17 Debian version) over
VPN. Basically we have the following setup:
PC === LAN ===> VPN (WAN) ==== LAN ===> Samba file Server
Copying big (say > 1MiB) files from the PC to the Samba file server almost
always ends up with an NT_STATUS_IO_TIMEOUT error (or "a network error
occurred" if trying to copy from
2009 Dec 14
4
Provider Augeas not functional on Sles10?
...But I would like to
use it in combination with puppet. My class looks something like this:
augeas { "sysctl.conf":
  provider => "augeas",
  context  => "/files/etc/sysctl.conf",
  changes  => [
    "set net.core.wmem_default 262144",
    "set net.core.wmem_max 262144",
    "set kernel.sem 250 32000 100 128",
  ],
}
When I do a 'puppetd -v -d --no-daemonize --onetime' on my node, I get
the following error:
"err: //Node.....
2010 Jun 21
3
Increasing NFS Performance
...e the items I've found so far that are said to increase performance. Since this is a production system, I have yet to try these:
1. Increase the number of instances of NFS running. (As found in /etc/sysconfig/nfs)
2. Try sync vs async behavior in mount parameters on clients
3. rmem_default and wmem_default parameters
4. rsize and wsize parameters (Dependent on MTU. Currently, mine is default at 1500)
These are the items I'm planning to try, but before I dive in (especially during a late-night maintenance period...), I was hoping list members brighter than me could give some comments on the a...
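If it helps, a sketch of items 2 and 4 as client-side mount options (server name, export path and sizes are placeholders, not values from the post):
mount -t nfs -o rsize=32768,wsize=32768,async nfsserver:/export /mnt/nfs
# The values actually negotiated by an existing mount show up here:
grep nfs /proc/mounts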
2006 Dec 30
1
CentOS 4.4 e1000 and wire-speed
...over 1Gbps.
The CPU is a P4 Dual Core 3GHz, not top of the line but adequate for my
needs (strictly block I/O).
Here are the TCP/IP tunables from my sysctl.conf:
# Controls default receive buffer size (bytes)
net.core.rmem_default = 262144
# Controls IP default send buffer size (bytes)
net.core.wmem_default = 262144
# Controls IP maximum receive buffer size (bytes)
net.core.rmem_max = 262144
# Controls IP maximum send buffer size (bytes)
net.core.wmem_max = 262144
# Controls TCP memory utilization (pages)
net.ipv4.tcp_mem = 49152 65536 98304
# Controls TCP sliding receive window buffer (bytes)
net...
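A simple way to see how close the box gets to wire speed, assuming iperf is installed on both ends (the IP and options are placeholders):
iperf -s                           # on the receiving host
iperf -c 192.168.1.10 -t 30 -P 4   # on the sender: 30 s, 4 parallel streams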
2012 Jan 25
2
Server/Client Alive mechanism issues
Hello,
I have a bandwidth-constrained connection that I'd like to run rsync
over through an SSH tunnel. I also want to detect any network drops
pretty rapidly.
On the servers I'm setting (via sshd_config):
ClientAliveCountMax 5
ClientAliveInterval 1
TCPKeepAlive no
and on the clients I'm setting (via ssh_config):
ServerAliveCountMax 5
ServerAliveInterval 1
TCPKeepAlive no
After
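For what it's worth, the same keepalive behaviour can also be forced per connection on the rsync command line; a sketch with placeholder host and paths:
rsync -az -e "ssh -o ServerAliveInterval=1 -o ServerAliveCountMax=5 -o TCPKeepAlive=no" \
    /data/ user@remotehost:/backup/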
2008 Feb 06
3
nic poor performance after upgrade to xen 3.2
Hi,
I'm doing some tests on a network with 10 Gb NICs and Xen.
With version 3.1 I'm measuring 2.5 Gb/sec from domU to an external physical machine with iperf.
Switching to 3.2 has reduced the measured performance to 40-50 Mb/sec.
Did anything change in the network interface?
Can someone help me?
Thanks
2012 Apr 17
1
Help needed with NFS issue
...f one interface in the bond made no difference.
Relevant /etc/sysctl.conf parameters:
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_no_metrics_save = 1
The {r,w}mem_{max,default} values are twice wha...
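Not from the original post, but a way to watch the dirty/writeback behaviour those vm.* knobs control while the NFS workload runs:
watch -n 1 "grep -E 'Dirty|Writeback' /proc/meminfo"
# Current thresholds:
sysctl vm.dirty_ratio vm.dirty_background_ratio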
2008 Jan 06
4
Increasing throughput on xen bridges
Hi all,
I have a RHEL 5.1 Xen server with two RHEL 3 ES HVM guests installed. Both
RHEL 3 guests use an internal Xen bridge (xenbr1) which isn't bound to any
physical host interface. On this bridge throughput is very poor, only
2.5 Mb/s. How can I increase this throughput?
Many thanks.
--
CL Martinez
carlopmart {at} gmail {d0t} com
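One thing worth checking, an assumption rather than anything from the thread: checksum/TSO offload on the virtual interfaces is a common cause of very poor throughput on internal Xen bridges.
brctl show xenbr1        # on the host: confirm which vifs sit on the bridge
ethtool -K eth0 tx off   # inside each guest: disable TX checksum offload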
2016 Jan 07
0
Samba over slow connections
...100
______________________________________________
ifcfg-eth0:
ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
______________________________________________
sysctl.conf:
net.core.rmem_max = 65536
net.core.wmem_max = 65536
net.core.rmem_default = 32768
net.core.wmem_default = 32768
net.ipv4.tcp_rmem = 4096 32768 65536
net.ipv4.tcp_wmem = 4096 32768 65536
net.ipv4.tcp_mem = 4096 32768 65536
net.ipv4.udp_mem = 4096 32768 65536
net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_dsack = 1
______________________________________________
smb.conf:
socket o...
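To confirm the ETHTOOL_OPTS above are actually applied after ifup (interface name as in the config):
ethtool -k eth0   # offload settings (tso/lro)
ethtool -g eth0   # ring buffer sizes (rx/tx)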
2016 Jan 07
1
Samba over slow connections
...> ifcfg-eth0:
>
> ETHTOOL_OPTS="-K ${DEVICE} tso on lro off; -G ${DEVICE} rx 128 tx 128"
> ______________________________________________
>
> sysctl.conf:
>
> net.core.rmem_max = 65536
> net.core.wmem_max = 65536
> net.core.rmem_default = 32768
> net.core.wmem_default = 32768
> net.ipv4.tcp_rmem = 4096 32768 65536
> net.ipv4.tcp_wmem = 4096 32768 65536
> net.ipv4.tcp_mem = 4096 32768 65536
> net.ipv4.udp_mem = 4096 32768 65536
> net.ipv4.tcp_moderate_rcvbuf = 1
> net.ipv4.tcp_sack = 1
> net.ipv4.tcp_dsack = 1
> ___________________________...
2007 Nov 04
1
Bandwidth optimisation
OS: CentOS 5.0 x86.
Hi, I am using CentOS 5.0 at home, with an ADSL ~16 Mbps/~1 Mbps Internet
connection, and my ping time to my ISP is 160-170 msec.
When downloading something with Firefox, I am getting download speeds of
about 100-180 KB/sec (for example when downloading SP2 of XP from MS
server).
Are the CentOS networking settings OK for this kind of latency, or do I
have to change some settings?
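As a rough back-of-the-envelope check, not from the original post: at ~16 Mbit/s down and ~165 ms RTT the bandwidth-delay product is about 16,000,000 / 8 * 0.165 ≈ 330 KB, so the TCP receive window must be allowed to grow at least that large to fill the link.
# See whether the auto-tuning ceiling already covers ~330 KB:
sysctl net.ipv4.tcp_rmem net.core.rmem_max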
2007 Aug 11
1
disable TCP slowstart ?
I'm trying to improve my internal Apache proxy. It has to deliver a lot of
small/medium-sized files, but every transfer starts with the usual small
window size. While this is good for Internet connections, it is not as good
for internal-only connections where the environment is sane.
I have tried to tune the initial window size via
/proc/sys/net/ipv4/tcp_congestion_control
Tried already
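For reference, a minimal sketch (values are examples, not a recommendation from the thread) of how that knob is usually inspected and changed; note it selects the congestion-control algorithm rather than the initial window size itself:
sysctl net.ipv4.tcp_available_congestion_control   # algorithms in the running kernel
sysctl -w net.ipv4.tcp_congestion_control=cubic    # cubic is only an example value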
2005 May 23
0
problem in speeds [Message from superlinux]
...hat is happening FOR GOD'S SAKE!!???
In addition, after all that, I thought it was slow because of the RCV
and SEND buffer settings in the kernel and the TTL=64, so I set RCV and
SEND to twice the kernel's default in /etc/sysctl.conf as:
net.core.rmem_default = 262140
net.core.wmem_default = 262140
net.core.rmem_max = 262140
net.core.wmem_max = 262140
The RESULT IS THE SAME!!!
So what is happening FOR GOD'S SAKE!!???
2007 Oct 11
2
udp question
Hi all,
I use Linux as a GigE router and have 6 NICs on it.
These days the NIC interrupts take around 100% CPU, but the system has 4G memory and 8 CPUs. I can't see any error packets on this NIC interface either.
After I block the UDP traffic, the %CPU drops, but UDP only takes around 8M in general.
We use UDP traffic for voice.
Do you have any suggestions? Increase a kernel parameter?
Thank you so much
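As a first step, an assumption rather than advice from the thread, it may help to see which CPU takes the NIC interrupts and spread them out if they all land on one core:
cat /proc/interrupts | grep -i eth
# NN is the IRQ number from the output above; the mask 2 pins it to CPU1.
echo 2 > /proc/irq/NN/smp_affinity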
2004 Dec 31
1
SMBFS mounts slow across gigabit connection
...is a
Gentoo box running the 2.6.10-rc3 (nitro2) kernel.
I have made a few adjustments to the TCP settings on each system:
echo 262144 > /proc/sys/net/core/rmem_max
echo 262144 > /proc/sys/net/core/wmem_max
echo 163840 > /proc/sys/net/core/rmem_default
echo 163840 > /proc/sys/net/core/wmem_default
echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_rmem
echo "4096 163840 262144" > /proc/sys/net/ipv4/tcp_wmem
echo "49152 163840 262144" > /proc/sys/net/ipv4/tcp_mem
These, however, have only helped each of the transfer types
performance-wise (FTP especiall...
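The same values in persistent form, a sketch for /etc/sysctl.conf applied with "sysctl -p" instead of echoing into /proc after every boot:
net.core.rmem_max = 262144
net.core.wmem_max = 262144
net.core.rmem_default = 163840
net.core.wmem_default = 163840
net.ipv4.tcp_rmem = 4096 163840 262144
net.ipv4.tcp_wmem = 4096 163840 262144
net.ipv4.tcp_mem = 49152 163840 262144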
2005 May 13
4
Gigabit Throughput too low
...t
sure what is going on.
I was curious whether you had improved your situation, and if you did, what
you did.
BTW, here are some network-stack tweaks for Gig.
sysctl -w net.core.rmem_max=8388608
sysctl -w net.core.wmem_max=8388608
sysctl -w net.core.rmem_default=65536
sysctl -w net.core.wmem_default=65536
sysctl -w net.ipv4.tcp_rmem='4096 87380 8388608'
sysctl -w net.ipv4.tcp_wmem='4096 65536 8388608'
sysctl -w net.ipv4.tcp_mem='8388608 8388608 8388608'
sysctl -w net.ipv4.route.flush=1
Brian M. Duncan
Katten Muchin Rosenman LLP
525 West Monroe Street
Chicago IL 60661...
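Before buffer tuning, it may also be worth confirming the link actually negotiated gigabit full duplex (interface name is a placeholder):
ethtool eth0 | grep -E 'Speed|Duplex'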
2020 Sep 13
20
[Bug 1464] New: Trying to populate a set raises a netlink error "Could not process rule: No space left on device"
https://bugzilla.netfilter.org/show_bug.cgi?id=1464
Bug ID: 1464
Summary: Trying to populate a set raises a netlink error "Could
not process rule: No space left on device"
Product: nftables
Version: unspecified
Hardware: x86_64
OS: Gentoo
Status: NEW
Severity: normal