Displaying 12 results from an estimated 12 matches for "netdev_max_backlog".
2010 Dec 10
1
UDP buffer overflows?
...r
Qualify packets are lost, temporarily disabling handsets and causing
all sorts of minor chaos.
I have already tuned from the defaults of:
net.core.rmem_max = 131071
net.core.wmem_max = 131071
net.core.rmem_default = 111616
net.core.wmem_default = 111616
net.core.optmem_max = 10240
net.core.netdev_max_backlog = 1000
up to:
net.core.rmem_max = 1048575
net.core.wmem_max = 1048575
net.core.rmem_default = 1048575
net.core.wmem_default = 1048575
net.core.optmem_max = 1048575
net.core.netdev_max_backlog = 10000
with no luck.
Any more suggestions?
Many thanks,
Steve
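A note on the tuning above: net.core.rmem_default only applies to sockets
that never call setsockopt(SO_RCVBUF), while rmem_max merely caps what an
application may request, so the application's own buffer size matters too.
A minimal sketch (standard Linux counters, not from the original thread) for
confirming whether the kernel really is discarding UDP datagrams:

# RcvbufErrors counts datagrams dropped because a socket's receive
# buffer was full (present on reasonably recent 2.6 kernels)
cat /proc/net/snmp | grep Udp:
netstat -su | grep -i error
# re-apply the persisted settings and verify the live values
sysctl -p /etc/sysctl.conf
sysctl net.core.rmem_max net.core.rmem_default

If those counters climb while handsets drop out, the buffers are still too
small or the application is not draining its socket fast enough.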
2007 Dec 28
7
Xen and networking.
...he host and the guests)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_moderate_rcvbuf = 1
# recommended to increase this for 1000 BT or higher
net.core.netdev_max_backlog = 2500
sysctl -w net.ipv4.tcp_congestion_control=cubic
Any ideas?
--
--tmac
RedHat Certified Engineer #804006984323821 (RHEL4)
RedHat Certified Engineer #805007643429572 (RHEL5)
Principal Consultant, RABA Technologies
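Worth noting for the settings above: sysctl -w changes are lost on reboot.
A short sketch (assuming the cubic module ships with the kernel, as it does
on RHEL-era 2.6 kernels) for making the congestion-control choice permanent:

# list the algorithms the running kernel can actually use
sysctl net.ipv4.tcp_available_congestion_control
# persist the choice alongside the other tuning
echo 'net.ipv4.tcp_congestion_control = cubic' >> /etc/sysctl.conf
sysctl -p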
2012 Apr 17
1
Help needed with NFS issue
.../etc/sysctl.conf parameters:
vm.dirty_ratio = 50
vm.dirty_background_ratio = 1
vm.dirty_expire_centisecs = 1000
vm.dirty_writeback_centisecs = 100
vm.min_free_kbytes = 65536
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
net.core.netdev_max_backlog = 25000
net.ipv4.tcp_reordering = 127
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_max_syn_backlog = 8192
net.ipv4.tcp_no_metrics_save = 1
The {r,w}mem_{max,default} values are twice what they were previously;
changing these had no effect.
The numb...
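For a setup like the one above, a quick sketch (generic /proc interfaces,
not from the original post) to check whether the aggressive dirty-page
settings are actually shaping writeback during an NFS transfer:

# dirty pages and pages under writeback, sampled every second
watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
# confirm the running values match /etc/sysctl.conf
sysctl vm.dirty_ratio vm.dirty_background_ratio net.core.rmem_max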
2014 Jan 24
1
Possible SYN flooding on port 8000. Sending cookies
...012/04/09/syn-cookies.html
Furthermore:
"While you see SYN flood warnings in logs not being really flooded,
your server is seriously misconfigured."
*A potential fix* - increase the net.ipv4.tcp_max_syn_backlog kernel
parameter, or tune further parameters such as tcp_synack_retries and
netdev_max_backlog.
*My question* - to fix this SYN flooding problem, should I modify
net.ipv4.tcp_max_syn_backlog, net.core.somaxconn, and the backlog size
passed to the listen() syscall, or might there be an easier alternative fix
such as installing
2.3.3-kh9<https://github.com/karlheyes/icecast-kh/archive/icecast-...
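The three limits mentioned above interact: the kernel silently clamps the
backlog argument of listen() to net.core.somaxconn, so raising the sysctls
without also raising the application's backlog may change nothing. A sketch
with illustrative values (not a recommendation for this particular Icecast
setup):

# queue of half-open (SYN_RECV) connections
sysctl -w net.ipv4.tcp_max_syn_backlog=8192
# hard cap on any listen() backlog; the application's own backlog
# argument is silently reduced to this value
sysctl -w net.core.somaxconn=4096
# fewer SYN-ACK retransmits shortens how long half-open entries linger
sysctl -w net.ipv4.tcp_synack_retries=3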
2008 Feb 06
3
nic poor performance after upgrade to xen 3.2
Hi,
I'm doing some tests on a network with 10 Gb NICs and Xen.
With version 3.1 I'm measuring 2.5 Gb/sec from domU to an external physical machine with iperf.
Switching to 3.2 has reduced the measured performance to 40-50 Mb/sec.
Did anything change in the network interface?
Can someone help me?
thanks
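One plausible first step for a regression this large (an assumption, not
something confirmed in this thread): a checksum or segmentation offload
mismatch between netfront and netback is a commonly reported cause of
collapsed Xen throughput, and disabling offloads isolates it:

# inside domU, turn off TX checksumming and segmentation offload
ethtool -K eth0 tx off tso off
# repeat the original measurement; <external-host> is a placeholder
iperf -c <external-host> -t 30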
2006 Oct 30
3
Application 500 Errors
Configuration:
(2) Dual Core Opterons
8GB RAM
Apache used to balance 40 mongrel instances
We receive Application 500 Errors. Nothing suspect appears in the log, so we
are at a loss as to what to do next.
Any advice would be welcome and/or an explanation of what types of things
cause Application 500 Errors in mongrel.
Thanks!
- Jared Brown
2008 Jan 06
4
Increasing throughput on xen bridges
Hi all,
I have a RHEL 5.1 Xen server with two RHEL 3 ES HVM guests installed. Both
RHEL 3 guests use an internal Xen bridge (xenbr1) which isn't bound to any
physical host interface. Throughput on this bridge is very poor, only
2.5 Mb/s. How can I increase it?
Many thanks.
--
CL Martinez
carlopmart {at} gmail {d0t} com
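Throughput of a few Mb/s is typical when HVM guests use the default
emulated Realtek NIC, since every packet passes through qemu device
emulation. A hedged sketch in standard Xen guest-config syntax (trying a
different emulated model is an assumption to test, not a verified fix):

# in each guest's config file, ask qemu to emulate an e1000
# instead of the default rtl8139, then restart the guest
vif = [ 'type=ioemu, model=e1000, bridge=xenbr1' ]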
2012 Mar 16
1
NFS Hanging Under Heavy Load
...RAID SAS 2108
[Liberator] (rev 05)
08:03.0 VGA compatible controller: Matrox Graphics, Inc. MGA G200eW WPCM450
(rev 0a)
/etc/sysctl.conf changes:
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 2621440 16777216
net.ipv4.tcp_wmem = 4096 2621440 16777216
net.core.netdev_max_backlog = 250000
net.ipv4.route.flush = 1
net.ipv4.tcp_window_scaling = 1
vm.dirty_writeback_centisecs = 50
Has anyone else seen similar issues? I can provide additional details
about the server/configuration if anybody needs anything else. The issue
only seems to occur under high write load as we'...
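A sketch (standard tools, assumed installed) for narrowing down whether
hangs like this are writeback pressure or the NFS layer itself:

# watch dirty and in-flight writeback pages during the heavy writes
grep -E '^(Dirty|Writeback):' /proc/meminfo
# server-side NFS/RPC counters; a climbing badcalls figure points
# at the RPC layer rather than the disks
nfsstat -s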
2007 Jul 19
5
ridiculous slow gigabit transfer, faster with VNC
Hi,
I have a problem with file transfers between Windows systems and Unix systems.
I have one Win32 desktop (Intel E6400, 2 GB RAM) and one Win32 laptop (Pentium M, 2 GHz).
Also one Linux laptop (Pentium M, 1.4 GHz) and one OpenSolaris desktop
(Intel E4400, 1 GB RAM).
The two laptops have built-in 100 Mbit Ethernet and the desktops have 1 Gbit
Ethernet on the motherboard. Both desktops use a Marvell Yukon.
The file
2013 Feb 27
1
Slow read performance
...crease TCP max buffer size settable using setsockopt()
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
# increase Linux autotuning TCP buffer limit
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864
# increase the length of the processor input queue
net.core.netdev_max_backlog = 250000
# recommended default congestion control is htcp
net.ipv4.tcp_congestion_control=htcp
# recommended for hosts with jumbo frames enabled
net.ipv4.tcp_mtu_probing=1
Thomas W.
Sr. Systems Administrator COLA/IGES
twake at cola.iges.org
Affiliate Computer Scientist GMU
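One caveat with the htcp line above: on most distributions H-TCP is built
as a module, and on older kernels selecting it can fail unless the module
is loaded first (newer kernels load it automatically). A minimal sketch:

# load the module, confirm it registered, then select it
modprobe tcp_htcp
sysctl net.ipv4.tcp_available_congestion_control
sysctl -w net.ipv4.tcp_congestion_control=htcp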
2003 Feb 03
4
[Bug 40] system hangs, Availability problems, maybe conntrack bug, possible reason here.
https://bugzilla.netfilter.org/cgi-bin/bugzilla/show_bug.cgi?id=40
laforge@netfilter.org changed:
What |Removed |Added
----------------------------------------------------------------------------
Status|NEW |ASSIGNED
------- Additional Comments From laforge@netfilter.org 2003-02-03 16:49 -------
We haven't seen this
2006 Jul 06
12
kernel BUG at net/core/dev.c:1133!
Looks like the GSO is involved?
I got this while running Dom0 only (no guests), with a
BOINC/Rosetta@home application running on all 4 cores.
changeset: 10649:8e55c5c11475
Build: x86_32p (pae).
------------[ cut here ]------------
kernel BUG at net/core/dev.c:1133!
invalid opcode: 0000 [#1]
SMP
CPU: 0
EIP: 0061:[<c04dceb0>] Not tainted VLI
EFLAGS: 00210297 (2.6.16.13-xen