search for: recvspac

Displaying 13 results from an estimated 13 matches for "recvspac".

2013 Jan 31
4
zfs + NFS + FreeBSD with performance prob
... If I do a classic scp I get normal speed, ~9-10 Mbytes/s, so the network is not the problem. I tried something like this (found with Google):
net.inet.tcp.sendbuf_max: 2097152 -> 16777216
net.inet.tcp.recvbuf_max: 2097152 -> 16777216
net.inet.tcp.sendspace: 32768 -> 262144
net.inet.tcp.recvspace: 65536 -> 262144
net.inet.tcp.mssdflt: 536 -> 1452
net.inet.udp.recvspace: 42080 -> 65535
net.inet.udp.maxdgram: 9216 -> 65535
net.local.stream.recvspace: 8192 -> 65535
net.local.stream.sendspace: 8192 -> 65535
and that changed nothing either. Anyone have any idea? Reg...
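A minimal sketch, assuming a stock FreeBSD sysctl(8), of how tunables like the ones quoted above are usually inspected and raised for a quick test; the values are the ones from the post and are illustrative, not recommendations:

# Show the current buffer-related tunables mentioned in the post
sysctl net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max \
       net.inet.tcp.sendspace net.inet.tcp.recvspace \
       net.inet.udp.recvspace net.local.stream.recvspace

# Raise a few of them at runtime (root required); new values only
# affect sockets created after the change
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.recvspace=262144

To keep such settings across reboots they would normally go into /etc/sysctl.conf as plain key=value lines.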
2008 Jul 22
3
6.3-RELEASE-p3 recurring panics on multiple SM PDSMi+
We have 10 SuperMicro PDSMi+ 5015M-MTs that are panicking every few days. This started shortly after an upgrade from 6.2-RELEASE to 6.3-RELEASE with freebsd-update. Other than switching to a debugging kernel, a little sysctl tuning, and patching with freebsd-update, they are stock. The debugging kernel was built from source that is also being patched with freebsd-update. These systems are
2003 Nov 03
3
(long) high traffic syslog server.
...19,762,079 dropped due to full socket buffers
uptime: 5:28PM up 7 days, 18:30, 2 users, load averages: 0.21, 0.23, 0.23
I thought maybe syslogd was the problem, but running nc on the syslog port and sending the output to /dev/null still shows the buffer problem. I've tried upping net.inet.udp.recvspace; if this gets too high I will no longer be able to send UDP packets and will get a socket buffer full error. Raising net.local.dgram.recvspace didn't do much. I tried increasing kern.ipc.maxsockbuf by doubling it each time; this didn't help. kern.ipc.maxsockbuf: 1048576 <- This is what it currently...
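A hedged sketch of how the drop counter and the tunables mentioned in this thread can be checked on FreeBSD; the doubled value at the end is purely illustrative:

# UDP statistics; the interesting line is "dropped due to full socket buffers"
netstat -s -p udp | grep 'full socket buffers'

# Per-socket UDP receive buffer and the global cap a socket may request
sysctl net.inet.udp.recvspace kern.ipc.maxsockbuf

# Try a larger per-socket receive buffer (takes effect for new sockets)
sysctl net.inet.udp.recvspace=65535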
2004 Aug 06
0
FreeBSD in general
...e:
#define BUFSIZE 16384
#define CHUNKLEN 128
And in /etc/rc.local (or another script run by root), add this:
# Max out some network parameters for potential throughput improvement
/sbin/sysctl -w kern.ipc.maxsockbuf=1048576
/sbin/sysctl -w net.inet.tcp.sendspace=32768
/sbin/sysctl -w net.inet.tcp.recvspace=32768
These made an obvious difference for our tests, YMMV. It's been rock solid and we're very happy with it, and the combo of FreeBSD + icecast + liveice + lame + donated old hardware = good value for a non-profit, community radio station. Plus, with the call letters KGNU, open sou...
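The rc.local approach above works, but the same settings are more commonly kept in /etc/sysctl.conf, which is applied automatically at boot; a sketch using the values from the post:

# /etc/sysctl.conf
kern.ipc.maxsockbuf=1048576
net.inet.tcp.sendspace=32768
net.inet.tcp.recvspace=32768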
2003 Apr 06
1
load testing and tuning a 4GB RAM server
Hello everyone, First of all, great job on 4.8-R. We have been long-standing users of FreeBSD and are very happy with everything. Now my question: I am trying to stress test a new Dell PowerEdge server and find the limits of its hardware and my tuning. Here are the server stats:
* 2x Xeon 2.8 with SMP compiled, hyperthreading NOT compiled in the kernel
* 4 GB of RAM, 8 GB of swap on RAID 1
2004 Aug 06
4
FreeBSD in general
Hello everyone, I am very tempted to try icecast as the tool of choice for running a live audio stream for the radio station I work for. Now, the last time I tried the FreeBSD port of Icecast, it immediately consumed something like 95% CPU time, even without any client connected. I think someone else reported that problem, but I somehow lost track of the issue - has this been
2006 Apr 12
1
powerd not behaving with an Asus A8V-MX and Athlon 64 X2 3800+
...nc: 0
vfs.nfsrv.commit_blks: 0
vfs.nfsrv.commit_miss: 0
vfs.nfsrv.realign_test: 0
vfs.nfsrv.realign_count: 0
vfs.nfsrv.gatherdelay: 10000
vfs.nfsrv.gatherdelay_v3: 0
vfs.ffs.doasyncfree: 1
vfs.ffs.doreallocblks: 1
vfs.ffs.compute_summary_at_mount: 0
net.local.stream.sendspace: 8192
net.local.stream.recvspace: 8192
net.local.dgram.maxdgram: 2048
net.local.dgram.recvspace: 4096
net.local.inflight: 0
net.local.taskcount: 0
net.local.recycled: 0
net.inet.ip.portrange.lowfirst: 1023
net.inet.ip.portrange.lowlast: 600
net.inet.ip.portrange.first: 49152
net.inet.ip.portrange.last: 65535
net.inet.ip.portrange...
2012 Mar 30
6
9-STABLE, ZFS, NFS, ggatec - suspected memory leak
...o save bandwidth to the backend, compressratio around 1.05 to 1.15), atime is off. There is no special tuning in loader.conf (except that I tried to limit the ZFS ARC to 8GB lately, which doesn't change a lot). sysctl.conf has:
kern.ipc.maxsockbuf=33554432
net.inet.tcp.sendspace=8388608
net.inet.tcp.recvspace=8388608
kern.maxfiles=64000
vfs.nfsd.maxthreads=254
Without the first three, zfs+ggate goes bad after a short time (checksum errors, stall); the latter are mainly for NFS and some regular local cleanup stuff. The machines have 4 em and 2 igb network interfaces. 3 of them are dedicated links (wi...
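When chasing a suspected kernel memory leak under this kind of load, a common first step is to watch network buffer usage over time; a minimal sketch with stock FreeBSD tools (the commands are standard, the interpretation is the assumption):

# mbuf/cluster usage summary; a count that keeps growing while traffic
# stays steady points at a leak rather than normal caching
netstat -m

# Per-zone kernel allocator statistics for socket-related zones
vmstat -z | grep -i socket

# Number of open sockets system-wide
sysctl kern.ipc.numopensockets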
2009 Nov 30
2
em interface slow down on 8.0R
Hi, I noticed that the network connection of one of my boxes became significantly slower just after upgrading it to 8.0R. The box has an em0 (82547EI) and worked fine with 7.2R. The symptoms are:
- A ping to a host on the same LAN takes 990ms RTT; it reduces gradually to around 1ms, and then it returns to around 1s. It changed at a rate of about 2ms per ping.
- The response is quite slow, but no packet
2008 May 28
2
Sockets stuck in FIN_WAIT_1
...ve tried different versions of Apache and I've tried with and without the accf_http kernel filter. Here is what I have on the server now:
sysctl.conf:
kern.maxfiles=65535
kern.maxfilesperproc=16384
kern.ipc.maxsockbuf=4194304
kern.ipc.somaxconn=1024
net.inet.tcp.sendspace=8192
net.inet.tcp.recvspace=8192
net.inet.tcp.keepidle=900000
net.inet.tcp.keepintvl=30000
net.inet.tcp.msl=5000
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.tcp.inflight_enable=1
and loader.conf:
accf_http_load="YES"
kern.ipc.nmbclusters=32768
net.inet.tcp.tcbhashsize=4096
kern.ipc.maxsockets=13107...
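A small sketch of how to see how many sockets are piling up in each TCP state on FreeBSD; the awk field position assumes the standard netstat column layout:

# Tally sockets per TCP state (ESTABLISHED, FIN_WAIT_1, ...); the state is the sixth column
netstat -an -p tcp | awk '/^tcp/ {count[$6]++} END {for (s in count) print s, count[s]}'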
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
...=0
net.inet.icmp.drop_redirect=1
net.inet.tcp.drop_synfin=1
#net.inet.tcp.icmp_may_rst=0
#net.inet.udp.blackhole=1
#net.inet.tcp.blackhole=2
net.inet6.ip6.accept_rtadv=0
net.inet6.icmp6.rediraccept=0
kern.ipc.maxsockets=1000000
net.inet.tcp.maxtcptw=200000
kern.ipc.nmbclusters=262144
net.inet.tcp.recvspace=65536
net.inet.tcp.sendspace=65536
kern.ipc.somaxconn=10240
net.inet.ip.portrange.first=2048
net.inet.ip.portrange.last=65535
net.inet.tcp.msl=5000
net.inet.tcp.fast_finwait2_recycle=1
net.inet.ip.intr_queue_maxlen=4096
#net.inet.tcp.ecn.enable=1
net.inet.icmp.icmplim=5000
----
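Since unp_gc() walks every UNIX-domain socket on each pass, a rough idea of how many such sockets exist, and which kernel thread is burning the CPU, helps narrow this down; a hedged sketch with stock FreeBSD tools:

# Rough count of UNIX-domain sockets (includes a couple of header lines)
netstat -f unix | wc -l

# Kernel threads sorted by CPU; a spinning taskqueue thread shows up here
top -SH -o cpu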
2013 Jun 19
3
shutdown -r / shutdown -h / reboot all hang and don't cleanly dismount
Hello -STABLE@, I've seen this situation seemingly at random on a number of both physical 9.1 boxes and VMs for at least 6-9 months. I finally have a physical box here that reproduces it consistently and that I can reboot easily (i.e. not a production/client server). No matter what I do:
reboot
shutdown -p
shutdown -r
this specific server will stop at "All buffers
2004 Nov 17
9
serious networking (em) performance (ggate and NFS) problem
Dear best guys, I really love 5.3 in many ways, but here are some unbelievable transfer rates I got after I went out and bought a pair of Intel Gigabit Ethernet cards to solve my performance problem (*laugh*): (In short, see *** below) Tests were done with two Intel Gigabit Ethernet cards (82547EI, 32bit PCI desktop adapter MT) connected directly without a switch/hub and "device