Displaying 8 results from an estimated 8 matches for "fin_wait1".
2007 Sep 06
0
Server crashes...
...7278 FIN_WAIT2
tcp 0 0 localhost:37384 213.41.23.61:80 TIME_WAIT
tcp 0 0 localhost:80 72.232.110.18:4972 TIME_WAIT
tcp 0 0 localhost:80 60.28.157.113:2827 TIME_WAIT
tcp 0 1 localhost:80 121.10.164.117:13600 FIN_WAIT1
tcp 0 0 192.168.1.21:1521 192.168.1.21:32771 ESTABLISHED
tcp 0 0 localhost:80 125.45.81.248:1408 ESTABLISHED
tcp 0 0 localhost:80 116.25.7.188:64423 ESTABLISHED
tcp 0 0 localhost:80 72.232.234.146:56469 ES...
2007 Apr 23
1
NAT: pings/DNS works but not the rest
...eserverside.com
--12:11:51-- http://www.theserverside.com/
=> `index.html'
Resolving www.theserverside.com... 65.214.43.44
Connecting to www.theserverside.com|65.214.43.44|:80... connected.
HTTP request sent, awaiting response...
Netstat shows one connection in state FIN_WAIT1:
tcp 0 110 10.0.0.51:57142 65.214.43.44:80 FIN_WAIT1
Kernel version:
2.6.19-4-generic #2 SMP Thu Apr 5 06:06:18 UTC 2007 i686 GNU/Linux
Iptables output on Dom0:
root@ishtar01:~# iptables -L -n
Chain INPUT (policy ACCEPT)
target prot opt source destinat...
2013 Jun 28
0
Re: kernel panic in skb_copy_bits
...f812605bb>] memcpy+0xb/0x120
>
>
> Per vmcore, the socket info as below:
> ------------------------------------------------------------------------------
> <struct tcp_sock 0xffff88004d344e00> TCP
> tcp 10.1.1.11:42147 10.1.1.21:3260 FIN_WAIT1
> windows: rcv=122124, snd=65535 advmss=8948 rcv_ws=1 snd_ws=0
> nonagle=1 sack_ok=0 tstamp_ok=1
> rmem_alloc=0, wmem_alloc=10229
> rx_queue=0, tx_queue=149765
> rcvbuf=262142, sndbuf=262142
> rcv_tstamp=51.4 s, lsndtime=0.0 s ago...
2008 Jun 05
0
tcp_tw_recycle / tcp_tw_reuse
...not because there's no apparent setting for
that).
Currently I have the system configured to take a lower amount of traffic;
it hovers around 47% CPU at around 833 req/s, with 43k
connections in these states:
#### state
------------------
65 CLOSING
94 ESTABLISHED
172 FIN_WAIT1
50 FIN_WAIT2
10 LAST_ACK
497 SYN_RECV
43480 TIME_WAIT
On a side note, the 'active tcp sockets' count reported by sar seems wildly
inaccurate: it reports only ~10 active TCP sockets, and barely varies
between the system being idle and the system being maxed out.
The docs I can find s...
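The ~43k TIME_WAIT figure above is roughly what the stated request rate predicts. A back-of-envelope sketch, assuming the Linux default 60-second TIME_WAIT holding time and one connection per request (neither is stated in the thread):

```python
# Steady-state TIME_WAIT population is approximately
# request rate * TIME_WAIT holding time (assumptions noted above).
req_per_sec = 833       # request rate from the post
time_wait_secs = 60     # Linux default TCP_TIMEWAIT_LEN (assumption)

expected_time_wait = req_per_sec * time_wait_secs
print(expected_time_wait)   # 49980 -- same order as the observed 43480
```

The agreement suggests the TIME_WAIT count is simply the expected steady state for this load, which is why the thread turns to tcp_tw_reuse/tcp_tw_recycle rather than looking for a leak.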
2009 Apr 05
1
select() hangs in sftp_server_main()
...# cat /proc/net/sockstat
sockets: used 304
TCP: inuse 444 orphan 302 tw 152 alloc 451 mem 5280
UDP: inuse 4
RAW: inuse 0
FRAG: inuse 0 memory 0
root at dl:~# netstat -tan | awk '{print $6}' | sort | uniq -c
2 CLOSE_WAIT
121 CLOSING
1 established)
109 ESTABLISHED
17 FIN_WAIT1
9 FIN_WAIT2
1 Foreign
300 LAST_ACK
20 LISTEN
2 SYN_RECV
433 TIME_WAIT
It also doesn't seem to be out of file descriptors, but I'm not 100%
sure of that. And even if it were, wouldn't that produce an error rather
than a hang?
It does seem to be somewhat related to...
2006 Dec 19
0
connection unexpectedly closed
...t_cleanup(code=12, file=io.c, line=463): about to call exit(12)
This always occurs after ~1180000 bytes, not at a particular file. During
the last 60 seconds a netstat on the Linux box says:
> tcp 0 13442 pippi:54779 10.10.10.90:ssh ESTABLISHED
> tcp 0 4594 pippi:59693 10.10.10.90:ssh FIN_WAIT1
...and on the Windows box (German-locale output; ABHÖREN = LISTEN, HERGESTELLT = ESTABLISHED):
> Aktive Verbindungen
>
> Proto Lokale Adresse Remoteadresse Status PID
> TCP taquiri18:22 10.10.10.90:0 ABHÖREN 248 [sshd.exe]
> TCP taquiri18:22 pippi:54779 HERGESTELLT 248 [sshd.exe]
> TCP taquiri18:22...
2004 Aug 06
1
Icecast2 and IceS2 client problem
...), 44100 Hz, quality 3.000000
(etc., etc., etc.)
relevant netstat output:
tcp 0 0 10.10.20.4:8000 0.0.0.0:* LISTEN
tcp 0 0 10.10.20.4:33300 10.10.20.4:8000 ESTABLISHED
tcp 0 14314 10.10.20.4:8000 10.10.20.21:1028 FIN_WAIT1
tcp 0 0 10.10.20.4:8000 10.10.20.4:33300 ESTABLISHED
tcp 0 1355 10.10.20.4:8000 10.10.20.11:3822 ESTABLISHED
(10.10.20.21 is a debian system running xmms, and 10.10.20.11 is a winXP system
running Winamp 2.8. 10.10.20.4 is the Icecast server...
2011 Dec 30
3
imap process limits problem
I am having a problem with the number of concurrent processes that I cannot
seem to diagnose adequately; it may be a bug. This will be a bit
long, but usually more info is better.
I am running dovecot 2.0.16 on a CentOS 5 x86_64 server with the mailstore
on gfs (output from dovecot -n at bottom). This is an imap issue. This is
mostly to do with one client, but none of my tests indicate an