Displaying 6 results from an estimated 6 matches for "fin_wait2".
2010 Jan 27 | 5 replies | sshd killed due to dos attack
Hi,
I am not sure whether to report this as a bug, so I am mailing the list.
I have an sshd (OpenSSH 3.5p1) server running on my router, and when I run
tcpjunk against that port, sshd gets killed after some time.
192.168.71.1 is my sshd server and 192.168.71.4 is the client from which I
send my dos attack.
This is the tcpjunk command I gave against the ssh server:
#tcpjunk -s 192.168.71.1 -p 22 -c req -i 100
req session
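A quick way to see what such a flood does to the daemon is to count the
states of the connections hitting port 22 while tcpjunk runs; a minimal
sketch, assuming Linux netstat output where the local address is column 4
and the state is column 6:

    netstat -tan | awk '$4 ~ /:22$/ {print $6}' | sort | uniq -c

OpenSSH also has a MaxStartups setting in sshd_config that caps concurrent
unauthenticated connections, which is the usual first line of defence
against this kind of flood.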
2007 Sep 06 | 0 replies | Server crashes...
...0 TIME_WAIT
tcp 0 1     localhost:36886 66.37.52.232:80      SYN_SENT
tcp 0 10164 localhost:80    58.187.121.173:10350 ESTABLISHED
tcp 0 0     localhost:80    85.140.195.98:4052   TIME_WAIT
tcp 0 0     localhost:80    66.199.253.130:57278 FIN_WAIT2
tcp 0 0     localhost:37384 213.41.23.61:80      TIME_WAIT
tcp 0 0     localhost:80    72.232.110.18:4972   TIME_WAIT
tcp 0 0     localhost:80    60.28.157.113:2827   TIME_WAIT
tcp 0 1     localhost:80    121.10.164.117:13600 FIN_WAIT1
tcp...
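A raw dump like this is easier to read summarized by state; a sketch,
assuming the state is the sixth column of netstat -tan output (the two
netstat header lines show up as stray entries such as "1 Foreign"):

    netstat -tan | awk '{print $6}' | sort | uniq -c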
2008 Jun 05 | 0 replies | tcp_tw_recycle / tcp_tw_reuse
...'s no apparent setting for that).
Currently I have the system configured to take a lower amount of traffic;
it hovers at around 47% CPU at around 833 req/s, and it has 43k
connections in these states:
count state
------------------
   65 CLOSING
   94 ESTABLISHED
  172 FIN_WAIT1
   50 FIN_WAIT2
   10 LAST_ACK
  497 SYN_RECV
43480 TIME_WAIT
On a side note, the 'active tcp sockets' figure reported by sar seems wildly
inaccurate: it reports only ~10 active tcp sockets, and it barely varies
between the system being idle and the system being maxed out.
The docs I can find say seek expert as...
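For reference, the two knobs in the subject line are Linux sysctls. A
minimal sketch of inspecting them and enabling the safer one; note that
tcp_tw_reuse only helps connections the machine itself opens outbound, so
it does little for TIME_WAIT entries left by closed inbound port-80
connections, and tcp_tw_recycle is widely discouraged because it breaks
clients behind NAT:

    # inspect the current values
    sysctl net.ipv4.tcp_tw_reuse net.ipv4.tcp_tw_recycle
    # allow reuse of TIME_WAIT sockets for new outgoing connections
    sysctl -w net.ipv4.tcp_tw_reuse=1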
2009 Apr 05 | 1 reply | select() hangs in sftp_server_main()
...ckstat
sockets: used 304
TCP: inuse 444 orphan 302 tw 152 alloc 451 mem 5280
UDP: inuse 4
RAW: inuse 0
FRAG: inuse 0 memory 0
root at dl:~# netstat -tan | awk '{print $6}' | sort | uniq -c
2 CLOSE_WAIT
121 CLOSING
1 established)
109 ESTABLISHED
17 FIN_WAIT1
9 FIN_WAIT2
1 Foreign
300 LAST_ACK
20 LISTEN
2 SYN_RECV
433 TIME_WAIT
It also doesn't seem to be out of file descriptors, but I'm not 100%
sure on that. And even if it were, wouldn't that produce an error, not
a hang?
It does seem to be somewhat related to the number of conn...
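The file-descriptor theory is easy to settle directly; a sketch, assuming
a Linux /proc and that the stuck process can be found by name (the pgrep
pattern here is illustrative):

    pid=$(pgrep -f sftp-server | head -1)
    ls /proc/$pid/fd | wc -l                  # fds currently open
    grep 'Max open files' /proc/$pid/limits   # the per-process ceiling
    cat /proc/sys/fs/file-nr                  # system-wide: allocated, free, max

The poster's instinct is right: exhausting descriptors surfaces as
EMFILE/ENFILE errors from accept() or open(), not as a hang in select().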
2007 Jul 19 | 1 reply | one mongrel with hundreds of CLOSE_WAIT tcp connections
...ose_wait
connections to amazon s3?
lsof -i -P | grep CLOSE_ | grep mongrel | wc -l
703
netstat | grep 56586 # an example port
tcp 1 0 localhost.localdomain:8011  localhost.localdomain:56586 CLOSE_WAIT
tcp 0 0 localhost.localdomain:56586 localhost.localdomain:8011  FIN_WAIT2
getnameinfo failed
getnameinfo failed
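The pairing in those two netstat lines tells the story: the socket whose
local end is :8011 (the mongrel side) is in CLOSE_WAIT, meaning the peer
already sent its FIN and mongrel has not yet called close() on its
descriptor, which in turn leaves the peer's end stuck in FIN_WAIT2. A
sketch for finding which mongrel process is hoarding them, assuming lsof's
second output column is the PID:

    lsof -i -P | awk '/mongrel/ && /CLOSE_WAIT/ {print $2}' | sort | uniq -c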
2007 Jul 19 | 0 replies | one mongrel with *lots* of close_wait tcp connections
...e_wait
connections to amazon s3?
lsof -i -P | grep CLOSE_ | grep mongrel | wc -l
703
netstat | grep 56586 # an example port
tcp 1 0 localhost.localdomain:8011  localhost.localdomain:56586 CLOSE_WAIT
tcp 0 0 localhost.localdomain:56586 localhost.localdomain:8011  FIN_WAIT2
getnameinfo failed
getnameinfo failed
# background loop to set the bad mongrel to debug mode during the
# close_wait period
def debug_mongrel_loop
  sleep(60) until `lsof -i -P | grep CLOSE_WAIT | grep mongrel | wc -l`.to_i > 100
  `killall -USR1 mongrel_rails`
  AdminMailer.deliver_m...