similar to: [Bug 49] TCP conntrack entries with huge timeouts

Displaying 20 results from an estimated 120 matches similar to: "[Bug 49] TCP conntrack entries with huge timeouts"

2004 Aug 04
2
[Bug 40] system hangs, Availability problems, maybe conntrack bug, possible reason here.
https://bugzilla.netfilter.org/cgi-bin/bugzilla/show_bug.cgi?id=40 ------- Additional Comments From pmccurdy@net-itech.com 2004-08-04 06:06 ------- We have managed to replicate this bug in-house. It seems to happen to us when we have a machine acting as a NAT router that we saturate with outgoing UDP packets; we use hping2 to generate them from a workstation connected via 100 Mbit
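For illustration, a minimal shell sketch of that kind of replication, assuming hping2 on a workstation behind the NAT box; the target address, port and payload size are placeholders, not taken from the report:
  # saturate the NAT router with outgoing UDP; hping2 increments the source
  # port for every packet, so each packet creates a fresh conntrack/NAT entry
  hping2 --udp --flood -p 53 -d 64 192.0.2.10
On the router, /proc/net/ip_conntrack fills up quickly under that kind of load.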
2003 Apr 28
3
[Bug 87] 'iplimit' match is misnamed, should be 'tcplimit'
https://bugzilla.netfilter.org/cgi-bin/bugzilla/show_bug.cgi?id=87 laforge@netfilter.org changed: Status: NEW -> ASSIGNED ------- Additional Comments From laforge@netfilter.org 2003-04-28 08:25 ------- The misnomer is true. I
2004 May 15
1
RV: RV: LATENCY PROBLEMS
I thought of creating an HTB class for each user, but as you said I haven't got enough bandwidth to do so. That's why my setup only has 5 classes with WRR queues, so I make sure each user doesn't affect the other users. On top of that I have an iplimit capping each user at a maximum of 15 parallel connections. So I draw the following conclusions: A) change the link B) upgrade to kernel 2.6 and use l7
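A hedged sketch of the per-user cap described above, using the iplimit match (from patch-o-matic, later renamed connlimit); the interface name and limit are illustrative:
  # drop new TCP connections from any single address that already has
  # more than 15 parallel connections open through the router
  iptables -A FORWARD -i eth1 -p tcp --syn \
    -m iplimit --iplimit-above 15 -j DROP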
2003 Mar 20
6
[Bug 68] Kernel panic
https://bugzilla.netfilter.org/cgi-bin/bugzilla/show_bug.cgi?id=68 ------- Additional Comments From laforge@netfilter.org 2003-03-20 10:55 ------- This looks strange. The BUG in slab.c tells us that there is a GFP_ATOMIC missing. This means that we are allocating kernel memory from softirq context with only GFP_KERNEL. If I understand your backtrace correctly, what happens is: - you are
2004 May 14
9
RV: LATENCY PROBLEMS
Hello there, I'm having lots of problems with my setup here. Let me explain: I am the network administrator for my university dorm. We are about 300 users, and we have 2 ADSL connections doing load balancing with 300 kbit upstream and 2 Mbit downstream. The load balancing is working great; we are doing connection tracking so I can mark and hence prioritize interactive traffic and ACKs
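The marking half of such a setup usually looks roughly like the following sketch, assuming the stock MARK target and a tc fw filter; the mark value, device and class IDs are placeholders:
  # mark bare TCP ACKs (small packets) and interactive ssh traffic
  iptables -t mangle -A PREROUTING -p tcp --tcp-flags SYN,RST,ACK ACK \
    -m length --length :64 -j MARK --set-mark 1
  iptables -t mangle -A PREROUTING -p tcp --dport 22 -j MARK --set-mark 1
  # steer marked packets into a high-priority HTB class on the uplink
  tc filter add dev eth0 parent 1:0 protocol ip handle 1 fw flowid 1:10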
2004 Jun 04
1
samba with acl support as member of a samba controlled domain?
Hi, I am running a Samba PDC and a Samba member server in its domain. The member server acts as a file server with Unix ACLs working. Is it possible to get these ACLs working under Samba too? The docs seem to say that ACLs are only possible if Samba is a member server in an NT domain using winbind. In my case the PDC acts as an LDAP server and the member server gets the unix
2003 Apr 25
0
[Bug 87] New: 'iplimit' match is misnamed, should be 'tcplimit'
https://bugzilla.netfilter.org/cgi-bin/bugzilla/show_bug.cgi?id=87 Summary: 'iplimit' match is misnamed, should be 'tcplimit' Product: netfilter/iptables Version: linux-2.4.x Platform: All OS/Version: All Status: NEW Severity: normal Priority: P2 Component: unknown AssignedTo:
2007 Sep 14
2
Problems with quota dict in 1.1.alpha4
Hello, there are problems with the quota dict when multiple dovecot deliver processes are launched in parallel. It can be reproduced by sending a mail to multiple different recipients - the mail is delivered OK to all recipients, but the quotas are not updated correctly in SQL. I looked at the code and it seems that the problem is somewhere in the dict cache. If I configure in Postfix max number of
2007 Dec 14
4
v1.1.beta11 quota plugin and dict server
Hello, when dovecot is started, it prints the following error to the console: ILoading modules from directory: /usr/local/dovecot/lib/dovecot/imap IModule loaded: /usr/local/dovecot/lib/dovecot/imap/lib10_quota_plugin.so IEffective uid=65534, gid=65534, home= Idict quota: user = dump-capability, uri = proxy:/var/run/dovecot/dict-server:quotadict Enet_connect_unix(/var/run/dovecot/dict-server)
2007 Nov 20
1
1.1.beta8 crashes with segfault when SIGHUP
Hello, dovecot crashes when it receives a -HUP signal. It always happens if there has been some activity - for example, if I start dovecot, check any account through POP3 and then send -HUP to the dovecot process, it crashes with the following log entry: segfault at 00000008 eip 0804d3fb esp bfdd3860 error 4 If there has been no activity at all since starting, it does not crash. My dovecot -n output: #
2004 Nov 01
2
does shorewall support more advance features of netfilter ?
e.g. string-matching CodeRed or Nimda viruses before they hit your Web server. The following rules achieve this:
  # DROP HTTP packets related to CodeRed and Nimda
  # viruses silently
  iptables -t filter -A INPUT -i $EXT_IFACE -p tcp \
    -d $IP --dport http -m string \
    --string "/default.ida?" -j DROP
  iptables -t filter -A INPUT -i $EXT_IFACE -p tcp \
    -d $IP --dport http -m string \
2007 Sep 11
2
Possible bug in authentication cache in dovecot 1.1.alpha4
Hello, it seems that there is some bug in the authentication cache code in dovecot version 1.1.alpha4 - after a login attempt with a wrong password, the correct password will also fail. I can reproduce it very easily: $telnet 10.10.10.30 110 +OK Server. <861.2.46e6c679.jZ8QYpFmU8ZN6XIq7zPhkw==@server2> user testuser +OK pass pass +OK Logged in. quit +OK Logging out. Connection closed by foreign host.
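The same reproduction can be scripted, for example with nc, so it is easy to repeat; the host and account are the ones quoted above, and the wrong password is a placeholder:
  # one login with a wrong password, then one with the correct password;
  # with the cache bug the second attempt fails as well
  printf 'USER testuser\r\nPASS wrongpass\r\nQUIT\r\n' | nc 10.10.10.30 110
  printf 'USER testuser\r\nPASS pass\r\nQUIT\r\n' | nc 10.10.10.30 110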
2008 Sep 26
2
imap-quota not working
Hello, the imap-quota plugin always returns an empty quota: a1 GETQUOTA "" * QUOTA "" () a1 OK Getquota completed. a2 GETQUOTAROOT INBOX * QUOTAROOT "INBOX" a2 OK Getquotaroot completed. quota_rule and quota_rule2 for this user are defined in the db as follows: '*:storage=5000000' '*:messages=50000' Quota for LDA is working OK. I am using
2006 Jan 28
0
Centos Folding@Home team!
CentOS now has its own Folding at Home team! If you would like to learn more about what we do, please follow this link to the Folding at Home homepage: http://folding.stanford.edu/ New to distributed computing? It's not a problem. Here is the simplest description: Many computers working on separate pieces of the same puzzle are faster than a few computers
2007 Sep 24
1
v1.1.beta1 POP3 delete problem
Hello, the POP3 server does not delete mails when the user quits the POP3 session, but only at the next login. It goes like this - the user logs in with the USER and PASS commands, and at this moment new messages are moved from the /new to the /cur maildir folder. Then messages are deleted with the DELE command (before deleting there can be other POP3 commands) and the user quits the session. The server says "+OK Logging out,
2007 Dec 11
1
minor issue - dovecot -n output with 1.1.beta11
Hello, when running diff on the 'dovecot -n' outputs from beta10 and beta11 (for the same config file), I noticed that the beta11 'dovecot -n' output no longer shows the following parameters, which are set to non-default values (and which 'dovecot -a' shows): login_processes_count login_max_processes_count first_valid_uid first_valid_gid cache_size cache_ttl
2007 Dec 11
1
1.1.beta10 pop3 process hangs with 100% CPU
Hello, we have observed a pop3 process which got stuck consuming all available CPU. It seems that it happened because of some kind of abnormal POP3 connection termination. Here is the strace info for this process: 13:36:05.866190 writev(1, [{"508qWWH96If+uVXeH2Zxl/hkn+plVwmI"..., 3975}, {"HP1oxt+np0o4Xtz27VQBtxx0zWfGuA3r"..., 193}], 2) = -1 EPIPE (Broken pipe) 13:36:05.866250 ---
2007 Sep 15
1
v1.1.alpha5 crashes with segmentation fault
Hello, the dovecot process crashes on startup with a segmentation fault. Here is the backtrace: (gdb) r Starting program: /usr/local/dovecot/sbin/dovecot Program received signal SIGSEGV, Segmentation fault. settings_is_active (set=0x0) at master-settings.c:505 505 if (*set->protocols == '\0') { (gdb) bt full #0 settings_is_active (set=0x0) at master-settings.c:505 No locals. #1
2003 Feb 21
1
flush ip_conntrack table manually?
I just got an 'ip_conntrack: table full, dropping packet' because a p2p application ran amok. I've killed the process, but /proc/net/ip_conntrack still has more than 7000 (now stale) entries out of a maximum of 8184. Since the table is only down to 6995 entries after ~70 minutes, I wonder if I can flush this table manually. The entries in there look like tcp 6 155674
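A hedged sketch of the usual workarounds, assuming a kernel that exposes the ip_conntrack sysctls; the later conntrack-tools package adds an outright flush command:
  # how full is the table right now?
  wc -l /proc/net/ip_conntrack
  cat /proc/sys/net/ipv4/ip_conntrack_max
  # shorten the established-TCP timeout so stale entries expire sooner
  echo 3600 > /proc/sys/net/ipv4/netfilter/ip_conntrack_tcp_timeout_established
  # with conntrack-tools installed the whole table can be flushed
  conntrack -F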
2007 Nov 27
1
1.1.beta9 deliver crashes with segfault
Hello, after upgrading from beta8 to beta9 the deliver process crashes with a segfault. Deleting old maildirs does not help. Here is the backtrace: Program received signal SIGSEGV, Segmentation fault. 0x0808e36d in mail_cache_field_get_decision (cache=0x8114da0, field_idx=128) at mail-cache-lookup.c:301 301 i_assert(field_idx < cache->fields_count); (gdb) bt #0 0x0808e36d in