similar to: panic with search

Displaying 20 results from an estimated 100 matches similar to: "panic with search"

2005 Sep 09 · 1 reply · 1.0alpha1: stack frame core
Hi, today's core dump from 1.0alpha1 came from a syslog message of: IMAP(user): pool_data_stack_realloc(): stack frame changed. gdb info on the resulting core dump is attached. Question: how many people are building/using dovecot 1.0alpha1 with gcc 4.0.1 versus gcc 3.4.x? I am wondering whether these issues come from the compiler rather than from dovecot itself. Jeff Earickson, Colby College
2009 Nov 02 · 2 replies · dovecot-1.2.6: Panic: pool_data_stack_realloc(): stack frame changed
When playing with large numbers of IMAP keywords on dovecot-1.2.6, imap crashed: Panic: pool_data_stack_realloc(): stack frame changed. Looks like either maildir_file_do() shouldn't T_BEGIN/T_END or the keywords array should start larger.
0 libSystem.B.dylib 0x00007fff875f4eba __kill + 10
1 libSystem.B.dylib 0x00007fff875f4eac kill + 14
2 libSystem.B.dylib
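The mechanism behind this panic, as described in the report above, is that memory allocated from the data stack in one frame is grown after T_BEGIN has pushed a new frame. A minimal sketch of that pattern using Dovecot's data-stack pool API (illustrative only, not the actual maildir_file_do() code; function and buffer names are made up):

    #include "lib.h"   /* Dovecot liblib: data stack, memory pools */

    static void grow_across_frames(void)
    {
        /* Pool backed by the data stack frame that is current here
         * (normal Dovecot code always has a frame active). */
        pool_t pool = pool_datastack_create();
        char *buf = p_malloc(pool, 64);

        T_BEGIN {
            /* A new data stack frame is now active.  Growing the old
             * allocation here makes pool_data_stack_realloc() notice
             * that the frame changed and panic, exactly as reported. */
            buf = p_realloc(pool, buf, 64, 1024);
        } T_END;
    }

Either of the poster's suggestions avoids it: don't open a new frame around the code that grows the array, or start the array large enough that it never needs to grow inside the nested frame.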
2010 Jun 14 · 1 reply · Patch to fix leak in imap_refresh_proctitle in beta[5, 6]
The "imap" process of dovecot-2.0.beta[5,6] grows very large (I impose no system limits), e.g. exceeding 4.8GB on a 64-bit system. These messages appear in the logs:
Warning: Growing pool 'imap client' with: 2048
Warning: Growing pool 'Cache fields' with: 2048
Warning: Growing data stack with: 32768
Warning: Growing data stack with: 65536
Warning: Growing data stack
2011 Mar 25 · 1 reply · imaptest assertion failure
While trying to use imaptest (-20100922, from http://dovecot.org/nightly/imaptest/imaptest-latest.tar.gz) with the included tests directory, it assert-crashes after a suspicious connect() failure:
$ ./imaptest user=foo pass=bar host=localhost test=tests
Error: connect() failed: No route to host
Panic: file test-exec.c: line 903 (test_send_lstate_commands): assertion failed:
2013 Jul 04 · 1 reply · dovecot 2.2 Panic: pool_data_stack_realloc(): stack frame changed
Hi again, we've been trying dovecot 2.2 in our setup and we see thousands of messages like these:
Jul 4 12:29:47 pop01 dovecot: lmtp(2899): Debug: auth input: rigakis2 at otenet.gr home=/var/mail/folders/U/9/5/rigakis2 quota_rule=*:storage=50M uid=531846 gid=100 mail=mbox:~/:INBOX=/var/mail/U/9/5/rigakis2:INDEX=/indexes/4/1/b/rigakis2 at otenet.gr
Jul 4 12:29:47 pop01 dovecot:
2009 Jul 29 · 1 reply · sieve 0.1.8 raw backtrace
Hi, I had two problems with deliver / sieve. The first one is not reproducible anymore. I'm using Debian unstable (amd64), often dist-upgraded, with some experimental stuff too (Wine, I think). The first basically triggered a backtrace when confronted with an email that had SpamAssassin's report headers prepended (user_prefs: report_safe 0). However, I don't have that backtrace anymore, sorry. The second
2007 Oct 01 · 1 reply · imap sort assertion failure
The following error occurred after a "SORT (TO) US-ASCII ALL" command. This only seems to happen with "to"; all our other tests work as expected.
Oct 1 20:32:49 dovecot: IMAP(example at example.com): pool_data_stack_realloc(): stack frame changed
Oct 1 20:32:49 dovecot: IMAP(example at example.com): Raw backtrace: imap [0x80c5ed1] -> imap [0x80c5dec] -> imap
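For reference, the crash described above can presumably be reproduced with a raw IMAP session along these lines (the login, mailbox, and tags are placeholders; only the SORT line comes from the report):

    a1 LOGIN example@example.com secret
    a2 SELECT INBOX
    a3 SORT (TO) US-ASCII ALL
    a4 LOGOUT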
2012 Oct 31 · 1 reply · backtrace for non-existent %{ldap:attr} on login
Hello, I'm fetching the user and auth data from LDAP; this is the string: pass_attrs = uid=user,userPassword=password,homeDirectory=userdb_home,mailUidNumber=userdb_uid,mailGidNumber=userdb_gid,mailLocationDovecot=userdb_mail,uid=userdb_user,=userdb_quota_rule=*:bytes=%{ldap:mailQuotaBytes},
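Read attribute by attribute, the pass_attrs mapping above is (my annotation only; dovecot-ldap.conf.ext keeps it on one line):

    uid                 -> user
    userPassword        -> password
    homeDirectory       -> userdb_home
    mailUidNumber       -> userdb_uid
    mailGidNumber       -> userdb_gid
    mailLocationDovecot -> userdb_mail
    uid                 -> userdb_user
    (no attribute)      -> userdb_quota_rule = *:bytes=%{ldap:mailQuotaBytes}

Per the subject, the backtrace appears when %{ldap:mailQuotaBytes} refers to an attribute that is missing from the user's LDAP entry.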
2012 Jun 15 · 3 replies · doveadm backup panic
Using the latest auto build didn't help. This happens only with a specific account.
# doveadm -o imapc_user=----- at domain.com -o imapc_password=---- backup -u =----- at domain.com -R imapc:
dsync(---- at domain.com): Panic: pool_data_stack_realloc(): stack frame changed
dsync(---- at domain.com): Error: Raw backtrace: /usr/lib/dovecot/libdovecot.so.0(+0x4209a) [0xb762b09a] ->
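For context, the general shape of such an imapc-based backup run is roughly the following (hypothetical host, user, and password; not the poster's redacted command):

    doveadm -o imapc_host=imap.example.com \
            -o imapc_user=user@example.com \
            -o imapc_password=secret \
            backup -u user@example.com -R imapc:

doveadm reads the remote mailbox through the imapc location and mirrors it into the user's locally configured mail location; -R reverses the sync direction so the remote side is the source.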
2007 Oct 12 · 1 reply · dovecot 1.1beta2 and dovecot-sieve 1.1.2 - crash in LDA
Hi, I upgraded to dovecot 1.1beta2 and the latest dovecot-sieve release yesterday, and have one single mail that repeatedly fails to be delivered. Log (reformatted):
Oct 12 15:19:19 vs02 deliver(bernilrz): pool_data_stack_realloc(): stack frame changed
Oct 12 15:19:19 vs02 deliver(bernilrz): Raw backtrace: /usr/local/libexec/dovecot/deliver(i_syslog_panic_handler+0x1e) [0x47117e] ->
2009 Jul 12 · 1 reply · How to create managesieve core dumps
I've found this in my dovecot.log:
,--[ /path/to/dovecot.log ]--
| Jul 11 01:44:29 dovecot: Info: Dovecot v1.2.1 starting up
| ...
| Jul 11 01:50:21 auth(default): Info: client in: AUTH 1 PLAIN service=sieve lip=192.168.111.222 rip=192.168.111.122 lport=12000 rport=35084 resp=<hidden>
| Jul 11 01:50:21 auth(default): Info: sql(j.doe at
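To actually get a core file out of a crashing Dovecot service, the usual OS-level preparation looks roughly like this (generic Linux steps, not specific to managesieve or to the poster's setup; the paths, and any Dovecot-version-specific settings that may also be needed, are assumptions):

    # in the shell that starts dovecot, so the limit is inherited
    ulimit -c unlimited
    # allow processes that drop privileges to dump core (Linux)
    sysctl -w fs.suid_dumpable=2
    # put cores somewhere predictable
    sysctl -w kernel.core_pattern=/var/core/core.%e.%p
    # restart dovecot from this shell
    /usr/sbin/dovecot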
2017 Mar 22 · 0 replies · replicator crashing - oom
Think I got it:
#0 0x00007fddaf597c37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007fddaf59b028 in __GI_abort () at abort.c:89
#2 0x00007fddaf9c0c86 in default_fatal_finish (type=<optimized out>, status=status@entry=0) at failures.c:201
#3 0x00007fddaf9c0d6e in i_internal_fatal_handler (ctx=0x7fff7197d000, format=<optimized out>,
2017 Mar 24 · 3 replies · replicator crashing - oom
Sorry for the re-post - just want to make sure you saw this:
#0 0x00007fddaf597c37 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007fddaf59b028 in __GI_abort () at abort.c:89
#2 0x00007fddaf9c0c86 in default_fatal_finish (type=<optimized out>, status=status@entry=0) at failures.c:201
#3 0x00007fddaf9c0d6e in i_internal_fatal_handler
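Backtraces like the one above are typically extracted from a core file non-interactively with gdb; a sketch, where the binary and core paths are assumptions and differ per distribution:

    gdb -batch -ex 'bt full' /usr/lib/dovecot/replicator /var/core/core.replicator.12345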
2009 Jul 28 · 0 replies · Xen 3.2 @ Debian 5.0 / Debian 4.0 DomU crashes daily
Hello, on my Xen host (Debian 5.0) I have a few domUs running; most of them are Lenny, and a couple are running Etch (Debian 4.0). All domUs are running stable except for one (Etch / Debian 4.0). This one crashes almost daily and I wasn't able to figure out why. Inside the domU there is nothing installed except for the base system and Zimbra ZCS 5.0 and its
2001 Sep 09 · 1 reply · Backtrace off Lithium's stream
Here it is, all nice and decoded:
---
(gdb) bt
#0 0x4003be91 in _vds_shared_init () at eval.c:41
#1 0x4003ce36 in vorbis_synthesis_init () at eval.c:41
#2 0x4002ff23 in _make_decode_ready () at eval.c:41
#3 0x4003039c in _process_packet () at eval.c:41
#4 0x4003238e in ov_read () at eval.c:41
#5 0x0804a0d1 in alarm ()
#6 0x0804993b in alarm ()
#7 0x400b6177 in __libc_start_main
2002 Feb 24 · 0 replies · Problem with codepages and localized Windows
Hello. I am the admin of our Linux student server and I have configured Samba. Everything works perfectly except for one thing. We are serving Windows 95 machines. These are the first version of Win95 (not OSR2), localized for the Czech Republic. It uses codepage CP1250, which is not shipped with the Samba distribution. As in other Central European countries, it uses codepage 852 under DOS. So I set up: client
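The truncated "So I set up: client ..." is presumably heading toward the Samba 2.x-era codepage parameters. A sketch of what that configuration typically looked like (the exact values here are my assumption, not the poster's config; the missing CP1250 definition is the problem being described):

    [global]
        client code page = 852        # DOS codepage used by the Czech clients
        character set = ISO8859-2     # charset used on the Unix filesystem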
2004 Aug 06 · 2 replies · Coredumps when --enable-sse is selected
System: Linux 2.4.25, glibc-2.3.2, gcc-3.2.3 (weird palindrome there), on a Willamette-core Pentium 4 (1.6 GHz) system. I've tried both the Speex 1.1.5 release and the current CVS (which self-identifies as 1.1.4), and the result is the same. I suspect some funk in the use of the SSE intrinsics macros. Backtrace:
#0 0x40024594 in filter_mem2_10 (x=0x805f31c, _num=0x8061fb8, _den=0x8061fe4,
2004 Aug 06 · 0 replies · Coredumps when --enable-sse is selected
Hi, I've tried the same configure options on my system and it doesn't crash. I have the same glibc and gcc 3.3.2 (can you see if a newer gcc works?). Also, could you explore a bit with different options so we can narrow it down? For example, does it work with the default CFLAGS, or without --vbr or --dtx? Last thing: maybe it's the file. If so, please send me the smallest sample
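The narrowing-down being asked for would look roughly like this on the reporter's side (file names and the src/speexenc path are placeholders; the flags themselves come from the reply above):

    # rebuild without SSE to confirm the baseline works
    ./configure && make
    # rebuild with SSE, then try the individual encoder options
    ./configure --enable-sse && make
    ./src/speexenc sample.wav out.spx          # default settings
    ./src/speexenc --vbr sample.wav out.spx    # VBR only
    ./src/speexenc --dtx sample.wav out.spx    # DTX only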
2004 Sep 10 · 0 replies · ERROR: mismatch in decoded data, verify FAILED!
On Sun, Jun 24, 2001 at 02:30:56PM -0700, Josh Coalson wrote:
> There have been reports of -9 using huge amounts of memory. -9 is really
> theoretical, but people always seem to want to try the max setting. Anyway,
> that's not an excuse but figuring out why -9 is using so much memory is lower
> on my list than other stuff. -8 should get within 0.01% of -9 and is pretty
2003 Sep 14 · 0 replies · ov_clear(&vorbis) segfaults? (Backtrace and test case)
Well, I've done more work on this today and done the following:
- I simplified the test program to just under 125 lines of code.
- I discovered that the code works fine on my friend's Darwin machine.
- I discovered that the code works fine on my friend's i386 Linux machine.
- I discovered that the code fails on ALL of my i386 Red Hat 9 machines. (Even the ones where I've rebuilt
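The ~125-line test program isn't included in the excerpt, but the decode loop it exercises presumably boils down to the standard vorbisfile open/read/clear sequence. A minimal, hypothetical reconstruction (not the poster's actual code):

    #include <stdio.h>
    #include <vorbis/vorbisfile.h>

    int main(int argc, char **argv)
    {
        OggVorbis_File vf;
        char buf[4096];
        int section = 0;

        FILE *fp = argc > 1 ? fopen(argv[1], "rb") : NULL;
        if (fp == NULL || ov_open(fp, &vf, NULL, 0) < 0)
            return 1;

        /* Decode the whole file, discarding the PCM. */
        while (ov_read(&vf, buf, sizeof(buf), 0, 2, 1, &section) > 0)
            ;

        ov_clear(&vf);   /* the call reported to segfault on Red Hat 9 */
        return 0;
    }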