
Displaying 20 results from an estimated 5000 matches similar to: "child process killed with signal 11"

2007 Oct 23 - 4 replies - How to debug slow operation
Hello, I am no programmer at all. I have dovecot set up on two very similar machines. On one of the boxes, the dovecot server has recently been showing slower response times. It feels slower. Where it is slower, dovecot works as an auth server for postfix. I replaced exim with postfix on that box, and I think dovecot started to slow down since then. On the other machine the MTA is exim.
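A first step worth trying, sketched here as a suggestion rather than a confirmed fix: turn on dovecot's debug logging so the slow step shows up with timestamps. The settings below exist in dovecot 1.x; the log path is hypothetical.

    # dovecot.conf - verbose debug logging (dovecot 1.x settings)
    log_path = /var/log/dovecot.log   # hypothetical path
    auth_verbose = yes                # log auth attempts and failures
    auth_debug = yes                  # log each step of the auth process
    mail_debug = yes                  # log mailbox open/access details

Restarting dovecot on both machines and comparing timestamps between an auth request arriving and the reply going out should narrow the slowness down to the MTA's auth socket, PAM, or the mailbox backend.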
2008 Feb 05 - 2 replies - ssl-parameters.dat.tmp missing
Hello, My dovecot works well but I have noticed the following line in the logs: Feb 5 09:19:51 lists dovecot: ssl-build-param: Can't create temporary SSL parameters file /var/lib/dovecot/ssl-parameters.dat.tmp: No such file or directory I am not sure when it first appeared but I have not seen it before (and I have been using dovecot for almost 2 years now). $ dovecot -n # 1.0.10:
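The "No such file or directory" refers to the parent directory, so a plausible fix (an assumption drawn from the log line, not a confirmed resolution from the thread) is to recreate it and let dovecot regenerate the parameters file:

    # recreate the directory the log line points at, then restart dovecot
    mkdir -p /var/lib/dovecot
    /usr/local/etc/rc.d/dovecot restart   # hypothetical init script path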
2008 Jan 29 - 2 replies - sa learning from an imap spam folder
Hello, I am sorry if I am writing to the wrong list because honestly I do not know where else to write, but I would like to be able to set up sa-learn via cron to learn from the spam folder of a particular email account (via IMAP). Can anyone share how to do it? Is dovecot involved in it? I am using: Exim 4.69, Dovecot 1.0.10, p5-Mail-SpamAssassin-3.2.1. Thanks a lot for helping. Zbigniew Szalbot
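If the account's mail is stored as Maildir, dovecot does not need to be involved at all: sa-learn can read the folder straight from disk. A minimal crontab sketch, with the Maildir path entirely hypothetical:

    # nightly at 03:00: feed everything in the IMAP "Spam" folder to the Bayes DB
    0 3 * * * sa-learn --spam /home/vmail/example.org/user/Maildir/.Spam/cur >/dev/null 2>&1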
2007 Feb 24 - 1 reply - openssl mkcert problem
Hello, Can someone point me to what I should do to install the missing files? I am trying to generate self-signed certificates using mkcert.sh but I get the following error: $ /usr/local/share/dovecot/mkcert.sh error on line -1 of ./dovecot-openssl.cnf 6213:error:02001002:system library:fopen:No such file or
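The "./" in the error suggests mkcert.sh looks for dovecot-openssl.cnf in the current working directory. A sketch of the usual workaround, assuming the sample config ships alongside the script (the path is a guess):

    # run the script from a directory that contains dovecot-openssl.cnf
    cd /usr/local/share/dovecot   # hypothetical location of the sample .cnf
    sh mkcert.sh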
2007 Apr 30 - 1 reply - 200 pop3-login processes
Dear all, I am using dovecot 1.0.0 and have only 5 pop3 users. However, I have counted over 200 pop3-login processes. I just wonder if this is normal for dovecot (it has been running for maybe 2-3 weeks since installing the stable version). Thanks! -- Zbigniew Szalbot
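In dovecot 1.x's default high-security mode each connection gets its own login process, so the count tracks connections rather than users. The settings below are real 1.x options (the values are only illustrative) and bound how many processes may accumulate:

    # dovecot.conf (1.x)
    protocol pop3 {
      login_process_per_connection = yes   # the default: one process per connection
      login_max_processes_count = 128      # hard cap on pop3-login processes
    }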
2006 Oct 27 - 1 reply - making dovecot and exim write to one log
Hello, I am looking for some advice. I am trying to get dovecot to write to the same log as exim does. In dovecot.conf I put the exim log path /var/log/exim/mainlog. I restarted dovecot and it worked fine. However, at midnight the exim log file is rotated, and since then dovecot has stopped logging to it. In syslog.conf I put: mail.*
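A direct log_path keeps writing to the old, rotated-away inode, which matches the symptom. A sketch of the usual alternative, assuming the rotation tooling already signals syslogd: let dovecot log through syslog and route the mail facility into exim's log file.

    # dovecot.conf - leave log_path unset so dovecot logs via syslog
    #log_path =
    syslog_facility = mail

    # syslog.conf - send the mail facility to exim's mainlog
    mail.*    /var/log/exim/mainlog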
2007 Feb 17 - 1 reply - upgrade from 1.0rc7
Hello, I have been to the dovecot website but could not find information on what I need to check before I upgrade from the current 1.0rc7, which I am using on a FreeBSD 6.2 system. Will the old conf file (virtual users, no ssl) work with the upgraded dovecot? Many thanks! -- Zbigniew Szalbot
2007 Feb 17 - 1 reply - wrong dovecot GID while upgrading
Hello, I tried to upgrade dovecot from 1.0.rc7 to rc22 but got this error at the end of the upgrade process: Dovecot has reserved the groupname 'dovecot' and gid '143': ERROR: groupname 'dovecot' already in use by gid '1003' Please resolve these issues and try again: Either remove the conflicting group or, if you wish to continue using a legacy gr
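On FreeBSD (the poster's platform per the earlier thread), one way to resolve the conflict is to move the existing group onto the gid the port reserves before re-running the upgrade; this is sketched as an assumption, since the follow-up below shows the port's updating notes had their own instructions. Files still owned by gid 1003 would need a chown afterwards.

    # renumber the existing dovecot group to the gid the port expects
    pw groupmod dovecot -g 143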
2009 Mar 13 - 1 reply - pam_authenticate() failed: authentication error
Hello, I would like to ask for your help. I have noticed some error messages issued by dovecot. Mar 13 20:00:57 relay dovecot: auth-worker(default): pam(example at example.com): pam_authenticate() failed: authentication error (/etc/pam.d/dovecot missing?) Not surprisingly $ l /etc/pam.d/dovecot ls: /etc/pam.d/dovecot: No such file or directory The funny thing is that authentication does work
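The hint in the log is usually right: without /etc/pam.d/dovecot, PAM falls back to the restrictive "other" policy, which can keep working while logging errors. A minimal sketch of the missing file, assuming plain unix accounts (module names and conventions vary by OS):

    # /etc/pam.d/dovecot
    auth     required   pam_unix.so
    account  required   pam_unix.so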
2007 Feb 17 - 0 replies - [Fwd: wrong dovecot GID while upgrading]
Sorry to have bothered you - I just read the updating file...! I should have let the upgrading process remove the dovecot user... Warm regards, ZS ---------------------------- Original Message ---------------------------- Subject: wrong dovecot GID while upgrading From: "Zbigniew Szalbot" <zbyszek at szalbot.homedns.org> Date: Sat, February 17, 2007 21:10 To: dovecot at
2007 Mar 22 - 5 replies - rc2x seems to break Pegasus Mail
Hi guys, I upgraded from rc7 to rc24 a couple of weeks ago, and noticed today that Pegasus Mail's IMAP no longer connects. So I put rc27 in, and it worked, but only for a little while... I'm not sure what the problem might be; the logs look fine, and I can still use Horde and Evolution without a problem. Pegasus Mail's error is unfortunately not helpful. Restarting rc27 didn't
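When the server logs look clean but a single client misbehaves, capturing the raw IMAP conversation is the usual next step. A sketch using dovecot 1.x's rawlog wrapper; the libexec paths are an assumption and vary by install:

    # dovecot.conf - wrap the imap binary with rawlog (dovecot 1.x)
    protocol imap {
      mail_executable = /usr/local/libexec/dovecot/rawlog /usr/local/libexec/dovecot/imap
    }
    # per user to be traced; rawlog writes timestamped *.in/*.out transcripts here
    mkdir ~pmailuser/dovecot.rawlog   # hypothetical user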
2017 Sep 08 - 3 replies - GlusterFS as virtual machine storage
2017-09-08 13:44 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > I did not test SIGKILL because I suppose that if a graceful exit is bad, SIGKILL > will be as well. This assumption might be wrong, so I will test it. It would > be interesting to see the client keep working in case of a crash (SIGKILL) and not in > case of a graceful exit of glusterfsd. Exactly. If this happens, probably there
2017 Sep 08 - 4 replies - GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O even after a few minutes. SIGTERM, on the other hand, causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
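For reproducing the two cases, the only variable is which signal killall delivers to the brick process; a quick reference using the process name from the thread:

    killall glusterfsd      # default is SIGTERM (15): graceful exit path
    killall -9 glusterfsd   # SIGKILL (9): cannot be caught, simulates a hard crash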
2001 Dec 06 - 2 replies - Terminal Services are extremely slow!
Hi there, I've been trying to run Terminal Services using wine and it seems to work, but it's sloooow. I'm running a dual-boot system with Caldera Workstation 3.1, Windows 2000 Professional and Wine 20011106. Any ideas or success stories? Here are some output lines: wine --winver win2k "MTSC.exe" fixme:imm:ImmAssociateContext (0x00010025, 0x00000000): stub
2017 Sep 08 - 2 replies - GlusterFS as virtual machine storage
2017-09-08 13:07 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > OK, so killall seems to be OK after several attempts, i.e. IOPS do not stop > on the VM. Reboot caused I/O errors after maybe 20 seconds since issuing the > command. I will check the server's console during reboot to see if the VM > errors appear just after the power cycle and will try to crash the VM after >
2017 Sep 08 - 2 replies - GlusterFS as virtual machine storage
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, isn't a server hard-crash possibly too much? I mean, if reboot > reliably kills the VM, there is no doubt a network crash or poweroff > will as well. IIUP, the only way to keep I/O running is to gracefully exit glusterfsd. killall should send signal 15 (SIGTERM) to the process, maybe a bug in signal
2017 Jul 10 - 0 replies - Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
I upgraded from 3.8.12 to 3.8.13 without issues. Two replicated volumes with an online update, clients upgraded first and followed by the servers: "stop glusterd, pkill gluster*, update gluster*, start glusterd, monitor healing process and logs, after completion proceed to the other node". Check the gluster logs for more information. -- Respectfully Mahdi A. Mahdi
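The quoted procedure, expanded into a shell sketch for one server node; the service manager, package manager and the volume name "vmstore" are assumptions:

    # repeat on one replica node at a time
    systemctl stop glusterd
    pkill glusterfs; pkill glusterfsd   # stop remaining brick/client processes
    yum update 'glusterfs*'             # platform-specific
    systemctl start glusterd
    gluster volume heal vmstore info    # proceed only once no entries remain unhealed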
2017 Jul 11 - 3 replies - Upgrading Gluster revision (3.8.12 to 3.8.13) caused underlying VM fs corruption
On Mon, Jul 10, 2017 at 10:33 PM, Mahdi Adnan <mahdi.adnan at outlook.com> wrote: > I upgraded from 3.8.12 to 3.8.13 without issues. > > Two replicated volumes with an online update, clients upgraded first and > followed by the servers: "stop glusterd, pkill gluster*, update > gluster*, start glusterd, monitor healing process and logs, after > completion proceed to
2017 Sep 08 - 2 replies - GlusterFS as virtual machine storage
I would prefer the behavior were different from what it is, i.e. I/O stopping. The argument I heard for the long 42-second timeout was that MTBF on a server is high, and that the client reconnection operation is *costly*. Those were arguments to *not* change the ping timeout value down from 42 seconds. I think it was mentioned that low ping timeout settings could lead to high CPU loads with many
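For anyone wanting to experiment with that trade-off, the knob in question is a per-volume option; a sketch with a hypothetical volume name:

    gluster volume get vmstore network.ping-timeout     # default: 42 seconds
    gluster volume set vmstore network.ping-timeout 10  # faster failover, more reconnect churn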
2017 Sep 08 - 0 replies - GlusterFS as virtual machine storage
2017-09-08 14:11 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>: > Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few > minutes. SIGTERM on the other hand causes a crash, but this time it is > not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. > -ps So, it seems to be resilient to server crashes but not to server shutdown :)