
Displaying 20 results from an estimated 200 matches similar to: "LMTP crashing heavily for my 2.2.36 installation"

2018 Jul 11
4
LMTP crashing heavily for my 2.2.36 installation
On Wed, Jul 11, 2018 at 10:46 AM, Timo Sirainen <tss at iki.fi> wrote: > On 11 Jul 2018, at 8.41, Wolfgang Rosenauer <wrosenauer at gmail.com> wrote: > > I'm running 2.2.36 (as provided by openSUSE in their server:mail repository) and at least on one of my systems LMTP is crashing regularly on certain messages (apparently a lot of them).
2018 Jul 11
0
LMTP crashing heavily for my 2.2.36 installation
On 11 Jul 2018, at 8.41, Wolfgang Rosenauer <wrosenauer at gmail.com> wrote: > Hi, > I'm running 2.2.36 (as provided by openSUSE in their server:mail repository) and at least on one of my systems LMTP is crashing regularly on certain messages (apparently a lot of them). > Sometimes (but not always) a backtrace is posted to the logs:
2018 Jul 11
0
LMTP crashing heavily for my 2.2.36 installation
Follow-up question: is there a commit that would be reasonable for me to backport into the packages, or is it too intrusive or based on heavily changed code? Thanks, Wolfgang On Wed, Jul 11, 2018 at 3:32 PM, Wolfgang Rosenauer <wrosenauer at gmail.com> wrote: > On Wed, Jul 11, 2018 at 10:46 AM, Timo Sirainen <tss at iki.fi> wrote: >> On 11 Jul 2018, at 8.41, Wolfgang
2018 Jul 12
0
LMTP crashing heavily for my 2.2.36 installation
On Wed, Jul 11, 2018 at 6:03 PM, Aki Tuomi <aki.tuomi at dovecot.fi> wrote: > One alternative is to migrate to the sdbox format, in which this is supported. > --- > Aki Tuomi > Dovecot Oy > -------- Original message -------- > From: Wolfgang Rosenauer <wrosenauer at gmail.com> > Date: 11/07/2018 18:14 (GMT+02:00) > To: Timo Sirainen <tss at
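A migration along the lines Aki suggests is normally done per user with doveadm sync (dsync in older releases); the sketch below is only illustrative, and the username and sdbox path are placeholders that have to match the real mail_location:

    # Mirror one user's existing mail store into sdbox; repeat until a run
    # reports no changes, then point mail_location at the new sdbox path.
    doveadm sync -u exampleuser sdbox:~/sdbox

    # Verify the converted mailbox before switching the user over.
    doveadm -o mail_location=sdbox:~/sdbox mailbox list -u exampleuser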
2018 Jul 12
0
LMTP crashing heavily for my 2.2.36 installation (and now with 2.3.2.1)
Hi, I will try to create a core dump later, but for now I see version 2.3.2.1 also crashing in LMTP :-( 2018-07-12T10:09:57.336062+02:00 saruman dovecot: lmtp(an007498)<11814><zrPDEdUMR1smLgAAQ/KzDw>: Fatal: master: service(lmtp): child 11814 killed with signal 6 (core dumps disabled - https://dovecot.org/bugreport.html#coredumps) 2018-07-12T10:09:57.382925+02:00 saruman dovecot:
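The "core dumps disabled" hint in that log points at https://dovecot.org/bugreport.html#coredumps; on a Linux box that usually boils down to roughly the following (a sketch only; paths and the location of the lmtp binary vary by distribution):

    # Allow core dumps from Dovecot's privilege-dropping processes.
    ulimit -c unlimited                                  # in the environment that starts dovecot
    sysctl -w fs.suid_dumpable=2
    sysctl -w kernel.core_pattern=/var/core/core.%e.%p
    mkdir -p /var/core && chmod 1777 /var/core

    # After the next crash, get a backtrace from the dump:
    gdb /usr/lib/dovecot/lmtp /var/core/core.lmtp.<pid>
    (gdb) bt full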
2018 Oct 19
2
systemd automount of cifs share hangs
> > But if I start the automount unit and ls the mount point, the shell hangs and eventually, a long time later (I haven't timed it, maybe an hour), I get a prompt again. Control-C won't interrupt it. I can still ssh in and get another session, so it's just the process that's accessing the mount point that hangs. > I don't have a
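For comparison, an on-demand cifs mount under systemd is typically a .mount/.automount pair like the sketch below (server, share, credentials file and timeouts are made-up examples); mounting with the cifs soft option and a TimeoutSec= on the mount unit is what normally keeps an unreachable server from hanging clients indefinitely:

    # /etc/systemd/system/mnt-share.mount  (unit name must match the path)
    [Unit]
    Description=Example CIFS share

    [Mount]
    What=//fileserver.example.com/share
    Where=/mnt/share
    Type=cifs
    Options=credentials=/etc/cifs-share.creds,soft
    TimeoutSec=30

    # /etc/systemd/system/mnt-share.automount
    [Automount]
    Where=/mnt/share
    TimeoutIdleSec=60

    [Install]
    WantedBy=multi-user.target

    # then: systemctl daemon-reload && systemctl enable --now mnt-share.automount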
2004 Nov 16
2
RE: basic encoder help
> I'm currently facing the same problem. I added the libFLAC++ libraries to my MSVC application. I implemented the same quality levels (0-8) as used in the FLAC frontend application. But the resulting files are remarkably different between my application and the FLAC frontend (although using the same settings). It did turn out to be something in my byte ordering in the end
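Since the byte ordering turned out to be the culprit, the interesting part is how the PCM data is handed over: libFLAC++ expects each sample as a host-endian FLAC__int32, not as raw little-endian bytes from the WAV file. A rough C++ sketch of that conversion for 16-bit stereo input (function and buffer names are invented for illustration):

    #include <FLAC++/encoder.h>
    #include <cstdint>
    #include <vector>

    // Convert 16-bit little-endian PCM bytes into the host-endian FLAC__int32
    // samples that process_interleaved() expects (2 channels assumed here).
    void encode_block(FLAC::Encoder::File &encoder,
                      const uint8_t *pcm, unsigned samples_per_channel)
    {
        std::vector<FLAC__int32> samples(samples_per_channel * 2);
        for (size_t i = 0; i < samples.size(); i++) {
            // assemble the 16-bit value explicitly, then sign-extend via int16_t,
            // so the result is correct regardless of the host's endianness
            samples[i] = (FLAC__int32)(int16_t)(pcm[2 * i] | (pcm[2 * i + 1] << 8));
        }
        encoder.process_interleaved(samples.data(), samples_per_channel);
    }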
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster as a virt store, with 3x2 distributed/replicated servers backing 16 qemu/kvm/libvirt virtual machines whose image files are stored in gluster and accessed via libgfapi. Eight of these disk images are standalone, while the other eight are qcow2 images that all share a single backing file. For the most part, this is all working very well. However, one of the gluster servers
2004 Sep 17
1
linking against the static libraries
We would like to use the static libraries in our commercial software. This software is an MFC application which is statically linked to the MFC libraries. We added LibFLAC_static.lib and LibFLAC++_static.lib but this causes an error when trying to run our application ('A required file was missing MSVCRTXX.DLL'). After looking in the Project Settings of the FLAC source, I found that the
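That error is the classic symptom of mixing C runtime flavours: the static FLAC libraries were built against one CRT setting and the application against another. A hedged reminder of what has to line up (cl.exe switches; the exact project-settings wording differs between Visual C++ versions):

    /MT   static CRT, release        /MTd  static CRT, debug
    /MD   DLL CRT, release           /MDd  DLL CRT, debug

    Rebuild LibFLAC_static.lib and LibFLAC++_static.lib with the same /M?
    switch that the MFC application uses (C/C++ -> Code Generation ->
    Runtime Library), then relink.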
2004 Nov 05
1
RE: basic encoder help
I'm currently facing the same problem. I added the libFLAC++ libraries to my MSVC application. I implemented the same quality levels (0-8) as used in the FLAC frontend application. But the resulting files are remarkably different between my application and the FLAC frontend (although using the same settings). For example: FLAC frontend (quality = 8) --------------------------------
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster-wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > It would be great if you can consider configuring an arbiter or replica 3 volume.
2007 Mar 15
5
[PATCH 0/5] fix gcc warnings in CVS HEAD
Hi, I have rewritten the patches I submitted earlier today for the CVS HEAD. Some of the changes were already committed months ago. On 2007/03/15 12:30, Timo Sirainen <tss at iki.fi> wrote: > That's ok, but I'm not sure about bsearch_insert_pos(). It's the way it is mostly because I wanted to keep the bsearch() API. If it can't return void * then maybe it could be
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than
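When the file lists match but the sizes differ, the next thing worth checking is pending self-heal state and where the extra data actually lives; a rough sketch, with the volume name as a placeholder:

    # Anything waiting to be healed between the replicas?
    gluster volume heal myvol info
    gluster volume heal myvol info split-brain

    # Per-file sizes on each brick; run on both servers and diff the output
    # to see which files account for the extra ~3G.
    du -a /var/local/brick0/data | sort -k2 > /tmp/brick-du.$(hostname).txt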
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to auto, then the first brick must always be up, irrespective of the status of the second brick. If only the second brick is up,
2020 Apr 28
3
Diagnosing IPv6 routing
On 4/28/2020 3:17 PM, Chris Adams wrote: > - gateway sends a router solicitation and gets a router advertisement with "stateful config" set, which tells the gateway to do DHCPv6 (but the default route comes from the RA) I'm not seeing any outbound IPv6 traffic from my CentOS 7 box on the WAN interface. I do see RAs being emitted from the LAN interface, from radvd. Is there
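For reference, the "stateful config" flag in an RA maps to AdvManagedFlag in radvd, and a box that forwards IPv6 only accepts RAs on its WAN side with accept_ra set to 2; a minimal sketch with placeholder interface names and a documentation prefix:

    # /etc/radvd.conf (LAN side)
    interface eth1
    {
        AdvSendAdvert on;
        AdvManagedFlag on;        # "stateful config": clients should use DHCPv6
        AdvOtherConfigFlag on;    # other config (DNS etc.) via DHCPv6
        prefix 2001:db8:1::/64
        {
            AdvOnLink on;
            AdvAutonomous off;    # addresses from DHCPv6 rather than SLAAC
        };
    };

    # WAN side: a forwarding router ignores RAs unless accept_ra is 2
    sysctl -w net.ipv6.conf.all.forwarding=1
    sysctl -w net.ipv6.conf.eth0.accept_ra=2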
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote: > > Since arbiter bricks need not be of the same size as the data bricks, if you can configure three more arbiter bricks based on the guidelines in the doc [1], you can do it live and your distribution count will also be unchanged. > I can probably find
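As I read the procedure being referenced, adding arbiter bricks to a live 3x2 distributed-replicate volume is a single add-brick call that raises the replica count; a sketch with invented host and brick names:

    # One new arbiter brick per replica set, listed in the same order
    # as the existing brick pairs.
    gluster volume add-brick myvol replica 3 arbiter 1 \
        arb1:/bricks/arbiter0 arb2:/bricks/arbiter1 arb3:/bricks/arbiter2

    # Afterwards the volume should report something like
    # "Number of Bricks: 3 x (2 + 1) = 9".
    gluster volume info myvol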
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to auto, then the first brick must always be up, irrespective of the status of the second brick. If only the second brick is up, the subvolume becomes read-only." > By default client-quorum is
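The behaviour being quoted is governed by the cluster.quorum-type option; for what it's worth, inspecting and changing it looks roughly like this (volume name is a placeholder):

    gluster volume get myvol cluster.quorum-type        # none / auto / fixed
    gluster volume set myvol cluster.quorum-type auto   # first brick (or a majority) must be up
    gluster volume set myvol cluster.quorum-count 2     # only consulted when quorum-type is fixed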
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > > If you want to use the first two bricks as arbiter, then you need to be aware of the following things: > > - Your distribution count will be decreased to 2. What's the significance of this? I'm
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote: > If you want to use the first two bricks as arbiter, then you need to be aware of the following things: > - Your distribution count will be decreased to 2. What's the significance of this? I'm trying to find documentation on distribution counts in gluster, but my google-fu is failing me. > - Your data on
2018 Jul 13
4
dsync panic
I think I get pretty much the same issue: dsync(support): Panic: file mailbox-attribute.c: line 360 (mailbox_attribute_get_stream): assertion failed: (value_r->value != NULL || value_r->value_stream != NULL) dsync(support): Error: Raw backtrace: /usr/lib64/dovecot/libdovecot.so.0(+0xc9e06) [0x7fba8a348e06] -> /usr/lib64/dovecot/libdovecot.so.0(default_fatal_handler+0x2a) [0x7fba8a348e4a]