Messages similar to: Upstart script for Poolmon

Displaying 20 results from an estimated 600 matches similar to: "Upstart script for Poolmon"

2017 Jan 10 (2 replies)
Poolmon: Problem with index-locking
I have Poolmon (https://github.com/brandond/poolmon) set up. When it does all the checks concurrently, there are obviously locking issues on each mail server it tests: "Warning: Locking transaction log file xxxx/indexes/dovecot.list.index.log took 60 seconds (syncing)" It's just an empty mailbox. Is there any way to do a login test without locking the index files? Hence
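For illustration (not from the thread): the probe being asked for is a login-only check that authenticates and disconnects without SELECTing a mailbox, since opening a mailbox is what takes the per-mailbox index locks. A minimal Python sketch, with host, user, and password as placeholders; whether this avoids the list-index lock entirely depends on the Dovecot configuration:

    import imaplib

    def login_probe(host, user, password):
        # Authenticate and disconnect without SELECTing any mailbox,
        # so no per-mailbox index files should need to be opened.
        try:
            conn = imaplib.IMAP4(host, 143)
            conn.login(user, password)
            conn.logout()
            return True
        except (imaplib.IMAP4.error, OSError):
            return False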
2014 Aug 18 (1 reply)
Health monitoring of backend servers
Does Dovecot Director do health monitoring of backend servers? Or is poolmon (http://www.dovecot.org/list/dovecot/2010-August/051946.html) the best option for this? Maybe Ldirectord? Thanks! -- Thiago Henrique www.adminlinux.com.br
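For context, the hook an external monitor like poolmon uses is "doveadm director update <host> <vhost count>": setting a backend's vhost count to 0 stops new user assignments to it. A minimal Python wrapper as a sketch; the backend address below is a made-up example:

    import subprocess

    def set_vhost_count(host, count):
        # "doveadm director update" changes the weight (vhost count)
        # of a backend; 0 removes it from rotation for new sessions.
        subprocess.run(["doveadm", "director", "update", host, str(count)],
                       check=True)

    set_vhost_count("10.0.0.5", 0)  # hypothetical failed backend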
2017 Jan 10 (0 replies)
Poolmon: Problem with index-locking
On 10 Jan 2017, at 20.38, Tom Sommer <mail at tomsommer.dk> wrote: > > I have Poolmon (https://github.com/brandond/poolmon) set up. When it does all the checks concurrently, there are obviously locking issues on each mail server it tests: > > "Warning: Locking transaction log file xxxx/indexes/dovecot.list.index.log took 60 seconds (syncing)" > > It's just
2014 Aug 11 (0 replies)
poolmon improvements
I've been planning to improve poolmon's failure checking for a long time already, but I still haven't managed to get to it. Maybe somebody else has more time, so here's a feature request for anyone to implement: poolmon currently gives up immediately if the first check to any service fails. It should really try multiple times over several seconds before giving up. I think ideally
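A minimal sketch of the retry behaviour being requested here, in Python; check, attempts, and delay are illustrative names, not part of poolmon:

    import time

    def check_with_retries(check, attempts=3, delay=2.0):
        # Declare the service down only after several failed attempts
        # spread over a few seconds, not on the first failure.
        for _ in range(attempts):
            if check():
                return True
            time.sleep(delay)
        return False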
2018 Sep 07 (1 reply)
Auth process sometimes stop responding after upgrade
On Friday, 7 September 2018 at 11:20:49 CEST, Sami Ketola wrote: > > On 7 Sep 2018, at 11.25, Simone Lazzaris <simone.lazzaris at qcom.it> wrote: > > Actually, I have a poolmon script running that should drop the vhost count for > > unresponsive backends; the strange thing is, the backends are NOT > > unresponsive, they are working as usual. > If it's this
2018 Sep 18 (2 replies)
Auth process sometimes stop responding after upgrade
On Tuesday, 11 September 2018 at 10:46:30 CEST, Timo Sirainen wrote: > On 11 Sep 2018, at 10.57, Simone Lazzaris <s.lazzaris at interactive.eu> wrote: > > Sep 11 03:25:55 imap-front4 dovecot: director: Panic: file > > doveadm-connection.c: line 1097 (doveadm_connection_deinit): assertion > > failed: (conn->to_ring_sync_abort == NULL) Sep 11 03:25:55 imap-front4
2017 Feb 24 (3 replies)
Director+NFS Experiences
On Thu, Feb 23, 2017 at 3:45 PM, Zhang Huangbin <zhb at iredmail.org> wrote: > > > On Feb 24, 2017, at 6:08 AM, Mark Moseley <moseleymark at gmail.com> wrote: > > > > * Do you use the Perl poolmon script or something else? The Perl script was > > being weird for me, so I rewrote it in Python but it basically does the > > exact same things. >
2020 Jul 16 (2 replies)
NFS vs Replication
Thank you all for the replies! Some missing info:
- As load balancer I'm using a pair of keepalived instances with a simple setup, not DNS
- The load balancer algorithm is "Weighted Least-Connection"
- About 20 domains and 3000 email accounts
- I'm monitoring my backend servers with poolmon
- The backend servers are virtual machines (VMware) with their datastore on "all flash" storage based
2018 Sep 07 (3 replies)
Auth process sometimes stop responding after upgrade
On Friday, 7 September 2018 at 10:06:00 CEST, Sami Ketola wrote: > > On 7 Sep 2018, at 11.00, Simone Lazzaris <s.lazzaris at interactive.eu> > > wrote: > > > > > > The only suspicious thing is this: > > > > Sep 6 14:45:41 imap-front13 dovecot: director: doveadm: Host > > 192.168.1.142 > > vhost count changed from 100 to 0 >
2015 Dec 05 (6 replies)
Dovecot cluster using GlusterFS
Hello, I have recently set up a mail server solution using a 2-node master-master setup (mainly based on MySQL M-M replication and GlusterFS with a 2-replica volume) on Ubuntu 14.04 (Dovecot 2.2.9). Unfortunately, even with the shared-storage-aware settings:

mail_nfs_index = yes
mail_nfs_storage = yes
mail_fsync = always
mmap_disable = yes

...I hit strange issues pretty soon, especially when a user was
2017 Feb 23 (5 replies)
Director+NFS Experiences
As someone who is about to begin the process of moving from maildir to mdbox on NFS (and therefore just about to start the 'director-ization' of everything) for ~6.5m mailboxes, I'm curious if anyone can share any experiences with it. The list is surprisingly quiet on this subject, and articles on Google are mainly just about setting director up. I've yet to stumble across an
2011 Jun 02 (1 reply)
director monitoring?
I'm working out the kinks of a new director-based setup for the eventual migration away from Courier. At this point, with everything basically working, I'm trying to ensure that things are properly monitored, and I've run into an issue. There doesn't appear to be a way to get Dovecot to tell if it is (or is not) connected and properly synced with the other director servers in the ring
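Later Dovecot releases did add a way to inspect this: "doveadm director ring status" lists the ring members and their connection state. A hedged sketch of a monitor shelling out to it in Python; the exact output columns vary between versions, so the parsing here is an assumption:

    import subprocess

    def ring_members():
        out = subprocess.run(["doveadm", "director", "ring", "status"],
                             capture_output=True, text=True,
                             check=True).stdout
        lines = out.splitlines()
        # Assume the first line is a column header and each remaining
        # line describes one ring member.
        return [line.split() for line in lines[1:] if line.strip()]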
2007 Aug 10 (9 replies)
Problems monitoring Mongrel with F5 BigIP
If this has already been covered, please point me to it (I didn't find anything in my searches)... We are using F5 BigIP LTM load balancers. They have many pools of Mongrels they load balance across, and I of course want the F5 to know when a Mongrel goes down or is unavailable, etc. To do that, I need to have an F5 health monitor for HTTP make a request to the Mongrel. We do this
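What such an HTTP monitor boils down to is a GET against a known path with a success condition on the response. The equivalent probe in Python, for illustration; the /health path and the 200-only success rule are assumptions about the application, not BigIP behaviour:

    import http.client

    def mongrel_alive(host, port, path="/health"):
        # Mirror what the F5 monitor does: GET a known path and treat
        # anything other than a 200 response (or no response) as down.
        try:
            conn = http.client.HTTPConnection(host, port, timeout=5)
            conn.request("GET", path)
            return conn.getresponse().status == 200
        except (http.client.HTTPException, OSError):
            return False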
2011 Nov 18 (4 replies)
BigIP and Puppet
Has anyone successfully puppetized BigIP (F5)? I'm specifically trying to figure out a path to making our BigIP instances be under Puppet so all the VIPs, pools, profiles, etc. are under Puppet control. My requirements are probably going to be fulfilled even with just uploading a rules file if a delta's detected. The main issue I have now is figuring out when to reload the
2020 Sep 25 (4 replies)
Debian client/workstation pam_mount
On 24/09/2020 12:47, Christian Naumer via samba wrote: > I am using it on Fedora with Volume Definition looking like this: and I use this:

<volume fstype="cifs"
        server="CIFS_SERVER_FQDN"
        path="linprofiles"
        mountpoint="/mnt/%(USER)"
        options="username=%(USER),uid=%(USERUID),gid=%(USERGID),domain=%(DOMAIN_NAME)"
2005 Oct 28 (2 replies)
VLAN tagging problems
We are using CentOS behind an F5 BigIP load balancer. The Linux box is using bonding and tagged VLANs. Everything works fine except that when traffic is forwarded from the BigIP to the Linux box on the VLAN where the web server is running, the Linux box returns the traffic on the wrong VLAN; it returns traffic on the lowest-ordered VLAN. I.e., here is a tcpdump on my load balancer showing
2018 Sep 07 (0 replies)
Auth process sometimes stop responding after upgrade
> On 7 Sep 2018, at 11.25, Simone Lazzaris <simone.lazzaris at qcom.it> wrote: > Actually, I have a poolmon script running that should drop the vhost count for unresponsive backends; the strange thing is, the backends are NOT unresponsive, they are working as usual. > If it's this one https://github.com/brandond/poolmon/blob/master/poolmon
2009 Feb 05 (1 reply)
squid HA failover?
I'm running a pair of squids as an internal cache for some intermediate data used by a web server farm, and currently doing failover by going through an F5 BigIP. However, I'd like to avoid the BigIP and use heartbeat, since there are only two machines. I don't need to sync any content since it is a fast-changing cache and either machine can handle everything. Is it possible to
2012 Oct 10 (4 replies)
Irrelevant information filling logs
Hi, I have a "Ubuntu 10.04 + dovecot-2.0.13" configuration on my server. My mailbox server is shared by ~10k domains. It works fine with ~50k accounts. There are a lot of "quota exceeded" log entries like this: Oct 10 13:00:56 mailboxserver5 dovecot: lmtp(29105, user at mailboxserver5): Error: ifcIN1NxdVCxcQAAMBx7mQ: sieve: msgid=unspecified: failed to store into mailbox
2012 Jun 29 (1 reply)
director directing to wrong server (sometimes)
Hello, I have discovered some strange behaviour with director proxying... I have a user whose assigned server is 155.54.211.164. The problem is that I don't know why director sent him to a different server yesterday, because my server was up all the time. Moreover, I'm using poolmon on the director servers to check the availability of the final servers, and it didn't report any problem with the