Displaying 20 results from an estimated 1000 matches similar to: "Number of imap-login processes always keeps growing, never goes down"
2019 Jul 11
2
Getting login stats
Hello,
I'm trying to get some IMAP auth stats on a Dovecot 2.3.6 instance, but whatever metric I declare, it always shows 0.
Basically, what I want is how many IMAP auth attempts there were on the server, and optionally a way to filter on the auth attempt status (successful or failed).
My server uses a simple auth (with LDAP backend) and supports only "auth_mechanisms = plain login"
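For context, the kind of metric definition being discussed here (and quoted in the reply below) would look roughly like the following sketch; the filter sub-block follows the field_key/field_value form shown elsewhere in this listing, and, as the follow-ups note, neither the auth event nor its 'success' field behaves as hoped on 2.3.6:
metric auth_success {
  event_name = auth_request_finished
  filter {
    field_key = success
    field_value = yes
  }
}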
2019 Jul 11
0
Getting login stats
> I'm trying to get some IMAP auth stats on a Dovecot 2.3.6 instance, but whatever metric I declare, it always shows 0.
None of these auth_* requests exist in 2.3.6.
> I tried using the following metrics:
>
>
> --------------------------------
> metric auth_request_finished {
> event_name = auth_request_finished
> }
>
> metric
2020 Aug 22
2
Metric label values truncated when using OpenMetrics endpoint
Hi,
Recently we upgraded to Dovecot 2.3.11.3 and configured an example metric like this:
metric imap_command {
  event_name = imap_command_finished
  group_by = cmd_name tagged_reply_state user remote_ip
}
And enabled the OpenMetrics listener like this:
service stats {
  inet_listener http {
    port = 5000
  }
}
While the result is great, I noticed that some metrics are being truncated
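Once the inet_listener above is enabled, the collected metrics can be scraped over HTTP in OpenMetrics format; a minimal check, assuming the listener runs on localhost port 5000 as configured above:
curl http://localhost:5000/metrics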
2019 Aug 15
2
2.3.7 + stats
Is there any additional documentation/information around the new stats module?
Have added some metrics just to see what they produce
##
## Metrics
###
metric imap {
  event_name = imap_command_finished
  #source_location = example.c:123
  #categories =
  fields = name args running_usecs bytes_in bytes_out
  #filter {
  #  field_key = wildcard
  #}
}
metric sql {
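With a metric like the one above defined, the collected counters can be inspected from the command line; a minimal sketch, assuming the stats service is running:
doveadm stats dump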
2019 Aug 16
0
2.3.7 + stats
Some of the behaviours you observe may be due to the same bug I encountered:
https://dovecot.org/pipermail/dovecot/2019-July/116475.html
Especially regarding the 'successful' field for auth, which does not exist and is really named 'success', and which is never set anyway.
> On 15 Aug 2019 at 23:57, Matt Bryant via dovecot <dovecot at dovecot.org> wrote:
>
> Is
2020 Aug 24
0
Metric label values truncated when using OpenMetrics endpoint
On Sat, Aug 22, 2020 at 00:31:36 +0000, Daan van Gorkum wrote:
> Hi,
>
> Recently we upgraded to Dovecot 2.3.11.3 and configured an example metric
> like this:
>
>
> metric imap_command {
> event_name = imap_command_finished
> group_by = cmd_name tagged_reply_state user remote_ip
Grouping by remote_ip seems a bit dangerous unless the IPs are somehow limited.
2018 Jan 19
1
Fatal: nfs flush requires mail_fsync=always
Hiya all,
I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
my testbed. The issue is that, from what I can see, mail_fsync is set to always:
# doveconf -n | grep mail_fs
mail_fsync = always
The result is that the client does not connect at all, which is not
really what I wanted to happen :)
Any idea what is going wrong here?
Best regards
Søren P. Skou
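For reference, the NFS-related settings that usually go together with mail_fsync on shared storage look roughly like this (a general sketch based on the standard NFS guidance, not taken from this poster's configuration):
mail_fsync = always
mmap_disable = yes
mail_nfs_storage = yes
mail_nfs_index = yes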
2019 Nov 26
0
Shared Mailboxes
Hello,
I have a mail server linked to a FreeIPA server, so all users are UNIX users.
I want to share the inbox of one user with another, but I can't figure out how to do it.
This is my dovecot -n output:
# 2.3.8 (9df20d2db): /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.8 (b7b03ba2)
# OS: Linux 3.10.0-957.21.3.el7.x86_64 x86_64 CentOS Linux release 7.6.1810
(Core) xfs
# Hostname:
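A typical starting point for sharing one user's mailbox with another is a shared namespace plus the ACL plugin; the sketch below follows the usual per-user shared-namespace approach, and the Maildir paths are assumptions rather than values from this poster's setup:
mail_plugins = $mail_plugins acl
namespace {
  type = shared
  separator = /
  prefix = shared/%%u/
  location = maildir:%%h/Maildir:INDEXPVT=~/Maildir/shared/%%u
  subscriptions = no
  list = children
}
plugin {
  acl = vfile
}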
2020 Aug 25
2
Metric label values truncated when using OpenMetrics endpoint
Hi Jeff,
Thanks for your reply!
Regarding grouping by remote address, I understand, and for now I'll keep a close eye on it. Maybe it's an option to group by /24 for IPv4 and /64 for IPv6? We currently do that based on the logs, but the OpenMetrics endpoint seems a lot easier.
A slight hijack of the original question: I tried to log only the IP addresses (+ result) of failed login attempts but
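One common way to get failed attempts with their source IPs, outside the stats module (an assumption on my part about what is being asked here), is the auth logging setting below; failed logins are then logged together with the remote IP:
auth_verbose = yes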
2018 May 30
0
Fatal: nfs flush requires mail_fsync=always
This fix is part of the next release.
---
Aki Tuomi
Dovecot Oy

-------- Original message --------
From: "Juan C. Blanco" <jcblanco at fi.upm.es>
Date: 30/05/2018 19:31 (GMT+02:00)
To: Dovecot Mailing List <dovecot at dovecot.org>
Subject: Re: Fatal: nfs flush requires mail_fsync=always
Hello, any news about the attached error?
I'm preparing the 2.2 to 2.3 upgrade and
2018 Jan 16
0
nfs flush requires mail_fsync=always
Hiya,
I'm getting nfs flush requires mail_fsync=always rather consistently from my servers.
As you can see below this has been enabled already - So what else am I missing?
Best Regards
Søren
# 2.3.1.alpha0 (6f9ffa758) [XI:2:2.3.1~alpha0-1~auto+6]: /etc/dovecot/dovecot.conf
# Pigeonhole version 0.5.1.alpha0 (c9f2afe0)
# OS: Linux 4.9.0-4-amd64 x86_64 Debian 9.3 nfs
auth_worker_max_count =
2018 May 30
2
Fatal: nfs flush requires mail_fsync=always
Hello, any news about the attached error?
I'm preparing the 2.2 to 2.3 upgrade and having the same error.
We have the mail stores in an NFS filer.
Regards
> On 19.01.2018 11:55, Søren Skou wrote:
>> Hiya all,
>>
>> I'm seeing this "Fatal: nfs flush requires mail_fsync=always" error on
>> my testbed. The issue is that from what I can see, mail_fsync
2020 Oct 29
2
dovecot replicator not replicating automatically?
Hello all.
I'm trying to set up the dovecot replicator plugin to automatically
replicate mail between 2 servers.
The servers are both using iredmail with dovecot as the imap server.
I have everything working if I run 'dovecot replicator replicate \*'; all accounts update. But if an account receives a message, it isn't automatically sent to the other system.
The system config.
OS,
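For comparison, a minimal replication setup usually combines the notify and replication plugins with a mail_replica target; a sketch only, with the remote host name as a placeholder rather than a value from this system:
mail_plugins = $mail_plugins notify replication
service replicator {
  process_min_avail = 1
}
plugin {
  mail_replica = tcp:other.example.com
}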
2019 Sep 10
0
dovecot duplicate emails
Hello. I have problems with duplicate emails when using replication together with message filters configured on the client.
It looks like when a message arrives, it is processed by replication and, immediately after that, processed by the filter on the client and moved to another folder.
When it arrives in the INBOX on the slave server, it is replicated back to the master's INBOX, and only after that does the slave receive the MOVE command.
The
2006 Aug 07
2
Dynamically created queries
Hello,
I am having difficulty dynamically building a query. I want to ensure
that I''m using Active Record protection against SQL injection attacks.
In PHP land, I would have built up the query in my logic & attempted to
clean every variable - a bit tedious really.
I want to be able to achieve something like:
events = Event.find(:all,
:conditions => [**DynamicallyBuiltQuery**
2003 Jul 31
0
Trouble with optim
Dear All;
Searching the archive, many questions on optim() have been asked, but I haven't seen the following.
The question began with my original inquiry on "Optimization failed in fitting a mixture 3-parameter Weibull distribution using fitdistr()", which I posted on Jul. 28. Prof. Ripley kindly advised me to look into the options of
optim() for the answer. Following his advice and
2019 Jun 02
4
Stats/Metrics in 2.3
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
Greetings,
So the changes to the stats modules between v2.2 and v2.3 have broken some
of my monitoring. I am attempting to use the new method of gathering
"metrics" from "events" - but the details in the documentation are a bit
thin.
I started with the examples provided at
https://wiki2.dovecot.org/Statistics and tried to
2007 Jul 19
0
one mongrel with *lots* of close_wait tcp connections
* cross posted to the mongrel mailing list*
Hi, I'm running into a strange issue where one mongrel will sometimes develop hundreds of CLOSE_WAIT TCP connections, mostly to apache (I think -- see sample lsof output below). I haven't had a chance to get the mongrel with this behavior into USR1 debug mode yet. I wrote a little loop below that will catch it next time.
This
2007 Jul 19
1
one mongrel with hundreds of CLOSE_WAIT tcp connections
Hi, I'm running into a strange issue where one mongrel will sometimes develop hundreds of CLOSE_WAIT TCP connections, mostly to apache (I think -- see sample lsof output below). I haven't had a chance to get the mongrel with this behavior into USR1 debug mode yet. I didn't catch it in time.
This happens a couple times a day on average at seemingly random times.
2004 Jan 26
0
My god, it's full of "CLOSE_WAITS"
Hello samba people. I am having an odd problem related to winbindd.
When I leave it running overnight on a quiet (not silent however)
server I find that it generates a large number of CLOSE_WAIT state
connections. 84 on a fairly quiet system in the last 18 hours. I
can't seem to find any command that directly affects the number of
CLOSE_WAITS. Below is my netstat listing. Any ideas would