On Tue, Jul 18, 2017 at 10:40 PM, Aki Tuomi <aki.tuomi at dovecot.fi> wrote:
>
> On 19.07.2017 02:38, Mark Moseley wrote:
> > I've been playing with weakforced, so it fills in the 'fail2ban across a
> > cluster' niche (not to mention RBLs). It seems to work well, once you've
> > actually read the docs :)
> >
> > I was curious if anyone had played with it and was *very* curious if anyone
> > was using it in high traffic production. Getting things to 'work' versus
> > getting them to work *and* handle a couple hundred dovecot servers is a
> > very wide margin. I realize this is not a weakforced mailing list (there
> > doesn't appear to be one anyway), but the users here are some of the
> > likeliest candidates for having tried it out.
> >
> > Mainly I'm curious if weakforced can handle serious concurrency and whether
> > the cluster really works under load.
>
> Hi!
>
> Weakforced is used by some of our customers in quite large
> installations, and performs quite nicely.
>
Cool, good to know.
Do you have any hints/tips/guidelines for things like sizing, both in a
per-server sense (memory, mostly) and in a cluster sense (logins-per-second
to node ratio)? I'm curious, too, how large 'quite large' is -- not looking
for details, just a ballpark figure. My largest install would have about 4
million mailboxes to handle, which I'm guessing falls well below 'quite
large'. Looking at our stats, peak would be around 2000 logins/sec.
I'm also curious whether -- assuming those installs are well north of 2000
logins/sec -- the replication protocol starts to overwhelm the daemon at
very high concurrency.
Any rules of thumb on things like "For each additional 1000 logins/sec, add
another # to setNumSiblingThreads and another # to setNumWorkerThreads"
would be super appreciated too.
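For reference, here's the relevant bit of my current test wforce.conf (Lua).
The values are just guesses on my part, not anything I'd claim is right, and
the comments only reflect my reading of the docs:

    -- wforce.conf (Lua) -- tuning section only; thread counts are placeholders
    setNumWorkerThreads(4)    -- threads servicing allow/report requests, as I understand it
    setNumSiblingThreads(2)   -- threads processing replication traffic from siblings, as I understand it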
Thanks! And again, feel free to point me elsewhere if there's a better
place to ask. For a young project, the docs are actually quite good.