On Thu, Apr 6, 2017 at 9:22 PM, Christian Balzer <chibi at gol.com> wrote:
>
> Hello,
>
> On Thu, 6 Apr 2017 22:13:07 +0300 Timo Sirainen wrote:
>
> > On 6 Apr 2017, at 21.14, Mark Moseley <moseleymark at gmail.com> wrote:
> > >
> > >> imap-hibernate processes are similar to imap-login processes in that
> > >> they should be able to handle thousands or even tens of thousands of
> > >> connections per process.
> > >
> > > TL;DR: In a director/proxy setup, what's a good client_limit for
> > > imap-login/pop3-login?
> >
> > You should have the same number of imap-login processes as the number
> > of CPU cores, so they can use all the available CPU without doing
> > unnecessary context switches. The client_limit is then large enough to
> > handle all the concurrent connections you need, but not so large that
> > it would bring down the whole system if it actually happens.
> >
> Also keep in mind that pop3-login processes deal with rather ephemeral
> events, unlike IMAP with IDLE sessions lasting months.
> So they're unlikely to grow beyond their initial numbers even with a
> small (a few hundred) client_limit.
>
> On the actual mailbox servers, either type of login process tends to use
> about 1% of one core; very lightweight.
>
> > > Would the same apply for imap-login when it's being used in proxy
> > > mode? I'm moving us to a director setup (cf. my other email about
> > > director rings getting wedged from a couple days ago) and, again, for
> > > the sake of starting conservatively, I've got imap-login set to a
> > > client limit of 20, since I figure that proxying is a lot more work
> > > than just doing IMAP logins. I'm doing auth to mysql at both stages
> > > (at the proxy level and at the backend level).
> >
> > Proxying isn't doing any disk IO or any other blocking operations.
> > There's no benefit to having more processes. The only theoretical
> > advantage would be if some client could trigger a lot of CPU work and
> > cause delays to handling other clients, but I don't think that's
> > possible (unless somehow via OpenSSL, but I'd guess that would be a bug
> > in it then).
> >
> Indeed, in proxy mode you can go nuts; here I see pop3-logins being
> busier, but still just 2-5% of a core as opposed to typically 1-2% for
> imap-logins.
> That's with 500 pop3 sessions at any given time and 70k IMAP sessions
> per node.
> Or in other words, less than 1 core total typically.
>
> > > Should I be able to handle a much higher client_limit for imap-login
> > > and pop3-login than 20?
> >
> > Yeah.
> >
> The above is with a 4k client_limit; I'm definitely going to crank that
> up to 16k when the opportunity arises (quite disruptive on a proxy...).

Timo, any sense on where (if any) the point is where there are so many
connections on a given login process that it would get too busy to keep
up? I.e. where the sheer amount of stuff the login process has to do
outweighs the CPU savings of not having to context switch so much?

I realize that's a terribly subjective question, so perhaps you might have
a guess in very, very round numbers? Given a typical IMAP userbase
(moderately busy, most people sitting in IDLE, etc.), I would've thought
10k connections on a single process would've been past that tipping point.

With the understood caveat of being totally subjective and dependent on
local conditions, should 20k be ok? 50k? 100k?

Maybe a better question is: is there anywhere in the login process that it
is possible to block? If not, I'd figure that a login process that isn't
using up 100% of a core can be assumed to *not* be falling behind. Does
that seem accurate?
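[Editor's note: Timo's sizing advice above (one long-lived login process
per CPU core, with a high client_limit) could be expressed as a
dovecot.conf fragment along these lines. This is a sketch only -- the
numeric values are illustrative assumptions, not recommendations from the
thread, and must be tuned for your own hardware and load.]

```
# Sketch only: assumes a 4-core proxy box. Values are illustrative.
service imap-login {
  # Keep one process per core running, and never fork more than that,
  # so all CPU can be used without unnecessary context switches.
  process_min_avail = 4
  process_limit = 4

  # Many connections per process: large enough for expected load, but
  # not so large that losing one full process takes down everything.
  client_limit = 16384
}

service pop3-login {
  # POP3 sessions are short-lived, so a smaller limit is usually plenty.
  process_min_avail = 4
  process_limit = 4
  client_limit = 4096
}
```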
On 10 Apr 2017, at 21.49, Mark Moseley <moseleymark at gmail.com> wrote:
>
> Timo, any sense on where (if any) the point is where there are so many
> connections on a given login process that it would get too busy to keep
> up? I.e. where the sheer amount of stuff the login process has to do
> outweighs the CPU savings of not having to context switch so much?

There might be some unexpected bottleneck somewhere, but I haven't heard
of anyone hitting one.

> I realize that's a terribly subjective question, so perhaps you might
> have a guess in very, very round numbers? Given a typical IMAP userbase
> (moderately busy, most people sitting in IDLE, etc.), I would've thought
> 10k connections on a single process would've been past that tipping
> point.
>
> With the understood caveat of being totally subjective and dependent on
> local conditions, should 20k be ok? 50k? 100k?

I only remember seeing a few thousand connections per process, but the CPU
usage there was almost nothing. So I'd expect it to scale well past 10k
connections. I think it's mainly limited by Linux, and a quick google
shows 500k, but I guess that's per server and not per process. Still,
that's likely not all that many CPUs/processes.
http://stackoverflow.com/questions/9899532/maximum-socket-connection-with-epoll

> Maybe a better question is: is there anywhere in the login process that
> it is possible to block?

Shouldn't be. Well, logging, but all the login processes are sharing the
same log pipe, so if one blocks the others would block too.

> If not, I'd figure that a login process that isn't using up 100% of a
> core can be assumed to *not* be falling behind. Does that seem accurate?

Should be. In general I haven't heard of installations hitting CPU limits
in proxies. The problem so far has always been related to getting enough
outgoing sockets without errors, which is a server-wide problem. 2.2.29
has one tweak that hopefully helps with that.
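[Editor's note: the "enough outgoing sockets" ceiling Timo mentions is
easy to estimate. Each proxy-to-backend connection consumes an ephemeral
port on a (source IP, backend IP, backend port) tuple, so capacity scales
with the number of source IPs and backend destinations. A rough
back-of-the-envelope sketch, assuming the common Linux default
net.ipv4.ip_local_port_range of 32768-60999; the numbers are assumptions,
not from the thread:]

```python
# Rough estimate of how many proxy->backend connections one source IP
# supports, and how many source IPs (login_source_ips) a proxy needs.
# Assumes the common Linux default net.ipv4.ip_local_port_range.

EPHEMERAL_PORTS = 60999 - 32768 + 1  # ~28k usable ports per source IP

def source_ips_needed(outgoing_connections, backends):
    # Each (source IP, backend IP, backend port) tuple has its own
    # ephemeral port space, so capacity multiplies with both factors.
    per_ip = EPHEMERAL_PORTS * backends
    return -(-outgoing_connections // per_ip)  # ceiling division

# Christian's later figure: 80k sessions per proxy, i.e. 80k *outgoing*
# backend connections (the other 80k are incoming, which don't use
# ephemeral ports). With a single backend, one source IP isn't enough:
print(source_ips_needed(80_000, backends=1))   # -> 3 source IPs
print(source_ips_needed(80_000, backends=10))  # -> 1 source IP
```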
On Mon, 10 Apr 2017 23:11:24 +0300 Timo Sirainen wrote:
> On 10 Apr 2017, at 21.49, Mark Moseley <moseleymark at gmail.com> wrote:
> >
> > Timo, any sense on where (if any) the point is where there are so many
> > connections on a given login process that it would get too busy to
> > keep up? I.e. where the sheer amount of stuff the login process has to
> > do outweighs the CPU savings of not having to context switch so much?
>
> There might be some unexpected bottleneck somewhere, but I haven't heard
> of anyone hitting one.
>
I haven't either. OTOH, context switching isn't _that_ bad, and unless
you have a very dedicated box doing just this one thing it will happen
anyway. Never mind that NUMA and device IRQ adjacency also factor into
this.
So you probably want to start with some reasonable value (and that means
larger than what you expect to be needed ^o^), and if it grows beyond
that, no worries.

> > I realize that's a terribly subjective question, so perhaps you might
> > have a guess in very, very round numbers? Given a typical IMAP
> > userbase (moderately busy, most people sitting in IDLE, etc.), I
> > would've thought 10k connections on a single process would've been
> > past that tipping point.
> >
> > With the understood caveat of being totally subjective and dependent
> > on local conditions, should 20k be ok? 50k? 100k?
>
> I only remember seeing a few thousand connections per process, but the
> CPU usage there was almost nothing. So I'd expect it to scale well past
> 10k connections. I think it's mainly limited by Linux, and a quick
> google shows 500k, but I guess that's per server and not per process.
> Still, that's likely not all that many CPUs/processes.
> http://stackoverflow.com/questions/9899532/maximum-socket-connection-with-epoll
>
As I wrote, and from my substantial experience, 8k connections per process
are no issue at all; I'd expect it to go easily up to 50k.
But without any pressing reason I'd personally keep it below 20k: too many
eggs in one basket, and all that.
And in the original context of this thread, an imap-hibernate process with
2.5k connections uses about 10MB RAM and 0.5% of a CPU core, so 16k per
process as configured here should be a breeze.

> > Maybe a better question is: is there anywhere in the login process
> > that it is possible to block?
>
> Shouldn't be. Well, logging, but all the login processes are sharing the
> same log pipe, so if one blocks the others would block too.
>
> > If not, I'd figure that a login process that isn't using up 100% of a
> > core can be assumed to *not* be falling behind. Does that seem
> > accurate?
>
> Should be. In general I haven't heard of installations hitting CPU
> limits in proxies. The problem so far has always been related to getting
> enough outgoing sockets without errors, which is a server-wide problem.
> 2.2.29 has one tweak that hopefully helps with that.
>
Which would be? The delayed connection bit?
Anyway, with a properly sized login_source_ips pool this shouldn't be an
issue; I have 80k sessions (that's 160k connections total) per proxy
server now and they are bored.

Christian

--
Christian Balzer        Network/Systems Engineer
chibi at gol.com         Global OnLine Japan/Rakuten Communications
http://www.gol.com/
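[Editor's note: for reference, the login_source_ips pool Christian
mentions is configured on the proxy roughly as in this sketch. The
addresses are placeholders, not from the thread; each additional source IP
multiplies the ephemeral port space available for proxy-to-backend
connections, which is exactly the server-wide limit Timo referred to.]

```
# Sketch only: the IPs below are made-up placeholders. Every listed
# address must actually be configured on the proxy host, and Dovecot
# round-robins outgoing backend connections across them.
login_source_ips = 10.0.0.11 10.0.0.12 10.0.0.13

# A '?' prefix makes Dovecot expand a DNS name to all of its IPs,
# which is convenient for larger pools:
#login_source_ips = ?proxy-pool.example.com
```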