similar to: multiple messages per second to a single mailbox

Displaying 20 results from an estimated 20000 matches similar to: "multiple messages per second to a single mailbox"

2015 Aug 12
3
multiple messages per second to a single mailbox
On Aug 12, 2015, at 11:04 AM, Andrzej A. Filip <andrzej.filip at gmail.com> wrote: > > <..snip..> > Could you provide the following info: > a) mailbox type (maildir/mbox/dbox/...) maildir > [mail_location in dovecot's config] /srv/mail/<domain>/<user-mailbox>/ > b) file system type (ext2/ext3/ext4/fat32/...) > [provided by "df -T"
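In Dovecot config terms, a layout like that usually comes from a mail_location along these lines (a sketch reconstructed from the path above; %d and %n are Dovecot's domain and user variables):

  mail_location = maildir:/srv/mail/%d/%n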
2015 Aug 14
2
multiple messages per second to a single mailbox
On Aug 14, 2015, at 1:01 PM, Andrzej A. Filip <andrzej.filip at gmail.com> wrote: > > > Are dovecot and postfix located on the same server? > Can postfix deliver directly to the maildir directory dovecot uses? > For the moment yes, they are on the same server. I designed it to be modular; the various components can be placed on different systems with no
2015 Aug 12
0
multiple messages per second to a single mailbox
Chad M Stewart <cms at balius.com> wrote: > Dovecot 2.2.18 on CentOS 6 > > I have a pair of servers set up with MySQL, Postfix, and Dovecot. Replication is set up and working between the two dovecot instances. > > The problem I'm running into is that a single mailbox receives a lot > of messages, at times the rate is multiple messages per > second. Delivery from
2015 Aug 12
2
multiple messages per second to a single mailbox
On Aug 12, 2015, at 11:58 AM, Daniel Tröder <troeder at univention.de> wrote: > On 08/12/2015 17:19, Chad M Stewart wrote: >> What I'm seeing is very high load on the system (40) and queues building on the Postfix side. > High load means that there are a lot of processes waiting to run. The > most likely cause for this is not CPU consumption, but I/O wait. > >
2015 Aug 14
0
multiple messages per second to a single mailbox
Chad M Stewart <cms at balius.com> wrote: > On Aug 14, 2015, at 1:01 PM, Andrzej A. Filip <andrzej.filip at gmail.com> wrote: > >> >> >> Are dovecot and postfix located on the same server? >> Can postfix deliver directly to the maildir directory dovecot uses? >> > > For the moment yes, they are on the same server. I designed it to be
2015 Aug 12
0
multiple messages per second to a single mailbox
On 08/12/2015 17:19, Chad M Stewart wrote: > What I'm seeing is very high load on the system (40) and queues building on the Postfix side. High load means that there are a lot of processes waiting to run. The most likely cause for this is not CPU consumption, but I/O wait. Please run vmstat and iostat and post their output. Greetings Daniel
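For reference, a minimal capture along the lines Daniel asks for might be (standard procps/sysstat tools; the 5-second interval and count are arbitrary):

  vmstat 5 12
  iostat -x 5 12

A large 'wa' column in vmstat, or high %util/await in iostat -x, would support the I/O-wait theory.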
2015 Aug 17
1
multiple messages per second to a single mailbox
On 2015-08-14 7:52 AM, Chad M Stewart wrote: > > The problem happened again this morning. Removing fsync calls helped, but I'm not sure about leaving that enabled long term. > > I still believe the problem is multiple dovecot processes trying to write to a single folder at the same time. (If I could run dtrace I might be able to
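The fsync change being discussed is presumably Dovecot's mail_fsync setting (a sketch; the excerpt doesn't show exactly which knob the poster used):

  # dovecot.conf -- trade crash safety for delivery throughput
  mail_fsync = never   # default in 2.2 is "optimized"

With never, mail delivered just before a crash or power loss can be lost, which would explain the hesitation to leave it on long term.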
2015 Aug 14
0
multiple messages per second to a single mailbox
Chad M Stewart <cms at balius.com> wrote: > On Aug 12, 2015, at 11:04 AM, Andrzej A. Filip <andrzej.filip at gmail.com> wrote: >> >> > > <..snip..> > >> Could you provide the following info: >> a) mailbox type (maildir/mbox/dbox/...) > > maildir > >> [mail_location in dovecot's config] > >
2015 Aug 14
0
multiple messages per second to a single mailbox
The problem happened again this morning. Removing fsync calls helped, but I'm not sure about leaving that enabled long term. I still believe the problem is multiple dovecot processes trying to write to a single folder at the same time. (If I could run dtrace I might be able to cobble together a script to prove it.) I tried writing a sieve script to direct the messages to a set of folders,
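A sieve fan-out of the kind mentioned might look roughly like this (a minimal sketch assuming the standard fileinto extension; the folder names and the matching rule are invented for illustration):

  require ["fileinto"];
  # split traffic so concurrent writers contend on more than one maildir
  if header :contains "subject" "alert" {
    fileinto "alerts";
  } else {
    fileinto "bulk";
  }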
2015 Jul 21
3
dovecot proxy/director and high availability design
Round-robin DNS, last I checked, can be fraught with issues. While doing something else I came up with this idea: Clients --> Load Balancer (HAProxy) --> Dovecot Proxy (DP) --> Dovecot Director (DD) --> MS1 / MS2. When DP looks up, say, user100, it'll find host=DD-POD1, which resolves to two IPs: those of the two DDs that sit in front of POD1. This DD pair is the only pair in the ring and
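For the DD pair itself, the usual Dovecot 2.x director settings would look roughly like this (a sketch; all addresses are placeholders):

  # on each director in POD1's ring
  director_servers = 10.0.1.1 10.0.1.2        # the two DDs
  director_mail_servers = 10.0.2.1 10.0.2.2   # MS1 and MS2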
2015 Jul 20
3
dovecot proxy/director and high availability design
I'm trying to determine which dovecot components to use and how to order them in the network path from client to mail store. Say I have 1,000 users, all stored in MySQL (or LDAP), and 4 mail stores configured into two 2-node pods. MS1 and MS2 are pod1, configured with replication (dsync), and host users 0-500. MS3 and MS4 are pod2 and are configured with replication between
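One way to express that user-to-pod mapping is a proxying MySQL passdb that returns a host per user (a sketch; the users table and pod_host column are invented, but proxy and host are standard Dovecot passdb extra fields):

  password_query = SELECT password, 'y' AS proxy, pod_host AS host \
    FROM users WHERE userid = '%u'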
2009 Aug 31
2
Trace Disk I/O per guest domain!
Dear All, I am trying to trace all the disk I/O accesses made by the Xen guest domains from the dom0 domain. I would welcome critiques or comments on doing the same. It's easy to run "iostat -x" in all domains and dom0 to get the disk I/O utilization stats, but I am looking for a way other than running this utility in all domUs, using simply dom0 to get all guest
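Short of tracing in each guest, xentop already aggregates per-domU block I/O counters in dom0 (a standard Xen tool; -b is batch mode, -i the number of iterations):

  # VBD_RD / VBD_WR columns show per-domain block reads and writes
  xentop -b -i 1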
2019 Nov 11
2
cli Checking disk i/o
OK. That is interesting. I am assuming tps is transfers per sec? I would have to get a stopwatch, but it seems to go a bit of time, and then a write. Is there something that would accumulate this and give me a summary over some period of time? Of course it better NOT be doing its own IOs... On 11/10/19 6:03 PM, shimi wrote: > iostat 1 > > On Mon, 11 Nov 2019, 00:11 Robert
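If the goal is a summary over a window rather than a stopwatch, iostat's interval mode already accumulates per interval (sysstat; here 60-second samples, 10 reports):

  iostat -d 60 10

The first report is the since-boot average; each later report summarizes the preceding 60 seconds. sar -d gives similar numbers with timestamps if sysstat's collector is running.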
2018 Sep 20
3
per share way to not follow msdfs links
Re-sending with right email... msdfs root is set to "no" by default and is per-share. [myshare] msdfs root = no path = ... Should do the trick. Otherwise if mounting on linux you can also use the 'nodfs' mount option (mount.cifs //host/share/... /mnt/ -o ...,nodfs) to disable DFS resolving and automatic sub-mounting. Chad W Seys <cwseys at
2012 Dec 07
2
Performance problems while running doveadm purge
I have a rather large and active mdbox (28 GB, 3M messages, 1,200 deliveries/day). I usually have no problems working with those mails, and there is some batch processing going on (via doveadm). Every few weeks I try my luck running doveadm purge, and this (a) crunches through about 5 GB (to be expected), (b) takes rather long (OK), and (c) leads to long stretches of a blocked mdbox, which is the problem. I always
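One common mitigation for (c) is to run the purge at idle I/O priority so interactive access stays responsive (ionice is util-linux, class 3 is idle; the username is a placeholder, and whether this helps under mdbox locking is exactly the open question):

  ionice -c3 doveadm purge -u user@example.com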
2008 Dec 02
18
How to dig deeper
In order to get more information on the I/O performance problems I created the script below:

  #!/usr/sbin/dtrace -s
  #pragma D option flowindent

  syscall::*write*:entry
  /pid == $1 && guard++ == 0/
  {
      self->ts = timestamp;
      self->traceme = 1;
      printf("fd: %d", arg0);
  }

  fbt:::
  /self->traceme/
  {
      /* elapsd = timestamp - self->ts; printf("
2010 Oct 19
2
pdflush kernel thread pops up every 10 seconds or so and video decoding grinds to a halt for 1/2 a second
Hi. A friend of mine was doing real-time video decoding on Fedora Core 13 and he had a performance glitch (1/2 a second freeze) every 5-10 seconds. "top" showed flush-253:0 process at the moment of the freeze. Major device number 253 corresponds to device-mapper. I advised my friend to re-install his FC13 without LVM, to see if the glitch is related to LVM. After re-installing FC13
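For what it's worth, flush-253:0 can be tied back to a specific device-mapper/LVM volume before reinstalling anything (253:0 is the major:minor encoded in the thread name):

  dmsetup ls          # lists mapped devices with their major:minor pairs
  ls -l /dev/mapper/  # the same mapping via the device nodes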
2018 Sep 21
1
per share way to not follow msdfs links
Chad W Seys <cwseys at physics.wisc.edu> writes: >> Yep, sounds like a bug indeed. You still have the option to edit the smb.conf >> on the server side if you want to use smb2+. > > Good to keep in mind. > I'm speculating that leaving 'nodfs' out of smb2+ was purposeful. Originally > it was a workaround for Samba 3.something. Maybe the cifs authors were
2010 Jan 18
1
Getting Closer (was: Fencing options)
One more follow-on: the combination of kernel.panic=60 and kernel.printk=7 4 1 7 seems to have netted the culprit: E01-netconsole.log:Jan 18 09:45:10 E01 (10,0):o2hb_write_timeout:137 ERROR: Heartbeat write timeout to device dm-12 after 60000 milliseconds E01-netconsole.log:Jan 18 09:45:10 E01 (10,0):o2hb_stop_all_regions:1517 ERROR: stopping heartbeat on all active regions.
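For reference, the two settings mentioned persist across reboots as ordinary sysctl entries (values copied from the message above):

  # /etc/sysctl.conf
  kernel.panic = 60
  kernel.printk = 7 4 1 7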
2006 Mar 30
8
iostat -xn 5 _donot_ update: how to use DTrace
On Solaris 10 (5.10 Generic_118822-23, sun4v sparc, SUNW,Sun-Fire-T200) I run "iostat -xn 5" to monitor the I/O statistics on the SF T2000 server. The system also has a heavy I/O load, and for some reason iostat does not refresh (no updates at all). It seems like iostat is calling pause() and is stuck there. Also, my HBA driver's interrupt stack trace indicates there is a lot of swtch(); the overall IOPS
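On Solaris the quickest check of the pause() theory doesn't need DTrace at all (pstack and truss are standard tools; pgrep just locates the stuck process):

  pstack `pgrep -x iostat`     # where is iostat sitting right now?
  truss -p `pgrep -x iostat`   # live syscall trace of the hung process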