similar to: Dovecot Max Connections & mbox vs. maildir format - Recommendations?

Displaying 20 results from an estimated 7000 matches similar to: "Dovecot Max Connections & mbox vs. maildir format - Recommendations?"

2012 Nov 14
1
GE LP Series?
Hi all, We have a 100kVA GE LP Series UPS. I can't find this series in the HCL, but other GE UPSes are listed. Would it be possible to somehow use NUT with this UPS? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 98013356 roy at karlsbakk.net http://blogg.karlsbakk.net/ GPG Public key: http://karlsbakk.net/roysigurdkarlsbakk.pubkey.txt -- In all pedagogy it is
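If the LP series speaks a plain contact-closure serial protocol, NUT's genericups driver is the usual fallback for models missing from the HCL. A minimal sketch, assuming a serial hookup and an upstype that matches the cable (both the entry name and the values here are guesses):

    # /etc/nut/ups.conf -- hypothetical entry; upstype must match the cable type
    [ge-lp]
        driver = genericups
        upstype = 4
        port = /dev/ttyS0

    # then test it:
    upsdrvctl start
    upsc ge-lp@localhost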
2009 May 23
2
setgid error
Hi all, Trying to set up dovecot with mysql and postfix, I have configured it as given below. The dovecot user has the dovecot group as primary, and is also a member of mail and dovecot-users. Still, it can't setgid to dovecot-users. I tried changing the shell for the dovecot user to something useful and chgrp'ing a file to dovecot-users, and it worked well. Still, no mail comes through
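A failed setgid in Dovecot 1.x often means the target group falls outside the valid GID range, or the mail process was never granted the extra group. A hedged dovecot.conf sketch (the setting names are real 1.x options; the values are assumptions for this setup):

    # dovecot.conf (v1.x) -- adjust GIDs to the local group file
    mail_extra_groups = dovecot-users   # grant the group to mail processes
    first_valid_gid = 100               # dovecot-users' GID must be >= this
    #last_valid_gid = 0                 # 0 = no upper limit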
2009 Jun 03
3
remove-source-files checking for open files?
Hi, We have a box here connected to an antenna receiving rather large amounts of meteorological data from a satellite. The data is received, transferred to another box, and removed from the receiving server. I first thought of using rsync for this, but it seems --remove-source-files has no way of checking if the file is open or not, so if the receiving process is still writing to a
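rsync itself won't check this, but a wrapper can skip files some process still holds open. A minimal sketch, assuming fuser(1) is available; the paths are placeholders:

    #!/bin/sh
    # Move only files no process has open; paths are hypothetical.
    SRC=/data/incoming
    DEST=otherbox:/data/archive
    for f in "$SRC"/*; do
        [ -f "$f" ] || continue
        if ! fuser -s "$f" 2>/dev/null; then
            # nobody holds it open; safe to transfer and delete
            rsync -a --remove-source-files "$f" "$DEST"
        fi
    done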
2009 Jun 01
2
v1.1.16 released
http://dovecot.org/releases/1.1/dovecot-1.1.16.tar.gz http://dovecot.org/releases/1.1/dovecot-1.1.16.tar.gz.sig Fixes a couple of bugs in v1.1.15's changes. Hopefully the last v1.1 release before v1.2.0. - v1.1.15 could have crashed if mailbox-closing command was pipelined after a mailbox-accessing command. - v1.1.15's zlib plugin may have caused crashes when fetching
2009 Jun 02
1
How to push files from Linux to Windows
Hi, I am a newbie to rsync. I want to push files from an rsync repository on a Linux machine (hostname=myserver) to some Windows machines (hostname=mydesktopn, where n is a sequence number to identify the Windows PC) to force updates of the files into C:\mypath\to\files. Is there any way to do that? Or is it impossible, and do I need to be content with pulling the files instead of pushing them? Could
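Pushing is possible if the Windows side runs an rsync daemon (e.g. cwRsync, which ships a Cygwin-based rsyncd). A hedged sketch; the module name and paths are invented:

    # rsyncd.conf on mydesktop1 (cwRsync or similar) -- hypothetical module
    [files]
        path = /cygdrive/c/mypath/to/files
        read only = false

    # push from myserver:
    rsync -av /srv/repo/ mydesktop1::files/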
2006 Jun 20
1
asterisk-backports.org
hi all I just set up a new site, perhaps soon a wiki, to collect what's out there of useful backports from Trunk/1.4 beta back to 1.2. Take a look at http://www.asterisk-backports.org/ and judge for yourself ;) roy -- Roy Sigurd Karlsbakk roy@karlsbakk.net (+47) 98013356 --- In space, loud sounds, like explosions, are even louder because there is no air to get in the way.
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All, I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
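Before blaming the pool, it may be worth ruling out the two documented blockers, user holds and dependent clones. A hedged sketch with invented dataset names:

    zfs holds tank/fs@stuck         # a user hold blocks destruction
    zfs list -t all -r tank/fs      # any clones descended from this snapshot?
    zfs release keep tank/fs@stuck  # drop a hold tagged 'keep', if one exists
    zfs destroy tank/fs@stuck       # then retry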
2011 Jun 26
2
recovering from "zfs destroy -r"
Hi, Is there a simple way of rolling back to a specific TXG of a volume to recover from such a situation? Many thanks, Przem
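There is no supported per-TXG rollback of a single dataset, but whole-pool rewind at import time exists. A heavily hedged sketch (pool name invented; -T is undocumented and not present in every build):

    zpool export tank
    zpool import -nF tank           # dry run: report what a rewind would lose
    zpool import -F tank            # rewind a few TXGs and import
    # zpool import -T 123456 tank   # jump to a specific TXG, where supported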
2011 Jun 30
14
700GB gone?
I have a 1.5TB disk that has several partitions. One of them is 900GB. Now I can only see 300GB. Where is the rest? Is there a command I can run to reach the rest of the data? Will scrub help?
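A scrub verifies checksums; it won't recover capacity. Comparing what the disk label, the pool, and the datasets each report usually localizes the loss. A sketch with invented device and pool names:

    prtvtoc /dev/rdsk/c0t0d0s2   # does the label still carry the 900GB slice?
    zpool list tank              # pool-level SIZE/ALLOC/FREE
    zfs list -o space -r tank    # snapshots and reservations show up here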
2011 Mar 01
14
Good SLOG devices?
Hi, I'm running OpenSolaris 148 on a few boxes, and newer boxes are getting installed as we speak. What would you suggest for a good SLOG device? It seems some new PCI-E-based ones are hitting the market, but will those require special drivers? Cost is obviously also an issue here.... Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net
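Whatever device wins, attaching it is one command, and mirroring the SLOG is cheap insurance against losing in-flight sync writes. A sketch with invented names:

    zpool add tank log mirror c4t0d0 c4t1d0
    zpool status tank            # the 'logs' section should list the mirror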
2010 Dec 05
4
Zfs ignoring spares?
Hi all I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After zpool offlining these and then zpool replacing them with online spares, the resilver ended and I thought it'd be ok. Apparently not. Although the resilver succeeds, the pool status
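One common gotcha: after a spare finishes resilvering, it stays flagged as a spare until the failed disk is explicitly detached. A hedged sketch, names invented:

    zpool detach tank c5t3d0   # drop the replaced disk; the spare becomes permanent
    zpool status tank          # the vdev should now show the spare as a full member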
2010 Jul 05
5
never ending resilver
Hi list, Here's my case:

     pool: mypool
    state: DEGRADED
   status: One or more devices is currently being resilvered. The pool will
           continue to function, possibly in a degraded state.
   action: Wait for the resilver to complete.
    scrub: resilver in progress for 147h19m, 100.00% done, 0h0m to go
   config:

           NAME          STATE     READ WRITE CKSUM
           filerbackup13
2010 Sep 25
4
dedup testing?
Hi all Has anyone done any testing with dedup with OI? On opensolaris there is a nifty "feature" that allows the system to hang for hours or days if attempting to delete a dataset on a deduped pool. This is said to be fixed, but I haven't seen that myself, so I'm just wondering... I'll get a 10TB test box released for testing OI in a few weeks, but before
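The delete-time hang is generally tied to a dedup table too large for RAM, so sizing the DDT before testing may save a hung box. A sketch, pool name invented:

    zdb -DD tank               # DDT histogram plus in-core/on-disk entry sizes
    zpool get dedupratio tank  # how much dedup is actually buying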
2009 Jul 20
5
Offtopic: Which SAS / SATA HBA do you recommend?
Hi all, Sorry for the offtopic question. I hope though that others on this list or reading the archive find the answers useful too. It seems the Adaptec 1405 4-port SAS HBA I bought only works with RHEL and SuSE through a closed source driver, and thus is quite useless :-( I was stupid enough to think "Works with RHEL and SuSE" meant "Certified for RHEL and SuSE, but driver in
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup enabled, using the following settings:

    zpool create data c8t1d0
    zfs create data/shared
    zfs set dedup=on data/shared

The thing I was wondering about is that ZFS seems to dedup only at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
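Dedup in ZFS is block-level, but blocks only match when the content is identical and identically aligned, so "similar" files rarely dedup at all. A sketch to separate the two cases (paths invented):

    cp /data/shared/big.img /data/shared/big-copy.img  # identical blocks: ratio rises
    sync
    zpool get dedupratio data
    # prepending even one byte to a copy shifts every block boundary,
    # so an almost-identical file dedups to ~1.00x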
2010 Oct 16
4
resilver question
Hi all I'm seeing some rather bad resilver times for a pool of WD Green drives (I know, bad drives, but leave that). Does resilver go through the whole pool or just the VDEV in question? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the syllabus is presented
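Resilver walks the pool's metadata, but the actual reads and writes should concentrate on the affected vdev's members, which is easy to confirm empirically. A sketch, pool name invented:

    # during resilver, only disks in the degraded vdev should show heavy I/O
    zpool iostat -v tank 10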
2010 Oct 19
8
Balancing LVOL fill?
Hi all I have this server with some 50TB disk space. It originally had 30TB on WD Greens, was filled quite full, and another storage chassis was added. Now, the space problem is gone, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
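ZFS won't rebalance existing data on its own; allocation only evens out as old data is rewritten. One blunt approach is rewriting a dataset with send/receive, a hedged sketch assuming enough free space and invented names:

    zfs snapshot tank/old@move
    zfs send tank/old@move | zfs recv tank/old.new   # fresh writes spread over all vdevs
    zfs destroy -r tank/old
    zfs rename tank/old.new tank/old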
2003 Apr 24
3
new mgcp patch errors
See below. I tried to call 98013356 from the following phone (from mgcp.conf):

    [iptlf03]
    host = 192.168.33.3
    context = default
    inbanddtmf = 1
    callerid = 22545062
    line => aaln/1

Console output:

    == Spawn extension (capiring, 9988001133335566, 1) exited non-zero on 'MGCP/aaln/1@iptlf03-1'
    -- MGCP mgcp_hangup(MGCP/aaln/1@iptlf03-1) on aaln/1@iptlf03
    -- Delete connection 4
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists- workaround found
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool. 'zfs destroy -r pool/dataset' hung the machine within seconds
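The preview cuts off before the workaround itself; independent of that, splitting the recursive destroy in two keeps the large snapshot delete separate from the dataset delete, which at least narrows down where the hang lives. Hypothetical names:

    zfs destroy pool/dataset@monthly   # delete the 2.8TB snapshot on its own
    zfs destroy pool/dataset           # then the (now empty) dataset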