Displaying 20 results from an estimated 10000 matches similar to: "1.0rc8 status report"
2006 Oct 12
2
1.0rc8: another problem? Possibly 64-bit index?
Yesterday I gave a status report about 1.0rc8 in production, which
mentioned a problem about "Login process died too early..."
Timo suggested a patch for logging error messages. I've applied this.
Others suggested increasing "login_max_processes_count". That was already
way above our likely maximum, but I've doubled it anyway.
Today, I've just repeated the
2006 Oct 03
2
dovecot, procmail and deliver
(Using dovecot 1.0 RC7 on Fedora Core 5)
<scene set>
Hitherto we have used UW-IMAP on a "farm" of Linux machines mounting NFS
from a NetApp. (The UW-IMAP author doesn't like use of NFS, but with
careful use of NFS mount arguments ('noac,actimeo=0' etc.) and trying to
ensure that all activity for a given user takes place within one machine
in the farm, we seem to
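The 'noac,actimeo=0' arguments mentioned above disable NFS attribute caching, so every client machine sees mbox changes immediately at the cost of extra round trips to the server. A hypothetical /etc/fstab entry combining them might look like the following (the server name, export path, and the additional hard/intr options are illustrative assumptions, not taken from the post):

```
# assumed server/export names; noac,actimeo=0 are the options the post cites
netapp:/vol/mail  /var/spool/mail  nfs  noac,actimeo=0,hard,intr  0  0
```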
2006 Oct 20
4
1.0.rc10 status report
(Background: Relatively new to dovecot; looking to do transparent
replacement of long-established UW-IMAP on cluster of Linux boxes which
NFS-mount a shared "/var/spool/mail".)
With rc8, where I had already increased "login_max_processes_count" from
default 128 to 1024, we had still hit the issue of too many logins
crashing dovecot, so that trial had only lasted a couple of
2006 Aug 21
4
RC7: its issues or mine?
Background: I'm new to dovecot (although with many years Washington IMAP
behind me). We're considering migrating from Washington IMAP to dovecot
on the main service here, and have just started trying dovecot, using RC7.
Washington IMAP has the usual(-ish) "/var/spool/mail" shared area for the
INBOX (trad. UNIX "From " format); a user's folders default to being
2006 Oct 16
1
indexes?
Picture: A set of very similar UN*X IMAP servers all NFS-mounting their
INBOX area (traditional Unix format) from a common "/var/spool/mail"
area; activity for any given user ought to be within one box although this
cannot be 100% guaranteed. There is the risk of multiple simultaneous
access (e.g. simultaneous LDA/delivers; simultaneous LDA/deliver and
user-driven IMAP update; etc.).
2007 May 22
1
simultaneous access to folder
We have for many years been a UW-IMAP site, with users having their own
traditional, private, mbox-format INBOX and folders: almost (but not
quite) no complications of shared or simultaneous access. We have just
completed a transparent transition to dovecot (official 1.0.0 release).
But we have one residual issue affecting one important user account.
UW-IMAP specifically only allows single
2006 Jul 19
3
/var/spool/mail directory size and subdirectories
(Complete newbie to dovecot. I hope what follows isn't something I've
missed in some FAQ somewhere...)
On a traditional UNIX filesystem with UW-IMAP several years ago, we
encountered major performance problems when "/var/spool/mail/" got big (we
would currently be at ~20,000 entries). This was due to the inefficiency of
the UNIX filesystem when creating and deleting the lockfiles
2006 Jul 03
0
No subject
I'd need "04" including its leading "0". ("1200"->"00" etc.)
> Depending on what other tools you use, you could also use the hash (H)
> modifier, but maybe your delivery agent can't do that (unless of course
> you plan to use dovecot-lda too)
With our UW set-up, everything goes through UW's c-client library
(sendmail local
2006 Nov 24
1
mailadm? authentication vs. authorization?
Does "dovecot" have anything similar to the UW IMAP "mailadm" group
operation? From near the end of:
http://www.washington.edu/imap/documentation/RELNOTES.html
'Support for SASL authentication identity vs. authorization identity in
the IMAP and POP3 servers. If the user indicated by the authentication
identity is in the "mailadm" group, he may
2006 Aug 31
1
deliver LDA and INBOX location
(OS: Fedora Core 5; dovecot: 1.0 rc7)
On a typical UNIX-like OS, the INBOXes are in "/var/spool/mail/" using the
user identifier: so user 'fred' has INBOX "/var/spool/mail/fred".
We have a well-established different convention which subdivides this,
based on the last two digits of the uid: "/var/spool/mail/12/fred" (for
fred's uid as something ending
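The subdivision convention described here is easy to sketch. The helper name and default base path below are illustrative only; note the subdirectory keeps its leading zero, matching the "1200"->"00" example in the earlier post:

```python
def inbox_path(username: str, uid: int, base: str = "/var/spool/mail") -> str:
    """Subdivide the mail spool by the last two digits of the uid,
    keeping the leading zero (e.g. uid 3412 -> subdirectory "12",
    uid 1200 -> subdirectory "00")."""
    subdir = f"{uid % 100:02d}"
    return f"{base}/{subdir}/{username}"
```

For example, inbox_path("fred", 3412) yields "/var/spool/mail/12/fred".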
2006 Feb 06
0
Mixed IMAP/POP3 environment and message status flags
Hello All,
I'm trying to migrate from UW Imap to Dovecot (latest 1.0 beta 2). The
target is to go from the actual environment (Eudora + POP3) to a web based
IMAP environment (IMAP + SquirrelMail). While migrating we will need to
keep both environments for some time, with users having both the ability to
use Eudora through POP3 while in the office and also the ability to check
e-mail
2007 Feb 09
5
resilience suggestion
On the whole we are pleased with our trials of dovecot to replace UW-IMAP.
But (ah!) we have hit one particular problem, in which we think dovecot
could probably benefit from a resilience improvement.
We're running dovecot on Fedora Core 5 (FC5), with passwd map details
supplied by NIS. We have found that "nscd" sometimes thinks that a
username is invalid, even though it is valid.
2006 Nov 24
1
Thanks! Migration UWimap -> Dovecot report
Best Dovecot devs,
We moved from UW-imap&pop3 to Dovecot this morning (~500 accounts) and
reduced our traffic from the home directory server to the imap server
bigtime:
| 22 Nov| 0.1 0.8| 0.0 0.0| 0.4 0.5| 1550.6 42.9| 1557.3 67.9|
| 23 Nov| 0.3 1.0| 0.0 0.1| 0.4 0.6| 1331.8 37.3| 1337.2 46.3|
| 24 Nov| 0.0 0.4| 0.0 0.0|
2007 Mar 22
5
netapp/maildir/dovecot performance
We are seeing some poor performance recently that is focused around
users with large mailboxes (100,000 message /INBOX, 80,000 message
subfolders, etc).
The performance problem manifests as very high system% utilization -
basically iowait for NFS.
There are two imap servers with plenty of horsepower/memory/etc. They
are connected to a 3050c cluster via gig-e. Here are the mount
options:
2003 Dec 04
1
Severe pop3 incompatibility report
Hello,
According to our experience, the most recent dovecot is incompatible with
the Eudora POP3 client
- eudora (5.1, 6), pop3, 'leave on server' enabled: the clients receive
the full inbox *every time* they check the inbox on the server.
[Users of other MUAs experienced the same once, when switching from
uw-pop3d/imapd to dovecot.]
That is absolutely devastating, especially for home users
2007 Aug 01
2
Mount options and NFS: just checking...
Greetings -
I'm now in the last couple of weeks before going live with Dovecot
(v1.0.3) on our revamped IMAP service. I'd like to double-check
about the best mount options to use; could someone advise, please?
I have three separate directory trees for the message store, the
control files and the index files. These are arranged as follows:
Message Store
Mounted over NFS from
2005 Jun 17
2
GSSAPI support status
Hi list!
I'm wondering, what is the current status of GSSAPI (krb5) support in
Dovecot? I know from googling that there used to be a patch for it
around a year ago, but I haven't seen a trace of that patch ever since,
and GSSAPI doesn't even seem to be mentioned in the Dovecot source tree
(at least not according to "grep -irl").
Is GSSAPI planned at all, as it stands? If
2003 Jul 29
2
corrupt mbox, mailboxes not found, and message read status
I'm running dovecot 0.99.10-0.rc2 deb packages for Debian Woody from
braincells.com. My old environment was a server running UW-IMAP for
IMAP and POP services. We are still using mbox for all mailboxes. I
use IMP for web based access.
Forgive me if these issues have already been discussed or fixed in cvs.
Any additional pointers to this info. would be appreciated.
I have noticed a
2008 Jul 01
0
[Fwd: Re: University of Washington lays off 66 technology workers.]
I would expect this means the end of UWIMAP....which probably leaves DC as
open-source IMAP of choice. There were 66 people doing IMAP and Pine/Alpine
development that were laid off at UWash due to funding cuts; Mark Crispin, one
of the fathers of IMAP, was among those laid off.
From the keyboard of:
James Morris
Lead Engineer, UW Technology
University of Washington
1999 Feb 17
3
Variable Names
Dear R users,
That's probably a silly question even for a newbie but :
Is it possible to assign variable prices as variable names ?
For instance, would something like the following work :
for (i in 1:4)
{
error.i <- some_calculation
}
so "error.i" variables would be created for i=1,2,3,4
i.e. error.1 error.2 error.3 etc. etc.
Costas
--