Hi,

I have about 100TB of mailboxes in Maildir format on NFS (NetApp FAS) and it works very well, both for performance and stability.

The main problem with using Ceph or GlusterFS to store Maildir is the heavy metadata load that Dovecot generates when checking for new messages and other activities. On my NFS storage the bulk of the traffic and I/O is metadata traffic on small files (a high-file-count workload), and Ceph and GlusterFS are very inefficient with this kind of workload (many GETATTR/ACCESS/LOOKUP metadata operations and a high number of small files).

Ciao

On 05/04/22 01:40, dovecot at ptld.com wrote:
> Do all of the configuration considerations pertaining to using NFS on
>
> https://doc.dovecot.org/configuration_manual/nfs/
>
> equally apply to using something like Ceph / GlusterFS?
>
> And if people wouldn't mind chiming in with which (NFS, Ceph & GlusterFS) they feel is better for maildir mail storage on dedicated non-container servers?
> Which is better for robustness / stability?
> Which is better for speed / performance?
>
> Thank you.
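For reference, the NFS page linked above recommends settings along these lines. This is only a sketch of the kind of options it covers (the setting names are real Dovecot options, but consult the page for the authoritative list and for version-specific advice):

```
# Sketch of NFS-related Dovecot settings; values are illustrative.
mmap_disable = yes      # don't mmap index files over NFS
mail_fsync = always     # fsync aggressively; NFS client caches can lose writes
mail_nfs_storage = yes  # flush NFS attribute caches around mail file access
mail_nfs_index = yes    # flush NFS attribute caches around index file access
```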
> I have about 100TB of mailboxes in Maildir format on NFS (NetApp FAS)
> and works very well, for performance but also stability.

Hmmm, I would like to read something else, e.g. that the design/elementary properties of distributed storage result in all such systems performing about the same. Maybe there should be more focus on Ceph performance development instead of on this cephadm?

> The main problem of using Ceph or GlusterFS to store Maildir is the high
> use of metadata that dovecot require for check new messages and others
> activity. On my storage/NFS the main part of the traffic and I/O is
> metadata traffic on small file (high file count workload).

That is why I am using mdbox files of 4MB. I hope that gives me hardly any write amplification. I am also separating SSD and HDD pools by auto-archiving email to the HDD pools.

> And Ceph or GlusterFS are very inefficient with this kind of workload
> (many metadata GETATTR/ACCESS/LOOKUP and high numer of small files).

I am using RBD. After Luminous I had some issues with CephFS and do not want to store operational stuff on it yet.
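The mdbox setup described above (4MB files, auto-archiving to slower pools) can be sketched roughly like this. `mail_location`, `mdbox_rotate_size`, and alternate storage via `:ALT=` are real Dovecot features; the paths here are purely illustrative:

```
# Sketch of an mdbox layout with ~4MB rotation and an archive tier.
mail_location = mdbox:~/mdbox:ALT=/hdd-pool/%u/mdbox  # ALT = slower HDD-backed storage
mdbox_rotate_size = 4M   # append messages into files up to ~4MB each
```

Older mail can then be moved to the ALT path with `doveadm altmove`, which is one way to implement the auto-archiving mentioned above.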