similar to: LMTP re-reading all messages

Displaying 20 results from an estimated 4000 matches similar to: "LMTP re-reading all messages"

2011 Jul 05
2
Many "Error: Corrupted index cache file /XXX/dovecot.index.cache: invalid record size"
Hi all, I just joined this list, so I'm sorry if this problem has already been reported. I'm running Dovecot 2.0.13 on many servers: one for POP/IMAP access, others for LDA, others for authentication only, etc. All servers are accessing a shared file system, based on MooseFS (www.moosefs.org). The FS is mounted using FUSE. All my Dovecot servers have this configuration:
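When several Dovecot servers share one FUSE-mounted filesystem like this, the usual guidance for NFS-style shared storage applies; a minimal dovecot.conf sketch under that assumption (the poster's actual settings are not shown above):

    # Settings commonly suggested for shared/clustered mail storage;
    # illustrative only, not the poster's configuration.
    mmap_disable = yes      # don't mmap index files on a FS without mmap coherence
    mail_fsync = always     # flush writes so other servers see consistent data
    lock_method = dotlock   # avoid fcntl/flock if the FUSE mount handles them poorly
    mail_nfs_index = yes    # flush attribute caches for index files
    mail_nfs_storage = yes  # and for mail files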
2025 Apr 17
1
Gluster with ZFS
Gagan: Throwing my $0.02 in -- it depends on the system environment in which you are planning to deploy Gluster (and/or Ceph). I have Ceph running on my three-node HA Proxmox cluster using three OASLOA mini PCs that only have the Intel N95 processor (4-core/4-thread) with 16 GB of RAM and a cheap Micro Center store-brand 512 GB NVMe M.2 2230 SSD, and my Ceph cluster has been running without any
2014 Mar 05
2
Tons of "Failed to decrypt and verify packet"
Hi all, I tried Tinc 1.1 from git on 4 nodes, each one in a datacenter. They were able to ping each other, etc., but I had problems with multicast; nothing seemed to pass (all is OK with Tinc 1.0.23). I checked the logs, and on every node I have a lot of: Failed to decrypt and verify packet and Error while decrypting: error:00000000:lib(0):func(0):reason(0) So I went back to 1.0.23, which works
2025 Apr 17
4
Gluster with ZFS
Hi Alexander, Thanks for the update. Initially, I also thought of deploying Ceph, but Ceph is quite difficult to set up and manage. Moreover, it's also hardware-demanding. I think it's most suitable for a very large set-up with hundreds of clients. What do you think of MooseFS? Have you or anyone else tried MooseFS? If yes, how was its performance?
2018 Jan 19
0
Error: Corrupted dbox file
Hello Florent, How did you proceed with the upgrade? Did you follow the recommended upgrade steps for Ceph (mons first, then OSDs, then MDS)? Did you stop Dovecot before upgrading the MDS in particular? Did you remount the filesystem? Did you upgrade the Ceph client too? Give people the complete picture and someone might be able to help you. Ask on the ceph-users list too. Regards, Webert
2010 Jun 11
2
MooseFS repository
Hi, A repository for MooseFS has just been born. It provides CentOS 5.5 SRPMS, i386 and x86_64. cd /etc/yum.repos.d/; wget http://centos.kodros.fr/moosefs.repo ; yum install mfs Two points: - DNS may not be up to date where you are; the subdomain has just been created, so please be patient. - I have made an update to the .spec file to move config files to /etc/mfs instead of /etc. I had no time to test
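For reference, a yum .repo file of this kind typically looks like the sketch below; the actual contents of moosefs.repo are not shown in the message, so the baseurl here is only a placeholder:

    # /etc/yum.repos.d/moosefs.repo -- illustrative sketch, placeholder baseurl
    [moosefs]
    name=MooseFS packages for CentOS 5.5
    baseurl=http://centos.kodros.fr/centos/$releasever/$basearch/
    enabled=1
    gpgcheck=0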
2016 Jul 05
3
Winbind process stuck at 100% after changing use_mmap to no
On 05/07/16 21:16, Alex Crow wrote: > > > On 05/07/16 21:00, Rowland penny wrote: >> On 05/07/16 20:49, Alex Crow wrote: >>> FYI, by "it did, completely" I meant it failed completely. >>> >>> Even if the only file we have to have on the cluster FS (which now >>> seems to be down solely to the ctdb lock file), I'm still worried
2016 Jul 05
2
Winbind process stuck at 100% after changing use_mmap to no
On 05/07/16 20:49, Alex Crow wrote: > FYI, by "it did, completely" I meant it failed completely. > > Even if the only file we have to have on the cluster FS (which now > seems to be down solely to the ctdb lock file), I'm still worried about what > these failures mean: > > 1) does the -rw (without -m) test suggest any problems with the > MooseFS FS I'm
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 07:55, Anoop C S via samba wrote: > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: >> Hi! >> >> Building current samba on debian bullseye with >> >>    ./configure --enable-cephfs >> >> results in the following output: >> >> Checking for header cephfs/libcephfs.h              : yes >> Checking for
2016 Jul 03
4
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 13:06, Volker Lendecke wrote: > On Fri, Jul 01, 2016 at 10:00:21AM +0100, Alex Crow wrote: >> We've had a strange issue after following the recommendations at >> https://wiki.samba.org/index.php/Ping_pong, particularly the part >> about mmap coherence. We are running CTDB/Samba over a MooseFS >> clustered FS, and we'd not done the ping-pong before.
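The ping_pong test referenced on that wiki page is typically run on every cluster node against the same file on the clustered filesystem; a sketch of the usual invocations (the file path is a placeholder, and the lock count should be the number of nodes plus one):

    ping_pong /clusterfs/ctdb/test.dat 4        # byte-range lock coherence only
    ping_pong -rw /clusterfs/ctdb/test.dat 4    # adds read/write data coherence
    ping_pong -rw -m /clusterfs/ctdb/test.dat 4 # same test, but through mmap'd I/O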
2016 Jul 05
4
Winbind process stuck at 100% after changing use_mmap to no
On 05/07/16 19:45, Volker Lendecke wrote: > On Tue, Jul 05, 2016 at 07:21:16PM +0100, Alex Crow wrote: >> I've set up the "DR" side of my cluster to "use mmap = no" and with >> "private dir" removed from the smb.conf. > Why do you set "use mmap = no"? > >> I have the MooseFS guys on the case as well. Should I put them in touch
2016 Jul 05
2
Winbind process stuck at 100% after changing use_mmap to no
Hi Volker, I apologise if I came across as an a***hole here. Responses below: On 05/07/16 20:45, Volker Lendecke wrote: > On Tue, Jul 05, 2016 at 08:12:59PM +0100, Alex Crow wrote: >> >> On 05/07/16 19:45, Volker Lendecke wrote: >>> On Tue, Jul 05, 2016 at 07:21:16PM +0100, Alex Crow wrote: >>>> I've set up the "DR" side of my cluster to
2024 Jul 30
1
ceph is disabled even if explicitly asked to be enabled
Hi! Building current samba on debian bullseye with ./configure --enable-cephfs results in the following output: Checking for header cephfs/libcephfs.h : yes Checking for library cephfs : yes Checking for ceph_statx in cephfs : ok Checking for ceph_openat in cephfs : not found Ceph support disabled due to
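The failing check means the build could not find ceph_openat() in the installed client library; one way to confirm what the installed libcephfs actually exports (header and library paths are typical Debian locations, adjust as needed):

    grep -n ceph_openat /usr/include/cephfs/libcephfs.h
    nm -D /usr/lib/x86_64-linux-gnu/libcephfs.so.2 | grep ceph_openat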
2016 Jul 13
2
Winbind process stuck at 100% after changing use_mmap to no
On Tue, Jul 05, 2016 at 10:12:31PM +0100, Alex Crow wrote: > I did not "put it" like anything. I just saw a problem in my setup, > read some documents on the samba wiki, followed some advice and saw some > unexpected behaviour. Perhaps I imagined the wiki to be more moderated > than it really is, so my trust was misplaced. I don't blame anyone; all > I'm trying to
2023 May 09
2
MacOS clients - best options
Hi list, we have migrated a single-node Samba server from Ubuntu Trusty to a 3-node CTDB cluster on Debian Bullseye with Sernet packages. Storage is CephFS. We are running Samba in standalone mode with an LDAP backend. Samba version: sernet-samba 99:4.18.2-2debian11 I don't know if it is relevant, but here's how we have mounted CephFS on the samba nodes: (fstab):/samba /srv/samba ceph
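The fstab line is cut off above; for context, a generic kernel-client CephFS fstab entry looks roughly like the sketch below (monitor list, share path, mount point and credentials are placeholders, not the poster's values; the monitor list can also be omitted and taken from ceph.conf):

    # Placeholder values -- not the poster's actual entry.
    mon1,mon2,mon3:/samba  /srv/samba  ceph  name=samba,secretfile=/etc/ceph/samba.secret,_netdev,noatime  0  0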
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: > Hi! > > Building current samba on debian bullseye with > >    ./configure --enable-cephfs > > results in the following output: > > Checking for header cephfs/libcephfs.h              : yes > Checking for library cephfs                         : yes > Checking for ceph_statx in
2016 Jul 14
2
Winbind process stuck at 100% after changing use_mmap to no
On Thu, Jul 14, 2016 at 02:39:48PM +0100, Alex Crow wrote: > >The main hint I would like to give to the MooseFS developers is to get > >ping_pong -rw working. If you need any assistance in setting that up > >and getting arguments on why this is important, let us know! > > > >With best regards, > > > >Volker > > Any arguments that I can pass on would
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both the dovecot and ceph-users MLs, so please don't mind if something seems too obvious for you. Hi, I have a question for both the dovecot and ceph lists, and below I'll explain what's going on. Regarding the dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox): when using sdbox, a new file is stored for each email message. When using mdbox, multiple
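For reference, the two formats are selected through mail_location; a minimal sketch with placeholder mailbox paths:

    mail_location = sdbox:~/mail    # sdbox: one file per message
    # or
    mail_location = mdbox:~/mail    # mdbox: multiple messages per storage file
    mdbox_rotate_size = 10M         # rotate mdbox storage files at ~10 MB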
2016 Apr 22
4
Storage cluster advise, anybody?
Dear Experts, I would like to ask everybody: what would you advise using as a storage cluster, or as a distributed filesystem? I did my own research into what I can do, but I hit a snag with my seemingly best choice, so I decided to stay away from it and ask clever people what they would use. My requirements are: 1. I would like to have one big (say, comparable to a petabyte)
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
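A sketch of how such a dedicated CephFS data pool can be created and pinned to a directory; pool name, PG count, filesystem name and path are placeholders, not anything from the thread:

    ceph osd pool create mail-index 64
    ceph fs add_data_pool cephfs mail-index
    # New files under this directory go to the dedicated pool via the file layout xattr:
    setfattr -n ceph.dir.layout.pool -v mail-index /mnt/cephfs/dovecot/index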