
Displaying 20 results from an estimated 3000 matches similar to: "LMTP re-reading all messages"

2011 Jul 05
2
Many "Error: Corrupted index cache file /XXX/dovecot.index.cache: invalid record size"
Hi all, I just joined this list, so I'm sorry if this problem has already been reported. I'm running Dovecot 2.0.13 on many servers: one for POP/IMAP access, others for LDA, others for authentication only, etc. All servers access a shared file system based on MooseFS (www.moosefs.org). The FS is mounted using FUSE. All my Dovecot servers have this configuration:
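
For reference, the usual Dovecot advice for keeping indexes on a shared or cluster filesystem is to avoid mmap and force explicit flushing; a minimal dovecot.conf sketch along those lines (the settings are standard Dovecot options, but the values are illustrative, not the poster's actual configuration):

  # index/mail handling on a shared (FUSE/MooseFS-style) mount
  mmap_disable = yes        # never mmap index files over the network FS
  mail_fsync = always       # flush writes so other servers see consistent indexes
  lock_method = dotlock     # avoid fcntl locking if the FS implements it poorly
  mail_nfs_index = yes      # flush attribute/data caches around index access
  mail_nfs_storage = yes    # same for the mail files themselves
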
2014 Mar 05
2
Tons of "Failed to decrypt and verify packet"
Hi all, I tried Tinc 1.1 from git on 4 nodes, each one in a datacenter. They were able to ping each other, etc... but I had problems with multicast, nothing seemed to pass (all is OK with Tinc 1.0.23). I checked the logs and on every node I have a lot of: "Failed to decrypt and verify packet" and "Error while decrypting: error:00000000:lib(0):func(0):reason(0)". So I went back to 1.0.23, which works
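
If the errors come from the new SPTPS protocol in the 1.1 branch, one common experiment is to pin a node back to the legacy protocol and see whether the messages disappear; a hedged sketch (the option is documented for tinc 1.1, the placement here is illustrative):

  # tinc.conf (or a specific hosts/ file) on the 1.1 nodes
  ExperimentalProtocol = no   # fall back to the 1.0-style protocol for this connection
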
2018 Jan 19
0
Error: Corrupted dbox file
Hello Florent, How did you proceed with the upgrade? Did you follow the recommended upgrade steps for ceph (mons first, then OSDs, then MDS)? Did you stop dovecot before upgrading the MDS in particular? Did you remount the filesystem? Did you upgrade the ceph client too? Give people the complete picture and someone might be able to help you. Ask on the ceph-users list too. Regards, Webert
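
For context, the order referred to above looks roughly like the following on a systemd-based cluster (a sketch only; the unit names are the stock ceph targets, and a real upgrade also needs the per-release safeguards such as noout):

  systemctl restart ceph-mon.target    # 1. mons first, one host at a time, wait for quorum
  systemctl restart ceph-osd.target    # 2. then OSDs, host by host, wait for HEALTH_OK
  systemctl restart ceph-mds.target    # 3. MDS last, ideally with dovecot stopped
  ceph versions                        # confirm every daemon reports the new release
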
2010 Jun 11
2
MooseFS repository
Hi, A repository for MooseFS has just been born. It provides CentOS 5.5 SRPMS, i386 and x86_64. cd /etc/yum.repos.d/; wget http://centos.kodros.fr/moosefs.repo ; yum install mfs Two points: - DNS may not be up to date where you are. The subdomain has just been created. Please be patient. - I have made an update to the .spec file, to move config files to /etc/mfs instead of /etc. I had no time to test
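
The downloaded moosefs.repo is just a plain yum repository definition; its contents would be along these lines (the baseurl below is a placeholder, not taken from the announcement):

  # /etc/yum.repos.d/moosefs.repo -- illustrative only
  [moosefs]
  name=MooseFS packages for CentOS 5.5
  baseurl=http://centos.kodros.fr/5.5/$basearch/   # placeholder path
  enabled=1
  gpgcheck=0
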
2016 Jul 05
3
Winbind process stuck at 100% after changing use_mmap to no
On 05/07/16 21:16, Alex Crow wrote: > > > On 05/07/16 21:00, Rowland penny wrote: >> On 05/07/16 20:49, Alex Crow wrote: >>> FYI, by "it did, completely" I meant it failed completely. >>> >>> Even if the only file we have to have on the cluster FS (which now >>> seems to be down solely to the ctdb lock file), I'm still worried
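
The one file the thread still needs on the cluster FS is the CTDB recovery lock; how it is set depends on the ctdb release, but it is roughly one of the following (paths are examples, not the poster's):

  # older sysconfig-style ctdb (/etc/ctdb/ctdbd.conf or /etc/default/ctdb):
  CTDB_RECOVERY_LOCK=/mnt/moosefs/ctdb/.reclock

  # newer ctdb (Samba 4.9+, /etc/ctdb/ctdb.conf):
  [cluster]
      recovery lock = /mnt/moosefs/ctdb/.reclock
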
2016 Jul 05
2
Winbind process stuck at 100% after changing use_mmap to no
On 05/07/16 20:49, Alex Crow wrote: > FYI, by "it did, completely" I meant it failed completely. > > Even if the only file we have to have on the cluster FS (which now > seems to be down solely to the ctdb lock file), I'm still worried what > these failures mean: > > 1) does the -rw (without -m) test suggest any problems with the > MooseFS FS I'm
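
The tests mentioned here come from the ctdb ping_pong tool described on the Samba wiki: run it simultaneously on every node against a file on the cluster FS, with the node count plus one as the lock count (the path and count below are examples):

  ping_pong /mnt/moosefs/ping.dat 4        # fcntl byte-range lock coherence
  ping_pong -rw /mnt/moosefs/ping.dat 4    # add read/write I/O coherence
  ping_pong -rwm /mnt/moosefs/ping.dat 4   # same check through mmap
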
2023 May 09
2
MacOS clients - best options
Hi list, we have migrated a single-node Samba server from Ubuntu Trusty to a 3-node CTDB cluster on Debian Bullseye with Sernet packages. Storage is CephFS. We are running Samba in Standalone Mode with an LDAP backend. Samba version: sernet-samba 99:4.18.2-2debian11. I don't know if it is relevant, but here's how we have mounted CephFS on the samba nodes: (fstab):/samba /srv/samba ceph
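
A complete fstab entry of that shape would normally carry the monitor list, client name and secret, roughly like this (every value below is a placeholder, not the poster's):

  # /etc/fstab on the samba nodes -- illustrative kernel cephfs mount
  mon1,mon2,mon3:/samba  /srv/samba  ceph  name=samba,secretfile=/etc/ceph/samba.secret,_netdev,noatime  0 0
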
2016 Jul 03
4
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 13:06, Volker Lendecke wrote: > On Fri, Jul 01, 2016 at 10:00:21AM +0100, Alex Crow wrote: >> We've had a strange issue after following the recommendations at >> https://wiki.samba.org/index.php/Ping_pong, particularly the part >> about mmap coherence. We are running CTDB/Samba over a MooseFS >> clustered FS, and we'd not done the ping-pong before.
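
The option in the subject line is a [global] smb.conf parameter; the change being discussed amounts to something like this (only "use mmap" comes from the thread, the rest is illustrative):

  [global]
      clustering = yes
      use mmap = no     # stop smbd/winbind mapping tdb files when the cluster FS lacks mmap coherence
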
2016 Jul 05
4
Winbind process stuck at 100% after changing use_mmap to no
On 05/07/16 19:45, Volker Lendecke wrote: > On Tue, Jul 05, 2016 at 07:21:16PM +0100, Alex Crow wrote: >> I've set up the "DR" side of my cluster to "use mmap = no" and with >> "private dir" removed from the smb.conf. > Why do you set "use mmap = no"? > >> I have the MooseFS guys on the case as well. Should I put them in touch
2016 Jul 05
2
Winbind process stuck at 100% after changing use_mmap to no
Hi Volker, I apologise if I came across as an a***hole here. Responses below: On 05/07/16 20:45, Volker Lendecke wrote: > On Tue, Jul 05, 2016 at 08:12:59PM +0100, Alex Crow wrote: >> >> On 05/07/16 19:45, Volker Lendecke wrote: >>> On Tue, Jul 05, 2016 at 07:21:16PM +0100, Alex Crow wrote: >>>> I've set up the "DR" side of my cluster to
2016 Jul 13
2
Winbind process stuck at 100% after changing use_mmap to no
On Tue, Jul 05, 2016 at 10:12:31PM +0100, Alex Crow wrote: > I did not "put it" like anything. I just saw a problem in my setup, > read some documents on the samba wiki, followed some advice and saw some > unexpected behaviour. Perhaps I imagined the wiki to be more moderated > than it really is, so my trust was misplaced. I don't blame anyone, all > I'm trying to
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both the dovecot and ceph-users MLs, so please don't mind if something seems too obvious to you. Hi, I have a question for both the dovecot and ceph lists, and below I'll explain what's going on. Regarding the dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox): when using sdbox, a new file is stored for each email message. When using mdbox, multiple
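
For readers weighing the two formats, the choice is made in mail_location; a sketch (paths and rotate size are examples, not the poster's values):

  # sdbox: one file per message
  mail_location = sdbox:~/mail

  # mdbox: several messages per file, rotated by size
  mail_location = mdbox:~/mdbox
  mdbox_rotate_size = 10M    # start a new storage file after roughly 10 MB
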
2016 Jul 14
2
Winbind process stuck at 100% after changing use_mmap to no
On Thu, Jul 14, 2016 at 02:39:48PM +0100, Alex Crow wrote: > >The main hint I would like to give to the MooseFS developers is to get > >ping_pong -rw working. If you need any assistance in setting that up > >and getting arguments on why this is important, let us know! > > > >With best regards, > > > >Volker > > Any arguments that I can pass on would
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
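
A hedged sketch of what such a dedicated pool can look like on CephFS: create the pool, attach it to the filesystem, and point the mail directory at it with a layout attribute (names and PG count are illustrative):

  ceph osd pool create mail 64                                 # dedicated data pool, 64 PGs as an example
  ceph fs add_data_pool cephfs mail                            # allow the cephfs filesystem to use it
  setfattr -n ceph.dir.layout.pool -v mail /mnt/cephfs/mail    # new files under this directory land in 'mail'
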
2016 Apr 22
4
Storage cluster advice, anybody?
Dear Experts, I would like to ask everybody: what would you advise using as a storage cluster, or as a distributed filesystem? I did my own research of what I can do, but I hit a snag with my seemingly best choice, so I finally decided to stay away from it and ask clever people what they would use. My requirements are: 1. I would like to have one big (say, comparable to petabyte)
2023 Jun 12
2
virsh not connecting to libvirtd?
Just found my issue. After I removed the cephfs mounts it worked! I will debug ceph. I assumed that because I could touch files on the mounted cephfs it was working. Now virsh list works! thanks jerry Lars Kellogg-Stedman > On Tue, Jun 06, 2023 at 04:56:38PM -0400, Jerry Buburuz wrote: >> Recently both virsh stopped talking to the libvirtd. Both stopped within >> a few days of
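
A quick way to separate a hung cephfs mount from an actual libvirt problem (generic commands, nothing here is taken from this setup):

  timeout 5 stat /mnt/cephfs || echo "cephfs mount is hanging"   # a hung mount blocks even stat
  ceph -s                                                        # look for unhealthy MDS / blocked requests
  virsh -c qemu:///system list --all                             # retry libvirt once the mount responds
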
2018 May 23
3
ceph_vms performance
Hi, I'm testing out ceph_vms vs a cephfs mount with a cifs export. I currently have 3 active ceph mds servers to maximise throughput, and when I configured a cephfs mount with a cifs export I got reasonable benchmark results. However, when I tried some benchmarking with the ceph_vms module, I only got a third of the comparable write throughput. I'm just wondering if
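
For reference, the "three active MDS" part is controlled by max_mds on the filesystem; a sketch (the filesystem name is an example):

  ceph fs set cephfs max_mds 3    # allow three active MDS ranks for parallel metadata work
  ceph fs status                  # confirm three ranks are active and the rest stand by
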
2016 Jul 14
2
Winbind process stuck at 100% after changing use_mmap to no
On Thu, Jul 14, 2016 at 02:22:12PM +0100, Alex Crow wrote: > At the moment we're pretty happy with the Samba side of things. I > did wonder if there was any help that you kind chaps might be able > to give to the MooseFS guys if they need it (they've not asked yet > but I've suggested they might be able to). Overall it's working The main hint I would like to give to
2018 Oct 12
1
vfs_ceph quota support?
On Fri, Oct 12, 2018 at 11:19:50AM +0200, David Disseldorp via samba wrote: > Hi Felix, > > On Mon, 8 Oct 2018 16:30:17 +0200, Felix Stolte via samba wrote: > > > does vfs_ceph support quotas set on a directory inside cephfs? > > Not at this stage. CephFS uses a non-standard (xattr) interface for > quotas, which is not currently supported by Samba.
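
The non-standard xattr interface mentioned here is the ceph.quota.* attributes set directly on a cephfs directory, outside of Samba; for example (path and limits are illustrative):

  setfattr -n ceph.quota.max_bytes -v 100000000000 /mnt/cephfs/share   # ~100 GB cap
  setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/share         # file-count cap
  getfattr -n ceph.quota.max_bytes /mnt/cephfs/share                   # read the limit back
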
2016 Jan 08
1
Samba & Ceph
On 2016-01-08 at 09:31 -0800, Jeremy Allison wrote: > On Fri, Jan 08, 2016 at 04:26:24PM +0100, Dirk Laurenz wrote: > > Hello List, > > > > has anyone tried to install samba with/on top of a ceph cluster? > > Try compiling and setting up with vfs_ceph. Correct, that's basically it. > Needs some more work, but should work. Some posix features are not quite there
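
A minimal sketch of "setting up with vfs_ceph" as a share definition (share name, path and ceph user are assumptions, not from the thread):

  [cephshare]
      path = /                                  # a path inside the cephfs tree, not a local mount
      vfs objects = ceph
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba                      # cephx client.samba needs MDS/OSD caps
      read only = no
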