similar to: librmb: Mail storage on RADOS with Dovecot

Displaying 20 results from an estimated 2000 matches similar to: "librmb: Mail storage on RADOS with Dovecot"

2017 Sep 24
0
librmb: Mail storage on RADOS with Dovecot
On 22 Sep 2017, at 14.18, mj <lists at merit.unu.edu> wrote: > First, the Github link: > https://github.com/ceph-dovecot/dovecot-ceph-plugin > > I am not going to repeat everything which is on Github, but a short summary: > > - CephFS is used for storing Mailbox Indexes > - E-Mails are stored directly as RADOS objects > - It's a Dovecot plugin > > We
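
As a rough illustration of how such a plugin is wired into Dovecot, a minimal dovecot.conf sketch; the plugin and setting names (storage_rbox, rbox, rbox_pool_name) are assumed to match the project's README and are not verified here:

    # load the rbox storage plugin provided by dovecot-ceph-plugin
    mail_plugins = $mail_plugins storage_rbox

    # indexes live on a POSIX path (CephFS); message bodies are written as RADOS objects
    mail_location = rbox:~/rbox

    plugin {
      # RADOS pool holding the mail objects (illustrative name)
      rbox_pool_name = mail_storage
    }
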
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both dovecot and ceph-users ML so please don't mind if something seems too obvious for you. Hi, I have a question for both dovecot and ceph lists and below I'll explain what's going on. Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox), when using sdbox, a new file is stored for each email message. When using mdbox, multiple
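
For reference, the choice between the two formats is made in mail_location; a minimal sketch (paths illustrative):

    # sdbox: one file per message
    mail_location = sdbox:~/mail

    # mdbox: several messages per file, rotated once they reach mdbox_rotate_size
    mail_location = mdbox:~/mail
    mdbox_rotate_size = 10M
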
2019 Jul 08
8
Samba 4.10.6 for rhel7/centos7 rpms
Hi everyone, I've posted some updated source and binary rpms following the release of updates by the samba project. Updated rpms include samba-4.10.6 and ldb-1.5.5. Here are the links: - http://nova.polymtl.ca/~coyote/dist/samba/samba-4.10.6/RHEL7 As usual, these work fine for me (4.10.6) but YMMV. Please do report issues if you find some. Regards, Vincent
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hi, some time back we had similar discussions when we, as an email provider, discussed moving away from traditional NAS/NFS storage to Ceph. The problem with POSIX file systems and dovecot is that e.g. with mdbox only around ~20% of the IO operations are READ/WRITE, the rest are metadata IOs. You will not change this by using CephFS since it will basically behave the same way as e.g. NFS. We
2019 Jul 10
4
Samba 4.10.6 for rhel7/centos7 rpms
Hi Konstantin, On Wed, 10 Jul 2019, Konstantin Shalygin via samba wrote: > Vincent, excellent work! Thank you. Glad it's helping others. > Can you also provide support for rados vfs (already exists in SPEC) and > ctdb_mutex_ceph_rados_helper? That's probably doable but you'd have to BETA-test for me since I don't have a working ceph cluster at home. The information
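
For anyone wanting to try it, the helper is plugged in as CTDB's cluster/recovery lock command; a sketch with an assumed install path and illustrative cluster name, user, pool and object:

    # modern ctdb.conf style
    [cluster]
        recovery lock = !/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.ctdb ctdb_pool ctdb_reclock

    # or the legacy environment-style setting
    CTDB_RECOVERY_LOCK="!/usr/libexec/ctdb/ctdb_mutex_ceph_rados_helper ceph client.ctdb ctdb_pool ctdb_reclock"
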
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 07:55, Anoop C S via samba wrote: > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: >> Hi! >> >> Building current samba on debian bullseye with >> >> ./configure --enable-cephfs >> >> results in the following output: >> >> Checking for header cephfs/libcephfs.h : yes >> Checking for
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: > Hi! > > Building current samba on debian bullseye with > > ./configure --enable-cephfs > > results in the following output: > > Checking for header cephfs/libcephfs.h : yes > Checking for library cephfs : yes > Checking for ceph_statx in
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
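
A rough sketch of what a dedicated index pool could look like on the CephFS side (pool name, PG count and paths are illustrative):

    # create the pool and attach it to the filesystem as an extra data pool
    ceph osd pool create mail-index 64
    ceph fs add_data_pool cephfs mail-index

    # pin the dovecot index directory (and new files under it) to that pool
    setfattr -n ceph.dir.layout.pool -v mail-index /mnt/cephfs/dovecot/indexes
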
2024 Jul 30
1
ceph is disabled even if explicitly asked to be enabled
Hi! Building current samba on debian bullseye with ./configure --enable-cephfs results in the following output: Checking for header cephfs/libcephfs.h : yes Checking for library cephfs : yes Checking for ceph_statx in cephfs : ok Checking for ceph_openat in cephfs : not found Ceph support disabled due to
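
A quick way to check whether the installed libcephfs actually provides the symbol that configure is probing for (the library path is what Debian bullseye would typically use and is an assumption):

    # is it declared in the header?
    grep -n ceph_openat /usr/include/cephfs/libcephfs.h

    # is it exported by the shared library?
    nm -D /usr/lib/x86_64-linux-gnu/libcephfs.so.2 | grep ceph_openat
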
2018 May 23
3
ceph_vms performance
Hi, I'm testing out ceph_vms vs a cephfs mount with a cifs export. I currently have 3 active ceph mds servers to maximise throughput and when I have configured a cephfs mount with a cifs export, I'm getting reasonable benchmark results. However, when I tried some benchmarking with the ceph_vms module, I only got a third of the comparable write throughput. I'm just wondering if
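
When comparing the two setups it helps to run the same simple sequential-write test against each export; a rough sketch (server, share, mount point and sizes are illustrative):

    mount -t cifs //server/share /mnt/bench -o username=test
    dd if=/dev/zero of=/mnt/bench/bench.bin bs=1M count=4096 conv=fsync
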
2023 Jun 12
2
virsh not connecting to libvirtd?
Just found my issue. After I removed the cephfs mounts it worked! I will debug ceph. I assumed that because I could touch files on the mounted cephfs it was working. Now virsh list works! thanks jerry Lars Kellogg-Stedman > On Tue, Jun 06, 2023 at 04:56:38PM -0400, Jerry Buburuz wrote: >> Recently both virsh stopped talking to the libvirtd. Both stopped within >> a >> few days of
2023 May 09
2
MacOS clients - best options
Hi list, we have migrated a single node Samba server from Ubuntu Trusty to a 3-node CTDB Cluster on Debian Bullseye with Sernet packages. Storage is CephFS. We are running Samba in Standalone Mode with LDAP Backend. Samba Version: sernet-samba 99:4.18.2-2debian11 I don't know if it is relevant, but here's how we have mounted CephFS on the samba nodes: (fstab):/samba /srv/samba ceph
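
For comparison, a typical kernel-client CephFS entry in /etc/fstab looks roughly like this (monitor hosts, client name and secret file are illustrative; the entry quoted above is truncated):

    mon1,mon2,mon3:/samba  /srv/samba  ceph  name=samba,secretfile=/etc/ceph/samba.secret,_netdev,noatime  0  0
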
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Wed, 2024-07-31 at 08:36 +0300, Michael Tokarev via samba wrote: > 31.07.2024 07:55, Anoop C S via samba wrote: > > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote: > > > Hi! > > > > > > Building current samba on debian bullseye with > > > > > > ./configure --enable-cephfs > > > > > > results in
2016 Jan 08
2
Samba & Ceph
Hello List, has anyone tried to install samba with/on top of a ceph cluster? Regards, Dirk
2014 Feb 11
2
Can you verify currently defined libvirt secret provides valid Cephx auth?
As the subject suggests, I am wondering if it's possible to verify that the currently defined libvirt secret provides valid authentication via Cephx to a Ceph cluster. I ask because ideally I would like to verify that the given cephx credentials in my libvirt secret are valid before I even attempt the virsh attach-device on the domain. I tried searching for a solution to this, but I can't seem
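
One way to check it from the shell is to pull the key out of the defined secret and attempt a real RBD operation with it; a sketch in which the UUID, client name and pool are purely illustrative:

    UUID=2a5b3e...                              # the libvirt secret UUID
    KEY=$(virsh secret-get-value "$UUID")

    # write a throwaway keyring and try to list the pool with it
    printf '[client.libvirt]\n\tkey = %s\n' "$KEY" > /tmp/libvirt.keyring
    rbd --id libvirt --keyring /tmp/libvirt.keyring -p libvirt-pool ls && echo "cephx auth OK"
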
2016 Jan 08
1
Samba & Ceph
On 2016-01-08 at 09:31 -0800, Jeremy Allison wrote: > On Fri, Jan 08, 2016 at 04:26:24PM +0100, Dirk Laurenz wrote: > > Hello List, > > > > has anyone tried to install samba with/on top of a ceph cluster? > > Try compiling and setting up with vfs_ceph. Correct, that's basically it. > Needs some more work, but should work. Some posix features are not quite there
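
A minimal smb.conf sketch of the vfs_ceph setup being suggested (share name and ceph user id are illustrative):

    [cephshare]
        path = /
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        # vfs_ceph talks to the cluster directly, so kernel share modes cannot apply
        kernel share modes = no
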
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've got a couple of issues I've run into. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this: <disk type='network'
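
For completeness, the usual shape of such a block, as in the Ceph/libvirt documentation (pool, image, monitor host and secret UUID are illustrative):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='libvirt-pool/vm1-disk'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='2a5b3e...'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>
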
2018 Jun 03
1
CTDB over WAN Link with LMASTER/RECMASTER Disabled
Hi, I came across the 'CTDB_CAPABILITY_LMASTER=no' and 'CTDB_CAPABILITY_RECMASTER=no' options in my quest to salvage a rather poorly performing CTDB cluster over Ceph(fs). Unfortunately, the docs don't provide enough information for a clustering noob like myself. Would there be any benefit to disabling those capabilities for a branch office node on a high-latency WAN connection?
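
For reference, the options as they would appear in the legacy ctdbd settings file; the location varies by distro and CTDB version, so this is only an illustrative sketch:

    # e.g. /etc/sysconfig/ctdb or /etc/default/ctdb
    CTDB_CAPABILITY_LMASTER=no
    CTDB_CAPABILITY_RECMASTER=no
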
2019 Jul 11
2
Samba 4.10.6 for rhel7/centos7 rpms
Hi Konstantin, Thank you for the diff. I will review it and merge it today. About the missing directories, I think it may be doable to add them to the 'ctdb' rpm. As I'm not using ctdb, what should the ownership/permissions be for those directories? Regards, Vincent On Thu, 11 Jul 2019, Konstantin Shalygin wrote: > On 7/10/19 9:49 PM, vincent at cojot.name wrote: > >
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
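
On the Ceph side, RBD does expose advisory per-image locks that external tooling can take and release around a migration; a sketch of the relevant commands (pool, image and lock id are illustrative, and whether this is sufficient for libvirt-managed LXC is exactly the open question here):

    rbd lock add    libvirt-pool/lxc-guest01 migration-lock
    rbd lock list   libvirt-pool/lxc-guest01
    rbd lock remove libvirt-pool/lxc-guest01 migration-lock <locker-id>
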