similar to: ceph vfs can't find specific path

Displaying 20 results from an estimated 4000 matches similar to: "ceph vfs can't find specific path"

2020 Sep 21
2
ceph vfs can't find specific path
"Rowland penny via samba" samba at lists.samba.org ? 21 September 2020 11:03 > On 21/09/2020 08:55, Jonas via samba wrote: > No idea about the cephs part, but your smb.conf isn't correct, your > workgroup shouldn't be the same as the realm, perhaps use 'INT' instead. > You have no 'idmap config' lines, how are you going to map your AD users > to
2020 Sep 21
0
ceph vfs can't find specific path
On 21/09/2020 08:55, Jonas via samba wrote: > Hello > > > Using two file servers with samba 4.12.6 running as a CTDB cluster and trying to share a specific path on a cephfs. After loading the config the ctdb log shows the following error: > > > ctdb-eventd[248]: 50.samba: ERROR: samba directory "/plm" not available > > > Here is my samba configuration: >
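For context, a minimal sketch of the kind of clustered smb.conf this thread is circling around, combining Rowland's advice (a workgroup distinct from the realm, explicit 'idmap config' lines) with a vfs_ceph share; the realm, domain name, id ranges and ceph user below are placeholder assumptions, not values from the original post:

    [global]
        workgroup = INT
        realm = INT.EXAMPLE.COM
        security = ads
        clustering = yes
        # map the default (*) domain and the AD domain to separate id ranges
        idmap config * : backend = tdb
        idmap config * : range = 3000-7999
        idmap config INT : backend = rid
        idmap config INT : range = 10000-999999

    [plm]
        # with vfs_ceph loaded, path is interpreted inside CephFS
        path = /plm
        vfs objects = ceph
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no

Worth noting: with vfs_ceph the share path lives inside CephFS rather than on the node's local filesystem, so CTDB's 50.samba share-directory check can fail even when the share itself works; whether that is what produced the error above is not clear from the snippet.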
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
2016 Jan 08
1
Samba & Ceph
On 2016-01-08 at 09:31 -0800, Jeremy Allison wrote: > On Fri, Jan 08, 2016 at 04:26:24PM +0100, Dirk Laurenz wrote: > > Hello List, > > > > has anyone tried to install samba with/on top of a ceph cluster? > > Try compiling and setting up with vfs_ceph. Correct, that's basically it. > Needs some more work, but should work. Some POSIX features are not quite there
2023 Dec 14
2
Gluster -> Ceph
Hi all, I am looking into ceph and cephfs and in my head I am comparing them with gluster. The way I have been running gluster over the years is either replicated or replicated-distributed clusters. The small setup we have had has been a replicated cluster with one arbiter and two fileservers. These fileservers have been configured with RAID6 and that RAID has been used as the brick. If disaster
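As a rough illustration of the topology described there (two fileservers plus one arbiter, one brick per node), such a Gluster volume is typically created along these lines; the host names and brick paths are placeholders, not from the original post:

    gluster volume create gvol0 replica 3 arbiter 1 \
        fs1:/data/brick1/gvol0 \
        fs2:/data/brick1/gvol0 \
        arbiter1:/data/brick1/gvol0
    gluster volume start gvol0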
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hi, some time back we had similar discussions when we, as an email provider, discussed moving away from traditional NAS/NFS storage to Ceph. The problem with POSIX file systems and dovecot is that e.g. with mdbox only around 20% of the IO operations are READ/WRITE, the rest are metadata IOs. You will not change this by using CephFS since it will basically behave the same way as e.g. NFS. We
2018 May 16
0
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Hello Jack, yes, I imagine I'll have to do some work on tuning the block size on cephfs. Thanks for the advice. I knew that using mdbox, messages are not removed, but I thought that was true in sdbox too. Thanks again. We'll soon do benchmarks of sdbox vs mdbox over cephfs with bluestore backend. We'll have to do some work on how to simulate user traffic, for writes and reads.
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi, I'm trying to build an active/active virtualization cluster using a Ceph RBD as backing for each libvirt-managed LXC. I know live migration for LXC isn't yet possible, but I'd like to build my infrastructure as if it were. That is, I would like to be sure proper locking is in place for live migrations to someday take place. In other words, I'm building things as if I were
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute nodes that will be using a ceph rbd pool for storing vm disk image files. I've got a couple of issues I've run into. First, per the standard ceph documentation examples [1], the way to add a disk is to create a block in the VM definition XML that looks something like this: <disk type='network'
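The quoted XML is cut off in that snippet; the shape of the rbd disk block in the standard ceph documentation is roughly as below, with the pool/image name, monitor host and secret UUID as placeholders:

    <disk type='network' device='disk'>
      <source protocol='rbd' name='libvirt-pool/vm1-disk1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>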
2023 Dec 14
2
Gluster -> Ceph
Big RAID arrays aren't great as bricks. If the array does fail, the larger brick means much longer heal times. The main question I ask when evaluating storage solutions is, "what happens when it fails?" With ceph, if the placement database is corrupted, all your data is lost (happened to my employer, once, losing 5PB of customer data). With Gluster, it's just files on disks, easily
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both dovecot and ceph-users ML so please don't mind if something seems too obvious for you. Hi, I have a question for both dovecot and ceph lists and below I'll explain what's going on. Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox), when using sdbox, a new file is stored for each email message. When using mdbox, multiple
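For reference, the choice between the two formats comes down to a single dovecot setting; a minimal sketch, with the mailbox path and rotation size as example values only:

    # sdbox: one file per message
    mail_location = sdbox:~/mail

    # mdbox: several messages per file, rotated once a file reaches the given size
    #mail_location = mdbox:~/mail
    #mdbox_rotate_size = 16M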
2014 Feb 26
1
Samba and CEPH
Greetings all! I am in the process of deploying a POC around SAMBA and CEPH. I'm having some trouble locating concise instructions on how to get them to work together (without having to mount CEPH to the computer first and then exporting that mount via SAMBA). Right now, my stopper is trying to locate ceph.so for x64 CentOS 6.5. [2014/02/26 15:05:23.923617, 0]
2016 Jan 08
2
Samba & Ceph
Hello List, has anyone tried to install samba with/on top of a ceph cluster? Regards, Dirk
2014 Jan 16
0
Re: Ceph RBD locking for libvirt-managed LXC (someday) live migrations
On Wed, Jan 15, 2014 at 05:47:35PM -0500, Joshua Dotson wrote: > Hi, > > I'm trying to build an active/active virtualization cluster using a Ceph > RBD as backing for each libvirt-managed LXC. I know live migration for LXC > isn't yet possible, but I'd like to build my infrastructure as if it were. > That is, I would like to be sure proper locking is in place for
2018 May 23
3
ceph_vms performance
Hi, I'm testing out ceph_vms vs a cephfs mount with a cifs export. I currently have 3 active ceph mds servers to maximise throughput, and when I have configured a cephfs mount with a cifs export, I'm getting reasonable benchmark results. However, when I tried some benchmarking with the ceph_vms module, I only got a third of the comparable write throughput. I'm just wondering if
2023 May 11
1
MacOS clients - best options
On 5/11/23 15:35, Thomas Hukkelberg via samba wrote: > We have the exact same problem but never really found the underlying > issue and whether it's a cephfs bug, a samba bug or only related to > vfs_fruit. It's a known subtle Ceph issue, I'm afraid: https://tracker.ceph.com/issues/50719 If the issue still exists with a newer Ceph version than mentioned in the bug report, please
2023 May 09
2
MacOS clients - best options
Hi list, we have migrated a single-node Samba server from Ubuntu Trusty to a 3-node CTDB cluster on Debian Bullseye with Sernet packages. Storage is CephFS. We are running Samba in Standalone Mode with LDAP Backend. Samba Version: sernet-samba 99:4.18.2-2debian11 I don't know if it is relevant, but here's how we have mounted CephFS on the samba nodes: (fstab):/samba /srv/samba ceph
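That fstab line is truncated in the snippet; a kernel-client CephFS mount in fstab generally looks something like the line below (monitor addresses, cephx user and secret file are placeholders, not the poster's actual values):

    mon1:6789,mon2:6789,mon3:6789:/samba  /srv/samba  ceph  name=samba,secretfile=/etc/ceph/samba.secret,noatime,_netdev  0  0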
2019 Nov 07
2
samba performance when writing lots of small files
hi jeremy / all, On 11/6/19 10:39 PM, Jeremy Allison wrote: > This is re-exporting via ceph whilst creating 1000 files, > yes ? What timings do you get when doing this via Samba > onto a local ext4/xfs/btrfs/zfs filesystem ? yes, creating 10k small files. doing the same on a local ssd, formatted with an ext4 fs without any special options: root at plattentest:/mnt-ssd/os# time for s in
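The timing command is cut off above; the kind of loop being measured is presumably something like the following (the file count and size are guesses, not the poster's exact command):

    # create 10000 small files and time the whole run
    time for s in $(seq 1 10000); do
        dd if=/dev/zero of=file_$s bs=1k count=1 status=none
    done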
2023 Jun 12
2
virsh not connecting to libvertd ?
Just found my issue. After I removed the cephfs mounts it worked! I will debug ceph. I assumed because I could touch files on mounted cephfs it was working. Now virsh list works! thanks jerry Lars Kellogg-Stedman wrote: > On Tue, Jun 06, 2023 at 04:56:38PM -0400, Jerry Buburuz wrote: >> Recently both virsh stopped talking to the libvirtd. Both stopped within >> a >> few days of
2015 Mar 31
0
Re: couple of ceph/rbd questions
On 03/31/2015 11:47 AM, Brian Kroth wrote: > Hi, I've recently been working on setting up a set of libvirt compute > nodes that will be using a ceph rbd pool for storing vm disk image > files. I've got a couple of issues I've run into. > > First, per the standard ceph documentation examples [1], the way to add > a disk is to create a block in the VM definition XML