Displaying 20 results from an estimated 6000 matches similar to: "Samba & Ceph"
2016 Jan 08
1
Samba & Ceph
On 2016-01-08 at 09:31 -0800, Jeremy Allison wrote:
> On Fri, Jan 08, 2016 at 04:26:24PM +0100, Dirk Laurenz wrote:
> > Hello List,
> >
> > has anyone tried to install samba with/on top of a ceph cluster?
>
> Try compiling and setting up with vfs_ceph.
Correct, that's basically it.
> Needs some more work, but should work.
Some POSIX features are not quite there
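[For anyone following along: a minimal smb.conf share for vfs_ceph, as a sketch only - the cephx user "samba" and the default config path are assumptions to adapt to your cluster:]

    [cephfs]
        path = /
        vfs objects = ceph
        # vfs_ceph talks to the cluster through libcephfs, so no kernel
        # CephFS mount is needed on the Samba host
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba
        read only = no
        kernel share modes = no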
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Wed, 2024-07-31 at 08:36 +0300, Michael Tokarev via samba wrote:
> 31.07.2024 07:55, Anoop C S via samba wrote:
> > On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote:
> > > Hi!
> > >
> > > Building current samba on debian bullseye with
> > >
> > > ./configure --enable-cephfs
> > >
> > > results in
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 07:55, Anoop C S via samba wrote:
> On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote:
>> Hi!
>>
>> Building current samba on debian bullseye with
>>
>> ./configure --enable-cephfs
>>
>> results in the following output:
>>
>> Checking for header cephfs/libcephfs.h : yes
>> Checking for
2014 Feb 26
1
Samba and CEPH
Greetings all!
I am in the process of deploying a POC around SAMBA and CEPH. I'm
having some trouble locating concise instructions on how to get them to
work together (without having to mount CEPH on the computer first and
then export that mount via SAMBA).
Right now, my blocker is locating ceph.so for x64 CentOS 6.5.
[2014/02/26 15:05:23.923617, 0]
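[A hedged pointer for this one: ceph.so, if built, lands in Samba's VFS module directory, which the build records; the paths below are illustrative for an x64 EL6-style layout:]

    # Print the module directory this smbd was built with
    smbd -b | grep MODULESDIR
    # Check whether the ceph VFS module is actually present there
    ls -l /usr/lib64/samba/vfs/ceph.so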
2018 May 23
3
ceph_vms performance
Hi,
I'm testing out ceph_vms vs a cephfs mount with a cifs export.
I currently have 3 active ceph mds servers to maximise throughput and
when I have configured a cephfs mount with a cifs export, I'm getting
a reasonable benchmark results.
However, when I tried some benchmarking with the ceph_vms module, I
only got a third of the comparable write throughput.
I'm just wondering if
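[One way to make the comparison concrete is an identical streaming write against each export; a sketch only, with /mnt/kernel-cifs and /mnt/vfs-cifs as hypothetical mount points of the two CIFS exports:]

    # Export backed by a kernel CephFS mount
    dd if=/dev/zero of=/mnt/kernel-cifs/bench.bin bs=1M count=4096 conv=fsync
    # Export backed by the ceph VFS module
    dd if=/dev/zero of=/mnt/vfs-cifs/bench.bin bs=1M count=4096 conv=fsync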
2024 Aug 04
1
ceph is disabled even if explicitly asked to be enabled
31.07.2024 09:38, Anoop C S via samba wrote:
> On Wed, 2024-07-31 at 08:36 +0300, Michael Tokarev via samba wrote:
>> The problem is that ceph is disabled by configure even if it is
>> explicitly enabled by the command-line switch. Configure should fail
>> here instead of continuing - *that* is the problem.
>
> This is/was always the situation because building
2020 Sep 21
2
ceph vfs can't find specific path
Hello
Using two file servers with samba 4.12.6 running as a CTDB cluster and trying to share a specific path on a cephfs. After loading the config, the ctdb log shows the following error:
ctdb-eventd[248]: 50.samba: ERROR: samba directory "/plm" not available
Here is my samba configuration:
[global]
clustering = Yes
netbios name = FSCLUSTER
realm = INT.EXAMPLE.COM
registry
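[A common cause of this event-script error is CTDB verifying share paths before CephFS is mounted. One hedged workaround, assuming a CTDB release that reads /etc/ctdb/script.options, is to skip the check:]

    # /etc/ctdb/script.options
    # 50.samba normally requires every share path to exist at startup;
    # skip that check when the path only appears once CephFS is mounted
    CTDB_SAMBA_SKIP_SHARE_CHECK=yes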
2019 Nov 07
2
samba performance when writing lots of small files
hi jeremy / all,
On 11/6/19 10:39 PM, Jeremy Allison wrote:
> This is re-exporting via ceph whilst creating 1000 files,
> yes? What timings do you get when doing this via Samba
> onto a local ext4/xfs/btrfs/zfs filesystem?
yes, creating 10k small files. doing the same on a local ssd, formatted
with an ext4 fs without any special options:
root@plattentest:/mnt-ssd/os# time for s in
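[The command is truncated in the archive; a hypothetical loop of the same shape, creating 10000 small files, would be:]

    time for s in $(seq 1 10000); do echo "payload" > file$s; done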
2024 Jul 31
1
ceph is disabled even if explicitly asked to be enabled
On Tue, 2024-07-30 at 21:12 +0300, Michael Tokarev via samba wrote:
> Hi!
>
> Building current samba on debian bullseye with
>
> ./configure --enable-cephfs
>
> results in the following output:
>
> Checking for header cephfs/libcephfs.h : yes
> Checking for library cephfs : yes
> Checking for ceph_statx in
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack.
That's good to know. It is definitely something to consider.
In a distributed storage scenario we might build a dedicated pool for that
and tune the pool as more capacity or performance is needed.
Regards,
Webert Lima
DevOps Engineer at MAV Tecnologia
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*
On Wed, May 16, 2018 at 4:45 PM Jack <ceph@jack.fr.eu.org> wrote:
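[For illustration, such a dedicated pool could be created like this; the pool name and PG count are placeholders, not recommendations:]

    # Create a pool for mail data and mark it for CephFS use
    ceph osd pool create mail_storage 128
    ceph osd pool application enable mail_storage cephfs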
2024 Jul 30
1
ceph is disabled even if explicitly asked to be enabled
Hi!
Building current samba on debian bullseye with
./configure --enable-cephfs
results in the following output:
Checking for header cephfs/libcephfs.h : yes
Checking for library cephfs : yes
Checking for ceph_statx in cephfs : ok
Checking for ceph_openat in cephfs : not found
Ceph support disabled due to
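[A quick way to confirm which symbol the failing test wants is to inspect the installed library directly; the path below is the usual Debian location and may differ:]

    # On bullseye's libcephfs, ceph_statx resolves but ceph_openat does not
    nm -D /usr/lib/x86_64-linux-gnu/libcephfs.so.2 | grep -E 'ceph_(statx|openat)'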
2018 Oct 08
3
vfs_ceph quota support?
Hi Folks,
does vfs_ceph support quotas set on a directory inside cephfs?
Regards Felix
--
Forschungszentrum Jülich GmbH
52425 Jülich
Registered office: Jülich
Registered in the Commercial Register of the District Court of Düren, No. HR B 3498
Chairman of the Supervisory Board: MinDir. Dr. Karl Eugen Huthmacher
Management Board: Prof. Dr.-Ing. Wolfgang Marquardt (Chairman),
Karsten Beneke (deputy
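[For context: CephFS itself exposes directory quotas through extended attributes; whether vfs_ceph honours them is exactly the question here. The native mechanism, with path and size as placeholders:]

    # Cap a CephFS directory at 100 GiB via the native quota xattr
    setfattr -n ceph.quota.max_bytes -v 107374182400 /mnt/cephfs/projects
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/projects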
2014 Jan 15
2
Ceph RBD locking for libvirt-managed LXC (someday) live migrations
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were
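[For reference, RBD has an advisory locking interface that a setup like this could build on; pool, image, and lock ID below are hypothetical:]

    # Take an advisory lock before starting the container on a host
    rbd lock add vmpool/lxc-guest01 migration-lock
    # List current lockers (also shows the client id needed to remove one)
    rbd lock ls vmpool/lxc-guest01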
2015 Mar 31
2
couple of ceph/rbd questions
Hi, I've recently been working on setting up a set of libvirt compute
nodes that will be using a ceph rbd pool for storing vm disk image
files. I've got a couple of issues I've run into.
First, per the standard ceph documentation examples [1], the way to add a
disk is to create a block in the VM definition XML that looks something
like this:
<disk type='network'
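[The XML is cut off above; a complete block of that shape, with the monitor host, pool/image name, and secret UUID as placeholders, typically looks like:]

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='vmpool/guest01.img'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>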
2023 Dec 14
2
Gluster -> Ceph
Hi all,
I am looking into ceph and cephfs, and in my
head I am comparing them with gluster.
The way I have been running gluster over the years
is either a replicated or replicated-distributed clusters.
The small setup we have had has been a replicated cluster
with one arbiter and two fileservers.
These fileservers have been configured with RAID6 and
that RAID has been used as the brick.
If disaster
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both the dovecot and ceph-users MLs, so please
don't mind if something seems too obvious to you.
Hi,
I have a question for both dovecot and ceph lists and below I'll explain
what's going on.
Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox), when
using sdbox, a new file is stored for each email message.
When using mdbox, multiple
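[The two layouts are chosen via mail_location; a minimal sketch, with paths and rotate size as examples only:]

    # dovecot.conf: one file per message
    mail_location = sdbox:~/mail
    # ...or several messages per file, rotated at a size threshold:
    #mail_location = mdbox:~/mail
    #mdbox_rotate_size = 16M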
2023 Dec 14
2
Gluster -> Ceph
A big RAID isn't great as a brick. If the array does fail, the larger brick means much longer heal times.
My main question I ask when evaluating storage solutions is, "what happens when it fails?"
With ceph, if the placement database is corrupted, all your data is lost (it happened to my employer once, losing 5PB of customer data). With Gluster, it's just files on disks, easily
2018 Jun 03
1
CTDB over WAN Link with LMASTER/RECMASTER Disabled
Hi,
I came across the 'CTDB_CAPABILITY_LMASTER=no' and 'CTDB_CAPABILITY_RECMASTER=no' options in my quest to salvage a rather poorly performing CTDB cluster over Ceph(fs). Unfortunately, the docs don't provide enough information for a clustering noob like myself. Would there be any benefit to disabling those options for a branch office node on a high-latency WAN connection?
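[A sketch of how those options are set on the WAN node, in the legacy ctdbd.conf style the post refers to; the file location varies by distro:]

    # /etc/ctdb/ctdbd.conf on the branch-office node: never take on the
    # lmaster/recmaster roles, so record location and recovery
    # coordination stay on the low-latency side of the WAN
    CTDB_CAPABILITY_LMASTER=no
    CTDB_CAPABILITY_RECMASTER=no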
2023 May 09
2
MacOS clients - best options
Hi list,
we have migrated a single node Samba server from Ubuntu Trusty to a
3-node CTDB Cluster on Debian Bullseye with Sernet packages. Storage is
CephFS. We are running Samba in Standalone Mode with LDAP Backend.
Samba Version: sernet-samba 99:4.18.2-2debian11
I don't know if it is relevant, but here's how we have mounted CephFS on
the samba nodes:
(fstab):/samba /srv/samba ceph
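[The fstab entry is truncated; a complete line of that shape, with the cephx user and secret file as placeholders, would look like:]

    :/samba  /srv/samba  ceph  name=samba,secretfile=/etc/ceph/samba.secret,_netdev,noatime  0  0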
2017 Sep 22
3
librmb: Mail storage on RADOS with Dovecot
Hi ceph-ers,
The email below was posted on the ceph mailing list yesterday by Wido den
Hollander. I guess this could be interesting for users here as well.
MJ
-------- Forwarded Message --------
Subject: [ceph-users] librmb: Mail storage on RADOS with Dovecot
Date: Thu, 21 Sep 2017 10:40:03 +0200 (CEST)
From: Wido den Hollander <wido@42on.com>
To: ceph-users@ceph.com
Hi,
A tracker