similar to: CTDB over WAN Link with LMASTER/RECMASTER Disabled

Displaying 20 results from an estimated 800 matches similar to: "CTDB over WAN Link with LMASTER/RECMASTER Disabled"

2014 Jul 08
1
smbd does not start under ctdb
Hi, 2-node DRBD cluster with OCFS2. Both nodes: openSUSE 4.1.9 with DRBD 8.4 and ctdbd 2.3. All seems OK with ctdb: n1: ctdb status Number of nodes:2 pnn:0 192.168.0.10 OK (THIS NODE) pnn:1 192.168.0.11 OK Generation:1187222392 Size:2 hash:0 lmaster:0 hash:1 lmaster:1 Recovery mode:NORMAL (0) Recovery master:0 n2: ctdb status Number of nodes:2 pnn:0 192.168.0.10 OK pnn:1 192.168.0.11
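On ctdb 2.x, smbd is normally started by CTDB's event scripts rather than by the distro's init system, so a healthy "ctdb status" alone doesn't mean Samba will be launched. A minimal sketch, assuming the legacy /etc/sysconfig/ctdb layout used by ctdb 2.3 (file path and reclock location are illustrative):
    # /etc/sysconfig/ctdb (legacy ctdb 2.x settings, illustrative)
    CTDB_MANAGES_SAMBA=yes                        # let the 50.samba event script start/stop smbd
    CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/reclock   # must live on the shared OCFS2 filesystem
    # and make sure the distro does not also start smbd itself, e.g.:
    # chkconfig smb off    (or: systemctl disable smb)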
2018 Sep 18
4
CTDB potential locking issue
Hi All, I have a newly implemented two-node CTDB cluster running on CentOS 7, Samba 4.7.1. The node network is a direct 1Gb link. Storage is CephFS. ctdb status is OK. It seems to be running well so far but I'm frequently seeing the following in my log.smbd: [2018/09/18 19:16:15.897742, 0] > ../source3/lib/dbwrap/dbwrap_ctdb.c:1207(fetch_locked_internal) > db_ctdb_fetch_locked for
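The fetch_locked warnings mean smbd waited longer than expected for a clustered record lock. A hedged sketch of commands often used to narrow this down (locking.tdb is the usual suspect, but the database name is an assumption here):
    # per-database hot keys and lock wait statistics
    ctdb dbstatistics locking.tdb
    # list the clustered databases and check overall counters
    ctdb getdbmap
    ctdb statistics | grep -i lock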
2014 Oct 07
1
CTDB on Samba 4.1.12 as member file server.
Hello all, I have a CTDB issue and I'm not sure where to start... I followed this guide by Steve, which is nice. The difference in my setup is that I don't have DRBD running... Also I have 4 x GbE NICs, all connected to a switch with different IPs but the same subnet. This is supposed to load balance the traffic since, under Samba DNS, they all have the same name. I'm only planing 3
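For a CTDB-managed member file server the clustering part of smb.conf is small. A minimal sketch, assuming the cluster is already joined to the domain (netbios name and share path are illustrative):
    # smb.conf (cluster-relevant excerpt, illustrative)
    [global]
        clustering = yes
        security = ADS
        netbios name = CLUSTERNAME    # one name for all nodes, matching the round-robin DNS entry
    [share]
        path = /clusterfs/share
        read only = no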
2018 Sep 19
3
CTDB potential locking issue
Hi Martin Many thanks for the detailed response. A few follow-ups inline: On Wed, Sep 19, 2018 at 5:19 AM Martin Schwenke <martin at meltin.net> wrote: > Hi David, > > On Tue, 18 Sep 2018 19:34:25 +0100, David C via samba > <samba at lists.samba.org> wrote: > > > I have a newly implemented two node CTDB cluster running on CentOS 7, > Samba > > 4.7.1
2018 May 16
2
dovecot + cephfs - sdbox vs mdbox
I'm sending this message to both dovecot and ceph-users ML so please don't mind if something seems too obvious for you. Hi, I have a question for both dovecot and ceph lists and below I'll explain what's going on. Regarding dbox format (https://wiki2.dovecot.org/MailboxFormat/dbox), when using sdbox, a new file is stored for each email message. When using mdbox, multiple
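For reference, the two dbox variants are selected purely via mail_location; paths here are illustrative:
    # dovecot 10-mail.conf (illustrative)
    # sdbox: one file per message
    mail_location = sdbox:~/mail
    # mdbox: many messages per m.* file, rotated by size
    #mail_location = mdbox:~/mdbox
    #mdbox_rotate_size = 10M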
2016 Aug 31
3
status of Continuous availability in SMB3
Hi Michael Adam: Thanks for your work on Samba. I am looking for some advice and your help. I have been stuck on continuous availability with Samba 4.3.9 for two weeks. Continuous availability in SMB3 is an attractive feature and I am struggling to enable it. smb.conf and ctdb.conf are attached. The cluster file system is CephFS, mounted at /CephStorage. Client: Windows 8 Pro. root at node0:~# samba
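For context, durable/continuously-available handles on a clustered Samba share are usually tied to a few smb.conf options. A hedged sketch based on the general durable-handles guidance, not on the attached configs (share name and path are illustrative):
    # smb.conf excerpt (illustrative prerequisites for durable handles on a cluster)
    [global]
        clustering = yes
    [cashare]
        path = /CephStorage/share
        durable handles = yes
        kernel oplocks = no
        kernel share modes = no
        posix locking = no
Whether Samba 4.3.9 then actually advertises continuous availability to a Windows 8 client is the open question in this thread.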
2011 Apr 11
1
[CTDB] how does LMASTER know where the record is stored?
Greetings list, I was looking at the wiki "samba and clustering" and a ctdb.pdf; admittedly both are quite old (2006 or 2007) and I don't know how things have changed over the years, but I just have two questions about LMASTER: < this is from the pdf > LMASTER is fixed; LMASTER is based on record key only; LMASTER knows where the record is stored; new records are stored on LMASTER. Q1.
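A sketch of the idea in the PDF as I read it (the exact hash function is an implementation detail, treat this as illustrative): the LMASTER for a record is a pure function of the record key, so every node can compute it locally without asking anyone:
    lmaster(key) = hash(key) mod number_of_nodes
New records are therefore created on that node, and the LMASTER only tracks which node currently holds the latest copy of the record (the DMASTER); it does not necessarily hold the data itself.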
2018 May 16
1
[ceph-users] dovecot + cephfs - sdbox vs mdbox
Thanks Jack. That's good to know. It is definitely something to consider. In a distributed storage scenario we might build a dedicated pool for that and tune the pool as more capacity or performance is needed. Regards, Webert Lima DevOps Engineer at MAV Tecnologia *Belo Horizonte - Brasil* *IRC NICK - WebertRLZ* On Wed, May 16, 2018 at 4:45 PM Jack <ceph at jack.fr.eu.org> wrote:
2023 May 09
2
MacOS clients - best options
Hi list, we have migrated a single-node Samba server from Ubuntu Trusty to a 3-node CTDB cluster on Debian Bullseye with SerNet packages. Storage is CephFS. We are running Samba in standalone mode with an LDAP backend. Samba version: sernet-samba 99:4.18.2-2debian11. I don't know if it is relevant, but here's how we have mounted CephFS on the Samba nodes: (fstab):/samba /srv/samba ceph
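Not from this thread, but the usual starting point for macOS clients is the fruit VFS stack; a hedged smb.conf sketch (option values are the common defaults people start from, not a recommendation specific to this setup):
    # smb.conf excerpt (illustrative macOS interop settings)
    [global]
        vfs objects = catia fruit streams_xattr
        fruit:metadata = stream
        fruit:model = MacSamba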
2023 Jun 12
2
virsh not connecting to libvirtd?
Just found my issue. After I removed the cephfs mounts it worked! I will debug ceph. I assumed that because I could touch files on the mounted cephfs it was working. Now virsh list works! Thanks, Jerry. Lars Kellogg-Stedman > On Tue, Jun 06, 2023 at 04:56:38PM -0400, Jerry Buburuz wrote: >> Recently both virsh stopped talking to the libvirtd. Both stopped within >> a >> few days of
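A small hedged check for this kind of situation: a single touch doesn't say much about whether the mount responds within a bounded time, so forcing a round trip with a timeout can catch a wedged mount before it blocks other daemons (mount point and timeout are illustrative):
    timeout 10 ls -l /mnt/cephfs >/dev/null && echo "mount responsive" || echo "mount stuck or slow"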
2018 May 23
3
ceph_vms performance
Hi, I'm testing out ceph_vms vs a cephfs mount with a CIFS export. I currently have 3 active Ceph MDS servers to maximise throughput, and when I configured a cephfs mount with a CIFS export I got reasonable benchmark results. However, when I tried some benchmarking with the ceph_vms module, I only got a third of the comparable write throughput. I'm just wondering if
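Assuming "ceph_vms" here refers to Samba's Ceph VFS module, the two setups being compared typically differ only in how the share reaches CephFS. A hedged sketch of the two share definitions (paths, ceph.conf location and user id are illustrative):
    # share backed by a kernel cephfs mount, exported over SMB
    [cephfs_kernel]
        path = /mnt/cephfs/share
    # share going through the Ceph VFS module (no local mount)
    [cephfs_vfs]
        vfs objects = ceph
        path = /share
        ceph:config_file = /etc/ceph/ceph.conf
        ceph:user_id = samba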
2018 Oct 08
3
vfs_ceph quota support?
Hi Folks, does vfs_ceph support quotas set on a directory inside cephfs? Regards Felix -- Forschungszentrum Jülich GmbH 52425 Jülich Registered office: Jülich; registered in the commercial register of the Amtsgericht Düren, No. HR B 3498; Chairman of the supervisory board: MinDir. Dr. Karl Eugen Huthmacher; Management board: Prof. Dr.-Ing. Wolfgang Marquardt (chairman), Karsten Beneke (deputy
2020 Mar 09
4
[home] trash folder
Hi, I have a share called [home], designed as described here: https://wiki.samba.org/index.php/User_Home_Folders#Creating_the_Home_Folder_for_a_New_User. So far I have no problems, but I want to enable a trash folder for each user. At this time I have about 8000 home directories. The directories are subfolders of [home]. Is it possible to enable a trash folder inside the home directory
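Not an answer from this thread, but the usual mechanism for a per-user trash folder is vfs_recycle stacked on the share. A hedged sketch that keeps deleted files in a .recycle folder inside each user's home (repository path is relative to the connect path, so it lands in the user's own directory):
    # smb.conf excerpt (illustrative)
    [home]
        vfs objects = recycle
        recycle:repository = .recycle
        recycle:keeptree = yes
        recycle:versions = yes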
2019 Jul 11
2
Samba 4.10.6 for rhel7/centos7 rpms
Hi Konstantin, Thank you for the diff. I will review it and merge it today. About the missing directories, I think it may be doable to add them to the 'ctdb' rpm. As I'm not using ctdb, what should the ownership/permissions be for those directories? Regards, Vincent On Thu, 11 Jul 2019, Konstantin Shalygin wrote: > On 7/10/19 9:49 PM, vincent at cojot.name wrote: > >
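For illustration only, since the ownership/permissions are exactly what is being asked here and the thread does not settle them: directories are added to an rpm via %dir entries in the %files section, along the lines of
    # spec file excerpt (illustrative; subpackage name and attr values are placeholders)
    %files ctdb
    %dir %attr(0755, root, root) /var/lib/ctdb
    %dir %attr(0755, root, root) /var/lib/ctdb/persistent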
2018 Oct 12
1
vfs_ceph quota support?
On Fri, Oct 12, 2018 at 11:19:50AM +0200, David Disseldorp via samba wrote: > Hi Felix, > > On Mon, 8 Oct 2018 16:30:17 +0200, Felix Stolte via samba wrote: > > > is the vfs_ceph supporting quota set on a directory inside cephfs? > > Not at this stage. CephFS uses a non-standard (xattr) interface for > quotas, which is not currently supported by Samba.
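For reference, the xattr interface mentioned above is how CephFS directory quotas are set outside Samba; a hedged example run against a cephfs-mounted directory (path and limits are illustrative):
    # limit a directory subtree to ~10 GB and 100k files
    setfattr -n ceph.quota.max_bytes -v 10000000000 /mnt/cephfs/share
    setfattr -n ceph.quota.max_files -v 100000 /mnt/cephfs/share
    getfattr -n ceph.quota.max_bytes /mnt/cephfs/share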
2020 Jan 02
2
Access Error for Roaming Profiles Share
Hi, I am trying to address some error messages that are hitting the log files for two 4.9.5-Debian file servers in our all-Samba AD domain. Most prominently "connect to service Profiles initially as user MYDOMAIN\tc-mj00y2ps$ (uid=11128, gid=10515) (pid 1634)" "../source3/smbd/uid.c:453(change_to_user_internal)" "change_to_user_internal: chdir_current_service()
2016 Jan 08
2
Samba & Ceph
Hello List, has anyone tried to install Samba with/on top of a Ceph cluster? Regards, Dirk
2016 Jan 08
1
Samba & Ceph
On 2016-01-08 at 09:31 -0800, Jeremy Allison wrote: > On Fri, Jan 08, 2016 at 04:26:24PM +0100, Dirk Laurenz wrote: > > Hello List, > > > > has anyone tried to install Samba with/on top of a Ceph cluster? > > Try compiling and setting up with vfs_ceph. Correct, that's basically it. > Needs some more work, but should work. Some POSIX features are not quite there
2019 Jul 12
1
Samba 4.10.6 for rhel7/centos7 rpms
Hi Konstantin, On Fri, 12 Jul 2019, Konstantin Shalygin wrote: > On 7/11/19 8:59 PM, vincent at cojot.name wrote: >> Thank you for the diff. I will review it and merge it today. >> About the missing directories, I think it may be doable to add them to the >> 'ctdb' rpm. As I'm not using ctdb, what should the ownership/permissions >> be for those
2020 Oct 29
1
CTDB Question: external locking tool
Hi Bob, On Tue, 27 Oct 2020 15:09:34 +1100, Martin Schwenke via samba <samba at lists.samba.org> wrote: > On Sun, 25 Oct 2020 20:44:07 -0400, Robert Buck <robert.buck at som.com> > wrote: > > > We use a Golang-based lock tool that we wrote for CTDB. That tool interacts > > with our 3.4 etcd cluster, and follows the requirements specified in the > >
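For context, CTDB hands the cluster lock to an external helper when the recovery lock setting starts with "!"; the helper takes the lock, reports success on stdout and keeps running for as long as it holds it. A hedged sketch of the ctdb.conf side only (helper path and etcd endpoint are illustrative, standing in for the Golang tool described above):
    # /etc/ctdb/ctdb.conf (illustrative)
    [cluster]
        recovery lock = !/usr/local/bin/ctdb_etcd_lock --endpoints=https://etcd1:2379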