Displaying 20 results from an estimated 10000 matches similar to: "CTDB and LDAP: anyone?"
2008 Feb 07
0
CTDB and LDAP
Hi there,
I am looking into using CTDB between a PDC and a BDC. I assume this is
possible!
However, I have a few questions:
1: Do I have to use tdb2 as an idmap backend? Can I not stay with ldap?
(from the CTDB docs:
A clustered Samba install must set some specific configuration
parameters
clustering = yes
idmap backend = tdb2
private dir = /a/directory/on/your/cluster/filesystem
It is
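
A minimal sketch of the kind of smb.conf this question is about, combining the clustered-Samba parameters quoted from the CTDB docs with an LDAP passdb backend (the LDAP URI, suffix and path below are placeholders, not from the thread):

[global]
    clustering = yes
    # must live on the cluster filesystem, as the CTDB docs say
    private dir = /clusterfs/samba/private
    # account data can stay in LDAP regardless of the idmap choice
    passdb backend = ldapsam:ldap://ldap.example.com
    ldap suffix = dc=example,dc=com
    # the docs quoted above require tdb2 here; whether ldap can be used
    # instead is exactly what is being asked
    idmap backend = tdb2
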
2014 Jan 30
1
Glusterfs/CTDB/Samba file locking problem
Hi guys,
I am trying to set up two identically installed, up-to-date CentOS6 machines with GlusterFS/CTDB/Samba.
I have set up GlusterFS and it works. I have set up CTDB from CentOS and it seems to work too.
Samba is AD-integrated and mostly works.
The main problem is that file locking does not seem to work between the machines at all. If two Win7 clients try to open a document
from the same Samba server
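
File-lock coherence between nodes depends on the cluster filesystem providing coherent fcntl byte-range locks, and ctdb ships a small ping_pong tool that is commonly used to verify this. A sketch (the path is a placeholder; run it on both nodes at once, with number-of-nodes + 1 as the lock count):

# on node 1 and node 2 simultaneously, against the same file on the gluster mount
ping_pong /gluster/share/ping_pong.dat 3
# coherent, reasonably fast lock rates on both nodes suggest locking works;
# a hang or a huge slowdown points at the filesystem rather than Samba
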
2008 Dec 25
1
CTDB + Samba + Winbind + ActiveDirectory
Hi All,
Are there any special CTDB/SMB configuration settings or dependencies for managing
Winbind across CTDB-managed servers that authenticate via Active
Directory (AD)? For example, which IDMAP backend should Winbind use: RID
vs. AD? Or should Winbind be tied to a primary CTDB node, with the other nodes
authenticating against AD through that primary node as a proxy?
/etc/sysconfig/ctdb on all nodes is as follows:
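
The excerpt cuts off before the actual file, but as a generic sketch of the two pieces usually involved (the domain name, ranges and values below are placeholders; "tdb2" was the clustered default of that era, plain "tdb" on current releases):

# /etc/sysconfig/ctdb (older CTDB releases): let CTDB start and monitor the daemons
CTDB_MANAGES_SAMBA=yes
CTDB_MANAGES_WINBIND=yes

# smb.conf, in the later "idmap config" syntax; RID vs. AD is a per-domain choice
[global]
    security = ads
    clustering = yes
    idmap config * : backend = tdb2
    idmap config * : range = 1000000-1999999
    idmap config EXAMPLE : backend = rid
    idmap config EXAMPLE : range = 10000-999999
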
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller
<ulrich.sibiller at atos.net> wrote:
> Martin Schwenke schrieb am 15.02.2023 23:23:
> > OK, this part looks kind-of good. It would be interesting to know how
> > long the entire failover process is taking.
>
> What exactly would you define as the begin and end of the failover?
From "Takeover run
2020 Aug 06
2
CTDB question about "shared file system"
Very helpful. Thank you, Martin.
I'd like to share the information below with you and solicit your fine
feedback :-)
I provide additional detail in case there is something else you feel
strongly we should consider.
We made some changes last night, let me share those with you.
The error that is repeating itself and causing these failures is:
Takeover run starting
RELEASE_IP 10.200.1.230
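
When RELEASE_IP keeps reappearing in failed takeover runs, the usual first step is to see where each public address currently lives and whether an event script is failing. A sketch of the relevant commands (exact event syntax varies across ctdb releases):

ctdb ip          # which node currently hosts each public address
ctdb status      # node health, banned/unhealthy flags
ctdb scriptstatus                   # older releases
ctdb event status legacy monitor    # newer releases
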
2023 Jan 26
1
ctdb samba and winbind event problem
Hi to all,
I have a CTDB cluster with two nodes (both Ubuntu with
SerNet packages 4.17.4). Now I want to replace one of the nodes. The
first step was to bring a new node into the CTDB cluster, this time a
Debian 11 machine but with the same SerNet packages (4.17.4). I added the new
node to /etc/ctdb/nodes at the end of the list, and the virtual IP to
/etc/ctdb/public_addresses, also at the end
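
A sketch of what the two files look like with the new node appended (addresses are placeholders; the nodes file must be identical and in the same order on every node, and a running cluster is told about the change with ctdb reloadnodes, whose exact procedure varies by version):

# /etc/ctdb/nodes -- private addresses, one per line, new node last
10.0.0.1
10.0.0.2
10.0.0.3

# /etc/ctdb/public_addresses -- floating IP, netmask and interface, new IP last
192.168.10.1/24 eth0
192.168.10.2/24 eth0
192.168.10.3/24 eth0

ctdb reloadnodes
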
2020 Aug 08
1
CTDB question about "shared file system"
On Sat, Aug 8, 2020 at 2:52 AM Martin Schwenke <martin at meltin.net> wrote:
> Hi Bob,
>
> On Thu, 6 Aug 2020 06:55:31 -0400, Robert Buck <robert.buck at som.com>
> wrote:
>
> > And so we've been rereading the doc on the public addresses file. So it
> may
> > be we have gravely misunderstood the *public_addresses* file, we never
> read
> >
2023 Feb 13
1
ctdb tcp kill: remaining connections
Hello,
we are using ctdb 4.15.5 on RHEL8 (Kernel 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 clients. Whenever an ip takeover happens most clients report something like this:
[Mon Feb 13 12:21:22 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:21:28 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:22:31 2023] nfs: server
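
During an IP takeover, CTDB resets ("tickles") the TCP connections it knows about so that clients reconnect to the new node; hanging NFS mounts like the above are often connections that were never tracked or never reset. A sketch of what is worth checking (the address is the one from the report; the log path varies by build):

ctdb gettickles x.x.253.252      # connections CTDB tracks (and will reset) for this public IP
ctdb ip                          # confirm which node now hosts the address
tail -f /var/log/ctdb/ctdb.log   # watch the releaseip/takeip events during a takeover
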
2019 May 16
2
CTDB node stucks in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody,
I just updated my ctdb node from Samba version
4.9.4-SerNet-Debian-11.stretch to Samba version
4.9.8-SerNet-Debian-13.stretch.
After restarting the sernet-samba-ctdbd service the node doesn't come
back and remains in state "UNHEALTHY".
I can find that in the syslog:
May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not
listening on TCP port 445
May 16
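
The 50.samba monitor event only checks that smbd is listening on the ports configured in smb.conf, so the node stays UNHEALTHY until that check passes. A sketch of how to narrow it down on the affected node:

ss -tlnp | grep ':445'                        # is smbd listening at all?
testparm -s --parameter-name="smb ports"      # which ports the monitor expects (default "445 139")
ctdb event status legacy monitor              # exact syntax varies across 4.x; older builds: ctdb scriptstatus
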
2011 Apr 05
1
samba ctdb clustering with ldap backend?
Dear all,
I have two Samba servers authenticating against LDAP, so I use:
idmap backend = ldap:ldap://127.0.0.1
Is it possible to set up ctdb to run with an LDAP backend?
I know ctdb uses:
idmap backend = tdb2
Any suggestions?
Greetings
Daniel
-----------------------------------------------
EDV Daniel Müller
Leitung EDV
Tropenklinik Paul-Lechler-Krankenhaus
Paul-Lechler-Str. 24
72076 Tübingen
Tel.:
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli,
[Sorry for slow response, life is busy...]
On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> we are using ctdb 4.15.5 on RHEL8 (Kernel
> 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8
> clients. Whenever an ip takeover happens most clients report
> something like this:
> [Mon Feb 13 12:21:22
2020 Feb 10
2
ctdb failover interrupts Windows file copy
Hello,
We have set up ctdb + Samba v4.11.1 and are testing with a Windows client;
failover works mostly OK. However, when using a regular Windows file copy,
the copy operation is interrupted during IP takeover.
Is there any solution to make the failover transparent to a Windows file
copy?
I understand that SMB2 durable handles should not be used in a clustered
setup, and SMB3 persistent handles +
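
For context, a sketch of the share-level settings the durable-handles discussion refers to (the path is a placeholder). Durable handles are per-node state, so even with these set they do not survive an IP takeover to another node, which matches the interruption described; SMB3 persistent handles were not implemented in Samba of that vintage:

[share]
    path = /clusterfs/share
    durable handles = yes
    kernel oplocks = no
    kernel share modes = no
    posix locking = no
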
2014 May 30
1
SAMBA & CTDB Configuration Issue using LDAP as back end.
I have set up a Samba 3 cluster (running on SLES-11sp3) with 2 nodes. CTDB
is managing SAMBA, using a floating IP between the two hosts. Everything is
running fine as far as the cluster fail-over, etc. However, the SAMBA
cluster server(s) are supposed to be member servers, connecting to an
existing PDC which is using LDAP as the password backend.
CTDB is inserting the following lines into the
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha , unplug the network cable of serving node, takes around ~20 mins for IO to resume
Hi all
We did some failover/failback tests on 2 nodes (A and B) with the architecture 'glusterfs + ctdb (public address) + nfs-ganesha':
1st:
During a write, unplug the network cable of serving node A
-> the NFS client took a few seconds to recover and continue writing.
After some minutes, plug the network cable of serving node A back in
-> the NFS client also took a few seconds to recover
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke schrieb am 15.02.2023 23:23:
> Hi Uli,
>
> [Sorry for slow response, life is busy...]
Thanks for answering anyway!
> On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
> OK, this part looks kind-of good. It would be interesting to know how
> long the entire failover process is taking.
What exactly would you define as the begin and end of the
2016 Jul 04
2
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 21:47, Volker Lendecke wrote:
> On Sun, Jul 03, 2016 at 08:42:36PM +0100, Alex Crow wrote:
>> I've only put the "private dir" onto MooseFS, as instructed in the CTDB
>> docs.
> Can you quote these docs, so that we can correct them?
>
>
Also, Gluster has the same tip here:
2016 Jul 04
1
Winbind process stuck at 100% after changing use_mmap to no
On 03/07/16 21:47, Volker Lendecke wrote:
> On Sun, Jul 03, 2016 at 08:42:36PM +0100, Alex Crow wrote:
>> I've only put the "private dir" onto MooseFS, as instructed in the CTDB
>> docs.
> Can you quote these docs, so that we can correct them?
Here, under the Lustre section. I applied the same config as it's a
similar FS (i.e. distributed with a central metadata
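
Assuming "use_mmap" refers to the smb.conf parameter "use mmap", the configuration being discussed looks roughly like the sketch below (the path is a placeholder). It mirrors the tip in the quoted docs rather than a recommendation; this very thread reports winbind spinning at 100% CPU with it:

[global]
    # private dir placed on the MooseFS mount, per the quoted CTDB/Gluster docs
    private dir = /moosefs/samba/private
    # disable mmap of tdb files, the change the thread is about
    use mmap = no
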
2015 Oct 16
2
Problems with TDBs on CTDB-managed Samba instance
Hi All,
My site has two separate clustered Samba instances (managed by two independent CTDB instances) running over GPFS. In the last couple of weeks, we have seen a recurring issue that causes the `smbd` process in *one* of these instances to become unresponsive (as seen by CTDB), which results in flapping of CTDB and multiple IP takeover runs.
The symptoms that we observe are:
1) Samba
2023 Nov 26
1
CTDB: some problems about disconnecting the private network of ctdb leader nodes
My ctdb version is 4.17.7
Hello, everyone.
My ctdb cluster configuration is correct and the cluster was healthy before the test.
My cluster has three nodes, namely host-192-168-34-164, host-192-168-34-165, and host-192-168-34-166, and node host-192-168-34-164 was the leader before the test.
I conducted network oscillation (flapping) tests on node host-192-168-34-164: I brought down the interface of the private
2014 Jul 11
1
ctdb PARTIALLYONLINE
drbd ctdb ocfs2
Hi
Everything seems OK apart from the IP takeover.
public_addresses
192.168.1.80/24 enp0s3
192.168.1.81/24 enp0s3
ctdb status
Number of nodes:2
pnn:0 192.168.1.10 PARTIALLYONLINE
pnn:1 192.168.1.11 PARTIALLYONLINE (THIS NODE)
Generation:2090727463
Size:2
hash:0 lmaster:0
hash:1 lmaster:1
Recovery mode:NORMAL (0)
Recovery master:1
but we are getting:
2014/07/11
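
PARTIALLYONLINE normally means CTDB believes at least one configured public interface on the node has no link, so some public addresses cannot be hosted there. A sketch of the usual checks (the log path varies by release):

ctdb ifaces             # link state of each public interface as CTDB sees it
ip link show enp0s3     # actual kernel link state of the configured interface
tail /var/log/log.ctdb  # the 10.interface monitor event names the failing interface
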