similar to: CTDB fails to set routes

Displaying 20 results from an estimated 7000 matches similar to: "CTDB fails to set routes"

2014 Jul 21
0
CTDB no secrets.tdb created
Hi, 2-node ctdb 2.5.3 on Ubuntu 14.04; on both nodes apparmor is torn down and the firewall is stopped (dead). The IP takeover is working fine between the nodes: Jul 21 14:12:03 uc1 ctdbd: recoverd:Trigger takeoverrun Jul 21 14:12:03 uc1 ctdbd: recoverd:Takeover run starting Jul 21 14:12:04 uc1 ctdbd: Takeover of IP 192.168.1.81/24 on interface bond0 Jul 21 14:12:04 uc1 ctdbd: Takeover of IP 192.168.1.80/24 on
2020 Aug 08
1
CTDB question about "shared file system"
On Sat, Aug 8, 2020 at 2:52 AM Martin Schwenke <martin at meltin.net> wrote: > Hi Bob, > > On Thu, 6 Aug 2020 06:55:31 -0400, Robert Buck <robert.buck at som.com> > wrote: > > > And so we've been rereading the doc on the public addresses file. So it > may > > be we have gravely misunderstood the *public_addresses* file, we never > read > >
2023 Jan 26
1
ctdb samba and winbind event problem
Hi Stefan, On Thu, 26 Jan 2023 16:35:59 +0100, Stefan Kania via samba <samba at lists.samba.org> wrote: > I'm having a CTDB-Cluster with two nodes (both Ubuntu with > Sernet-packages 4.17.4). Now I want to replace one of the nodes. The > first step was to bring a new node to the CTDB-Cluster. This time a > Debian 11 but with the same sernet-packages (4.17.4). Adding the
2016 Nov 10
1
CTDB IP takeover/failover tunables - do you use them?
I'm currently hacking on CTDB's IP takeover/failover code. For Samba 4.6, I would like to rationalise the IP takeover-related tunable parameters. I would like to know if there are any users who set these tunables to non-default values. The tunables in question are: DisableIPFailover (default: 0): when set to non-zero, ctdb will not perform failover or
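For reference, here is a minimal sketch of how such a tunable is usually inspected and changed, assuming a reasonably recent ctdb that reads /etc/ctdb/ctdb.tunables (that path and the persistence step are assumptions, not something stated in the thread):

    ctdb getvar DisableIPFailover                            # show the current value on the running daemon
    ctdb setvar DisableIPFailover 1                          # change it at runtime only
    echo "DisableIPFailover=1" >> /etc/ctdb/ctdb.tunables    # persist across restarts (ctdb 4.9 and later)

ctdb listvars prints every tunable with its current value, which is the quickest way to see whether anything on a node deviates from the defaults.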
2023 Jan 26
1
ctdb samba and winbind event problem
Hi to all, I'm having a CTDB-Cluster with two nodes (both Ubuntu with Sernet-packages 4.17.4). Now I want to replace one of the nodes. The first step was to bring a new node to the CTDB-Cluster, this time a Debian 11 but with the same sernet-packages (4.17.4). I added the new node to /etc/ctdb/nodes at the end of the list, and the virtual IP to /etc/ctdb/public_addresses, also at the end
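As a sketch of what those two files typically look like once the new node has been appended (the addresses and interface name below are hypothetical):

    # /etc/ctdb/nodes -- one private address per line, identical content and order on every node
    10.0.0.1
    10.0.0.2
    10.0.0.3

    # /etc/ctdb/public_addresses -- "address/mask interface"; these addresses float between nodes
    192.168.1.80/24 eth1
    192.168.1.81/24 eth1
    192.168.1.82/24 eth1

Once the nodes file is identical everywhere, the running cluster is normally told about the change with ctdb reloadnodes; changed public addresses can be picked up with ctdb reloadips.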
2020 Aug 08
0
CTDB question about "shared file system"
Hi Bob, On Thu, 6 Aug 2020 06:55:31 -0400, Robert Buck <robert.buck at som.com> wrote: > And so we've been rereading the doc on the public addresses file. So it may > be we have gravely misunderstood the *public_addresses* file, we never read > that part of the documentation carefully. The *nodes* file made perfect > sense, and the point we missed is that CTDB is using
2019 May 16
0
CTDB node stuck in "ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi Benedikt, On Thu, 16 May 2019 10:32:51 +0200, Benedikt Kaleß via samba <samba at lists.samba.org> wrote: > Hi everybody, > > I just updated my ctdb node from Samba version > 4.9.4-SerNet-Debian-11.stretch to Samba version > 4.9.8-SerNet-Debian-13.stretch. > > After restarting the sernet-samba-ctdbd service the node doesn't come > back and remains in state
2020 Aug 06
2
CTDB question about "shared file system"
Very helpful. Thank you, Martin. I'd like to share the information below with you and solicit your fine feedback :-) I provide additional detail in case there is something else you feel strongly we should consider. We made some changes last night, let me share those with you. The error that is repeating itself and causing these failures is: Takeover run starting RELEASE_IP 10.200.1.230
2017 Nov 06
0
ctdb vacuum timeouts and record locks
On Thu, 2 Nov 2017 12:17:56 -0700, Computerisms Corporation via samba <samba at lists.samba.org> wrote: > hm, I stand corrected on the problem-solved statement below. IP addresses > are simply not cooperating on the 2nd node. > > root at vault1:~# ctdb ip > Public IPs on node 0 > 192.168.120.90 0 > 192.168.120.91 0 > 192.168.120.92 0 > 192.168.120.93 0 > >
2018 May 07
2
CTDB Path
Hello, I'm still trying to find out what the right path for ctdb.conf is (Ubuntu 18.04, Samba was compiled from source!!). When I try to start CTDB without any config file, my log in /usr/local/samba/var/log/log.ctdb shows me: 2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file "/usr/local/samba/etc/ctdb/nodes" 2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
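Judging from that log line, a source-built ctdbd resolves its configuration relative to the install prefix, so a minimal sketch (assuming the default /usr/local/samba prefix shown in the log; the address is hypothetical) would be:

    mkdir -p /usr/local/samba/etc/ctdb
    echo "10.0.0.1" > /usr/local/samba/etc/ctdb/nodes    # this host's private cluster address
    # the daemon's config file (ctdb.conf, or ctdbd.conf on older releases) is expected in the same directory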
2019 May 16
2
CTDB node stuck in "ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody, I just updated my ctdb node from Samba version 4.9.4-SerNet-Debian-11.stretch to Samba version 4.9.8-SerNet-Debian-13.stretch. After restarting the sernet-samba-ctdbd service the node doesn't come back and remains in state "UNHEALTHY". I can find that in the syslog: May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445 May 16
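A sketch of the usual first checks on the unhealthy node (assuming these commands are available there):

    ss -tln | grep ':445'      # is smbd actually listening on the SMB port?
    ctdb status                # node states as ctdb sees them
    ctdb scriptstatus          # which monitor event script flagged the node unhealthy

If smbd really is listening but 50.samba still complains, the script output above usually shows what the check is actually testing.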
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli, [Sorry for slow response, life is busy...] On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba <samba at lists.samba.org> wrote: > we are using ctdb 4.15.5 on RHEL8 (Kernel > 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 > clients. Whenever an ip takeover happens most clients report > something like this: > [Mon Feb 13 12:21:22
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller <ulrich.sibiller at atos.net> wrote: > Martin Schwenke wrote on 15.02.2023 23:23: > > OK, this part looks kind-of good. It would be interesting to know how > > long the entire failover process is taking. > > What exactly would you define as the beginning and end of the failover? From "Takeover run
2018 Sep 05
1
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
A 3-node ctdb cluster is running. When one of the 3 nodes is powered down, lots of logs are written to log.ctdb. node1: repeated logs: 2018/09/04 04:35:06.414369 ctdbd[10129]: Recovery has started 2018/09/04 04:35:06.414944 ctdbd[10129]: connect() failed, errno=111 2018/09/04 04:35:06.415076 ctdbd[10129]: Unable to run startrecovery event node2: repeated logs: 2018/09/04 04:35:09.412368
2008 Feb 12
1
CTDB and LDAP: anyone?
Hi there, I am looking into using CTDB between a PDC and a BDC. I assume this is possible! However I have a few questions: 1: Do I have to use tdb2 as an idmap backend? Can I not stay with ldap? (from the CTDB docs: A clustered Samba install must set some specific configuration parameters: clustering = yes; idmap backend = tdb2; private dir = /a/directory/on/your/cluster/filesystem. It is
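Laid out as an smb.conf fragment, the parameters quoted from the CTDB docs of that era look roughly like this (a sketch of the old advice, not a current recommendation):

    [global]
        clustering = yes
        idmap backend = tdb2
        private dir = /a/directory/on/your/cluster/filesystem

Current Samba releases configure idmap per domain with "idmap config * : backend = ..." rather than the single idmap backend parameter, so the tdb2 part of the quote is dated.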
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha, unplugging the network cable of the serving node takes ~20 mins for IO to resume
Hi all, we did some failover/failback tests on 2 nodes (A and B) with the architecture 'glusterfs + ctdb(public address) + nfs-ganesha'. 1st: During a write, unplug the network cable of serving node A ->NFS Client took a few seconds to recover and continue writing. After some minutes, plug the network cable of serving node A ->NFS Client also took a few seconds to recover
2008 Feb 07
0
CTDB and LDAP
Hi there, I am looking into using CTDB between a PDC and a BDC. I assume this is possible! However I have a few questions: 1: Do I have to use tdb2 as an idmap backend? Can I not stay with ldap? (from the CTDB docs: A clustered Samba install must set some specific configuration parameters: clustering = yes; idmap backend = tdb2; private dir = /a/directory/on/your/cluster/filesystem. It is
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke wrote on 15.02.2023 23:23: > Hi Uli, > > [Sorry for slow response, life is busy...] Thanks for answering anyway! > On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba > OK, this part looks kind-of good. It would be interesting to know how > long the entire failover process is taking. What exactly would you define as the beginning and end of the
2023 Feb 13
1
ctdb tcp kill: remaining connections
Hello, we are using ctdb 4.15.5 on RHEL8 (Kernel 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 clients. Whenever an IP takeover happens, most clients report something like this: [Mon Feb 13 12:21:22 2023] nfs: server x.x.253.252 not responding, still trying [Mon Feb 13 12:21:28 2023] nfs: server x.x.253.252 not responding, still trying [Mon Feb 13 12:22:31 2023] nfs: server
2023 Nov 26
1
CTDB: some problems when disconnecting the private network of the ctdb leader node
My ctdb version is 4.17.7. Hello, everyone. My ctdb cluster configuration is correct and the cluster is healthy before the operation. My cluster has three nodes, namely host-192-168-34-164, host-192-168-34-165, and host-192-168-34-166, and node host-192-168-34-164 is the leader before the operation. I conducted network oscillation testing on node host-192-168-34-164: I brought down the interface of the private