similar to: ctdb split brain nodes doesn't see each other

Displaying 20 results from an estimated 600 matches similar to: "ctdb split brain nodes doesn't see each other"

2015 May 19
0
ctdb_client.c control timed out - banning nodes
Hello, we are using CTDB / Samba to serve a number of Windows users, at this point around 1200. We have a 4-node CTDB setup. CTDB version - ctdb-1.0.114.7-1. Samba version - sernet-samba-4.1.16-10. In recent months we have seen a big problem when one of the CTDB nodes is stopped or disconnected, either manually or as the result of a problem. On some occasions, all other nodes get banned if a node
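For anyone triaging bans like this, a few read-only checks usually show whether the remaining nodes still see each other; this is a generic sketch, not specific to the versions above (onnode assumes passwordless ssh between the nodes):
  ctdb status              # per-node state: OK / UNHEALTHY / DISCONNECTED / BANNED
  ctdb listnodes           # the private addresses CTDB expects to reach
  onnode all ctdb status   # run the same check from every node's point of view
  ctdb unban               # on a banned node, once the underlying fault is fixed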
2014 Feb 26
0
CTDB Debug Help
Hello, I've got a two-node CTDB/Samba cluster, and I'm having trouble adding a node back after having to do an OS reload on it. The servers are running CTDB 2.5.1 and Samba 4.1.4 on AIX 7.1 TL2. The Samba CTDB databases and the Samba service work fine from the node that was not reloaded. The rebuilt node is failing to re-add itself to the cluster. I'm looking for
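A common cause when a rebuilt node cannot rejoin is a nodes file that no longer matches the surviving node; a rough sketch, with illustrative paths and addresses (an AIX build may install the file elsewhere):
  # /etc/ctdb/nodes -- must be identical on every node; line order defines the PNN
  10.0.0.1
  10.0.0.2
  # then compare what each running daemon actually loaded
  onnode all ctdb listnodes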
2013 Apr 09
0
Failed to start CTDB first time after install
Hi, I am setting up a two-node Samba cluster with CTDB in AWS, in two different subnets. All IP ports are open between these two subnets. I am initially forming the Samba cluster with one node, and will add the second node after CTDB starts up. I am not using public_addresses for CTDB because AWS does not support VIPs. I am using 64-bit Amazon Linux with two NICs defined, eth0 as the
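A minimal sketch of a no-VIP configuration from that era, assuming the usual sysconfig-style settings (paths and the lock location are illustrative):
  # /etc/sysconfig/ctdb
  CTDB_NODES=/etc/ctdb/nodes                   # two private addresses, one per line
  CTDB_RECOVERY_LOCK=/shared/ctdb/.reclock     # must live on storage both nodes can reach
  # CTDB_PUBLIC_ADDRESSES is simply left unset when no VIPs are used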
2014 Mar 31
0
ctdb issue: existing header for db_id 0xf2a58948 has larger RSN 1 than new RSN 1 in ctdb_persistent_store
Hello, I found the following email on the internet, and I have the same problem. Can you share your information about this issue? [Samba] ctdb issue: existing header for db_id 0xf2a58948 has larger RSN 1 than new RSN 1 in ctdb_persistent_store Nate Hardt nate at scalecomputing.com
2019 May 16
0
CTDB node gets stuck in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi Benedikt, On Thu, 16 May 2019 10:32:51 +0200, Benedikt Kaleß via samba <samba at lists.samba.org> wrote: > Hi everybody, > > I just updated my ctdb node from Samba version > 4.9.4-SerNet-Debian-11.stretch to Samba version > 4.9.8-SerNet-Debian-13.stretch. > > After restarting the sernet-samba-ctdbd service the node doesn't come > back and remains in state
2019 May 16
2
CTDB node gets stuck in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody, I just updated my ctdb node from Samba version 4.9.4-SerNet-Debian-11.stretch to Samba version 4.9.8-SerNet-Debian-13.stretch. After restarting the sernet-samba-ctdbd service the node doesn't come back and remains in state "UNHEALTHY". I can find that in the syslog: May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445 May 16
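The 50.samba monitor event only reports what it sees on the wire, so the first checks are usually outside CTDB; a generic sketch (the sernet-samba-smbd service name is an assumption based on the SerNet packaging):
  ss -tln | grep ':445'                 # is any smbd actually listening?
  systemctl status sernet-samba-smbd    # is smbd supposed to be running, and is it?
  testparm -s | grep -i clustering      # smb.conf should still say clustering = yes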
2014 Oct 29
1
smbstatus hang with CTDB 2.5.4 and Samba 4.1.13
Can anyone help with some pointers to debug a problem with Samba and CTDB where smbstatus traversing the connections tdb hangs? I've got a new two-node cluster with Samba and CTDB on AIX. If I run smbstatus when the server has a lot of user activity, it hangs and the node it was run on gets banned. I see the following in the ctdb log: 2014/10/29 11:12:45.374580 [3932342]:
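Before digging into smbstatus itself, it is worth confirming the cluster and the database look healthy; read-only commands, nothing version-specific apart from dbstatistics (skip it if this ctdb build lacks it):
  ctdb status                          # all nodes OK, no recovery in progress
  ctdb getdbmap                        # connections.tdb should be listed on both nodes
  ctdb dbstatistics connections.tdb    # lock waits / hot keys, if the command is available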
2018 Sep 05
1
[ctdb]Unable to run startrecovery event(if mail content is encrypted, please see the attached file)
A 3-node ctdb cluster is running. When one of the 3 nodes is powered down, lots of log entries are written to log.ctdb. node1: repeated logs: 2018/09/04 04:35:06.414369 ctdbd[10129]: Recovery has started 2018/09/04 04:35:06.414944 ctdbd[10129]: connect() failed, errno=111 2018/09/04 04:35:06.415076 ctdbd[10129]: Unable to run startrecovery event node2: repeated logs: 2018/09/04 04:35:09.412368
2012 Apr 17
0
CTDB panics when vacuuming serverid.tdb
CTDB Samba Team, I have a two-node cluster successfully running a GFS2 filesystem. I compiled ctdb ver 1.12 with Samba 3.6.3 for 64-bit systems. Running on RHEL 5.7. I was able to add the cluster to the domain but after I restarted CTDB, it panics right after doing a vacuum of the serverid.tdb database. The lock file is on the GFS FS so both nodes can access it. Any ideas as to what
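Crashes around TDB vacuuming on a cluster filesystem are often really fcntl-locking problems underneath; the usual sanity check for the filesystem is the ping_pong tool shipped with ctdb (the file name below is illustrative):
  # run simultaneously on both nodes, with the lock count = number of nodes + 1
  ping_pong /gfs2/ctdb-locktest.dat 3
  # lock rates should stay plausible and drop when the second node joins;
  # wildly inconsistent numbers point at the filesystem's lock manager rather than CTDB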
2018 Feb 26
0
Re: [ctdb] Unable to take recovery lock - contention
On Monday, 26 February 2018, 17:26:06 CET, zhu.shangzhong--- via samba wrote: Decoded base64 encoded body with some Chinese characters: ------------------Original mail------------------ From: 朱尚忠10137461 To: samba at lists.samba.org <samba at lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention While ctdb is starting, the "Unable to take recovery lock
2018 Feb 26
0
[ctdb] Unable to take recovery lock - contention
While ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! Here are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
While ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! Here are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------Original mail------------------ From: 朱尚忠10137461 To: samba@lists.samba.org <samba@lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention While ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously. In which cases is the "unable to take lock" error output? Thanks! The
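Contention here normally means another node, or a stale lock, already holds the recovery lock on the shared filesystem; a sketch of what to verify, using the pre-4.9 variable name and an illustrative path:
  # /etc/ctdb/ctdbd.conf (location varies by packaging on 4.6-era builds)
  CTDB_RECOVERY_LOCK=/cluster/ctdb/.reclock
  # the path must be identical on all nodes and must sit on the shared cluster filesystem;
  # its fcntl locking can be sanity-checked with the ping_pong test mentioned further up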
2018 May 07
2
CTDB Path
Hello, I'm still trying to find out the right path for ctdb.conf (Ubuntu 18.04, Samba was compiled from source!). When I try to start CTDB without any config file, my log in /usr/local/samba/var/log/log.ctdb shows me: 2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file "/usr/local/samba/etc/ctdb/nodes" 2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
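With a source build under /usr/local/samba the daemon looks for its support files under that prefix, exactly as the log says; a minimal sketch of the one file it is complaining about (addresses illustrative):
  # /usr/local/samba/etc/ctdb/nodes -- one private address per line, identical on every node
  192.168.10.1
  192.168.10.2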
2012 May 24
0
Is it possible to use quorum for CTDB to prevent split-brain and removing lockfile in the cluster file system
Hello list, We know that CTDB uses a lock file in the cluster file system to prevent split-brain. This is a really good design when all nodes in the cluster can mount the cluster file system (e.g. GPFS/GFS/GlusterFS), and CTDB works happily under this assumption. However, when split-brain happens, a disconnected private network usually violates this assumption. For example, we have four nodes (A, B,
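For reference, later CTDB releases address exactly this case by letting the recovery lock be provided by an external mutex helper instead of a file on the cluster filesystem; a sketch using the 4.9+ syntax, where the helper path is a placeholder (ctdb ships a Ceph RADOS helper as one real example):
  # ctdb.conf, CTDB 4.9 and later -- the "!" prefix hands arbitration to a helper program
  [cluster]
      recovery lock = !/usr/local/bin/my-reclock-helper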
2012 Jun 28
1
CTDB and IPv6
I am attempting to enable IPv6 on our CTDB setup. I have placed the IPv6 address in the public_addresses file with the correct prefix. The addresses never come up, and I receive these messages in the log: 2012/06/28 10:54:43.313227 [ 1820]: Async operation failed with ret=0 res=1 opcode=0 2012/06/28 10:54:43.313918 [ 1820]: Async operation failed with ret=0 res=1 opcode=0 2012/06/28
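For comparison, the expected public_addresses format is one address per line with a prefix length and an interface, and IPv6 entries follow the same pattern (the addresses below are from the documentation ranges):
  # /etc/ctdb/public_addresses
  192.0.2.10/24 eth0
  2001:db8::10/64 eth0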
2018 Sep 05
0
[ctdb]Unable to run startrecovery event(if mail contentis encrypted, please see the attached file)
Thanks Martin! We are using ctdb 4.6.10. "Are you able to recreate this every time? Sometimes? Rarely?" Rarely. "Note that you're referring to nodes 1, 2, 3 while CTDB numbers the nodes 0, 1, 2. In fact, the situation is a little more confused than this:" That is my mistake. CTDB numbers the nodes 0, 1, 2. # ctdb status Number of nodes:3 pnn:0 10.231.8.70 OK pnn:1 10.231.8.68 OK
2018 Sep 05
0
[ctdb]Unable to run startrecovery event(if mail contentis encrypted, please see the attached file)
Thanks Martin! We are using ctdb 4.6.10. "Are you able to recreate this every time? Sometimes? Rarely?" Rarely. "Note that you're referring to nodes 1, 2, 3 while CTDB numbers the nodes 0, 1, 2. In fact, the situation is a little more confused than this:" That is my mistake. CTDB numbers the nodes 0, 1, 2. # ctdb status Number of nodes:3 pnn:0 10.231.8.67 OK pnn:1 10.231.8.65 OK
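The PNN-to-address mapping comes straight from the order of the nodes file, counting from 0, which is why the numbering in a report and in ctdb status can easily disagree; a sketch with illustrative addresses:
  # /etc/ctdb/nodes (line order defines the PNN)
  192.0.2.1    # becomes pnn:0
  192.0.2.2    # becomes pnn:1
  192.0.2.3    # becomes pnn:2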
2019 Aug 23
0
plenty of vacuuuming processes
Yes. Please start by telling us the running OS and Samba version, and post the output of smb.conf. It also looks like: https://bugzilla.samba.org/show_bug.cgi?id=13168 Increase the log levels and post those as well when you answer the above. Greetz, Louis > -----Original message----- > From: samba [mailto:samba-bounces at lists.samba.org] On behalf of > Benedikt Kaleß via samba > Sent:
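Raising the log levels on both sides can be done at runtime; a minimal sketch (the chosen levels are only examples):
  ctdb setdebug INFO          # ctdbd verbosity; confirm with: ctdb getdebug
  smbcontrol smbd debug 3     # bump the running smbd processes
  # or persistently: 'log level = 3' in smb.conf, and the debug level in CTDB's own config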
2018 Sep 06
1
[ctdb]Unable to run startrecovery event
Martin, I have checked more logs. Before ctdb-eventd went away, system memory utilization was very high, almost 100%. Is that related to "Bad talloc magic value - wrong talloc version used/mixed"? 2018/08/14 15:22:57.818762 ctdb-eventd[10131]: 05.system: WARNING: System memory utilization 95% >= threshold 80% 2018/08/14 15:22:57.818800 ctdb-eventd[10131]: 05.system: WARNING: System
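The 80% figure in that warning is only the monitoring threshold of the 05.system event script; as far as I can tell it is tuned via CTDB_MONITOR_MEMORY_USAGE in the event script options, and it produces warnings rather than killing anything, so the talloc error is likely a separate problem (for example memory pressure or mixed talloc libraries):
  # event script options (file location depends on the ctdb version and packaging)
  CTDB_MONITOR_MEMORY_USAGE="85:95"   # warn at 85%, flag the node unhealthy at 95%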