Displaying 20 results from an estimated 52 matches for "recoverd".
2012 May 24
0
Could not find node to take over public address
...isabled. So then I issue
# ctdb enable
on each node. After that, ctdb is not able to assign the public IP
addresses. On the first node I repeatedly get:
> 2012/05/24 14:32:09.408217 [ 6773]: Forced running of eventscripts with argument
> s ipreallocated
> 2012/05/24 14:32:09.442628 [recoverd: 6800]: Public address '10.94.43.67' is not
> assigned and we could serve this ip
> 2012/05/24 14:32:09.442643 [recoverd: 6800]: Public address '10.94.43.66' is not
> assigned and we could serve this ip
> 2012/05/24 14:32:09.442648 [recoverd: 6800]: Public address '...
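When recoverd keeps logging that a public address "is not assigned and we could serve this ip", a quick first step is to compare CTDB's view of the addresses with the configuration and then force a takeover run. This is only a diagnostic sketch using standard ctdb commands; the public_addresses path below is an assumption and depends on the install.

  # Cluster health and which node currently holds each public address
  ctdb status
  ctdb ip

  # Confirm the public_addresses file actually lists the address on this node
  cat /etc/ctdb/public_addresses    # path assumed; adjust to your installation

  # Ask the recovery master to run the IP takeover/reallocation again
  ctdb ipreallocate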
2018 Sep 05
1
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
...wrote to log.ctdb.
node1: repeat logs:
2018/09/04 04:35:06.414369 ctdbd[10129]: Recovery has started
2018/09/04 04:35:06.414944 ctdbd[10129]: connect() failed, errno=111
2018/09/04 04:35:06.415076 ctdbd[10129]: Unable to run startrecovery event
node2: repeat logs:
2018/09/04 04:35:09.412368 ctdb-recoverd[9437]: ../ctdb/server/ctdb_recoverd.c:1267 Starting do_recovery
2018/09/04 04:35:09.412689 ctdb-recoverd[9437]: Already holding recovery lock
2018/09/04 04:35:09.412700 ctdb-recoverd[9437]: ../ctdb/server/ctdb_recoverd.c:1326 Recovery initiated due to problem with node 1
2018/09/04 04:35:09.412974...
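On Linux, errno=111 is ECONNREFUSED, so the connect() failure on node1 most likely means ctdbd cannot reach the local helper/event machinery that runs the startrecovery event, not another cluster node. A hedged first round of checks (all standard ctdb commands; which one applies depends on the release, and the run directory path is an assumption):

  # Event handling changed between releases, so start with the version
  ctdb version

  # Status of the eventscripts / event daemon, depending on the release:
  ctdb scriptstatus                          # older releases
  ctdb event status legacy startrecovery     # newer releases with ctdb-eventd

  # The helper sockets live under the ctdb run directory (path assumed)
  ls -l /var/run/ctdb/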
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
...o "/usr/libexec/ctdb/ctdb_lock_helper"
2018/02/12 19:38:54.579527 ctdbd[6602]: Set runstate to SETUP (2)
2018/02/12 19:38:54.881828 ctdbd[6602]: Keepalive monitoring has been started
2018/02/12 19:38:54.881873 ctdbd[6602]: Set runstate to FIRST_RECOVERY (3)
2018/02/12 19:38:54.882020 ctdb-recoverd[7182]: monitor_cluster starting
2018/02/12 19:38:54.882620 ctdb-recoverd[7182]: Initial recovery master set - forcing election
2018/02/12 19:38:54.882702 ctdbd[6602]: This node (1) is now the recovery master
2018/02/12 19:38:55.882735 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:56.90287...
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
...o "/usr/libexec/ctdb/ctdb_lock_helper"
2018/02/12 19:38:54.579527 ctdbd[6602]: Set runstate to SETUP (2)
2018/02/12 19:38:54.881828 ctdbd[6602]: Keepalive monitoring has been started
2018/02/12 19:38:54.881873 ctdbd[6602]: Set runstate to FIRST_RECOVERY (3)
2018/02/12 19:38:54.882020 ctdb-recoverd[7182]: monitor_cluster starting
2018/02/12 19:38:54.882620 ctdb-recoverd[7182]: Initial recovery master set - forcing election
2018/02/12 19:38:54.882702 ctdbd[6602]: This node (1) is now the recovery master
2018/02/12 19:38:55.882735 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:56.90287...
2018 Feb 26
0
Re: [ctdb] Unable to take recovery lock - contention
...o "/usr/libexec/ctdb/ctdb_lock_helper"
2018/02/12 19:38:54.579527 ctdbd[6602]: Set runstate to SETUP (2)
2018/02/12 19:38:54.881828 ctdbd[6602]: Keepalive monitoring has been started
2018/02/12 19:38:54.881873 ctdbd[6602]: Set runstate to FIRST_RECOVERY (3)
2018/02/12 19:38:54.882020 ctdb-recoverd[7182]: monitor_cluster starting
2018/02/12 19:38:54.882620 ctdb-recoverd[7182]: Initial recovery master set - forcing election
2018/02/12 19:38:54.882702 ctdbd[6602]: This node (1) is now the recovery master
2018/02/12 19:38:55.882735 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:56.90287...
2018 Feb 26
0
[ctdb] Unable to take recovery lock - contention
...o "/usr/libexec/ctdb/ctdb_lock_helper"
2018/02/12 19:38:54.579527 ctdbd[6602]: Set runstate to SETUP (2)
2018/02/12 19:38:54.881828 ctdbd[6602]: Keepalive monitoring has been started
2018/02/12 19:38:54.881873 ctdbd[6602]: Set runstate to FIRST_RECOVERY (3)
2018/02/12 19:38:54.882020 ctdb-recoverd[7182]: monitor_cluster starting
2018/02/12 19:38:54.882620 ctdb-recoverd[7182]: Initial recovery master set - forcing election
2018/02/12 19:38:54.882702 ctdbd[6602]: This node (1) is now the recovery master
2018/02/12 19:38:55.882735 ctdbd[6602]: CTDB_WAIT_UNTIL_RECOVERED
2018/02/12 19:38:56.90287...
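The "Unable to take recovery lock - contention" results above all describe the same situation: another node, or a stale process, already holds an fcntl lock on the recovery lock file on the shared filesystem. A hedged way to inspect this, assuming the lock file path from ctdbd.conf (the /clusterfs path below is a placeholder); ping_pong ships with the Samba/ctdb sources and may need to be built separately:

  # Where does this node think the recovery lock lives?
  ctdb getreclock

  # Who currently holds the lock file open? (lock path is a placeholder)
  onnode -p all "lsof /clusterfs/.ctdb.lock 2>/dev/null"

  # Check that fcntl byte-range locking is coherent across the cluster
  # filesystem; num_locks should be number_of_nodes + 1 (here: 2 nodes -> 3)
  ping_pong /clusterfs/ping_pong.test 3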
2014 Oct 29
1
smbstatus hang with CTDB 2.5.4 and Samba 4.1.13
...isconnect for database:0x6b06a26d
2014/10/29 11:12:45.429592 [3932342]: Recovery daemon ping timeout. Count
: 0
2014/10/29 11:12:45.430655 [3932342]: Handling event took 195 seconds!
2014/10/29 11:12:45.452636 [3932342]: pnn 1 Invalid reqid 220668 in
ctdb_reply_control
2014/10/29 11:12:48.462334 [recoverd:6488266]: server/ctdb_recoverd.c:3990
Remote node:0 has different flags for node 1. It has 0x02 vs our 0x00
2014/10/29 11:12:48.462448 [recoverd:6488266]: Use flags 0x00 from local
recmaster node for cluster update of node 1 flags
2014/10/29 11:12:48.483362 [3932342]: Freeze priority 1
2014/10/29...
2013 Apr 09
0
Failed to start CTDB first time after install
...:486 Eventscript init finished with state 0
2013/04/09 16:10:00.248978 [30575]: Keepalive monitoring has been started
2013/04/09 16:10:00.249024 [30575]: Monitoring has been started
2013/04/09 16:10:00.249057 [30575]: server/eventscript.c:800 Starting eventscript setup
2013/04/09 16:10:00.249415 [recoverd:30648]: monitor_cluster starting
2013/04/09 16:10:00.251621 [30575]: server/ctdb_daemon.c:182 Registered message handler for srvid=17870283321406128128
2013/04/09 16:10:00.251760 [30575]: server/ctdb_daemon.c:182 Registered message handler for srvid=17870564796382838784
2013/04/09 16:10:00.251858 [...
2019 May 16
2
CTDB node stuck in "ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
...samba: samba not
listening on TCP port 445
May 16 11:25:42 ctdb-lbn1 ctdb-eventd[13184]: monitor event failed
May 16 11:25:46 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not
listening on TCP port 445
May 16 11:25:46 ctdb-lbn1 ctdb-eventd[13184]: monitor event failed
May 16 11:25:50 ctdb-lbn1 ctdb-recoverd[13293]:
../ctdb/server/ctdb_client.c:678 control timed out. reqid:2147483471
opcode:80 dstnode:1
May 16 11:25:50 ctdb-lbn1 ctdb-recoverd[13293]:
../ctdb/server/ctdb_client.c:791 ctdb_control_recv failed
May 16 11:25:50 ctdb-lbn1 ctdb-recoverd[13293]: Async operation failed
with state 3, opcode:80
M...
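In this thread the 50.samba monitor event fails because smbd is not reachable on TCP port 445, which keeps the node unhealthy and eventually leads to the recoverd control timeouts. A minimal sketch of the checks the monitor event is effectively performing (generic commands; service names and the exact ctdb subcommand differ per distribution and CTDB release):

  # Is smbd actually running and listening on 445?
  ss -tlnp | grep ':445'

  # CTDB's view of the monitor event, depending on the release:
  ctdb scriptstatus
  ctdb event status legacy monitor

  # Node health as CTDB sees it
  ctdb nodestatus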
2018 May 04
2
CTDB Path
Hello,
at this time i want to install a CTDB Cluster with SAMBA 4.7.7 from SOURCE!
I compiled samba as follow:
|./configure| |--with-cluster-support
||--with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad|
The whole SAMBA enviroment is located in /usr/local/samba/.
CTDB is located in /usr/local/samba/etc/ctdb.
I guess right that the correct path of ctdbd.conf (node file, public
address file
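For a Samba built from source into /usr/local/samba, the CTDB configuration normally lives under that same prefix. Below is a sketch of the expected layout, assuming the 4.7-era ctdbd.conf style from the post; exact variable names and defaults depend on the release, and the recovery lock path is a placeholder.

  # Configuration directory for a /usr/local/samba prefix build
  ls /usr/local/samba/etc/ctdb/

  # Typical files (defaults under the prefix, not hard-coded elsewhere):
  #   /usr/local/samba/etc/ctdb/ctdbd.conf         daemon options
  #   /usr/local/samba/etc/ctdb/nodes              one private IP per line
  #   /usr/local/samba/etc/ctdb/public_addresses   public IPs with interface

  # ctdbd.conf can also point at the files explicitly, e.g.:
  #   CTDB_NODES=/usr/local/samba/etc/ctdb/nodes
  #   CTDB_PUBLIC_ADDRESSES=/usr/local/samba/etc/ctdb/public_addresses
  #   CTDB_RECOVERY_LOCK=/clusterfs/.ctdb.lock     # placeholder path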
2017 Jan 24
2
public ip is assigned to us but not on an interface - error
hi all
I had a working cluster, very basic, standard. I'm not sure
if recent updates broke it.
I see these:
2017/01/24 22:20:05.025164 [recoverd: 3474]: Public IP
'10.5.10.51' is assigned to us but not on an interface
2017/01/24 22:20:05.027571 [recoverd: 3474]: Trigger takeoverrun
2017/01/24 22:20:05.053386 [recoverd: 3474]: Takeover run
starting
2017/01/24 22:20:05.106044 [ 3309]: Takeover of IP
10.5.10.51/28 on interface eth0...
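The "assigned to us but not on an interface" message means the recovery master believes this node holds the address while the address is missing from the kernel, which is why recoverd triggers a takeover run. A hedged comparison of the two views, reusing the address and interface from the log above:

  # CTDB's view: which node is supposed to hold each public IP
  ctdb ip

  # Kernel's view: is 10.5.10.51 actually configured on eth0?
  ip addr show dev eth0 | grep 10.5.10.51

  # Force the recovery master to redo the takeover run
  ctdb ipreallocate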
2019 May 16
0
CTDB node stuck in "ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
...g on TCP port 445
> May 16 11:25:42 ctdb-lbn1 ctdb-eventd[13184]: monitor event failed
> May 16 11:25:46 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not
> listening on TCP port 445
> May 16 11:25:46 ctdb-lbn1 ctdb-eventd[13184]: monitor event failed
> May 16 11:25:50 ctdb-lbn1 ctdb-recoverd[13293]:
> ../ctdb/server/ctdb_client.c:678 control timed out. reqid:2147483471
> opcode:80 dstnode:1
> May 16 11:25:50 ctdb-lbn1 ctdb-recoverd[13293]:
> ../ctdb/server/ctdb_client.c:791 ctdb_control_recv failed
> May 16 11:25:50 ctdb-lbn1 ctdb-recoverd[13293]: Async operation failed...
2018 May 07
2
CTDB Path
...local/samba/libexec/ctdb/ctdb_lock_helper"
2018/05/07 15:31:40.992886 ctdbd[2093]: Set runstate to SETUP (2)
2018/05/07 15:31:41.098136 ctdbd[2093]: Keepalive monitoring has been
started
2018/05/07 15:31:41.098283 ctdbd[2093]: Set runstate to FIRST_RECOVERY (3)
2018/05/07 15:31:41.098653 ctdb-recoverd[2160]: monitor_cluster starting
2018/05/07 15:31:41.102114 ctdb-recoverd[2160]: Initial recovery master
set - forcing election
2018/05/07 15:31:41.102652 ctdbd[2093]: This node (1) is now the
recovery master
2018/05/07 15:31:42.098950 ctdbd[2093]: CTDB_WAIT_UNTIL_RECOVERED
2018/05/07 15:31:43.099...
2020 Aug 08
1
CTDB question about "shared file system"
...k, this is great.
Yes, those log messages were occurring exactly once per second.
Then, after several hours, they stopped, right after these messages appeared in the log:
ctdbd[1220]: 10.206.2.124:4379: node 10.200.1.230:4379 is dead: 0 connected
ctdbd[1220]: Tearing down connection to dead node :0
ctdb-recoverd[1236]: Current recmaster node 0 does not have CAP_RECMASTER,
but we (node 1) have - force an election
ctdbd[1220]: Recovery mode set to ACTIVE
ctdbd[1220]: This node (1) is now the recovery master
ctdb-recoverd[1236]: Election period ended
ctdb-recoverd[1236]: Node:1 was in recovery mode. Start rec...
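The election in the log above is triggered because the old recovery master (node 0) went dead and no longer advertises the RECMASTER capability, so node 1 takes over. On releases that still use the recmaster terminology, the capability and the resulting role can be checked directly; a small sketch with standard ctdb commands:

  # Capabilities of the local node (RECMASTER/LMASTER)
  ctdb getcapabilities

  # Which node is currently the recovery master, plus overall node state
  ctdb recmaster
  ctdb status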
2024 Dec 27
1
ctdb + gluster9 = not working
...L_RECOVERED
2024-12-27T19:47:17.590589+05:00 samba1 ctdbd[25431]:
CTDB_WAIT_UNTIL_RECOVERED
2024-12-27T19:47:18.591106+05:00 samba1 ctdbd[25431]:
CTDB_WAIT_UNTIL_RECOVERED
2024-12-27T19:47:19.592017+05:00 samba1 ctdbd[25431]:
CTDB_WAIT_UNTIL_RECOVERED
2024-12-27T19:47:19.594252+05:00 samba1 ctdb-recoverd[25443]: Leader
broadcast timeout
2024-12-27T19:47:19.594308+05:00 samba1 ctdb-recoverd[25443]: Start
election
2024-12-27T19:47:19.595217+05:00 samba1 ctdb-recoverd[25443]: Attempting
to take cluster lock ("/mnt/gluster/ctdb/.ctdb.lock")
2024-12-27T19:47:19.609195+05:00 samba1 ctdbd[254...
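The repeated leader-broadcast timeouts and re-elections above usually point at the cluster lock on the Gluster volume not behaving like a proper, cluster-coherent fcntl lock. A hedged way to test the mount itself before blaming CTDB: ping_pong ships with the Samba/ctdb sources, the test file path below is a placeholder next to the real lock, and num_locks should be number_of_nodes + 1 (so 3 for a 2-node cluster).

  # Can a lock file be created on the mount at all?
  touch /mnt/gluster/ctdb/.ctdb.lock.test

  # Run simultaneously on every node; lock rates should stay reasonable
  # and the tool should never report data corruption.
  ping_pong -rw /mnt/gluster/ctdb/.ctdb.lock.test 3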
2018 Sep 05
0
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
...ons. When we know what
version you are running then we can say whether it is a known issue or
a new issue.
I have been working on the following issue for most of this week:
> 2018/09/04 04:29:52.465663 ctdbd[10129]: This node (1) is now the recovery master
> 2018/09/04 04:29:55.468771 ctdb-recoverd[11302]: Election period ended
> 2018/09/04 04:29:55.469404 ctdb-recoverd[11302]: Node 2 has changed flags - now 0x8 was 0x0
> 2018/09/04 04:29:55.469475 ctdb-recoverd[11302]: Remote node 2 had flags 0x8, local had 0x0 - updating local
> 2018/09/04 04:29:55.469514 ctdb-recoverd[11302]: ../...
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller
<ulrich.sibiller at atos.net> wrote:
> Martin Schwenke wrote on 15.02.2023 23:23:
> > OK, this part looks kind-of good. It would be interesting to know how
> > long the entire failover process is taking.
>
> What exactly would you define as the beginning and end of the failover?
From "Takeover run
2014 Jul 21
0
CTDB no secrets.tdb created
Hi
2 node ctdb 2.5.3 on Ubuntu 14.04 nodes
AppArmor has been torn down and the firewall stopped dead.
The IP takeover is working fine between the nodes:
Jul 21 14:12:03 uc1 ctdbd: recoverd:Trigger takeoverrun
Jul 21 14:12:03 uc1 ctdbd: recoverd:Takeover run starting
Jul 21 14:12:04 uc1 ctdbd: Takeover of IP 192.168.1.81/24 on interface
bond0
Jul 21 14:12:04 uc1 ctdbd: Takeover of IP 192.168.1.80/24 on interface
bond0
Jul 21 14:12:05 uc1 ctdbd: Monitoring event was cancelled
Jul 21 14...
2018 Sep 06
1
[ctdb] Unable to run startrecovery event
...ons. When we know what
version you are running then we can say whether it is a known issue or
a new issue.
I have been working on the following issue for most of this week:
> 2018/09/04 04:29:52.465663 ctdbd[10129]: This node (1) is now the recovery master
> 2018/09/04 04:29:55.468771 ctdb-recoverd[11302]: Election period ended
> 2018/09/04 04:29:55.469404 ctdb-recoverd[11302]: Node 2 has changed flags - now 0x8 was 0x0
> 2018/09/04 04:29:55.469475 ctdb-recoverd[11302]: Remote node 2 had flags 0x8, local had 0x0 - updating local
> 2018/09/04 04:29:55.469514 ctdb-recoverd[11302]: ../...