search for: do_recoveri

Displaying 14 results from an estimated 14 matches for "do_recoveri".

2018 Feb 26 (2 replies)
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged all the time. In which cases will the "unable to take lock" error be output? Thanks! The following are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
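For readers landing on this thread: "contention" in this message means the recovery lock could not be taken because something else already holds it, typically another node (or a leftover ctdbd) still holding the lock file on shared storage. In the 4.6-era configuration used in these logs, the lock is usually set in ctdbd.conf roughly as below; the lock path is an assumed example, not taken from the thread:

    # /etc/ctdb/ctdbd.conf (CTDB 4.6.x style configuration; path may differ on your build)
    # The lock file must live on a filesystem shared by, and visible to, all nodes
    CTDB_RECOVERY_LOCK=/clusterfs/.ctdb/reclock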
2018 Feb 26 (0 replies)
Re: [ctdb] Unable to take recovery lock - contention
On Monday, 26 February 2018, 17:26:06 CET, zhu.shangzhong--- via samba wrote: Decoded base64-encoded body with some Chinese characters: ------------------Original mail------------------ From: 朱尚忠10137461 To: samba at lists.samba.org <samba at lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention When ctdb is starting, the "Unable to take recovery lock
2018 Feb 26 (2 replies)
Re: [ctdb] Unable to take recovery lock - contention
------------------Original mail------------------ From: 朱尚忠10137461 To: samba@lists.samba.org <samba@lists.samba.org> Date: 2018-02-26 17:10 Subject: [ctdb] Unable to take recovery lock - contention When ctdb is starting, the "Unable to take recovery lock - contention" message is logged all the time. In which cases will the "unable to take lock" error be output? Thanks! The
2018 Sep 05 (1 reply)
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
A 3-node ctdb cluster is running. When one of the 3 nodes is powered down, lots of logs are written to log.ctdb. node1, repeated logs: 2018/09/04 04:35:06.414369 ctdbd[10129]: Recovery has started 2018/09/04 04:35:06.414944 ctdbd[10129]: connect() failed, errno=111 2018/09/04 04:35:06.415076 ctdbd[10129]: Unable to run startrecovery event node2, repeated logs: 2018/09/04 04:35:09.412368
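A note on the repeated connect() failure: errno 111 on Linux is ECONNREFUSED, which in this thread is consistent with ctdbd failing to reach the local eventd socket before running the startrecovery event. A quick way to confirm the errno mapping on a typical Linux box (the header path is an assumption, not from the thread):

    grep -w ECONNREFUSED /usr/include/asm-generic/errno.h
    # expected output: #define ECONNREFUSED 111 /* Connection refused */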
2018 Feb 26 (0 replies)
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged all the time. In which cases will the "unable to take lock" error be output? Thanks! The following are the ctdb logs: 2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node 2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602 2018/02/12 19:38:51.529060
2018 Sep 05 (0 replies)
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
Thanks Martin! We are using ctdb 4.6.10. "Are you able to recreate this every time? Sometimes? Rarely?" Rarely. "Note that you're referring to nodes 1, 2, 3 while CTDB numbers the nodes 0, 1, 2. In fact, the situation is a little more confused than this:" That was my mistake; CTDB numbers the nodes 0, 1, 2. # ctdb status Number of nodes:3 pnn:0 10.231.8.70 OK pnn:1 10.231.8.68 OK
2018 Sep 05 (0 replies)
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
Thanks Martin! We are using ctdb 4.6.10. "Are you able to recreate this every time? Sometimes? Rarely?" Rarely. "Note that you're referring to nodes 1, 2, 3 while CTDB numbers the nodes 0, 1, 2. In fact, the situation is a little more confused than this:" That was my mistake; CTDB numbers the nodes 0, 1, 2. # ctdb status Number of nodes:3 pnn:0 10.231.8.67 OK pnn:1 10.231.8.65 OK
2020 Aug 08 (1 reply)
CTDB question about "shared file system"
On Sat, Aug 8, 2020 at 2:52 AM Martin Schwenke <martin at meltin.net> wrote: > Hi Bob, > > On Thu, 6 Aug 2020 06:55:31 -0400, Robert Buck <robert.buck at som.com> wrote: > > > And so we've been rereading the doc on the public addresses file. So it may be we have gravely misunderstood the *public_addresses* file, we never read
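For readers following this thread: the public_addresses file simply lists the floating addresses CTDB is allowed to assign to nodes, one per line as address/mask followed by the interface that should carry it. A minimal sketch of the documented format; the path, addresses and interface name here are placeholders, not values from the thread:

    # public_addresses (location depends on the build, e.g. <prefix>/etc/ctdb/public_addresses)
    10.200.1.230/24 eth1
    10.200.1.231/24 eth1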
2018 Sep 06 (1 reply)
[ctdb] Unable to run startrecovery event
Martin, I have checked more logs. Before ctdb-eventd went away, system memory utilization was very high, almost 100%. Is it related to "Bad talloc magic value - wrong talloc version used/mixed"? 2018/08/14 15:22:57.818762 ctdb-eventd[10131]: 05.system: WARNING: System memory utilization 95% >= threshold 80% 2018/08/14 15:22:57.818800 ctdb-eventd[10131]: 05.system: WARNING: System
2013 Apr 09 (0 replies)
Failed to start CTDB first time after install
Hi, I am setting up a two-node Samba cluster with CTDB in AWS, in two different subnets. All IP ports are open between these two subnets. I am initially forming the Samba cluster with one node, then will add the second node after startup of CTDB. I am not using public_addresses for CTDB because AWS does not support VIPs. I am using 64-bit Amazon Linux with two NICs defined, eth0 as the
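Whether or not public addresses are used, each node still needs the nodes file: the internal IP of every cluster node, one per line, with identical content on all nodes. A minimal sketch; the path and addresses below are placeholders, not taken from this post:

    # nodes file (e.g. /etc/ctdb/nodes; one internal IP per line, same on every node)
    10.0.1.10
    10.0.2.10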
2020 Aug 06 (2 replies)
CTDB question about "shared file system"
Very helpful. Thank you, Martin. I'd like to share the information below with you and solicit your fine feedback :-) I provide additional detail in case there is something else you feel strongly we should consider. We made some changes last night, let me share those with you. The error that is repeating itself and causing these failures is: Takeover run starting RELEASE_IP 10.200.1.230
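When a takeover run keeps failing on RELEASE_IP like this, one quick sanity check is to ask CTDB which node it believes currently hosts each public address and compare that with what is actually configured on the interfaces. Shown as a general sketch, not a prescription for this particular cluster:

    ctdb ip          # public address -> assigned node (pnn), as CTDB sees it
    ip addr show     # addresses actually present on this node's interfaces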
2018 May 04 (2 replies)
CTDB Path
Hello, at this time I want to install a CTDB cluster with Samba 4.7.7 from source! I compiled samba as follows: ./configure --with-cluster-support --with-shared-modules=idmap_rid,idmap_tdb2,idmap_ad The whole Samba environment is located in /usr/local/samba/. CTDB is located in /usr/local/samba/etc/ctdb. Am I right that the correct path of ctdbd.conf (node file, public address file
2018 May 07 (2 replies)
CTDB Path
Hello, I'm still trying to find out what the right path for ctdb.conf is (Ubuntu 18.04, Samba was compiled from source!). When I try to start CTDB without any config file, my log in /usr/local/samba/var/log/log.ctdb shows me: 2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file "/usr/local/samba/etc/ctdb/nodes" 2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
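The log line itself answers part of the question: with this source build, ctdbd is looking for its configuration under /usr/local/samba/etc/ctdb/, so that is where the nodes file (and, if used, public_addresses) needs to be created. A minimal sketch based only on the path shown in the log above; the node IPs are placeholders:

    mkdir -p /usr/local/samba/etc/ctdb
    # one internal node IP per line, identical on every node
    printf '192.168.10.1\n192.168.10.2\n' > /usr/local/samba/etc/ctdb/nodes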
2017 Apr 19 (6 replies)
CTDB problems
Hi, this morning our CTDB-managed cluster took a nosedive. We had member machines with hung smbd tasks, which caused them to reboot, and the cluster did not come back up consistently. We eventually got it more or less stable with two of the three nodes, but we're still seeing worrying messages; e.g. we've just noticed: 2017/04/19 12:10:31.168891 [ 5417]: Vacuuming child process timed