Displaying 20 results from an estimated 7000 matches similar to: "CTDB Path"
2018 May 07
2
CTDB Path
Hello,
I'm still trying to find out the right path for ctdb.conf
(Ubuntu 18.04, Samba was compiled from source!).
When I try to start CTDB without any config file, my log in
/usr/local/samba/var/log/log.ctdb shows me:
2018/05/07 12:56:44.363513 ctdbd[4503]: Failed to read nodes file
"/usr/local/samba/etc/ctdb/nodes"
2018/05/07 12:56:44.363546 ctdbd[4503]: Failed to
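The log answers the question: a source build with the default prefix reads its configuration from /usr/local/samba/etc/ctdb/, so ctdb.conf and the nodes file belong there. As a sketch of the nodes file format (one private cluster address per line, identical on every node), here is a small illustration; the addresses are made up and `write_nodes_file` is just a helper invented for this example, writing to a temporary directory rather than the real path:

```python
import ipaddress
import tempfile
from pathlib import Path

def write_nodes_file(path, addresses):
    """Write a CTDB-style nodes file: one private IP per line.
    Every node in the cluster must have an identical copy."""
    for addr in addresses:
        ipaddress.ip_address(addr)  # raises ValueError on a malformed entry
    Path(path).write_text("\n".join(addresses) + "\n")

# Illustrative addresses; a source build would use
# /usr/local/samba/etc/ctdb/nodes instead of a temp dir.
nodes_path = Path(tempfile.mkdtemp()) / "nodes"
write_nodes_file(nodes_path, ["192.168.10.1", "192.168.10.2", "192.168.10.3"])
```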
2018 Feb 26
2
Re: [ctdb] Unable to take recovery lock - contention
------------------Original mail------------------
From: 朱尚忠10137461
To: samba@lists.samba.org <samba@lists.samba.org>
Date: 2018-02-26 17:10
Subject: [ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously.
In which cases will the "unable to take lock" error be output?
Thanks!
The
2018 Feb 26
2
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously.
In which cases will the "unable to take lock" error be output?
Thanks!
The following the ctdb logs:
2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node
2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602
2018/02/12 19:38:51.529060
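For background, the recovery lock is an exclusive lock on a file in the cluster filesystem; "contention" simply means another node (normally the current recovery master) already holds it, or a stale holder has not released it. The sketch below reproduces that behaviour with flock() on a local file; this is only an illustration of lock contention, not CTDB's actual helper, which takes fcntl locks on the shared reclock file.

```python
import fcntl
import tempfile

# Two open file descriptions contend for one exclusive lock, the same
# pattern as nodes contending for CTDB's recovery lock file.
reclock = tempfile.NamedTemporaryFile(delete=False)

holder = open(reclock.name)
fcntl.flock(holder, fcntl.LOCK_EX | fcntl.LOCK_NB)  # first taker wins

contender = open(reclock.name)
try:
    fcntl.flock(contender, fcntl.LOCK_EX | fcntl.LOCK_NB)
    contention = False
except BlockingIOError:
    # This is the situation ctdbd logs as
    # "Unable to take recovery lock - contention".
    contention = True
```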
2019 May 16
2
CTDB node stuck in " ctdb-eventd[13184]: 50.samba: samba not listening on TCP port 445"
Hi everybody,
I just updated my ctdb node from Samba version
4.9.4-SerNet-Debian-11.stretch to Samba version
4.9.8-SerNet-Debian-13.stretch.
After restarting the sernet-samba-ctdbd service the node doesn't come
back and remains in state "UNHEALTHY".
I can find that in the syslog:
May 16 11:25:40 ctdb-lbn1 ctdb-eventd[13184]: 50.samba: samba not
listening on TCP port 445
May 16
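The 50.samba event script flags the node UNHEALTHY because its monitor event finds nothing accepting connections on TCP port 445, i.e. smbd is not (yet) up. A rough equivalent of that check, as a sketch; `tcp_port_listening` is a name invented for this example:

```python
import socket

def tcp_port_listening(port, host="127.0.0.1", timeout=1.0):
    """Return True if something accepts TCP connections on host:port,
    roughly what the 50.samba monitor event verifies for smbd on 445."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# On a healthy cluster node, tcp_port_listening(445) would be True
# once smbd has finished starting.
```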
2020 Aug 06
2
CTDB question about "shared file system"
Very helpful. Thank you, Martin.
I'd like to share the information below with you and solicit your fine
feedback :-)
I provide additional detail in case there is something else you feel
strongly we should consider.
We made some changes last night, let me share those with you.
The error that is repeating itself and causing these failures is:
Takeover run starting
RELEASE_IP 10.200.1.230
2018 Sep 05
1
[ctdb] Unable to run startrecovery event (if mail content is encrypted, please see the attached file)
A 3-node ctdb cluster is running. When one of the 3 nodes is powered down, lots of logs are written to log.ctdb.
node1: repeat logs:
2018/09/04 04:35:06.414369 ctdbd[10129]: Recovery has started
2018/09/04 04:35:06.414944 ctdbd[10129]: connect() failed, errno=111
2018/09/04 04:35:06.415076 ctdbd[10129]: Unable to run startrecovery event
node2: repeat logs:
2018/09/04 04:35:09.412368
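errno=111 here is ECONNREFUSED on Linux: ctdbd's connect() to the eventd socket is refused because ctdb-eventd is no longer accepting connections, so the startrecovery event cannot run. A small demonstration of what produces that errno, using a TCP port for illustration (ctdbd actually talks to eventd over a unix domain socket):

```python
import errno
import socket

# Bind an ephemeral port, then close it, so a connect attempt is refused -
# the same ECONNREFUSED (111 on Linux) seen in the ctdb log.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=1.0)
    refused_errno = None
except ConnectionRefusedError as e:
    refused_errno = e.errno
```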
2023 Jan 26
1
ctdb samba and winbind event problem
Hi to all,
I'm having a CTDB-Cluster with two nodes (both Ubuntu with
Sernet-packages 4.17.4). Now I want to replace one of the nodes. The
first step was to bring a new node into the CTDB-Cluster, this time a
Debian 11 but with the same sernet-packages (4.17.4). I added the new
node to /etc/ctdb/nodes at the end of the list, and the virtual IP to
/etc/ctdb/public_addresses, also at the end
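Appending at the end is the important part: a node's PNN is derived from its line position in the nodes file, so inserting or reordering lines would renumber existing nodes. A sketch of that mapping; `pnn_map` is a helper name invented for this example:

```python
def pnn_map(nodes_file_text):
    """Map each address in a CTDB nodes file to its PNN (zero-based line
    position). Commented-out lines (deleted nodes) still occupy a slot,
    which is why new nodes must be appended at the end."""
    mapping = {}
    for pnn, line in enumerate(nodes_file_text.splitlines()):
        addr = line.strip()
        if addr and not addr.startswith("#"):
            mapping[addr] = pnn
    return mapping

before = "192.168.10.1\n192.168.10.2\n"
after = before + "192.168.10.3\n"  # appended: existing PNNs unchanged
```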
2023 Feb 16
1
ctdb tcp kill: remaining connections
On Thu, 16 Feb 2023 17:30:37 +0000, Ulrich Sibiller
<ulrich.sibiller at atos.net> wrote:
> Martin Schwenke wrote on 15.02.2023 23:23:
> > OK, this part looks kind-of good. It would be interesting to know how
> > long the entire failover process is taking.
>
> What exactly would you define as the begin and end of the failover?
From "Takeover run
2020 Aug 08
1
CTDB question about "shared file system"
On Sat, Aug 8, 2020 at 2:52 AM Martin Schwenke <martin at meltin.net> wrote:
> Hi Bob,
>
> On Thu, 6 Aug 2020 06:55:31 -0400, Robert Buck <robert.buck at som.com>
> wrote:
>
> > And so we've been rereading the doc on the public addresses file. So it
> may
> > be we have gravely misunderstood the *public_addresses* file, we never
> read
> >
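For reference, each line of the public_addresses file has the form "address/mask interface"; these are the floating service IPs that CTDB assigns to nodes and moves on failover, quite different from the fixed private addresses in the nodes file. A sketch of the format follows; the parser is simplified and invented for this example (real entries may also carry a comma-separated interface list), and the /24 mask and eth0 interface are assumptions around the 10.200.1.230 address from this thread's takeover log:

```python
def parse_public_addresses(text):
    """Parse simple CTDB public_addresses lines of the form 'addr/mask iface'."""
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        addr_mask, iface = line.split()
        addr, mask = addr_mask.split("/")
        entries.append((addr, int(mask), iface))
    return entries

example = """\
# floating service addresses, moved between nodes on failover
10.200.1.230/24 eth0
10.200.1.231/24 eth0
"""
```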
2013 Apr 09
0
Failed to start CTDB first time after install
Hi,
I am setting up a two node Samba cluster with CTDB in AWS in two different subnets. All IP ports are open between these two subnets. I am initially forming the Samba cluster with one node, then will add the second node after CTDB starts up. I am not using public_addresses for CTDB because AWS does not support VIPs. I am using 64bit Amazon Linux with two NICs defined, eth0 as the
2014 Oct 29
1
smbstatus hang with CTDB 2.5.4 and Samba 4.1.13
Can anyone help with some pointers to debug a problem with Samba and CTDB
with smbstatus traversing the connections tdb? I've got a new two node
cluster with Samba and CTDB on AIX. If I run smbstatus when the server
has much user activity it hangs and the node it was run on gets banned. I
see the following in the ctdb log:
2014/10/29 11:12:45.374580 [3932342]:
2019 Feb 25
2
glusterfs + ctdb + nfs-ganesha: unplugging the network cable of the serving node takes ~20 mins for IO to resume
Hi all
We did some failover/failback tests on 2 nodes (A and B) with the architecture 'glusterfs + ctdb (public address) + nfs-ganesha'.
1st:
During write, unplug the network cable of serving node A
->NFS Client took a few seconds to recover and continue writing.
After some minutes, plug the network cable of serving node A
->NFS Client also took a few seconds to recover
2023 Feb 13
1
ctdb tcp kill: remaining connections
Hello,
we are using ctdb 4.15.5 on RHEL8 (Kernel 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8 clients. Whenever an IP takeover happens, most clients report something like this:
[Mon Feb 13 12:21:22 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:21:28 2023] nfs: server x.x.253.252 not responding, still trying
[Mon Feb 13 12:22:31 2023] nfs: server
2018 Feb 26
0
Re: [ctdb] Unable to take recovery lock - contention
On Monday, 26 February 2018, 17:26:06 CET, zhu.shangzhong--- via samba wrote:
Decoded base64 encoded body with some Chinese characters:
------------------Original mail------------------
From: 朱尚忠10137461
To: samba at lists.samba.org <samba at lists.samba.org>
Date: 2018-02-26 17:10
Subject: [ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock
2018 Feb 26
0
[ctdb] Unable to take recovery lock - contention
When ctdb is starting, the "Unable to take recovery lock - contention" message is logged continuously.
In which cases will the "unable to take lock" error be output?
Thanks!
The following the ctdb logs:
2018/02/12 19:38:51.147959 ctdbd[5615]: CTDB starting on node
2018/02/12 19:38:51.528921 ctdbd[6602]: Starting CTDBD (Version 4.6.10) as PID: 6602
2018/02/12 19:38:51.529060
2023 Feb 15
1
ctdb tcp kill: remaining connections
Hi Uli,
[Sorry for slow response, life is busy...]
On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
<samba at lists.samba.org> wrote:
> we are using ctdb 4.15.5 on RHEL8 (Kernel
> 4.18.0-372.32.1.el8_6.x86_64) to provide NFS v3 (via tcp) to RHEL7/8
> clients. Whenever an ip takeover happens most clients report
> something like this:
> [Mon Feb 13 12:21:22
2023 Feb 16
1
ctdb tcp kill: remaining connections
Martin Schwenke wrote on 15.02.2023 23:23:
> Hi Uli,
>
> [Sorry for slow response, life is busy...]
Thanks for answering anyway!
> On Mon, 13 Feb 2023 15:06:26 +0000, Ulrich Sibiller via samba
> OK, this part looks kind-of good. It would be interesting to know how
> long the entire failover process is taking.
What exactly would you define as the begin and end of the
2020 Mar 03
5
start/stop ctdb
Hi,
I updated the variables for my scenario, but CTDB won't start:
Mar  3 09:50:50 ctdb1 systemd[1]: Starting CTDB...
Mar  3 09:50:50 ctdb1 ctdbd[24663]: CTDB starting on node
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Starting CTDBD (Version 4.11.6) as
PID: 24667
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Created PID file
/usr/local/samba/var/run/ctdb/ctdbd.pid
Mar  3 09:50:50 ctdb1 ctdbd[24667]: Removed
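A source build ships no systemd unit, so one has to be written by hand. A minimal sketch of such a unit follows; the ExecStart and PIDFile paths match the log above, but everything else (unit name, Type, ordering) is an assumption, not a verified unit file:

```ini
# /etc/systemd/system/ctdb.service (illustrative sketch for a source
# build under /usr/local/samba; Type and dependencies are assumptions)
[Unit]
Description=CTDB clustered TDB daemon
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/samba/sbin/ctdbd
PIDFile=/usr/local/samba/var/run/ctdb/ctdbd.pid

[Install]
WantedBy=multi-user.target
```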
2020 Mar 03
3
start/stop ctdb
Hi,
I configured a running three-node CTDB Samba cluster (I hope so).
Virtual IPs are floating between the nodes after failing one of them.
I'm using Samba 4.11.6 on Ubuntu 18.04; Samba was compiled from source.
I configured some systemd start/stop scripts for samba (smbd, nmbd) and
winbind, and also disabled them so they are managed via ctdb.
I enabled ctdb to manage samba and winbind via this
2018 Sep 06
1
[ctdb] Unable to run startrecovery event
Martin,
I have checked more logs.
Before ctdb-eventd went away, system memory utilization was very high, almost 100%.
Is it related to "Bad talloc magic value - wrong talloc version used/mixed"?
2018/08/14 15:22:57.818762 ctdb-eventd[10131]: 05.system: WARNING: System memory utilization 95% >= threshold 80%
2018/08/14 15:22:57.818800 ctdb-eventd[10131]: 05.system: WARNING: System
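The 05.system script derives a utilization percentage from /proc/meminfo and warns when it crosses the configured threshold (80% here). A rough sketch of that kind of calculation, assuming the Linux MemAvailable field; the helper name and the sample numbers are invented for this example:

```python
def memory_utilization_percent(meminfo_text):
    """Estimate used-memory percentage from /proc/meminfo contents,
    similar in spirit to what ctdb's 05.system monitor event reports."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, rest = line.split(":", 1)
        fields[key.strip()] = int(rest.split()[0])  # values are in kB
    total = fields["MemTotal"]
    available = fields["MemAvailable"]
    return 100 * (total - available) // total

# A box in this state would trip the "95% >= threshold 80%" warning above.
sample = "MemTotal: 8000000 kB\nMemAvailable: 400000 kB"
```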